MyTob is the “tipping point” in the development of viruses, Lovet told Infosecurity. “Before 2005, people would write viruses for fun, the challenge, or to cause annoyance. After this point, viruses became an economic activity for criminals”, he observed.
By combining a mass-mailer with a botnet, cybercriminals were able to “monetize” the botnets and generate billions of dollars per year in revenue, Lovet observed.
MyTob is one of the top ten viruses identified by Lovet in his blog on the 40th anniversary of the computer virus. The first virus, called Creeper, was developed in 1971 by an employee working on ARPANET, the precursor of the Internet.
“Creeper looks for a machine on the network, transfers to it, displays the message ‘I’m the creeper, catch me if you can!’ and starts over, thereby hopping from system to system. It was a pure proof of concept that ties the roots of computer viruses to those of the internet”, Lovet wrote.
The second most significant virus over the 40 years since Creeper is Sasser, which appeared in 2004 and was similar to Code Red in that it automatically spread by exploiting a vulnerability in Microsoft Windows. Infected systems would turn off every couple of minutes, Lovet explained.
“It is the first time that physical systems were impacted by a computer virus. What I mean by physical systems are systems that are not normally related to the internet and that existed before the internet. For instance, AFP’s communications satellites were interrupted for hours, some planes from Delta Airlines were grounded, the British coast guard had to revert to print maps instead of GPS, and a hospital had to redirect emergency room functioning because of the virus”, he said.
“This was the first time that people realized that viruses could cause massive chaos and havoc, even to systems that are not generally related to the internet”, he added.
The third most important virus is Stuxnet, Lovet opined. Stuxnet exploited vulnerabilities in industrial control systems to disrupt the functioning of a nuclear power facility in Iran.
“This is the first time we heard about a virus being used as a weapon. The purpose of the virus was to physically harm, possibly destroy, an industrial system, a nuclear plant. Perhaps it did happen before but this was the first time we’ve heard about it, so it is really a major breakthrough in the virus world. It is probably the most complicated virus we have ever dealt with”, Lovet said.
Happy Anniversary, computer virus!
Recently, I read an RFP issued by a customer. The main topic was perimeter security, but a paragraph mentioned the protection of SCADA environments. I have no practical experience with SCADA, so I tried to find relevant information about deploying security solutions in such environments. Here follows a compilation of information about this technology. This is just an introduction; I'm not a "guru".
SCADA means "Supervisory Control And Data Acquisition". It refers to the use of computers to monitor and control industrial processes. From a console (called an HMI, "Human Machine Interface"), operators can interact with a set of sensors and programmable controllers. In short, it is possible to collect information from a sensor (the "Acquisition" phase) or to interact with active components (the "Control" phase). Practical examples: reading the pressure in a pipe, or controlling the opening of a valve to reduce that pressure. SCADA can be compared (very roughly!) with SNMP: you can poll the values of SNMP OIDs from a network device like a switch, receive traps, and change the values of certain OIDs.
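As a rough illustration of the acquisition/control split, here is a minimal sketch that builds Modbus/TCP request frames by hand. It is pure Python and illustrative only: the field layout follows the public Modbus/TCP framing (MBAP header plus PDU), and a real deployment would use a tested library such as pymodbus rather than hand-crafted frames.

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    This is the "Acquisition" side: polling values from a device,
    much like an SNMP GET.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus),
    # length (PDU bytes + 1 for the unit id), unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

def modbus_write_request(transaction_id, unit_id, addr, value):
    """Build a 'Write Single Register' (function 0x06) request:
    the "Control" side, like an SNMP SET."""
    pdu = struct.pack(">BHH", 0x06, addr, value)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Both requests share the same 7-byte MBAP header; only the function code and PDU payload distinguish reading a value from changing one, which is exactly why write access is the part worth protecting.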
SCADA is used in multiple domains: industry (example: energy production), infrastructure (distribution) and facilities (buildings, cooling or heating systems, airport luggage handling systems). Reading those examples, you immediately realize that the environments where SCADA components are deployed are truly critical. A bad interpretation or an unexpected behavior can have major impacts, up to the highest level: risk of bodily injury or even death! Can you imagine a SCADA environment controlled by hackers? That's basically the scenario of the latest Die Hard movie. Scary? But this is not a movie: vulnerabilities have already been discovered in SCADA products!
The different components of a SCADA infrastructure must exchange information via a communication link. The first generations of products did this over serial or modem connections. Today, the latest generation uses (what a surprise!) TCP/IP networks. Ethernet is commonly used but, for longer distances (for example, to manage railway infrastructure), communications are based on SONET links. Alas, TCP/IP also means more vulnerabilities. Note that a SCADA infrastructure does not rely on a single standardized protocol. A lot of them have been developed and are understood by most manufacturers (good examples are names like IEC 60870-5-101 or 104, IEC 61850 and DNP3). Old protocols are being replaced by common networking protocols over IP. The "web madness" has also reached the SCADA product manufacturers: more and more web interfaces are available to manage the components (friendlier interfaces).
What are the risks associated with a SCADA infrastructure?
- First, IP protocols are routed protocols: packets can be routed or NATed. The SCADA components must be physically separated from any other IP network. Installing a firewall between the organization's LAN and the HMI console is not enough against today's attacks. Of course, any Internet connectivity must be prohibited.
- By introducing web interfaces, manufacturers increase the risk of introducing web vulnerabilities. Check out the OWASP Top Ten for more information about risks associated with web applications.
- DoS ("Denial of Service") attacks: attackers could try to interrupt communications between the components or flood a component with bogus information.
- Common mistakes are the lack of controls and the principle of "security by obscurity". SCADA protocols are obscure, but that does not mean they are not vulnerable.
- Network outages: components must be able to exchange information in real-time without any interruption.
Those issues can be classified into two main types of threats: unauthorized access to a control station (HMI), and injection of rogue packets onto a SCADA network.
How to protect a network carrying SCADA protocols? First, physically disconnect the network segment used by the SCADA components from the rest of your network. Still today, too many SCADA devices are directly reachable from the Internet. Protect against inappropriate physical access to the network. Switches must be properly secured: all unused ports must be disabled and port-security must be enabled (for example, by learning the MAC addresses). Perform end-point authentication for all devices connected to the SCADA network (using VMPS, 802.1x or any other solution). If packets must cross other segments (for business or technical reasons), encrypt all the traffic using a VPN or an SSL tunnel.
At host level (the HMI or console), restrict physical and logical access to the console. Prevent any communication with other networks. Those hosts must be dedicated to the SCADA applications and must never exchange e-mail or web traffic, which are common sources of malware and viruses. Local security must be enforced using anti-virus, anti-malware or host-based IDS tools. Another good practice is to work with whitelists of applications. Also, automatic patching can be disabled to prevent any unexpected reboot or problem. Patches must first be validated on a test system before being installed in the production environment during defined intervention time-windows.
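The application-whitelisting idea can be sketched in a few lines: allow a binary to run only if its content hash is on an approved list. The file contents and list below are hypothetical, for illustration only; real deployments rely on OS-level mechanisms (such as AppLocker or SELinux) rather than a script.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Content hash used as the whitelist key."""
    return hashlib.sha256(data).hexdigest()

# Approved HMI applications, keyed by content hash. In practice the
# list is built from the vetted, patched binaries on the test system.
approved_hmi_binary = b"...contents of the vetted HMI console binary..."
WHITELIST = {sha256_hex(approved_hmi_binary)}

def may_execute(binary: bytes) -> bool:
    """Default-deny policy: anything not explicitly whitelisted is
    refused, which also blocks unexpected updates until they have
    been re-validated on the test system."""
    return sha256_hex(binary) in WHITELIST
```

The default-deny posture is the point: on a dedicated SCADA host the set of legitimate applications is small and stable, so a whitelist is both practical and far stricter than signature-based anti-virus.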
Like other networks, IDS and firewalls can be deployed. An intrusion detection system can spot suspicious or malformed SCADA packets injected on the network. On Scadapedia, there are SCADA signatures ready to be used with Snort, a well-known open source IDS project. The signatures cover the following protocols: Modbus/TCP, DNP3 and ICCP. Some firewall solutions exist to filter SCADA protocols. They are commercial products, often based on a Linux kernel with iptables and a bunch of specific rules. Finally, don't forget to add some visibility on top of your SCADA network:
- Monitor the components' health using a monitoring tool.
- Collect information about your network. Restrict the amount of traffic to the minimum to keep the network performant.
- Monitor the network response time and availability.
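To make the IDS idea concrete, a SCADA-aware check can be as simple as validating protocol invariants plus a function-code policy. Here is a minimal sketch for Modbus/TCP in pure Python; the function-code whitelist is an example policy for a read-only monitoring segment, not part of the Modbus standard, and a real sensor would of course sit on live traffic.

```python
import struct

# Function codes allowed on a read-only segment (example policy):
# 0x01/0x02 read coils/inputs, 0x03/0x04 read registers.
ALLOWED_FUNCTIONS = {0x01, 0x02, 0x03, 0x04}

def is_suspicious(frame: bytes) -> bool:
    """Flag Modbus/TCP frames that violate basic protocol invariants
    or use a function code outside the policy; the kind of check a
    SCADA-aware IDS signature performs."""
    if len(frame) < 8:                 # MBAP header (7) + function code (1)
        return True
    _tid, proto, length, _unit = struct.unpack(">HHHB", frame[:7])
    if proto != 0:                     # protocol id must be 0 for Modbus
        return True
    if length != len(frame) - 6:       # declared length must match the frame
        return True
    function = frame[7]
    if function not in ALLOWED_FUNCTIONS:
        return True                    # e.g. a 0x06 write on a read-only link
    return False
```

Even a check this simple catches two of the threats listed above: malformed packets (broken headers, inconsistent lengths) and rogue control commands injected on a segment that should only carry polling traffic.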
Another interesting project is called the SCADA Honeynet Project. It simulates several industrial environments. The ISA ("International Society of Automation") developed a standard (ISA99) called Security for Industrial Automation and Control Systems: Establishing an Industrial Automation and Control Systems Security Program. It addresses manufacturing and control systems whose compromise could result in risks to public or employee safety, loss of confidential information, loss of profit, or impacts on national security.
To conclude, protecting a SCADA network is like protecting any other network: the CIA principles (confidentiality, integrity, availability) must be respected. But there are two main differences:
- Specific "obscure" protocols are not supported by common security devices (deployed solutions must be adapted to the SCADA protocols).
- Incidents can have major impacts on infrastructure beyond the IT infrastructure, or on people.
Sakchoowong W. (Forest and Plant Conservation Research Office), Hasin S. (Kasetsart University), Pachey N. (Forest and Plant Conservation Research Office), Amornsak W. (Kasetsart University), and 3 more authors.
Asian Myrmecology | Year: 2015
In tropical rainforests, variability in the distribution of soil and litter arthropods is usually explained at regional scales by altitude, soil nutrients, and disturbance regimes. However, the influence of these factors on insect assemblages at the micro-habitat scale has rarely been studied. We investigated whether the species identity of decomposing leaves in tropical forest affected the composition of ant assemblages around them. Ants were extracted from litter below three common tree species, Parashorea stellata (Dipterocarpaceae), Intsia palembanica (Fabaceae) and Shorea gratissima (Dipterocarpaceae) in a 24 ha lowland rainforest plot in southern Thailand. A total of 2,257 individual ants, representing 71 species in 38 genera of 6 subfamilies were collected in the dry and wet seasons during 2010. Ant species richness was never significantly different among litter samples under the crown cover of three tree species. Ant species richness was higher in the wet season than the dry season. Our results demonstrate that ant assemblages are seasonally heterogeneous. Leaf mass and litter mass did not relate to the presence of ant diversity. Soil humidity was the only important factor influencing ant diversity in this study. Future studies should consider the importance of soil moisture related to litter ant diversity. © Watana Sakchoowong, Sasitorn Hasin, Nongphanga Pachey, Weerawan Amornsak, Sarayudh Bunyavejchewin, Pitoon Kongnoo and Yves Basset. Source
Peay K.G. (University of California at Berkeley), Kennedy P.G. (Lewis and Clark College), Davies S.J. (Harvard University), Tan S. (Sarawak Forestry Corporation), and 2 more authors.
New Phytologist | Year: 2010
Relatively little is known about diversity or structure of tropical ectomycorrhizal communities or their roles in tropical ecosystem dynamics. In this study, we present one of the largest molecular studies to date of an ectomycorrhizal community in lowland dipterocarp rainforest. We sampled roots from two 0.4 ha sites located across an ecotone within a 52 ha forest dynamics plot. Our plots contained > 500 tree species and > 40 species of ectomycorrhizal host plants. Fungi were identified by sequencing ribosomal RNA genes. The community was dominated by the Russulales (30 species), Boletales (17), Agaricales (18), Thelephorales (13) and Cantharellales (12). Total species richness appeared comparable to molecular studies of temperate forests. Community structure changed across the ecotone, although it was not possible to separate the role of environmental factors vs host plant preferences. Phylogenetic analyses were consistent with a model of community assembly where habitat associations are influenced by evolutionary conservatism of functional traits within ectomycorrhizal lineages. Because changes in the ectomycorrhizal fungal community parallel those of the tree community at this site, this study demonstrates the potential link between the distribution of tropical tree diversity and the distribution of tropical ectomycorrhizal diversity in relation to local-scale edaphic variation. © 2009 New Phytologist. Source
The sun has been pale for months here in Sumatra and the skies are grey all day — choked with pollution from the massive fires that rage across the Indonesian island. Since the late 1990s, the haze caused by these annual fires has posed a significant threat to the health of Sumatra’s rural communities. This year’s haze is especially bad and has affected major cities, both here and abroad; consequently, the fires have again made headlines around the world. Many of these news stories blame the big palm-oil companies for the fires. Slash-and-burn techniques remain the cheapest way to clear forest for new plantations. But scientific evidence suggests that this simple narrative is not absolutely true. A number of surveys have found that the bulk of these fires are started outside the official oil-palm concessions. Small-scale farmers seem to be more to blame. The haze in Indonesia is not just an environmental issue; it is a complex socio-economic problem that is driven partly by conflict over land ownership between palm-oil companies and rural communities — a struggle that the companies usually win. Besides holding financial and legal power, these companies also have science on their side. High-quality research at state-funded centres has found ways to increase the production of palm oil, such as the manipulation of the gene SHELL and ways to weed out oil-palm clones with reduced yields. These technologies have been developed by the Malaysian Palm Oil Board, and the big companies in the region can pay to license and use them. But such technologies are out of the reach of smallholders and the rural population. Yet smallholders produce a large proportion of the crops, mainly through conventional farming practices. Some 80% of Indonesian rubber, for example, is made by small-scale farmers who do not have access to the research products and whose welfare has not improved. What has science done to empower these people? 
The problems of Indonesian farmers might seem low on the list of global priorities. But as the nations of the world prepare to discuss a treaty on climate change in Paris next month, the fires that fuel the Sumatran haze offer a perfect example of how the relationship between science and industry must shift if we value sustainable development. Scientists need the private sector to provide funding and a ‘tunnel’ for commercialization; the private sector needs scientists to develop products. This alliance, together with support from the government, is called the triple helix — a concept that has driven the world’s economy since the Industrial Revolution. But is this concept still relevant? Although some parts of the world have achieved a stable economy driven by scientific advancement, around half of the world’s population still lives in poverty. The people of these regions also face environmental threats, such as deforestation and its extended impact, on a daily basis. Those who are most vulnerable benefit from science the least. There are scientists who want to transfer their knowledge to these people, but this has proved difficult. The failure of an experiment in the Solomon Islands to help indigenous people to exploit their local environment as ‘ecosystem services’ was attributed to a culture gap between scientists and local people. This claimed divide is often presented as a barrier to the transfer of science and technology. Scientists must try harder to bridge this gap. Science is a fuel for economic development, but its influence must extend beyond the triple helix. That model simply uses science to exploit natural resources for economic gain. Given the need to mitigate the harmful environmental effects of this conversion, the model is no longer enough. Mitigation must be the responsibility of everyone on the planet, not just scientists, businessmen and policymakers. 
Indigenous and local people should also be involved, especially those who call carbon sinks, such as tropical forests, home. There are already examples of science reaching out. The residents of the Wanang Conservation Area in Papua New Guinea, for instance, have offered 1,000 hectares of their 10,000-hectare protected forest for research conducted by institutions such as the Smithsonian Tropical Research Institute’s Center for Tropical Forest Science. In this zone, scientists and indigenous people collaborate to investigate the response of trees to climate change. Local people are trained then employed as field research assistants and have received compensation for the lease of their forest. Meanwhile, a project supported by the US Agency for International Development is training local people in West Kalimantan, Indonesia, to be plant parataxonomists. The project was initiated by Campbell Webb, a plant evolutionary biologist and bioinformaticist at the Arnold Arboretum of Harvard University who is based in West Kalimantan. It is teaching local people to collect plant data in Gunung Palung National Park, an area of high biodiversity that faces the threat of deforestation. The Paris talks should discuss the need for such initiatives to be copied and scaled up. For decades, the relationship between science, industry and government has been celebrated by all involved as a good thing. But not everybody benefits. Science might be able to pin the blame for the southeast Asia haze on Indonesian smallholders, but it has not yet given them — or others in their position — a way to help prevent it.
Harrison R.D. (CAS Kunming Institute of Botany; World Agroforestry Center), Tan S. (Center for Tropical Forest Science), Plotkin J.B. (University of Pennsylvania), and 5 more authors.
Ecology Letters | Year: 2013
Hunting affects a considerably greater area of the tropical forest biome than deforestation and logging combined. Often even large remote protected areas are depleted of a substantial proportion of their vertebrate fauna. However, understanding of the long-term ecological consequences of defaunation in tropical forests remains poor. Using tree census data from a large-scale plot monitored over a 15-year period since the approximate onset of intense hunting, we provide a comprehensive assessment of the immediate consequences of defaunation for a tropical tree community. Our data strongly suggest that over-hunting has engendered pervasive changes in tree population spatial structure and dynamics, leading to a consistent decline in local tree diversity over time. However, we do not find any support for suggestions that over-hunting reduces above-ground biomass or biomass accumulation rate in this forest. To maintain critical ecosystem processes in tropical forests increased efforts are required to protect and restore wildlife populations. © 2013 Blackwell Publishing Ltd/CNRS. Source
Basset Y. (Smithsonian Tropical Research Institute), Eastwood R. (Harvard University), Sam L. (The New Guinea Binatang Research Center), Lohman D.J. (Harvard University), and 8 more authors.
Insect Conservation and Diversity | Year: 2013
1.Standardised transect counts of butterflies in old-growth rainforests in different biogeographical regions are lacking. Such data are needed to mitigate the influence of methodological and environmental factors within and between sites and, ultimately, to discriminate between long-term trends and short-term stochastic changes in abundance and community composition. 2.We compared butterfly assemblages using standardised Pollard Walks in the understory of closed-canopy lowland tropical rainforests across three biogeographical regions: Barro Colorado Island (BCI), Panama; Khao Chong (KHC), Thailand; and Wanang (WAN), Papua New Guinea. 3.The length and duration of transects, their spatial autocorrelation, and number of surveys per year represented important methodological factors that strongly influenced estimates of butterfly abundance. Of these, the effect of spatial autocorrelation was most difficult to mitigate across study sites. 4.Butterfly abundance and faunal composition were best explained by air temperature, elevation, rainfall, wind velocity, and human disturbance at BCI and KHC. In the absence of weather data at WAN, duration of transects and number of forest gaps accounted for most of the explained variance, which was rather low in all cases (<33%). 5.Adequate monitoring of the abundance of common butterflies was achieved at the 50ha BCI plot, with three observers walking each of 10 transects of 500m for 30min each, during each of four surveys per year. These data may be standardised further after removing outliers of temperature and rainfall. Practical procedures are suggested to implement global monitoring of rainforest butterflies with Pollard Walks. © 2012 The Royal Entomological Society. 
Taking Advantage of a New Modern Language
3. Will Go be the next big thing in programming?

Asked whether he thought Go might be the next big thing, Pike replied, "I'd be thrilled if that happened, but I don't expect it."

"It's not a Google official language the way that, say, Java was a Sun [Microsystems] official language," Pike said. "We're really launching this as an open-source experimental toy for people to play with. It needs time ... to become something that people would want to build companies around or anything like that. It may never get there. But, so far, the response has been really positive."

4. Google's Go owes some debt to Bell Labs' Plan 9

Plan 9 is a distributed operating system developed as the research successor to Unix at Bell Labs. Pike and Thompson were part of the original Plan 9 team at Bell Labs, and Go team member Russ Cox also was a Plan 9 developer. Although there was not a lot of direct use of Plan 9 technology in creating Go, the link between team members is not the only connection. The Plan 9 team produced some programming languages of its own, such as Alef and Limbo, which had a slight impact on the direction of Go.

"We didn't pull out those languages and look at them again," Pike said. "But I think a better way to express it is that, particularly on the concurrency side of things, the parallel programming side of things, those languages are kind of in the same family tree and inspired by the same approach."

Pike added, "Ken's compiler is entirely new ... it uses the Plan 9 compiler suite, linker and assembler, but the Go compiler is all new. So we borrowed a little technology, but that's just the compiler, that's not the vision. And naturally some of the people working on the project were around from the Plan 9 days, so there's got to be some intellectual cross-breeding there. But this wasn't an attempt to do a Plan 9. This is a language, not an operating system, and they address different things. Go doesn't even have a Plan 9 port at this point, although it would be nice to have one."

5. Seven engineers join Google's Go team

Pike and Griesemer were officemates who groused about the problems with programming. Once they decided to try to create a language, they invited Thompson to join, because "Ken really understands how to make things go fast and has really good ideas about things ... and besides he was in the next office. And in about 5 minutes we decided the three of us could really make something happen."
He added that he hopes the language will grow on its own merits. Then Pike noted a significant difference between Go and Java.
Munne A. (Catalan Water Agency), Prat N. (University of Barcelona).
Ecological Indicators | Year: 2011
Data on macroinvertebrates of selected reference sites were compiled from a long-term monitoring programme carried out in the Mediterranean Catalan Basins (NE Spain) that permitted analysis for nine years, from 1996 to 2004, using a homogeneous data collection procedure. This study aims to analyse the differences in composition and structure of macroinvertebrate communities at family level in five Mediterranean river types, and the values of biological quality metrics (IBMWP and IASPT indices, taxon richness and EPT) in reference conditions. Also differences between seasons (spring vs. summer) and between dry and wet periods were analysed. The dry and wet periods were determined using the Standardised Precipitation Index (SPI). A total of 29 reference sites were selected out of 184 sampling sites analysed, and 171 reference samples were available (from 1996 to 2004), of which 88 were sampled in dry periods, whereas 83 correspond to wet periods. Differences on community composition at family level were appreciated, clustering the rivers in three different groups: (1) rivers with a continuous flow regime located in siliceous zones; (2) rivers with a continuous flow regime located in calcareous zones; and (3) temporary rivers regardless of geology. Moreover, our results explain that the characteristics of hydrological periods (dry and wet) characterize the differences between communities better than just the season. The analysis of four biological quality metrics reveals clear differences between values obtained from dry and wet periods concerning taxon richness, EPT values and IBMWP biological indices, whereas the IASPT index does not show significant differences. The median taxonomic richness in wet periods is 32 macroinvertebrate families per sample while in dry periods this value falls to 22. 
Reference values of IBMWP index, the total number of taxa, and EPT metric are different between dry and wet periods in spring samples, while these differences are not relevant for IASPT index except for temporary streams. Hydrological specific conditions should additionally be considered in order to better calculate biological reference conditions, and to properly apply biological quality metrics used to establish the ecological status in Mediterranean rivers, especially in temporary ones. The use of the dry-wet period classification according to the climate characteristic results is a more accurate application of the Water Framework Directive in Mediterranean rivers. Implications of future climate change should be also considered from our results. © 2010 Elsevier Ltd. All rights reserved. Source
Murphy C.A. (University of Girona), Casals F. (University of Lleida), Sola C. (Catalan Water Agency), Caiola N. (IRTA Aquatic Ecosystems), and 2 more authors.
Ecological Indicators | Year: 2013
Bioassessments are used to measure system health and assess disturbance. While fish-based freshwater bioassessments are cost-effective and perform well in speciose systems, such bioassessments remain difficult to implement in species-poor Mediterranean regions. Population size structure metrics may provide meaningful biological information where depauperate communities preclude the richness and composition measures generally used. We focus our assessments of population size structure responses to anthropogenic perturbation on one of the most widespread native stream fish (Squalius laietanus). We explore a number of population size statistics as metrics for a Mediterranean region, where current bioassessments perform poorly. Our sampling encompassed 311 sites across Catalonia (NE Spain) where we characterized anthropogenic perturbation using a summary of impacts, including local data on stream condition and landscape indicators of degradation, via a principal component analysis. Anthropogenic perturbation in streams was collinear with altitudinal gradients and highlights the importance of appropriate statistical techniques. Of the population size structure metrics explored, average length was the most sensitive to anthropogenic perturbation and generally increased along the disturbance gradient. Although we expected to find consistent changes in variance, kurtosis, and skewness, the observed relationships were weak. River basin mediated responses suggest the importance of environmental landscape factors. The unexpected increases of mean S. laietanus body size with anthropogenic perturbation, strong effects of river basin, collinearity with spatial gradients and the species-specific nature of responses preclude the direct application of size structure in freshwater bioassessments. Although its application in fish-based freshwater bioassessments appears difficult, population size structure can provide insights in species-specific applications and management. 
© 2013 Elsevier Ltd.
Aguiar F.C. (University of Lisbon), Segurado P. (University of Lisbon), Urbanic G. (University of Ljubljana; Institute for Water of the Republic of Slovenia), and 13 more authors.
Science of the Total Environment | Year: 2014
Determination of glyphosate in groundwater samples using an ultrasensitive immunoassay and confirmation by on-line solid-phase extraction followed by liquid chromatography coupled to tandem mass spectrometry
Sanchis J., CSIC - Institute of Environmental Assessment And Water Research |
Kantiani L., CSIC - Institute of Environmental Assessment And Water Research |
Llorca M., CSIC - Institute of Environmental Assessment And Water Research |
Rubio F., Abraxis, L.L.C. |
And 4 more authors.
Analytical and Bioanalytical Chemistry | Year: 2012
Despite having been the focus of much attention from the scientific community during recent years, glyphosate is still a challenging compound from an analytical point of view because of its physicochemical properties: relatively low molecular weight, high polarity, high water solubility, low organic-solvent solubility, amphoteric behaviour and ease of forming metal complexes. Large efforts have been directed towards developing suitable, sensitive and robust methods for the routine analysis of this widely used herbicide. In the present work, a magnetic particle immunoassay (IA) has been evaluated for fast, reliable and accurate part-per-trillion monitoring of glyphosate in water matrices, in combination with a new analytical method based on solid-phase extraction (SPE), followed by liquid chromatography (LC) coupled to tandem mass spectrometry (MS/MS), for the confirmatory analysis of positive samples. The magnetic particle IA has been applied to the analysis of about 140 samples of groundwater from Catalonia (NE Spain) collected during four sampling campaigns. Glyphosate was present above the limit of quantification in 41% of the samples, with concentrations as high as 2.5 μg/L and a mean concentration of 200 ng/L. Good agreement was obtained when comparing the results from IA and on-line SPE-LC-MS/MS analyses. In addition, no false negatives were obtained by the use of the rapid IA. This is one of the few works related to the analysis of glyphosate in real groundwater samples, and the data presented confirm that, although it has low mobility in soils, glyphosate is capable of reaching groundwater. © 2011 Springer-Verlag.
Kock-Schulmeyer M., CSIC - Institute of Environmental Assessment And Water Research |
Ginebreda A., CSIC - Institute of Environmental Assessment And Water Research |
Postigo C., CSIC - Institute of Environmental Assessment And Water Research |
Garrido T., Catalan Water Agency |
And 4 more authors.
Science of the Total Environment | Year: 2014
Pesticide contamination of groundwater is of paramount importance because groundwater is the most sensitive and the largest body of freshwater in the European Union. In this paper, an isotopic-dilution method based on on-line solid-phase extraction-liquid chromatography (electrospray)-tandem mass spectrometry (SPE-LC(ESI)-MS/MS) was used for the analysis of 22 pesticides in groundwater. Results were evaluated from monitoring 112 wells and piezometers in 29 different aquifers located in 18 groundwater bodies (GWBs) in Catalonia, Spain, over 4 years, as part of the surveillance and operational monitoring programs conducted by the Catalan Water Agency. The analytical method developed allows the determination of the target pesticides (6 triazines, 4 phenylureas, 4 organophosphorus compounds, 1 anilide, 2 chloroacetanilides, 1 thiocarbamate, and 4 acid herbicides) in groundwater with good sensitivity (limits of detection < 5 ng/L), accuracy (relative recoveries between 85 and 116%, except for molinate), and repeatability (RSD < 23%), in a fully automated way. The most ubiquitous compounds were simazine, atrazine, desethylatrazine and diuron. A direct relation between the frequency of detection of each target compound and the Groundwater Ubiquity Score (GUS) index is observed. Desethylatrazine and deisopropylatrazine, metabolites of atrazine and simazine, respectively, presented the highest mean concentrations. Compounds detected in less than 5% of the samples were cyanazine, molinate, fenitrothion and mecoprop. According to Directive 2006/118/EC, 13 pesticides have individual values above the requested limits (desethylatrazine, atrazine and terbuthylazine lead the list) and 14 samples have total pesticide levels above 500 ng/L. The GWB with the highest levels of total pesticides is located in Lleida (NE Spain), with 9 samples showing total pesticide levels above 500 ng/L.
Several factors, such as regulation of pesticide use, the type of activities in the area, and irrigation, are discussed in relation to the observed pesticide levels. © 2013 Elsevier B.V.
Joined: 16 Dec 2004
First, please post your queries in a new topic...
Here are the answers...
Q. Suppose I am creating a table as...
and I write a query as select * from staff...
which staff table will it select?
A. If you write select * from staff, the staff table (if it exists) created under the user ID you are currently logged in with will be retrieved. If it is not there under the current user ID, you will have to qualify the table with the owning user ID, for which you must have privileges to access it.
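To make the answer concrete, here is a short SQL sketch (the schema name OTHERUSER is hypothetical, used only for illustration):

```sql
-- Unqualified: resolves to the STAFF table under the user ID you are logged in with
SELECT * FROM STAFF;

-- Qualified: reads another user's STAFF table (requires SELECT privilege on it)
SELECT * FROM OTHERUSER.STAFF;
```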
Brahmananda Reddy. K. | <urn:uuid:d99e38f9-a45a-484b-8996-5ac7d4579557> | CC-MAIN-2017-04 | http://ibmmainframes.com/about1743.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00159-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.817448 | 122 | 2.765625 | 3 |
This chapter introduces the different ways that Java and COBOL applications can call each other.
There are a number of ways of using COBOL and Java together. You can use the Interface Mapping Toolkit, the wizards and GUI tools to create interfaces to COBOL from Java, or you can hand code the calls using the supplied support libraries. Java itself provides access to other technologies, such as Remote Method Invocation (RMI) and Java Naming and Directory Interface (JNDI) which enable you to distribute objects across different machines.
You can use COBOL and Java together by:
Although Enterprise Server for Windows is required to run legacy programs exposed using the Interface Mapping Toolkit, you can choose to use Enterprise Server for Windows or Application Server for Net Express (or for Server Express) for running the other types of applications that use both COBOL and Java programs.
The Java language defines its own data types, which are different from the ones used in COBOL. The COBOL run-time system automatically converts between COBOL and Java types whenever you call Java from COBOL or COBOL from Java. Where a COBOL data type cannot be translated directly (for example, edited fields), it is under some circumstances translated to a string. See the chapter Java Data Types.
You need a Java run-time system on any machine which is going to execute Java applications. If you are going to develop mixed Java and COBOL applications, you will also need a Java development environment. You can use either the Java Software Development Kit available from Sun, or any Java IDE which is based on either the Sun or Microsoft run-time environments listed below.
Net Express currently supports the following Java run-time systems on Windows:
Before you start writing COBOL and Java programs which interact, you need to set up the following environment variables for the COBOL and Java run-time systems:
If you have COBOL programs which are going to call Java, you must tell the COBOL run-time system which Java run-time system you are using. To do this, set environment variable COBJVM to one of the following:
In addition, if you are using the Sun Java run-time system, add the subdirectory containing the jvm.dll file to your system path. The location of this file depends on which version of the Java Development Kit (JDK) you are using. For example:
Where subdirectory might be client, classic, hotspot or server.
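As a concrete illustration (the install location c:\jdk is hypothetical; substitute your own JDK path and the subdirectory that matches your JDK version):

```
rem Add the directory containing jvm.dll to the system path.
rem The subdirectory varies by JDK version: client, classic, hotspot or server.
set PATH=%PATH%;c:\jdk\jre\bin\client
```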
This environment variable enables the COBOL run-time system to locate the jvm.dll file contained in this directory. Do not move jvm.dll to a different location, because it has dependencies on other files shipped as part of the Sun Java run-time system.
If you have Java programs which are going to call COBOL, you need to provide some Java classes which interface to the COBOL run-time system. To do this, ensure that mfcobol.jar is specified by the CLASSPATH environment variable for your Java run-time system. You should also ensure that CLASSPATH specifies the current directory, denoted by a period (.). For example:
set classpath=%classpath%;.;c:\program files\Micro Focus\net express\base\bin\mfcobol.jar
You can also set the Java class path when you run a Java program, using the -classpath switch. For example:
java -classpath ".;c:\program files\Micro Focus\net express\base\bin\mfcobol.jar;%CLASSPATH%" MyClass
Any COBOL program which is going to call Java must be compiled with the following directive:
This does two things:
Any COBOL program which is going to be called from Java using CobolBean.cobcall*() methods should be compiled with the DATA-CONTEXT compiler directive. This enables the run-time system to create new application storage areas for each instance of a CobolBean that is created.
When you use the Net Express IDE to create OO COBOL classes for use with Java, the Class Wizard adds the Java wrapper classes for OO COBOL to your Net Express project. The Java classes are compiled by the IDE when you rebuild your project. To set up this support, edit mfj.cfg, in your net express\base\bin directory, to contain the full path and filename of your Java compiler.
If your net express\base\bin directory does not contain a copy of mfj.cfg, you will need to create it. Net Express creates the mfj.cfg file the first time you compile a Java program in the Net Express IDE, provided that it can find the location of a Java compiler in your PC's registry or on the PATH environment variable.
In addition to specifying the Java compiler to use, you can use mfj.cfg to specify any Java compiler command-line arguments. For example, if you edited mfj.cfg to point at javac.exe in the d:\jdk\bin directory, then every time you compiled a Java program from the Net Express IDE, Net Express would use that compiler.
All COBOL programs for use with Java must be linked with the multi-threaded run-time system.
On Net Express, click Multi-threaded on the Link tab of the Project Build Settings dialog box. If you are debugging programs, click Settings on the Animate menu, and check Use multi-threaded runtime.
When you call COBOL from Java, the OO COBOL Java support loads one of several cbljvm_*.dll modules to interface to the Java Virtual Machine (JVM). The exact file loaded depends on the JVM you are using, and is selected by querying the name of the JVM the Java program is running under.
If your JVM is not one of those listed as supported, you will get a fatal "Unsupported JVM" error. You can force the loading of a particular JVM by setting a Java system property, called com.microfocus.cobol.cobjvm, to the name of one of the supported JVMs. For example, to load the support for the Sun JVM, contained in cbljvm_sun.dll, set this property to "sun".
You can set this property on the command line that runs your Java program, as follows:
java -Dcom.microfocus.cobol.cobjvm=name class
name is the name of the support module you want to use. For example, to
run Java program myclass with the support for the Sun JVM:
java -Dcom.microfocus.cobol.cobjvm=sun myclass
If you get the "Unsupported JVM" error, try the Sun JVM support first, as many JVMs are rebadged Sun JVMs.
Copyright © 2003 Micro Focus International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law. | <urn:uuid:346becea-4df6-4c19-99db-cb90c6ecf11d> | CC-MAIN-2017-04 | https://supportline.microfocus.com/documentation/books/nx40/dijint.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00371-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.858854 | 1,509 | 2.953125 | 3 |
We've got two new videos to watch of the NASA Mars Curiosity rover making its landing onto Mars.
First up is this video from the Jet Propulsion Laboratory, showing a split-screen of the entire descent - on the left is a computer simulation of what it would look like from a third-person perspective, while on the right you see the video from the Mars Descent Imager. Narration from JPL is also included in this video.
In this second video, we see a high-resolution video of the landing, which seems to be brighter and crisper than the original video that NASA and JPL produced after the landing. You can definitely see more of the Martian landscape and the heat shield as it separated from the lander.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
Watch some more cool videos:
- James Bond meets My Little Pony: Mashup gold
- This 13-foot Japanese robot is packing heat
- The Legend of Zelda as a Western
- Friday Funnies: Batman rants against the Dark Knight haters
- Did this 1993 film predict Google Glasses and iPads?
A binary search on an array is
O(log2 n) because at each test you
can ``throw out'' one half of the search space. If we assume n, the
number of items to search, is a power of two (i.e. n = 2^x) then,
given that n is cut in half at each comparison, the most comparisons
needed to find a single item in n is x. It is noteworthy that for
very small arrays a linear search
can prove faster than a binary search. However as the size of the
array to be searched increases the binary search is the clear victor
in terms of number of comparisons and therefore overall speed.
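The halving argument above can be written as a short routine. The following is an illustrative sketch (Python, not part of the original text):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    Each iteration halves the search space, so at most about
    log2(n) + 1 comparisons are needed for n items.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1      # ``throw out'' the lower half
        else:
            hi = mid - 1      # ``throw out'' the upper half
    return -1

print(binary_search([2, 4, 8, 16, 32, 64, 128, 256], 32))  # → 4
```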
Still, the binary search has some drawbacks. First of all, it
requires that the data to be searched be in sorted order. If there is
even one element out of order in the data being searched it can throw
off the entire process. When presented with a set of unsorted data
the efficient programmer must decide whether to sort the data and
apply a binary search or simply apply the less-efficient linear
search. Even the best sorting algorithm is a complicated process. Is
the cost of sorting the data worth the increase in search speed
gained with the binary search? If you are searching only once, it is
probably better to do a linear search in most cases.
Once the data is sorted it can also prove very expensive to add or
delete items. In a sorted array, for instance, such operations
require a ripple-shift
of array elements to open or close a ``hole'' in the array. This is
an expensive operation as it requires, in worst case, log2 n comparisons and n item moves.
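The ripple-shift cost can be seen concretely with Python's standard bisect module (an illustration, not from the original text): finding the insertion point takes O(log n) comparisons, but opening the "hole" still moves O(n) elements.

```python
import bisect

data = [10, 20, 30, 40, 50]
bisect.insort(data, 35)   # O(log n) search for the slot + O(n) ripple-shift
print(data)               # → [10, 20, 30, 35, 40, 50]
```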
The binary search assumes easy random-access to the data space it is
searching. An array is the data structure that is most often used
because it is easy to jump from one index to another in an array. It
is difficult, on the other hand, to efficiently compute the midpoint
of a linked list
and then traverse there inexpensively. The binary search tree
data structure and algorithm, which we discuss later,
attempt to solve these array-based binary search weaknesses. | <urn:uuid:906f913b-099e-4961-9745-599d81da59fc> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/alg/node11.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00573-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904534 | 464 | 2.9375 | 3 |
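To see why the midpoint of a linked list is expensive to reach, consider this sketch (Python, not from the original text): even the classic slow/fast-pointer trick must traverse the list, costing O(n) pointer hops where an array index jump is O(1).

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def midpoint(head):
    """Return the middle node of a singly linked list.
    The fast pointer advances two hops per iteration, so when it
    reaches the end the slow pointer is at the middle -- but this
    still costs O(n) pointer hops, unlike O(1) array indexing."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
print(midpoint(head).value)  # → 3
```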
Definition: An area of storage where items with a common property are stored. Typically tree data structures and sort algorithms use many buckets, one for each group of items. Usually buckets are kept on disk.
Generalization (I am a kind of ...)
Aggregate parent (I am a part of or used in ...)
radix sort, bucket sort, elastic-bucket trie, hash heap, extendible hashing.
Note: A bucket is used when a number of items need to be kept together, but the order among them is not important. Conceptually it is a bag (rather than a set).
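The cross-references above mention bucket sort; as a minimal illustration (Python, not part of the dictionary entry), a bucket sort scatters items into buckets where the order within each bucket initially does not matter, then imposes order bucket by bucket:

```python
def bucket_sort(values, num_buckets=10):
    """Sort floats in [0, 1) by scattering them into buckets,
    sorting each bucket, then concatenating the buckets in order."""
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        # Order inside a bucket does not matter at this stage.
        buckets[int(v * num_buckets)].append(v)
    result = []
    for b in buckets:
        result.extend(sorted(b))  # impose order within each bucket
    return result

print(bucket_sort([0.42, 0.32, 0.73, 0.12, 0.99]))
# → [0.12, 0.32, 0.42, 0.73, 0.99]
```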
Entry modified 8 January 2004.
Cite this as:
Paul E. Black, "bucket", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 8 January 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/bucket.html | <urn:uuid:5bb51ead-314e-4e7a-8251-257052daebfc> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/bucket.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00573-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.864549 | 241 | 3.21875 | 3 |
Cereal-based beverages are drinks made by fermenting cereals and extracting their nutritional content. These beverages are generally consumed as health supplements. Various types of cereal-based beverages are produced around the world and are classified on the basis of the raw material used or the type of fermentation involved: alcoholic beverages are classified into wines and beers, while the non-alcoholic fermentations are souring fermentations, mainly lactic acid fermentations.
The major driver for cereal-based beverages is changing consumer demand for nutritious beverages. With changing market trends and rising disposable incomes, demand for cereal-based beverages is growing. The increasing incidence of disease, population growth, and rising consumer health consciousness are also driving the market.
Cereal-based beverages are sold largely through retail channels, hypermarkets and supermarkets. Manufacturers also tend to open their own retail outlets to sell the products and increase their earnings. A variety of products are introduced in the market by various manufacturers to meet the growing demand.
There have also been many advancements in product development processes and technology. Manufacturers of cereal-based beverages are therefore adopting innovative techniques to produce these products with less input, lower cost, and higher output.
Several key companies manufacture and distribute cereal-based beverages, including PepsiCo Inc., Dr Pepper Snapple Group, and Nestlé. These companies are pursuing market strategies such as mergers and acquisitions, joint ventures, new product development, and expansion to increase their share of the global cereal-based beverages market.
Reasons to Buy the Report:
From an insight perspective, this research report focuses on several levels of analysis: industry analysis, market share analysis of top players, and company profiles. Together these cover the competitive landscape, emerging and high-growth segments of the global cereal-based beverages market, high-growth regions and countries and their respective regulatory policies, government initiatives, drivers, restraints, and opportunities.
The report will help both established firms and new entrants/smaller firms gauge the pulse of the market, which in turn will help them garner a greater market share. Firms purchasing the report can use any one, or a combination, of the five strategies mentioned below (market penetration, product development/innovation, market development, market diversification, and competitive assessment) to strengthen their market share.
The report provides insights on the following pointers:
- Market Penetration: Comprehensive information on Cereal based beverages offered by the top 10 players in the Global market.
- Product Development/Innovation: Detailed insights on upcoming technologies, research and development activities, and new product launches in the Global Cereal based beverages market.
- Market Development: Comprehensive information about lucrative emerging markets. The report analyzes the markets for various applications of cereal-based beverages globally.
- Market Diversification: Exhaustive information about new products, untapped geographies, recent developments, and investments in the Global Cereal based beverages Market.
- Competitive Assessment: In-depth assessment of market shares, strategies, products, and manufacturing capabilities of leading players in the Global Cereal based beverages Market.
Photo: The GIS Team at the Langley Research Center was tasked with scanning and photographing some 30,000 square feet of the facility — including 2.7 miles of utility tunnels. Photo courtesy of NASA.
In 1917, President Woodrow Wilson signed an order to establish a laboratory dedicated to aeronautical research. As a result of that order, NASA’s Langley Research Center (LaRC) was established in Hampton, Va., to help America keep pace with rapid European advances in aircraft and flight technology. Today, Langley is America’s oldest civilian facility dedicated to aeronautical and aerospace research. With more than 290 buildings on 788 acres, LaRC is home to wind tunnels, test structures and laboratories that conduct research on military and civilian aircraft and spacecraft.
Langley is respected as a premier location for U.S. aerospace research and testing. Its achievements include research that first enabled aircraft to fly at supersonic speeds and development of methods for orbital rendezvous and docking. Langley served as the initial home for Project Mercury, America’s first manned spaceflight program, and had the lead role in the Mars Viking Lander program. A key facility in development of the space shuttle orbiter, LaRC has participated in testing and development of virtually every type of aircraft flown by the U.S. military.
Because of LaRC’s size and variety of functions, its facilities must constantly evolve. To support the changing needs of operations and facilities management, LaRC’s Center Operations Directorate (COD) needs up-to-date spatial information on its buildings and equipment. Much of this information is developed and managed in COD’s GIS. To manage the system, the LaRC’s GIS team evolved. The team is a group of GIS professionals, surveyors and engineers that provides positioning and spatial data services to the research center’s facilities managers and contractors. The research center’s size and varied needs has led the GIS team to develop new ways to collect and share spatial information. To gather data, the team uses a variety of high-accuracy positioning technologies including GPS, 3-D scanning and optical total stations. According to GIS Team Lead Brad Ball, the group’s passion for technology has given LaRC the reputation as the most advanced GIS of any NASA center.
The foundation for the center’s GIS is a geodetic coordinate system and series of fixed markers on the Langley base. When collecting data, surveyors and GIS operators use these markers to make sure that their measurements and position data fit together accurately. GPS is one of the primary methods for capturing positions, and the LaRC GIS team uses high-accuracy real-time kinematic (RTK) methods to measure positions accurate to a few centimeters. To provide the basis for this work, the GIS team installed a Trimble NetR5 GPS Reference Station on their building. Known as a continuously operating reference station (CORS), this local reference station serves as the basis for all GPS surveying and GIS positions on the base. It broadcasts information needed for RTK work and collects data for post-processed analyses. The CORS data is available to GIS operators and surveyors in the surrounding communities as well.
Photo: An engineer at the Langley Research Center uses a 3-D scanner to map a part of the facility's interior. Photo courtesy of NASA.
All of Langley’s roads, utilities, buildings and structures are tied to the LaRC GIS and coordinate system, which extends to the base boundaries and surrounding areas. When a new item is added to the GIS, the geodetic framework can be used to precisely determine the object’s physical relationship to its surrounding features.
An example of this integration is the use of building information models for facilities design and management in one of LaRC’s older buildings. Over the years, LaRC Building 1230 has undergone numerous modifications. Building drawings haven’t always kept pace with the changes, and engineers and construction teams often worked with outdated or incomplete information when operating and updating the facility. In preparation for a recent remodel, interior walls were removed from one wing of the building. With the structural and mechanical features exposed, the GIS team scanned the interior and exterior of the building. The scanner produced a point cloud consisting of millions of individual 3-D points that depicted the building; each point was accurate to a few millimeters.
For the work at Building 1230, the team used RTK to establish exterior setup points for the scanner. Using the CORS as a reference, the GPS points could be tied directly into the LaRC’s geodetic coordinate system. “We scanned the interior and exterior of the building from 15 different locations and composited the scans together,” said Jason Hall, a GIS analyst on the Langley team. “We linked the interior scans by carrying control through stairwells. We used the GPS points for the exterior scans, and connected them to the interior by sighting through window openings.”
The crew used the 3-D scanner’s video camera to capture photographic images of the scene. The images can be draped over the point cloud to produce a 3-D photorealistic image of the building, right down to individual bricks, bolts and fittings. In less than a week — and while continuing support to other projects — the team had developed a true-color, high-density point cloud of the wing’s approximately 30,000 square feet.
With the fieldwork complete, the LaRC team processed and checked the data. Then they extracted subsets and cross-sections that were sent to Autodesk Revit and other computer-aided design (CAD) systems for use in developing design and construction plans. In just a short time, scanner technology has become an important part of LaRC’s GIS and facilities management toolbox. Langley will be increasingly using building information modeling for its facilities development and management, and Ball said that the 3-D scanner plays an important role in validating the building models.
Just below the surface at NASA’s Langley facility, a series of tunnels contains steam lines, water pipes, electrical equipment and other utilities. LaRC’s utility systems have been repeatedly modified and upgraded, and recent studies of the steam system exposed some serious problems. The requirements for the legacy steam system have continually expanded and the system was originally installed without the benefit of currently available technology. As a result, there were numerous low points in the steam lines that produced water buildup and hammering. To address the problem, the GIS team needed to measure and catalog the steam lines to find all the low points. With that information, crews could install condensate drains to eliminate the water.
The work covered four tunnels with a combined length of nearly 2.7 miles. In addition to collecting horizontal and vertical locations of the steam lines and fixtures, the GIS team captured the location of all valves, pumps and other items that carried maintenance identification tags. “The water and high-pressure air systems are in the steam tunnels,” Ball said, “and some of these utility tunnel features haven’t been viewed for 50 years. This was an opportunity to collect accurate locations on all of this structure and equipment.”
Hall and his colleague, Dana Torres, worked in cramped spaces jammed with pipes, conduits and machinery. The team combined RTK with optical measurements to connect the tunnels to the GIS. Their work was hampered by the presence of maintenance and construction crews working on the various utility and communications systems. Hall and Torres used a global navigation satellite system for the RTK work, and another station for the optical portions. They collected positions and GIS attribute information on nearly 3,200 features, and maintained critical vertical accuracies of 0.04 feet. The information was loaded into geomatics software for quality checking, and then output to Esri ArcGIS and other software for graphical and numerical analysis. In addition to capturing locations for the utilities, the work corrected some old errors. “Jason and Dana thoroughly measured the sides and the corners of the tunnels,” Ball said. “They found parts of this massive tunnel system that were four feet off compared to earlier maps.”
For most GIS work, the GIS team uses the CORS in conjunction with the Trimble R8 GNSS RTK System to obtain centimeter accuracy. On a recent project to validate existing utility location data of the base, the team used RTK to measure roughly 1,000 points on the base. They compared the measured positions with legacy CAD data. The work revealed enough inaccuracy to justify creating new high-resolution aerial images of the base. To provide control for the new images, the team measured the positions of dozens of monuments and valve covers visible in the new images. Many of the valves were located close to buildings and couldn’t be measured using GPS. For these points, the team used integrated surveying techniques to combine RTK data with measurements taken with a high-performance surveying station. The result was a two-inch resolution image with accuracy needed to support utility corrections.
The GIS team uses 3-D scanning technology to support scientific research as well as facilities management. As part of a proposed new test, NASA researchers wanted to evaluate use of overhead crane rails in a wind tunnel to suspend test apparatus. For the test apparatus to function properly, the project engineers needed precise information about the relationship between the crane rails, and the bottom and throat of the tunnel. The GIS team scanned the wind tunnel and collected roughly 3 million points in less than one day. The point cloud was exported to Drawing Exchange Format and AutoCAD formats, and the researchers used the information to determine dimensions in the wind tunnel to a precision of 3 mm.
As part of its role in facilities management, the LaRC team uses georeferenced spatial information for space optimization and allocation planning. They developed a GIS-based approach to Space Utilization Optimization with the objective of minimizing operational costs while maximizing synergy between functional centers. The GIS team is developing facility consolidation plans that offer projected benefits valued in the range of hundreds of thousands of dollars.
The Langley GIS team’s accomplishments haven’t gone unnoticed. As part of NASA’s process of implementing new systems and methods, the Langley GIS team frequently travels to share their expertise with other NASA and federal facilities. In 2009, the team received a Special Achievement in GIS award from Esri in recognition of the team’s innovation and leadership in GIS and spatial data.
Through its use of scanning technology, the NASA Langley GIS Team has developed a reputation for measuring objects and facilities that otherwise would be very difficult to measure. “With this equipment, we deliver information that no one else can provide,” said John Meyer, an engineer on the GIS team. “The fact that we can supply accurate measurements to a variety of disciplines is very important.” The technology has demonstrated that it delivers significant time and cost savings on maintenance and remodel projects, and Ball believes that 3-D scanning and high-accuracy GIS are rapidly becoming standard tools for facilities management. According to Ball, it’s simply good business. “It’s important to let creative people pursue new ways to do things,” he said. “Some people are wary of investing in the new technology. But it’s not all that risky once you turn it loose.”
John Stenmark, LS, is a writer and consultant working in the architecture/engineering/construction and technical industries. He has more than 20 years’ experience in applying advanced technology to surveying and related disciplines. | <urn:uuid:543e2657-1b23-4999-9678-601d6697b44a> | CC-MAIN-2017-04 | http://www.govtech.com/geospatial/NASA-Aeronautics-Lab.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945025 | 2,450 | 3.4375 | 3 |
Students taking a geography class at Penn State University (PSU) might find themselves sitting next to a robot instead of just their peers.
PSU is running a pilot program involving Suitable Technologies’ Beam robots. A year and a half ago, Suitable Technologies, a Palo Alto-based company that produces remote presence devices, loaned PSU two Beam robots to test throughout its student body. According to Chris Stubbs, Manager of Emerging Technology and Media for PSU’s Teaching and Learning with Technology program (TLT), remote students can operate these robots through a mobile app. Standing about 5 feet tall and resembling “an iPad on a stick with wheels, but sturdier,” the robots attend classes physically and the student can see everything the robot sees. The slender white machines have a screen that displays the user’s face.
According to Stubbs, these robots offer great potential for students with disabilities and for remote access to campus. Stubbs said that, for the past 18 months, the robots have been shuffled between different disciplines and across the university’s 24 campuses. Professors in different departments are determining how the robots can affect their students as they test the devices. Stubbs said that, while the Beam robots may not be optimal for 300-person lecture classes, they are useful for informal visits to professors and classes that involve moving around different desks, such as architecture studios.
“That’s where this technology really shines,” Stubbs said. “Movement matters.”
The TLT team continues to test the Beam robots throughout different campuses and departments. While the robots are technically able to travel wherever there is a Wi-Fi signal, Stubbs said users have been keeping them indoors. He said that, after the testing period, professors and administrators will discuss whether they want to start phasing robots in for continued use.
Remote students can view their classes through a live feed, but Stubbs said the traveling robot offers a richer experience. Beam said that, so far, 20 professors and even more students have used the robots. The geography class has the robots for testing now; next, the education classes will have a turn. Some of the users have included students who could not make it to class because they were working and international post-doctoral students who could not physically be on campus.
“The tech seems to convey a sense of presence you don’t get on a conference call or Skype,” Stubbs said. “There’s ownership in being able to move through space.” | <urn:uuid:456460b1-478d-4d09-9861-bbb65ed69e4b> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/robots-join-the-nittany-lions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00233-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951286 | 526 | 2.859375 | 3 |
Iron Mountain makes Public Greenhouse Gas Emissions Disclosure to CDP
How Carbon measurement and reporting are driving business decisions
With the US administration’s announcement of new EPA carbon pollution rules and the build-up to the United Nations Conference on Climate Change (COP21) in Paris this December, there is a lot of news these days about Greenhouse Gas (GHG) emissions and climate disruption. It can be hard to separate the important information from the politics in North America, but with real changes in climate becoming increasingly apparent, the key question for a global company like Iron Mountain is: how does this affect our business?
From more severe storms or droughts disrupting business operations or supply chains to the value of real estate as flood zones move, the effects of a changing climate are real, far-reaching and very often difficult to predict. And the impacts on people are even more difficult to guess. What are the consequences if 100 million Bangladeshis are forced to leave their homes? The first businesses to realize that something was going on were reinsurance companies like Swiss Re, which saw the risk of increased claims. Then institutional investors started to worry that companies in their portfolios carried risks they didn’t know about, because business leaders had no way to measure, analyze or share their contributions to the problem or their exposure to the issues. In response to this blind spot, the London-based Carbon Disclosure Project (now called CDP) was formed in 2000 with the goal of convincing large companies to measure and report their GHG emissions, risks and actions. Today, thousands of companies report, and CDP counts 822 institutional investors with US$95 trillion under management who use this information.
But how can counting carbon deliver value to our business? Just as in driving, blind spots in business are dangerous. By learning and measuring all the ways our operations create direct and indirect GHG emissions, we see our business differently, and more information often leads to better decisions. For example, people generally think the new carbon rules will impact energy costs; some say up and some say down, which is not very helpful. Because of our new understanding of GHG impacts, we know our carbon emissions from energy by location and can use that information to predict where costs are likely to be most volatile in the future. And we’re already taking action, such as signing deals for on-site solar power and considering long-term, fixed-price renewable energy contracts that will help us avoid risks. Blind spots can also hide opportunities. For example, addressing the carbon intensity of the energy we use for data centers helps us see how to solve a problem for our customers. What if they could reduce their GHG footprint by using our services? These are just a couple of examples of how understanding our environmental impact can quickly translate into financial results for the company.
By measuring and publicly sharing our environmental and social impacts through our annual CR Report and our disclosure to CDP, we not only satisfy the demands of customers, investors and stakeholders for transparency and accountability, we can also add new information to business decisions of all kinds. This is new for us and we’re still learning, but so far it’s been amazing to see employees from different parts of the business discover how new information can help them make more informed decisions. We’re looking forward to expanding our reach to more areas of the business and sharing this new approach with more employees.
Don’t ignore the headlines – three steps to protect yourself online
By Stephanie Hopper, Information Security Engineer
Perhaps you’ve seen one of the recent headlines: “Computer bug ‘Heartbleed’ poses severe threat,” “Tech Companies Rush to Secure Products Against ‘Shellshock’ Bash Bug,” or “Shellshock security risk for millions of computer users.” Software vulnerabilities have been in the news a lot lately. Heartbleed, Shellshock (aka Bash) and Poodle have sent IT staffs across the globe scrambling to protect servers they once thought were safe. Perhaps more broadly, these vulnerabilities have begun to shake the confidence of computer users who once believed that all websites were inherently safe.
It appears that users feel inundated with this type of bad news and have become immune to caring about threats. Or maybe they just feel it’s the responsibility of others to protect them. However, the growth of online threats clearly illustrates the need for you to take a personal interest in your own security and responsibility for your behavior while operating in cyberspace, whether that’s in the comfort of your own home or in a public Wi-Fi space.
It’s true that the recent threats didn’t require much action from the average user. In the case of Heartbleed, our advice was to wait for the websites you visit to patch their servers then change your passwords. For Shellshock, you should patch your Mac or Linux computer and hope that everyone else does the same. But that doesn’t mean you shouldn’t worry or be vigilant about your security.
So, what can you do? The following list highlights three actions you should regularly take to help minimize potential losses from a hostile attack:
1. Install and regularly update antivirus and anti-malware.
2. Back up your important files and double-check to make sure it’s done.
3. Monitor your financial information and credit reports.
Technology greatly improves our lives and offers many benefits, but it also imposes certain responsibilities. So remember: don’t ignore the headlines; take the necessary steps to stay informed and protect your valuable data.
For additional tips on Internet security, visit: http://news.centurylink.com/resources/tips/centurylink-consumer-security-tips-online-security. | <urn:uuid:0d8f8c82-ae8c-4cc0-839f-1aaf5591c943> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/security/dont-ignore-the-headlines-three-steps-to-protect-yourself-online | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925309 | 493 | 2.578125 | 3 |
Question 1) Test Yourself on CompTIA i-Net+.
Objective : Network
SubObjective : Understand and be able to Describe the Use of Internet Domain Names and DNS
Single Answer Multiple Choice
DNS uses a lookup table to resolve names. Which record type in the DNS table is used for the assignment of multiple, fully qualified domain names to one IP address?
An IP address can be assigned to more than one fully qualified domain name. The DNS lookup table must contain the multiple names for the IP address. The record in the DNS table that denotes an IP address assigned multiple, fully qualified domain names is CNAME, which stands for canonical name. An example of a CNAME record is: www.example.com. IN CNAME example.com.
A is an address record. The A record is used to directly map the record’s host name to its IP address.
MX is a mail exchange record. The MX record is used to identify the mail exchanger for a host.
PTR is a pointer record. The PTR record is used to directly map the record’s IP address to its hostname.
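The way CNAME aliases let several fully qualified domain names reach one IP address can be sketched with a toy lookup table. The names, address, and `resolve` function below are illustrative only and are not taken from the exam material:

```python
# Toy DNS lookup table. CNAME records alias one fully qualified
# domain name to another (its canonical name); the A record maps
# the canonical name to an IP address.
records = {
    ("www.example.com", "CNAME"): "example.com",
    ("ftp.example.com", "CNAME"): "example.com",
    ("example.com", "A"): "93.184.216.34",
}

def resolve(name, table, max_hops=10):
    """Follow CNAME aliases until an A record yields an address."""
    for _ in range(max_hops):
        if (name, "A") in table:
            return table[(name, "A")]
        if (name, "CNAME") in table:
            name = table[(name, "CNAME")]  # follow the alias
        else:
            return None                    # no record found
    return None                            # guard against alias loops

# Two different FQDNs resolve to the same IP address:
print(resolve("www.example.com", records))  # 93.184.216.34
print(resolve("ftp.example.com", records))  # 93.184.216.34
```

Real resolvers follow the same chain of alias-then-address lookups, just against distributed name servers rather than a local dictionary.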
These questions are derived from the Self Test Software Practice Test for CompTIA Exam #IK0-002: i-Net+. | <urn:uuid:0a5565b8-89fa-4d41-979f-2dd67228e2c1> | CC-MAIN-2017-04 | http://certmag.com/question-1-test-yourself-on-comptia-i-net/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.869902 | 251 | 3.671875 | 4 |
Along with the continuous development of optical fiber communication, optical cable also in constant development. Fiber is a communication cable, composed by two or more glass fiber cable or plastic optical fiber core within the cladding of the fiber core is located in the protective outer sleeve, plastic PVC covering. Along the internal optical fiber signal transmission is generally used infrared.Fiber optic cables are usually made of glass or plastic but those materials actually slow down the transmission of light ever so slightly.
Recently, researchers at the university of Southampton, UK, have created a kind of hollow optical fiber cable. This kind of equipment in the middle is hollow, only by filling up the air, but its transport rate is 1000 times faster than the other fiber optic cable. Researchers revealed that light in the air velocity is about 99.7% of its speed in a vacuum.
The idea was not be put forward recently, but in the past when encountered in the process of light transmission in the corner, the signal will always diminish. The researchers optimized the design, making the new type of hollow optical fiber cable data loss is 3.5 dB/km, such an ideal level. In this way, making it suitable for use in supercomputer and data center applications.
Hollow fiber optic cable(indoor/outdoor fiber optic cable) can go through air rather than light, therefore in many areas it has much more advantages than the traditional optical fiber and will eventually replace the traditional optical fiber.
Using hollow optic fiber cable, rather than the traditional high purity silica doped fiber core, its advantage are optical fiber performance is not restricted by material characteristics of the fiber core. Traditional optical damage threshold, the parameters such as attenuation and group velocity dispersion and nonlinear effects are affected by the silicon materials and other corresponding parameters. Through reasonable design, hollow fiber can achieve more than 99% of the light in the air instead of in the glass, thus greatly reduce the material properties of optical fiber properties. So in many important areas, hollow fiber optic cable transceiver have more advantages than the traditional optical fiber.
Theoretically, this kind of fiber optic cable no fiber core, reduced the loss, to increase the communication distance, preventing the dispersion caused by the interference phenomenon, can support more wavelengths, and allows the stronger light power injection, estimate its communications capacity can reach 1000 times of the cable at present.
Promote hollow optic fiber cable of the ongoing research, with the extensive application of optical fiber and cable, the fiber optic cable has been unable to meet the needs of the people, therefore, need to continue to study new fiber optic cable in order to adapt to the needs of people.Researchers at the University of Southampton in the UK have created a hollow fiber-optic cable.From Fiberstore,we supply many different types of fiber optic cables, and customers have the flexibility to choose a cable plant to best fit their needs.If you need some cables,welcome to Fiberstore to find it. | <urn:uuid:0a5ce3f7-0c6e-4aeb-9cc0-eccf4effba1e> | CC-MAIN-2017-04 | http://www.fs.com/blog/new-fiber-optic-cable-the-advantages-of-the-hollow-fiber-cable.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927929 | 602 | 3.546875 | 4 |
Adapted from Windows Help Documentation
Volumes become fragmented as users create and delete files and folders, install new software, or download files from the Internet. Computers typically save files in the first contiguous free space that is large enough for the file. If a large enough free space is not available, the computer saves as much of the file as possible in the largest available space and then saves the remaining data in the next available free space, and so on.
After a large portion of a volume has been used for file and folder storage, most of the new files are saved in pieces across the volume. When you delete files, the empty spaces left behind fill in randomly as you store new ones.
The more fragmented the volume is, the slower the computer's file input/output performance will be.
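The first-free-space behavior described above can be sketched as a toy block allocator. This is a deliberately simplified model for illustration, not how any real file system is implemented:

```python
def allocate(disk, file_id, size):
    """Fill the first free ('.') blocks with `file_id`, one block at
    a time, so a file fragments whenever no single gap is big enough."""
    placed = 0
    for i, block in enumerate(disk):
        if block == ".":
            disk[i] = file_id
            placed += 1
            if placed == size:
                return True
    return False  # not enough free space

disk = ["."] * 12
allocate(disk, "A", 4)       # AAAA........
allocate(disk, "B", 3)       # AAAABBB.....
disk[1] = disk[2] = "."      # deleting data leaves a gap inside A
allocate(disk, "C", 4)       # C fills the gap, then spills past B
print("".join(disk))         # ACCABBBCC...
```

File C ends up split into two pieces, which is exactly the fragmentation that forces extra disk seeks and slows file input/output.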
Desktop Central provides an option to run the defragmenter tool on multiple machines simultaneously. It supports the following options:
14 Amazing DARPA Technologies On Tap: Go inside the labs of the Defense Advanced Research Projects Agency for a look at some of the most intriguing technologies they're developing in computing, electronics, communications, and more.
As part of the Cognitive Technology Threat Warning System, soldiers wear an electroencephalogram (EEG) cap that monitors brain signals and records when a threat is detected. Users are shown images, and their brain signals indicate which images are significant. Image credit: DARPA
In general, text analysis refers to the process of extracting interesting and non-trivial information and knowledge from unstructured text. Text analysis differs from traditional search in that, whereas search requires a user to know what he or she is looking for, text analysis attempts to discover information in a pattern that is not known beforehand (through the use of advanced techniques such as pattern recognition, natural language processing, machine learning and so on). By focusing on patterns and characteristics, text analysis can produce better search results and deeper data analysis, thereby providing quick retrieval of information that otherwise would remain hidden.
Text analysis is particularly interesting in areas where users must discover new information, such as in criminal investigations, legal discovery and when performing due-diligence investigations. Such investigations require 100% recall; i.e., users cannot afford to miss any relevant information. In contrast, a user who uses a standard search engine to search the Internet for background information simply requires any information as long as it is reliable. During due diligence, a lawyer certainly wants to find all possible liabilities and is not interested in finding only the obvious ones.
Challenges Facing Text Analysis
Due to the global reach of many investigations, a lot of interest also exists with text analysis in multi-language collections. Multi-language text analysis is much more complex than it appears because, in addition to differences in character sets and words, text analysis makes intensive use of statistics as well as the linguistic properties (such as conjugation, grammar, tenses or meanings) of a language. A number of multi-language issues will be addressed later in this article.
But perhaps the biggest challenge with text analysis is that increasing recall can compromise precision, meaning that users end up having to browse large collections of documents to verify their relevance. Standard approaches to countering decreasing precision rely on language-based technology, but when text collections are not in one language, are not domain-specific, and/or contain documents of variable sizes and types, these approaches often fail or are too sophisticated for users to comprehend what processes are actually taking place, thereby diminishing their control.
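The recall/precision trade-off just described can be made concrete with a small helper function; the document IDs below are hypothetical:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {"d1", "d2", "d3", "d4"}  # the documents that truly matter

# A narrow query: everything returned is relevant, but half is missed.
print(precision_recall({"d1", "d2"}, relevant))              # (1.0, 0.5)

# A broad query: full recall, but users must sift through noise.
print(precision_recall({"d1", "d2", "d3", "d4", "d5", "d6",
                        "d7", "d8"}, relevant))              # (0.5, 1.0)
```

The second query is what 100% recall costs without good filtering: every relevant document is found, but half of what the investigator must read is irrelevant.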
Furthermore, according to Moore’s Law, computer processor and storage capacities double every 18 months, which, in the modern context, also means that the amount of information stored will double during this timeframe as well. The continual, exponential growth of information means most people and organizations are always battling with the specter of information overload.
Although effective and thorough information retrieval is a real challenge, the development of new computing techniques to help control this mountain of information is advancing quickly as well. Text analysis is at the forefront of these new techniques, but it needs to be used correctly and understood according to the particular context in which it’s applied. For example, in an international environment, a suitable text analysis solution may consist of a combination of standard relevance-ranking with adaptive filtering and interactive visualization, which is based on utilizing features (i.e. metadata elements) that have been extracted earlier.
Control of Unstructured Information
More than 90% of all information is unstructured, and the absolute amount of stored unstructured information increases daily. Searching within this information, or performing analysis using database or data mining techniques, is not possible, as these techniques work only on structured information. The situation is further complicated by the diversity of stored information: scanned documents, email and multimedia files (speech, video and photos).
Text analysis neutralizes these concerns through the use of various mathematical, statistical, linguistic and pattern-recognition techniques that allow automatic analysis of unstructured information as well as the extraction of high quality and relevant data. ("High quality" here refers to the combination of relevance [i.e. finding a needle in a haystack] and the acquiring of new and interesting insights.) With text analysis, instead of searching for words, we can search for linguistic word patterns, which enables a much higher level of search.
Text analysis is often mentioned in the same sentence as information visualization, in large part because visualization is one of the viable technical tools for information analysis after unstructured information has been structured.
A common visualization approach is a "treemap," in which an archive is presented as a colored grid (see figure left). The components of the grid are color-coded and sized based on their interrelationships and content volume. This structure allows you to get a quick visual representation of areas with the most entities. A value can also be allocated to a certain type of entity, such as the size of an email or a file.
These types of visualization techniques are ideal for allowing an easy insight into large email collections. Alongside the structure that text analysis techniques can deliver, use can also be derived from the available attributes such as "sender," "recipient," "subject," "date," etc.
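The core idea of a treemap, tiles sized in proportion to the value they represent, can be sketched with a one-level "slice" layout. This is a simplified illustration (with made-up mailbox sizes), not the layout algorithm of any particular product:

```python
def treemap_slices(items, width, height):
    """Lay items out as side-by-side tiles whose widths are
    proportional to their values: a one-level 'slice' treemap.
    Returns (label, x, y, w, h) tuples for rendering."""
    total = sum(value for _, value in items)
    tiles, x = [], 0.0
    for label, value in items:
        w = width * value / total
        tiles.append((label, x, 0.0, w, height))
        x += w
    return tiles

# Hypothetical mailbox sizes in megabytes:
mailboxes = [("inbox", 500), ("sent", 300), ("archive", 200)]
for label, x, _, w, _ in treemap_slices(mailboxes, 100.0, 40.0):
    print(f"{label:8s} x={x:5.1f} width={w:5.1f}")
```

Full treemap algorithms apply this splitting recursively, alternating direction at each level, so nested folders or mailboxes appear as tiles within tiles.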
Text Analysis on Non-English Documents
As mentioned earlier, many language dependencies need to be addressed when text- analysis technology is applied to non-English languages. | <urn:uuid:b75640e3-54d4-470a-afff-aae04d82aa29> | CC-MAIN-2017-04 | http://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=53890 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00022-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935552 | 1,014 | 3.484375 | 3 |
What happens if your websites suddenly draw a lot of traffic? How about if traffic suddenly spikes? Or what if your sites get ‘slashdotted’? This is where a link to your site pops up on a very popular site. While the additional exposure is great, if your infrastructure isn’t ready to handle a large influx of traffic, it can easily overwhelm your server.
Hosting your websites on one server is fine. Until that server cannot handle the traffic or goes down. Sooner or later, every server needs to be rebooted for system updates, software installation, or to try to fix an annoying problem. So you reboot your server, get it patched and banish that transient problem or install your shiny new software. But wait, what about your websites? What happened to them during that time? Well, if you’re hosting your websites on only one server, your websites were down.
Enter the load balancer
A load balancer is a hardware or software appliance designed to spread traffic across multiple servers. It gives you much more granular control over where traffic goes and how it’s handled. Load balancers can be deployed in a highly-available manner so there’s no single point of failure.
How do load balancers work?
Fundamentally, here’s how a load balancer works: Your domain name points to an IP address configured on the load balancer, and the load balancer starts to receive traffic from your users. When a load balancer is deployed, your servers sit ‘behind’ the load balancer, meaning that all traffic to your servers travels through the load balancer. When a request arrives, the load balancer chooses one of your servers to be the machine to service the request. There are many ways of choosing which server will be selected, which is something that’s discussed with you prior to deployment.
Here’s an example: Suppose you have two servers, and a request comes in from one of your users looking to check out your website. The load balancer in this case, decides that server ‘A’ is the server to use, so the request is routed there. Meanwhile, a second request comes in from another user. The load balancer decides that server ‘B’ is a prime candidate, thus splitting the load on your site across multiple servers.
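A round-robin policy, one of the simplest server-selection strategies a load balancer might use, can be sketched in a few lines. The class and server names below are purely illustrative:

```python
import itertools

class LoadBalancer:
    """Round-robin selection: each request goes to the next server
    in the rotation. (One of many possible selection policies.)"""
    def __init__(self, servers):
        self._pool = itertools.cycle(servers)

    def route(self, request):
        server = next(self._pool)
        return server, request  # a real balancer forwards the request

lb = LoadBalancer(["server-a", "server-b"])
print([lb.route(f"req-{i}")[0] for i in range(4)])
# ['server-a', 'server-b', 'server-a', 'server-b']
```

Other common policies weight servers by capacity or pick the one with the fewest active connections, but the routing loop looks much the same.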
How do load balancers keep everything running smoothly?
Load balancers don’t just split your traffic, they also ensure that your servers are up and able to receive said traffic. As a load balancer is going about its regular housekeeping jobs, it also sends ‘health check’ requests to your servers. If the server replies that all is well, the load balancer will happily send traffic to it. If the server fails to respond (or says that all is NOT well), the load balancer will remove that server temporarily from the rotation until everything is running smoothly again. When a server is removed from the rotation, site availability isn’t interrupted as other servers continue to process traffic.
This has the added bonus of giving you some control over which servers handle traffic. If you want to take down server ‘A’ for maintenance, upgrades, et cetera you can cause the server to fail these health checks (thus removing it from the rotation) until you’re ready to bring it back online.
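The health-check behavior can also be sketched: a server that fails its probe drops out of the rotation and rejoins once it passes again. This is an illustrative toy, not any vendor's implementation:

```python
class HealthCheckedBalancer:
    """Round-robin over only the servers that pass a health probe."""
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = list(servers)

    def health_check(self, probe):
        """probe(server) -> True when the server reports all is well."""
        self.healthy = [s for s in self.servers if probe(s)]

    def route(self, n):
        if not self.healthy:
            raise RuntimeError("no healthy servers available")
        return self.healthy[n % len(self.healthy)]

lb = HealthCheckedBalancer(["a", "b", "c"])
lb.health_check(lambda s: s != "b")      # "b" fails; removed from rotation
print([lb.route(i) for i in range(4)])   # ['a', 'c', 'a', 'c']
lb.health_check(lambda s: True)          # "b" recovers and rejoins
print([lb.route(i) for i in range(3)])   # ['a', 'b', 'c']
```

Deliberately failing a server's health check, as in the maintenance scenario above, is just the `s != "b"` probe applied on purpose.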
Get the benefits of load balancers
In short, a load balancer can make your server environment much more flexible, extensible, and resilient. We highly recommend that if you have two or more servers that you consider getting a load balancer.
Got a busy season coming up? You can also use our On-Demand Hybrid Cloud in concert with your load balancer to temporarily handle an increased load. By spinning up cloud servers and placing them behind your load balancer, you gain flexibility without needing an additional dedicated server. Once your busy season is over, simply shut down the cloud servers. | <urn:uuid:c0ca7cb8-eb6a-47b3-b640-20dd8f3b180b> | CC-MAIN-2017-04 | http://www.codero.com/blog/what-is-a-load-balancer-and-how-does-it-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00536-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928614 | 826 | 2.734375 | 3 |
While most of the attention to future spacecraft seems to focus on what the exterior of the ship looks like, Boeing and Bigelow Aerospace this week offered a look at the interiors of their respective systems.
Boeing's Crew Space Transportation (CST)-100 can hold a crew of seven and will be bigger than the Apollo capsule but smaller than NASA's Orion, which is also still under development.
By the looks of it you won't confuse the interior with that of a Boeing 737.
"Designing the next-generation interior for commercial space is a natural progression. A familiar daytime blue sky scene helps passengers maintain their connection with Earth," said Rachelle Ornan, regional director of Sales and Marketing for Boeing Commercial Airplanes in a statement.
The CST-100 should be able to launch on a variety of different rockets, including the Atlas, Delta and SpaceX Falcon.
Boeing is in a high-stakes competition with Sierra Nevada and SpaceX to develop a commercial spacecraft. All three companies have received the lion's share of NASA financial support through its Commercial Crew program.
Boeing has said from the beginning of CST-100 development that it envisions the spacecraft supporting the International Space Station and, in particular, future Bigelow Aerospace Orbital Space Complex systems.
Meanwhile Bigelow showed off one of its "expandable habitats," which are inflatable spacecraft that can function as independent space stations or be connected together in a modular fashion to create an even larger and more capable orbital space system.
Specifically Bigelow showed the interior of its BA 330 which can hold up to six people.
NASA last year awarded a $17.8 million contract to Bigelow Aerospace could bring the company's smaller inflatable space station components to the International Space Station. The smaller inflatable system called the Bigelow Expandable Activity Module, or BEAM could be launched to the station using a commercial cargo flight and robotically attached to the orbiting laboratory.
Recent Zero-Day Exploits
Standard defenses are powerless against zero-day threats
Zero-day attacks are cyber attacks against software flaws that are unknown and have no patch or fix.
It’s extremely difficult to detect zero-day attacks, especially with traditional cyber defenses. Traditional security measures focus on malware signatures and URL reputation. However, with zero-day attacks, this information is, by definition, unknown. Cyber attackers are extraordinarily skilled, and their malware can go undetected on systems for months, and even years, giving them plenty of time to cause irreparable harm.
Based on recently discovered types of zero-day attacks, it has become apparent that operating system level protection is becoming less effective, watering hole attacks are becoming more common, and cyber attacks are becoming more sophisticated and better at bypassing organizational defenses.
Recent Zero-Day Exploits and Vulnerabilities
FireEye has discovered 28 of the 49 zero-day exploits identified since 2013.
From smart homes to smart grids, the Internet of Things (IoT) has received a warm welcome in the energy sector.
North Carolina-based electric power holding company Duke Energy claims it has created a self-healing grid system that automatically reconfigures itself when customers lose power.
It says the electrical system has the capacity to automatically detect, isolate, and reroute power when a problem occurs.
Digital smart sensors at substations and on power lines detect problems and communicate them to the control system. Switches then automatically isolate the damaged section of line.
In theory, the control system continually monitors the state of the grid and determines the best way to reroute power to as many people as possible. It then automatically reconfigures the electric grid to restore power on the line. This should all happen in less than a minute.
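The detect-isolate-reroute loop can be sketched as a graph search. Everything below is a made-up miniature feeder (node names, the two substations), not Duke Energy's system: once sensors flag a line section as faulted, the control logic only has to find any remaining path from an energized substation to the customer.

```python
from collections import deque

# Hypothetical miniature feeder: two substations can each energize the
# home, one via switches s1-s2 and one via switch s3.
grid = {
    "subA": {"s1"}, "s1": {"subA", "s2"}, "s2": {"s1", "home"},
    "subB": {"s3"}, "s3": {"subB", "home"}, "home": {"s2", "s3"},
}

def powered(grid, sources, target, faulted_lines):
    """Breadth-first search from any source, skipping isolated line sections."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for neighbor in grid[node]:
            line = frozenset((node, neighbor))
            if neighbor not in seen and line not in faulted_lines:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

# Sensors report a fault on line s1-s2; the control system isolates it,
# and the home is still energized via the alternate path through subB.
fault = {frozenset(("s1", "s2"))}
print(powered(grid, {"subA", "subB"}, "home", fault))  # True
```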
The state of Indiana is currently modernizing its state-wide energy grid, so expect a verdict on this technology in due course.
Pacific Gas & Electricity Company (PG&E) said in May that it was testing drones to enhance the safety and reliability of its electric and gas service.
The test programs aim to explore the feasibility of using safety drones to monitor electric infrastructure in hard-to-reach areas and to detect methane leaks across its 70,000 square-mile service area.
Initial results look positive. The company has revealed that the use of drones for safety inspections is much easier and reduces risk to employees.
PG&E is currently working with NASA to test methane sensors. Watch this space for further updates on the trials.
Energy giant, EDF, is getting warm and cozy with Amazon. The firm says it has created an easy way for customers to control their account through Amazon’s Alexa voice service.
The EDF Energy skill – Amazon’s version of an app – should let customers interact with Amazon’s hands-free, voice-controlled speaker, the Echo, to check their account balance and next payment date, among other things.
Handy if you happen to love Amazon devices and get your energy supply from EDF.
The National Grid is using demand side response company Open Energi’s Dynamic Demand technology to balance supply and demand across the UK’s power grid.
Open Energi says the technology aggregates energy consumption from across customers’ sites to provide a fast, flexible solution which is equivalent to a power station, but instead of adjusting supply up or down to meet demand, it adjusts demand up or down to meet supply.
Supposedly, Dynamic Demand provides National Grid with a fast demand response and enables consumers to better manage their consumption, freeing up capacity for the whole grid.
The roll-out is ongoing, but Open Energi says Dynamic Demand has been installed at more than 50 sites to date.
The future of energy will not just be decided by the traditional service providers, not if Nissan has anything to do with it, anyway.
The auto manufacturer has partnered with ENEL, one of Europe’s largest power companies, to develop a ‘Vehicle-to-Grid’ system. The companies say this system will allow drivers and energy users to operate as individual energy hubs with the ability to use, store and return excess energy to the grid.
The first trials for the tech will be held in Denmark, with Germany, the Netherlands and some other European countries following thereafter, presumably if the trials are successful.
In New York, Brooklyn Microgrid has created a connected network for energy. The group says it is using blockchain technology to enable the first peer-to-peer energy exchange and an emerging energy internet, according to Fast Coexist.
The project, called TransActive Grid, is a joint venture between Brooklyn Microgrid developer LO3 Energy and blockchain technology developer ConsenSys.
The grid includes a hardware layer of smart meters and a software layer using blockchain. The participants’ homes are equipped with smart meters linked to the blockchain to track the electricity generated and used in the homes and manage transactions between neighbors.
As Fast Coexist explains: “On one side of President Street, five homes with solar panels generate electricity. On the other side, five homes buy power when the opposite homes don’t need it. In the middle is a blockchain network, managing and recording transactions with little human interaction.”
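A toy hash-chained ledger illustrates the mechanism the article describes; this is a stand-in for the real blockchain layer, not TransActive Grid's code. Each block records one neighbor-to-neighbor energy trade plus the hash of the previous block, so an earlier trade cannot be altered without breaking every later link.

```python
import hashlib
import json

def make_block(prev_hash, seller, buyer, kwh):
    """Record one energy trade, chained to the previous block by its hash."""
    record = {"prev": prev_hash, "seller": seller, "buyer": buyer, "kwh": kwh}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def chain_is_consistent(chain):
    """Every block must point at the stored hash of the block before it."""
    return all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:]))

chain = [make_block("genesis", "solar-home-3", "home-7", 2.5)]
chain.append(make_block(chain[-1]["hash"], "solar-home-1", "home-9", 1.0))

print(chain_is_consistent(chain))   # True
chain[0]["hash"] = "forged"         # tamper with the first trade...
print(chain_is_consistent(chain))   # False: the link to block 2 breaks
```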
General Electric is no stranger to the IoT space. The company has just taken $800 million in digital industrial power orders across Asia-Pacific, and shows no signs of slowing down its bet on the Industrial Internet of Things (IIoT).
General Electric’s Asset Performance Management software is being used in a number of power plants to connect disparate data sources and to aid data analysis.
To make the tech tick, sensors are placed on key infrastructure, such as gas turbines, to monitor and collect data on operations and efficiency. The data collected can then be analyzed in the cloud to provide real-time updates on the use of gas within a plant.
For more detail, see what GE is doing with Tepco in Japan.
RWE claims to have created one of the first shared solar power energy systems in the world.
RWE claims the software, called The Shine, offers users an easy-to-use home energy management system that lets them optimize their use of solar energy. Users can also connect with others to share, buy or sell locally-generated green energy, as reported in Utility Week.
The Shine will be commercially available in Germany before it reaches the UK.
IoT has the power to connect even the most hard-to-reach areas.
UK power and gas supplier E.ON is working with SSE Enterprise Telecoms to digitise its wind farm estate in the Scottish Highlands by connecting the farms to the internet.
Supposedly, SSE’s national Carrier Ethernet MPLS services provide E.ON with a meshed network that ‘guarantees that the critical communications systems that enable the control and live monitoring of each site are always available’ even in remote and harsh conditions.
E.ON says it now knows exactly how its business is performing across the highlands.
Last but not least is Hive. British Gas’ smart thermostat has already made an impact on many consumers’ lives (300,000 in the UK as of April this year) and needs little introduction.
Most recently, British Gas announced Hive 2. It’s a major upgrade on the initial Hive launch, giving consumers the option to control heating and hot water remotely from an app on a smartphone.
According to our IoB tech guru and contributing editor, Jan Maciejewski, “Hive is well on its way to creating a new standard in the home.”
For a deeper understanding, check out Jan’s review from earlier this year.
The Internet of Energy is Europe’s only forum dedicated to exploring the business case for the internet of things in the energy industry. Through early-adopter case studies from traditional utilities and perspectives of new ‘disrupters’, the event will explore the opportunities for the energy eco-system in an age of IoT.
Mad Dog 21/21: Achilles And The iPhone
October 24, 2016 Hesh Wiener
In the 5th century BC, a Greek philosopher named Zeno invented clever, paradoxical puzzles. In the most famous, he argues that swift Achilles, racing a turtle that has been given a very modest lead, can never catch that torpid tortoise. Twenty-one centuries later, two geniuses simultaneously and independently used Zeno’s view of infinity and the infinitesimal to create calculus. It took roughly another three centuries for Steve Jobs’s magnificent iPhone to newly define the infinite as Apple sold more than a billion devices, spawned a trillion dollar business, and inspired new forms of social and economic organization.
Zeno’s inspirational paradox, briefly, is this: Imagine a footrace between athletic Achilles and an ordinary tortoise. Achilles gives the tortoise a hundred yard lead and then tries to overtake it. To do so, Achilles must run to the position the tortoise occupied at the start of the race. By that time the tortoise has moved ahead. Achilles must then run to the where the tortoise was when Achilles ran the next segment of the race, and again by that time the tortoise would have plodded on some distance. So, while Achilles can always reach a point where the tortoise used to be, he can never catch up with the reptile, let alone overtake it.
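The paradox dissolves once the catch-up segments are added up. A few lines of Python (speeds and head start chosen arbitrarily: Achilles at 10 yards per second, the tortoise at 1, a 100-yard lead) show the infinite sequence of ever-shorter runs summing to a finite distance, the point where Achilles draws level:

```python
# Each catch-up segment is a tenth the length of the one before it, a
# geometric series converging to head_start / (1 - ratio).
achilles_speed, tortoise_speed, head_start = 10.0, 1.0, 100.0

position, gap = 0.0, head_start
for _ in range(50):                            # 50 segments is plenty to converge
    position += gap                            # run to where the tortoise was...
    gap *= tortoise_speed / achilles_speed     # ...but it has crept ahead

exact = head_start / (1 - tortoise_speed / achilles_speed)
print(round(position, 6))  # 111.111111 yards
print(round(exact, 6))     # 111.111111 yards
```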
Zeno’s paradox inspired all the philosophers and mathematicians who followed him to think about the division of time and space into ever smaller portions and the way those portions might or might not be added together. The notions inherent in this puzzle underlie the most powerful ideas behind calculus.
In the 17th century two extraordinary scholars, Isaac Newton in England and Gottfried Wilhelm Leibniz in Germany, happened to think about ways to create a branch of mathematics that would address matters not amenable to analysis using the basic algebra of their time. Each, coincidentally, developed what we call integral and differential calculus. The first enabled the solution of problems involving the area under a curve and volume within a three-dimensional object. The second provided a basis for describing rates of change. The two branches of calculus complement each other and together form a complete body of knowledge.
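Both branches can be illustrated numerically for a simple function; the function f(x) = x², the evaluation point, and the step sizes below are arbitrary choices for the sketch:

```python
# Differential calculus: the rate of change of f at x = 3 (exactly 6),
# approximated by a central difference with a shrinking step h.
def f(x):
    return x * x

h = 1e-6
derivative = (f(3 + h) - f(3 - h)) / (2 * h)

# Integral calculus: the area under f from 0 to 3 (exactly 9),
# approximated by a midpoint Riemann sum over many thin slices.
n = 100_000
dx = 3 / n
integral = sum(f((i + 0.5) * dx) * dx for i in range(n))

print(round(derivative, 4))  # 6.0
print(round(integral, 4))    # 9.0
```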
Each of the two mathematicians had interests far afield from calculus. Newton invented the reflector telescope, which he used to examine astronomical objects influenced by gravity. He used calculus to describe the way gravity determined the paths of the moon and the planets. Similarly, he applied his methods to things closer at hand, including falling objects, an inspiring one of which was an apple. Thus, Apple is a very good name for an inspiring company.
Leibniz was in some ways more of an academician than Newton. It is his scheme of mathematical notation, including the elongated S commonly called the integral sign, that is still used to express the concepts of calculus. Leibniz wasn’t entirely devoted to the abstract, however. He built mechanical calculators that performed the four basic arithmetic operations: addition, subtraction, multiplication, and division. But in his abstraction of philosophical ideas he advanced the notion of simple objects that interacted with other objects in various ways. These monads, as he called them, inspired scientists of the 20th century to develop theories of neurons, each independent but nevertheless part of an interactive whole, that helped reveal the secrets of the brain. Smartphone users may be thought of as monads, too.
We will never know how Steve Jobs developed his wonderful vision of the iPhone. Whatever the provenance of his inspiration, it guided the engineers, programmers, designers, physicists, mathematicians, and marketing specialists at Apple as they developed the device, its supporting technologies and its surrounding ecosystem.
In 2007, when the iPhone was announced, pretty much every observer of information technologies who took a good look at it was not merely impressed but dazzled. And that was only the beginning, when the iPhone had, by today’s standards, limited computational abilities and a very modest complement of apps. That was before mobile networks offered particularly large bandwidth. That was before geolocation technologies and related libraries of maps, atlases, and other reference materials became powerful in their functionality and at the same time eminently affordable. That was before Apple was able to boast it had sold more than a billion iPhones, a milestone it passed last summer, before the current iPhone 7 generation of instruments had appeared.
If Apple maintains the sales pace estimated by Gartner and other analyst firms, it could add another hundred million units to the population of smartphones by the end of this calendar year. These phones typically sell for $750 to $900 each, for a total hardware value of $75 billion to $90 billion in half a year. IBM’s revenue for all of 2016 from hardware, software, and services is expected to be about $80 billion or maybe a bit less. Apple’s iPhone hardware business alone, then, is bigger than all of IBM.
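A quick sanity check on the arithmetic quoted above, one hundred million phones at the $750 to $900 price band:

```python
units = 100_000_000
low, high = units * 750, units * 900
print(low / 1e9, high / 1e9)  # 75.0 90.0 (billions of dollars)
```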
In addition to phones, Apple’s customers buy lots of apps, plenty of phone cases and other accessories plus communications services that usually cost more per month than a phone purchased on an installment plan. For Apple, for carriers around the world and for all the companies that piggyback their activities on the lively market, the iPhone universe represents a very big business.
Apple is likely to ship an additional hundred million units during 2017, even before it begins delivering what market watchers call the iPhone 8 generation of products, successors to today’s phones. That’s another IBM’s worth of revenue. In 2015, IBM reported net income of just over $13 billion. For the fiscal year ending in September 2015, Apple reported revenue in excess of $233 billion and more than $53 billion in net income; Apple did not say how much income was directly attributable to the iPhone.
The iPhone may be the best-selling model but Apple is not the best-selling brand of smartphones. The leader, with many phone models, is Samsung, which sells phones at not quite double the rate that Apple does. On Apple’s heels come a number of Chinese rivals, led by Huawei, followed by a long list of other manufacturers, mainly from Korea and Taiwan. What most of the contenders have in common is their use of Google’s Android operating system.
Both the bigtime phone operating systems, iOS and Android, provide interconnected bundles for web searching, phone and app searching, navigation, email, messaging, contact management, telephony, web browsing, and cloud storage. Phone users don’t always have to use the OS-related apps for all these functions, but in many cases the core apps do the best job in the simplest way. Still, there are lots of apps that are quite popular. For navigation, Waze comes to mind, for email there is Gmail and the open source K9 client, for messaging there are quite a few popular apps, for telephony there are outstanding offerings like Skype, several web browsers are quite popular, and when it comes to cloud storage Dropbox and its ilk provide excellent alternatives to the cloud storage services offered by Apple and Google.
The importance of these key apps is that they are gateways to the outside world, gateways that often enable the monetization of glances and clicks. The value of these gateways, particularly when it comes to searching, is so huge that they enable players like Google to treat the development and support of Android as just one of many costs of maintaining a lively business.
If and when Yahoo is finally sold, probably to Verizon, we will be able to see whether the combination of a common carrier and a gateway is greater than the sum of its parts. Google, dabbling in the carrier game, thinks this might be the case. Apple, still not a carrier (but that could change at any time), would certainly invent some new ways to operate if it decided to add carrier services to its stew of mobile device offerings.
And then there is Amazon, another giant that has some powerful services. Amazon has tried and failed in phones, but succeeded with its reader and media tablets. Amazon is significantly different from the other companies offering cloud-based services. Google sells the persuasive potential of its access, while Amazon directly turns that click or glance into an actual purchase, including the speedy delivery of goods. Amazon seems to have learned that its culture is not fit to compete with Apple in the communications business.
Like Achilles chasing the tortoise, Apple’s many and diverse rivals seem to be able to get to a position Apple occupied in the past only to discover Apple had moved on, remaining just out of reach. | <urn:uuid:7193cf65-0b9a-4c1e-ad1b-77ae2d851fba> | CC-MAIN-2017-04 | https://www.itjungle.com/2016/10/24/tfh102416-story04/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958658 | 1,783 | 2.546875 | 3 |
Jordan Journal of Mechanical and Industrial Engineering | Year: 2011
Emissions of CO and CO2 are understood to be the main cause of global warming, melting of glaciers, heavy rainfall in some areas resulting in catastrophic floods, and severe droughts in others. Introduction of national quotas is a political solution to limit carbon emissions; however, it cannot provide answers to the complex problem of climatic change. A permanent solution would require combustion-free technologies for converting the chemical energy of fuels directly into electricity. In this respect, devices such as fuel cells are highly efficient direct energy conversion devices which have the true potential to reduce carbon emissions. This paper describes a conceptual hybrid power plant comprising a solid oxide fuel cell (SOFC) and a closed cycle gas turbine. A simple analysis of the plant has been carried out to demonstrate that significant gains can be made in reducing carbon emissions, increasing energy utilisation efficiency and minimising the impact of thermal loading on the environment. © 2011 Jordan Journal of Mechanical and Industrial Engineering.
The US Department of Energy’s (DOE’s) Advanced Research Projects Agency - Energy (ARPA-E) will award approximately $1.9 million to a project to develop a high-efficiency engine system that integrates a compact micro-hybrid configuration of a supercharger with an electric waste heat recovery system and employs high rates of recirculated exhaust gases. When combined with a sophisticated control strategy, this approach provides a solution for the suboptimal engine breathing that is typical of transient engine operation. The performance is projected to match that of a naturally aspirated engine, with a 20% increase in fuel efficiency compared to a turbocharged downsized engine, at a cost that is half that of a mild-hybrid system.

The project, Split Micro-hybrid Boosting Enabling Highly Diluted Combustion, is headed by Anna G. Stefanopoulou, a professor of mechanical engineering at the University of Michigan, ASME Fellow, and director of the U-M Automotive Research Center.

“Modern engines are becoming smaller and smaller with high levels of dilution for efficiency. But the lack of extra muscle makes them slower to respond than their bigger counterparts. If we improve the responsiveness of small engines, then we can push for more efficient cars at low cost.”

To get small and diluted combustion engines to perform more like their larger cousins, Stefanopoulou plans to help them breathe faster. An engine’s response depends on its ability to take in fresh air and convert it to power. Her method will augment traditional turbocharging to provide frugal yet instant air flow control while utilizing stored energy from regenerative braking and exhaust energy in a small 48V battery shared with start-stop functionality.
The award was one of 41 announced by US Energy Secretary Ernest Moniz under ARPA-E’s OPEN 2015 program, for a total of $125 million in awards. Open solicitations, also issued in 2009 and 2012, serve as an open call to scientists and engineers for transformational technologies across the entire scope of ARPA-E’s energy mission.
Dr. John Carson, president of Jenike & Johanson Inc, an engineering consulting firm specializing in the storage, flow and processing of powder and bulk solids, has been awarded the American Institute of Chemical Engineers 2015 Particle Technology Forum award. The award recognizes a forum member’s lifetime outstanding scientific/technical contributions to the field of particle technology, as well as leadership in promoting scholarship, research, development, or education in this field.

‘John Carson has provided outstanding leadership to the bulk solids handling community for many years,’ said Timothy A. Bell, engineering fellow, DuPont Particle Technology Group. ‘Recognition of his contributions to technology, teaching, mentoring, and ASTM standards was long overdue.’

Dr. Carson is an author of more than 140 articles on various topics dealing with solids flow, including bin and feeder design, flow of fine powders, design of purge vessels, and structural failures of silos; he also lectures extensively on these and related topics. Besides being a founding member of AIChE’s Powder Technology Forum, Dr. Carson belongs to ASME, ASCE, and ASTM International, where he is chair of subcommittee D18.24, ‘Characterization and Handling of Powders and Bulk Solids.’

This story is reprinted from material from Jenike & Johanson Inc, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
Nikhil Koratkar, the John A. Clark and Edward T. Crossan Professor of Engineering in the Department of Mechanical, Aerospace and Nuclear Engineering at Rensselaer Polytechnic Institute, has been named a fellow of the American Society of Mechanical Engineers (ASME). The organization recognized Koratkar for his “exceptional achievement in the science and technology of one-dimensional (carbon nanotubes) and two-dimensional (graphene) nano-materials, leading to important breakthroughs in nanotechnology, energy and sustainability.”

ASME is devoted to using engineering to improve the quality of life worldwide. Its members provide expertise to meet diverse global challenges and help shape government policy. The ASME Fellow Grade, which recognizes exceptional engineering achievements and contributions to the engineering profession, is bestowed on less than three percent of over 125,000 members.

“Being elected a fellow of ASME is a wonderful recognition of Professor Koratkar’s outstanding research which represents a creative blend of fundamentals of advanced materials with high impact applications,” said Shekhar Garde, dean of the School of Engineering. “Nikhil continues to be a leader in his field, and differentiates himself from his peers by his unconventional thinking and extraordinary intuition. His work is harnessing modern micro and nanoscale materials science for important applications in energy and sustainability. We congratulate him on this special honor.”

Koratkar’s research is positioned at the intersections of nanotechnology, energy, and sustainability. His research focuses on the synthesis, characterization, and application of nanoscale materials, such as graphene, phosphorene, carbon nanotubes, transition metal dichalcogenides, as well as metal and silicon nanostructures.
He is studying the fundamental mechanical, electrical, thermal, magnetic, and optical properties of these one- and two-dimensional materials and developing a variety of composites, coatings, and device applications using these low-dimensional materials.
News Article | November 24, 2014
Polaris Partners, a Boston-based venture capital firm with heavy historical ties to MIT’s biotech research labs, has raised $450 million for a seventh fund. The new fund is slightly larger than the $400 million targeted in initial SEC filings earlier this year, likely buoyed by a string of recent portfolio company IPOs in the healthcare sector.

In its announcement, Polaris was sure to highlight entrepreneurs from both the software and life sciences halves of its business. That’s notable as venture firms wrestle anew with the question of whether to specialize or tackle multiple industries at once. Just a few months ago, Cambridge, MA-based Atlas Venture went the other direction, splitting its life science and tech teams. Another example of specialization is Lightstone Ventures, which was founded in 2012 by healthcare investors from Advanced Technology Ventures and Morgenthaler Ventures.

The time is certainly right for healthcare investors to count some returns: companies from the sector have made up the largest share of IPOs in the past year or so, according to research from IPO Scoop. Polaris has seen several life sciences companies go public in the past couple of years, including Acceleron Pharma, Bind Therapeutics, Genocea Biosciences, T2 Biosystems, and Cerulean Pharma. Notable investments on the tech side include real estate website Trulia, which was acquired by competitor Zillow, and Automattic, the company behind the widely used WordPress website publishing software.

Polaris’s last fund was $375 million, raised in 2010. That was a huge downsizing from the previous fund, a 2006-vintage vehicle that was worth $1 billion. Of course, plenty changed in the financial markets in that span. Since its last fund, Polaris has also seen strategic changes on the tech side: it launched a startup incubator called Dogpatch Labs, which expanded from San Francisco to Boston, New York, Silicon Valley, and Dublin. Today, only the Dublin site remains.
| <urn:uuid:54fc8a75-0748-4287-93c0-a94a1b2dc399> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/asme-762678/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93878 | 1,793 | 3.53125 | 4 |
ITIL clearly states that services, “…deliver value to customers by facilitating outcomes customers want to achieve…” However, organizations and people sometimes focus on outputs as opposed to outcomes, which sacrifices some of the value of the service. This leads to a question: what is the difference between an outcome and an output?
As ITIL indicates, an outcome is:
The results of carrying out an activity, following a process, or delivering an IT service etc. The term is used to refer to intended results as well as to actual results.
The official ITIL glossary does not specifically define the word output; however, Table 3.3 in the Service Strategy book provides some insight. Basically, an output is:
What a service provider delivers.
This is a subtle distinction; however, it’s an important one to consider. In the context of my landscaping work, some of the outputs are:
- A raised play area for my kid’s swing set
- Flagstone and decomposed granite installed on one side of my house
- Stone flowerbeds in my front and side yards
- Cleaning out and stabilizing the French drains in my yard
The outcome is then what I do with those outputs. For example, some of the facilitated outcomes are:
- My child plays daily in his outside play area and enjoys it; the outcome I want is for him to be happy, which in turn makes me happy
- Improved drainage on the west side of my house, resulting in fewer mosquitos, less standing water, and a pleasurable outdoor experience
- Improvement in overall appearance of the yard with flowerbeds, increasing the intrinsic value of my home
Service providers sometimes spend too much time focusing on outputs rather than outcomes, which often causes the customer to lose some of the value promised by the service. Some service providers involved in delivering IT services might be too heavily focused on the technology rather than how the customer uses the technology to facilitate outcomes. For example, a service provider might be focused on ensuring that a network is available for customers 24/7 without considering how customers use that network to facilitate outcomes. In this case, customers might have periods of peak volume where network performance suffers and periods of low volume for which the service is basically over-engineered. This is because the service provider is focused on the technical aspects rather than how those technical aspects are used to do something that the customer wants. The service provider hasn’t investigated patterns of business activity and how these are supported by the underlying technology.
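The peak-versus-idle pattern can be made concrete with a toy day of hourly load figures. The numbers below are invented for illustration; the point is that a pattern-of-business-activity analysis flags both the hours where demand exceeds provisioned capacity and the hours where the link sits nearly idle:

```python
# Hypothetical hourly utilization as a percentage of provisioned capacity.
hourly_load = [5, 4, 3, 3, 4, 10, 35, 70, 95, 110, 105, 90,
               85, 100, 115, 95, 80, 60, 40, 25, 15, 10, 8, 6]

# Hours where demand exceeds what was provisioned (performance suffers)...
overloaded = [hour for hour, load in enumerate(hourly_load) if load > 100]
# ...and hours where the service is effectively over-engineered.
idle = [hour for hour, load in enumerate(hourly_load) if load < 10]

print("hours over capacity:", overloaded)  # [9, 10, 14]
print("hours nearly idle:", idle)
```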
It is imperative for service providers to understand clearly the difference in the outputs that they produce, and how those outputs are used by customers to facilitate outcomes. This is a subtle, yet important distinction. | <urn:uuid:08564e11-6dec-44f7-9f7d-1db024952912> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/07/25/outcomes-and-outputs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00107-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955326 | 561 | 2.765625 | 3 |
The IT Infrastructure Library™ (ITIL®) describes the steps of the root cause analysis method called Kepner-Tregoe: Define and Describe the Problem, Establish possible causes, Test the most probable cause, and Verify the true cause.
The ITIL mentions Kepner-Tregoe, but does not give enough detail to use it to solve difficult problems.
Simple as it sounds, most technicians and technical leads do not actually follow Kepner-Tregoe. They rely instead upon preconceived ideas and often skip important steps. Then, without a plan and in desperation they fall back on the good old "when in doubt swap it out" technique.
Taking the time to use Kepner-Tregoe can result in dramatic improvements in troubleshooting, and deliver permanent fixes to prevent future problems as well.
Following I provide a template for using Kepner-Tregoe that problem managers and staff can use to accelerate root cause analysis.
The actual name is Kepner-Tregoe Problem Solving and Decision Making (PSDM). The part of PSDM that ITIL refers to is what Kepner-Tregoe calls Problem Analysis. Problem Analysis helps the practitioner make sound decisions. It provides a process to identify and sort all the issues surrounding a decision. As a troubleshooting tool, Problem Analysis helps prevent jumping to conclusions.
Immature troubleshooters use hunches, instinct, and intuition. These individual acts of heroism may seem brilliant, but they can also result in more problems since jumping to conclusions often compounds or expands problems instead of solving them.
Problem Analysis leverages the combined knowledge, experience, intuition, and judgment of a team, resulting in faster and better decisions. Using Problem Analysis to aid Problem Management not only brings the team together, but also helps identify root cause. Problem Analysis is a problem solving and decision making framework. Six Sigma, Lean Manufacturing and ITIL all describe Problem Analysis.
The Problem Analysis process divides decision-making into five steps:

1. Define the problem
2. Describe the problem in detail
3. Establish possible causes
4. Test the most probable cause
5. Verify the true cause
Problem Analysis begins with defining the problem. The problem management team cannot overlook this critical step. Failure to understand exactly what the issue is often results in wasting precious time. Many immature troubleshooters consider this step as wasted effort since they know what they are going to do – and this is the critical mistake made by many. Preconceived notions often result in increased outage duration and even outage expansion due to poor judgment.
Since problem management is inherently a team exercise, it is important to have a group understanding of the problem. Consider the following examples. A poor problem definition might appear as follows:
"The server crashed."
A better problem definition should include more information. A good model for clarifying statements of all sorts is the Goal Question Metric (GQM) method. It results in a statement with a clear Object, Purpose, Focus, Environment, and Viewpoint. This results in an unambiguous and easily understood statement. A clarified problem definition might be:
"The e-mail system crashed after the 3rd shift support engineer applied hot-fix XYZ to Exchange Server 123."
When developing a problem definition, always use the "5 Whys" technique: keep asking "why?" until you arrive at the point where there is no further explanation for the problem. Using 5 Whys with Kepner-Tregoe only accelerates the process.
With a clear problem definition, the next step is to describe the problem in detail. The following chart provides a nice template for this activity. You can do this using a presentation board, paper, or common office software. Table 1 describes the basic worksheet used in the process.
The worksheet describes the four aspects of any problem: what it is, where it occurs, when it occurred, and the extent to which it occurred. The IS column provides space to describe specifics about the problem -- what the problem IS. The COULD BE but IS NOT column provides space to list related but excluded specifics -- what the problem COULD BE but IS NOT. These two columns aid in eliminating "intuitive but incorrect" assumptions about the problem. With columns one and two completed, the third column provides space to detail the differences between the IS and COULD BE but IS NOT. These differences form the basis of the troubleshooting. The last column provides space to list any changes made that could account for the differences.
| | IS | COULD BE but IS NOT | DIFFERENCES | CHANGES |
|---|---|---|---|---|
| WHAT | System failure | Similar systems/situations not failed | ? | ? |
| WHERE | Failure location | Other locations that did not fail | ? | ? |
| WHEN | Failure time | Other times where failure did not occur | ? | ? |
| EXTENT | Other failed systems | Other systems without failure | ? | ? |
Anyone who has spent time troubleshooting knows to ask "what has changed since it worked?" and to start troubleshooting by checking for changes. The problem is that many changes can occur, and that complicates things. Problem Analysis can help here by describing what the problem is and what the problem could be, but is not. For example:
Problem: "The e-mail system crashed after the 3rd shift support engineer applied hot-fix XYZ to Exchange Server 123."
| | IS | COULD BE but IS NOT | DIFFERENCES | CHANGES |
|---|---|---|---|---|
| WHAT | Exchange Server 123 crashed upon application of hot-fix XYZ | Other Exchange Servers getting hot-fix XYZ | Different staff (3rd shift) applied this hot-fix | New patch procedure from vendor |
| WHERE | 3rd floor production room without vendor/contractor support | Anywhere else with vendor/contractor support | Normally done by vendor | New procedure, first time 3rd shift applies hot-fixes |
| WHEN | Last night, 1:35am | Any other time or location | None noted | |
| EXTENT | Any Exchange Server on 3rd floor | Other servers | | |
History (and best practice) says that the root cause of the problem is probably due to some recent change.
With the completed worksheet, some new possible solutions become apparent. From the example above, it becomes clear that the root cause is probably procedural: the vendor did not apply the hot-fix itself, but instead gave the company procedures for applying it.
With a short list of possible causes (recent changes evaluated and turned into a list), the next step is to think through each possible cause. The following aid can help in this process. Ask the question:
"If ____ is the root cause of this problem does it explain the problem IS and what the problem COULD BE but IS NOT?"
If a potential solution is the root cause, then it has to "map to" or "fit into" all the aspects of the Problem Analysis worksheet (figure 2). Use a worksheet like the one shown in figure 3 to help organize your thinking around the potential solutions.
| Potential root cause | True if: | Probable root cause? |
|---|---|---|
| Exchange Server 123 has something wrong with it | Only Exchange Server 123 has this problem | Maybe |
| Procedure incorrect | Same procedure crashes another server | Probably |
| Technician error | Problem did not always reoccur | Probably not |
The next step is to compare the possible root causes (Table 3) against the problem description (Table 2). Eliminate possible solutions that cannot explain the situation, and focus on the remaining items.
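This elimination step can be sketched in code. The sketch below is only an illustration, not part of the Kepner-Tregoe materials; the worksheet contents and candidate causes are hypothetical. A candidate survives only if it explains every dimension of the worksheet.

```python
# Illustrative sketch: eliminate candidate root causes that cannot
# explain every IS / COULD BE but IS NOT observation in the worksheet.
# All data below is hypothetical example content.

worksheet = [
    # (dimension, observation)
    ("WHAT",   "crashed when 3rd-shift staff applied hot-fix XYZ"),
    ("WHERE",  "only servers patched without vendor support"),
    ("EXTENT", "only Exchange Server 123 affected"),
]

# Each candidate maps a worksheet dimension to True/False:
# "does this cause explain the observation on that dimension?"
candidates = {
    "faulty server hardware":   {"WHAT": False, "WHERE": False, "EXTENT": True},
    "incorrect patch procedure": {"WHAT": True,  "WHERE": True,  "EXTENT": True},
    "technician error":          {"WHAT": True,  "WHERE": False, "EXTENT": True},
}

def surviving_causes(worksheet, candidates):
    """Keep only causes that explain every dimension of the worksheet."""
    dims = [dim for dim, _ in worksheet]
    return [name for name, explains in candidates.items()
            if all(explains.get(dim, False) for dim in dims)]

print(surviving_causes(worksheet, candidates))
# Only "incorrect patch procedure" explains all three observations.
```

The point of the structure, as in the paper worksheet, is that a cause failing any single IS/IS-NOT comparison is eliminated before anyone spends time "fixing" it.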
Before making any changes, verify that the proposed solution could be the root cause. Failure to verify the true cause invalidates the entire exercise and is no better than guessing. After verifying the true cause, you can propose the action required to repair the problem.
It is important here as well to think about how to prevent similar problems from occurring in the future. The Problem Manager should consider how the issue arose in the first place by asking some questions:
The goal is to try to eliminate future occurrences of the problem.
Kepner-Tregoe is a mature process with decades of proven capabilities. There are worksheets, training programs, and consulting firms all schooled in the process. You can take courses at many local colleges as well.
Kepner-Tregoe Problem Analysis was used by NASA to troubleshoot Apollo XIII – even though the technicians did not believe the results, they followed the process and saved the mission. The rest of the story, as they say, is history...
Even without a lot of time available, using Kepner-Tregoe Problem Analysis can result in the most efficient problem resolutions. Armed with tools like 5 Whys and Ishikawa diagramming, a Problem Manager can capture the combined experience and knowledge of a team. When used with Kepner-Tregoe Problem Analysis the result is amazing. | <urn:uuid:3d6f86a8-3235-459a-a11e-f52cff4c06e4> | CC-MAIN-2017-04 | http://www.itsmsolutions.com/newsletters/DITYvol6iss19.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909661 | 1,824 | 2.703125 | 3 |
Windows 7 hides certain files so that they cannot be seen when you are exploring the files on your computer. The files it hides are typically Windows 7 system files that, if tampered with, could cause problems with the proper operation of the computer. It is possible, though, for a user or a piece of software to make a file hidden by enabling the hidden attribute in a particular file or folder's properties. Because of this, it can be beneficial at times to be able to see any hidden files that may be on your computer. This tutorial will explain how to show all hidden files in Windows 7.
To enable the viewing of hidden and protected system files in Windows 7 please follow these steps:
You will now be at your desktop and Windows 7 will be configured to show all hidden files.
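Apart from the Folder Options route this tutorial describes, hidden status can also be checked programmatically. The sketch below is an illustration, not part of the tutorial: on Windows it reads the FILE_ATTRIBUTE_HIDDEN flag exposed through `os.stat()`, and on Unix-like systems it falls back to the leading-dot naming convention.

```python
import os
import stat

def is_hidden(path):
    """Return True if the file is hidden.

    On Windows, check the FILE_ATTRIBUTE_HIDDEN flag exposed through
    os.stat(); elsewhere, fall back to the leading-dot convention.
    """
    name = os.path.basename(os.path.abspath(path))
    st = os.stat(path)
    attrs = getattr(st, "st_file_attributes", 0)  # Windows-only stat field
    if attrs & getattr(stat, "FILE_ATTRIBUTE_HIDDEN", 0):
        return True
    return name.startswith(".")

# Example: list hidden entries in the current directory.
hidden = [p for p in os.listdir(".") if is_hidden(p)]
print(hidden)
```

Note that, as the tutorial explains, Windows additionally treats protected operating system files as a separate class of hidden files; this sketch only covers the hidden attribute itself.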
If you have any questions about this tutorial please feel free to post them in our Windows 7 forums.
Training for a government job was once a mundane procession of lectures, note taking and No. 2 pencils. Then came personal computers and PowerPoints.
Now, more agencies are turning to virtual reality to give their training programs an added boost of authenticity, to help employees game plan on-the-job scenarios and to save money by doing more training from a distance.
The General Services Administration’s Federal Acquisition Institute recently posted solicitation documents seeking “a dynamic and interactive [virtual reality training] environment with novel artificial intelligence architecture, such as interactive challenges.”
The training program should be designed as a game with rewards for correct answers and be accessible remotely by computer, smartphone and tablet for up to 200 concurrent users, the document said.
This week the Homeland Security Department’s Office of Intelligence and Analysis posted a solicitation seeking a virtual world module for its training programs. The DHS program is aimed at augmenting its existing training. The tasks it describes mirror traditional training students might do in person or over the Internet, such as attending lectures and working on group projects, except in a virtual 3D environment.
Perhaps the most ambitious virtual reality training program in government is being implemented by the Centers for Disease Control and Prevention. That program for non-clinicians who help CDC field teams manage safety, stress management and “psychological first aid” aims to so closely mimic the experience of visiting a disease-ravaged African village that “trainees may temporarily forget it’s simulated.” | <urn:uuid:26696782-9a29-47cb-b909-17f5fc40b0ab> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2013/09/government-job-training-mundane-matrix/69972/?oref=dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00491-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927993 | 307 | 2.65625 | 3 |
What Is A DMZ and How to Configure DMZ Host
A DMZ (demilitarized zone) is a conceptual network design in which publicly accessible servers are placed on a separate, isolated segment. The main purpose of the DMZ is to make those servers reachable from the public network while ensuring that they cannot contact the internal network segment in any event. A firewall is the component typically used to implement a DMZ.
The firewall is responsible for enforcing the security policies: it protects the local network while maintaining accessibility to the DMZ. Because DMZ implementation is non-trivial, it is best not to attempt one unless you have strong familiarity with networking. A DMZ is not required in every deployment; it is generally encouraged by security-conscious network administrators.
Subnetting is the practice of logically dividing a network into two or more smaller networks, called subnets. Computers belonging to the same subnet share an identical, common group of leading bits in their IP addresses. In effect, subnetting divides an IP address into two fields: one field identifies the network, and the remaining field is the host identifier.
The routing prefix of a subnet is written in CIDR notation: the first address of the network, followed by a slash (/) character, ending with the bit length of the prefix. For example, if 24 bits of an address are allocated to the network prefix, the remaining 8 bits are reserved for host addressing.
An IP network can also be characterized by a subnet mask. When the subnet mask is combined with an IP address using a logical AND, it yields the routing prefix. The subnet mask is expressed in dot-decimal notation, just like an address.
The benefits of subnetting depend on the existing network and vary with each deployment scenario. Subnets are generally arranged in a hierarchical architecture that partitions the organization's network into smaller domains, with addressing that follows a tree-like routing structure.
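As a hedged illustration of these ideas, Python's standard `ipaddress` module can compute the routing prefix, subnet mask, and address capacity for a CIDR block. The 192.0.2.0/24 range below is just an example (it is reserved for documentation):

```python
import ipaddress

# A /24 network: 24 bits of network prefix, 8 bits left for hosts.
net = ipaddress.ip_network("192.0.2.0/24")

print(net.network_address)   # 192.0.2.0     (the routing prefix)
print(net.netmask)           # 255.255.255.0 (dot-decimal subnet mask)
print(net.num_addresses)     # 256 addresses in the block

# Applying the mask to an address with a logical AND yields the prefix:
host = ipaddress.ip_address("192.0.2.57")
prefix = ipaddress.ip_address(int(host) & int(net.netmask))
print(prefix)                # 192.0.2.0

# Subnetting: split the /24 into two /25 subnets.
print([str(s) for s in net.subnets(prefixlen_diff=1)])
# ['192.0.2.0/25', '192.0.2.128/25']
```

The AND operation shown is exactly the "logical application" of the mask described above.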
A VLAN (virtual LAN) is a group of devices, configured on one or more LANs, that communicate as if they were attached to the same wire even though they are located on a number of different LAN segments. Because VLANs are defined logically instead of physically, they are extremely flexible. A VLAN is a broadcast domain created at layer 2 of the network. A broadcast domain is the set of devices that receive broadcast frames originating from any device in the set. Broadcast frames are bounded by routers, which do not forward them. Layer 2 switches create broadcast domains based on the switch configuration; since these switches are multiport bridges, they can create multiple broadcast domains.
A switch can contain one or more virtual bridges. Each virtual bridge created in the switch defines a new broadcast domain, i.e., a VLAN. Traffic cannot pass directly from one VLAN to another within a switch or between two switches. To interconnect two VLANs, use a router or a layer 3 switch, such as a Catalyst-series switch.
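As a hedged sketch of how VLAN membership is carried between switches on a trunk link, the code below packs an IEEE 802.1Q tag: a 16-bit TPID of 0x8100 followed by 3-bit priority, 1-bit DEI, and 12-bit VLAN ID fields. The VLAN ID 42 is an arbitrary example:

```python
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4-byte IEEE 802.1Q tag inserted into an Ethernet frame.

    Layout: 16-bit TPID (0x8100), then 3-bit priority, 1-bit DEI,
    and a 12-bit VLAN ID.
    """
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_tag(42)
print(tag.hex())  # '8100002a' -> TPID 0x8100, VLAN ID 42
```

The 12-bit VLAN ID field is why a single trunk can carry up to 4094 usable VLANs.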
NAT (Network Address Translation) is the process in which a network device, usually a firewall or router, maps the computers inside a private network to a public address. The main purpose of NAT is to limit the number of public IP addresses an organization needs. In its most common form, a private network uses addresses from a private range. This addressing scheme works well for computers that only access resources inside the network, and routers inside the private network route traffic between private addresses without any difficulty. An internet-bound request is more complex, but the process happens so quickly that the user never notices. A router inside the network recognizes the request and sends it to the firewall. The firewall sees the request coming from the computer's internal IP address, makes the same request to the internet using its own public address, and then relays the response from the internet resource back to the computer inside the private network. In this way, NAT allows workstations with private IP addresses to access the internet.
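A simplified sketch of that translation step follows. The addresses and ports are invented for illustration (203.0.113.5 is a documentation-range address), and real NAT implementations track far more state, but the core idea is a two-way mapping table:

```python
# Toy port-address-translation table: maps (private_ip, private_port)
# to a unique public-side port on the firewall's single public IP.
PUBLIC_IP = "203.0.113.5"  # example address from the documentation range

class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.outbound = {}   # (priv_ip, priv_port) -> public_port
        self.inbound = {}    # public_port -> (priv_ip, priv_port)

    def translate_out(self, priv_ip, priv_port):
        """Rewrite an outgoing packet's source to the public address."""
        key = (priv_ip, priv_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port):
        """Route a reply from the internet back to the private host."""
        return self.inbound[public_port]

nat = NatTable(PUBLIC_IP)
src = nat.translate_out("192.168.1.10", 51515)
print(src)                       # ('203.0.113.5', 40000)
print(nat.translate_in(src[1]))  # ('192.168.1.10', 51515)
```

The outside world only ever sees the firewall's public address, which is exactly why NAT conserves public IP addresses.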
Remote access gives people the ability to use a business computer even when they are not physically connected to it. It gives them full access to the company's resources: e-mail, documents, and other systems. Remote access lets staff log on to the company's customer database from home, and a remote access server can be set up so that clients can download files from it. Remote access also allows employees of the business to send and receive mail from any computer.

Remote access gives workers the prime benefit of operating business activities effectively while away from the business and office premises. For anyone conducting business from home, a network server controls the permission levels and allows remote access from any corner of the country or the world.

In fact, with the help of remote access, online business strategies are reaching new heights and becoming popular, particularly for smaller businesses. The most useful technique for providing remote access to your company is to set up a Virtual Private Network (VPN), which provides a secure channel between the official network and the employee's machine across the internet. All information shared between the office and the employee is encrypted so that nobody else can interfere with the connection, while the employee gets full access to company information, much as if working in the office environment.
Telephony denotes the technology that enables long-distance voice communication. The term derives from Greek roots meaning, roughly, "speaking from afar." Its scope has broadened with the advent of new communication technologies: today the term covers phone communication, internet calling, mobile communication, faxing, voice mail, and video conferencing. The idea of telephony grew out of plain old telephone service (POTS). Voice calls may now be carried over the traditional telephone network or over data networks such as LANs and the internet (internet telephony).
Network Access Control (NAC) is an approach to computer security that attempts to unify endpoint security. NAC is a computer networking solution that uses a set of protocols to define and implement a policy describing how devices may securely access network nodes. NAC can integrate an automatic remediation process, allowing network infrastructure such as routers, switches, and firewalls to work together with back-office servers and user computing equipment to ensure the information system operates securely. NAC aims to enforce network access policies, giving administrators better control over which users and devices can join a network.

In general, NAC is a process that permits a computer to connect to a network only when it complies with a business-defined policy. NAC represents an emerging category of security products, and its precise definition is still debated.
In computing, virtualization is the creation of a virtual version of a device or resource, such as a server, storage device, or network, with one or more execution environments sharing the underlying resource. For example, partitioning one hard drive into two creates two separate virtual drives. Virtualization has become a buzzword and is now associated with numerous computing technologies.
Cloud computing is the delivery of computing as a service rather than as a product. It allows resources, software, and information to be shared from one computer to another over a network, typically the internet. Cloud computing depends on shared resources and converged infrastructure, and it focuses on maximizing the effectiveness of those shared resources.
Platform as a Service:
In this model, the cloud delivers a computing platform that typically includes an operating system, database, and programming-language execution environment. Platform as a Service lets developers run software solutions on a cloud platform without the cost and complexity of managing the underlying hardware and software layers.
Software as a Service:
In this business model, the provider gives users access to application software and databases. The cloud provider manages the platform and infrastructure that run the application. Cloud applications differ from other applications chiefly in their scalability. Proponents of Software as a Service argue that it cuts IT costs and reduces operational costs by outsourcing hardware and software maintenance and support to the cloud provider.
Unified Communications as a Service:
Unified Communications as a Service delivers multiple communication platforms over the network as a single package from a service provider. The services reach different kinds of endpoints, such as desktop computers and mobile devices, and include telephony, unified messaging, video conferencing, and mobile extension.
A private cloud is cloud infrastructure operated solely for a single organization; the service may be hosted internally or externally. Undertaking a private cloud project requires the organization to reevaluate decisions about its existing resources.
A public cloud is a service rendered over a network that is open for public use, generally offered on a pay-per-use model.
A hybrid cloud is a composition of two or more clouds that remain distinct entities but are bound together, offering the benefits of multiple deployment models. The term also covers the ability to connect colocated or dedicated servers with cloud resources.
A community cloud shares infrastructure between several organizations from a specific community.
Layered security / defense in depth:
Defense in depth layers multiple security components so that the cloud computing service is delivered with both perimeter and internal security controls. The term comes from a military strategy that uses multiple layers of defense to resist attack rather than relying on a single rigid line.
Layered defense is the practice of combining multiple mitigating security controls to protect data.
Accounting and Financial Systems
Learn the basics of accounting.
A basic understanding of accounting is necessary for success in many positions. In this course, you'll receive a broad overview of the concepts central to GAAP accounting. You'll learn about the difference between cash basis and accrual basis accounting and how to read the most common types of financial statements. Finally, you'll learn the importance of financial ratios and how to apply them.
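As a generic illustration of applying a financial ratio (not course material), the current ratio compares current assets to current liabilities as a quick liquidity check:

```python
# Generic illustration of one common financial ratio.
def current_ratio(current_assets, current_liabilities):
    """Liquidity measure: current assets available per dollar of
    near-term obligations."""
    return current_assets / current_liabilities

# Hypothetical balance-sheet figures:
print(current_ratio(150_000, 100_000))  # 1.5
```

A result of 1.5 means the hypothetical firm holds $1.50 of current assets for every $1 of current liabilities.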
Virtual short courses do not include materials or headsets. | <urn:uuid:f447e287-16ed-4595-ac50-e981ecccce11> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/120429/accounting-and-financial-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00309-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939403 | 99 | 2.671875 | 3 |
Definition: The beginning characters of a string. More formally a string v∈Σ* is a prefix of a string u∈Σ* if u=vu' for some string u'∈Σ*.
See also suffix.
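The definition translates directly into code. The sketch below is illustrative; Python's built-in `str.startswith` performs the same check:

```python
def is_prefix(v, u):
    """v is a prefix of u iff u = v + u' for some (possibly empty) u'."""
    return len(v) <= len(u) and u[:len(v)] == v

assert is_prefix("", "abc")      # the empty string is a prefix of anything
assert is_prefix("ab", "abc")
assert is_prefix("abc", "abc")   # every string is a prefix of itself
assert not is_prefix("bc", "abc")

# Equivalent built-in:
assert "abc".startswith("ab")
```

Note the two boundary cases the formal definition allows: u' may be empty (so u is a prefix of itself), and v may be empty.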
Note: From Algorithms and Theory of Computation Handbook, pages 11-26 and 12-21, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "prefix", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/prefix.html | <urn:uuid:31579c4a-70b8-4a11-983a-a9d1c50e4521> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/prefix.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.750325 | 225 | 3.078125 | 3 |
Written by Casey Coleman
The federal government spends more than $75 billion annually on information technology. Due to its size and critical mission requirements, feds have often lagged behind the commercial sector in embracing new technologies. With cloud computing, however, the federal government is leading the way.
From the earliest days of his Administration, President Obama along with his technology team focused on the importance of cloud computing as a key innovation that could reduce energy consumption and enhance agility within U.S. federal agencies. Federal Chief Information Officer Vivek Kundra laid out a vision for cloud computing that addresses the “Five Pillars” of the government’s IT agenda: citizen engagement, reducing the cost of government operations, driving innovation, transparency, and cybersecurity.
Many other governments around the world are getting on the cloud now. They are interested in the United States’ lessons learned and future vision. In a short time, federal agencies collaboratively created a formal definition for cloud computing; launched the first government cloud storefront, Apps.gov; created a federal cloud computing program and governance structure; and began addressing critical success factors such as security, portability and interoperability, and education and awareness.
Cloud computing is truly a game-changing development. A few examples:
- The cloud’s scalability implies that government agencies no longer need worry about reaching the limits of their digital capacity. By transitioning to the cloud, agencies tap into an infrastructure that is as flexible as their needs are varied.
- Cloud solutions encourage cross-agency collaboration and help government perform better for the American people.
- The federal government’s commitment to cloud computing signals to private industry that smart technology solutions are the future of government.
The General Services Administration experienced many of these benefits by moving the federal portal, USA.gov, to the cloud. GSA wanted to reduce costs and add scalability and flexibility to USA.gov in order to meet emerging citizen needs. Using a traditional IT procurement, it would likely have required six months to upgrade USA.gov to keep up with growing traffic, at a cost of approximately $2.47 million per year. In a cloud environment, GSA is able to perform upgrades in one day at an annual cost of $806,000. The transition significantly lowers GSA’s costs and improves the scalability of USA.gov, saving taxpayers $1.7 million annually.
Of course, the private sector is a vital partner helping the federal government realize the benefits of cloud computing. Companies such as those participating in the FedScoop Cloud Computing Shoot Out will play a critical role in advancing innovation and overcoming challenges. Cloud computing has a bright future, and today’s event is one more step in that direction. | <urn:uuid:203c0821-9c22-4ad0-b7a9-4276a900ed96> | CC-MAIN-2017-04 | https://www.fedscoop.com/cloud-computing-comes-of-age/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948974 | 557 | 2.71875 | 3 |
There are many classifications for forensic data collection, but naming conventions remain largely de facto, a Wild West of terminology. This is especially true in the embedded systems area.
When I refer to embedded systems, I think of specialized devices, sometimes in a larger system or machine. Embedded systems usually have at least one microprocessor with dedicated program, and limited options to extract the information in a sound forensic way. Cell phones, smart phones, tablets, DVD and BluRay players, advanced digital watches, TVs, cars, elevators, and even washers & dryers can have embedded systems.
I would like to suggest a more structured way to represent data collection methods for such systems. As this is a work in progress, I look forward to constructive criticisms that can benefit the forensics community.
The classification is broken down into six methodologies.
- Manual acquisition
- Logical acquisition
- Pseudo-physical acquisition
- Support-port acquisition
- Circuit read acquisition
- Gate read acquisition
Each methodology has its shortcomings and benefits. I categorized these into four areas and ranked them on a scale of 1 to 10, with 1 meaning "least" and 10 "most".
Destructiveness is the impact on the target device, and how likely it is that everything remains fully functional after data collection.
Technical & Training is the required understanding and education in the area required to attempt the methodology.
Cost is simply the expenses involved with the resources required, such as equipment, tools and consumables, to attempt the methodology.
Forensically Sound, the final measurement, is how likely it is that the original data is modified, knowingly or not.
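The scoring scheme can be sketched as a small data structure. The numeric scores below are hypothetical placeholders for illustration only, not the article's actual rankings:

```python
# Hypothetical illustration of the 1-10 scoring scheme across the four
# criteria; the numbers are placeholders, not real rankings.
CRITERIA = ("destructiveness", "technical_training", "cost",
            "forensically_sound")

methods = {
    "manual":  {"destructiveness": 1, "technical_training": 1,
                "cost": 1, "forensically_sound": 3},
    "logical": {"destructiveness": 2, "technical_training": 3,
                "cost": 3, "forensically_sound": 4},
}

def validate(methods):
    """Each method must score every criterion on the 1-10 scale."""
    for name, scores in methods.items():
        for crit in CRITERIA:
            if not 1 <= scores.get(crit, 0) <= 10:
                raise ValueError(f"{name}: bad score for {crit}")
    return True

print(validate(methods))  # True
```

Structuring the classification this way makes it easy to compare methodologies criterion by criterion rather than arguing about labels.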
This is the oldest methodology and the one requiring the least training and equipment. The examiner takes advantage of the device's display and user interface, using a camera to record as much relevant information as possible. The target device may record all display and user-interface activity, and update system data as normal housekeeping.
Example: Secure cell phone in holding bracket, then using the keypad scroll through all relevant items while taking pictures of the cell phone with an external camera. A commercial product used for this kind of acquisition is Paraben Project-A-Phone.
Logical acquisition is a method in which the device's operating system (OS) is in full control of what can be accessed, and provides the means to transfer the data. The examiner connects the device to a forensic workstation and, using various software packages, communicates with the OS on the target device. The OS may record the connection and communication on the target device, and update system data as normal housekeeping.
Example: Connect the cell phone’s external port to a USB port using a proprietary cable. Run software that initiates serial communication with the device and requests information using proprietary, device-specific commands. Software such as BitPim would be used for this type of acquisition.
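To make the logical-acquisition flow concrete, the sketch below parses phonebook records in the `+CPBR:` response format that many AT-command-capable handsets return. This is an illustrative assumption rather than a description of BitPim’s internals: the serial connection is omitted, and real tools often rely on vendor-specific commands instead of the standard AT set.

```python
import re

# Parse one "+CPBR:" phonebook line, as returned by the standard
# AT+CPBR read command on many handsets (vendor formats vary).
CPBR_LINE = re.compile(
    r'\+CPBR:\s*(?P<index>\d+),"(?P<number>[^"]*)",(?P<type>\d+),"(?P<name>[^"]*)"'
)

def parse_cpbr(line):
    """Return a dict describing one phonebook entry, or None if unparseable."""
    m = CPBR_LINE.match(line.strip())
    if not m:
        return None
    entry = m.groupdict()
    entry["index"] = int(entry["index"])
    entry["type"] = int(entry["type"])  # 145 = international, 129 = national
    return entry

# Example response lines a logical-acquisition tool might read over serial:
response = [
    '+CPBR: 1,"+15551234567",145,"Alice"',
    '+CPBR: 2,"5559876",129,"Bob"',
    "OK",
]
entries = [e for e in (parse_cpbr(line) for line in response) if e]
```

Note that everything recovered this way is mediated by the OS: only records the phone chooses to report ever reach the workstation.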
Pseudo-physical collection involves forcing program code onto the target device in some way that allows access to most data areas. The code may simply provide access, relying on the target device’s OS for communication, or it may replace the OS entirely with collection-only functionality. The examiner then connects the device to a forensic workstation and, using various software packages, communicates with the injected code or with the OS on the target device. The OS may record the connection and the communication, and update system data as part of normal housekeeping. The forced-on code may also alter information on the target device.
Although almost all vendors tout this process as physical acquisition, it is not, in my opinion, truly physical in the sense most forensic examiners expect. Most examiners think “bit-by-bit” when they hear “physical”. In my experience that is not the case here, as unallocated and slack areas of the storage are not collected.
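The distinction matters because file systems allocate storage in fixed-size clusters, and the tail of a file’s last cluster (the slack) can hold remnants of earlier data that a logical or pseudo-physical pull never sees. A minimal sketch of the arithmetic, with an assumed example cluster size:

```python
def slack_bytes(file_size, cluster_size=4096):
    """Bytes of file slack left unused in the final allocated cluster."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

# A 10,000-byte file on a 4 KiB-cluster file system occupies three
# clusters (12,288 bytes); the final 2,288 bytes are slack that only a
# true bit-by-bit acquisition would capture.
slack = slack_bytes(10000)
```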
Example: The target device is connected to the forensic workstation with a USB-to-proprietary serial cable and placed in Device Firmware Update (DFU) mode. The software on the forensic workstation may then load special program code onto the target device. That code allows the workstation software to access most information on the device. Sometimes the target device’s DFU-mode software itself provides the communication features.
Most mass-produced electronic devices have ports for testing the electronics or for updating firmware on various onboard integrated circuits. These “ports” can be implemented as user-accessible connectors such as USB, RS-232, or pin-and-socket (Molex) connectors; as non-user-accessible connections such as pin headers or insulation-displacement connectors; or as test connection pads on the printed circuit assembly (PCA).
To access these ports, almost all small electronic devices require disassembly, often voiding the manufacturer’s warranty. Once the device is disassembled, the port must be identified on the PCA and the specific communication protocol determined. Communication is then established with the specific storage circuitry, data is requested, and the returned data is stored for further analysis.
The most often used protocols are Boundary Scan (often referred to by the standardizing group name Joint Test Action Group [JTAG]), Inter-Integrated Circuit (I2C), Serial Peripheral Interface (SPI), Enhanced Synchronous Serial Interface (ESSI), Controller Area Network (CAN), Local Interconnect Network (LIN), and Background Debug Mode (BDM).
Example: The target device is disassembled, and the test access points (TAPs) are located. Leads are soldered or clamped onto the TAPs and connected to a protocol-specific universal asynchronous receiver/transmitter (UART), which in turn is connected to a USB port on the forensic workstation. Specialized software using circuit-specific commands instructs the on-board device to read out data from the circuit. The returned data is stored on the forensic workstation; nothing is written to the target device besides the temporary instructions.
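To give a flavor of what JTAG tooling actually does at the TAP, here is a small simulation of the IEEE 1149.1 TAP controller state machine, which adapters drive one bit at a time on the TMS line. This is an illustrative model only; real acquisitions involve hardware adapters, chip-specific instruction registers, and timing the model ignores.

```python
# IEEE 1149.1 TAP controller: maps each state to its next state for
# (TMS=0, TMS=1) on a rising TCK edge.
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR", "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR", "Exit1-DR"),
    "Shift-DR":         ("Shift-DR", "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR", "Update-DR"),
    "Pause-DR":         ("Pause-DR", "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR", "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR", "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR", "Exit1-IR"),
    "Shift-IR":         ("Shift-IR", "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR", "Update-IR"),
    "Pause-IR":         ("Pause-IR", "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR", "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def clock_tms(state, tms_bits):
    """Advance the TAP controller through a sequence of TMS bits."""
    for bit in tms_bits:
        state = TAP[state][bit]
    return state

# From reset, TMS = 0,1,0,0 reaches Shift-DR, where data is shifted out
# of the selected register one bit per TCK clock.
state = clock_tms("Test-Logic-Reset", [0, 1, 0, 0])
# Five consecutive TMS=1 clocks return any state to Test-Logic-Reset.
reset = clock_tms(state, [1] * 5)
```

In practice, “download data from the circuit” means walking this state machine repeatedly to load an instruction and then shift the data register’s contents out, bit by bit.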
For this acquisition methodology, integrated circuits (ICs) such as memory chips are desoldered from the PCA, and data is extracted using chip-specific pin-outs and communication. This is often referred to as the “chip-off” process.
There are several critical points with this method, including the possibility of permanently damaging the IC during desoldering and the difficulty of dealing with stacked ICs (3D packaging) or monolithic configurations.
In this method, the IC is removed and either socketed or soldered to leads, and specific signals are sent, using specialized software, to extract the data from the chip.
Example: The target device is disassembled, and the data-storage ICs are located. Pin-out information and timing details for communicating with each IC are researched. The target device is preheated, and the specific ICs are desoldered. The ICs are either placed in temporary sockets or have leads soldered to the appropriate pins. The socket or leads are connected to a communication device that speaks the proper protocol and signaling levels, such as Transistor-Transistor Logic (TTL), which in turn is connected to the forensic workstation.
Specialized software using IC-specific commands instructs the interface to read the data out of the IC. The returned data is stored on the forensic workstation; no information is written to the target IC.
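Because forensic soundness hinges on demonstrating that the extracted data was not altered, a common practice in chip-off work, sketched below with placeholder data standing in for real dumps, is to hash the image immediately after a read and compare it against a second, independent read of the same IC.

```python
import hashlib

def image_digest(data, algorithm="sha256"):
    """Cryptographic digest used to seal an acquired memory image."""
    return hashlib.new(algorithm, data).hexdigest()

# Placeholder stand-ins for two successive reads of the same chip; a real
# workflow would hash the raw dump files produced by the reader hardware.
first_read = bytes(range(256)) * 16
second_read = bytes(range(256)) * 16

# Matching digests support the claim that the read process is repeatable
# and that the stored image matches what the IC contained.
verified = image_digest(first_read) == image_digest(second_read)
```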
This methodology requires both equipment and chemicals that are usually not found in most digital forensics labs. The process begins with removal of the target IC, much as in the circuit-read methodology. Instead of communicating with the IC through electronic signals, however, the chip is literally sliced into multiple layers to expose each original semiconductor lithographic layer, and the information is reverse-engineered from those layers.
The layers are measured in nanometers (1 × 10^-9 m, a billionth of a meter). Each layer is removed, photographed, and then reverse-engineered from the photograph. The process involves as much guesswork as it does a very deep understanding of IC internals and IC lithography, and it works best with planarized chips. The steps of the process are depotting (package removal), delayering, imaging, annotation, schematic reconstruction, organization, and finally analysis.
Example: The target device is disassembled, and the data-storage ICs are located. Pin-out information for the IC is researched. The target device is preheated, and the specific ICs are desoldered. The IC is bathed in chemicals to remove the potting, or encasing; at this point, the only remaining items are the leads attached to a piece of silicon die. The leads are noted and photographed. Using lapping (or another very precise slicing or abrasion method), each layer of the die is removed and photographed. The layers are stacked in software and reverse-engineered using the shape, color density, and interconnection of the layers. This requires identifying, among other things, the N-type and P-type silicon, the gates, power, and ground.
|                      | Manual | Logical | Pseudo-Physical | Support-port Read | Direct Circuit Read | Gate Read |
|----------------------|--------|---------|-----------------|-------------------|---------------------|-----------|
| Technical & Training | 1      | 2       | 3               | 5                 | 6                   | 9         |
Rankings are on a scale of 1 to 10, with 1 for “least” and 10 for “most”. For example, the most destructive methodology would be a 10; the least costly would be a 1.
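The Technical & Training row of the comparison can also be expressed as data, which makes ordering the methodologies by a criterion trivial; the numbers below are this article’s own rankings, and the same structure would extend to the other three criteria.

```python
# Technical & Training rankings from the comparison table (1 = least, 10 = most).
technical_training = {
    "Manual": 1,
    "Logical": 2,
    "Pseudo-Physical": 3,
    "Support-port Read": 5,
    "Direct Circuit Read": 6,
    "Gate Read": 9,
}

# Methodologies ordered from least to most demanding in training terms.
by_training = sorted(technical_training, key=technical_training.get)
```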
Storage Networking 101
Storage networking traditionally has been based on fibre channel, an out-of-band solution that isolates storage traffic to the storage environment. The benefit is that there is no contention for network resources. You don’t have to worry about some clown bringing down your network by streaming the “Lord of the Rings” movie trilogy over his lunch break since the network is isolated and dedicated only to storage.
Further, this isolation make the storage network generally more secure than Ethernet, and fibre-connected devices generally are more reliable since all the connections are fully redundant. Any component along the fibre channel path can fail and the system will still remain running.
The impetus for accessing storage via Ethernet is a fairly recent phenomenon. Support for iSCSI Storage, say on a LeftHand SAN (storage area network), supported by VMware for a few years. Accessing NAS (network-attached storage) via NFS (network file system) or CIFS (Common Internet File System) has had traction for a bit longer, but it has only been during the past few years that people seriously considered using these protocols for mission-critical systems. When it comes to Ethernet networking, the name to know is Cisco. has only been
One major drawback with using Ethernet to access storage is that storage I/O will compete with regular network traffic. A mass e-mail or a virus outbreak can eat up network bandwidth, potentially causing disk corruption if a given system cannot write to disk.
However, if you work for a small company, a big load on your Ethernet network may not be a big deal if there isn’t much traffic on your LAN. More importantly, you may be willing to live with the performance issues caused by bursting network traffic if you don’t have the money to buy a separate fibre or Ethernet network. Fibre switches still cost tens of thousands of dollars.
There is an industry trend toward using a single converged network card (CNA) instead of separate HBAs (host bus adapter) and NIC (network interface controller) cards. The CNA cards support both fibre and Ethernet protocols at 10 Gbps. This convergence will enable a single adapter to provide fibre over Ethernet (FCoE).
Learning the Ropes
Let’s look at Cisco first. Cisco offers a training plan for the Cisco Data Center Storage Networking Design Specialist. Some great resources to study for the certification are Cisco’s Storage Networking Fundamentals and Storage Networking Protocol Fundamentals.
Cisco also has a more general Cisco Certified Entry Networking Technician. References for the CCENT include Cisco’s CCENT Official Exam Certification Guide or Que’s CCENT Exam Cram and Exam Prep guides. (CCENT) certification.
Shawn Conaway, VCP, MCSE, CCA, is a director of NaSPA and editor of Virtualize! and Tech Toys magazines. He can be reached at editor (at) certmag (dot) com. | <urn:uuid:8466312a-8916-4cfa-9e41-0d37904da0e2> | CC-MAIN-2017-04 | http://certmag.com/storage-networking-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00080-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922236 | 614 | 2.640625 | 3 |
Researchers are touting a prototype of a seamless foldable mobile device display that folds in half without a visible crease in the middle.
Researchers from the Samsung Advanced Institute of Technology in South Korea say fabricating a display that can fold in half would have the advantage of providing a large screen in a small, portable form without a visible crease between panels.
The Samsung researchers say they have demonstrated such as beast using what they called a foldable active matrix organic-light-emitting-diode (AMOLED) display. The display consists of two AMOLED panels, silicone rubber, a protective glass cover, and a module case. "The display has a very small folding radius of just 1 mm, so that one panel lies almost completely on top of the other when the display is folded at a 180° angle. Also, the glass cover not only prevents scratches, but can serve as a touch screen, "the researchers said in a statement.
More cool stories: The weirdest, wackiest and stupidest sci/tech stories of 2010
The researchers said they tested the foldable display's mechanical and optical strength by subjecting it to 100,000 folding-unfolding cycles, and found that the relative brightness at the junction decreased by 6%. Since this difference is hardly recognizable by the human eye, the deterioration is considered negligible, the researchers stated. Researchers published a detailed study on the display in a recent issue of Applied Physics Letters.
Samsung in particular is no stranger to building flexible displays. For example this SingularityHub.com article shows Samsung's technology at this year's CES show. The article states: "Despite recent hype, [AMOLED] technology has been in use for years - the Kodak Easyshare LS 633 digital camera with an AMOLED screen was released in 2003. Since then though, the technology has been relatively slow to take off, mostly due to manufacturing costs, and the inability to fill the demand of smart phone manufacturers. Despite difficulties, many companies - namely Sony and Samsung - remain committed to developing the technology. Their efforts seem to be paying off - AMOLED screens are used in current products such as Samsung's Galaxy S smart phones and the soon to be released Nokia E7, next-gen Galaxy Tab, and Sony's Cyber-shot digital camera TX100v."
Follow Michael Cooney on Twitter: nwwlayer8
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:4f9659cd-7d36-485d-aa1e-961face9f818> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2228739/smb/researchers-tout-foldable-display-for-large-mobile-device-screens.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00566-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928336 | 498 | 2.84375 | 3 |
A Collaboration: Network Coding + Reliability
In 2009, Robert Calderbank then a professor at Princeton received an Air Force grant to help solve a difficult communications problem: how to make wireless networks more reliable and thus avoid the communications failures that can result from packet loss. For the Air Force, such failures, occurring during critical situations, have the potential to be life-threatening.
Packet loss has long been a big problem on wireless networks where interference, physical obstacles, and distance between network devices all contribute to the problem. A second problem is the limited capacity of wireless networks and the resulting inefficiencies. Over the years, solving the problems of reliability and efficiency has been the focus of much effort and research.
Calderbank himself had long been investigating the problem. A past vice president of AT&T Research with a background in computational mathematics, his particular focus was on coding theory, both to correct for noise (which can interfere with packet delivery) and to compress data for more efficiency. He was also aware of others were approaching the same problems.
The problem of reliabilty
The reliability problem exists because wireless networks are inherently lossy and constrained. In case of packet loss, TCP (the transport protocol that ensures a packet makes it to its destination) depends on end-to-end retransmission to recover lost packets. While this mechanism is sufficient for wired networks where there is little packet loss, it is not optimal for wireless networks, which are lossy.
When wireless networks came into use, TCP’s inability to operate efficiently under significant packet loss was a big problem, and end-to-end retransmission (where the packet is re-sent from the source), TCP’s sole mechanism for recovering a lost packet, can result in substantial reduction in throughput.
TCP is ill-suited for lossy wireless networks in another way: TCP interprets all packet loss as a sign of congestion—whether or not the cause is overburdened links or errors in the channel—and will reduce the transmission rate. While this response is appropriate for reducing congestion, it doesn’t solve the problem of wireless error-caused packet loss and in fact may result in links that sit underutilized while TCP attempts to alleviate non-existent congestion.
One way to avoid the inefficiency of underutilized links is to implement Explicit Congestion Notification (ECN) in TCP. ECN, which K.K. Ramakrishnan of AT&T Research has long proposed (K.K. Ramakrishnan, Raj Jain, "A Binary Feedback Scheme for Congestion Avoidance in Computer Networks with a Connectionless Network Layer") may be used to disambiguate between congestion-caused packet loss and wireless channel errors. Any packet loss not correlated with ECN could therefore be interpreted as due to wireless channel errors.
The possibilities of network coding
The problem of inefficiency is also addressed by network coding, a new way of forwarding packets that was developed independently over the last decade. In network coding, intermediate nodes within the network combine several packets into one coded packet so more packets can simultaneously use the same link without competing for resources. (Packets are combined using XOR or other linear operations; see side bar.)
The coded packet can then be decoded by the receiving node with the help of additional packets or side information. In a wired network, this information is relayed to receiving nodes in separate packets routed over less-congested paths in the network.
In wireless networks, this side information comes for free, thanks to the broadcast and overhearing capabilities inherent in a wireless medium. Because nodes can overhear one another’s transmissions, they can exchange decoding information without additional overhead.
Since intermediate nodes must know what packets the receiving nodes have overheard to know what new packet to send, receiving nodes must communicate their buffer state upstream to access nodes, often by appending this information on their own transmissions. This need to constantly communicate with other nodes adds both communication and computational overhead to the network coding approach.
But the advantage gained is that packets from different flows, which used to compete for scarce resources in bottleneck links, can now share and better utilize the same resources.
In an environment as resource-constrained as wireless networks, this sharing of resources is important.
Coding packets from different flows is known as inter-session coding, while coding packet within the same flow is known as intra-session coding. Inter-session coding improves efficiency (different flows can share the same bottle-necked links). Intra-session coding can add redundancy to a flow (by adding linear combinations of packets) and thus improve reliability in the presence of loss.
Forming a MURI
Calderbank saw network coding and the gain in efficiency as a way to build a better network for the Air Force. But networking coding is still relatively a new approach with solutions that are yet to be widely tested in practice; many aspects still need to be worked out. Implementing network coding would require expertise in networking, information theory, algorithms, and network protocols.
Calderbank in late 2009 formed a Multidiscipline University Research Initiative (MURI) to assemble the needed expertise. A MURI is a well-established framework for university collaborations with guidelines for sharing resources and funding.
"Our project is organized around the idea that transformational change in network management will require extraordinary interdisciplinary breadth," Calderbank said at the time, "in particular the infusion of fundamentally new mathematical ideas from algebraic topology and compressive sensing.”
For networking and application of network coding, he approached three researchers all working independently on different problems associated with network coding on wireless networks: Christina Fragouli, Suhas Diggavi, and Athina Markopoulou, faculty at l’École Polytechnique Fédéral de Lausanne, University of California at Los Angeles, and University of California at Irvine, respectively.
Fragouli and Diggavi were mainly working on theoretical problems and algorithms while Markopoulou along with her graduate student Hulya Seferoglu (also at UC Irvine) was focused more on the practical matter of implementing network coding and integrating it with TCP and other protocols. Specifically, Markopoulou and Seferoglu’s prior work studied inter-session network coding and its cross-layer optimization with TCP ("Network Coding-Aware Queue Management for TCP Flows over Coded Wireless Networks").
With a team in place for the network coding component, the issue of reliability remained, and for this Calderbank arranged for MURI members to attend a kickoff meeting in 2009, held at AT&T Research (Florham Park, NJ) where reliability for wireless networks is very much a practical engineering problem. From AT&T researchers working on the problem, MURI members heard first hand about the latest network research and which methods had the best chance to be deployed in the near future. (It was a homecoming of sorts. Several MURI members had at one time worked at AT&T Research. )
The meeting was followed by a workshop at UCLA in January 2010, this meeting focused specifically on network coding . Also attending was K.K. Ramakrishnan of AT&T Research.
Ramakrishnan had been working to improve the reliability of IP protocols over wireless networks. This work was in collaboration with researchers from the Rensselaer Polytechnic Institute (RPI)—located in Troy, NY—set up through AT&T Research’s VURI program (Virtual University Research Initiative), which facilitates collaborations between AT&T researchers and universities.
(For AT&T Research, collaborations with universities play an important role, since students have the time and inclination to fully and deeply investigate fundamental problems. University collaborations enable AT&T to expand research efforts, while students are given practical problems to solve—and often a ready-made thesis topic—along with the chance to work with experts in their field. Working with AT&T offers one more advantage, access to the tremendous amounts of network data maintained by AT&T.)
The collaboration between Ramakrishnan and RPI in place since 2005 had been looking to ensure reliability through redundancy by appending extra packets to each transmission. Each redundant packet contains enough information to replace any one lost data packet. The mechanism employed is forward error correction (FEC) using Reed-Solomon to encode information from a fixed-length block of packets.
In FEC, redundant packets can take the place of any lost packets
If a loss occurs in the block, the receiving node uses one of the redundant packets to reconstruct the lost information in the decoding process. Therefore, the loss of a data packet doesn’t matter if there is a redundant packet to replace it. It’s the total number of packets received that counts; if a receiving node expects eight packets and receives six data packets and two redundant packets, it’s received the requisite eight.
Since the coding and decoding is done at the end nodes (unlike network coding where the coding is done at intermediate nodes within the network), FEC is sometimes referred to as source coding.
Redundant packets add overhead (there’s more to transmit after all) but because FEC adds reliability, there are far fewer retransmissions. Ramakrishnan and his RPI collaborators further increase efficiency by “right-sizing,” or varying, the amount of redundancy depending on the reliability of the link, using more redundancy for unreliable links and less for reliable ones.
The theoretical meets the practical
Calderbank’s hunch was that Ramakrishnan’s practical work on transport reliability would complement Markopoulou’s work on network coding for better efficiency, and that combining the two methods would yield a more reliable and efficient network.
Seferoglu and Markopoulou at UC Irvine had looked at the interaction of network coding with TCP flows, but had not evaluated adding redundancy. But the team realized that the combination of network coding and packet-level FEC gracefully solved two problems at the same time: first, it reduced loss while improving the efficiency so critical to wireless channels, and second, it simplified network coding by making it unnecessary for nodes to constantly track which packets were transmitted, which were overheard, and by which nodes. When network coding is combined with packet-level FEC, all that is needed to determine the amount of redundancy is a simple percentage (of packets lost).
With the plan set, and with Ramakrishnan agreeing to advise the MURI on the FEC scheme and network architecture, work began in earnest in April 2010 and Seferoglu began working full time on the project.
Progress to date
The initial months were spent resolving the inconsistencies that inevitably occur when combining two methods, each separately evolved.
One of the first was how to handle the burstiness of TCP flows when inter-session network coding depends on a similar number of similar-sized packets from different flows. This was resolved by modifying active-queue management schemes in a way to work best in conjunction with TCP congestion control and wireless network coding, building on prior work at UC Irvine (see here).
Most of the work integrating TCP and network coding was done in the summer 2010 when Seferoglu worked at AT&T Research under the supervision of Ramakrishnan.
The issue of loss was also especially complicated in network coding because loss affects not only the direct links but the overhearing links as well (overhearing depends on good, error-free links). Because the performance of network coding declines on lossy networks, decisions have to be made on what percentage and which flows should be coded together (inter-session network coding) and how much redundancy (in this case in the form of intra-session network coding) is required.
Working through these and other problems resulted in a novel unifying scheme, I2NC, which builds on top of one-hop constructive network coding (COPE) and combined inter-session coding with intra-session redundancy. The team also designed a thin layer between TCP and I2NC, which codes/transmits and acknowledges/decodes packets at the sender and receiver nodes in such a way to make I2NC operations transparent to the TCP protocol. The benefits of I2NC include: bandwidth efficiency (thanks to inter-session coding); resilience to loss (thanks to intra-session redundancy); and reduced protocol overhead (setting the nodes free from the need to communicate with one another and exchange information about which packets they have overheard). A paper on I2NC (“Inter- and Intra- Session Network Coding“) has just been accepted at the IEEE Infocom 2011 conference.
The next step is implementation, and with an offer of help from Air Force Office of Scientific Research (AFOSR)—specifically the Operations Integration branch at Rome, NY—the MURI team has just started implementation at the AFOSCR Emulab testbed.
The next steps
The collaboration is now approaching its one-year mark, and the indications are that network coding with the FEC redundancy will provide both efficiency and reliability in wireless networks, and in a simpler way than was previously thought.
A real test will come as the team begins implementing I2NC on Android smartphones to determine whether I2NC is feasible on devices with limited resources, something that would be very hard to do when nodes were required to track overheard packets.
Certainly implementing network coding on smart phones was not foreseen at the beginning of the project, and it was only by collaboratively fusing two different methods that the necessary gains in efficiency were achieved.
Calderbank is pleased with the way the collaboration is going:
"When we held our workshop at UCLA we discovered two different approaches to improving the rate and resilience of Air Force communications and we started to explore whether the benefits were additive. I am delighted to see that they are additive and that collaboration with AT&T Labs is accelerating the transfer of technology to the Air Force.”
XORing two packets
Access nodes and other network devices see a packet as a string of 0s and 1s. In network coding, the bit strings of two packets are combined using the exclusive OR logical operation, or XOR (symbol ).
XORing assigns a “1” if two bits are different, “0” if the same.
The idea for bit-wise XORing packets in this way was proposed in the paper XORs in the Air: Practical Wireless Network Coding. It works like this:
An access node looks at its input queue to find similar-sized packets going to nearby destinations. The packets may be from different sessions.
The access node opens the packets and XORs the two packets’ bit strings to form a new coded packet:
The packet is decoded at the receiving node using information overheard from nearby nodes.
XORing is the simplest, not the only, method to combine packets (linear combinations are also used) | <urn:uuid:34b02722-a5ed-4dec-a313-e39dbf4add71> | CC-MAIN-2017-04 | http://www.research.att.com/articles/featured_stories/2010_10_slider-stories/201010_collaboration.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00382-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946745 | 3,087 | 2.734375 | 3 |
Ransomware, a malicious type of malware that works by encrypting files in exchange for a ransom, has yet to be a threat to Apple computers. This is not to say that Apple’s operating system is any more secure than Windows, it is just that malware developers have not yet figured out writing ransomware for OS X because infecting Windows machines has been extremely profitable enough. A few security researchers even demonstrated how easy it could be to develop ransomware that targets Macs. Rafael Salema Marque’s experiment to show how OS X can be targeted took him just a few days and security expert Perdo Vilaca created a proof-of-concept code for his Mac ransomware.
The infamous Cryptowall has proven that ransomware can be devastating to both companies and consumers alike, with losses of more than $18 million. The cost to get a decryption key could range from a few hundred to thousands of dollars, and it is not unusual for the cyber criminals to not even provide the key despite being paid.
A mac user that encounters ransomware would have to somehow be tricked into running it. Apple uses security technology called Gatekeeper which blocks apps from unidentified developers from running. This will help save those from being fooled into running something that is not available in the app store or is not from an identified developer. However, security experts have found software flaws that show that Gatekeeper can be circumvented. This, along with the experiments conducted by Vilaca and Marque, show that although penetrating the OS X is not something to be worried about as of now, never underestimate the potential of these malware developers because infiltrating Mac is not impossible. | <urn:uuid:b543d2f0-6190-4033-9be2-34fb8caa9e33> | CC-MAIN-2017-04 | http://www.bvainc.com/ransomware-not-yet-a-threat-to-macs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00006-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966093 | 332 | 2.9375 | 3 |
Hi Nathan -
On Fri, 1 Aug 2014 15:27:02 -0600, Nathan Andelin <nandelin@xxxxxxxxx>
Mainstream hosting providers use "A" records to define "subdomains"
and offer wildcard certificates such as *.example.com for all
"subdomains" of example.com. You seem to have a problem with that. But
Here's another way to look at it.
A domain is like a directory.
A subdomain is like a directory within another directory (where the
containing directory could be a domain directory or a subdomain
A host is like a file within one of those directories.
If someone said that they needed a program to edit customer orders
against the items in the item master directory, how much sense would
If lots of people talked about the item master directory, would that
somehow make a directory and a file be the same thing?
Opinions expressed are my own and do not necessarily represent the views
of my employer or anyone in their right mind. | <urn:uuid:e5f4d564-14a8-4425-9695-6e554ce19973> | CC-MAIN-2017-04 | http://archive.midrange.com/midrange-l/201408/msg00063.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931799 | 220 | 2.6875 | 3 |
Find New Perspectives to Solve Problems
When Orville and Wilbur Wright sought to put people in flight, they were actually latecomers to the field of contenders — inventors in Europe and North America had worked in earnest on design and construction for a flying machine several decades before the Wright brothers’ attempts on the Carolina coast.
Many of the other challengers had more-formal education, experience and resources than they did, but the Wrights brought something inimitable and invaluable to the equation: a fresh perspective.
Before the Wrights, would-be pilots looked at flying in terms of brute force. What was needed, they claimed, was simply the power to put a vehicle up in the air and sustain it in flight, but they gave little thought as to how the plane would actually be controlled. This resulted in some spectacular crashes, but nothing that could be accurately called flying.
Enter the Wright brothers, Wilbur in particular. They too had their share of crashes, but they learned from them and changed their designs accordingly.
Wilbur also spent significant amounts of time observing birds in flight, and he noticed they sometimes “tipped” their wings to one side or another to gain balance and adjust to the differences in the lifting forces caused by the air around them.
Thus, he realized early on that the problem wasn’t one of power but of novel concepts such as lift and drag.
The rest is history, of course, but the lessons don’t have to stay in the past — they can be applied to current and future problems, as well.
Here are a few problem-solving pointers for the project managers, networking engineers, applications developers and IT professionals out there who face an obstacle that they just can’t seem to hurdle.
- Challenge Everything. Once the slogan of a video game publisher, in this context, “challenge everything” refers to critical thinking. Don’t always accept the conventional wisdom, especially when others are using the same methods and failing. Think of new ways to do things and try them as often as your time and budgetary constraints allow.
- Ask Others. This doesn’t refer solely to experts in the field. Remember: Sometimes the freshest perspective is that of someone who hasn’t had any exposure to the subject matter. These people approach the situation without any kind of orthodox or prejudicial constraints on their thinking, so they’ll probably consider more options. Many of their conclusions might be wrong, but if they bring just a little bit of insight, it could help out in a big way.
- Study Other Great Minds. For inspiration in innovation, look to the lives of the great geniuses. The Wright brothers are but one example — you also could study the life of Thomas Alva Edison, who almost single-handedly managed to out-invent the rest of the world while he was alive, or George Washington Carver, who transformed agriculture into a science. Who knows? One day you might end up in this pantheon of innovators. | <urn:uuid:66a2f530-156b-4842-8f45-2a7b3ed4d565> | CC-MAIN-2017-04 | http://certmag.com/find-new-perspectives-to-solve-problems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968443 | 629 | 2.75 | 3 |
Accreditation and Why It’s Important
It’s getting difficult to tell the quality IT certification programs from the others. Most programs create good tests, manage their programs well, maintain high security and provide value. But some do not. And they still call themselves certification programs. How can candidates, employers and other organizations tell the difference?
Other industries, such as transportation, medicine and finance, have dealt with this problem before and have used a process called “accreditation” to help solve it. These industries use accrediting organizations that evaluate certification programs against national or international standards. For example, the International Organization for Standardization (ISO) has created a set of standards for certification programs: ISO 17024 looks at test quality, certification management, communication with candidates, privacy and security, and many other areas. A program with an accreditation based on these standards can be trusted and used by certification candidates, certificants, employers and any other stakeholder in today’s IT certifications.
In fact, recently I’ve been a part of discussions involving the interests of a large federal government agency. The logic is this: The agency would love to endorse or approve existing IT certifications as long as those certifications come from accredited programs. According to its needs, the agency would then hire directly those people with qualified certifications or recommend them to contractors or other hiring companies. What’s in it for the government? Obviously, like everyone else, they can be sure of hiring competent, certified professionals and also be sure that those professionals achieved the certification under strict testing and security procedures. Who can blame them for wanting such confidence in hiring decisions?
But let’s back up a bit and be clear on the definitions. “Accreditation” is a word that is often mistaken with “certification.” Let me explain how they are different.
Certification is the act of providing a credential to verify competence at a set of job skills or on a body of knowledge. Examples of certifications are the ones you know well in the IT industry. Other certifications are those obtained by mechanics, financial planners, pipe fitters, medical technologists and hundreds of other professions. (In some professions, individuals actually obtain licenses instead of certifications, meaning that a state or federal government agency approves the credential and provides a license to work.)
Accreditation is the act of providing a credential to an organization or program, not to an individual. Colleges and universities are accredited, for example. It is correct to say that people receive IT certifications, and IT certification programs receive accreditation. Accreditation means that the program adheres to specific standards.
Today, few IT certification programs have been accredited by independent industry accreditation organizations that look specifically at certification standards. Coincidentally, IT certification is experiencing a “crisis of confidence.” People within and without the IT industry are questioning the value of the certifications.
The time has come for the IT industry to solve this growing problem, before it gets out of hand, and each of you can help. So what is it you can do?
First, look for accredited programs if you are just beginning to look at becoming certified. Second, if you are already along the way, contact your certification program manager and request that the program obtain a national or international accreditation. And third, don’t whine if it ends up costing the program or you a little more money. The additional amount won’t be much, but it will definitely be worth it. The program itself will improve in many ways to meet the standards. Programs that are unwilling to step up will be left behind. And you will find that your credential is viewed with renewed respect by a much larger number of companies and perhaps a government agency or two.
Request that your certification program achieve accreditation. Refer them to any of these accrediting bodies:
- The American National Standards Institute (www.ansi.org).
- The National Organization for Competency Assurance (www.noca.org/ncca/accreditation.htm).
- The Buros Institute for Assessment Consultation and Outreach (www.unl.edu/BIACO).
While each has different strengths, obtaining one or more of these accreditations would certainly strengthen any program, increase confidence in the certifications awarded and grab the attention of employers. With more and more IT programs joining in, we can raise the value of IT certifications in general.
David Foster, Ph.D., is president of Caveon (www.caveon.com) and is a member of the International Test Commission, as well as several measurement industry boards. | <urn:uuid:2ee0e0b5-f33c-4e9d-82ba-18929ba4deaf> | CC-MAIN-2017-04 | http://certmag.com/accreditation-and-why-its-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00154-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952512 | 958 | 2.71875 | 3 |
Kenya is the largest economy in East Africa and has one of the most stable political systems on the continent. When it comes to investment, Kenya has always welcomed foreign capital and has taken many steps to increase the ease of doing business.
The government, in collaboration with the private sector, civil society, development partners and other stakeholders, formulated Kenya Vision 2030, which aims to make Kenya an industrialized nation with a prosperous middle class providing a high quality of life to its citizens by 2030. This strategy has drawn more investors into the country. Because the Kenyan domestic market is small (the country has a population of roughly 40 million people), Kenya has been building major infrastructure projects to link itself with other East African countries such as Tanzania, South Sudan, Uganda, Rwanda, Burundi, Somalia and Ethiopia, thereby making itself the hub of East Africa.
As the saying goes, if you conquer Nairobi (the Kenyan capital city), you have conquered East Africa. These links expand the reachable market for companies in Kenya to as many as 400 million consumers. Kenya's economy has long been driven by agriculture, but the country has diversified into mining, tourism and trade, and this has been accompanied by major discoveries of minerals, including oil, rare minerals and gold.
GnuPG, or gpg, is the free implementation of the well-known OpenPGP standard. It allows you to encrypt and decrypt files based on keys. gpg uses what is commonly known as public-key cryptography, using a private and a public key to allow safe encryption and decryption of files.
How gpg works
The private and public keys are required to keep files secure when transferring them over unsecured networks. The private key is held by the generator of the keys (and only the owner should have access to it); the other key is a public key, which can be distributed to anyone. The distribution list depends on what you are using gpg for, and of course every user on the distribution list must have gpg installed as well.
The private key is for the owner only and is further secured by a passphrase. The public key can be freely distributed or exchanged with other gpg users. Once you have been sent a public key from another user (or you have downloaded it from a web site), you need to add it to your public keyring. This file holds all your public keys. You need it to encrypt files (assuming you are sending files to other users) using the intended recipient's public key. The recipient can then decrypt your encrypted file using their own private key.
In this article, I will demonstrate how to generate a pair of keys and how to encrypt and decrypt files. Encryption and decryption can also be used purely for your own security; you do not have to share your public key if you are not exchanging files with other trusted users. I will also demonstrate how to sign a file, how to check the integrity of its contents, and how to use gpg within a batch environment.
The current version of gpg is 2.0.16 and can be built from source. I have only been successful building it from source using a commercial C compiler. Since this is not an option for most administrators, I am using a pre-compiled binary gpg version 1.4.7 for this demonstration. Details of downloads can be found in the Resources section.
$ gpg
gpg: Go ahead and type your message ...
^C
$
You may get a warning about insecure memory similar to the following:
$ gpg
gpg: WARNING: using insecure memory!
gpg: please see http://www.gnupg.org/faq.html for more information
gpg: Go ahead and type your message ...
To get rid of this warning, make the gpg binary set uid:
$ chmod 4755 gpg
Generating your key pair
Before you can begin to encrypt or decrypt files, your first task is to generate your public and private keys using the gen-key option of gpg. Listing 1 is a truncated output of the command. You will be asked questions about the type of key, its size, and how long the key should live. For this demonstration, I answered those questions as shown below:
Key: DSA and Elgamal
Key size: 2048
Never expire the key
Name: David Tansley
Email: email@example.com
Comment: aix admin
Passphrase: watchmaker
For clarity I have used my full name in the key generation during this demonstration, but the name can be any alias that identifies you. I could have used dxtans, for example.
Note: Keep your passphrase secure, do not forget it, as you will be prompted for it when using gpg operations on files.
Listing 1. generate keys
$ gpg --gen-key
gpg (GnuPG) 1.4.7; Copyright (C) 2006 Free Software Foundation, Inc.
gpg: please see http://www.gnupg.org/faq.html for more information
Please select what kind of key you want:
   (1) DSA and Elgamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection? 1

DSA keypair will have 1024 bits.
ELG-E keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y
Real name: david tansley
Email address: firstname.lastname@example.org
Comment: aix admin
You selected this USER-ID:
    "david tansley (aix admin) <email@example.com>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key: watchmaker
gpg: /home/dxtans/.gnupg/trustdb.gpg: trustdb created
gpg: key DD096620 marked as ultimately trusted
public and secret key created and signed.
uid    david tansley (aix admin) <firstname.lastname@example.org>
sub    2048g/25B15F46 2010-08-10
Once the keys have been created, a subdirectory will be present in your HOME directory called .gnupg . This directory will be the holding place for your public and private keys, as well as your public keyring. You will need public keys from users who send you their encrypted files, so you can decrypt them.
Now the keys have been generated, lets encrypt a file, then decrypt it.
Encrypting and decrypting a file for personal usage
The format to use gpg to encrypt files for personal usage is:
gpg <options> <filename>
Practically all options given to gpg are prefixed by two dashes, like so: --
Encrypting a file for personal usage does not require the encrypt option. To encrypt a text file, specify the armour option, which produces ASCII-armoured output; by default, gpg assumes it is a binary file you wish to encrypt and produces binary output. The symmetric option tells gpg to use a passphrase rather than a public key. Lastly, the name of the input file is parsed; in this case, it is the file my_file. Once the command is issued, you will be prompted to enter a passphrase for the file (for symmetric encryption this can be any phrase; in these examples I reuse the one entered when the keys were generated). Notice that we do not have to specify the type of cipher; by default, gpg uses CAST5.
$ cat my_file
Meet me at the same place tonight, come alone
$ gpg --armour --symmetric my_file
Enter passphrase:
gpg will generate a new file with an extension of .asc, containing the encrypted information:
$ ls -l my_file*
-rw-r--r--   1 dxtans   staff        46 Aug 12 19:37 my_file
-rw-r--r--   1 dxtans   staff       211 Aug 12 19:37 my_file.asc
Looking at the now encrypted file, it is now unreadable:
$ cat my_file.asc
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.7 (AIX)

jA0EAwMC/W81Yzg46lNgyUeKDVxXVGUNSVNv8x2HdzaDMLP4DRJlyjtxX5UXlhvH
nM+/nftRgcbgJo/qmzKSa+XjoVqZALrVeFsdRm7yYGxLYRR1s5QhCg==
=k+s4
-----END PGP MESSAGE-----
You can also specify the name of the encrypted file to be produced using the output option. In the following example, the encrypted file will be called myoutfile:
$ gpg --armour --output myoutfile --symmetric my_file
To decrypt the file just created, use the decrypt option specifying the name of the encrypted file. The format to use gpg to decrypt files is:
gpg <options> --decrypt <file-name>
The file will be displayed to standard output:
$ gpg --decrypt my_file.asc
gpg: CAST5 encrypted data
gpg: encrypted with 1 passphrase
Meet me at the same place tonight, come alone
gpg: WARNING: message was not integrity protected
To specify the output of the decrypted file, use the output option. In the following example the decrypted file will be called my_decrypt_file:
$ gpg --output my_decrypt_file --decrypt myoutfile
The following will achieve the same as the above example but this time using file redirection:
$ gpg --decrypt my_file.asc >my_decrypt_file
To disable gpg informational messages, use the quiet option when decrypting:
$ gpg --decrypt --quiet my_file.asc
Meet me at the same place tonight, come alone
gpg: WARNING: message was not integrity protected
Even after using the quiet option you may still get a warning about not being integrity checked. The warning is caused if the original file was not encrypted with modification detection checking enabled. This warning can be suppressed by issuing the no-mdc-warning when decrypting a file:
$ gpg --decrypt --no-mdc-warning --quiet my_file.asc
As a general rule, it is best not to have the warning at all rather then suppressing it. To encrypt a file forcing an mdc, use the option force-mdc:
$ gpg --force-mdc --armour --output outfile --symmetric myfile
Exporting and importing keys
When files are to be exchanged, each file is first encrypted using the intended recipient's public key. Once the recipient gets the encrypted file, the recipient uses his own private key to decrypt it. Typically, a public key can be uploaded onto a trusted site where users can download it and put it into their keyrings. However, when using gpg within an enterprise environment to exchange sensitive files, the users you give your public key to will be very selective. Accordingly, to send encrypted files to other users, you will need their public keys in your keyring; likewise, anyone sending encrypted files to you will need your public key.
Once your public key is exported, you can then transfer the key to its destination either by email, scp or ftp. The format to use gpg to export the public key is:
gpg --armour --output <file-name> --export <key-name>
Where key-name is the real name you entered when generating your keys. Notice I am exporting with the armour option, as the key will be exported in ASCII mode.
To view your keys, you can list your current key keyring using the list-keys option:
$ gpg --list-keys
/home/dxtans/.gnupg/pubring.gpg
-------------------------------
pub   1024D/DD096620 2010-08-10
uid                  david tansley (aix admin) <email@example.com>
sub   2048g/25B15F46 2010-08-10
In the above example, for the UID I could use either david tansley or firstname.lastname@example.org, as both names identify me. In this scenario, I will use david tansley.
You can also list your private key with:
$ gpg --list-secret-keys
/home/dxtans/.gnupg/secring.gpg
-------------------------------
sec   1024D/DD096620 2010-08-10
uid                  david tansley (aix admin) <email@example.com>
ssb   2048g/25B15F46 2010-08-10
To export my public key in ASCII mode to the file dxtans_pubkey:
$ gpg --armor --output dxtans_pubkey --export 'david tansley'
$ ls -l dxtans_pubkey
-rw-r--r--   1 dxtans   staff      1180 Aug 14 08:34 dxtans_pubkey
Alternatively, I could use redirection to produce the output file, like so:
$ gpg --armor --export 'david tansley' > dxtans_pubkey
Listing 2. public key. Shows a truncated output of my public key
$ cat dxtans_pubkey
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (AIX)

mQGiBExhkgIRBACPxUNlJP9xTGUwbgs/6T9rG1p2CzODHszEyisjLAiKJ6sgTvdl
/0Xr+ioEkknCq37XVotvu5+0plF3Say3BOtTL7qw1unL5kCWulYzvsc/BTrMiKYD
Hmg5fjpR/pCXIYIsyJ0g+J0EHAEmMpH7fTOAhNgevK4d8J3GDz2liUmVXwCgzbKP
MD5GS7y8AJ962Kj7LZx/jruTCYsD/GH6PgbzNw==
=wLpz
-----END PGP PUBLIC KEY BLOCK-----
The next task is to send my public key to user(s) that I want to send encrypted files to, so that they can decrypt the files using their own private key. In this demonstration, I am going to send my public key via email to the company security officer (busintel). That person's fictitious email address is firstname.lastname@example.org.
The key can now be transferred via ftp, scp, or just pipe the export command into an email. In the following example, my public key is exported, and the contents are piped into the email content with the subject line of pubkey:
$ gpg --armour --export email@example.com | mail -s pubkey firstname.lastname@example.org
Alternatively, I could send it as an attachment file called dxtans_pubkey.txt in a email:
#!/bin/sh
list=email@example.com
mail -s "intel.. Here is my public key" $list <<mayday
Thanks
dxtans
~<!uuencode dxtans_pubkey dxtans_pubkey.txt
mayday
Now the key has been sent, user busintel can now download the file to import it into his keyring. The format of the command to import a key is:
gpg --import <filename>
User busintel would now perform the following to import my public key, assume he saves my public key as dxtans_pubkey:
$ gpg --import dxtans_pubkey
gpg: key DD096620: public key "david tansley (aix admin) <firstname.lastname@example.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
To confirm the public key is now in busintel's key ring, user busintel could list his keys, like so:
$ whoami
busintel
$ gpg --list-keys
/home/busintel/.gnupg/pubring.gpg
------------------------------
pub   1024D/2326BEEA 2010-08-11
uid                  busintel (company intel) <email@example.com>
sub   2048g/8DCD62BC 2010-08-11

pub   1024D/DD096620 2010-08-10
uid                  david tansley (aix admin) <firstname.lastname@example.org>
sub   2048g/25B15F46 2010-08-10
When you use an imported public key, gpg may ask you to confirm that the key really belongs to the person it names. You can avoid this by recording your level of trust in each public key you hold. The format of the command is:

gpg --edit-key <UID>
Listing 3 demonstrates how to set the trust level for a public key in your keyring. When presented with the Command> prompt, enter the keyword trust, then select your level of trust from the menu. In the following example, the key of email@example.com has its trust raised by user busintel to the level of ultimate trust.
Listing 3. Giving trust
$ gpg --edit-key firstname.lastname@example.org
gpg (GnuPG) 1.4.7; Copyright (C) 2006 Free Software Foundation, Inc.

pub  1024D/2326BEEA  created: 2010-08-11  expires: never       usage: SC
                     trust: unknown       validity: unknown
sub  2048g/8DCD62BC  created: 2010-08-11  expires: never       usage: E
[ unknown] (1). busintel (company intel) <email@example.com>

Command> trust
pub  1024D/2326BEEA  created: 2010-08-11  expires: never       usage: SC
                     trust: unknown       validity: unknown
sub  2048g/8DCD62BC  created: 2010-08-11  expires: never       usage: E
[ unknown] (1). busintel (company intel) <firstname.lastname@example.org>

Please decide how far you trust this user to correctly verify other
users' keys (by looking at passports, checking fingerprints from
different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  1024D/2326BEEA  created: 2010-08-11  expires: never       usage: SC
                     trust: ultimate      validity: unknown
sub  2048g/8DCD62BC  created: 2010-08-11  expires: never       usage: E

Command> quit
The trust setting has now been changed to ultimate trust for david tansley's public key.
I would now perform the same process to import busintel's public key into my keyring. Let's now assume that has been completed. We are now ready to exchange encrypted files.
Encrypting and decrypting files using public keys
Let's assume I have just completed some MIS statistics contained in a spreadsheet. I now need to send user busintel this sensitive file. The command format to encrypt a file with a public key is:
gpg <options> --encrypt --recipient(s) <uid of person(s) sending to> <filename>
The file to encrypt is monthly_mis.xls, and the encrypted output file will be named 2010_08_mis.xls. The recipient's UID is busintel.
As this file is not ASCII, I will not use the armor option. The following command will encrypt the file, using the information provided above, using busintel's public key:
$ gpg --output 2010_08_mis.xls --encrypt --recipient 'busintel' monthly_mis.xls
$ ls 2010*
2010_08_mis.xls
$ cat 2010_08_mis.xls
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.7 (AIX)

hQIOA/j/okH5aj/XEAgAhRaxI2fZnBoShlFOuaZ8/yDRRNsNXduIFAAyUOTKdKU+
d7asHYcKZFkVx8D2iQxORmvRtuYFg2lGG1i3YFLNrMggCJcf7HzQBO1G5DPoQjUU
....
The file 2010_08_mis.xls is now sent via email to busintel for that user to decrypt.
The command format to decrypt a file with a public key is:
gpg <options> --decrypt <filename>
The user busintel has now received the file and will decrypt it using his own private key. He has decided to call the decrypted file aug_mis and will be prompted for his own passphrase (in this example, papercutter) to complete the decryption process.
User busintel would run the following command to decrypt which is in the previous encrypt example. I had encrypted the file using busintel's public key; he can now decrypt using his own private key:
$ gpg --output aug_mis --decrypt 2010_08_mis.xls

You need a passphrase to unlock the secret key for
user: "busintel (business intel) <email@example.com>"
2048-bit ELG-E key, ID F96A3FD7, created 2010-08-14 (main key ID 86C597BF)

Enter passphrase: papercutter

gpg: encrypted with 2048-bit ELG-E key, ID F96A3FD7, created 2010-08-14
      "busintel (business intel) <firstname.lastname@example.org>"
The file has now been decrypted and is called aug_mis, ready for user busintel to review.
Over a period of time, your keyring will become quite populated with keys; some will no doubt belong to users who are no longer part of your key exchange. To delete these keys, use the delete-key option. The command format to delete a key is:
gpg --delete-key <UID>
Before running the above command, be sure to use the list-keys option to identify correctly the key before deletion. In the following example, the public key of bravo is deleted. Notice that I first list the keys to be sure of the correct UID I am using:
$ gpg --list-keys
/home/dxtans/.gnupg/pubring.gpg
-------------------------------
pub   1024D/DD096620 2010-08-10
uid                  david tansley (aix admin) <email@example.com>
sub   2048g/25B15F46 2010-08-10

pub   1024D/28B78F84 2010-08-14
uid                  bravo (aix user) <firstname.lastname@example.org>
sub   2048g/860FAE6D 2010-08-14
....
$ gpg --delete-key bravo
gpg (GnuPG) 1.4.7; Copyright (C) 2006 Free Software Foundation, Inc.

pub  1024D/28B78F84 2010-08-14 bravo (aix user) <email@example.com>

Delete this key from the keyring? (y/N) y
$
Dealing with encryption or decryption within a batch environment means one thing: automation. The only human interaction that is required to encrypt or decrypt files depending on the process is entering of the passphrase. gpg offers the batch option, where the passphrase can be parsed or read into the passphrase prompt. There are a few ways this can be done. One could echo a string and pipe it through to gpg or redirect from a file or read from a file into gpg.
The command format to run in batch mode is:
gpg <options> --batch --passphraseX <filename>

Where passphraseX is one of:

- passphrase-fd n: read the passphrase from file descriptor n
- passphrase-file file: read the passphrase from the first line of file
The first approach uses the batch option along with passphrase-fd: the passphrase is read in via a local file descriptor. The following command encrypts the text file myfile and outputs it to tord3.gpg. Notice the use of the batch option along with passphrase-fd 3: here we open file descriptor 3 and attach to it the file called .passf. The contents of .passf is a single passphrase on one line, with no line feed, readable only by the owner.
$ cat .passf
watchmaker
$ gpg --armor --output tord3.gpg --batch --symmetric --passphrase-fd 3 3<.passf myfile
Still using the passphrase-fd example, we could also echo in a string containing the passphrase into the input stream fd 0, like so:
$ echo watchmaker | gpg --output tord3.gpg --batch --symmetric --passphrase-fd 0 myfile
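The file-descriptor plumbing that passphrase-fd relies on can be seen without gpg at all. In this sketch (the path and passphrase are illustrative values, not from the article), cat stands in for gpg and reads the passphrase from descriptor 3, exactly as --passphrase-fd 3 would:

```shell
# Write a demo passphrase file readable only by the owner.
umask 077
printf 'watchmaker' > /tmp/demo_passf   # printf, unlike echo, adds no line feed

# Open the file on descriptor 3 for this shell, then let a child process
# read fd 3 -- the same mechanism gpg uses with: --passphrase-fd 3 3<.passf
exec 3</tmp/demo_passf
cat <&3                                 # prints: watchmaker
exec 3<&-                               # close fd 3 again
```

Note that redirections are processed left to right, which is why `3<.passf` must open the descriptor before anything tries to read it.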
Another example that is very similar to the above uses the batch option along with the passphrase-file. Here the file .passf containing the passphrase is read in by gpg.
$ gpg --armor --batch --symmetric --passphrase-file .passf --output tord3.gpg myfile
As in both examples, no user input is required, which makes this ideal for batch processing large volumes of files. The passphrase file should contain only one passphrase and should not contain a line feed at the end of the line.
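These constraints are easy to get wrong (echo, for instance, appends a line feed), so here is a hedged sketch of creating a passphrase file safely and checking it; the path and passphrase are illustrative:

```shell
# Create the passphrase file with no trailing line feed and owner-only access.
umask 077
printf '%s' 'watchmaker' > /tmp/passf_check
chmod 600 /tmp/passf_check

# Sanity checks: wc -l counts line feeds, so a correctly written
# file reports 0 lines...
wc -l < /tmp/passf_check
# ...and the byte count should equal the passphrase length (10 here),
# proving nothing extra was appended.
wc -c < /tmp/passf_check
```

If wc -l reports 1, the file was written with a trailing line feed and should be recreated with printf '%s'.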
Listing 4 contains a script that checks the directory /opt/hfc/holding/ for files matching the pattern 'mis*' to encrypt. If any are found, it encrypts them using the batch passphrase-file option. The file containing the passphrase is /home/dxtans/.gnupg/.passf. The files are encrypted with user busintel's public key in readiness for that user (who also has my public key) to decrypt them. Once encrypted, the files are created in the /opt/hfc/encrypt directory.
Listing 4. encrypt_files
#!/bin/sh
# encrypt_files
passf=/home/dxtans/.gnupg/.passf
input_dir=/opt/hfc/holding
output_dir=/opt/hfc/encrypt

filelist=$(ls $input_dir/mis* 2>&1)
if [ $? != 0 ]
then
    echo "no files to process...exiting"
    exit 0
fi

for txtfile in $filelist
do
    filename=$(basename $txtfile)
    echo "attempting to encrypt..$txtfile"
    gpg --batch --force-mdc --output $output_dir/$filename.gpg --armor \
        --passphrase-file $passf --symmetric --encrypt --recipient 'busintel' $txtfile
    if [ $? != 0 ]
    then
        echo "Failed on $txtfile to encrypt to $output_dir/$filename.gpg"
    else
        echo "OK $txtfile encrypted, new location $output_dir/$filename.gpg"
        # rm $txtfile
    fi
done
When the script is run, the output is similar to the following:
$ /home/dxtans/.gnupg/encrypt_files
attempting to encrypt../opt/hfc/holding/mis_341
OK /opt/hfc/holding/mis_341 encrypted, new location /opt/hfc/encrypt/mis_341.gpg
attempting to encrypt../opt/hfc/holding/mis_342
OK /opt/hfc/holding/mis_342 encrypted, new location /opt/hfc/encrypt/mis_342.gpg
Once the files have been processed, it is just a matter of decrypting them, again using the batch option with passphrase-file. For example, user busintel could decrypt the file mis_341.gpg using the passphrase held in his own passphrase file. The file will be decrypted to /opt/hfc/encrypt/mis_341:
$ gpg --batch --quiet --output /opt/hfc/encrypt/mis_341 --decrypt \
      --passphrase-file /home/busintel/.gnupg/.passf /opt/hfc/encrypt/mis_341.gpg
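To process a whole directory of encrypted files the same way, the decrypt side can mirror Listing 4. The following counterpart script is my own sketch, not part of the original process; it assumes the same paths and passphrase-file conventions, exits quietly when gpg or input files are absent, and strips the .gpg suffix to name each output file:

```shell
#!/bin/sh
# decrypt_files -- batch counterpart to Listing 4 (illustrative paths).
command -v gpg >/dev/null 2>&1 || { echo "gpg not installed...exiting"; exit 0; }

passf=/home/busintel/.gnupg/.passf
input_dir=/opt/hfc/encrypt

filelist=$(ls $input_dir/*.gpg 2>/dev/null)
if [ -z "$filelist" ]
then
    echo "no files to process...exiting"
    exit 0
fi

for gpgfile in $filelist
do
    outfile=${gpgfile%.gpg}   # /opt/hfc/encrypt/mis_341.gpg -> .../mis_341
    echo "attempting to decrypt..$gpgfile"
    if gpg --batch --quiet --output "$outfile" --decrypt \
           --passphrase-file "$passf" "$gpgfile"
    then
        echo "OK $gpgfile decrypted, new location $outfile"
    else
        echo "Failed to decrypt $gpgfile"
    fi
done
```

Note that in batch mode gpg will refuse to overwrite an existing output file unless you also pass the --yes option.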
There are cases where you will send text files to users, or perhaps upload a file to a public area, where the file itself need not be encrypted, but users need to be sure that the file has actually come from the person it claims to come from. One method to achieve this is to clearsign your file using your own private key. This does not encrypt your file, but rather creates another copy of it, with the file extension .asc. This new file contains your signature as well as the contents of the original file. You will be prompted for your passphrase when you sign the file.
To clear sign a file, the command format is:
gpg --clearsign <filename>
For example, to sign my file popfile, I could use:
$ gpg --clearsign popfile

You need a passphrase to unlock the secret key for
user: "david tansley (aix admin) <firstname.lastname@example.org>"
1024-bit DSA key, ID DD096620, created 2010-08-10
The above command will produce another file called popfile.asc. The signature can be verified by other users using the verify option:
$ gpg --verify popfile.asc
gpg: Signature made Sat Aug 14 16:59:03 BST 2010 using DSA key ID DD096620
gpg: Good signature from "david tansley (aix admin) <
If the file has been tampered with, you will get a bad signature message, like so:
$ gpg --verify popfile.asc
gpg: Signature made Sat Aug 14 16:59:03 BST 2010 using DSA key ID DD096620
gpg: BAD signature from "david tansley (aix admin) <email@example.com
To sign binary files, use the command:
gpg --sign <filename>
GnuPG provides a secure method of encrypting your own personal files or files you exchange with other users. In this article, I have demonstrated how gpg can be used to encrypt and decrypt files, as well as how it can be used within a batch environment.
Get products and technologies
- See the GnuPG website for more information.
- Get the GnuPG tar binary (1.0.6).
- Get the GnuPG tar binary (1.4.7).
- Get the GnuPG rpm binary (2.0.15).
Is global warming caused by us or something else? Those who believe in human-caused global warming (Camp A) point to increasing greenhouse gases, our excessive carbon footprint, use of fossil fuels and the like as evidence to support their position. Those who believe in natural global warming (Camp B) point to the close link between solar activity and temperatures on earth, the growing body of evidence that shows greenhouse gases are actually the result of global warming and not its cause, and the fact that global warming started a few centuries before the Industrial Revolution.
From an IT perspective, however, it doesn’t matter which “green” argument you believe: if you’re in Camp A, your goal might be to use technology to reduce your carbon footprint and hope to reduce global warming. If you’re in Camp B, your goal might be to use technology to reduce costs for your company. Either way, your actions will be identical, only the motives will be different.
Here are just two examples of how IT can help organizations be green or save green:
* Virtualization: Novell PlateSpin has a nice cost calculator on its Web site that shows that 50 physical servers (10% average processor utilization, 750 watts per server, 10 cents per kilowatt) can be converted to 10 physical servers, each running five virtual machines. The cost savings will be $52,560 per year.
* Videoconferencing: This is an important component for some unified communications systems and one that can significantly reduce travel costs. For example, at Interop last week I saw a demonstration of a high definition videoconferencing system from LifeSize that provides excellent performance at just 1Mbps. Plus, the cost of a base system is just $5,000 per site. If a business trip costs $1,000 in airfare, hotel, rental car, meals, etc., then a system like this could pay for itself in just five business trips that were replaced by videoconferencing, not to mention the productivity savings for travelers.
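The PlateSpin virtualization figure above can be sanity-checked with a little arithmetic. It works out if you assume the calculator retires 40 of the 50 servers and roughly doubles the raw electricity cost to account for cooling overhead; the 2x cooling multiplier is my assumption, not something stated on the PlateSpin site:

```shell
# Reconstruct the $52,560/year savings estimate.
# Assumptions (mine): 40 servers retired, 2x multiplier for cooling.
awk 'BEGIN {
   servers = 40              # 50 physical hosts reduced to 10
   kw      = 750 / 1000      # 0.75 kW per server
   hours   = 8760            # hours in a year
   rate    = 0.10            # dollars per kWh
   electricity = servers * kw * hours * rate   # $26,280/year
   printf "%.0f\n", electricity * 2            # with cooling: 52560
}'
```

Without the cooling multiplier, the raw electricity saving is $26,280 per year, so the headline number is plausible but sensitive to assumptions such as utilization and power price.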
The bottom line is that unified communications and other technologies can have a dramatic and positive impact on your bottom line and are well worth the effort to investigate. | <urn:uuid:d9937949-137e-4ab4-b5b5-7b8a81ae471c> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2278295/data-center/it-and-global-warming.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00576-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926729 | 453 | 2.75 | 3 |
Is your computer a Simda Bot? Find out if your IP address is listed in the database of the tens of thousands of computers that make up the Simda* botnet. If your computer has been infected with Simda, it may contain malware, spyware and adware.

Disclaimer: * Simda is a “pay-per-install” malware used to distribute illicit software and different types of malware, including those capable of stealing financial credentials. The pay-per-install model allows cybercriminals to earn money by selling access to infected PCs to other criminals who then install additional programs on it. The database of infected computer IP addresses was uncovered by experts from IT companies and law-enforcement agencies from different countries, who jointly succeeded in detecting and disrupting the botnet: INTERPOL, the Cyber Defense Institute, the FBI and the Dutch National High-Tech Crime Unit (NHTCU), Kaspersky Lab, Microsoft and Trend Micro.
April 17 update: 59,615 IPs added to the database
Your IP address was found in a database of infected computers!
If your IP address is listed in the database, it does not necessarily mean that your computer is infected. In some cases several computers on the same network could use the same IP address (e.g., if they have the same Internet Service Provider), or the address can change over time, preventing a specific compromised device from being identified. Whatever your circumstances, though, you should scan your computer for malware.
Your IP address was not found in the database of infected computers.
This does not mean that your computer is safe from any risks. Malicious programs can remain on your device without your knowledge for a long time. For security reasons, we recommend that you scan your device for cyberthreats using the free Kaspersky Security Scanner. | <urn:uuid:a119d51c-f3f9-4132-a735-2d7177efc58b> | CC-MAIN-2017-04 | https://checkip.kaspersky.com/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00236-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938092 | 377 | 2.59375 | 3 |
Promises performance and cost savings, but oh, the details...
It was with a sense of déjà vu that we examined this latest new thing, grid computing.
An aging player still hoping to jump from the minors to the majors, grid computing is a new name for an idea that's been researched by computer scientists since at least the 1970s.
While grid computing is, at its heart, distributed computing by another name, it's a new concept in one important way: an ambition to tackle intraorganizational or even planetwide grids (rather than primarily departmentwide or campuswide projects). This growth in scope makes broad hardware support and shared communications standards even more critical than before.
There are also many parallels between historical distributed computing research results and grid computing.
The two most important lessons right now for early adopters are to stick with specialized computing tasks (see chart for the kinds of jobs that work well with grid computing) and to expect a maturation period of several years before this technology is more broadly applicable.
Making grid computing work for a large class of computer problems is a difficult challenge that's been researched for decades past, and will be for decades to come.
Moore's Law is what makes the idea of grid computing steadily more tantalizing. The overall processing power of entry-level CPUs continues to double and double again while prices stay constant, and given typical corporate usage patterns, most of these CPUs are idle most of the time. Moreover, there are more systems running on organizational networks than ever before. Gartner Inc. estimates that 113 million PCs were sold in 2000, adding to the already hundreds of millions of systems deployed.
Finally, these systems are interconnected by ever faster and more reliable networks.
Being able to take better advantage of the hardware and network assets an organization has is a compelling return-on-investment argument to make to a board of directors.
While grid computing isn't necessarily cheap (Sun Microsystems Inc.'s Sun Grid Engine Enterprise Edition starts at $20,000 for deployments of up to 50 CPUs, for example, and hidden costs such as power and cooling also go up), it can still be very cost-effective.
Actual deployment data is now emerging that shows grid computing can provide big performance gains for applications other than scientific computing and image rendering.
Aerospace company Pratt & Whitney, a division of United Technologies Corp., uses Platform Computing Inc.s Platform LSF grid software to handle computer-aided simulations of space propulsion systems and aircraft engines during design and development. The East Hartford, Conn., company also uses the software to allocate resources across workstations when employees run desktop applications that require additional processing power.
"Using grid technologies lowers our development costs significantly because of the ability to harness idle power to get jobs done significantly faster," said Peter Bradley, associate fellow for high-intensity computing at Pratt & Whitney. "We've been doing grid computing for so long that it's baked into our business plan. We couldn't live without it at this point."
Despite the lure of taking advantage of spare CPU and network resources, major hurdles lie ahead for grid computing. The two most important ones are the development of programming standards (specifically, standard APIs to grid-enable applications) and interoperability standards (standard grid communication and management protocols, so different grid implementations can connect).
Although a few distributed systems try to do clustering in a way invisible to standard applications, most grid computing packages require applications to be rewritten to use vendor-specific APIs.
There is some good news on this front, as the Globus Project's Globus Toolkit has emerged as the leading grid computing tool set. (See interview with Globus Project co-leader Ian Foster, Page 37.) The project is backed by IBM, Compaq Computer Corp. (now part of Hewlett-Packard Co.), Sun and Microsoft Corp., as well as supercomputer players such as Hitachi Ltd., NEC Corp., Fujitsu Ltd., Cray Inc. and Silicon Graphics Inc.
Globus Toolkit 2.0 shipped in November, and the first commercially supported version has already been released by Platform Computing (several others are in development by project backers). This effort should produce interoperable grid software from a variety of vendors.
The Globus Project is now developing its next-generation standard, OGSA (Open Grid Services Architecture), which will form the basis of Globus Toolkit 3.0, expected next year.
OGSA incorporates XML data transfer and emerging Web services standards into grid computing, something that should give grid computing and Globus Toolkit a boost.
The first technology preview release was posted May 17 and can be downloaded via www.eweek.com/links. The server is written in Java, but the download includes a client written in Microsofts C# language to demonstrate client-side interoperability.
Besides standard protocols, secure communication, strong authentication, shared data formats (XML will play a major role here), resource governance, usage costing and chargeback options, failure handling, and distributed administration are all important to grid computing's future.
However, grid computing has already proved itself a creative way to do more with less, and if companies can use it without heavy redevelopment costs, and in ways their key technology suppliers support, grid computing has an important role ahead.
West Coast Technical Director Timothy Dyck can be reached at email@example.com. | <urn:uuid:36e69037-879d-4dd3-b0cb-7dfbf412255e> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Cloud-Computing/Grid-Technical-Challenges-Daunting | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00144-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931176 | 1,103 | 2.5625 | 3 |
Copyright has a long tradition in American and English law. Today, however, cultural, technological, political and economic changes are stretching both the definition and scope of copyright in ways that can fundamentally change its meaning and destroy traditional notions associated with the term. The potential impact of these changes for governments -- both as regulators of copyright laws and purveyors of information -- are substantial.
The rise of copyright is associated with a key advance in information technology: the development of printing presses. As detailed by Lisa Jardine in Worldly Goods, a recent history of the Renaissance, the development of the printing press brought with it a rush to publish and sell heretofore rare books available only in monasteries and a few private collections. Scholars also began translating classical works into vernacular, or developing their own commentaries on classical books, which were rushed into print by continental and English publishers.
While the development of the printing press was a technological leap over hand-printed books, the costs of production were substantial, since each page required laborious typesetting, papermaking and printing. In order to protect their investments, publishers sought a monopoly to publish certain works through national governments. Governments were also interested in controlling printing within their boundaries in order to ban heretical works. In England, the control of printed works began in earnest with a charter granted in 1557 to the Stationers' Company, which kept records of books acceptable to royal censors and granted exclusive publishing rights to certain printers.
That charter lapsed in 1695, and was replaced by the Statute of Anne in 1710. The new legislation placed time limits on copyrights, removed the requirement of censorship prior to publication and for the first time recognized the rights of authors to copyright, rather than strictly upholding the economic rights of publishers. In effect, the Statute of Anne gave authors a time-limited monopoly over their intellectual property. This limited monopoly over works helped authors receive payment for their work and provided an incentive to publishers to seek out new authors, since time limits required the identification of new works to publish.
The American colonies followed the English tradition closely. Article 1 of the Constitution authorized Congress to establish copyrights "to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." The first copyright act was passed in 1790, but was targeted more at protecting American economic interests than the universal monopoly of authors, since it protected only American authors published by American publishers. It was not until 1891 -- when the economic advantages of protecting American writers and publishers in the international market outweighed the advantages of freely pirating European authors like Dickens -- did Congress enact an international copyright act.
In 1909 and again in 1976, Congress enacted copyright legislation to cover the development of new information media such as motion picture, radio, television and other technologies. The Copyright Act of 1976 brought the United States more closely in line with the Berne Convention, a copyright agreement established among European publishers in 1886, but which was resisted for a century by American media companies for economic reasons. The 1976 act also expanded authors' copyright monopolies over the length of their lives plus 50 years and codified for the first time the concept of fair use in American law.
The concept of fair use is critical both to the notion of copyright and the battles over copyright in an electronic, networked environment. In general, it seeks to balance an author's -- and by extension a publisher's -- monopoly right over a work against society's need to promote the exchange and diffusion of knowledge and foster advances in science and the arts. While fair use in American jurisprudence harks back to 1841 (Folsom v. Marsh), the 1976 Copyright Act attempted to establish general ground rules under which a work or portions of a work could be copied free of charge. It applied four factors by which courts should judge whether use of an author's work by others could be considered fair use:
1) the purpose and character of use, such as whether copying was for commercial or nonprofit educational reasons;
2) the nature of the copyrighted work, such as whether the work is freely available for sale or out of print, and whether it can be easily referenced without copying the whole body, as in the case of a painting;
3) the amount and substantiality used, which seeks to determine how much of the core value of the work was affected by copying; and
4) the effect of copying on the subsequent market value of the work.
In general, recent scholars and court decisions have given greatest weight to the fourth factor, the work's market value. However, even with greater weight given to market value, decisions about what constitutes fair use are by no means straightforward and come under greater pressure as our ability to copy and quickly transfer information becomes easier.
Recent rulings by the Supreme Court and others have not clarified the ground rules of fair use. This makes it difficult to set the boundaries of fair use in new technological environments like the Internet, where the purpose of use, availability, substantiality and market value of information are all subject to redefinition and interpretation depending on the user.
Intellectual property concepts like copyright, trademarks and fair use are national, rather than international, in scope. While certain notions have gained wide acceptance through such vehicles as the Berne Convention, other notions are unevenly distributed. For instance, fair use is a concept common to the United Kingdom and the United States, but not found in French and German copyright law. Similarly, moral rights (which allow an author to object to a work's distortion or mutilation) are common to the Berne Convention, but until recently, were specifically deleted from American laws.
These variations in meaning and intent of intellectual property protection have led to a concerted attempt, particularly on the part of the United States, to modify international agreements to protect the economic interests of American companies in an electronic environment. However, these attempts have had limited success at venues such as the World Intellectual Property Organization (WIPO), which met in January 1997. WIPO tabled many American recommendations for further international copyright restrictions.
A more fundamental boundary issue also remains. Copyright and other intellectual property agreements define how much and what type of information flows within a country's boundaries. Establishment on an international level of any nation's notion of what should be subject to copyright monopolies impinges on national sovereignty and runs into problems balancing between individual property (copyright) and public good.
ART OR TRADEMARK?
A further complication in the debate over copyrights is what constitutes "Science and useful Arts," as enumerated in the Constitution.
While it is likely that the Founding Fathers would consider many of today's works of art, literature and science appropriate for copyright protection, some critics argue that the mass marketing of objects which derive from "useful art" takes the copyright concept too far.
For instance, while the original Mickey Mouse cartoon may be considered a work of art, at what point do derivative products lose their "transformative" aspect and become mere commodities, better protected under trademark regulations, which are less international in nature?
Similarly, what happens when a copyrighted image becomes part of the cultural lexicon? If such a transformation occurs, should the artwork be considered in the public domain and no longer subject to monopoly protection? Does continued copyright protection constitute a form of cultural censorship by the copyright holder? That might be addressed in an electronic environment -- where cultural diffusion is much quicker -- by shortening the time under which copyright protections apply.
Central to discussions of copyright in the United States are libraries and the role they play as public disseminators of information. Traditionally, libraries have been viewed as locations where citizens who cannot purchase their own copies of copyrighted works can borrow or copy them, with fair use protections. Similarly, libraries have acted as repositories for hard-to-find and limited distribution items, for which the market is not large enough for general distribution. Libraries have allowed lower income individuals to share in society's general knowledge and allowed for better distribution of information across society.
The twin developments of the Internet as a low-cost means of information dissemination and of various schemes to charge for Web-based information represent competing threats to libraries. If hard-to-find information is placed on the Web for general distribution, the role of libraries as depositories is diminished. On the other hand, if schemes are developed for online charging for information access, libraries could become no more than marketing agents for copyright holders, rather than vehicles for general cultural dissemination.
The Internet muddies the waters of the market value of information. Traditionally, the value of information was the price for which a book, record, tape, painting, or other work was sold to a limited set of customers. The price included the cost of producing, distributing and marketing the physical work, any publisher's profit, and the value of the work as intellectual property (expressed as the author's royalties) divided by the projected number of copies sold. If customers purchased a book, for instance, the price would reflect all the components of value. If they copied the book on a Xerox machine, they often paid reproduction costs and the costs for republication rights (often in excess of the original price of the book).
Electronic distribution of creative works significantly lowers the production, marketing and distribution costs while greatly expanding the potential market. Establishing fair pricing models for electronically-distributed copyrighted work will require much experimentation.
At least in the United States, the market models will have to include the concept of promoting societal good, intrinsic in fair use. In fact, some more radical proponents of fair use argue that the combination of decreased production costs coupled with an expanded market might make the marginal value of any electronically produced copy almost negligible. That may necessitate finding other ways of paying authors for their intellectual property.
Copyright and intellectual property issues in the Information Age are not likely to disappear anytime soon, notwithstanding intentions of the Clinton administration or others to the contrary. Some observers advocate separate copyright, trademark, decency or other laws for cyberspace. Others maintain the sovereign rights of individual nations or of the public good in the free flow of information, while still others maintain the rights of individual and corporate property are paramount in this arena. No party is likely to achieve all their goals, and no final resolution can be expected soon.
Terrence Maxwell, Ph.D., is executive director of the New York State Forum for Information Resource Management, and editor of the Forum's magazine, "Open Forum."
Tracking Supplies | By John McCormick | Posted 2005-04-06
How a U.N. relief agency had technology on the ground within 48 hours to help rush food to the victims of Asia's tsunami.
At the heart of the World Food Programme's logistics information system is Compas, or the Commodity Movement Processing and Analysis System. At any point along its supply chain—from warehouses, to trucks, to distribution centers—the internally developed software program can give relief workers an accurate, up-to-date snapshot of its food stocks.
All food shipment data is sent from the field to Rome, where a software program takes all the information coming in from the disaster area and updates an Oracle database at headquarters, which, in turn, can then be accessed by people in-country.
While Compas monitors food from port to distribution point, an SAP R/3 system tracks food being shipped from donor countries, such as the U.S., according to quantity and destination.
Together, the systems give the WFP "a complete, global picture," says Bruni, and allow the agency to divert food from one area to another that might be in greater need.
For instance, right after the tsunami hit, the relief organization was able to spot a U.S. donation of some 5,500 tons of rice that had just arrived in Indonesia as part of the agency's normal food relief. The WFP decided to split the stock, keeping 60% of the shipment for Indonesia, but sending the remainder to Sri Lanka to help victims there.
In addition to monitoring food distribution, the World Food Programme uses the SAP system to track donations, on which the agency is totally dependent. Contributions from donors around the world are recorded in the R/3 system and matched against distribution data from Compas, allowing for a full accounting of donations and disbursements.
"If you're not able to show where funds are and how they're being used, there won't be future funds," says Tom Shirk, president of SAP's global public services unit.
In the past, the accuracy of data input into Compas wasn't always consistent. For instance, staffers sometimes keyed in partial information, such as just the first seven digits of an eight-digit shipping notice. The edit controls, Curran explains, weren't as strong as they should have been.
Over the past year, however, the WFP has built in features, such as pop-up screens, that require users to verify what they type in before the information is accepted by the system.
Another limitation of Compas, however, can't be fixed as easily. Compas is what's known as a batch system, which means it collects data from various sources and then processes it at a predetermined time, such as the end of a day. SAP, on the other hand, is capable of processing data as it is input. As a result, the two systems aren't always in sync.
While this isn't a major headache, the ICT says it could better manage and adjust shipments if it had up-to-the-minute data from Compas. Curran says the agency is now looking to replace the Compas system, possibly with SAP's supply chain software. | <urn:uuid:0c818f32-502d-44de-8015-6701c4886773> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Supply-Chain/World-Food-Programme-Wave-of-Support/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00446-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961796 | 671 | 2.53125 | 3 |
In a time of rising energy costs and burgeoning social responsibility, many manufacturers are looking for new ways to streamline processes and improve energy efficiency. For some, ZigBee is the solution.
What Is ZigBee?
ZigBee is a wireless protocol designed for short-range personal area networks based on the IEEE 802.15.4 standard. It differs considerably from competitors like Wi-Fi or Bluetooth. ZigBee is designed to be an inexpensive and simple way of transmitting relatively small amounts of information at regular intervals. | <urn:uuid:8caa876e-950c-49f2-b637-8057c0b9399d> | CC-MAIN-2017-04 | https://www.infotech.com/research/manufacturing-the-zigbee-buzz-is-growing-louder | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00173-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916223 | 104 | 3.015625 | 3 |
QSAM : Queued Sequential Access Method : A file access method for reading, writing and updating sequential data sets and partitioned data set members.
BSAM : Basic Sequential Access Method
BDAM : Basic Direct Access Method
ISAM : Indexed Sequential Access Method : A method for managing how a computer accesses records and files stored on a hard disk. While storing data sequentially, ISAM provides direct access to specific records through an index. This combination results in quick data access regardless of whether records are being accessed sequentially or randomly.
BPAM : Basic Partitioned Access Method : An access method for reading members of partitioned data sets. | <urn:uuid:f4647f0c-0fc0-48ca-b049-298e25acdba0> | CC-MAIN-2017-04 | http://ibmmainframes.com/about2678.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.784823 | 136 | 2.609375 | 3 |
Unix systems make it easy to make output that you don't want to see simply disappear. Toss output into the void. Send errors in one direction, useful output in another. Something of a bit bucket, black hole, and digital garbage disposal, /dev/null is one of the very clever things that Unix introduced into the computing world. And what a very clever and unusual one!
Nearly everyone who spends time on the Unix command line has probably heard of /dev/null and a good many of us probably use it routinely -- especially those of us who write scripts. But how much have you thought about the file's many peculiarities? Let's take a deep dive into our systems' implementation of anti-matter and see how very unusual a thing it really is.
First, the creation date of /dev/null is the date/time that your system last booted. Interestingly, this intriguing file is created anew every time you reboot your system. That's probably a very good thing as /dev/null is a file that you wouldn't ever want to lose. Here it is on one of the systems I manage:
$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Dec 12 20:00 /dev/null
Examining this long listing, you can see that /dev/null is a character device. This tells you that it processes data character by character rather than block by block. Notice also that it doesn't require execute permission to behave as it does. It's a pseudo device, not an executable.
If you use the stat command to look at the file's metadata, you'll note that its size is reported to be 0. Yes, non-executable and empty, /dev/null still manages to do some very interesting things for its users.
$ stat /dev/null
  File: `/dev/null'
  Size: 0          Blocks: 0          IO Block: 4096   character special file
Device: 11h/17d    Inode: 3720       Links: 1          Device type: 1,3
Access: (0666/crw-rw-rw-)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2015-12-12 20:00:19.359656104 -0500
Modify: 2015-12-12 20:00:19.359656104 -0500
Change: 2015-12-12 20:00:19.507656104 -0500
The file's read and write for everyone permissions make sense only when you think about how /dev/null is used. You write to /dev/null every time you use it in a command such as touch file 2> /dev/null. You read from /dev/null every time you empty an existing file using a command such as cat /dev/null > bigfile or just > bigfile. Because of the file's nature, you can't change it in any way; you can only use it. And if you do manage to remove it, a reboot puts it back. How nice.
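Emptying a file through /dev/null is easy to try safely with a scratch file:

```shell
# Write some data to a scratch file, then truncate it via /dev/null.
tmp=$(mktemp)
echo "some data" > "$tmp"

cat /dev/null > "$tmp"     # reading from the empty device empties the file
wc -c < "$tmp"             # prints 0
rm -f "$tmp"
```

The file still exists afterward; only its contents are gone, which is exactly what you want when truncating a runaway log.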
Fortunately, you can tell if an operation is successful even if your output simply disappears -- and this is key to using /dev/null in a script. For example, in the script excerpt below, we touch a file and then check to see if the file was updated or created by examining the return code -- the code that tells us whether the command just processed was successfully completed. If the operation failed, the return code will be 1 or greater. If it succeeded, the return code will always be 0.
You can check return codes on the command line with simple checks like this:
$ echo hello > /dev/null
$ echo $?
0
In a script, you're likely to do something like this:
touch $file 2> /dev/null
if [ $? != 0 ]; then
    echo File creation failed
    exit 1
fi
The use of /dev/null in the code above ensures that the person running the script sees a gentle "File creation failed" message rather than a cryptic "Permission denied" error. The benefit of crafted messages increases dramatically with the volume of output that might otherwise be filling your screen.
tar xf /var/tmp/app.tar 2> /dev/null
In the tar command shown above, we extract from a tar file, but hide possible errors from view. This is the kind of thing that many sysadmins will do in a script to reduce the output that their scripts will generate. Checking the return code will let you know if there were errors.
This kind of command makes the most sense when all you want to know is whether a command you ran completed successfully, not what output it might have produced. That's an easy thing to do and quite commonly done in scripts. Just run the command and then use the return code to determine whether it succeeded.
#!/bin/bash
cd /usr/local/apps
tar xf /var/tmp/app.tar 2> /dev/null
if [ $? != 0 ]; then
    echo extract failed
    exit 1
fi
So, /dev/null acts like a file and looks like a file (and maybe even smells like a file), but it's really a data sinkhole implemented as a file.
The most common use of /dev/null is to get rid of standard out, standard error, or both. Selecting which data source you want to squelch is easy. Get rid of standard output with commands like echo hello > /dev/null or echo hello 1> /dev/null. Get rid of errors with commands like touch file 2> /dev/null.
The second common use is to empty files that you don't want to remove, but you want to lose their content. This can be very important if a running process needs to write to that file and will lose its connection to it if you remove the file and recreate it (remember that running processes connect to files using file handles).
$ ls -l bigfile.log
-rw-r--r-- 1 oracle oinstall 10485769265 Nov 10 2013 bigfile.log
$ cat /dev/null > bigfile.log
$ ls -l bigfile.log
-rw-r--r-- 1 oracle oinstall 0 Jan 21 09:51 bigfile.log
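The point about file handles can be checked directly: truncating through /dev/null keeps the same inode, so a process that has the file open keeps writing to the same file. A quick sketch (our own demo, not from the article; it assumes a writable current directory):

```shell
echo "old log data" > demo.log
inode_before=$(ls -i demo.log | awk '{print $1}')
cat /dev/null > demo.log            # empty the file in place
inode_after=$(ls -i demo.log | awk '{print $1}')
size=$(wc -c < demo.log)
echo "$inode_before $inode_after $size"   # same inode twice, then 0
```

If you had instead removed and recreated the file, the inode would change and any process still holding the old handle would keep writing to the deleted file.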
You can't pipe data to /dev/null because the receiving end of a pipe has to be an executable that will be able to process the received data.
$ echo hello | /dev/null
-bash: /dev/null: Permission denied
Unix sysadmins often use /dev/null in cron jobs so that cron never sends them email. If they want to receive email from those scripts, they'll generally code those messages in the scripts themselves and send them using a command like mailx.
*/30 * * * * /usr/local/bin/chkSamba > /dev/null
I was surprised a couple years ago to discover that Windows has a /dev/null type feature. In a command prompt, you can run commands like echo haha > NUL and you'll get the same effect as you would redirecting data to /dev/null on a Unix system.
Also referred to as the "bit bucket," /dev/null might be one of the most novel ideas that went into Unix. It has been around since the beginning of the OS and provides some wonderfully handy options for dealing with data that you don't want to see or retain.
You can think of /dev/null as being something like anti-matter or "the void". While /dev/null is itself just a file that you'll find on every Unix system, its use is entirely different than any other file you'll run into.
General forms for /dev/null use
Send standard out to /dev/null:
command > /dev/null
Send standard error to /dev/null:
command 2> /dev/null
Send both standard out and standard error to /dev/null:
command > /dev/null 2>&1
Extremely useful and innovative, /dev/null has been one of the things about Unix that has made my work so much easier all these years. And, even after so many decades, it still retains a certain mystique.
Researchers across scientific disciplines are clamoring for exascale systems that can handle bigger, more complex models. When it comes to the climate modeling and weather forecasting business, researchers are finding promise in using new HPC architectures, such as the one used in the Green Flash cluster, to get closer to the exascale goal.
Green Flash is a specialized supercomputer designed to showcase a way to perform more detailed climate modeling. The system uses customized Tensilica-based processors, similar to those found in iPhones, and communication-minimizing algorithms that cut down on the movement of data, to model the movement of clouds around the earth at a higher resolution than was previously possible, without consuming huge amounts of electricity.
The computational and power-consumption problems that had to be overcome to get the higher resolution climate models are clearly explained in this Berkeley Science Review article. In short, scientists are eager to improve upon the current cloud climate modeling systems, which have a resolution of 200 km. A model that’s composed of a grid with data points that are 1 km to 2 km apart would be much more useful, and would result in much more accurate weather forecasts and a greater understanding of the science behind climate modeling.
However, the computational demands involved in high resolution climate modeling don’t increase linearly–they increase geometrically. Not only is the mesh in the grid much more compact, but more “time steps” are required to keep the equations from falling apart. Dr. Michael Wehner, a researcher at LBL, ran the numbers and found that the 2 km model requires 1 million times as many FLOPs as the 200 km model.
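One way to recover Wehner's million-fold figure is a sketch under our own assumption that cost scales with the mesh refinement in each of two horizontal dimensions, times a proportional increase in time steps:

```python
# Rough scaling estimate for refining a 200 km grid to 2 km
ratio = 200 // 2                 # 100x finer in each horizontal direction
spatial = ratio ** 2             # finer mesh in two horizontal dimensions
time_steps = ratio               # correspondingly more (smaller) time steps
flops_factor = spatial * time_steps
print(flops_factor)              # 1000000
```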
Translated into real world figures, such a high-resolution system would require 27 petaflops of sustained capacity, and a peak capacity of 200 petaflops, according to the BSR story. This theoretical system–bigger than anything ever actually built–would require 50 to 200 megawatts of power to run, which is comparable to the electric demands of an entire city. Its power bill would be hundreds of millions of dollars a year. Clearly, a different approach was needed.
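The "hundreds of millions of dollars" figure is easy to sanity-check (assuming, as a rough guess of ours, electricity at $0.10 per kilowatt-hour and the upper end of the quoted power range):

```python
megawatts = 200                      # upper end of the 50-200 MW range
hours_per_year = 24 * 365            # 8,760 hours
dollars_per_mwh = 100                # $0.10/kWh, our assumed rate
annual_bill = megawatts * hours_per_year * dollars_per_mwh
print(f"${annual_bill:,} per year")  # $175,200,000 per year
```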
Instead of building a general purpose supercomputer, Wehner and others with LBL, UC Berkeley’s Electrical Engineering and Computer Science Department, and the RAMP (Research Accelerator for Multiprocessors) project decided to try a customized system, where hardware and software are designed together.
The design came together with Green Flash, which combines energy-efficient Tensilica processors with communication-minimizing algorithms. Currently, Green Flash, which has been called “the iPod supercomputer,” is running 4 km models. The combination is predicted to yield the capability to run the 2 km cloud model on a system with only 4 megawatts of power, which is 12 to 40 times smaller than a conventional supercomputer would need to run the same model.
This approach does have its downsides, however. Because Green Flash was designed specifically for climate modeling workloads, it won’t work with other types of HPC applications, such as analyzing genes or financial transactions. (In fact, it doesn’t even work with all the different climate modeling systems that are in use.) It’s not nearly as flexible as other supercomputers in the LBL stable, such as Hopper, BSR notes in its story.
However, when one considers the energy wall imposed by the generic approach, custom-built supercomputers designed to solve specific HPC problems may well be part of the solution to the exascale equation.
Can the Net revive the vote?
- By William Matthews
- Sep 04, 2000
In Arizona in the spring of 1996, encountering a voting Democrat was about
as likely as finding a snowball in the desert. Of 843,000 registered Democrats,
only 12,800 voted in the 1996 Democratic primary. In a state where the governor,
both senators and five out of six House members are Republicans — and registered
Republicans outnumber Democrats nearly 2-1 — it's easy to see why Democrats
might get discouraged.
State Democratic leaders anxious to reinvigorate their party concluded
that they had to make voting "more convenient and easier," said party executive
director Cortland Coleman. So for this year's primary, they turned to the Internet.
Arizona's Democratic primary in March became the nation's first binding
election conducted in part via the Internet. During a four-day period, voters
cast ballots from computers at home, work, libraries, schools, community
centers, Indian reservations and polling places.
Voting among Democrats shot up by a factor of more than six — about 86,000
cast ballots. Of those, 36,000 opted to vote from computers via the Internet.
Another 32,000 sent in absentee ballots by mail, and 18,000 Democrats traveled
to polling places to vote the traditional way.
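The numbers quoted above are internally consistent, as a quick check shows:

```python
# Arizona 2000 Democratic primary turnout, by voting method
online, absentee, in_person = 36_000, 32_000, 18_000
total = online + absentee + in_person     # ballots cast in 2000
factor = total / 12_800                   # vs. the 1996 primary turnout
print(total, round(factor, 1))            # 86000 6.7 -- "more than six"
```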
For Coleman, the election was a rousing success, and he credits Internet
voting. The convenience of online voting holds a promise of reversing the
decades-long decline in voting in the United States. But those most familiar
with the mechanics of elections advise election officials to go slowly.
Issues ranging from ballot security to voter privacy to accurate vote tabulation
still must be resolved. This won't happen soon.
Proponents say online voting would permit people to vote from their
homes or workplaces when polling places might be inconvenient. Patients
could vote from hospitals, military personnel could vote from overseas locations,
and business travelers and others could vote from distant locations as long
as they have access to the Internet. And it might attract 18-to-25-year-olds,
who tend to be Internet-savvy but politically indifferent.
Bill Taylor, a senior vice president at election.com, the Internet company
that managed the online part of the Arizona Democratic primary, said Internet
voting could give rise to a sort of New Age, laid-back democracy. After
the election, he and colleagues recounted stories about families "who voted
at home together," a couple who invited friends in "to share a cup of coffee
and cast their vote online," and the president of the Navajo Nation, who
voted online from Window Rock, the Navajo capital.
But to many election experts, the Arizona primary highlighted Internet voting's
pitfalls as much as its potential.
"People had major problems with how the Democratic primary was run,"
said Penelope Bonsall of the Federal Election Commission's Office of Election Administration.
Troubles ranged from the inability of most Macintosh computers and some
older PCs to link to election.com's voting Web site, to lack of privacy
assurances for voters, to a legal challenge to the election on grounds that
Internet voting discriminated against those who lack Internet access.
The FEC does not oppose voting via the Internet, Bonsall said. The agency
"is completely neutral" on the technology but is a strict proponent of
standards that ensure the integrity of the voting process. And on that front,
Internet voting raises many questions.
Election integrity means that the voting process must be tamper-proof
so that votes cannot be changed after they have been cast; must have some
way of ensuring voters' identities and that they are not using another person's
name or identification number to vote; must allow ballots to be cast privately
so voters cannot be coerced; must ensure that voters are able to vote only
once; and must have some way to reliably recount votes if the results are challenged.
The FEC, which has developed standards for other voting systems, attempted
to develop standards for online voting, "but we came to the very stark conclusion
that there's no way to ensure privacy and security," Bonsall said.
A study by the California Internet Voting Task Force came to a similar
conclusion earlier this year. "Technological threats to the security, integrity
and secrecy of Internet ballots are significant," the task force reported.
Dangers range from sophisticated Trojan horse software attacks that could
secretly change or divert votes cast from home or office computers to viruses
that could shut down computer voting systems to power failures.
"Additional technical innovations are necessary" before voting from
home or office computers can be considered, the task force concluded. But
current technology may be good enough to permit Internet voting from polling places.
Thurston County, Wash., tested Internet voting in February with a nonbinding
Internet election held at the same time as the state's Feb. 29 primary.
The county, which includes the state capital, Olympia, is
comfortable with information technology. "There is a rising public expectation
that we will be voting on the Internet in binding elections in upcoming
years," said election manager Kimberley Wyman.
In 1993, the county began to experiment with voting by mail to enhance convenience
and increase participation. It has become the county's most popular method
of voting, Wyman said. "In the past three elections, over 75 percent of
the ballots came through the mail," she said.
Acceptance of voting from home by mail makes the idea of voting from home
via the Internet almost a natural next step.
For the February test, the county issued voters 10-digit personal identification
numbers. Once on the Web site, they had to supply names, addresses and county
voter identification numbers as further proof of identity. In a genuine
election, additional security steps would probably be taken, Wyman said.
Voters could cast ballots via the Internet during an 18-day period that
ended on election day. In addition to voting from home, from work or from
other "remote computers," voters could also cast ballots online at polling
places on election day.
During the test, 3,638 people voted over the Internet. According to Wyman,
91.5 percent said they would vote online again if that was an option. Ninety-three
percent said they felt comfortable with the accuracy and security of the
results. And 66 percent judged online voting easier than voting at the polls
or mailing in ballots.
The public may have loved it, but "most election officials are really
very skeptical" about Internet voting, Wyman said. "There are some really
big hurdles that must be overcome" before Internet voting can be permitted
on a wide scale in binding public elections, she said. Election fraud and
the digital divide are the main concerns.
Imagine the outcome of a presidential election secretly altered by a
foreign government. Deborah Phillips, chairwoman of the Virginia-based Voting
Integrity Project, warns that Internet voting poses vast new opportunities
to corrupt elections. Perhaps the most frightening is the possibility of interference by hostile foreign governments.
"The Internet itself is not a secure environment, nor is it an "American'
environment," she wrote in a recent report titled "Is Internet Voting Safe?"
Half of the Internet's users are outside America, and among them are hostile
foreign governments already using it for terrorist and military purposes.
For them, "developing the ability to interfere with or manipulate the outcomes
of American elections would almost certainly become an attractive goal," she wrote.
Denial-of-service attacks and viruses could crash the election system
and prevent voters from voting. "But the real fear is the type of hacking
that could result in deliberately manipulated election outcomes," she wrote.
That could include Trojan horse programs that lie undetected in voting systems
and silently change votes as they are cast. Unlike credit card fraud, where
victims ultimately discover the crime upon receiving a bill, "an e-voter
would likely be unaware his vote was stolen," Phillips wrote.
But Internet voting fraud doesn't have to be high-tech at all.
Permitting voters to cast ballots from computers at home, work or other
places creates opportunities for vote coercion. Violation of ballot secrecy
and pressure to cast votes in a particular way could come from family members,
employers, union officials or anyone, Phillips wrote.
And time-honored methods of election fraud — such as duplicate registrations,
registering unqualified voters and voting using identifications and registrations
of those who have moved away or died — could be incorporated into Internet voting.
"All those possibilities are there and are real," said Paul Craft, manager
of voter systems at the Florida Division of Elections. "But the fact that
risk exists does not mean Internet voting is impossible; it simply means
you have to address the risk." And companies in the Internet voting business
are addressing the risk, said Craft, who is studying the potential of Internet
voting in Florida.
To Coleman, the Arizona Democratic Party director, the threats posed
by Internet voting are being overstated. He said a higher degree of authentication
was required to cast a vote via the Internet than to vote in a polling location.
"We're committed to making the election process more secure than it
has ever been before," said election.com's Taylor. Indeed, during the election,
a national magazine tested security by hiring a computer expert to try to
hack into the election's Web site, Taylor said. The hacking attempt failed.
The Internet voting company VoteHere.net, which conducted the Thurston
County test, said its voting system blocked 101 attempts to vote more than
once. An audit trail created by VoteHere.net showed that attempts were
made to guess voter ID numbers and the special 10-digit numbers issued for
the election. But the company reported that none of the attempts succeeded.
VoteHere.net also used 1,024-bit encryption to protect the integrity
and secrecy of ballots. In a binding election, officials probably would
insist on stricter security measures, said county election manager Wyman.
Ideally, they would link voters' handwritten signatures or a biometric identifier
such as a voice print to a digital certificate that only voters can use
to digitally sign their ballots, she said.
On election day in Thurston County, reliability turned out to be more
troublesome than security. Two of the computers set up at polling places
crashed. Although service was restored, the outages served as a reminder
that there may be "problems that cannot be controlled by staff and of the
need for contingency plans for voting," Wyman said.
The Digital Divide
If Internet voting can be made foolproof, election officials will still
have to confront the digital divide. According to the Voting Integrity Project's
Phillips, Internet voting discriminates against those who lack computers
and Internet access. The group has filed a lawsuit charging that the Arizona
primary violated the Voting Rights Act of 1965.
Phillips backs the discrimination claim with these statistics: 19 percent
of African Americans and 16 percent of Hispanics have Internet access, compared
with 38 percent of Caucasians. And Caucasians are gaining Internet access
at a faster rate than minorities, she said.
But those are national statistics that do not apply well to Arizona Democrats,
said Northern Arizona University political science professor Fred Solop.
In a survey he conducted, Solop found that in Arizona, the electronic haves
and have-nots are more divided along lines of age and education. Older people
and those with less education are not as likely to have Internet access
as racial minorities.
That may become a significant finding in the lawsuit Phillips has filed
because the Voting Rights Act of 1965 addresses racial discrimination but
not discrimination by age or education, Solop said.
Election.com's Taylor dismisses the claim that Internet voting discriminates
against anyone. "We haven't restricted anyone from voting, we've just enabled
more people to vote."
And the company says Internet voting increased minority participation in
the primary. Compared with 1992 and 1996, voter turnout this spring increased
by more than 600 percent. In two predominantly Hispanic legislative districts,
turnout increased more than 800 percent and 1,000 percent respectively, the company said.
According to Florida's Craft, the digital divide is an issue election
administrators must take seriously. But that should not preclude Internet
voting, he said.
One version of Internet voting that especially interests Florida officials
involves voting via the Internet from polling places. The benefit is that
voters could vote from any polling place, making it easier for working people
to vote and easier for election officials to oversee the process.
In the long term, Florida might also be interested in limited voting
from home to make it easier for sick, elderly and disabled people to vote.
Thurston County sees more general support for online voting as an option,
Wyman said. And as people conduct more business online, the demand for Internet
voting is likely to grow.
But ultimate acceptance depends on convincing election officials that Internet
voting is at least as safe from tampering as traditional methods of voting.
"The Internet is in its relative infancy. It is anyone's guess what the
next-generation election system will look like," Wyman said. | <urn:uuid:8c9350a5-cee8-4a76-9527-55275445d080> | CC-MAIN-2017-04 | https://fcw.com/articles/2000/09/04/can-the-net-revive-the-vote.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956264 | 2,790 | 2.625 | 3 |
Article by Tim Henneway, support manager for mSeven Software
Many web services or computer systems today require strong passwords. So why are strong passwords so important? Is it really worth all the trouble?
You might also wonder what makes a password strong? Is it just the length of the password and why all the emphasis on mixed case and special characters. This article will discuss the why, what and how of strong passwords.
Cryptography is King
Cryptography is the hiding of information via the encryption and decryption of data. To keep your data safe, whether on a web site or on your device, it needs to be stored in encrypted form using a strong algorithm like Blowfish or AES. Encryption has become so strong that today's hackers must use computers to attack security systems.
Brute Force Attack
Even though your data is strongly encrypted, it may still be vulnerable to a brute force attack if the hacker has access to your database. A brute force attack is where a hacker uses software to try a series of common passwords or all possible passwords in an attempt to guess your password and gain access to your data.
The best protection against this type of attack is a strong password because, as you will see, it will take too long for the hacker to figure out your password. Using strong encryption and a strong password will provide a very high level of security for your data.
How Long is Strong?
A strong password is not just a long string; its strength is also determined by the number of different characters that can appear in each position of the password. For example, it takes less than a second for a fast computer to run all the permutations of a 4-digit PIN (e.g., 2578) containing only digits.
By simply making the 4-character password out of any lowercase letters, uppercase letters, numbers and symbols (e.g., Bc1@), it now takes 25 seconds to generate all permutations -- a major improvement!
Time to generate all permutations of a 4-character password:

- Digits only (0-9): 1 second
- All ASCII characters: 25 seconds
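The improvement comes from the size of the search space. A quick calculation (assuming, as our own simplification, the 95 printable ASCII characters for the full set):

```python
digits_only = 10 ** 4             # 10,000 possible 4-digit PINs
full_ascii = 95 ** 4              # 81,450,625 combinations
print(full_ascii // digits_only)  # 8145 -- thousands of times more work
```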
Now let’s see what impact password length has on password strength. In 2010, a top password recovery service in the US reported that their state-of-the-art computing systems can try about 20 million passwords a second.
This means that only hackers with state-of-the-art resources should be able to achieve this rate; the average hacker will probably take at least twice as long.
Time to crack a password*:

- 6 characters: 11 hours
- 7 characters: 6 weeks
- 8 characters: 5 months
- 9 characters: 10 years
*assumes each character can be any ASCII character.
As you can see, with a password as small as nine characters you can make it very hard for a hacker to crack your database.
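The reason length matters so much is that each added character multiplies the attacker's work by the alphabet size. A sketch (again assuming 95 printable ASCII characters, our simplification):

```python
alphabet = 95
six_chars = alphabet ** 6
nine_chars = alphabet ** 9
# Three extra characters multiply the work by 95^3
print(nine_chars // six_chars)   # 857375
```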
Many will hear that a 9-character password can be strong and then select an easy-to-remember 9-character word to use as a password. This can be a big mistake! Hackers know people will do this, so they create and share dictionaries of common passwords and will even mine your personal data for keywords they can use to reduce the 10-year crack time to mere hours.
For example, let’s say you use you the word “mountain” as your password. Since the word is in the dictionary, a hacker using the dictionary as a set of passwords will crack your data rather quickly.
The trick is to create a password that is memorable and yet long enough while using a wide array of characters.
Pumping Up Your Password
Here are some ideas on how to create strong passwords. Pick an 8-character word that is easy to remember and make it strong. For our example we will use the word “mountain.” You will note that this word is all lowercase characters, which is not very secure. Let’s pump it up!
- Change at least one letter to uppercase (you don't want to pick the first letter, as that would be more common and easy to guess). The revised password is now “mounTain.”
- Add at least one number to it. Let’s replace the “o” with an “0”, making the revised password “m0unTain.”
- Finally, include a symbol. Let’s replace the “a” with the symbol “@” making our new password “m0unT@in.”
We now have a much stronger password using a combination of uppercase, lowercase, numbers and symbols. While an 8-character password is a good length, you will recall from the chart above that we need a 9-character minimum password.
Let’s make it more secure by adding another character. “m0unT@in” could become “m0unT@ins”, or even better “m0unT@in$”, where we have swapped the “s” for a “$”. Many people also put an “!” at the end of any password or a “+” at the beginning and end of all their passwords.
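The recipe above can be captured in a tiny function. This is purely illustrative (the function name and substitution map are our own, not from the article), but it reproduces the example transformation:

```python
def pump_up(word):
    # swap common letters for look-alike digits and symbols
    subs = {"o": "0", "a": "@", "s": "$"}
    word = "".join(subs.get(c, c) for c in word)
    # uppercase a middle letter rather than the (more guessable) first one
    mid = len(word) // 2
    return word[:mid] + word[mid].upper() + word[mid + 1:]

print(pump_up("mountains"))  # m0unT@in$
```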
The general idea is to choose a word or phrase that you will be able to remember and a simple algorithm for converting it to a strong password. Even the best encryption systems in the world are not going to protect your data if you are using weak passwords and a hacker gains physical access to your mobile device.
To keep your data safe, it is important to understand what makes a strong password and create a password that is easy for you to remember and type into the login screen of your password manager.
Passwords that are about 9 characters in length and include lowercase letters, uppercase letters, numbers and symbols are considered the best defense to the hacker’s brute force attack. | <urn:uuid:7384082e-5e32-4321-b9b9-b9ae54fc05cf> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/20890-Pump-Up-Your-Pw0rd.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00337-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923907 | 1,242 | 3.390625 | 3 |
Definition: An efficient implementation of a priority queue. The linear hash function monotonically maps keys to buckets, and each bucket is a heap.
See also bucket sort.
Note: This is a bucket sort where the buckets are organized as heaps. The linear hash function maps increasing keys into nondecreasing values, that is, key1 > key2 implies h(key1) is greater than or equal to h(key2). It is not clear what happens if a bucket gets full.
Let R be the ratio between the key range and the range of the hash function. If R is so large there is only one bucket, we have a regular heap. If R is one, it is a direct mapped array. This data structure was proposed by Chris L. Kuszmaul <firstname.lastname@example.org> in the news group comp.theory 13 January 1999.
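A minimal sketch of the idea (our own reading of the proposal; the bucket sizing and method names are assumptions): a monotone linear hash assigns each key to a bucket, and each bucket is kept as a binary heap.

```python
import heapq

class HashHeap:
    def __init__(self, key_range, num_buckets):
        self.width = key_range / num_buckets          # ratio R sets bucket width
        self.buckets = [[] for _ in range(num_buckets)]
        self.lowest = 0                               # leftmost possibly non-empty bucket

    def insert(self, key):
        # monotone hash: key1 > key2 implies bucket(key1) >= bucket(key2)
        i = min(int(key / self.width), len(self.buckets) - 1)
        heapq.heappush(self.buckets[i], key)
        self.lowest = min(self.lowest, i)

    def extract_min(self):
        while not self.buckets[self.lowest]:          # skip drained buckets
            self.lowest += 1
        return heapq.heappop(self.buckets[self.lowest])

hh = HashHeap(key_range=16, num_buckets=4)
for k in (7, 3, 15, 1):
    hh.insert(k)
print([hh.extract_min() for _ in range(4)])           # [1, 3, 7, 15]
```

With one bucket this degenerates to an ordinary heap; with R = 1 it behaves like a direct-mapped array, matching the two extremes described above.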
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "hash heap", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/hashheap.html
We are all familiar with the term "firewall," but many of us don't know where it originated. It was originally a term for the part of an automobile that separated the passenger compartment from the engine compartment. In networking, the term is used metaphorically for the way internal networks are separated from the hazards of the outside world. With the help of firewalls, networks are divided into separate physical segments, so that potential damage can be contained rather than allowed to spread across subnets.
This works much the same way as the original firewalls, which were built to stop a fire from spreading.
In the world of network security, a firewall is a piece of hardware or software that serves as a barrier between a trusted internal network and an untrusted external network, i.e., the Internet. Practically speaking, a firewall is a set of related programs designed to implement an access control policy between two or more networks.
A firewall works as a paired mechanism that serves two main functions:
- One part of the mechanism permits traffic.
- The other part blocks traffic.
This set of related programs is positioned at a network gateway server, and its purpose is to protect the private network's resources from users on other networks. Several mechanisms can provide basic firewall services:
- Static packet filtering
- Circuit-level firewalls
- Proxy servers
- Application gateways
Whether you emphasize blocking traffic or allowing it depends on your situation; modern firewall designs try to balance both functions. Before deploying a particular firewall solution, it is important to define an access control policy. Once the firewall is in place, it mediates access between your network and others. Firewall designs range from a single firewall protecting a small network to multiple firewalls protecting the many segments of a large network.
For example, if you host an application for use over the network, a firewall can manage public access to the private network’s resources. Firewalls can monitor all attempts to enter a private network, and some can even raise alarms on unauthorized entry.
Firewalls filter traffic based on packet attributes such as the source address and port number. Traffic can also be filtered by protocol (FTP, HTTP, or Telnet). On the basis of these attributes, traffic is either allowed or rejected.
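The first-match logic of a static packet filter can be sketched in a few lines. The rules below (addresses, ports, and the default-deny policy) are invented for illustration and are not taken from any particular product:

```python
# Illustrative only: a toy static packet filter with first-match semantics.
# Rule fields and the default-deny policy are assumptions for this sketch.

RULES = [
    # (action, protocol, source_prefix, dest_port)
    ("allow", "tcp", "192.168.1.", 80),   # internal hosts may reach HTTP
    ("deny",  "tcp", "",           23),   # block Telnet from anywhere
    ("allow", "tcp", "",           443),  # HTTPS from anywhere
]

def filter_packet(protocol, source_ip, dest_port):
    """Return the action of the first matching rule; deny by default."""
    for action, proto, prefix, port in RULES:
        if protocol == proto and source_ip.startswith(prefix) and dest_port == port:
            return action
    return "deny"

print(filter_packet("tcp", "192.168.1.7", 80))  # allow
print(filter_packet("tcp", "10.0.0.5", 23))     # deny
```

Real packet filters match on more fields (destination address, flags, interface), but the allow-or-reject decision per packet follows the same shape.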
To put it more bluntly: the reason crime dropped so much in the 1990s was that many of the people who would have been committing the crimes simply weren’t there. In other words, federal support of abortion lowered the crime rate.
Wikipedia on the Donohue and Levitt Study:
Donohue and Levitt use statistics to point to the fact that males aged 18 to 24 are most likely to commit crimes. Data indicate that crime started to decline in 1992. Donohue and Levitt suggest that the absence of unwanted aborted children, following legalization in 1973, led to a reduction in crime 18 years later, starting in 1992 and dropping sharply in 1995. These would have been the peak crime-committing years of the unborn children.
The authors argue that states that legalized abortion earlier and more widely should have seen the earliest reductions in crime. Donohue and Levitt’s study indicates that this is indeed what happened: Alaska, California, Hawaii, New York, and Washington, which had legalized abortion before Roe v. Wade, experienced steeper drops in crime. Further, states with a high abortion rate have experienced a greater reduction in crime, when corrected for factors like average income. Finally, studies in Canada and Australia have established a correlation between legalized abortion and crime reduction.
So that raises the questions:
- What percentage of Americans do you think know this?
- How do you think a responsible society should use such information? | <urn:uuid:e5e3376e-ef2f-4039-a6d0-5cceca749377> | CC-MAIN-2017-04 | https://danielmiessler.com/blog/how-many-of-you-know-that-the-drop-in-crime-in-the-90s-happened-because-of-roe-vs-wade/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00329-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965071 | 297 | 2.890625 | 3 |
Tech Glossary – A to B
Adware is free software that is supported by advertisements. Common adware programs are toolbars that sit on your desktop or work in conjunction with your Web browser. Most adware is safe to use, but some can serve as spyware, gathering information about you from your hard drive, the Web sites you visit, or your keystrokes.
An application, or application program, is a software program that runs on your computer. Web browsers, e-mail programs, word processors, games, and utilities are all applications. The word “application” is used because each program has a specific application for the user.
Bitrate, as the name implies, describes the rate at which bits are transferred from one location to another. In other words, it measures how much data is transmitted in a given amount of time. Bitrate is commonly measured in bits per second (bps), kilobits per second (Kbps), or megabits per second (Mbps).
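As a rough worked example of the definition above: because bitrates are measured in bits while file sizes are usually given in bytes (1 byte = 8 bits), a small conversion is enough to estimate transfer times at a given bitrate.

```python
# Rough, illustrative arithmetic only: estimating transfer time from bitrate.

def download_seconds(file_size_mb, bitrate_mbps):
    """Seconds to transfer file_size_mb megabytes at bitrate_mbps megabits/s."""
    return file_size_mb * 8 / bitrate_mbps

# A 700 MB file (one CD's worth) over a 10 Mbps link:
print(round(download_seconds(700, 10)))  # 560 seconds, a bit over 9 minutes
```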
This wireless technology enables communication between Bluetooth-compatible devices. It is used for short-range connections between desktop and laptop computers, PDAs (like the Palm Pilot or Handspring Visor), digital cameras, scanners, cellular phones, and printers.
Blu-ray is an optical disc format, like CD and DVD. It was developed for recording and playing back high-definition (HD) video and for storing large amounts of data. While a CD can hold 700 MB of data and a basic DVD can hold 4.7 GB, a single Blu-ray disc can hold up to 25 GB. Even a double-sided, dual-layer DVD (which is not common) can only hold 17 GB. Dual-layer Blu-ray discs will be able to store 50 GB of data, the equivalent of four hours of HDTV.
Talkin' 'bout Information Revolutions
By CIOinsight | Posted 03-06-2006
Carlota Perez knows a thing or two about bubbles. In fact, the Venezuelan scholar has studied all five technological revolutions of the past two centuries, and she has found a series of similarities among them. According to her, we've still got a ways to go until the information revolution runs its course.
Perez is a research fellow at the University of Sussex and the University of Cambridge, and her book, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (Edward Elgar, 2002), is widely studied by the academic and business communities.

CIO INSIGHT: What are the five technological revolutions you have identified?
PEREZ: Before our current information and telecommunications technology revolution that has made us all wonder how anybody survived without mobile phones and e-mail, we had the mass-production revolution, which created the "American Way of Life." It was based on the suburban home full of electrical appliances for domestic work and home entertainment, and of course there was a car so that you could drive to the nearby supermarket to stock the fridge and the freezer. It was the age of the automobile, cheap oil and synthetic materials, with universal electricity, telephones, airways and a highway network. That one was preceded, in the 1870s, by the age of steel and heavy engineering (including civil, naval, electrical and chemical). It set up powerful cross-continental systems of railways, steamships and world telegraph. It was the first global revolution, and we can draw many parallels between it and our own times.
The second revolution, which took place in the 1830s, was the age of steam and iron railways. And the first was the original Industrial Revolution, which introduced machines moved by water power and the construction of a canal network to move supplies and products from river to river and to the sea for export.
Each of these revolutions had a new central transport network for goods, people, energy and/or information, and each was made possible by the creativity of a different group of engineers: from the mechanical engineers who, in a sense, created our earlier revolutions, through to chemical and electrical engineers, and now the microelectronics and information-technology innovators of our present day. But perhaps the most useful feature, in terms of being able to use the past to say something about possible futures, is the regularity in the sequence of periods of each of these great surges of development.
What are the phases of these technological surges?
Basically there are two main periods-installation and deployment-of about two or three decades, each with a financial crash in the middle. The NASDAQ collapse divided the current surge in two halves, just as the crash of 1929 cut the mass-production surge, and the "canal panic" and the "railway panic" (both in England) did during the other surges.
The installation period begins with what I call a Big Bang-a new universe of opportunities. In our case that would have been Intel's microprocessor in 1971 and, in the previous period, it was the Model T Ford. The early decades of installation are the "irruption phase" with all the new technologies happening in an old and declining economy. The later decades are the frenzy phase, when the new paradigm rejuvenates all the existing industries and flourishes fully. The problem is that the euphoria tends to create a financial bubble that inevitably ends in a collapse.
The deployment period is the Golden Age. That is when all the wealth creating potential of each revolution can really spread across the economy and benefit a wider cross-section of society. The deployment period ends when the new technologies and their applications reach maturity, innovation opportunities are exhausted, and productivity increases dwindle. That creates the conditions for the next revolution.
Where Are We Now
What phase are we in today?
Actually, we are smack in the middle. Between the two periods there is what I have called the turning point, which is a time of institutional recomposition. It has lasted anywhere from two to 13 years (as was the case in the last one, in the 1930s). This is the period in which perverse trends have to be reversed: The income polarization and the madness of the bubble years have to be overcome, but also the decision-making power has to move (or be moved) from the hands of financial capital to those of production capital. Today, long-term investment is almost impossible, because the quarterly pressure for profits coming from the financial markets ties the hands of CEOs and every other top manager. Under those conditions cost-cutting can become more important than innovation, and lots of human capital can be wasted.
The most recent turning point saw the establishment of both the national welfare state and the main international organizations. That created conditions for the growth of mass markets in each country and around the world. This global revolution is going to need an equally imaginative set of institutions if it is to flourish in a peaceful and prosperous world.

What have the global impacts of the IT revolution been?
Good and bad. Globalization is basically the result of the fantastic power and low cost of digital telecommunications. Because of that power, global corporations can be huge, and they can relate to an extremely large network of suppliers and allies across the world. Finance can function 24 hours a day, around the globe, for the same reason. Outsourcing, offshoring and every other phenomenon that Tom Friedman describes-in terms of the incorporation of new regions of the world to full participation in world markets-has come about on the wave of worldwide installation of the IT revolution. The knowledge society, with its potential for raising the quality of life of a great many people, is based on IT.
However, that same IT and that very globalization have also led to intense polarization. The rich have gotten richer and the poor poorer, within each country and across the globe. While many countries in Asia have risen to development, Latin America has been marginalized, and Africa and the Middle East practically excluded. The great migratory pressures of those who would risk their lives for a chance to work, and to live a better life, have the same origin as much of the violence and resentment that angry leaders so successfully foment, and that threatens the advanced world. One group fights with hope; the other with hate. And naturally, the latter can also take advantage of IT technologies.
What are you seeing now in terms of the flow of global capital?
There is obviously a very high concentration of investment and employment creation in China and India, to the detriment not only of the other developing countries but also of the developed ones. If it weren't for the reinvestment of the Asian surpluses in the U.S., the American economy would be in very serious trouble. In my view, jobless growth is unacceptable for societies that have known a high quality of life for all. Something has to be done in order to induce a significant wave of investment in the advanced countries.
But none of this will be done by the markets alone. A major process of consensus would have to be undertaken to shift the present conditions through intelligent policies and a shared vision. Right now a lot of capital is being used in derivative mountains, infinite hedge fund loops and housing bubbles. All that could be creating jobs in America, in Europe and across the excluded regions of the world.
Is that similar or dissimilar to previous revolutions?
Well, the income polarization has occurred with each of the major bubbles, and the need to expand markets and to create opportunities for investment has appeared each time at this point in the surge. The Golden Age is essentially the flourishing of the full range of possibilities under more favorable institutional conditions, often with the intervention of the state reining in finance and redistributing income.
Each Revolution is Unique
But each revolution is unique both in its opportunities and its dangers, in the progress it brings and the problems it creates. That is the social challenge that faces us in each surge. Institutional innovations are needed now to accompany and foster the technological and organizational ones.
Thoughts on what the next revolution may focus on?
I expect the next one to be structured around biotechnology, bioelectronics, nanotechnology and new materials, which in the next decades are all likely to take significant strides and have many isolated successes within the logic of the ICT paradigm. That is how it has always been. No revolution emerges from thin air; the future components must have been in gestation for some time before irrupting as a constellation.
Decades before the microprocessor, the world had electronic tubes, and then transistors; it had radar and analog control instruments; it had telecommunications by phone, telex and radio; it had television as well as cash registers, electric typewriters and, especially, mainframe computers. They were all disjointed and they developed under the logic of the mass-production paradigm. If you look at the history of semiconductors, you realize that the original diffusion of transistors was to make radios and record players portable. The microprocessor breakthrough makes information processing powerful and cheap, and the devices using microprocessors can become smaller and smaller; the computer chip enables the coming together of control, processing and transmission of information, and that synthesis gives birth to a whole new engineering logic.
A similar analysis can be made of the internal combustion engine, oil as fuel and petrochemical raw material, the early automobiles, made one by one as luxury vehicles in machine shops, and so on.
We do not know the nature of the breakthrough that will usher in the next revolution. By definition, a radical leap is unpredictable. What we can expect from historical experience, with a high level of likelihood, is that the present wave of opportunities and transformations will reach maturity in a few decades, and will be transformed in the next surge. It occurs to me, however, just as a stab in the dark, that within a bio-nano-materials revolution, perhaps bioelectronics could do for semiconductor information technology what the shift to steel did to iron railways: a quantum jump in power and possibilities. But I am sure your readers can hazard their own guesses about that. | <urn:uuid:07810a51-4d31-427e-a497-ec529c2b10b3> | CC-MAIN-2017-04 | http://www.cioinsight.com/print/c/a/Technology/Talkin-bout-Information-Revolutions | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00266-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958321 | 2,103 | 2.53125 | 3 |
A new form of biometric security using brain waves to authenticate users has been developed by researchers from the University of California, Berkeley.
Rather than using a password to gain access, a user would submit a “passthought,” generating a unique signal from brainwaves that may or may not prove difficult for a hacker to duplicate, Phys.org reported. The recent commercialization of external electroencephalogram (EEG) devices -- the researchers used a Neurosky MindSet, which connects wirelessly via Bluetooth and costs about $100 -- makes this technology plausible.
The research, conducted by John Chuang, Hamilton Nguyen and Charles Wang, included measuring brainwaves while subjects performed various mental tasks. Sometimes all subjects were asked to perform the same task, such as visualizing a bouncing ball; other times the subjects were asked to visualize a mental image or perform a mental task that only they knew of.
In all the tests, researchers were able to differentiate between users. In fact, researchers found there was little difference between the tests where users were all asked to visualize the same thing and those in which users chose their own secret images.
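A minimal sketch of how brainwave templates might distinguish users is below. This is illustrative only, not the Berkeley team's actual method; it assumes each EEG recording has already been reduced to a small feature vector (for example, band powers), with authentication as a distance check against the user's enrolled template.

```python
# Illustrative nearest-centroid "passthought" check; all numbers hypothetical.
import math

def centroid(samples):
    """Average several feature vectors into one enrolled template."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(template, attempt, threshold=1.0):
    """Accept the attempt if it lies close enough to the enrolled template."""
    return distance(template, attempt) <= threshold

# Enroll a user from three (hypothetical) feature vectors:
enrollment = [[2.0, 5.1, 1.2], [2.2, 4.9, 1.0], [1.9, 5.0, 1.1]]
template = centroid(enrollment)

print(authenticate(template, [2.1, 5.0, 1.1]))  # True: close to the template
print(authenticate(template, [6.0, 1.0, 4.0]))  # False: a different "mind"
```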
Chuang, Nguyen and Wang say their technology is secure, and could someday be used to replace traditional passwords. | <urn:uuid:a2e9ca20-4df7-4b9b-98ae-83e63b886161> | CC-MAIN-2017-04 | http://www.govtech.com/security/Will-Passthoughts-Replace-Passwords.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00266-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965341 | 259 | 2.953125 | 3 |
As part of the Federal Government's Big Data Extravaganza, the Defense Advanced Research Projects Agency (DARPA) detailed some of its programs the agency says could benefit from the $200 million research program announced today by President Obama. The Big Data Research and Development Initiative is expected to bolster the tools and techniques needed to access, organize, and glean discoveries from huge volumes of digital data.
DARPA's big data programs include:
- The Anomaly Detection at Multiple Scales (ADAMS) program looks at the problem of anomaly-detection and characterization in massive data sets. In this context, anomalies in data are intended to cue collection of additional, actionable information in a wide variety of real-world contexts. The initial ADAMS application domain is insider-threat detection, in which anomalous actions by an individual are detected against a background of routine network activity.
- The Cyber-Insider Threat (CINDER) program seeks to develop novel approaches to detect activities consistent with cyber espionage in military computer networks. As a means to expose hidden operations, CINDER will apply various models of adversary missions to "normal" activity on internal networks. CINDER also aims to increase the accuracy, rate and speed with which cyber threats are detected.
- The Insight program addresses key shortfalls in current intelligence, surveillance and reconnaissance systems. Automation and integrated human-machine reasoning enable operators to analyze greater numbers of potential threats ahead of time-sensitive situations. The Insight program aims to develop a resource-management system to automatically identify threat networks and irregular warfare operations through the analysis of information from imaging and non-imaging sensors and other sources.
- The Machine Reading program seeks to realize artificial intelligence applications by developing learning systems that process natural text and insert the resulting semantic representation into a knowledge base, rather than relying on today's expensive and time-consuming knowledge-representation processes, which require experts and associated knowledge engineers to handcraft information.
- The Mind's Eye program seeks to develop a capability for "visual intelligence" in machines. Whereas traditional study of machine vision has made progress in recognizing a wide range of objects and their properties-what might be thought of as the nouns in the description of a scene-Mind's Eye seeks to add the perceptual and cognitive underpinnings needed for recognizing and reasoning about the verbs in those scenes. Together, these technologies could enable a more complete visual narrative.
- The Mission-oriented Resilient Clouds program aims to address security challenges inherent in cloud computing by developing technologies to detect, diagnose and respond to attacks, effectively building a "community health system" for the cloud. The program also aims to develop technologies to enable cloud applications and infrastructure to continue functioning while under attack. The loss of individual hosts and tasks within the cloud ensemble would be allowable as long as overall mission effectiveness was preserved.
- The Programming Computation on Encrypted Data (PROCEED) research effort seeks to overcome a major challenge for information security in cloud-computing environments by developing practical methods and associated modern programming languages for computation on data that remains encrypted the entire time it is in use. By manipulating encrypted data without first decrypting it, adversaries would have a more difficult time intercepting data.
- The Video and Image Retrieval and Analysis Tool (VIRAT) program aims to develop a system to provide military imagery analysts with the capability to exploit the vast amount of overhead video content being collected. If successful, VIRAT will enable analysts to establish alerts for activities and events of interest as they occur. VIRAT also seeks to develop tools that would enable analysts to rapidly retrieve, with high precision and recall, video content from extremely large video libraries.
- The XDATA program seeks to develop computational techniques and software tools for analyzing large volumes of semi-structured and unstructured data. Central challenges to be addressed include scalable algorithms for processing imperfect data in distributed data stores and effective human-computer interaction tools that are rapidly customizable to facilitate visual reasoning for diverse missions. The program envisions open source software toolkits for flexible software development that enable processing of large volumes of data for use in targeted defense applications.
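Several of these programs, ADAMS and CINDER in particular, hinge on flagging departures from routine activity. A deliberately simplistic sketch of that idea (not DARPA's actual algorithms) might score a user's daily activity counts against their own baseline:

```python
# Illustrative z-score anomaly detection over activity counts; the data and
# threshold are invented for this sketch.
import statistics

def anomalies(daily_counts, threshold=2.5):
    """Return (index, count) pairs whose z-score exceeds the threshold."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    return [(i, c) for i, c in enumerate(daily_counts)
            if abs(c - mean) / stdev > threshold]

# Files accessed per day by one user; day 6 is a sudden spike:
counts = [12, 15, 11, 14, 13, 12, 90, 13, 12, 14]
print(anomalies(counts))  # [(6, 90)]
```

Production systems model far richer behavior (logins, destinations, timing), but the core cue is the same: deviation from an established routine.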
Recent developments in nanotechnology have fueled the demand for nanochemicals, or nanoparticles. Nanochemicals are a class of compounds whose particle size ranges between 1 and 100 nanometers and which exhibit special physical and chemical characteristics. Nanochemicals come in different types: ceramic materials, metallic nanochemicals, and polymer-based nanomaterials. The two most widely used nanoforms, however, are zinc oxide and titanium dioxide. These chemicals are increasingly used in semiconductors, cosmetics, foods, drugs, and textiles: as a colorant in the food industry, for example, and as an anti-bacterial agent in garments.
The last decade has seen active participation from different stakeholders, which has driven the development of this industry. Companies have increased their investments in nanotechnology, and significant investment has come from government organizations in the form of subsidies. Of late, however, the growth of nanochemicals has slowed due to stringent regulations.
The EPA, the regulatory body, has recently expressed concerns over the use of industrial nanomaterials and has asked companies to report the nanochemicals used across multiple industries. Environmentalists worry that possibly toxic compositions may pose health risks for consumers in the food and cosmetics industries. Health experts believe that nanochemicals, being extremely small, may penetrate easily through human cells and skin, and some nanochemicals are believed to cause precancerous lesions. Due to such health risks, the use of nanochemicals in food and consumer products has been restricted to a certain extent.
North America continues to spearhead the nanotechnology revolution and is a major market for nanochemicals, followed by Europe and the Asia-Pacific region. The development of nanoparticles or nanochemicals is highly dependent on regulation due to the potential health and environmental risks involved in these compounds.
This study aims to estimate the global market size of nanochemicals for 2014 and to project the expected demand/market size by 2019. This market research study provides a detailed qualitative and quantitative analysis of the global nanochemicals market. It also analyzes the key market drivers, restraints, and key issues in the market. The market is segmented into important regions, such as North America, Europe, Asia-Pacific and the Rest of the World. We have used various secondary sources, such as encyclopedias, directories, and databases, to identify and collect useful information for this study. The primary sources, experts from related industries and suppliers, have been interviewed to obtain and verify critical information as well as to assess the future prospects of this market.
Competitive scenarios of the major players in the nanochemicals market have been discussed in detail. We have also profiled leading players of this industry, such as Advanced Nano Products, Inc, Akzo Nobel, BASF, DuPont Agriculture, Dow Agro Sciences, Graphene NanoChem Plc, Sea Spray Aerosol, Inc, and Nano Chemical Systems.
1.1. ANALYST INSIGHTS
1.2. MARKET DEFINITIONS
1.3. MARKET SEGMENTATION & ASPECTS COVERED
2. RESEARCH METHODOLOGY
2.1. ARRIVING AT GLOBAL NANOCHEMICALS MARKET SIZE
2.2. MARKET SIZE ESTIMATION
2.3. TOP DOWN APPROACH
2.4. BOTTOM UP APPROACH
2.5. DEMAND (CONSUMPTION) SIDE ANALYSIS
3. EXECUTIVE SUMMARY
4. MARKET OVERVIEW
4.2. VALUE-CHAIN ANALYSIS
4.3. MARKET DYNAMICS
4.3.1. MARKET DRIVERS
5. GLOBAL NANOCHEMICALS MARKET, BY TYPE
5.2. METALLIC NANOCHEMICALS
5.3. CERAMIC NANOCHEMICALS
5.4. POLYMER NANOCHEMICALS
6. GLOBAL NANOCHEMICALS MARKET, BY APPLICATION
6.2. SEMICONDUCTORS & ELECTRONICS
7. GLOBAL NANOCHEMICALS MARKET, BY GEOGRAPHY
7.1. NORTH AMERICA
7.4. REST OF WORLD
8. GLOBAL NANOCHEMICALS MARKET, BY COMPANY
8.1. ADVANCED NANO PRODUCTS, INC
8.2. AKZO NOBEL
8.4. DUPONT AGRICULTURE
8.5. DOW AGRO SCIENCES
8.6. GRAPHENE NANOCHEM PLC
8.7. NANO CHEMICAL SYSTEMS
8.8. SEA SPRAY AEROSOL, INC.
Last week, Jim Rogers, the director of operations at ORNL’s National Center for Computational Sciences, disclosed the name of the new Cray XT6 machine that will be number crunching in support of climate research. The appropriate moniker? Gaea, or Mother Earth. The beautiful mural artwork covering the front seven cabinets was also revealed, depicting sun-capped, snow-covered mountains.
Rogers gave the following statement by way of Frank Munger’s Atomic City Underground blog.
“The name of the machine is Gaea, Mother Earth, from Greek mythology. Gaea was the Protogenos (primeval divinity) of earth, one of the primal elements who first emerged at the dawn of creation, along with air, sea and sky. This name was selected from among a large list of contributions from the staff that were building the machine. The name is reflective of a primary mission of the machine, the assessment of climate variability and change on the Earth Systems.”
Prior to this latest development, it was announced that the machine had passed its five acceptance tests, putting Gaea ahead of schedule for its October 1, 2010, release date. Several users have already been given limited access.
The new Cray XT6 supercomputer represents a five-fold increase in computational capability over NOAA's current best machine, Rogers said, and more upgrades are planned as part of the lab's $215 million agreement with the National Oceanic and Atmospheric Administration.
Gaea currently has a peak capability of 260 teraflops, but with those planned upgrades will reach the petascale level. The system is located near Jaguar, another Cray supercomputer, currently rated the world’s fastest. | <urn:uuid:02287c04-bb8d-4e05-97c9-e4f57183501e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/10/08/ornl_climate_systems_big_reveal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956089 | 360 | 2.84375 | 3 |
Researchers from the University of Melbourne in Australia have concluded that cloud computing is not always the greenest option at the storage, processing, or software level. Their research examined the issue for both public and private clouds, comparing the energy consumed against that used for the same tasks on a local system.
The authors argued that most studies seeking an answer to a similar question about the “green” nature of the cloud have only looked at the datacenter’s energy consumption and have thus failed to include the important issue of energy use during data transfer. They suggest that the transport of data to and from datacenters, particularly since public cloud center might be a continent away, uses quite a bit more energy overall than simply storing data locally.
PhysOrg.com reported that, “for cloud processing services (in which a server such as Amazon Elastic Compute Cloud processes large computational tasks only and smaller tasks are processed on the user’s computer) the researchers again found that the cloud alternative can use lower consumption only under certain conditions.” This is because “the large number of router hops required on the public Internet greatly increases the energy consumption in transport, and private cloud processing requires significantly fewer routers.”
The leader of the research project, Rod Tucker, told PhysOrg.com that when one is using the cloud for data storage (for instance on Amazon’s Simple Storage platform) cloud uses less energy than typical computing, but only when that service is used infrequently and not in a high-performance context since data transport energy use is minimal.
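The study's core trade-off can be expressed as a toy model: cloud storage saves on local storage energy but pays a transport cost on every access. All numbers below are invented for illustration and are not taken from Baliga et al.

```python
# Toy energy model for cloud vs. local storage; all constants hypothetical.

def cloud_energy_joules(file_gb, accesses, transport_j_per_gb=50.0,
                        storage_j=5.0):
    """Datacenter storage energy plus transport energy for each access."""
    return storage_j + accesses * file_gb * transport_j_per_gb

def local_energy_joules(file_gb, accesses, storage_j=20.0):
    """Local disk storage energy; access costs are negligible by comparison."""
    return storage_j

# With few accesses the cloud wins; frequent access tips the balance:
print(cloud_energy_joules(1.0, 0) < local_energy_joules(1.0, 0))    # True
print(cloud_energy_joules(1.0, 10) < local_energy_joules(1.0, 10))  # False
```

However crude, the model captures the researchers' point: whether the cloud is greener depends on how often data crosses the network, not just on datacenter efficiency.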
While the study focused on garden-variety processors and systems common for desktop users, this research might lend some insight to larger enterprise centers that rely on the cloud for some or all of their business operations. While many enterprise users might look at their bottom line before analyzing their overall carbon footprint, a study at the large enterprise scale that takes data transfer into account to offer a "green" score for a company might be a good idea.
Making the process of data transport more energy efficient needs to become a priority, but luckily there are incentives to do so. While the end user might not be bearing much of the cost of inefficient data transfer consumptions, it is in the best interest of cloud providers, who must remain competitive via pricing models, to constantly improve this critical aspect of their datacenters.
The research from the University of Melbourne will be published soon from Jayant Baliga and colleagues. The paper is called “Green Cloud Computing: Balancing Energy in Processing, Storage and Transport” and will be published in the journal Proceedings of the IEEE. | <urn:uuid:101108c3-1ab8-4e3e-ab1a-4184140e780a> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/10/11/lost_in_transport_why_cloud_isn_t_always_the_greenest_option/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943669 | 542 | 2.90625 | 3 |
Number of the week: 34% use primitive and easily brute-forced passwords to protect their data
05 Sep 2012
A brute-forced or stolen password can give access to a user's every last detail – starting with personal photos and finishing with credit card details. That's why it's highly advisable to use complex passwords to access online services. It is also important not to use the same password for different services, for fear of losing not only important data but also your "online" personality, for example, via accounts on social networking sites. A survey carried out for Kaspersky Lab by O+K Research in 25 countries worldwide shows that the risks of simple passwords are not fully understood by users – 34% of respondents are practically unprotected.
According to the survey, insecure passwords which are easily brute-forced without any special techniques are used far too often. Examples include a date of birth (17%), a middle name (10%) or a pet’s name (9%). This sort of information may be known not only by your close friends or relatives. A creative fraudster can easily find it on the Internet, for example, on social networking sites. Another 8% of those surveyed use a simple combination of figures such as ‘123456’ or similar, and 5% of respondents simply use the word “password”. This type of “protection”, like other passwords based on easy-to-guess words, can be easily and quickly brute-forced.
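The raw search-space arithmetic shows why such passwords are "easily and quickly brute-forced." The guessing rate below is an arbitrary illustrative figure, not a benchmark of any real cracking tool.

```python
# Worst-case brute-force time for the password styles mentioned above.
# GUESSES_PER_SECOND is a purely hypothetical offline cracking rate.

GUESSES_PER_SECOND = 1e9

def seconds_to_exhaust(alphabet_size, length):
    """Worst-case time to try every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# A date of birth is at most 8 digits: only 10**8 candidates.
dob = seconds_to_exhaust(10, 8)

# A random 10-character password over ~94 printable ASCII characters.
strong = seconds_to_exhaust(94, 10)

print(f"8-digit date of birth: exhausted in {dob:.1f} s")
print(f"random 10-char password: {strong / (3600 * 24 * 365):.0f} years")
```

The date-of-birth space falls in a fraction of a second at this rate, while the random password holds out for centuries, which is the gap between the 17% of respondents above and users of genuinely complex passwords.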
Another problem which is often overlooked is the repeated use of the same password. In theory, this avoids the danger of forgetting passwords. In practice, though, if this universal password is compromised, fraudsters have an easy path into several accounts, services and programs. According to O+K Research, 9% of users rely on one password for all accounts and 37.1% reuse several passwords. Given that one third of the survey participants (36%) use five or more password-protected services and applications, we can imagine the size of the potential security breach.
As mentioned above, where you store your passwords is very important when it comes to data security. Most users (71%) prefer to memorize them, which is not bad in itself but often results in simple passwords or one password for several accounts. 46% admitted that they have forgotten a vital password at least once. 12% just write the password on a piece of paper and leave it near their computer, while 23% use an ordinary paper notebook for this purpose. Special programs designed to store passwords are used by just 7%, even though such solutions offer user data the best protection. For example, the Password Manager integrated in Kaspersky PURE 2.0 makes it possible to generate new brute-force-resistant passwords and automatically enter them at the user's request. As a result, the user gets a reliable and, even more importantly, unique password that is inaccessible for unauthorized use.
The full report on the O+K Research survey results is available at http://www.kaspersky.com/downloads/pdf/kaspersky-lab_ok-consumer-survey-report_eng_final.pdf | <urn:uuid:e1c3fc4b-a590-41d9-b59c-99ab2cc31f9a> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2012/Number_of_the_week_34_use_primitive_and_easily_brute_forced_passwords_to_protect_their_data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00440-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916793 | 652 | 2.875 | 3 |
The human rhinovirus, otherwise known as the common cold, is usually a fairly minor inconvenience for its human hosts. But the symptoms are far more serious for those suffering from asthma and chronic obstructive pulmonary disease (COPD). Today roughly 70 percent of heightened asthma symptoms are linked to the common cold. Over 50 percent of those affected eventually require hospitalization. It is also responsible for sending more than 35 percent of COPD sufferers to hospitals each year.
To help the at-risk population, Biota Holdings Ltd., a Melbourne-based company, is developing an antiviral drug. This has led researchers from St. Vincent's Institute of Medical Research (SVI) and the University of Melbourne to investigate how the drug works against the rhinovirus.
In a press release describing the work, scientists are using supercomputing simulations as a basis for revealing the drug’s efficacy. Professor Michael Parker, who leads the research team, explained the basic mechanics behind the new compound. “Our recently published work with Biota shows that the drug binds to the shell that surrounds the virus, called the capsid. But that work doesn’t explain in precise detail how the drug and other similar acting compounds work,” he said.
Part of the team's study involved creating a 3D model of the rhinovirus. It was simulated using the recently deployed Avoca supercomputer at the University of Melbourne. An IBM Blue Gene/Q machine, the system has 65,536 1.6GHz PowerPC cores and 65 terabytes of memory. At 838 peak teraflops (690 teraflops Linpack), Avoca is the fastest computer in Australia and ranks 31st on the June 2012 TOP500 list.
Parker said that supercomputers like Avoca have enabled scientists to study how drugs function at a molecular level. The new system can simulate the entire rhinovirus in useful time frames, which in turn can accelerate the discovery and development of new treatments. Prior to Avoca's installation, researchers could only run simulations on portions of the virus.
If the research proves successful, it could reduce the lethality of the common cold for at-risk populations while reducing associated medical costs. Dr. John Wagner, the manager at IBM’s Research Collaboratory for Life Sciences in Melbourne, noted that simulations like this are going to drive life science research moving forward. “This is the way we do biology in the 21st Century,” he said. | <urn:uuid:905d83b2-55d0-46f1-8084-7732bd03e6d8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/07/18/aussie_supercomputer_simulates_common_cold_s_susceptibility_to_new_drug/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941943 | 521 | 3.421875 | 3 |
In a traditional network configuration, a firewall serves as the default gateway for hosts connecting to one of its secured subnets. A transparent firewall, by contrast, acts as a "stealth firewall": it is actually a Layer 2 firewall. To implement this, the security equipment is connected to the same network on both its internal and external ports, with a separate VLAN for each interface.
Now let’s discuss the characteristics of transparent firewall mode:
- Transparent firewall mode supports an outside interface and an inside interface.
- The best thing about transparent firewall mode is that it can run in both the single and multiple context modes.
- Instead of routing table lookups, MAC address table lookups are performed.
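The MAC-lookup behavior in the last bullet can be sketched in a few lines. This is an illustrative model only: interface names and MAC addresses are invented, and a real transparent firewall also applies its security policy before forwarding.

```python
# Minimal sketch of Layer 2 forwarding: decide where a frame goes by
# looking up the destination MAC in a learned table, not a routing table.

mac_table = {}  # learned mapping: MAC address -> interface

def handle_frame(src_mac, dst_mac, in_iface):
    """Learn the source location, then forward by MAC lookup
    (flood to all other ports if the destination is unknown)."""
    mac_table[src_mac] = in_iface  # remember where src_mac lives
    out = mac_table.get(dst_mac)
    return out if out is not None else "flood"

# Host A (inside) talks first, so the device learns its location...
handle_frame("aa:aa", "bb:bb", in_iface="inside")
# ...and the reply from host B is forwarded straight back to "inside".
print(handle_frame("bb:bb", "aa:aa", in_iface="outside"))  # -> inside
```

Because the decision is keyed on MAC addresses rather than IP routes, the device can sit invisibly inside a single subnet, which is exactly what makes the firewall "transparent."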
Originally published May 5, 2005
The focus of this article is to present and discuss the basics of Nanotechnology, in which we cover some of the different fields of Nanotechnology and just what can be explored in this tiny but vast world. We also speculate a bit on where each of the areas is heading, and what impact that area might have on computing technology. Like it or not, Nanotechnology is the next BIG THING.
Different Types of Nanotechnology
There are many different types of Nanotechnology available. In general they can be classified into the following categories: carbon nanotube, optical (or particle-wave based), crystalline, DNA, and quantum (see "The Age of Spiritual Machines" by Ray Kurzweil). Each of these categories has a significant impact on the study of Nanotechnology. You see, Nanotechnology is not just technology. It is the study of atoms, and the world as we know it. It is the ability to look deep into what and how basic elements are created and how they can be manipulated to benefit mankind.
Of course, like any other “technology” or societal advancement, it can be turned into destructive forces. This advancement is no different. Like it or not, Nanotechnology is the next BIG THING. So what’s the big deal about these areas? How do they differ, why should I care? To explain the answers to these questions will take a life-time, and I hope that through this article many useful thoughts and understandable ideas will emerge. For now, let’s take a look at some of the basics from each of these categories.
What is a carbon nanotube? An oversimplified analogy might be that of a vacuum tube; however, the "tube" is made of carbon atoms instead of glass. The carbon nanotube can contain other substances, any substance that is desired which doesn't interact with carbon. Since carbon in this form is highly stable and non-reactive, it becomes the logical choice for a container. Once the nanotube is filled with its payload, it can be sealed in a number of different ways—but in general, a breakable bond is set up between one part of the tube and another (between multiple carbon elements). Of course the tubes can be strung together to create "wire-like" properties. The surprising thing is they are not limited to moving just electricity! They can also contain any number of other substances and move those along the tube as well. See this excellent white paper on carbon nanotubes and their uses.
Suppose you had a radioactive chemical, like a cancer-fighting drug, to deliver to a tumor deep inside the body. Does it really make sense to treat the entire body with ever-increasing doses of this drug? Or is it better to carry the chemical directly to the tumor and release it into the "bad cells"? This is one example and proposed use of carbon nanotubes. The tubes can be filled with the chemotherapy drugs and sealed. The drugs can be tainted to "attract," but not bond to, specific cell structures—i.e., those that are cancerous—throughout the body.
These nanotubes can then be dispersed in very small doses through an injection into the bloodstream. They travel the body and attach to the cancerous cells. Then, with either ultraviolet light, light of another wavelength, or sometimes a sound wave, the carbon nanotube's bond is broken (it responds to a certain frequency), and the medicine is delivered on the spot. This is the typical medical scenario found in the Nanotechnology writings of bio and drug companies. Today they are working on perfecting the delivery and release mechanisms. It will be a few years before the FDA can approve this type of technology.
There are other uses for carbon nanotubes, which include housing liquid structures that change color when activated by electron beams. Thus, acting like super-small vacuum tubes, they can represent on/off, and various levels in between. When attached to a computing circuit, they can change their chemical representation by shifting electrons to higher or lower orbitals. In other words, here is where we start to see items like the "auto-frosting glass" that is available for public purchase today. The carbon nanotubes form the containment structure and don't conduct electricity. But because the nanotubes have a hexagonal shape, and have "holes", the contained chemicals within can be shifted into different materials through the application of electricity.
Optical or Particle Wave Based Nanotechnology
This is a much different method of computation. It involves the notion of wave-particle duality: matter can act as both a wave and a particle at the same time. The best or simplest explanation here is the notion of light, or light waves if you will. Think back to high school physics and chemistry (OUCH!). Remember the photon? It's both a particle and a wave at the same time. Ever hear of Schrödinger's Cat? That's right, the thought experiment in which the cat is both alive and dead at the same time, thanks to quantum mechanics. Nanotechnology is exploring the use of particle-wave exchanges for multiple computation abilities; in other words, exchanging electrons (without wires) between computational devices by creating standing waves from one device, passing the waves through walls, through space, and around the world. This amounts to instant communication, no wires. They're also investigating wireless power for the same reasons.
This particular technology is developing at a slower rate and is more difficult to produce because it relies on super-conductivity, the ability to manipulate individual electrons on atoms, and pass waves without interference or alteration. This is just in terms of computational ability. When we look at the use of Nanotechnology applied to optical devices, the range is much broader and more successful. There are coatings for glass that make it virtually indestructible, Nanotechnology coatings for fiber cables making them more bendable, as well as more resistant to loss of signal, and revolutionary new optical filtering capacities and light emitting capacities.
The use of optical Nanotechnology in bioinformatics to stain cells and watch DNA computation is incredible compared to the old “dying the cell” methods. There are tremendous advances for light-emitting nano compounds that help us dive deeper and allow us to better understand our tiny world.
Nanocrystals are structures whose components are attached in a lattice or crystalline shape (like ice, for instance, which is a crystalline form of water). These structures, because of their lattice shape, are extremely strong. For example, a 3-inch-thick slab of ice can be much stronger than a 3-inch-thick piece of redwood. Nanocrystals are not yet used as computational devices, but in the future this may change. We may actually come to have something like a crystalline computing device that reacts to sound waves and changes colors without any visible power source.
"Metal nanocrystals might be incorporated into car bumpers, making the parts stronger, or into aluminum, making it more wear resistant. Metal nanocrystals might be used to produce bearings that last longer than their conventional counterparts, new types of sensors and components for computers and electronic hardware.
Nanocrystals of various metals have been shown to be 100 percent, 200 percent and even as much as 300 percent harder than the same materials in bulk form. Because wear resistance often is dictated by the hardness of a metal, parts made from nanocrystals might last significantly longer than conventional parts."
Another use for nanocrystals is to house antibacterial material without drug interaction and without chemical bonding at the site. For instance, Smith & Nephew produces Nanotechnology crystalline structures with silver that help to eliminate bacterial infection. Smith & Nephew also markets an antimicrobial dressing covered with nanocrystalline silver (a patented technology of NUCRYST Pharmaceuticals). The nanocrystalline coating of silver rapidly kills a broad spectrum of bacteria in as little as 30 minutes.
OK, so here's the deal: we're currently inundated with incredible Nanotechnology—why aren't we seeing all the benefits today? We are, we just don't know it. To paraphrase an advertising line from 3M: Nanotechnology doesn't make the products, it makes the products better.
This is of particular interest to me. It holds incredible promise, yet at the same time—incredible risks. DNA Nanotechnology, or DNA computing, is the ability of man to understand, map, manipulate, replicate and alter strands of DNA within molecules. Of course, each cell contains many DNA strands. The cell, with its RNA and enzymes, can perform on its own like a mini-computer. As I wrote in one of my recent Nanotechnology articles (DNA Computing), it has been done already, by DARPA in 1999. They managed to search terabytes of information in under 10 seconds in a DNA solution within a beaker. I would suggest reading about DNA computing devices.
Again, What Does this Mean to Me?
Well that depends. If you’re in the world of fabrication of electronic devices then it means a lot to you today (or it should). If you’re the CEO or an executive in one of these organizations, then I would strongly urge your company to invest in such technology, or you’ll be left behind the 8-ball—and by the way, once you’re behind the 8-ball on this one, there’s NO catching up! The only way to catch-up would be to re-invent your company from the ground up.
If you're the average technology user, it simply means you should begin to be concerned about your personal privacy. First you'll experience using different and better monitors, smaller computers, and faster devices. Pretty soon you won't be able to recognize just what is man-made and what is made of natural chemicals. The lines are blurred; it's too late to worry about this. Eventually we'll need Nanotechnology labels not just on food, but on clothing and products we purchase. Within 10 years, we'll need Nanotechnology warnings on services or intangibles we buy, especially if the goods are delivered electronically (like software or upgrades to software). Without warnings, we won't know exactly what will be affected by Nanotechnology delivery.
Quantum Nanotechnology is the sum of all things based on quantum mechanics, in other words—all of the above types of Nanotechnology rolled together. It is mankind’s ability to control the atom and the atomic elements, even creating our own atomic elements that are not found in nature.
I would like to thank you for taking the time to read through this exploration of Nanotechnology. I hope you enjoyed this brief journey through different areas of Nanotechnology. Nanotechnology is an important part of life today and therefore must be discussed in public forums. If you aren't yet aware of Nanotechnology or haven't researched its far-reaching impacts, I suggest you begin that discovery process. In future articles I will explore additional uses and applications of each of these Nanotechnology areas. In addition, I will continue to speculate on just what this all means for the computing society and the devices on which we currently depend.
Recent articles by Dan Linstedt | <urn:uuid:e7c102c9-048a-4591-bd30-96e3440244f6> | CC-MAIN-2017-04 | http://www.b-eye-network.com/view/836 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937203 | 2,377 | 3.3125 | 3 |
Network Your Files in a Snap with NFS
Part One: With NFS, you can provide your Unix users with a centralized file server for everything from simple storage to networked home directories. Here's what you need to get started.
NFS, Network File System, is the original file-sharing method among UNIX-based computers. Originally developed by Sun, NFS is still widely used, since it is a (relatively) simple and effective means to provide a centralized file server.
We will be implementing an NFS server step by step in this article, exploring methods for simply sharing a directory, and also briefly talking about making users' home directories live on the server. A second installment will deal with the intricacies of NFS options, auto-mounting, and the differences between operating systems' NFS implementations.
Older NFS versions, which most people use for the sake of interoperability, have practically zero security. The server will believe what it's told about the UID/GID of files, so it should be protected from the Internet. Additionally, it should be limited to only serving files for clients that you designate. The easiest way to limit NFS mounts is with tcpwrappers, configurable via /etc/hosts.allow. Portmap, lockd, rquotad, statd, and mountd should all be limited to networks or specific IP addresses of trusted NFS clients.
Since Linux's NFS configuration options are quite similar to those of other Unix variants, we will be assuming a Linux client and server for this article.
First things first: We should begin by starting the necessary NFS services. On the server side, most distributions have a startup script designed to accomplish this. Running something like /etc/init.d/nfs start will fire up the NFS server properly on most distributions.
Using rpcinfo -p should return a bit of information about which RPC (define) services are running. At a minimum, for NFS to function, you should see: portmap, status, mountd, nfs, and nlockmgr. Any missing items will require that you figure out why they are missing before proceeding. Note that these names are based on the most current nfs-utils package, currently nfs-utils-1.0.6-22. Your specific Linux distribution's documentation should provide more information about how to make sure everything is started at boot time.
Now on to the fun part: sharing directories. The file /etc/exports is used to specify which file systems should be exported to which clients. This is basically a listing of:
"directory machine1(options) machine2(options)…"
Examples should make it clear:
To share /usr read-only to two IP addresses:
/usr 192.168.0.1(ro) 192.168.0.2(ro)
To share /usr/local read-write to one machine, and read-only to everyone else:
/usr/local 192.168.0.5(rw) *(ro)
There are many ways to share directories, and many configurable options. Client lists can be netgroups, IP addresses, a single host, wildcards, or IP networks. Refer to "man exports" for more exhaustive details. The server also needs to be told to reread the configuration when it changes. This can be accomplished by sending -HUP to the nfs daemon, or by running exportfs -ra.
If everything was done properly, this server should be ready to serve NFS. The command showmount -e will list the exported file systems. If an RPC error was returned, that generally means a necessary service is not running. | <urn:uuid:d3cb7820-6e02-45e2-a11c-d20179eaab26> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/netos/article.php/3490921/Network-Your-Files-in-a-Snap-with-NFS.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00210-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909033 | 771 | 3.109375 | 3 |
PHP is a scripting language that is deployed on countless web servers and used in many web frameworks.
“PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML.”
In 2007, at least 20 million websites had PHP deployed. The exponential growth of PHP came from the development of LAMP/WAMP stacks.
These stand for Linux/Apache/MySQL/PHP and Windows/Apache/MySQL/PHP respectively.
These ensure that deployment of PHP applications is simple enough for the most novice web developer. Many of you may have heard of WordPress, Drupal, or Joomla.
These are common web applications that are written entirely in PHP. Many sites, such as YouTube, Facebook, Digg, and Wikipedia, run PHP as their main scripting language.
PHP also powers cybercrime. A large majority of publicly disclosed vulnerabilities are PHP-related. In 2009, 5733 PHP Remote File Inclusion vulnerabilities were disclosed.
In situations where exploiting PHP RFI is possible, most likely SQL Injection and Cross Site Scripting are also possible. This is because these exploits share the same root cause: a lack of input validation.
What is a PHP Remote File Injection (RFI) attack? A PHP RFI attack occurs when there is unvalidated input to a PHP script.
This allows PHP code to be injected by a malicious person. For example, a typical PHP URL would look something like this:

www.example.com/errors.php?error=errorsfile.php
How can this be abused to cause PHP RFI? The errors.php script takes a file as input, which in the example is errorsfile.php.
If the site is vulnerable and does not have input validation, any file could be used as input, even files from remote servers. When the vulnerable server requests www.example.com/errors.php?error=http://evilhaxor.com/remoteshell.php, the remoteshell.php file will be processed by the web server.
Attackers can do quite a bit with remotely included PHP files, including opening a shell, enumerating users or programs, and defacing the website.
Basically, whatever user the web server is running as, an attacker can run commands as that user.
How do we fix PHP RFI?
There are several variables within the PHP configuration that can be set to provide a more secure environment for PHP code to run in. These are register_globals, allow_url_fopen, and allow_url_include.
In an ideal world, we would be able to set all of these variables to Off in the php.ini file. However, in most cases this will break applications that depend on these functions.
A thorough review of their usage should be done before setting any of them to OFF. Another solution is to implement secure coding practices in PHP, and to implement input validation.
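One common input-validation pattern is a whitelist: map the untrusted parameter onto a fixed set of known files and refuse everything else. The sketch below is written in Python for brevity; a PHP version would use the same structure with an array of allowed pages. The page names are placeholders, not part of any real application.

```python
# Whitelist-based include resolution: the raw request parameter is never
# used to build a path, so a remote URL like the remoteshell.php example
# above simply has no entry and is rejected.

ALLOWED_PAGES = {
    "errors": "pages/errors.php",
    "home": "pages/home.php",
}

def resolve_include(user_value):
    """Map untrusted input to a known file, or refuse outright."""
    try:
        return ALLOWED_PAGES[user_value]
    except KeyError:
        raise ValueError(f"refusing to include unknown page: {user_value!r}")

print(resolve_include("errors"))  # -> pages/errors.php
```

The key design choice is that validation selects from known-good values rather than trying to filter out known-bad ones, which is far harder to get right against a creative attacker.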
Detailing input validation methods and ways to securely code PHP is too complex for this article.
However, you can discover more by reading the OWASP Top 10 entries for PHP RFI and the Web Application Security Consortium article on PHP RFI. Both will help you learn about this threat and take precautions for your own network.
Cross-posted from State of Security | <urn:uuid:05917c07-01eb-4257-97ce-6cb4769cf193> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/4340--Understanding-PHP-RFI-Vulnerabilities-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00267-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910636 | 678 | 3.359375 | 3 |
Fiber patch cables, also known as fiber jumpers or fiber patch cords, are used to connect fiber optic equipment to fiber optic cross-connects, interconnects, and information outlets. Protected by a thick outer layer, they are used to connect optical transmitters, receivers and terminal boxes.
Fiber patch cable Features:
1. Low insertion loss and High Return Loss;
2. Fully compliant with standards of IEC and YD-T826/1996;
3. Temperature stability: Operating temperature: -20 to +75°C;
4. High durability, more than 500 times mating;
5. Individually packaged with a detailed information label.
Applications of fiber optic patch cords:
Data processing networks;
Wide Area Networks (WANs);
Industrial, mechanical and military.
General Types Of Fiber Patch Cable
Generally there are two types of fiber optic patch cords: single mode fiber optic patch cords and multimode fiber optic patch cords. The word mode refers to the transmission mode of the light in the fiber optic cable core. Single-mode fiber is generally yellow, with a blue connector and a long transmission distance. Multi-mode fiber is generally orange or grey, with a cream or black connector and a short transmission distance.
Single mode fiber optic patch cord has a small core and only one pathway of light. With only a single wavelength of light passing through its core, single mode realigns the light toward the center of the core instead of simply bouncing it off the edge of the core as with multimode. Single mode fiber patch cable is typically used in long-haul network connections spread out over extended areas–longer than a few miles. For example, telecommunications use it for connections between switching offices. Single mode cable features a 9-micron glass core.
Single-mode fiber cable is primarily used for applications involving extensive distances. Multimode fiber, however, is the cable of choice for most common local fiber systems as the devices for multimode are far cheaper.
Multimode fiber optic patch cord has a large-diameter core that is much larger than the wavelength of light transmitted, and therefore has multiple pathways of light: several wavelengths of light are used in the fiber core. Multimode optical fiber patch cords can be used as cross-connect jumpers and equipment and work-area cords for most general fiber applications. Use multimode fiber for bringing fiber to the desktop, for adding segments to your existing network, or in smaller applications such as alarm systems. Multimode cable comes with two different core sizes: 50 micron or 62.5 micron.
Special types of fiber optic patch cables:
FTTH Patch Cables;
Polarization Maintaining PM Fiber Patch Cables;
Mode conditioning fiber optic patch cables (mode conditioning patch cord, mode conditioning cable);
Pre-terminated pigtails (fiber pigtails, fiber optic pigtails).
Fiberstore offers fiber optic patch cables with different fiber connector types, low insertion loss and low back reflection. Fiberstore fiber patch cables are widely used in Telecommunication Networks, Gigabit Ethernet and Premise Installations.
Overexposure: A Problem for Certification Tests
Item exposure is becoming a problem, at least in what I read and hear. Item exposure refers to the fact that test questions—called items—are shown to test-takers like you. That’s no big revelation, of course. That’s what they are for. People are supposed to be exposed to them, read them and then answer them. So what’s the problem?
The problem comes when the exposure of test questions goes beyond reasonable limits, leading to general familiarity. It also means that the content of the items, including answers, is shared freely from candidate to candidate. When that happens, the question becomes worthless. It is no longer able to do its job, which is to help distinguish the competent candidates from the incompetent.
How do we know if a question has been exposed too much in too many tests or exposed improperly? You can tell that a ball is slowly losing air because it doesn’t bounce as high. The “bounce” of a test question can be seen in its statistics.
One statistic in particular, the point-biserial correlation, measures the relationship across all test-takers between answering the question correctly and total test score. That is, it is expected that those who score higher on the test will answer a particular question correctly, and those who score lower on the test will answer the question incorrectly. It is a simple statistical matter to correlate total test score with whether or not the item was answered correctly. A high correlation, closer to 1.0, means the question is performing well; closer to zero means it is not doing well.
When a good question is freely given out, as is the case when it appears at brain-dump sites, it loses its ability to discriminate high performers from low performers, simply because everyone now knows the question and how to answer it, regardless of overall knowledge or ability.
So, if a question can be monitored often and its point-biserial correlation calculated, it is possible to detect when it has been exposed improperly. When that occurs, it is time to replace the question with a more effective one.
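The point-biserial statistic described above is just a Pearson correlation between a 0/1 item response and total test score, and is easy to compute. The response data below is invented purely for illustration.

```python
# Point-biserial correlation: correlate whether each candidate answered
# the item correctly (0/1) with that candidate's total test score.

def point_biserial(item_correct, total_scores):
    """Pearson correlation between a dichotomous item and total scores."""
    n = len(item_correct)
    mean_x = sum(item_correct) / n
    mean_y = sum(total_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(item_correct, total_scores)) / n
    var_x = sum((x - mean_x) ** 2 for x in item_correct) / n
    var_y = sum((y - mean_y) ** 2 for y in total_scores) / n
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Healthy item: the high scorers answer it correctly, low scorers miss it.
healthy = point_biserial([1, 1, 1, 0, 0], [90, 85, 80, 60, 55])
print(f"healthy item: {healthy:.2f}")  # close to 1.0

# An overexposed item would show everyone answering correctly regardless
# of ability; its response column has no variance, the statistic is
# undefined, and the item can no longer discriminate.
```

Monitoring this value over time, as the article suggests, is what lets a program spot the moment an item's "bounce" disappears and retire it.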
There are several important item exposure factors that determine how long it takes before a question needs to be replaced. Here are a few:
- The number of times it is presented in tests. Obviously, the more test-takers actually see and respond to a question, the more likely it is that it will be shared with others.
- Type of item. Simple multiple-choice items that measure a person’s knowledge of facts are easier to memorize and share with others. By contrast, those that require the use of a simulation or an actual software product are more immune to exposure effects.
- The ethics of the candidate. Even knowing it’s wrong, candidates sometimes feel obligated to share the content of a test they have just taken with friends or colleagues. And worse, there are individuals and corporations that profit from gathering and selling test questions.
- The quality of security measures in place to protect it. Having too few questions on a test, or the lack of strict monitoring during the test, will allow easier and more effective strategies for stealing test questions.
- Importance of the test. Certification tests, resulting in high-stakes decisions, provide the highest motivation to remember and disclose questions.
So why should you care about item exposure issues?
First of all, increased security means that test prices will likely rise. One only has to look at the airline industry to see the effects of increased security efforts. The same is expected in certification testing. Creating more test questions to replace existing ones can be expensive. Second, there will be increased security steps to take a test. Biometrics will be increasingly used to verify the identity of the test-taker. Third, there will be more prequalifications to take a test. For example, a person might have to have two years of experience before being allowed to take the test. Today’s relatively easy road to certification in IT will become bumpier. And, as with air travel, we accept the inconvenience and cost in order to enjoy the advantages.
Now, none of these things may happen, but I see a growing problem with the casual approach to the security of tests, both by candidates and certification programs. Something will need to be done soon to shore up the integrity of exams and questions, and the value of IT certifications.
David Foster, Ph.D., is a member of the International Test Commission and sits on several measurement industry boards. | <urn:uuid:86fb19e9-6759-4984-8d60-780e353c97b2> | CC-MAIN-2017-04 | http://certmag.com/overexposure-a-problem-for-certification-tests/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00285-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957362 | 938 | 2.96875 | 3 |
Definition: (1) A spatial access method that defines hyperplanes, in addition to the orthogonal dimensions, which node boundaries may parallel. Space is split by hierarchically nested polytopes (multidimensional boxes with nonrectangular sides). The R-tree is a special case that has no additional hyperplanes. (2) A spatial access method that splits space by hierarchically nested polytopes. The R-tree is a special case in which all polytopes are boxes.
Generalization (I am a kind of ...)
tree, spatial access method.
Specialization (... is a kind of me.)
Aggregate child (... is a part of or used in me.)
Note: (2) by Jagadish after [GG98].
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 16 November 2009.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "P-tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/ptree.html | <urn:uuid:cffa0e01-bf82-42b0-9a6b-501865513ddb> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/ptree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.866489 | 273 | 2.5625 | 3 |
With budget pressures, ever-increasing enrollment numbers and limited faculty resources, schools are encouraged to use technology to overcome these challenges and continue to provide enriching learning experiences to students. HD video conferencing has emerged as one of the most efficient ways to bring more students into the classroom than ever before and utilize school resources wisely.
The old days of a teacher standing in front of a classroom lecturing about a given subject are over. Well, sort of.
Now, it’s virtual.
Not only does video conferencing allow teachers to literally be in two (or three, or four) places at once, it also allows to students to attend specialized classes that may not be offered in their home school, like foreign language or advanced subjects beyond their grade level.
The key to broadcasting and recording lectures is crystal-clear quality (so it feels as lifelike as possible) and access on multiple devices, including smartphones and tablets.
Here is a video that demonstrates what a 21st century classroom looks like.
For real-life stories of K-12 schools and universities that have implemented 21st century technology in their classrooms, read the case studies for State College of Florida, ESU 10, Globe University and YES Prep.
Does your school use HD video conferencing? Have you ever participated in distance learning? Tell us about your experiences in the comment box below. | <urn:uuid:88ab24f2-7423-4bf6-8415-43d42d18e0c1> | CC-MAIN-2017-04 | http://www.lifesize.com/video-conferencing-blog/doing-more-with-less/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955912 | 282 | 2.5625 | 3 |
Information classification according to ISO 27001
Classification of information is certainly one of the most attractive parts of information security management, but at the same time, one of the most misunderstood. This is probably due to the fact that historically, information classification was the first element of information security that was being managed – long before the first computer was built, governments, military, but also corporations labeled their information as confidential. However, the process on how it worked remained somewhat a mystery.
So in this article I’ll give you an outline of how information classification works, and how to make it compliant with ISO 27001, the leading information security standard. Although classification can be made according to other criteria, I’m going to speak about classification in terms of confidentiality, because this is the most common type of information classification.
The four-step process for managing classified information
Good practice says that classification should be done via the following process:
This means that: (1) the information should be entered in the Inventory of Assets (control A.8.1.1 of ISO 27001), (2) it should be classified (A.8.2.1), (3) then it should be labeled (A.8.2.2), and finally (4) it should be handled in a secure way (A.8.2.3).
In most cases, companies will develop an Information Classification Policy, which should describe all these four steps – see the text below for each of these steps.
Asset inventory (Asset register)
The point of developing an asset inventory is that you know which classified information you have in your possession, and who is responsible for it (i.e., who is the owner).
Classified information can be in different forms and types of media, e.g.:
- electronic documents
- information systems / databases
- paper documents
- storage media (e.g., disks, memory cards, etc.)
- information transmitted verbally
Classification of information
ISO 27001 does not prescribe the levels of classification – this is something you should develop on your own, based on what is common in your country or in your industry. The bigger and more complex your organization is, the more levels of confidentiality you will have – for example, for a mid-size organization you may use this kind of information classification levels with three confidential levels and one public level:
- Confidential (top confidentiality level)
- Restricted (medium confidentiality level)
- Internal use (lowest level of confidentiality)
- Public (everyone can see the information)
In most cases, the asset owner is responsible for classifying the information – and this is usually done based on the results of the risk assessment: the higher the value of information (the higher the consequence of breaching the confidentiality), the higher the classification level should be. (See also ISO 27001 risk assessment & treatment – 6 basic steps.)
Very often, a company may have two different classification schemes in place if it works both with the government and with a private sector. For example, NATO requires the following classification with four confidential levels and two public levels:
- Cosmic Top Secret
- NATO Secret
- NATO Confidential
- NATO Restricted
- NATO Unclassified (copyright)
- NON SENSITIVE INFORMATION RELEASABLE TO THE PUBLIC
Once you classify the information, then you need to label it appropriately – you should develop the guidelines for each type of information asset on how it needs to be classified – again, ISO 27001 is not prescriptive here, so you can develop your own rules.
For example, you could set the rules for paper documents such that the confidentiality level is to be indicated in the top right corner of each document page, and that it is also to be indicated on the front of the cover or envelope carrying such a document, as well as on the filing folder in which the document is stored.
Labeling of information is usually the responsibility of the asset owner.
Handling of assets
This is usually the most complex part of the classification process – you should develop rules on how to protect each type of asset depending on the level of confidentiality. For example, you could use a table in which you must define the rules for each level of confidentiality for each type of media, e.g.:
So in this table, you can define that paper documents classified as Restricted should be locked in a cabinet, documents may be transferred within and outside the organization only in a closed envelope, and if sent outside the organization, the document must be mailed with a return receipt service.
As before, ISO 27001 allows you freedom to set your own rules, and this is usually defined via the Information classification policy, or the Classification procedures.
So, as you can see, the classification process might be complex, but it does not have to be incomprehensible – ISO 27001 actually allows you great freedom, and you should definitely take advantage of it: make the process both adapted to your special needs, but at the same time secure enough so that you can be sure your sensitive information is protected.
Click here to see a free preview of Information Classification Policy. | <urn:uuid:7952508a-d466-4291-9c5e-0a8e61865161> | CC-MAIN-2017-04 | https://advisera.com/27001academy/blog/2014/05/12/information-classification-according-to-iso-27001/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927463 | 1,057 | 2.828125 | 3 |
Managing Security Risks in a Wireless World
Wireless networks are extremely prevalent today, both at home and in work settings. This increased adoption of wireless networks can be attributed to lower cost and ease of installation, combined with benefits such as increased portability and productivity.
Setting up wireless networks generally does not require drilling holes or cabling. All you need to do to connect is plug in a wireless access point (AP) or router. The lack of cabling expands a network to one without a physical boundary and allows an end user to be portable and productive from anywhere within the wireless network range.
This open connectivity brings with it risks, however, some of which are similar to those in wired networks, while others are unique and increased on wireless networks. Poor security standards, coupled with immature technologies, flawed implementations and limited user awareness, make it difficult to design and deploy “secure” wireless networks. All the vulnerabilities of wired networks exist in wireless networks as well. The most noteworthy is the openness of the communication medium (airwaves). This is akin to storing valuables in a glass safe.
Wireless security threats include confidentiality, integrity and availability (CIA) of resources and information. Organizations have information to protect. This information can be financial, personal and intellectual, all of which can be sensitive. Unauthorized intruders can intercept and gain access, disclosing sensitive information (confidentiality breach) if encryption and other protective mechanisms between wireless devices are weak or vulnerable.
Disclosed information can be altered (integrity breach) intentionally by the intruder or unintentionally due to malfunction in data-synchronization routines between the wireless clients and the back-end storage. Intruders can launch attacks against wireless devices in the network and consume network bandwidth causing Denial of Service (DoS) attacks (availability breach), as well.
Know Your Enemy
Sixth-century Chinese general and master military strategist Sun Tzu, in his book Art of War, wrote: “Know your enemy and know yourself, find naught in fear for 100 battles.”
Enemies and threat agents that exploit wireless security vulnerabilities can be grouped into three major categories:
Script kiddies ($cr1p7k1dd13s): These enemies are motivated primarily by the thrill of electronically trespassing and are deterred quite easily by simple security measures. They usually are unaware of the consequences of breach and use tools and scripts readily available to gain access to networks on which they are not authorized. They are the least of the threats and are also referred to as “war dialers.”
Resource thieves: They consume resources such as bandwidth and disk space, downloading pirated movies, MP3 and pornography using stolen airwaves and networks. They, like script kiddies, are motivated by thrill of freeloading and the need to be untraced. They are capable of writing scripts to exploit vulnerabilities, but often look for easily exploitable vulnerabilities and don’t pose a significantly greater threat than script kiddies.
Information thieves: They know exactly what they want (sensitive information), know how to get it, know how to hide their footprints and are capable of harm. They are not easily deterred and often go the extra mile in figuring out the network topology to gain access to the network.
The 5 W’s of Wireless Networks
With the understanding of the risks and threat agents associated with wireless networks, important questions one must answer before designing and implementing secure wireless networks are:
Why do you need to set up a wireless network? Ease of access (flexibility), unrestricted workspace (portability and productivity).
Where are you setting up the wireless network? Home, work, public location.
Who will be using your wireless networks? Internal employees, vendors, customers, general public. What is it that you need to safeguard? Customer information, financial information, intellectual property, trade secrets.
When should you setup a wireless network? The right time to setup a wireless network is when you can acceptably manage and mitigate risks.
At a bare minimum, the following should be in place to thwart intruders in wireless networks:
Change all default settings. Most wireless devices (routers and APs) come with weak default configurations. Blank admin passwords or “admin/admin” username password combinations are classic examples. Due to flawed implementation and limited user awareness and education on the implications of deployment of these wireless devices with default configurations, many wireless networks are susceptible to security threats.
Select products that can support more secure technologies. For backward compatibilities, if you are required to support weaker security technologies like Wired Equivalent Privacy (WEP) instead of Wi-Fi Protected Access (WPA and WPA2), do so only after doing a risk analysis and developing a plan to phase them out with products that can support more secure technologies. E.g., more secure technologies are WPA and client AP isolations in which the client devices on your wireless network cannot see one another.
Educate, train and certify users and employees. This is the most proactive approach to implementing security in wireless networks. There is no greater defense than educated and trained personnel making wise decisions pertinent to wireless security.
Get employees certified in wireless security. The Certified Information Systems Security Professional (CISSP) credential by (ISC)2 is a Gold Standard certification that covers wireless security concepts. Another good vendor-neutral certification is the Certified Wireless Security Professional (CWSP) by CWNP.
Placebo Wireless Security
Some of the most common wireless security measures are myths and give a false sense of security. These include:
SSID cloaking: The Service Set Identifier (SSID) in a wireless AP is the name configured to be broadcast to client devices (laptop, PDAs) so that they can associate with the AP. In SSID cloaking, the SSID is not broadcast by the AP, but is distributed by out-of-band mechanisms beforehand to the wireless network users. Most organizations use SSID cloaking as a security measure. Although this is a recommended best-practice by the PCI Data Security Standard (PCI DSS), it provides little to no protection because every time a client associates with an AP, the SSID is present in clear text, and a man-in-the-middle (MITM) attack can deduce the SSID, allowing an intruder to easily bypass any intended security mechanism.
MAC address filtering: Every network device has a unique machine access code (MAC). Allowing access to your wireless networks based on MAC addresses is akin to having a bouncer with a valid set of names to allow into the party. With a plethora of MAC spoofing tools, coupled with the MAC address being sent in the header of every packet, MAC address filtering easily can be defeated.
Disabling DHCP: Dynamic Host Configuration Protocol (DHCP) provides the automatic assignment of Internet Protocol (IP) addresses for the clients associating with the wireless network. Disabling DHCP has little to no security value, as it would take a determined intruder fewer than 10 minutes to determine the IP assignment scheme and bypass security controls.
The Real Deal
Now that we are aware of how not to secure a wireless network, how should we?
Start with physical access control. Walls and physical boundaries provide little to no protection against wireless security threats. Nevertheless, it is imperative that wireless security measures are supplemented with physical security controls such as gated access, motion detectors, closed-circuit televisio | <urn:uuid:00dca92d-fe95-4afd-b600-ad5c44663a58> | CC-MAIN-2017-04 | http://certmag.com/managing-security-risks-in-a-wireless-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00449-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927546 | 1,530 | 2.703125 | 3 |
Fiber optic fusion splicing is always needed when we want to fuse two optical fibers together for the continuity of fiber optic cable plant. However, in the optical fiber fusion splicing process, fiber tips are required to have a smooth end face that is perpendicular to the fiber axis. Sufficiently perpendicular and planar fiber end face can be achieved via the fiber cleaving process.
Simply speaking, a fiber cleaver is a piece of tools of equipment to make a perfect fiber end face cut that will assure the quality of the joint of bare fibers in the optic fusion process, resulting in lower attention of the fiber connection line.
In the fiber cleaving process, the fiber is pressed against the little cut to force it to break at 90 angle and expose a mirror like fiber end face. The fiber is scratched with a very hard diamond edge scribing tool, which induces a sufficiently large surface crack, then the fiber cleaver applies a tensile stress to the fiber which caused the crack to expand rapidly across the fiber cross section. There are also types of fiber cleaves apply the tensile stress first and then scratch the fiber with the diamond edge scribing tool. A quality fiber cleaver is very essential in determining the fusion splicing loss. This is especially correct for some special fibers including dispersion-compensating fibers and erbium-doped fibers.
A fiber cleave is initiated by lightly scratching the surface of the fiber. When the fiber is thereafter pulled or bent, a crack will originate at the scratch and propagate rapidly across the width of the fiber. This produces a nearly flat cleave of an optical fiber. Under the direction of this idea, there developed a variety of commercial optical fiber cleaving tools in the market:
Some cleavers apply a tensile stress to the fiber while scratching the fiber’s surface with a diamond edge. There are also other designs scratch the fiber surface first, and then apply tensile stress. Some cleaves apply a tensile stress which is uniform across the fiber cross section while others bend the fiber trough a tight radius, producing high tensile stresses on the outside of the bend.
Fiber optic cleaving tool is usually designed for cutting different number of fibers at a time. Single fiber cleaver and ribbon fiber cleaver are typical. They work on the same principles but ribbon cleaver is for simultaneously cleaving all the fibers in a ribbon cable, which is somewhat interior to that of a single fiber cleaver. Most today’s fiber cleavers are suitable for precision cleaving of all common single silica glass fibers. There are also some special cleaver designed ones for applications such as in research, measurement technology and production of optic components.
Under on-side conditions, most high precision cleavers produce a cleave angle deviation within 0.5° with very high reliability and low scattering. Diamond bladed presents the highest cleaver quality and can last over 10,000 cleaves.
It is easy for a modern fiber cleaver to cleave a 125um diameter fiber, but difficult to cleave >200um fibers. This is especially turn when the fiber is not crystalline. Besides, torsion will produce a non perpendicular endface.
Typical brand for fiber optic cleaver are Fujiura, Sumitomo, Furukawa, etc. Typical models of these brand are also available at Fiberstore.
Take Sumitomo FC-6S and Fujikura CT-30 for example. The CT-30 cleavers are available for either single or ribbon fiber splicing applications. It is idea for FTTx applications and equally at home in a splicing van or in a bucked truck. With The 16-position blade yields 48,000 single-fiber cleaves, or 4,000 12-fiber ribbon cleaves beforerequiring replacement, and the built-in scrap collector conveniently stores fiber shardsuntil they can be safely discarded. The FC-6 cleaver is available with a single fiber adapter for 250 to 900 micron coated single fibers. This cleaver is simple for users to operate by removing or installing the single fiber adapter and alternate between mass and single fiber cleaving.
Additional information on a variety of fiber cleaving equipments, please visit Fiberstore fiber optic cleavers page or contact our sales team by firstname.lastname@example.org. | <urn:uuid:34313e1d-bba2-4d19-8381-8ee368723345> | CC-MAIN-2017-04 | http://www.fs.com/blog/sumitomo-and-fujikura-fiber-optic-cleavers-from-fiberstore.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00048-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918104 | 882 | 3 | 3 |
More than a quarter (27 per cent) of parents believe their children have been exposed to online risks, such as accessing inappropriate content or cyber bullying in the past 12 months, according to a Kaspersky Lab study.
Despite this, research to mark Safer Internet Day taking place on 11th February, has found that one in five parents (22 per cent) takes no action to govern their children’s online activity – whether on the home computer or mobile devices.
“Regardless of how their children are accessing the internet, parents must remain vigilant, supervise their internet use and consider parental control technologies. However, as a parent myself, I find these statistics particularly worrying when you consider the increasing number of children using connected smartphones today. After all, when children use mobile devices to access the web, they are using the same internet, with the same risks – yet parents are often not as aware of the dangers,” says David Emm, senior security researcher at Kaspersky Lab.
The study also found that 18 per cent of parents had lost money or data from their personal device as a result of their child’s unmonitored access. With smartphone apps often being blamed in the press for children inadvertently spending hundreds of pounds, effective controls and open channels of communication around smartphone use is imperative.
David Emm continues: “There is a common misconception that smartphones and tablets don’t need the same level of protection as a PC, but with such a high percentage of parents not having a clear view of their children’s online activity, this way of thinking needs to change. The internet is an incredible resource, both for social use and in an educational capacity. But in the same way as we would teach our children to cross the road safely, we must teach them to be aware of, and respect, the dangers of the internet. Just because a threat is out of sight, it doesn’t mean we shouldn’t keep it front of mind.”
David Emm offers the following tips to stay safe online:
1. Both Android smartphones and iPhones come with in-built parental controls – when purchasing a smartphone, ask the sales assistants to demonstrate these features. They have policies in place and a responsibility to make parents aware of these. By creating a demand, it is more likely they will let other parents know.
2. Apply settings that prevent in-app purchases to save hefty bills should children stumble across a game with expensive add-ons.
3. Install security software – these providers will offer apps to filter out inappropriate content, for example, adult images and senders of nuisance SMS messages.
4. Encourage children to talk about their online experience and in particular, anything that makes them feel uncomfortable or threatened. Open a channel of communication so they feel they can discuss all areas of their online life without fear of judgement or reprimand.
5. Protecting children from cyber bullies is especially challenging with smartphones as they can be targeted in so many ways, especially out of view of their parents. Deal with cyber bullying as you would in real life by encouraging children to be open and talk to a trusted adult if they experience any threatening or inappropriate messages. Numbers and contacts on apps can both be blocked if they are making children uncomfortable. | <urn:uuid:0b6cb3e2-a1ed-4d29-a8c2-b38d5e33a5b5> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/02/12/parents-fear-their-kids-are-exposed-to-online-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00442-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954931 | 669 | 3.109375 | 3 |
Frame Relay Polling, Error Handling, and Specification Enhancements
Network Consultants Handbook - Frame Relay
by Matthew Castelli
SVC X.121/E.164 Addressing
X.121 is a hierarchical addressing scheme that was originally designed to number X.25 nodes. X.121 addresses are up to 14 digits in length and are structured as follows:
- Country Code: 3 digits
The first digit is a zone number that identifies a part of the world (for example, Zone 2 covers Europe and Zone 3 includes North America). The zone numbers can be found in Appendix C, List of ITU-T X.121 Data Country or Geographical Codes. These codes can also be found in ITU-T Recommendation X.121.
- Service Provider: 1 digit
- Terminal Number: Up to 10 digits
E.164 is a hierarchical global telecommunications numbering plan, similar to the North American Numbering Plan (NANP). E.164 addresses are up to 15 digits in length and are structured as follows:
- Country Code: 1, 2, or 3 digits
This code is based on the international telephony numbering plan and can be found in Appendix D, International Country Codes. These codes can also be found in any phone book.
- National Destination Code and Subscriber Number: Up to 14 digits in length (maximum length is dependent on the length of the Country Code).
- Subaddress: Up to 40 digits
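As a rough illustration of the X.121 hierarchy described above, the following sketch splits an address into its zone, country code, service-provider digit, and terminal number. The field widths come from the text; the sample address itself is a made-up value used only to exercise the parser, not a real numbering assignment:

```python
def parse_x121(address: str) -> dict:
    """Split an X.121 address into its hierarchical fields.

    Layout per the text: 3-digit country code (whose first digit is
    the zone), 1-digit service provider, up to 10-digit terminal number.
    """
    if not address.isdigit() or not 5 <= len(address) <= 14:
        raise ValueError("X.121 addresses are 5 to 14 digits")
    return {
        "zone": address[0],           # part of the world (e.g., 3 = North America)
        "country_code": address[:3],  # 3-digit data country code
        "provider": address[3],       # 1-digit service provider
        "terminal": address[4:],      # up to 10-digit terminal number
    }

# Hypothetical 14-digit address used only for illustration
fields = parse_x121("31091234567890")
print(fields["country_code"], fields["terminal"])
```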
Frame Relay Status Polling
The Frame Relay Customer Premises Equipment (CPE) polls the switch at set intervals to determine the status of both the network and DLCI connections. A Link Integrity Verification (LIV) packet exchange takes place about every 10 seconds, verifying that the connection is still good. The LIV also provides information to the network that the CPE is active, and this status is exported at the other end. Approximately every minute, a Full Status (FS) exchange occurs, passing information regarding which DLCIs are configured and active. Until the first FS exchange occurs, the CPE does not know which DLCIs are active, and as such, no data transfer can take place.
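The polling cadence described above can be sketched as a simple schedule: if the CPE polls every 10 seconds and sends a Full Status request on every sixth poll (so roughly once a minute), the poll type is just a function of the poll counter. The 10-second interval and 6-poll cycle are assumptions consistent with the text; real equipment makes both configurable:

```python
def poll_type(poll_number: int, full_status_cycle: int = 6) -> str:
    """Return the kind of status enquiry sent for a given poll.

    Every Nth poll (default 6, i.e., about once a minute at a
    10-second poll interval) is a Full Status (FS) exchange; the
    rest are Link Integrity Verification (LIV) exchanges.
    """
    return "FULL_STATUS" if poll_number % full_status_cycle == 0 else "LIV"

# First minute of polling: poll 0 carries the first Full Status request,
# which is when the CPE learns which DLCIs are configured and active.
for n in range(7):
    print(f"t={n * 10:3d}s  {poll_type(n)}")
```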
Frame Relay Error Handling
Frame Relay uses the Cyclic Redundancy Check (CRC) method for error detection. Frame Relay services perform error detection rather than error correction, based on the premise that the underlying network media is reliable. Frame Relay error detection uses the CRC checksum to determine whether the frame is received by the Frame Relay networking device (router or switch) with, or without, error. Error correction is left to the upper-layer protocols, such as TCP (of the TCP/IP protocol suite).
NOTE: Error detection detects errors but does not attempt to correct the condition. Error correction detects errors and attempts to correct the condition, usually under the control or direction of a higher-layer protocol. The termination node performs error detection.
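For illustration, the FCS that Frame Relay inherits from HDLC is the 16-bit CCITT CRC computed in reflected form with an initial value of 0xFFFF and a final inversion (the variant often catalogued as CRC-16/X-25). A minimal bitwise sketch:

```python
def frame_relay_fcs(data: bytes) -> int:
    """CRC-16/X-25 (HDLC-style FCS): reflected polynomial 0x8408,
    init 0xFFFF, final XOR 0xFFFF. The receiver recomputes this over
    the received frame and discards the frame on mismatch -- detection
    only; no correction is attempted."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

print(hex(frame_relay_fcs(b"123456789")))  # standard check value: 0x906e
```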
Frame Relay Frame Format
Figure 15-13 illustrates the standard Frame Relay frame format.
Table 15-7 presents a description of each of the Frame Relay standard frame fields.
| Field | Description |
| --- | --- |
| Flags | Delimits the beginning and end of the frame. The value of this field is always the same and is represented as hexadecimal 7E or as binary 01111110. |
| Address | Contains the 10-bit DLCI identifying the virtual circuit, the Command/Response (C/R) bit, the Extended Address (EA) bits, and the congestion-management bits (FECN, BECN, and DE). |
| Data | Contains encapsulated upper-layer data. This variable-length field includes a user data or payload field that can vary in length up to 4096 bytes. It serves to transport the higher-layer protocol data unit (PDU) through a Frame Relay network. |
| Frame Check Sequence (FCS) | Ensures the integrity of transmitted data. This value is computed by the source device and is verified by the receiver to ensure the integrity of the data transmission. |
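The default two-byte address field packs the 10-bit DLCI together with the control bits. A sketch of how a receiver might unpack it, with bit positions following the standard Q.922 two-octet layout:

```python
def parse_address(hdr: bytes) -> dict:
    """Unpack the default 2-byte Frame Relay address field.

    Octet 1: DLCI high 6 bits | C/R | EA=0
    Octet 2: DLCI low 4 bits | FECN | BECN | DE | EA=1
    """
    b1, b2 = hdr[0], hdr[1]
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),  # 10-bit circuit identifier
        "cr":   bool(b1 & 0x02),               # command/response
        "fecn": bool(b2 & 0x08),               # forward congestion notification
        "becn": bool(b2 & 0x04),               # backward congestion notification
        "de":   bool(b2 & 0x02),               # discard eligibility
    }

# Build a header for DLCI 100 with the DE bit and final EA bit set
hdr = bytes([(100 >> 4) << 2, ((100 & 0xF) << 4) | 0x02 | 0x01])
print(parse_address(hdr))
```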
Frame Relay LMI
The Frame Relay LMI is a set of Frame Relay specification enhancements. The original LMI was developed in 1990 by the Gang of Four (Cisco, DEC, Nortel, and StrataCom). LMI includes support for the following:
- Keepalive mechanisms -- Verify the flow of data
- Multicast mechanisms -- Provide the network server with local and multicast DLCI information
- Global addressing -- Gives DLCIs global rather than local significance
- Status mechanisms -- Provide ongoing status reports on the DLCIs known to the switch
The original LMI supports a number of features, or enhancements, to the original Frame Relay protocol, for managing Frame Relay internetworks. The most notable Frame Relay LMI extensions include support for the following:
- Global addressing -- The LMI global addressing extension gives Frame Relay DLCI values a global, rather than local, significance. These global DLCI values become Frame Relay networking device addresses that are unique in the Frame Relay WAN.
NOTE: As discussed earlier in this chapter, global addressing has an inherent limitation in that no more than 992 DLCIs (1024 DLCIs less the 32 reserved DLCIs) can be used. In a Frame Relay network of more than 992 sites, global addressing will not work. Apart from global addressing of DLCIs, the LMI status message presents an inherent limitation on the number of DLCIs that can be supported by an interface. Cisco has published a brief detailing these limitations at http://www.cisco.com/warp/public/125/lmidlci.html.
- Virtual circuit status messages -- Provide communication and synchronization between Frame Relay network access devices (FRADs) and the network provider devices (switches). These messages report, at regular intervals, the status of PVCs, which prevents data from being directed to a PVC that does not exist.
- Multicasting -- Supports the assignment and management of multicast groups. Multicasting preserves bandwidth by enabling routing updates and address-resolution messages (such as ARP and RARP) to be sent only to specific groups of routers.
LMI VC status messages provide communication and synchronization between Frame Relay DTE and DCE devices. These messages are used to periodically report on the status of PVCs, which prevents data from being sent into black holes (over PVCs that no longer exist).
Three types of LMI are found in Frame Relay network implementations:
- ANSI T1.617 (Annex D) -- The maximum number of connections (PVCs) supported is limited to 976. LMI type ANSI T1.617 (Annex D) uses DLCI 0 to carry local (link) management information.
- ITU-T Q.933 (Annex A) -- Like LMI type Annex D, the maximum number of connections (PVCs) supported is limited to 976. LMI type ITU-T Q.933 (Annex A) also uses DLCI 0 to carry local (link) management information.
- LMI (Original) -- The maximum number of connections (PVCs) supported is limited to 992. LMI type LMI (Original) uses DLCI 1023 to carry local (link) management information.
NOTE: LMI Type LMI (Original) is annotated as LMI type Cisco within the Cisco IOS.
NOTE: The frame MTU setting impacts LMI messages. If PVCs appear to be bouncing (that is, showing repeated up/down indications), the cause might be the MTU size of the Frame Relay frame. If the MTU size is too small, not all PVC status messages will be communicated between the service provider edge and the Frame Relay access router. If this condition is suspected, the next step is to contact the network service provider to troubleshoot.
LMI Frame Format
Figure 15-15 illustrates the LMI frame format to which Frame Relay LMI frames must conform, as required by the LMI specification.
Table 15-8 presents a description of each LMI field.
|Flag||Delimits the start and end of the LMI frame.|
|LMI DLCI||Identifies the frame as an LMI frame rather than a Frame Relay data frame. The DLCI value is dependent on the LMI specification used; LMI (original) uses DLCI 1023, LMI (Annex A) and LMI (Annex D) use DLCI 0.|
|Unnumbered Information Indicator||Sets the poll/final bit to zero (0).|
|Protocol Discriminator||Always contains a value indicating that the frame is an LMI frame.|
|Call Reference||Always contains zeros. This field is currently not used for any purpose.|
|Message Type||Labels the frame as one of the following message types: a status-inquiry message, which allows a device to inquire about the status of the network, or a status message, which responds to a status-inquiry message (status messages include keepalives and PVC status reports).|
|Information Elements||Contains a variable number of individual information elements (IEs). IEs consist of the following fields: a 1-byte IE identifier, an IE length field, and a variable-length IE data field.|
|Frame Check Sequence (FCS)||Ensures the integrity of transmitted data.|
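Because LMI frames are distinguished from data frames purely by a reserved DLCI, a receiver can classify incoming frames with a simple lookup. A sketch based on the DLCI values of the three LMI types above (the function name is ours):

```python
# Reserved management DLCIs used by the three LMI variants.
LMI_DLCIS = {
    0:    "ANSI T1.617 Annex D / ITU-T Q.933 Annex A",
    1023: "LMI (original, annotated as 'cisco' in Cisco IOS)",
}

def classify_frame(dlci: int) -> str:
    """Return 'lmi' for a link-management frame, 'data' otherwise."""
    return "lmi" if dlci in LMI_DLCIS else "data"

print(classify_frame(1023))  # lmi
print(classify_frame(100))   # data
```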
The LMI global addressing extension gives Frame Relay DLCI values global rather than local significance. DLCI values become DTE addresses that are unique in the Frame Relay WAN. The global addressing extension adds functionality and manageability to Frame Relay internetworks. Individual network interfaces and the end nodes attached to them can be identified by using standard address-resolution and discovery techniques. Additionally, the entire Frame Relay WAN appears as a LAN to routers on the periphery.
The LMI multicasting extension allows multicast groups to be assigned. Multicasting saves bandwidth by allowing routing updates and address-resolution messages to be sent only to specific groups of routers. The extension also transmits reports on the status of multicast groups in update messages.
Our next segment from Cisco Press' Network Consultants Handbook will deal with Frame Relay Applications.
You may be asking yourself, what if I have a cell phone virus and what is it anyway? You know you keep a lot of precious, valuable data on your phone, and when you hear in the news that mobile threats are on the rise, it’s easy to lose sight of the context behind the numbers and worry that you’ve gotten a dreaded mobile phone virus that’s going to steal your personal info and eat your children. Hopefully we can clarify things by addressing some of the questions that we hear most about so-called Android “viruses.”
Historically carried over from the old PC world, a “virus” is a program that replicates itself by attaching to another program. Hackers often used this method to spread their nefarious work, and virus became a popular term to refer to all types of malicious software (malware) on computers. In the case of smartphones, to date we have not seen malware that replicates itself like a PC virus can, and specifically on Android this does not exist, so technically there are no Android viruses. However, there are many other types of Android malware. Most people think of any malicious software as a virus, even though that is technically inaccurate.
Malware, short for malicious software, is software designed to secretly control a device or steal private information or money from the device’s owner. Malware has been used to steal passwords and account numbers from mobile phones, put false charges on user accounts and even track a user’s location and activity without their knowledge. Learn about some of the most notable malware Lookout has blocked in Resources > Top Threats.
Through Lookout’s research for the State of Mobile Security 2012, we’ve found that user behavior and geography greatly influence your risk of encountering malware. The safest bet is to stick with downloading well-known apps from reputable markets like Google Play, in addition to having a security app. Fraudsters make it their job to disguise malware as innocent-looking mobile apps on app stores and websites. So if you’re thinking that it’s a good idea to download a just-published, supposedly free version of Angry Birds you found on a random Chinese app store, it’s probably not. Once installed, these apps may appear to work just as described, but they can be busy with additional secret tasks. Some apps start out clean, but are given malicious capabilities after a seemingly routine software update.
And conscientious app downloading won’t always minimize your risk. Sneaky, drive-by-download sites can download a potentially malicious app file without any user intervention. Safe Browsing in Lookout Premium for Android will block web-based threats like that, but even so, you also shouldn’t install random downloads from your download manager that you didn’t expect to find there.
It’s pretty simple to minimize the risk of encountering malware, and we’ve got 5 simple mobile security tips right here. The top two ways to protect yourself are to download a mobile security app like Lookout to catch those pesky “phone viruses” and to be judicious about what apps you download and where you download them from. Lookout will scour your phone or tablet for any existing malware, and also examine every new app you download to ensure it is safe. But even before you let Lookout scan your newly downloaded app, you should only download apps from sites you trust, check the ratings and read reviews to make sure they’re widely used and respected.
So, should you worry about getting a phone virus? Nope, because they technically don’t exist. (If they ever do crop up, Lookout will weed them out.) And should you worry about the more accurately termed malware? Well, with a little bit of awareness and Lookout on your phone and by your side, you can keep malware and other mobile threats at bay.
Don’t panic. Quantum computing, that strange beast that appears to defy logic, is set to debut on a commercial scale, thanks to Lockheed Martin and D-Wave Systems.
Because of the unusual properties of particles at very small scale, quantum computing is like traditional computing on steroids. Why is quantum computing the hare to traditional computing’s tortoise? Quantum computing relies on subatomic particles that inhabit a different range of states, which can be used to find the best outcome, allowing certain types of problems to be solved faster than ever before.
Lockheed Martin plans to use the quantum computing system it bought from D-Wave two years ago for the manufacturing of radar, space and aircraft systems. The system works by chilling the processor to nearly absolute zero; a set of mathematical equations is then programmed into the lattice of superconducting wires. The processor works through the equations until it finds the configuration requiring the lowest energy, which corresponds to the optimal outcome. This allows Lockheed to test a myriad of scenarios on the system in mere moments, instead of weeks.
“This is a revolution not unlike the early days of computing,” Ray Johnson, Lockheed’s chief technical officer, said in an interview with the New York Times. “It is a transformation in the way computers are thought about.”
Quantum computing is being employed for other industries besides aerospace. The healthcare industry is using it to research genetic data, and hopes for the field are so high that D-Wave has secured investments from Goldman Sachs and even Jeff Bezos, the founder of Amazon.
Though that’s not to say the field isn’t without its critics.
Some D-Wave skeptics say the company hasn’t offered adequate proof of its claims. Some of the criticism developed when D-Wave failed to deliver on its 2007 promise to produce a commercial quantum computer by 2008. And D-Wave’s scientists have yet to publish data proving that the system computes faster than standard binary computers.
“There’s no reason quantum computing shouldn’t be possible, but people talked about heavier-than-air flight for a long time before the Wright brothers solved the problem,” said Scott Aaronson, a professor of computer science at the Massachusetts Institute of Technology. D-Wave, he said, “has said things in the past that were just ridiculous, things that give you very little confidence.”
Regardless of which side of the argument you fall on, the possibilities of quantum computing can reach all the way to infinity…and beyond.
Pouring Resources into Tera-Scale

Meanwhile, Intel is pouring a tremendous amount of resources into the effort. About 40 percent of the researchers in its Corporate Technology Group -- which includes its Microprocessor Technology Lab, its Communications Technology Lab and its Systems Technology Lab, about 900 researchers in all -- are working on some 80 projects involving Tera-Scale Computing. Those projects could be interwoven in many ways to support the effort. A chip with many, many cores would need a very big pipeline for data. Intel researchers are working on Silicon Photonics, a project that involves building optical connections into silicon using standard manufacturing techniques.

Meanwhile, inside the MTL, researchers are designing new TCP/IP processing cores and new types of memories, including configurable caches, 3-D stacked memory and high-bandwidth memory. Researchers are also advocating for transactional memory, which coordinates multiple threads accessing the same memory, versus today's approach of locking it for use by one thread at a time.

Intel's labs are also working on new types of transistors, which are smaller, faster and more efficient, in addition to techniques that would allow two different chip wafers to be glued together, creating the ability to tightly pair processors and memory in a way that resembles an Oreo cookie.

The new approach is unlikely to be rolled out all at once; an intermediate phase could pull some of the features of Tera-Scale Computing forward into the less-distant future. Given its customarily conservative nature, Intel is likely already working on a many-core chip that would bridge the gap between its current architectural approach and Tera-Scale Computing. McVeigh declined to discuss any such efforts, but did not deny their existence.

Chips with many cores are at least six or eight years out, said Kevin Krewell, editor-in-chief of the Microprocessor Report, in an interview with eWEEK.
However, he said Tera-Scale-style chips could change the landscape of chip design. "There's lots of re-architecting that could be done," Krewell said. "If you have the right software, say, a function can be taken out of the main CPU and diverted off to a dedicated piece of hardware. Once you get more sophisticated scheduling, a processor can make the decision; it can decide when [data] is going off to an accelerator."
The project, which the company has said is targeted at chip-to-chip connections, could present one avenue for creating pipelines to keep the chip flush with data.
Using the Pupil as a Crime-Fighting Tool
December 31, 2013
In the U.K., scientists are using the human pupil as a crime-fighting tool -- one that allows pinpointing the perpetrators by simply looking more closely at the reflections in their victim's eyes.
As was initially reported in the journal PLOS ONE, the researchers wrote that for crimes in which the victims are photographed, such as hostage taking or child sex abuse, reflections in the eyes of the photographic subject could help to identify perpetrators.
“The pupil of the eye is like a black mirror," Dr. Rob Jenkins, a psychologist at the University of York, said in a statement. "To enhance the image, you have to zoom in and adjust the contrast. A face image that is recovered from a reflection in the subject’s eye is about 30,000 times smaller than the subject’s face."
Distributed File System (DFS)
Many companies are introducing Microsoft's Distributed File System
(DFS) nowadays, usually in the course of migration and
standardization projects.

A lot of people think that DFS is just a simple file service
that requires no great conceptual work. But without planning and a
good concept, you will hardly reach the result you expect from
DFS.

You can rely on the experience of our specialists, gained in many
projects with intercontinental DFS installations.
Basics of DFS-N and DFS-R
The product DFS consists of two independent parts:

- DFS-N -- a virtual tree view of multiple file services
- DFS-R -- data replication between multiple file servers
You can use both parts independently. A DFS
tree does not require DFS replication. More
importantly, DFS replication can take place even
without a DFS tree.
The DFS tree is the original function of DFS, predating additional
functions such as replication. The DFS tree
summarizes distributed file service resources in an abstracted
view.

Planning a Distributed File System tree assumes that you know
well how people in the company work. The DFS tree is
visible to the end users, so it will - hopefully positively -
influence their daily work.
The intersection between the DFS tree and the file service is
the file system share. A folder
in the DFS tree has one or more folder targets,
which in turn correspond to shares on file servers. These
shares do not necessarily have to be provided by Windows servers.
Almost every CIFS (Common Internet File System)
based share may be included in the DFS tree: various
NAS filers, SAMBA shares or CIFS-enabled NetWare services. However,
this applies only to the DFS tree. Distributed File System
Replication necessarily presupposes a Windows server, because the
replication engine is a Microsoft service installed on it.
An essential function of the DFS tree is
switching between DFS folder targets. A Distributed
File System folder can have one share or multiple shares as its
targets (referrals). These target servers may reside in
different locations. The Distributed File System
client chooses the best folder target based on the Sites and
Services information in Active Directory. You can
influence the choice by configuring preferred folder
targets. This setting instructs all clients to use the
same folder target. Only when the primary target is
not reachable does the client switch to the secondary target.
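The target-selection behavior described above can be modeled roughly as: order the referrals (preferred targets first, then targets in the client's own site), then walk down the list until a target responds. This is only an illustrative model of the logic, not Microsoft's actual client implementation, and the server names are invented:

```python
def pick_folder_target(targets, client_site, is_reachable):
    """targets: list of dicts with 'path', 'site', 'preferred' (bool).
    Returns the first reachable target path, or None."""
    # Preferred targets come first, then targets in the client's AD site.
    ordered = sorted(
        targets,
        key=lambda t: (not t["preferred"], t["site"] != client_site),
    )
    for t in ordered:
        if is_reachable(t["path"]):
            return t["path"]
    return None  # no target reachable

targets = [
    {"path": r"\\fs-hamburg\marketing$", "site": "Hamburg",   "preferred": False},
    {"path": r"\\fs-hq\marketing$",      "site": "Frankfurt", "preferred": True},
]
# With the preferred HQ target down, the client falls back to the local server:
print(pick_folder_target(targets, "Hamburg", lambda p: "hq" not in p))
```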
At this point, we need to talk about DFS-R replication.

Switching between folder targets only makes sense if the targets hold
identical datasets. DFS-R replication can synchronize
data between multiple Windows servers and is based on a highly
efficient WAN-optimized protocol.
Unfortunately, there are a few limitations to the
dynamic switching between folder targets. The DFS-R
protocol depends on the available bandwidth and the rate of change
in the file system. Changes are put into a queue
(the backlog queue) and processed sequentially. This means
that replication within a certain time cannot be guaranteed.
An automatic switch between folder targets
can therefore cause clients to access different datasets. There
is unfortunately no clear support statement from Microsoft about
this. Microsoft has published a few good notes on DFS Replication.
In one of our current projects, Microsoft has refused to support
a scenario with two active folder targets on two servers synchronized
by DFS-R. There, the second folder target has been disabled
and is activated manually in case of failure.
Data synchronization between Windows file servers can be done
with a special transfer protocol: DFS-R. DFS-R replication
does not require a DFS tree; it is independent of
any existing Distributed File System structure. A DFS-R-replicated
directory may even contain subdirectories that are linked into
various DFS trees.
WAN Optimization for DFS-R
DFS-R was highly optimized for use in WAN environments.
The original TCP specification set a maximum TCP window size of 64
KBytes. In the context of WAN connections, which
usually have high latency, the 64 KByte TCP window is a
bottleneck: data packets need to be acknowledged before further
packets can be sent, and at high latency this leads to very low
throughput.

RFC 1323 introduced the "TCP Window Scale" option,
which allows an increase of the TCP window to a
maximum of 16 MB. Using the TCP window scale option, DFS-R is not
very sensitive to latency anymore. You can still use up to 80% of
the bandwidth at latency times of 500 ms.
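The window-size limit translates directly into a throughput ceiling of window size divided by round-trip time. A quick check of the numbers above:

```python
def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """TCP throughput ceiling imposed by the receive window."""
    return window_bytes * 8 / rtt_seconds / 1e6

# 64 KByte window at 500 ms round-trip time: ~1 Mbit/s, regardless of link speed.
print(max_throughput_mbps(64 * 1024, 0.5))   # ~1.05
# 16 MB scaled window at the same latency: ~268 Mbit/s.
print(max_throughput_mbps(16 * 2**20, 0.5))  # ~268.4
```

This is why window scaling, not raw bandwidth, decides how fast DFS-R can move data across a high-latency WAN link.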
To save transmission bandwidth, the DFS-R protocol includes the
compression algorithm RDC (Remote Differential Compression).
RDC detects changes in Office
documents and transmits only the changes. If one of the two
replication partners runs the Enterprise Edition of
Windows Server, the advanced feature Cross-File RDC runs
automatically. If parts of the data already exist in other files on
the target server, Cross-File RDC uses these parts to locally create
the replicated files.

Replication of ordinary office documents with Cross-File RDC can
save 50% - 80% of the transferred volume.
Example: a Word document

Word saves changes while editing in a temporary file. When you
save your work, a new file is created. For the RDC
protocol it is a new file; changes cannot be detected, and the file
is transferred completely.

Only Cross-File RDC, which is active with the Enterprise
Edition of the server, can help here. Cross-File RDC compares
the file with already replicated data and assembles the file on the
target server from the replicated parts.
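The idea behind RDC - ship only the pieces the target does not already have - can be illustrated with a toy fixed-size-chunk model. Real RDC uses content-defined chunk boundaries and recursive signatures, so this is only a sketch of the principle:

```python
import hashlib

CHUNK = 4  # toy chunk size; real RDC chunks are much larger

def signatures(data: bytes) -> set:
    """Hash every fixed-size chunk of the data."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

def bytes_to_send(new: bytes, already_on_target: bytes) -> int:
    """Count the bytes of `new` whose chunk is missing on the target."""
    have = signatures(already_on_target)
    return sum(
        len(new[i:i + CHUNK])
        for i in range(0, len(new), CHUNK)
        if hashlib.sha256(new[i:i + CHUNK]).hexdigest() not in have
    )

old = b"AAAABBBBCCCCDDDD"
new = b"AAAABBBBXXXXDDDD"       # one 4-byte chunk changed
print(bytes_to_send(new, old))  # 4 of 16 bytes cross the wire
```

Cross-File RDC extends the same trick across files: chunks already present in any replicated file on the target can be reused locally.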
DFS-R is a multi-master replication. Changes can be performed on
all replication partners. Only the file locking status of the files
is not replicated.

If a Word document is changed on both replication partners, the
file with the most recent timestamp becomes the new version
(last writer wins). Therefore, it is not
recommended to work with write access on several replication
partners. A good example of a read-only replication is the Sysvol
directory of an Active Directory domain. Multiple replication
partners are available, but only with read access. There can be no
change conflicts, as Sysvol is read-only.
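The last-writer-wins rule amounts to keeping whichever copy carries the most recent timestamp (DFS-R moves the losing copy into its ConflictAndDeleted folder). A minimal model:

```python
def resolve_conflict(versions):
    """versions: list of (server, mtime) tuples for the same file.
    Returns (winner, losers) per last-writer-wins."""
    ordered = sorted(versions, key=lambda v: v[1], reverse=True)
    return ordered[0], ordered[1:]

winner, losers = resolve_conflict([("fs-hamburg", 1700000000),
                                   ("fs-hq", 1700000360)])
print(winner[0])  # fs-hq, because its write is 6 minutes later
```

Note that the losing edit is silently superseded, which is exactly why concurrent write access on several replication partners is discouraged.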
Other examples are hub/spoke implementations, which are used
for data backup. The spoke server is enabled for write access; the
hub server receives changes and may even have no active folder
target at all.

The most common case is to use a DFS tree to simplify access
to distributed file services for the end user. DFS-R
replication is often used for data backups at remote sites that
don't have a local backup infrastructure.
Designing a DFS tree needs some preparation. It is important to
know how the end users work.

A good example is a DFS tree for a company that has several
locations.
Without DFS, network drives are connected to the file server at
each site. The necessary UNC paths are difficult to understand for
most end users and difficult to remember. A DFS tree
can summarize these UNC paths into an abstract tree view and
make them transparent for the end user. The next design step combines
related data into virtual nodes.
The marketing department of our example company operates in
various locations: Hamburg, Frankfurt and Munich. A virtual DFS
folder "Marketing" has subfolders with the names of the locations.
These subfolders point to the file servers at the sites, and that
is fully transparent to the user. This way, all data of the
marketing department is nicely summarized in one node.
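The resulting namespace for this example might look like the following (all server and share names are invented for illustration):

```
\\example.com\dfs\Marketing\Hamburg    ->  \\fs-hamburg\marketing$
\\example.com\dfs\Marketing\Frankfurt  ->  \\fs-frankfurt\marketing$
\\example.com\dfs\Marketing\Munich     ->  \\fs-munich\marketing$
```

Users browse a single path, \\example.com\dfs\Marketing, and the referral mechanism resolves each subfolder to the right file server.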
You should be careful with the naming of the folders. Department
codes are not advisable, since each reorganization in the
company would result in an adjustment of the DFS tree.
Let us have a look at DFS-R. In our example, the IT
infrastructure of the sites should be consolidated. The focus of
the consolidation is the local backup servers. DFS-R will be used
to replicate the data to the central office, where it is
backed up. The most important design element here is the expected
rate of change. A good starting point may be an evaluation of recent
daily backups.
Rough estimate: replicating 500 GBytes of data with a daily change
rate of 3% and an RDC saving of 50% would require a long-term
average bandwidth of 0.7 Mbit/s.
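That estimate checks out arithmetically; a small helper makes the calculation explicit:

```python
def replication_bandwidth_mbps(volume_gb, daily_change_rate, rdc_saving):
    """Long-term average bandwidth needed to ship one day's changes in a day."""
    changed_bytes = volume_gb * 1e9 * daily_change_rate * (1 - rdc_saving)
    return changed_bytes * 8 / 86_400 / 1e6  # bits per day -> Mbit/s

# 500 GB volume, 3% daily change, 50% saved by RDC  ->  ~0.69 Mbit/s
print(round(replication_bandwidth_mbps(500, 0.03, 0.50), 2))
```

The same helper is handy for sizing the WAN link of any new spoke site before enabling replication.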
The employees in the marketing department have transparent access
to all data of their department regardless of the location they
are at. In addition, the availability of the file service is
increased: the data is replicated to the headquarters, and the
corresponding DFS links there are activated in the event of a
failure. Finally, you can save backup infrastructure at remote
locations, because the data is replicated with DFS-R to the
headquarters for backup.
by Barnaby Jack @barnaby_jack
I watched the TV show Homeland for the first time a few months ago. This particular episode had a plot twist that involved a terrorist remotely hacking into the pacemaker of the Vice President of the United States.
People follow this show religiously, and there were articles questioning the plausibility of the pacemaker hack. Physicians were questioned as to the validity of the hack and were quoted saying that this is not possible in the real world.
In the episode, a terrorist finds out that the Vice President has a bad heart and his pacemaker can be remotely accessed with the correct serial number. The terrorist convinces a congressman to retrieve the serial number, and then his hacker accomplice uses the serial to remotely interrogate the pacemaker. Once the hacker has connected to the pacemaker, he instructs the implantable device to deliver a lethal jolt of electricity. The Vice President keels over, clutches his chest, and dies.
My first thought after watching this episode was “TV is so ridiculous! You don't need a serial number!”
In all seriousness, the episode really wasn't too implausible. Let's dissect the Hollywood pacemaker hack and see how close to reality they were. First, some background on the implantable medical technology:
The two most common implantable devices for treating cardiac conditions are the pacemaker and the implantable cardioverter-defibrillator (ICD).
The primary function of the pacemaker is to monitor the heart rhythm of a patient and to ensure that the heart is beating regularly. If the pacemaker detects certain arrhythmia (an irregular heartbeat or abnormal heart rhythm), the pacemaker sends a low voltage pulse to the heart to resynchronize the electrical rhythm. A pacemaker does not have the capability to deliver a high voltage electrical shock.
Newer model ICD's not only have pacing functionality, but they are also able to deliver a high voltage shock to the heart. The ICD most commonly detects an arrhythmia known as ventricular fibrillation (VF), which is a condition that causes irregular contractions of the cardiac muscle. VF is the most common arrhythmia associated with sudden cardiac death. When an ICD detects VF, it will deliver a shock to the heart in an attempt to reverse the condition.
Back to Homeland, let's see how close Hollywood was to getting it right. Although they mention a pacemaker in the episode, the actual implantable device is an ICD.
HOMELAND: The congressman is told to retrieve the serial number from a plastic box in the Vice President’s office. The plastic box he retrieves the serial number from has a circular wand and telephone inputs on the side.
Fact! The plastic box on the episode is a commonly used remote monitor/bedside transmitter. The primary use of the bedside transmitter is to read patient data from the pacemaker or ICD and send the data to a physician over the telephone line or network connection. The benefit to this technology is reduced in-person visits to the physician’s clinic. The data can be analyzed remotely, and the physician can determine whether an in-person visit is required.
Before 2006, all pacemaker programming and interrogation was performed using inductive telemetry. Programming using inductive telemetry requires very close skin contact. The programming wand is held up to the chest, a magnetic reed switch is opened on the implant, and the device is then open for programming and/or interrogation. Communication is near field (sub-1 MHz), and data rates are less than 50 kHz.
The obvious drawback to inductive telemetry is the extremely close range required. To remedy this, manufacturers began implementing radio frequency (RF) communication on their devices and utilized the MICS (Medical Implant Communication Service) frequency band. MICS operates in the 402-405 MHz band and offers interrogation and programming from greater distances, with faster transfer speeds. In 2006, the FDA began approving fully wireless-based pacemakers and ICDs.
Recent remote monitors/bedside transmitters and pacemaker/ICD programmers support both inductive telemetry as well as RF communication. When communicating with RF implantable devices, the devices typically pair with the programmer or transmitter by using the serial number, or the serial number and model number. It's important to note that currently the bedside transmitters do not allow a physician to dial into the devices and reprogram the devices. The transmitter can only dial out.
So far, so good.
HOMELAND: The hacker receives the serial number on his cell phone and uses his laptop to remotely connect to the ICD. The laptop screen displays real-time electrocardiogram (ECG) readouts, heart beats-per-minute, and waveforms generated by the ICD. He then instructs his software to deliver defibrillation. Then it’s game over for the victim at the other end.
If you look past the usual bells and whistles that are standard issue with exploit software on TV, the scrolling hex dumps, the c code in the background—the waveform and beats per minute (BPM) diagnostic data from the implant that is shown in real-time is legitimate and can be retrieved over the implant’s wireless interface. The official pacemaker/ICD programmers used by the physicians have a similar interface. In the case of ICDs, the pacing and high voltage leads are all monitored, and the waveforms can be displayed in real time.
The attacker types in a command to remotely induce defibrillation on the victim’s ICD. It is possible to remotely deliver shocks to ICDs. The functionality exists for testing purposes. Depending on the device model and manufacturer, it is possible to deliver a jolt in excess of 800 volts.
Moving on to the remote attack itself, the TV episode never mentioned where the attacker was located, and even if it was possible to dial into the remote monitor, it was disconnected.
Some manufacturers have tweaked the MICS standard to allow communication from greater distances and to allow communication over different frequencies. Let's talk a little about how these devices communicate, using a fairly new model ICD as an example.
The Wake-up Scheme:
Before an implantable device can be reprogrammed, it must be “woken up”. Battery conservation is extremely important: when the battery has been depleted, surgery is required to replace the device. To conserve power, the implants continually run in a low power mode until they receive an encoded wake-up message. The transceiver in this ICD supports wake-up over the 400 MHz MICS band, as well as 2.45 GHz.
To wake-up and pair with an individual device, you must transmit the serial and model number in the request. Once a device has paired with the correct model and serial number and you understand the communications protocol, you essentially have full control over the device. You can read and write to memory, rewrite firmware, modify pacemaker/ICD parameters, deliver shocks, and so on.
When using a development board that uses the same transceiver, with no modifications to the stock antenna, reliable communication will max out at about 50 feet. With a high gain antenna, you could probably increase this range.
Going back to Homeland, the only implausible aspect of the hack was the range in which the attack was carried out. The attacker would have had to be in the same building or have a transmitter set up closer to the target. With that said, the scenario was not too far-fetched. The technology as it stands could very easily be adapted for physicians to remotely adjust parameters and settings on these devices via the bedside transmitters. In the future, a scenario like this could certainly become a reality.
As there are only a handful of people publicizing security research on medical devices, I was curious to know where the inspiration for this episode came from. I found an interview with the creators of the show where the interviewer asked where the idea for the plot twist came from . The creators said the inspiration for the episode was from an article in the New York Times, where a group of researchers had been able to cause an ICD to deliver jolts of electricity .
This work was performed by Kevin Fu and researchers at the University Of Massachusetts in 2008. Kevin is considered a pioneer in the field of medical device security. The paper he and the other researchers released (“Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses”) is a good read and was my initial inspiration for moving into this field.
I recommend reading the paper, but I will briefly summarize their ICD attack. Their paper dates back to 2008. At the time, the long range RF-based implantable devices were not as common as they are today. The device they used for testing has a magnetic reed switch, which needs to be opened to allow reprogramming. They initiated a wakeup by placing a magnet in close proximity to the device, which activated the reed switch. Then they captured an ICD shock sequence from an official ICD/pacemaker programmer and replayed that data using a software radio.
The obvious drawback was the close proximity required to carry out a successful attack. In this case, they were restricted by the technology available at the time.
At IOActive, I've been spending the majority of my time researching RF-based implants. We have created software for research purposes that will wirelessly scan for new model ICDs and pacemakers without the need for a serial or model number. The software then allows one to rewrite the firmware on the devices, modify settings and parameters, and in the case of ICDs, deliver high-voltage shocks remotely.
At IOActive, this research has a personal attachment. Our CEO Jennifer Steffen's father has a pacemaker. He has had to undergo multiple surgeries due to complications with his implantable device.
Our goal with this research is not for people to lose faith in these life-saving devices. These devices DO save lives. We are also extremely careful when demonstrating these threats to not release details that could allow an attack to be easily reproduced in the wild.
Although the threat of a malicious attack to anyone with an implantable device is slim, we want to mitigate these risks no matter how minor. We are actively engaging medical device manufacturers and sharing our knowledge with them.
At RSA this year, we once again have the presidential suite at the St. Regis for IOAsis. For anyone interested in medical devices or embedded security in general, I'll be giving a roundtable discussion in our suite on Tuesday. It will be a chance for you to get a closer look at our current research and to see what the future holds. Reserve your spot by RSVP'ing to RSVP@ioactive.com | <urn:uuid:cb2c60a6-0d4b-48c8-a3fa-753bfcb9f42f> | CC-MAIN-2017-04 | http://blog.ioactive.com/2013/02/broken-hearts-how-plausible-was.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941813 | 2,265 | 2.578125 | 3 |
Instruction–Level Parallelism: Instruction Prefetch
Break up the fetch–execute cycle and do the two in parallel.
This dates to the IBM Stretch (1959)
The prefetch buffer is implemented in the CPU with on–chip registers.
prefetch buffer is implemented as a single register or a queue.
The CDC–6600 buffer had a queue of length 8 (I think).
Think of the prefetch buffer as containing the IR (Instruction Register)
the execution of one instruction completes, the next one is already
in the buffer and does not need to be fetched.
Naturally, a program branch (loop structure,
conditional branch, etc.)
invalidates the contents of the prefetch buffer, which must be reloaded.
Instruction–Level Parallelism: Pipelining
Better considered as an “assembly line”
Note that the throughput is distinct from the time required for the execution of a single instruction. Here the throughput is five times the single instruction rate.
What About Two Pipelines?
Code emitted by a compiler tailored for this architecture has the possibility to run twice as fast as code emitted by a generic compiler.
Some pairs of instructions are not candidates for dual pipelining.
C = A + B C = A + B
D = A + C C = C / D
2, 4, or 8 completely independent pipelines on a CPU is very
resource–intensive and not directly in response to careful analysis.
the execution units are the slowest units by a large margin. It
is usually a better use of resources to replicate the execution units. | <urn:uuid:b1684a9d-c6a0-41c6-8bf5-b4af966f4d61> | CC-MAIN-2017-04 | http://edwardbosworth.com/My5155_Slides/Chapter09/PipeliningAndSuperscalar.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00269-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921471 | 338 | 3.421875 | 3 |
Here's how you can control which CPU is used by Windows applications on your system. 1. Right click on the taskbar and go to Task Manager. 2. Go to the Details tab. 3. Find the application process in the list. 4. Right click the application process and select Set Affinity in the menu. 5. A menu will appear that will let you select which processor(s) you want to allow that application to use.
For more, see the original article at the link below.
How To Force Windows Applications to Use a Specific CPU | How To Geek | <urn:uuid:9ebc7ce4-d765-4424-aef7-c9c71845d3f4> | CC-MAIN-2017-04 | http://www.itworld.com/article/2713180/enterprise-software/forced-windows-applications-to-use-specific-cpu.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00205-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.876512 | 118 | 2.59375 | 3 |
But what they often fail to consider is the governance, or taxonomy, that will enable them to reap the benefits of such systems.
As a definition, content is all of a company's stored knowledge, from Internets and Intranets to sales presentations, to the extensive technical documents and other proprietary materials it creates. And, in simplest terms, a company's taxonomy is the skeleton from which all content hangs.
Developing A Taxonomy
Content management systems have evolved considerably over the past few years to cover a broad range of capabilities, such as Web content management, document management and imaging, records management and team collaboration.
Companies such as Documentum, IBM and Open Text among others have developed suites of products that can solve most of a company's content management needs from a software perspective. However, these products won't analyze and develop a taxonomy.
Creating a taxonomy should be central to any enterprise content strategy. Without that framework, even the best technology may not meet expectations because of the numerous intranet sites and discrete pieces of information it has no way of interconnecting.
Even though developing a taxonomy is both expensive and difficult, the benefits are enormous: better and quicker access to information; improved relationships with partners, regulators and customers; and less duplication of work.
Forrester researchers note in Best Practices in Taxonomy Development and Management that, ideally, taxonomies "represent agreed-upon terms and relationships between ideas or things and serve as a glossary or knowledge map helping to define how the business thinks about itself and represents itself, its products and services to the outside world." | <urn:uuid:b69fdced-7cfa-4a78-988c-3781769c0e71> | CC-MAIN-2017-04 | http://www.cioupdate.com/insights/article.php/3352981/Getting-the-Most-from-Content-Management.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00351-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941185 | 328 | 2.53125 | 3 |
After 11 years, NASA scientists running the Mars Odyssey Orbiter have decided to switch the machine's redundant computing functions from one side to the other in an attempt to keep the technology serviceable as possible.
Odyssey, which spends its time performing a number of science functions like taking close-up shots of the Red Planet and relaying information from the Curiosity and Opportunity rovers operating on the surface of the planet, has redundant systems - side A and side B, NASA says.
"We have been on the A side for more than 11 years. Everything on the A side still works, but the inertial measurement unit on that side has been showing signs of wearing out," said Odyssey Mission Manager Chris Potts at NASA's Jet Propulsion Laboratory, in a release. "We will swap to the B side on Nov. 5 so that we still have some life available in reserve on the A side. The spare inertial measurement unit is factory new, last operated on the day before launch."
The inertial measurement unit is a gyroscope mechanism senses changes in the spacecraft's orientation, providing important information for control of pointing the antenna, solar arrays and instruments, NASA said.
The side swap will take place on Nov. 5 and will put Odyssey into "safe mode." As the team and the spacecraft verify all systems can operate well over the following several days, the orbiter will return to full operations, conducting its own science observations, as well as serving as a communications relay, NASA said. The Curiosity and Opportunity rover teams will reduce the amount of data planned for downlinking until Odyssey returns to full capacity after the side swap is complete, and will maintain near-normal tactical operations in the interim, NASA said.
According to NASA Odyssey's website, all of Odyssey's computing functions are performed by the command and data handling subsystem. The heart of this subsystem is a RAD6000 computer, a radiation-hardened version of the PowerPC chip used on most models of Macintosh computers. With 128 megabytes of RAM and three megabytes of non-volatile memory, which allows the system to maintain data even without power, the subsystem runs Odyssey's flight software and controls the spacecraft through interface electronics.
In addition, using three redundant pairs of sensors, the guidance, navigation and control subsystem determines the spacecraft's orientation, or "attitude." A sun sensor is used to detect the position of the sun as a backup to the star camera. A star camera is used to look at star fields. Between star camera updates, a device called the inertial measurement unit collects information on spacecraft orientation.
This system also includes the reaction wheels, gyro-like devices used along with thrusters to control the spacecraft's orientation. Like most spacecraft, Odyssey's orientation is held fixed in relation to space as opposed to being stabilized via spinning. There are a total of four reaction wheels, with three used for primary control and one as a backup, NASA says.
Odyssey launched April 7, 2001, began orbiting Mars on Oct. 24 of that year, began systematic science observations of Mars in early 2002, and broke the previous record for longest-working Mars spacecraft in December 2010.
Check out these other hot stories: | <urn:uuid:a2983e0a-f72f-4bad-8e46-512aaa9830a1> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2223434/data-center/nasa-shifts-vital-computer-tasks-onboard-long-running-mars-odyssey-satellite.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00131-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930793 | 651 | 2.65625 | 3 |
The Electronic Numerical Integrator and Calculator (ENIAC)
The next significant development in computing machines took place at the Moore School of Electrical Engineering at the University of Pennsylvania.
At the time the ENIAC was proposed, it was well understood that such a machine could be built. The problem was the same as noted by Zuse and his assistants – how to create a large machine using the fragile vacuum tubes of the day. The ENIAC, as finally built, had over 18,000 vacuum tubes. Without special design precautions, such a complex electronic machine would not have remained functional for any significant time interval, let alone perform any useful work.
The steps taken by Eckert to solve the tube reliability problem are described in a paper “Classic Machines: Technology, Implementation, and Economics” in [R11].

Eckert solved the tube reliability problem with a number of techniques:

“· Tube failures were bimodal, with more failures near the start of operation as well as after many hours and less during the middle operating hours. In order to achieve better reliability, he used tubes that had already been burnt-in [powered up and left to run for a short time], and had not suffered from ‘infant mortality’ [had not burned out when first turned on].

· The machine was not turned off at night because power cycling [turning the power off and on] was a source of increased failures.

· Components were run at only half of their rated capacity, reducing the stress and degradation of the tubes and significantly increasing their lifetimes.

· Binary circuits were used, so tubes that were becoming weak or leaky would not cause an immediate failure, as they would in an analog computer.

· The machine was built from circuit modules whose failure was easy to ascertain and that could be quickly replaced with identical modules.”
“As a result of this excellent engineering, ENIAC was able to run for as long as a day or two without a failure! ENIAC was capable of 5,000 operations per second. It took up 2,400 cubic feet [about 70 cubic meters] of circuitry, weighed 30 tons, and consumed 140kW of power.”
While the ENIAC lacked what we would call memory, it did have twenty ten-digit accumulators used to store intermediate results. Each of the ten digits in an accumulator was stored in a shift register of ten flip-flops (for a total of 100 flip-flops per register). According to our reference, a 10–digit accumulator required 550 vacuum tubes, so that the register file by itself required 11,000 vacuum tubes. A bit of arithmetic will show that a 10–digit integer requires 34 bits to represent. Roughly speaking, each accumulator held four 8–bit bytes.
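The bit count claimed above is easy to check with a short Python computation (a modern aside, of course, not part of the original machine):

```python
# Check the arithmetic above: a ten-digit decimal accumulator must be
# able to represent any value up to 10**10 - 1.
max_value = 10**10 - 1
bits_needed = max_value.bit_length()
print(bits_needed)   # 34 bits, i.e. a little over four 8-bit bytes
```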
It is this arrangement of the accumulators that might give rise to some confusion. In its instruction architecture, the ENIAC was a decimal machine and not a binary machine; each number being stored as a collection of digits rather than in binary form. As noted above, the ENIAC did use binary devices to store those digits, probably coded with BCD code, and thus gained reliability.
The next figure shows a module for a single digit in a decimal accumulator. Although this circuit was not taken from the ENIAC, it is sufficiently compatible with the technology used in the ENIAC that it might have been taken from that machine.
Single Digit from a Burroughs 205 Accumulator, circa 1954.
The ENIAC was physically a very large machine as shown by the pictures below.
Figure: Two Pictures of the ENIAC
There are several ways to describe the size of the ENIAC. The pictures above, showing an Army private standing at an instrument rack is one. Other pictures include two women programming the ENIAC (using plug cables) or four women, each of whom is holding the circuit to store a single digit in a progression of technologies.
Miss Patsy Simmers, holding an ENIAC board (Operational in Fall 1945)
Mrs. Gail Taylor, holding an EDVAC board (Delivered in August 1949)
Mrs. Milly Beck, holding an ORDVAC board (Operational in March 1952)
Mrs. Norma Stec, holding a BRLESC-I board (Delivered in May 1961)

Figure: Four Women With Register Circuits
It is interesting to note that in 1997, as a part of the 50th anniversary of the ENIAC, a group of students at the University of Pennsylvania re–implemented the entire machine as a single silicon chip, the “ENIAC–on–a–Chip”.
Figure: Two women programming the ENIAC
(Miss Gloria Ruth Gorden on the left and Mrs. Ester Gertson on the right)
Some Reminiscences about the ENIAC
The following is taken from an interview with J. Presper Eckert, one of the designers of the ENIAC. It was published in the trade journal, Computerworld, on February 20, 2006.
“In the early 1940’s, J. Presper Eckert was the designer and chief engineer building ENIAC, the first general-purpose all-electronic computer (see story, page 18). It was a huge undertaking; ENIAC was the largest electronic device that had ever been built. So why did Eckert – on a tight schedule and with a limited staff – take time out to feed electrical wire to mice?
Because he knew that ENIAC’s hundreds of miles of wiring would be chewed by the rodents. So he used a cageful of mice to taste-test wire samples. The wire whose insulation the mice chewed on least was the stuff Eckert’s team used to wire up the ENIAC. It was an elegant solution to an unavoidable problem.”
In the same issue, an earlier interview with Mr. Eckert was recalled.
“In our 1989 interview, I asked ENIAC co-inventor J. Presper Eckert to recall the zaniest thing he did while developing the historic computer. His answer:
‘The mouse cage was pretty funny. We knew mice would eat the insulation off the wires, so we got samples of all the wires that were available and put them in a cage with a bunch of mice to see which insulation they did not like. We used only wire that passed the mouse test’.”
The Electronic Discrete Variable Automatic Calculator (EDVAC)
The ENIAC, as a result of its quick production in response to war applications, suffered from a number of shortcomings. The most obvious shortcoming is that it had no random access memory. As it had no place to store a program, it was not a stored–program computer.
The EDVAC, first proposed in 1945 and made operational in 1951, was the first true stored program computer in that it had a memory that could be used to store the program. In this sense, the EDVAC was the earliest design that we would call a computer. This design arose from the experience gained on the ENIAC and was proposed in a report “First Draft of a Report on the EDVAC” written in June 1945 by J. Presper Eckert, John Mauchly, and John von Neumann. The report was distributed citing only von Neumann as the author, so that it has only been recently that Eckert and Mauchly have been recognized as coauthors. In the meantime, the name “von Neumann machine” has become synonymous with “stored program computer” as if von Neumann had created the idea all by himself.
Unlike modern computers, the EDVAC and EDSAC (described below) used bit serial arithmetic. This choice was driven by the memory technology available at the time; it was a serial memory based on mercury acoustic delay lines; each bit of a word had to be either read or written sequentially. To access a given word (44 bits) on its memory, the EDVAC had to wait on the given word to circulate to the memory reader and then read it out, one bit at a time. This memory held 1,024 44–bit words, about 5.5 kilobytes.
The EDVAC had almost 6,000 vacuum tubes and 12,000 solid–state diodes. The CPU addition time was 864 microseconds; the multiplication time 2.9 milliseconds. On average, the EDVAC had a mean time between failures of eight hours and ran 20 hours a day.
The goals of the EDVAC, as quoted from the report, were as follows. Note the use of the word “organs” for what we would now call “components” or “circuits”. [R 37]
“1. Since the device is primarily a computer it will have to perform the elementary operations of arithmetic most frequently. Therefore, it should contain specialized organs for just these operations, i.e., addition, subtraction, multiplication, and division.”
“2. The logical control of the device (i.e., proper sequencing of its operations) can best be carried out by a central control organ.”
“3. A device which is to carry out long and complicated sequences of operation must have a considerable memory capacity.”
“4. The device must have organs to transfer information from the outside recording medium of the device into the central arithmetic part and central control part, and the memory. These organs form its input.”
“5. The device must have organs to transfer information from the central arithmetic part and central control part, and the memory into the outside recording medium. These organs form its output.”
At this point the main challenge in the development of stored program computers is the availability of technologies suitable for creation of the memory. The vacuum tubes available at the time could have been used to create modest-sized memories, but the memory would have been too expensive and not reliable. Consider an 8KB memory – 8,192 bytes of 8 bits each, corresponding to 65,536 bits. A vacuum tube flip-flop implementation of such a memory would have required one vacuum tube per bit, for a total of 65,536 vacuum tubes.
While the terms here are anachronistic, the problem was very real for that time.
Given the above calculation, we see that the big research issue for computer science in the late 1940’s and early 1950’s was the development of inexpensive and reliable memory elements. Early attempts included electrostatic memories and various types of acoustic delay lines (such as mercury delay lines).
The Electronic Delay Storage Automatic Calculator (EDSAC)
While the EDVAC stands as the first design for a stored program computer, the first such computer actually built was the EDSAC by Maurice Wilkes of the University of Cambridge.
In building the EDSAC, the first problem that Wilkes attacked was that of devising a suitable memory unit. By early 1947, he had designed and built the first working mercury acoustic delay memory (see the following figure). It consisted of 16 steel tubes, each capable of storing 32 words of 17 bits each (16 bits plus a sign) for a total of 512 words. The delay lines were bit-serial memories, so the EDSAC had a bit-serial design. The reader is invited to note that the EDSAC was a very large computer, but typical of the size for its time.
The next two figures illustrate the EDSAC I computer, as built at the University of Cambridge.
Figure: Maurice Wilkes and a Set of 16 mercury delay lines for the EDSAC [R 38]
Figure: The EDSAC I [R39]
Note that the computer was built large, with components on racks, so that technicians could have easy access to the electronics when doing routine maintenance.
The design of the EDSAC was somewhat conservative for the time, as Wilkes wanted to create a machine that he could actually program. As a result, the machine cycle time was two microseconds (500 kHz clock rate), slower by a factor of two than other machines being designed at the time. The project pioneered a number of design issues, including the first use of a subroutine call instruction that saved the return address and the use of a linking loader. The subroutine call instruction was named the Wheeler jump, after David J. Wheeler who invented it while working as a graduate student [R11, page 3]. Another of Wheeler’s innovations was the relocating program loader.
Magnetic Core Memory
The reader will note that most discussions of the “computer generations” focus on the technology used to implement the CPU (Central Processing Unit). Here, we shall depart from this “CPU–centric view” and discuss early developments in memory technology. We shall begin this section by presenting a number of the early technologies and end it with what your author considers to be the last significant contribution of the first generation – the magnetic core memory. This technology was introduced with the Whirlwind computer, designed and built at MIT in the late 1940’s and early 1950’s.
The reader will recall that the ENIAC did not have any memory, but just used twenty 10–digit accumulators. For this reason, it was not a stored program computer, as it had no memory in which to store the program. To see the reason for this, let us perform a thought experiment and design a 4 kilobyte (4,096 byte) memory for the ENIAC. We shall imagine the use of flip–flops fabricated from vacuum tubes, a technology that was available to the developers of the ENIAC. We imagine the use of dual–triode tubes, also readily available and well understood throughout the 1940’s.
A dual–triode implementation requires one tube per bit stored, or eight tubes per byte. Our four kilobyte memory would thus require 32,768 tubes just for the data storage. To this, we would add a few hundred vacuum tubes to manage the memory; about 34,000 tubes in all.
Recall that the original ENIAC, having 18 thousand tubes, consumed 140 kilowatts of power might run one or two days without a failure. Our new 52 thousand tube design would have cost an exorbitant amount, consumed 400 kilowatts of power, and possibly run 6 hours at a time before a failure due to a bad tube. An ENIAC with memory was just not feasible.
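The estimates in this thought experiment follow from a crude proportional model, sketched below; real tube failure rates were not exactly proportional to tube count, so the figures are only indicative:

```python
# Scale ENIAC's power linearly with tube count, and its mean time
# between failures inversely with tube count (a crude model).
eniac_tubes, eniac_power_kw = 18_000, 140
new_tubes = eniac_tubes + 34_000       # ENIAC plus the 4 KB tube memory

power_kw = eniac_power_kw * new_tubes / eniac_tubes
print(round(power_kw))                 # ~404 kW, matching the ~400 kW estimate

mtbf_hours = 24 * eniac_tubes / new_tubes   # starting from a one-day MTBF
print(round(mtbf_hours, 1))                 # ~8.3 hours between failures
```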
Maurice Wilkes of the University of Cambridge had avoided this problem in the EDSAC by using the mercury acoustic delay line memory described above.
The next step in the development of memory technology was taken at the University of Manchester.
Each of the Manchester Baby and the Manchester/Ferranti Mark I computers used a new technology, called the “Williams–Kilburn tube”, as an electrostatic main memory. This device was developed in 1946 by Frederic C. Williams and Tom Kilburn [R43]. The first design could store either 32 or 64 words (the sources disagree on this) of 32 bits each.
The tube was a cathode ray tube in which the binary bits stored would appear as dots on the screen. One of the early computers with which your author worked was equipped with a Williams–Kilburn tube, but that was no longer in use at the time (1972). The next figure shows the tube, along with a typical display; this has 32 words of 40 bits each.
Early Williams–Kilburn tube and its display
Note the extra line of 20 bits at the top.
While the Williams–Kilburn tube sufficed for the storage of the prototype Manchester Baby, it was not large enough for the contemplated Mark I. The design team decided to build a hierarchical memory, using double–density Williams–Kilburn tubes (each storing two pages of thirty–two 40–bit words), backed by a magnetic drum holding 16,384 40–bit words.
The magnetic drum can be seen to hold 512 pages of thirty–two 40–bit words each. The data were transferred between the drum memory and the electrostatic display tube one page at a time, with the revolution speed of the drum set to match the display refresh speed. The team used the experience gained with this early block transfer to design and implement the first virtual memory system on the Atlas computer in the late 1950’s.
Here is a picture of the drum memory from the IBM 650, from about 1954. This drum was 4 inches in diameter, 16 inches long, and stored 2,000 10–digit numbers. When in use, the drum rotated at 12,500 rpm, or about 208 revolutions per second.
Figure: The IBM 650 Drum Memory
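The rotational delay implied by these figures is easy to work out: on average, the desired data are half a revolution away from the read head. A short Python sketch:

```python
# Average rotational latency of the IBM 650 drum.
rpm = 12_500
rev_per_sec = rpm / 60
print(round(rev_per_sec))                 # ~208 revolutions per second

avg_latency_ms = 1000 / rev_per_sec / 2   # half a revolution, in ms
print(round(avg_latency_ms, 2))           # ~2.4 ms average wait per access
```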
The MIT Whirlwind and Magnetic Core Memory
The MIT Whirlwind was designed as a bit-parallel machine with a 5 MHz clock. The following quote describes the effect of replacing the electrostatic memory of the original design with the new core memory modules.
“Initially it [the Whirlwind] was designed using a rank of sixteen Williams tubes, making for a 16-bit parallel memory. However, the Williams tubes were a limitation in terms of operating speed, reliability, and cost.”
“To solve this problem, Jay W. Forrester and others developed magnetic core memories. When the electrostatic memory was replaced with a primitive core memory, the operating speed of the Whirlwind doubled, and the maintenance time on the machine fell from four hours per day to two hours per week, and the mean time between memory failures rose from two hours to two weeks! Core memory technology was crucial in enabling Whirlwind to perform 50,000 operations per second in 1952 and to operate reliably for relative long periods of time” [R11]
The reader should consider the previous paragraph carefully and ask how useful computers would be if we had to spend about half of each working day in maintaining them.
Basically core memory consists of magnetic cores, which are small doughnut-shaped pieces of magnetic material. These can be magnetized in one of two directions: clockwise or counter-clockwise; one orientation corresponding to a logic 0 and the other to a logic 1. Associated with the cores are current carrying conductors used to read and write the memory. The individual cores were strung by hand on the fine insulated wires to form an array. It is a fact that no practical automated core-stringing machine was ever developed; thus core memory was always a bit more costly due to the labor intensity of its construction.
The original core memory modules were quite large. This author used a Digital Equipment Corporation PDP–9 in 1972 while doing his Ph.D. research.
To more clearly illustrate core memory, we include picture from the Sun Microsystems web site. Note the cores of magnetic material with the copper wires interlaced.
Figure: Close–Up of a Magnetic Core Memory [R40]
The IBM System/360 Model 85 and Cache
Although the development of magnetic core memory represented a significant development in computer technology, it did not solve one of the essential problems. There are two goals for the development of a memory technology; it should be cheap and it should be fast. In general, these two goals are not compatible.
The problem of providing a large inexpensive memory that appeared to be fast had been considered several times before it was directly addressed on the IBM System/360, Model 85. The System/360 family of third–generation computers will be discussed in some detail in the next pages; here we want to focus on its contribution to memory technology. Here we quote from the paper by J. S. Liptay [R58], published in 1968. In that paper, Liptay states:
“Among the objectives of the Model 85 is that of providing a System/360 compatible processor with both high performance and high throughput. One of the important ingredients of high throughput is a large main storage capacity. However, it is not feasible to provide a large main storage with an access time commensurate with the 80–nanosecond processor cycle of the Model 85. … [W]e decided to use a storage hierarchy. The storage hierarchy consists of a 1.04–microsecond main storage and a small, fast store called a cache, which is integrated into the CPU. The cache is not addressable by a program, but rather is used to hold those portions of main storage that are currently being used.”
The main storage unit for the computer was either an IBM 2365–5 or an IBM 2385, each of which had a capacity between 512 kilobytes and 4 megabytes. The cache was a 16 kilobyte integrated storage, capable of operating at the CPU cycle time of 80 nanoseconds. Optional upgrades made available a cache with either 24 kilobytes or 32 kilobytes.
Both the cache and main memory were divided into sectors, each holding 1 kilobyte of address space that was aligned on 1 KB address boundaries. Each sector was further divided into sixteen blocks of 64 bytes each; these correspond to what we now call “pages”. The cache memory was connected to the main memory via a 16–byte (128 bit) data path so that the processor could fetch or store an entire block in four requests to the main memory. The larger main memory systems were four–way interleaved, so that a request for a block was broken into four requests for 16 bytes each, and sent to each of the four memory banks.
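The mapping of a 64–byte block transfer onto four–way interleaved memory can be sketched as follows; the addressing details here are assumptions for illustration, not taken from IBM documentation:

```python
# Successive 16-byte units of a 64-byte block fall in successive banks,
# so all four requests can proceed in parallel.
UNIT = 16          # width of the data path, in bytes
BANKS = 4

def bank_of(address: int) -> int:
    """Bank holding the 16-byte unit that contains this byte address."""
    return (address // UNIT) % BANKS

block_start = 0x1540                   # any 64-byte aligned block address
units = [block_start + i * UNIT for i in range(4)]
print([bank_of(a) for a in units])     # [0, 1, 2, 3] -- one request per bank
```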
While it required two processor cycles to fetch data in the cache (one to check for presence and one to fetch the data), it was normally the case that requests could be overlapped. In that case, the CPU could access data on every cycle.
In order to evaluate various cache management algorithms, the IBM engineers compared real computers to a hypothetical computer having a single large memory operating at cache speed. This was seen as the upper limit for the real Model 85. For a selected set of test programs, the real machine was seen to operate at an average of 81% of the theoretical peak performance; presumably meaning that it acted as if it had a single large memory with access time equal to 125% that of the cache access time.
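These figures can be related through a simple two–level model of average access time, AMAT = h·t_cache + (1 − h)·t_main. The sketch below (an illustration only; IBM’s actual evaluation used program traces, not this formula) finds the hit ratio that would make the machine behave like a single memory at 125% of the cache access time:

```python
# Two-level average-access-time model for the Model 85 hierarchy,
# using the cycle times quoted above.
t_cache, t_main = 80, 1040          # nanoseconds
target = 1.25 * t_cache             # 100 ns effective access time

# Solve  target = h * t_cache + (1 - h) * t_main  for the hit ratio h.
h = (t_main - target) / (t_main - t_cache)
print(round(h, 3))                  # ~0.979 -- roughly a 98% hit ratio
```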
Almost all future computers would be designed with a fast cache memory fronting a larger and slower memory. Later developments would show the advantage of having a split cache with an I–cache for instructions and a D–cache for data, as well as multilevel caches. We shall cover these advances in later chapters of the textbook; this is where it began.
Interlude: Data Storage Technologies In the 1950’s
We have yet to address methods commonly used by early computers for storage of off–line data; that is, data that were not actually in the main memory of the computer. One of the main methods was the use of 80–column punch cards, such as seen in the figure below.
Figure: 80–Column Punch Card (Approximately 75% Size)
These cards would store at most eighty characters, so that the maximum data capacity was exactly eighty bytes. A decent–sized program would require a few thousand of these cards to store the source code; these were transported in long boxes. On more than one occasion, this author has witnessed a computer operator drop a box of program cards, scattering the cards in random order and thus literally destroying the computer program.
Seven–Track Tapes and Nine–Track Tapes
Punch cards were widely used by businesses to store account data up to the mid–1950’s. At some point, the requirement to keep all of those punch cards became quite burdensome; one life insurance company reported that it had to devote two floors of its modern skyscraper to storing them.
Data centers began to use larger magnetic tape drives for data storage. Early tape drives were called seven–track units in that they stored seven tracks of information on the tape: six tracks of data and one track of parity bits. Later nine–track designs stored eight data bits and one parity bit.
The picture at left shows the first magnetic tape drive produced by IBM. This was the IBM 726, first released in 1952. The novelty of the design is reflected by an actual incident at the IBM receiving dock. The designers were expecting a shipment of magnetic tape when they were called by the foreman of the dock with the news that “We just received a shipment of tape from 3M, but are going to send it back … It does not have any glue on it”.
The next picture shows a typical data center based on the IBM 7090 computer, in use from 1958 through 1969. Note the large number of tape drives. I can count at least 16.
The picky reader will note that the IBM 7090 is a Generation 2 computer that should not be discussed until the next section. We use the picture because the tape drives are IBM 729’s, direct descendants of IBM’s first tape units, and because the configuration is typical of data centers of the time.
The IBM RAMAC
The next step in the evolution of data storage is what became the disk drive. Again, IBM led the way in developing this storage technology, releasing the RAMAC (Random Access Method of Accounting and Control) in 1956.
Figure: The RAMAC (behind the man’s head)
The RAMAC had fifty aluminum disks, each of which was 24 inches in diameter. Its storage capacity is variously quoted at 4.4MB or five million characters. At 5,000,000 characters we have 100,000 characters per 24–inch disk. Not very impressive for data density, but IBM had to start somewhere.
Vacuum Tube Computers (Generation 1 – a parting shot)
By the end of the first generation of computing machines in the middle 1950’s, most of the elements of a modern stored program computer had been put in place – the idea of a stored program, a reasonably efficient and reliable memory element, and the basic design of both the central processing unit and the I/O systems.
We leave this generation with a few personal memories of this author. First, there is the IBM 650, a vacuum-tube based computer in production from 1953 through 1962. This author recalls joining a research group in 1971, on the day an IBM 650 was being removed to storage to be replaced by a modern PDP–9.
Figure: The IBM 650 – Power Supply, Main Unit, and Read–Punch Unit
The reader will note that the I/O unit for the computer is labeled a “Read–Punch Unit”. The medium for communicating with this device was the punched card. Thus, the complete system had two components not seen here – a device for punching cards and a device for printing the contents of those cards onto paper for reading by humans.
It was also during the 1950’s that this author’s Ph.D. advisor, Joseph Hamilton, was a graduate student in a nuclear physics lab that had just received a very new multi-channel analyzer (a variant of a computer). He was told to move it and, unaware that it was top-heavy, managed to tip it over and shatter every vacuum tube in the device. This amounted to a loss of a few thousand dollars – not a good way to begin a graduate career.
One could continue with many stories, including the one about the magnetic drum unit on the first computer put on a sea-going U.S. Navy ship. The drum was spun up and everything was going well until the ship made an abrupt turn and the drum, being basically a large gyroscope, was ripped out of its mount and fell on the floor.
Discrete Transistor Computers (Generation 2 – from 1958 to 1966)
The next generation of computers is based on a device called the transistor. Early forms of transistors had been around for quite some time, but were not regularly used due to a lack of understanding of the atomic structure of crystalline germanium. This author can recall the use of an early germanium rectifier in a radio he built as a Cub Scout project – the instructions were to find the “sweet spot” on the germanium crystal for the “cat’s whisker” rectifier. Early designs (before the late 1940’s) involved finding this “sweet spot” without any understanding of what made such a spot special. At some time in the 1940’s it was realized that there were two forms of doped germanium – called “P type” and “N type”. Methods were soon devised to produce crystals of mostly one type, and transistors became possible.
The transistor was invented by Walter Brattain and John Bardeen at Bell Labs in December 1947 and almost instantly improved by William Shockley (also of Bell Labs) in February 1948. The transistor was announced to the world in a press conference during June 1948.
In April 1952, Bell Labs held the Transistor Technology Symposium. For eight days the attendees worked day and night to learn the tricks of building transistors. At the end of the symposium, they were sent home with a two–volume book called “Transistor Technology” that later became known as “Mother Bell’s Cookbook”.
The previous two paragraphs were adapted from material on the Public Broadcasting Service web page http://www.pbs.org/transistor/, which contains much additional material on the subject.
While transistors had appeared in early test computers, such as the TX–0 built at MIT, it appears that the first significant computer to use transistor technology was the IBM 7030, informally known as the “Stretch”. Development of the Stretch began in 1955 with the goal of producing a computer one hundred times faster than either the IBM 704 or IBM 705, the two vacuum tube computers being sold by IBM at the time for scientific and commercial use, respectively. While missing its goal of being one hundred times faster, the IBM 7030 did achieve significant speed gains by use of new technology. [R11]
1. By switching to germanium transistor technology, the circuitry ran ten times faster.
2. Memory access became about six times faster, due to the use of core memory.
At this time, we pick up our discussion of computer development by noting a series of computers developed and manufactured by the Digital Equipment Corporation (DEC) of Maynard, Massachusetts.
The technology used in these computers was a module comprising a small number of discrete transistors and dedicated to one function. The following figure shows a module, called a Flip Chip by DEC, used on the PDP–8 in the late 1960’s.
Figure: Front and Back of an R205b Flip-Chip (Dual Flip-Flop) [R42]
In the above figure, one can easily spot many of the discrete components. The orange pancake-like items are capacitors, the cylindrical devices with colorful stripes are resistors with the color coding indicating their rating and tolerances, the black “hats” in the column second from the left are transistors, and the other items must be something useful. The size of this module is indicated by the orange plastic handle, which is fitted to a human hand.
These circuit cards were arranged in racks, as is shown in the next figure.
Figure: A Rack of Circuit Cards from the Z–23 Computer (1961)
The DEC PDP–8 was one of the world’s first minicomputers, so named because it was small enough to fit on a desk. The name “minicomputer” was suggested by the miniskirt, a short skirt much in vogue during that time. This author can say a lot of interesting things about miniskirts of that era, but prefers to avoid controversy.
To show the size of a smaller minicomputer, we use a variant of the PDP–1 from about 1961 and the PDP–8/E (a more compact variant of the PDP–8) from 1970. One can see that the PDP–1 with its line of cabinets is a medium–sized computer, while the PDP–8/E is rather small. It is worth noting that one of the first video games, Spacewar! by Steve Russell, was run on a PDP–1.
The history of the development of DEC computers is well documented in a book titled Computer Engineering: A DEC View of Hardware System Design, by C. Gordon Bell, J. Craig Mudge, and John E. McNamara [R1]. DEC built a number of “families” of computers, including one line that progressed from the PDP–5 to the PDP–8 and thence to the PDP–12. We shall return to that story after considering the first and second generation computers that led to the IBM System/360, one of the first third generation computers.
The Evolution of the IBM–360
We now return to a discussion of “Big Iron”, a name given informally to the larger IBM mainframe computers of the time. Much of this discussion is taken from an article in the IBM Journal of Research and Development [R46], supplemented by articles from [R1]. We trace the evolution of IBM computers from the first scientific computer (the IBM 701, announced in May 1952) through the early stages of the S/360 (announced in April 1964).
We begin this discussion by considering the situation as of January 1, 1954. At the time, IBM had three models announced and shipping. Two of these were the IBM 701 for scientific computations and the IBM 702 for financial calculations (announced in September 1953). Each of the designs used Williams–Kilburn tubes for primary memory, and each was implemented using vacuum tubes in the CPU. Neither computer supported both floating–point (scientific) and packed decimal (financial) arithmetic, as the cost to support both would have been excessive. As a result, there were two “lines”: scientific and commercial.
The third model was the IBM 650, mentioned and pictured in the above discussion of the first generation. It was designed as “a much smaller computer that would be suitable for volume production. From the outset, the emphasis was on reliability and moderate cost”. The IBM 650 was a serial, decimal, stored–program computer with fixed length words each holding ten decimal digits and a sign. Later models could be equipped with the IBM 305 RAMAC (the disk memory discussed and pictured above). When equipped with terminals, the IBM 650 started the shift towards transaction–oriented processing.
The figure below can be considered as giving the “family tree” of the IBM System/360. Note that there are four “lines”: the IBM 650 line, the IBM 701 line, the IBM 702 line, and the IBM 7030 (Stretch). The System/360 (so named because it handled “all 360 degrees of computing”) was an attempt to consolidate these lines and reduce the multiplicity of distinct systems, each with its distinct maintenance problems and costs.
Figure: The IBM System/360 “Family Tree”
As mentioned above, in the 1950’s IBM supported two product lines: scientific computers (beginning with the IBM 701) and commercial computers (beginning with the IBM 702). Each of these lines was redesigned and considerably improved in 1954.
Generation 1 of the Scientific Line
In the IBM 704 (announced in May 1954), the Williams–Kilburn tube memory was replaced by magnetic–core memory with up to 32768 36–bit words. This eliminated the most difficult maintenance problem and allowed users to run larger programs. At the time, theorists had estimated that a large computer would require only a few thousand words of memory. Even at this time, the practical programmers wanted more than could be provided.
The IBM 704 also introduced hardware support for floating–point arithmetic, which was omitted from the IBM 701 in an attempt to keep the design “simple and spartan” [R46]. It also added three index registers, which could be used for loops and address modification. As many scientific programs make heavy use of loops over arrays, this was a welcome addition.
The IBM 709 (announced in January 1957) was basically an upgraded IBM 704. The most important new function was then called a “data–synchronizer unit”; it is now called an “I/O Channel”. Each channel was an individual I/O processor that could address and access main memory to store and retrieve data independently of the CPU. The CPU would interact with the I/O Channels by use of special instructions that later were called channel commands.
It was this flexibility, as much as any other factor, that led to the development of a simple supervisory program called the IOCS (I/O Control System). This attempt to provide support for the task of managing I/O channels and synchronizing their operation with the CPU represents an early step in the evolution of the operating system.
Generation 1 of the Commercial Line
The IBM 702 series differed from the IBM 701 series in many ways, the most important of which was the provision for variable–length decimal arithmetic. In contrast to the 36–bit word orientation of the IBM 701 series, this series was oriented towards alphanumeric characters, with each character being encoded as 6 bits with an appended parity check bit. Numbers could have any length from 1 to 511 digits, and were terminated by a “data mark”.
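The character format just described can be illustrated with a short parity sketch. Odd parity is assumed here purely for illustration (tape and memory systems of the era used both conventions), and the helper name is ours, not IBM's.

```python
# Appending a parity check bit to a 6-bit character code, as in the
# 702/705 character format described above. Odd parity assumed: the
# total count of 1 bits, including the check bit, must be odd.
def add_parity(ch6):
    """Given a 6-bit code (0..63), return 7 bits: check bit + character."""
    ones = bin(ch6).count("1")
    parity = 0 if ones % 2 == 1 else 1
    return (parity << 6) | ch6

print(format(add_parity(0b000111), "07b"))  # -> 0000111 (3 ones, already odd)
print(format(add_parity(0b000011), "07b"))  # -> 1000011 (2 ones, check bit set)
```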
The IBM 705 (announced in October 1954) represented a considerable upgrade to the 702. The most significant change was the provision of magnetic–core memory, removing a considerable nuisance for the maintenance engineers. The size of the memory was at first doubled and then doubled again to 40,000 characters. Later models could be provided with one or more “data–synchronizer units”, allowing buffered I/O independent of the CPU.
Generation 2 of the Product Lines
As noted above, the big change associated with the transition to the second generation is the use of transistors in the place of vacuum tubes. Compared to an equivalent vacuum tube, a transistor offers a number of significant advantages: decreased power usage, decreased cost, smaller size, and significantly increased reliability. These advantages facilitated the design of increasingly complex circuits of the type required by the then new second generation.
The IBM 7070 (announced in September 1958 as an upgrade to the IBM 650) was the first transistor based computer marketed by IBM. This introduced the use of interrupt–driven I/O as well as the SPOOL (Simultaneous Peripheral Operation On Line) process for managing shared Input/Output devices.
The IBM 7090 (announced in December 1958) was a transistorized version of the IBM 709 with some additional facilities. The IBM 7080 (announced in January 1960) likewise was a transistorized version of the IBM 705. Each model was less costly to maintain and more reliable than its tube–based predecessor, to the extent that it was judged to be a “qualitatively different kind of machine” [R46].
The IBM 7090 (and later IBM 7094) were modified by researchers at M.I.T. in order to make possible the CTSS (Compatible Time–Sharing System), the first major time–sharing system. Memory was augmented by a second 32768–word memory bank. User programs occupied one bank while the operating system resided in the other. User memory was divided into 128 memory–protected blocks of 256 words, and access was limited by boundary registers.
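The protection scheme described above can be sketched as follows. This is a modern illustration of the boundary-register idea; the names and the exception type are our own, not CTSS code.

```python
# CTSS-style protection: user memory divided into 128 memory-protected
# blocks of 256 words (exactly one 32768-word bank), with boundary
# registers limiting a user program's accesses.
BLOCK_WORDS = 256
BLOCKS = 128
assert BLOCK_WORDS * BLOCKS == 32768

def check_access(addr, lower, upper):
    """Permit the access only if lower <= addr < upper (the boundary registers)."""
    if not (lower <= addr < upper):
        raise MemoryError(f"protection violation at address {addr}")
    return True

# A program confined to the first four blocks may touch words 0..1023 only.
check_access(100, 0, 4 * BLOCK_WORDS)   # allowed
```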
The IBM 7094 was announced on January 15, 1962. The CTSS effort was begun in 1961, with a version being demonstrated on an IBM 709 in November 1961. CTSS was formally presented in a paper at the Joint Computer Conference in May 1962. Its design affected later operating systems, including MULTICS and its derivatives, UNIX and MS–DOS.
As a last comment here, the true IBM geek will note the omission of any discussion of the IBM 1401 line. These machines were often used in conjunction with the 7090 or 7094, handling the printing, punching, and card reading chores for the latter. It is just not possible to cover every significant machine.
The IBM 7030 (Stretch)
In late 1954, IBM decided to undertake a very ambitious research project, with the goal of benefiting from the experience gained in the previous three project lines. In 1955, it was decided that the new machine should be at least 100 times as fast as either the IBM 704 or the IBM 705; hence the informal name “Stretch” as it “stretched the technology”.
In order to achieve these goals, the design of the IBM 7030 required a considerable number of innovations in technology and computer organization; a few are listed here.
1. A new type of germanium transistor, called the “drift transistor”, was developed. These faster transistors allowed the circuitry in the Stretch to run ten times faster.
2. A new type of core memory was developed; it was 6 times faster than the older core.
3. Memory was organized into multiple 128K–byte units accessed by low–order interleaving, so that successive words were stored in different banks. As a result, new data could be retrieved at a rate of one word every 200 nanoseconds, even though the memory cycle time was 2.1 microseconds (2,100 nanoseconds).
4. Instruction lookahead (now called “pipelining”) was introduced. At any point in time, six instructions were in some phase of execution in the CPU.
5. New disk drives, with multiple read/write arms, were developed. The capacity and transfer rate of these devices led to the abandonment of magnetic drums.
6. A pair of boundary registers was added so as to provide the storage protection required in a multiprogramming environment.
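As a rough check on the interleaving arithmetic quoted above, a small sketch; the bank count below is derived from the quoted rates, not a documented Stretch parameter, and the helper function is illustrative.

```python
import math

# Figures quoted above for the Stretch memory system.
cycle_ns = 2100      # cycle time of one core memory bank (2.1 microseconds)
word_rate_ns = 200   # sustained delivery rate: one word every 200 ns

# To sustain one word every 200 ns from 2100 ns banks, at least
# ceil(2100 / 200) = 11 banks must be busy at once; low-order
# interleaving spreads successive words across banks so this
# overlap happens naturally during sequential access.
min_banks = math.ceil(cycle_ns / word_rate_ns)
print(min_banks)   # -> 11

def bank_for(word_address, banks):
    """Low-order interleaving: successive addresses hit successive banks."""
    return word_address % banks
```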
It is generally admitted that the Stretch did not meet its design goal of a 100 times increase in the performance of the earlier IBM models. Here is the judgment by IBM from 1981 [R46].
“For a typical 704 program, Stretch fell short of its performance target of one hundred times the 704, perhaps by a factor of two. In applications requiring the larger storage capacity and word length of Stretch, the performance factor probably exceeded one hundred, but comparisons are difficult because such problems were not often tackled on the 704.” [R46]
It seems that production of the IBM 7030 (Stretch) was limited to nine machines: one for Los Alamos, one (called “Harvest”) for the National Security Agency, and seven others.
The Evolution of the PDP–8
While this story does not exactly fit here, it serves as a valuable example of the transition from the second generation of computers to later generations. We quickly trace the CPU size of the PDP–8 computer by the Digital Equipment Corporation from its predecessor (the PDP–5, introduced in 1963), through the first PDP–8 (1965), to a later implementation.
In 1971, the PDP-8/E (a smaller version of the PDP-8) had only three boards with 240 square inches of circuit space for the CPU. In 1976, the Intersil Corporation produced a fully-functioning PDP-8 CPU on a single chip, about 1/4 inch on a side. Thus we have a more powerful CPU in a form factor that is reduced in scale by a factor of 33,600.
Smaller Integrated (SSI and MSI) Circuits (Generation 3 – from 1966 to 1972)
In one sense, the evolution of computer components can be said to have stopped in 1958 with the introduction of the transistor; all future developments merely represent the refinement of the transistor design. This statement stretches the truth so far that it hardly even makes this author’s point, which is that packaging technology is extremely important.
What we see in the generations following the introduction of the transistor is an aggressive miniaturization of both the transistors and the traces (wires) used to connect the circuit elements. This allowed the creation of circuit modules with component densities that could hardly have been imagined a decade earlier. Such circuit modules used less power and were much faster than those of the second generation; in electronics smaller is faster. They also lent themselves to automated manufacture, thus increasing component reliability.
It seems that pictures are the best way to illustrate the evolution of the first three generations of computer components. Below, we see a picture of an IBM engineer (they all wore coats and ties at that time) with three generations of components.
Figure: IBM Engineer with Three Generations of Components
The first generation unit (vacuum tubes) is a pluggable module from the IBM 650. Recall that the idea of pluggable modules dates to the ENIAC; the design facilitates maintenance.
The second generation unit (discrete transistors) is a module from the IBM 7090.
The third generation unit is the ACPX module used on the IBM 360/91 (1964). Each chip was created by stacking layers of silicon on a ceramic substrate; it accommodated over twenty transistors. The chips could be packaged together onto a circuit board. The circuit board in the foreground appears to hold twelve chips. This author conjectures, but cannot prove, that the term “ACPX module” refers to the individual chip, and that the pencil in the foreground is indicating one such module.
It is likely that each of the chips on the third–generation board has a processing power equivalent to either of the units from the previous generations.
The 74181 Arithmetic Logic Unit by Texas Instruments
The first real step in the third generation of computing was taken when Texas Instruments introduced the 7400 series of integrated circuits. One of the earliest, and most famous, was an MSI chip called the 74181. It was an arithmetic logic unit (ALU) that provided thirty–two functions of two 4–bit variables, though many of those functions were quite strange. It was developed in the middle 1960’s and first used in the Data General Nova computer in 1968.
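To give the flavor of a bit-slice ALU, here is a greatly simplified sketch in Python. It implements only a handful of operations and does not reproduce the 74181's actual 32-function select encoding; the function names are ours.

```python
# A simplified 4-bit ALU "slice" in the spirit of the 74181.
def alu_slice(a, b, op, carry_in=0):
    """a and b are 4-bit operands (0..15); returns (4-bit result, carry_out)."""
    if op == "add":
        total = a + b + carry_in
    elif op == "and":
        total = a & b
    elif op == "or":
        total = a | b
    elif op == "xor":
        total = a ^ b
    else:
        raise ValueError(f"unknown operation {op!r}")
    return total & 0xF, total >> 4

# Cascading slices, with the carry rippling from one to the next,
# widens the datapath -- here to 16 bits.
def add16(a, b):
    result, carry = 0, 0
    for i in range(4):
        nibble, carry = alu_slice((a >> 4 * i) & 0xF, (b >> 4 * i) & 0xF,
                                  "add", carry)
        result |= nibble << 4 * i
    return result, carry

print(hex(add16(0x1234, 0x0FFF)[0]))   # -> 0x2233
```

A real 74181 generates all four result bits at once rather than rippling internally, and the companion 74182 replaces the inter-slice ripple with carry lookahead; that is how the fast wide-word configurations quoted below were achieved.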
Figure: The 74181 in a DIP Packaging
The figure above shows the chip in its DIP (Dual In–line Pin) packaging. The figure below shows the typical dimensions of a chip in the DIP configuration.
Figure: The 74181 Physical Dimensions in Inches (Millimeters)
For those who like circuit diagrams, here is a circuit diagram of the 74181 chip. A bit later in the textbook, we shall discuss how to read this diagram; for now note that it is moderately complex, comprising seventy–five logic gates.
Here is a long description of the 74181, taken from the Wikipedia article.
“The 74181 is a 7400 series medium-scale integration (MSI) TTL integrated circuit, containing the equivalent of 75 logic gates and most commonly packaged as a 24-pin DIP. The 4-bit wide ALU can perform all the traditional add / subtract / decrement operations with or without carry, as well as AND / OR / XOR and other logic functions.
The 74181 performs these operations on two four bit operands generating a four bit result with carry in 22 nanoseconds. The 74S181 performs the same operations in 11 nanoseconds.
Multiple 'slices' can be combined for arbitrarily large word sizes. For example, sixteen 74S181s and five 74S182 look ahead carry generators can be combined to perform the same operations on 64-bit operands in 28 nanoseconds. Although overshadowed by the performance of today's multi-gigahertz 64-bit microprocessors, this was quite impressive when compared to the sub-megahertz clock speeds of the early four– and eight–bit microprocessors.
Although the 74181 is only an ALU and not a complete microprocessor it greatly simplified the development and manufacture of computers and other devices that required high speed computation during the late 1960s through the early 1980s, and is still referenced as a "classic" ALU design.
Prior to the introduction of the 74181, computer CPUs occupied multiple circuit boards and even very simple computers could fill multiple cabinets. The 74181 allowed an entire CPU, and in some cases an entire computer, to be constructed on a single large printed circuit board. The 74181 occupies a historically significant stage between older CPUs based on discrete logic functions spread over multiple circuit boards and modern microprocessors that incorporate all CPU functions in a single component. The 74181 was used in various minicomputers and other devices beginning in the late 1960s, but as microprocessors became more powerful the practice of building a CPU from discrete components fell out of favor and the 74181 was not used in any new designs.
Many computer CPUs and subsystems were based on the 74181, including several historically significant models.
- Data General Nova – First widely available 16-bit minicomputer, manufactured by Data General. This was the first design (in 1968) to use the 74181.
- DEC PDP-11 – Most popular minicomputer of all time, manufactured by Digital Equipment Corporation. The first model was introduced in 1970.
- Xerox Alto – The first computer to use the desktop metaphor and graphical user interface (GUI).
- VAX 11/780 – The first VAX, the most popular 32-bit computer of the 1980s, also manufactured by Digital Equipment Corp.”
Back to the System/360
Although the System/360 likely did not use any 74181’s, that chip does illustrate the complexity of the custom–fabricated chips used in IBM’s design.
The design goals for the System/360 family are illustrated by the following scenario. Imagine that you have a small company that is using an older IBM 1401 for its financial work. You want to expand and possibly do some scientific calculations. Obviously, IBM is very happy to lease you a computer. We take your company through its growth.
1. At first, your company needs only a small computer to handle its computing needs. You lease an IBM System 360/30. You use it in emulation mode to run your IBM 1401 programs unmodified.
2. Your company grows. You need a bigger computer. Lease a 360/50.
3. You hit the “big time”. Turn in the 360/50 and lease a 360/91.
You never need to rewrite or recompile your existing programs. You can still run your IBM 1401 programs without modification.
In order to understand how the System/360 was able to address these concerns, we must divert a bit and give a brief description of the control unit of a stored program computer.
Control Unit and Emulation
In order to better explain one of the distinctive features of the IBM System/360 family, it is necessary to take a detour and discuss the function of the control unit in a stored program computer. Basically, the control unit tells the computer what to do.
All modern stored program computers execute programs that are a sequence of binary machine–language instructions. This sequence of instructions corresponds either to an assembly language program or a program in a higher–level language, such as C++ or Java.
The basic operation of a stored program computer is called “fetch–execute”, with many variants on this name. Each instruction to be executed is fetched from memory and deposited in the Instruction Register, which is a part of the control unit. The control unit interprets this machine instruction and issues the signals that cause the computer to execute it.
Figure: Schematic of the Control Unit
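The fetch–execute cycle can be illustrated with a toy interpreter. The one-address instruction set below is invented for illustration and does not correspond to any real IBM machine's format.

```python
# A toy stored-program machine: memory holds both a small program and
# its data, and the loop below is the fetch-execute cycle.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
          10: 5, 11: 7, 12: 0}
acc, pc, running = 0, 0, True

while running:
    instruction = memory[pc]          # FETCH into the instruction register
    pc += 1
    opcode, operand = instruction     # DECODE: the control unit interprets it
    if opcode == "LOAD":              # EXECUTE: control signals do the work
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(acc, memory[12])   # -> 12 12
```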
There are two ways in which a control unit may be organized. The most efficient way is to build the unit entirely from basic logic gates. For a moderately–sized instruction set with the standard features expected, this leads to a very complex circuit that is difficult to test.
In 1951, Maurice V. Wilkes (designer of the EDSAC, see above) suggested an organization for the control unit that was simpler, more flexible, and much easier to test and validate. This was called a “microprogrammed control unit”. The basic idea was that control signals can be generated by reading words from a micromemory and placing each in an output buffer.
In this design, the control unit interprets the machine language instruction and branches to a section of the micromemory that contains the microcode needed to emit the proper control signals. The entire contents of the micromemory, representing the sequence of control signals for all of the machine language instructions, is called the microprogram. All we need to know is that the microprogram is stored in a ROM (Read Only Memory) unit.
While microprogramming was sporadically investigated in the 1950’s, it was not until about 1960 that memory technology had matured sufficiently to allow commercial fabrication of a micromemory with sufficient speed and reliability to be competitive. When IBM selected the technology for the control units of some of the System/360 line, its primary goal was the creation of a unit that was easily tested. Then they got a bonus; they realized that adding the appropriate blocks of microcode could make a S/360 computer execute machine code for either the IBM 1401 or IBM 7094 with no modification. This greatly facilitated upgrading from those machines and significantly contributed to the popularity of the S/360 family.
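The idea of switching microprograms can be caricatured in a few lines. The opcodes and micro-operations below are invented for illustration; real S/360 and 1401 microcode looked nothing like this.

```python
# One microprogrammed control unit hosting two instruction sets: each
# (mode, opcode) pair selects a block of micro-operations.
micromemory = {
    ("s360", "ADD"): ["gate operand A to ALU", "gate operand B to ALU",
                      "ALU add", "latch result"],
    ("1401", "A"):   ["fetch decimal digits", "decimal add", "store digits"],
}

def execute(mode, opcode):
    """Branch to the block of microcode for this machine instruction."""
    steps = micromemory[(mode, opcode)]
    for micro_op in steps:
        print(micro_op)          # stand-in for emitting control signals
    return steps

execute("s360", "ADD")   # native System/360 mode
execute("1401", "A")     # switch microprograms: emulate the IBM 1401
```

Switching the mode simply selects a different region of the micromemory, which is the essence of how one S/360 could execute IBM 1401 machine code unchanged.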
The IBM System/360
As noted above, IBM chose to replace a number of very successful, but incompatible, computer lines with a single computer family, the System/360. The goal, according to an IBM history web site [R48], was to “provide an expandable system that would serve every data processing need”. The initial announcement on April 7, 1964 included Models 30, 40, 50, 60, 62, and 70 [R49]. The first three began shipping in mid–1965, and the last three were replaced by the Model 65 (shipped in November 1965) and Model 75 (January 1966).
The introduction of the System/360 is also the introduction of the term “architecture” as applied to computers. The following quotes are taken from one of the first papers describing the System/360 architecture [R46].
“The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.”
“In the last few years, many computer architects have realized, usually implicitly, that logical structure (as seen by the programmer) and physical structure (as seen by the engineer) are quite different. Thus, each may see registers, counters, etc., that to the other are not at all real entities. This was not so in the computers of the 1950’s. The explicit recognition of the duality of the structure opened the way for the compatibility within System/360.”
At this point, we differentiate between the three terms: architecture, organization, and implementation. The quote above gives a precise comparison of the terms architecture and organization. As an example, consider the sixteen general–purpose registers (R0 – R15) in the System/360 architecture. All models implement these registers and use them in exactly the same way; this is a requirement of the architecture. As a matter of implementation, a few of the lower end S/360 family actually used dedicated core memory for the registers, while the higher end models used solid state circuitry on the CPU board.
The difference between organization and implementation is seen by considering the two computer pairs: the 709 and 7090, and the 705 and 7080. The IBM 7090 had the same organization (hence the same architecture) as the IBM 709; the implementation was different. The IBM 709 used vacuum tubes; the IBM 7090 replaced these with transistors.
The requirement for the System/360 design was that all models in that series would be “strictly program compatible, upward and downward, at the program bit level”. [R46]
“[Strictly program compatible] means that a valid program, whose logic will
not depend implicitly upon time of execution and which runs upon configuration A,
will also run on configuration B if the latter includes at least the required storage, at
least the required I/O devices, and at least the required optional features.”
“Compatibility would ensure that the user’s expanding needs be easily accommodated
by any model. Compatibility would also ensure maximum utility of programming
support prepared by the manufacturer, maximum sharing of programs generated by the user, ability to use small systems to back up large ones, and exceptional freedom in configuring systems for particular applications.”
Additional design goals for the System/360 include the following.
1. The System/360 was intended to replace two mutually incompatible
lines in existence at the time.
a) The scientific series (701, 704, 7090, and
7094) that supported floating
point arithmetic, but not decimal arithmetic.
b) The commercial series (702, 705, and 7080)
that supported decimal
arithmetic, but not floating point arithmetic.
2. The System/360 should have a
“compatibility mode” that would allow it
to run unmodified machine code from the IBM 1401 – at the time a very
popular business machine with a large installed base.
This compatibility mode was possible due to the use of a microprogrammed control unit. If you
want to run native S/360 code, access that part of the microprogram. If you
want to run IBM 1401 code, just switch to the microprogram for that machine.
3. The Input/Output Control Program should be
designed to allow execution by
the CPU itself (on smaller machines) or execution by separate I/O Channels
on the larger machines.
4. The system must allow for autonomous
operation with very little intervention by
a human operator. Ideally this would be limited to mounting and dismounting
magnetic tapes, feeding punch cards into the reader, and delivering output.
5. The system must support some sort of
extended precision floating point
arithmetic, with more precision than the 36–bit system then in use.
Minicomputers: the Rise and
Fall of the VAX
So far, our discussion of the third generation of computing has focused on the mainframe, specifically the IBM System/360. Later, we shall mention another significant large computer of this generation, the CDC–6600. For now, we turn our attention to the minicomputer, which represents another significant development of the third generation. Simply put, these computers were smaller and less costly than the “big iron”.
In 1977, Gordon Bell provided a definition of a minicomputer [R1, page 14].
“[A minicomputer is a] computer originating in the early 1960s and predicated on being the lowest (minimum) priced computer built with current technology. From this origin, at prices ranging from 50 to 100 thousand dollars, the computer has evolved both at a price reduction rate of 20 percent per year …”
The above definition seems to ignore microcomputers; probably these were not “big players” at the time it was made. From the modern perspective, the defining characteristics of a minicomputer were relatively small size and modest cost.
As noted above, the PDP–8 by Digital Equipment Corporation is considered to be the first minicomputer. The PDP–11 followed in that tradition.
This discussion focuses on the VAX series of computers, developed and marketed by the Digital Equipment Corporation (now out of business), and its predecessor the PDP–11. We begin with a discussion of the PDP–11 family, which was considered to be the most popular minicomputer; its original design was quite simple and elegant. We then move to a discussion of the Virtual Address Extension (VAX) of the PDP–11, note its successes, and trace the reasons for its demise. As we shall see, the major cause was just one bad design decision, one that seemed to be very reasonable when it was first made.
The Programmed Data Processor 11 (PDP–11), introduced in 1970, was
an outgrowth of earlier lines of small computers developed by the Digital
Equipment Corporation, most notably the
PDP–8 and PDP–9. The engineers surveyed these earlier designs and made a conscious decision to fix what were viewed as design weaknesses. As with IBM and its System/360, DEC decided on a family of compatible computers with a wide range of price ($500 to $250,000) and performance. All members of the PDP–11 family were fabricated from TTL integrated circuits; thus the line was definitely third–generation.
The first model in the line was the PDP–11/20, followed soon by the PDP–11/45, a model designed for high performance. By 1978, there were twelve distinct models in the PDP–11 line (LSI–11, PDP–11/04, PDP–11/05, PDP–11/20, PDP–11/34, PDP–11/34C, PDP–11/40, PDP–11/45, PDP–11/55, PDP–11/60, PDP–11/70, and the VAX–11/780). Over 50,000 units of the line had been sold, which one of the developers (C. Gordon Bell) considered a qualified success, “since a large and aggressive marketing organization, armed with software to correct architectural inconsistencies and omissions, can save almost any design” [R1, page 379].
With the exception of the VAX–11/780, all members of this family were essentially 16–bit machines with 16–bit address spaces. In fact, the first of the line (the PDP–11/20) could address only 2^16 = 65,536 bytes of memory. Given that the top 4 KB of the address space was mapped for I/O devices, the maximum amount of physical memory on this machine was 61,440 bytes. Typically, with the operating system installed, this left about 32 KB of space for user programs. While many programs could run in this space, quite a few could not.
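The address–space arithmetic above can be checked with a short calculation (the 4 KB I/O page size is the figure given in the text):

```python
# PDP-11/20 address-space arithmetic (illustrative check of the figures above).
ADDRESS_BITS = 16
total_bytes = 2 ** ADDRESS_BITS       # full 16-bit address space: 65,536 bytes
io_page = 4 * 1024                    # top 4 KB mapped to I/O devices
physical_max = total_bytes - io_page  # maximum installable physical memory

print(total_bytes)    # 65536
print(physical_max)   # 61440
```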
According to two of the PDP–11’s developers:
“The first weakness of minicomputers was their limited addressing capability. The PDP–11 followed this hallowed tradition of skimping on address bits, but it was saved by the principle that a good design can evolve through at least one major change. … It is extremely embarrassing that the PDP–11 had to be redesigned with memory management [used to convert 16–bit addresses into 18–bit and then 22–bit addresses] only two years after writing the paper that outlined the goal of providing increased address space.”[R1, page 381]
While the original PDP–11 design was patched to provide first an 18–bit address space (the PDP–11/45 in 1972) and then a 22–bit address space (the PDP–11/70 in 1975), it quickly became obvious that a new design was required to solve the addressability problem. In 1974, DEC started work on extending the virtual address space of the PDP–11, with the goal of a computer that was compatible with the existing PDP–11 line. In April 1975, work was redirected to produce a machine that would be “culturally compatible with the PDP–11”. The result was the VAX series, the first model of which was the VAX–11/780, introduced on October 25, 1977. At the time, it was called a “super–mini”; later models, such as the VAX 9000 series, were considered to be mainframes.
By 1982, Digital Equipment Corporation was the number two computer company; IBM maintained its lead as number one. The VAX series of computers, along with its proprietary operating system, VMS, were the company’s sales leaders. One often heard of “VAX/VMS”.
The VAX went through many different implementations. The original VAX was implemented in TTL and filled more than one rack for a single CPU. CPU implementations consisting of multiple ECL gate–array or macrocell–array chips included the 8600 and 8800 super–minis and, finally, the 9000 mainframe–class machines. CPU implementations consisting of multiple custom MOSFET chips included the 8100 and 8200 class machines. The computers in the VAX line were excellent machines, supported by well–designed and stable operating systems. What happened? Why did they disappear?
Figure: A Complete VAX–11/780 System
In order to understand the reason for the demise of the VAX line of computers, we must first present some of its design criteria, other than the expanded virtual address space. Here we quote from Computer Architecture: A Quantitative Approach by Hennessy & Patterson.
“In the late 1960’s and early 1970’s, people realized that software costs were growing faster than hardware costs. In 1967, McKeeman argued that compilers and operating systems were getting too big and too complex and taking too long to develop. Because of inferior compilers and the memory limitations of computers, most system programs at the time were still written in assembly language. Many researchers proposed alleviating the software crisis by creating more powerful, software–oriented architectures.
In 1978, Strecker discussed how he and the other architects at DEC responded to this by designing the VAX architecture. The VAX was designed to simplify compilation of high–level languages. … The VAX architecture was designed to be highly orthogonal and to allow the mapping of high–level–language statements into single VAX instructions.” [R60, page 126]
What we see here is the demand for what would later be called a CISC design. In a Complex Instruction Set Computer, the ISA (Instruction Set Architecture) provides features to support high–level languages and to minimize what was called the “semantic gap”.
The VAX has been perceived as the quintessential CISC architecture, with its very large number of addressing modes and machine instructions, including instructions for such complex operations as queue insertion/deletion and polynomial evaluation. Eventually, it was this complexity that caused its demise. As we shall see below, this complexity was neither required nor particularly useful.
The eventual downfall of the VAX line was due to the CPU design philosophy called “RISC”, for Reduced Instruction Set Computer. As we shall see later in the text, there were many reasons for the RISC movement, among them the fact that the semantic gap was more a fiction of the designers’ imagination than a reality. What compiler writers wanted was a clean CPU design with a large number of registers and possibly a good amount of cache memory. None of these features was consistent with a CISC design such as the VAX.
The bottom line was that the new RISC designs offered more performance for less cost than the VAX line of computers. Most of the new designs, including those based on the newer Intel CPU chips, were also smaller and easier to manage. The minicomputers that were so popular in the 1970’s and 1980’s had lost their charm to newer and smaller models.
DEC came late to the RISC design arena, marketing the Alpha, a 64–bit machine, in 1992. By then it was too late. Most data centers had switched to computers based on the Intel chips, such as the 80486/80487 or the Pentium. The epitaph for DEC is taken from Wikipedia.
“Although many of these products were well designed, most of them were DEC–only or DEC–centric, and customers frequently ignored them and used third-party products instead. Hundreds of millions of dollars were spent on these projects, at the same time that workstations based on RISC architecture were starting to approach the VAX in performance. Constrained by the huge success of their VAX/VMS products, which followed the proprietary model, the company was very late to respond to commodity hardware in the form of Intel-based personal computers and standards-based software such as Unix as well as Internet protocols such as TCP/IP. In the early 1990s, DEC found its sales faltering and its first layoffs followed. The company that created the minicomputer, a dominant networking technology, and arguably the first computers for personal use, did not effectively respond to the significant restructuring of the computer industry.”
“Beginning in 1992, many of DEC’s assets were spun off in order to raise revenue. This did not stem the tide of red ink or the continued lay offs of personnel. Eventually, on January 26, 1998, what remained of the company was sold to Compaq Computer Corporation. In August 2000, Compaq announced that the remaining VAX models would be discontinued by the end of the year. By 2005 all manufacturing of VAX computers had ceased, but old systems remain in widespread use. Compaq was sold to Hewlett–Packard in 2002.”
It was the end of an era; minicomputers had left the scene.
Large Scale and Very Large Scale Integrated Circuits (from 1972 onward)
We now move to a discussion of LSI and VLSI circuitry. We could trace the development of the third generation System/360 through later, more sophisticated, implementations. Rather than doing this, we shall trace the development of another iconic processor series, the Intel 4004, 8008, and those that followed.
The most interesting way to describe the beginning of the fourth generation of technology, that of a microprocessor on a chip [R1], is to quote from the preface to the 1995 history of the microcomputer written by Stanley Mazor [R61], who worked for Intel from 1969 to 1984.
“Intel’s founder, Robert Noyce, chartered Ted Hoff’s Applications Research Department in 1969 to find new applications for silicon technology – the microcomputer was the result. Hoff thought it would be neat to use MOS LSI technology to produce a computer. Because of the ever growing density of large scale integrated (LSI) circuits, a ‘computer on a chip’ was inevitable. But in 1970 we could only get about 2000 transistors on a chip and a conventional CPU would need about 10 times that number. We developed two ‘microcomputers’ 10 years ahead of ‘schedule’ by scaling down the requirements and using a few other ‘tricks’ described in this paper.”
Intel delivered two Micro Computer Systems in the early 1970’s. The first was the MCS–4, emphasizing low cost, in November 1971. This would lead to a line of relatively powerful but inexpensive controllers, such as the Intel 8051 (which in quantity sells for less than $1). The other was the MCS–8, emphasizing versatility, in April 1972. This led to the Intel 8008, 8080, 8086, 80286, and a long line of processors for personal computers.
The MCS–4 was originally developed in response to a demand for a chip to operate a hand–held calculator. It was a four–bit computer, with four–bit data memory, reflecting the use of 4–bit BCD codes to represent the digits in the calculator.
The components of the
MCS–4 included the 4001 (Read Only Memory), 4002 (Random Access Memory), and the 4004 Microprocessor. Each was mounted in a 16–pin package, as that was the only package format available in the company at the time.
Figure: Picture from the 1972 Spec Sheet on the MCS–4
In 1972, the Intel 4004 sold in quantity for less than $100 each. It could add two 10–digit numbers in about 800 microseconds. This was comparable to the speed of the IBM 1620, a computer that sold for $100,000 in 1960.
The MCS–8 was based on the Intel 8008, Intel’s first 8–bit CPU. It was implemented in PMOS technology, had 48 instructions, and had a performance rating of 0.06 MIPS (Million Instructions per Second, a term not in use at the time). It had an 8–bit accumulator and six 8–bit general–purpose registers (B, C, D, E, H, and L). In later incarnations of the model, these would become the 8–bit lower halves of the 16–bit registers with the same names.
The Intel 8008 was placed in an 18–pin package. It is noteworthy that the small pin counts available for packaging drove a number of design decisions, such as multiplexing the bus. In a multiplexed design, the bus connecting the CPU to the memory chip carries both addresses and data at different times, with a control signal to indicate what is currently on the bus.
The 8008 was followed by the 8080, which had ten times the performance and was housed in a 40–pin plastic package, allowing separate lines for bus address and bus data. The 8080 did not yet directly support 16–bit processing, but arranged its 8–bit registers in pairs. The 8080 was followed by the 8085 and then by the 8086, a true 16–bit CPU. The 8088 (an 8086 variant with an 8–bit data bus) was selected by IBM for its Personal Computer; the rest is history.
As we know, the 8086 was the first in a sequence of processors with increasing performance. The milestones in this line were the 80286, the 80386 (the first with true 32–bit addressing), the 80486/80487, and the Pentium. The line will soon implement 64–bit addressing.
The major factor driving the development of smaller and more powerful computers was and continues to be the method for manufacture of the chips. This is called “photolithography”, which uses light to transmit a pattern from a photomask to a light–sensitive chemical called “photoresist” that is covering the surface of the chip. This process, along with chemical treatment of the chip surface, results in the deposition of the desired circuits on the chip.
Due to a fundamental principle of physics, the minimum feature size that can be created on the chip is approximately the same as the wavelength of the light used in the lithography. State–of–the–art (circa 2007) photolithography uses deep ultraviolet light with wavelengths of 248 and 193 nanometers, resulting in feature sizes as small as 50 nanometers [R53]. An examination of the table on the next page shows that the minimum feature size seems to drop in discrete steps in the years after 1971; each step represents a new lithography process.
One artifact of this new photolithography process is the enormous cost of the plant used to fabricate the chips. In the terminology of semiconductor engineering, such a plant is called a “fab”. In 1998, Gordon Moore of Intel Corporation noted:
“An R&D fab today costs $400 million just for the building. Then you put about $1 billion of equipment in it. That gives you a quarter–micron [250 nanometer] fab for about 5,000 wafers per week, about the smallest practical fab.”
The “quarter–micron fab” represents the ability to create chips in which the smallest line feature has a size of 0.25 micron, or 250 nanometers.
One can describe the VLSI generation of computers in many ways, all of them interesting. However, the most telling description is to give the statistics of a line of computers and let the numbers speak for themselves. The next two pages do just that for the Intel line.
Here are the raw statistics, taken from a Wikipedia article [R52].
Here is the obligatory chart of transistor count as a function of year. Two other charts are equally interesting. The first shows the increase in clock speed as a function of time; note that this chart closely resembles the transistor–count chart. The second chart shows an entirely expected relationship between the line size on the die and the number of transistors placed on it. The count appears to be a direct function of the size.
Dual Core and Multi–Core CPU’s
Almost all of the VLSI central processors on a chip that we have discussed to this point could be called “single core” processors; the chip had a single CPU along with some associated cache memory. Recently there has been a movement to multi–core CPU’s, which represent the placement of more than one CPU on a single chip. One might wonder why this arrangement seems to be preferable to a single faster CPU on the chip.
In order to understand this phenomenon, we quote from a 2005 advertisement by Advanced Micro Devices (AMD), Inc. for the AMD 64 Opteron chip, used in many blade servers. This article describes chips by the linear size of the thinnest wire trace on the chip; a 90nm chip has features with a width of 90 nanometers, or 0.090 microns. Note also that it mentions the issue of CPU power consumption; heat dissipation has become a major design issue.
“Until recently, chip suppliers emphasized the frequency of their chips (“megahertz”) when discussing speed. Suddenly they’re saying megahertz doesn’t matter and multiple cores are the path to higher performance. What changed? Three factors have combined to make dual-core approaches more attractive.”
“First, the shift to 90nm process technology makes dual-core possible. Using 130nm technology, single-core processors measured about 200 mm2, a reasonable size for a chip to be manufactured in high-volumes. A dual-core 130nm chip would have been about 400 mm2, much too large to be manufactured economically. 90nm technology shrinks the size of a dual-core chip to under 200 mm2 and brings it into the realm of possibility.”
“Second, the shift to dual-core provides a huge increase in CPU performance. Increasing frequency by 10 percent (often called a “speed bump”) results in at best a 10 percent boost in performance; two speed bumps can yield a 20 percent boost. Adding a second core can boost performance by a factor of 100 percent for some workloads.”
“Third, dual-core processors can deliver far more performance per watt of power consumed than single core designs, and power has become a big constraint on system design. All other things being equal, CPU power consumption increases in direct relation to clock frequency; boosting frequency by 20 percent boosts power by a least 20 percent. In practice, the power consumption situation is even worse. Chip designers need extra power to speed up transistor performance in order to attain increased clock frequencies; a 20 percent boost in frequency might require a 40 percent boost in power.”
“Dual-core designs apply this principle in reverse; they lower the frequency of each core by 20 percent, so that both cores combined use only a tad more power than a single–core at the higher frequency. This means the dual–core processors showing up this spring [ 2006?] will actually boost performance by a factor of approximately 1.7 over single core designs that fit in the same thermal envelope.
You can read more about dual-core technology at: www.amd.com/dual–core.”
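The arithmetic in the AMD quote can be made concrete with a small sketch. The cubic power–frequency exponent below is an assumption (the quote says only that power grows faster than frequency); the 20 percent frequency reduction and the roughly 1.7x performance factor come from the quote itself.

```python
# Hypothetical power model: relative power = cores * frequency**K, with K > 1.
K = 3.0  # assumed exponent; the true value depends on voltage scaling

def power(freq, cores=1):
    """Relative power consumption of `cores` cores at relative frequency `freq`."""
    return cores * freq ** K

def performance(freq, cores=1):
    """Relative performance, assuming per-core performance scales with frequency."""
    return cores * freq

# One core at full frequency vs. two cores each slowed by 20 percent.
print(f"dual-core power:       {power(0.8, cores=2):.2f}x")       # ~1.02x: "a tad more"
print(f"dual-core performance: {performance(0.8, cores=2):.2f}x") # 1.60x, near the quoted 1.7
```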
Dual–core and quad–core systems are becoming quite popular. While there seems to be no theoretical upper limit on the number of cores (CPU’s) on a chip, we may expect the practical limit to be in the range 8 to 16.
The State of the Art in 2008
The current state of the art can be illustrated by examination of a number of web sites for companies that sell computers. We consider the following examples.
The PowerEdge™ 2900 by Dell can be configured with two Quad–Core Intel Xeon processors, each operating at 3.16 GHz. The memory can range from 1 GB to 48 GB of 667 MHz synchronous DRAM, with error–correcting code capability. The standard hard drives are either 3.5” SAS (operating at 15,000 RPM) with a maximum capacity of 450 GB per drive or 3.5” SATA (operating at 7,200 RPM) with a maximum capacity of 1 TB per drive. The maximum internal disk storage is 8 TB for SATA (eight 1 TB drives) or 3.6 TB for SAS (eight 450 GB drives).
The latest IBM zSeries mainframes provide another example of the current state of the art.
Another development of note is the blade server, which is a computer especially packaged to be rack mounted in a blade enclosure. There are many manufacturers of these devices. Blade server systems first appeared in 2001, with the introduction of the System 324 by RLX, Inc. This system could mount 324 blade servers in one enclosure about 74 inches high. Early blade servers delivered very interesting performance, but with a considerable downside: a typical rack of servers would consume about 30 kilowatts of power, necessarily radiating it as 30 kilowatts of heat, which required a good HVAC system to remove.
Modern blade servers are designed for lower power consumption. In the figure below, we see an IBM blade server on the right and a blade enclosure by Sun Microsystems, Inc. on the left. Sun describes the blade enclosure as follows.
“What makes IT organizations choose Sun is the additional power and cooling efficiencies they get with the Sun Blade 6000 Modular System over similar products from competitors. The Sun Blade 6000 Modular System is designed with strict front-to-back cooling, straight airflow, intelligent fan speed control, and better algorithms for maintaining sufficient airflow in compromised situations such as operating with failed fans. More efficient cooling means fewer watts spent on the cooling subsystem, a benefit that is amplified by lower CPU power consumption resulting from lower operating temperatures.”
Figure: Blade Enclosure and a Computer Configured as a Blade Server
Supercomputers: Their Rise, Fall, and Return
High–performance computing has always been of great importance in a wide variety of scientific and other disciplines. The emphasis on high performance gave rise to the supercomputer. While there have been many attempts to give the term “supercomputer” a precise definition, the common practice is just to equate the term with the list of computers designed and developed by Seymour Cray, first at the Control Data Corporation and later at companies he founded, Cray Research and the Cray Computer Corporation.
The first supercomputer is universally acknowledged to be the CDC–6600, introduced in 1964; the first unit was delivered to the Lawrence Livermore National Laboratory. Much to the annoyance of the president of IBM, the CDC–6600 was significantly faster than the IBM 7094, up to then the standard machine for scientific computing.
The CDC 6600 is often called the
first RISC (Reduced Instruction Set Computer), due to the simplicity of its
instruction set. The reason for its
simplicity was the desire for speed.
Cray also put a lot of effort into matching the memory and I/O speed with the CPU speed.
As he later noted, “Anyone can build a fast CPU. The trick is to build a fast system.”
The CDC 6600 led to the more successful CDC 7600. The CDC 8600 was to be a follow–on to the CDC 7600. While an excellent design, it proved too complex to manufacture successfully and was abandoned. This tendency will be seen again in future designs by Mr. Cray; he simply wanted the fastest machine that could be developed, almost without regard to the practicalities associated with its manufacture.
Cray left the Control Data Corporation in 1972 to found Cray Research, based in Chippewa Falls, Wisconsin. The company’s first product, the Cray–1, was introduced in 1976. One description of the machine notes:
“It boasted a world–record speed of 160 million floating–point operations per second (160 megaflops) and an 8 megabyte (1 million word) main memory. … In order to increase the speed of this system, the Cray–1 had a unique “C” shape which enabled integrated circuits to be closer together. No wire in the system was more than four feet long. To handle the intense heat generated by the computer, Cray developed an innovative refrigeration system using Freon.”
In addition to being the fastest standard computer of its day, the Cray–1 offered a very fast vector processor. The standard processor, by comparison, was called a scalar processor. Most scientific codes of the day made very heavy use of vectors, which are really just groupings of scalars (real numbers, integers, etc.). Consider this FORTRAN code.
      DO 100 J = 1, 10
  100 A(J) = B(J) + C(J)
This is just a vector addition: the elements of a 10–element vector B are added to those of a 10–element vector C and deposited in a 10–element vector A. Using a standard scalar processor, this would require ten distinct additions. On a vector processor, it essentially requires only one vector instruction. This proved a great advance in processing speed.
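The contrast can be sketched in modern terms; the following Python is purely illustrative (it models the vector add as one bulk operation, not actual Cray–1 code):

```python
# Adding two 10-element vectors, B and C, into A.
B = list(range(10))        # stand-in data for vector B
C = list(range(10, 20))    # stand-in data for vector C

# Scalar processor: one add instruction issued per element -- ten additions.
A_scalar = []
for j in range(10):
    A_scalar.append(B[j] + C[j])

# Vector processor: conceptually a single vector-add instruction over all
# ten elements, modeled here as one bulk operation.
A_vector = [b + c for b, c in zip(B, C)]

assert A_scalar == A_vector
print(A_vector)  # [10, 12, 14, 16, 18, 20, 22, 24, 26, 28]
```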
At over $8 million a copy, the Cray–1 was called the “world’s most expensive love seat”. Note its appearance in the figure below; the low “seats” are really power supplies.
Figure: The Cray–1
The fundamental tension at Cray Research, Inc. was between Seymour Cray’s desire to develop new and more powerful computers and the need to keep the cash flow going.
Seymour Cray recognized the need for cash flow from the start. As a result, he decided not to pursue his ideas based on the CDC 8600 design and chose to develop a less aggressive machine. The result was the Cray–1, which was still a remarkable machine.
With its cash flow ensured, the company then organized its efforts into two lines of work.
1. Research and development on the CDC 8600 follow–on, to be called the Cray–2.
2. Production of a line of computers that were derivatives of the Cray–1 with
improved technologies. These were called the X–MP, Y–MP, etc.
The X–MP was introduced in 1982. It was a dual–processor computer with a 9.5 nanosecond (105 MHz) clock and 16 to 128 megawords of static RAM main memory. A four–processor model was introduced in 1984 with an 8.5 nanosecond (118 MHz) clock.
The Y–MP was introduced in 1988, with up to eight processors that used VLSI chips. It had a 32–bit address space, with up to 64 megawords of static RAM main memory. It was the first commercial computer to exceed 1 Gigaflop (109 floating point operations per second), achieving a sustained speed of 2.3 Gigaflops [R54].
While his assistant, Steve Chen, oversaw the production of the commercially successful X–MP and Y–MP series, Seymour Cray pursued his development of the Cray–2, a design based on the CDC 8600, which Cray had started while at the Control Data Corporation. The original intent was to build the VLSI chips from gallium arsenide (GaAs), which would allow much faster circuitry. However, the technology for manufacturing GaAs chips was not then mature enough for use as circuit elements in a large computer.
The Cray–2 was a four–processor computer that had 64 to 512 megawords of 128–way interleaved DRAM memory. The computer was built very small in order to be very fast; as a result, the circuit boards were built as very compact stacked cards.
Due to the card density, it was not possible to use air cooling. The entire system was immersed in a tank of Fluorinert™, an inert liquid originally intended as a blood substitute. When introduced in 1985, the Cray–2 was not significantly faster than the X–MP. It sold only thirty copies, all to customers needing its large main memory capacity.
In 1989, Cray left the company to found Cray Computer, Inc. His reason for leaving was that he wanted to spend more time on research, rather than just churning out the very profitable computers that his previous company was manufacturing. This led to an interesting name game:
Cray Research, Inc. produced a large number of commercial computers;
Cray Computer, Inc. invested mostly in research on future machines.
The Cray–3 was the first project Mr. Cray envisioned for his new company. This was to be a very small computer that fit into a cube one foot on a side. Such a design would require retention of the Fluorinert cooling system. It would also be very difficult to manufacture as it would require robotic assembly and precision welding. It would also have been very difficult to test, as there was no direct access to the internal parts of the machine.
The Cray–3 had a 2 nanosecond cycle time (a 500 MHz clock). It used gallium arsenide semiconductor circuits (with an 80 picosecond switching time), which were five times faster than the silicon semiconductors used in the Cray–2. A single–processor machine would have a performance of 948 megaflops; the 16–processor model would have operated at 15.2 gigaflops. A single–processor Cray–3 was delivered in 1993, but the 16–processor model was never delivered. The Cray–4, a smaller version of the Cray–3 with a 1 GHz clock, was in development when Cray Computer, Inc. went bankrupt in 1995. Seymour Cray died on October 5, 1996, after an automobile accident.
In the end, the development of traditional supercomputers ran into several problems.
1. The end of the Cold War reduced the pressing need for massive computing facilities.
Some authors have linked the “computer technology race” to the nuclear arms race.
2. The rise of microprocessor technology allowing much faster and cheaper processors.
3. The rise of VLSI technology, making multiple processor systems more feasible.
Massively Parallel Processors (MPP)
In the late 1980’s and 1990’s, the key question in high–performance computer development focused on a choice: use a few (2 to 16) very high performance processors, or use a very large number of modified standard VLSI processors. As Seymour Cray put it: “If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?”
Although Seymour Cray said it more colorfully, there were many objections to the transition from the traditional vector supercomputer (with a few processors) to the massively parallel computing systems that replaced it. The key issue in assessing the commercial viability of a multiple–processor system is the speedup factor: how much faster is a system with N processors than a system with one? Here are two opinions from the 1984 IEEE tutorial on supercomputers.
“The speedup factor of using an n–processor system over a uniprocessor system has been theoretically estimated to be within the range (log2n, n/log2n). For example, the speedup range is less than 6.9 for n = 16. Most of today’s commercial multiprocessors have only 2 to 4 processors in a system.”
“By the late 1980s, we may expect systems of 8–16 processors. Unless the technology changes drastically, we will not anticipate massive multiprocessor systems until the 90s.”
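Taking the quoted formulas at face value, the predicted range can be computed directly. A short sketch follows; note that for n = 16 both bounds happen to coincide at 4.0, so the tutorial's quoted "6.9" figure evidently came from a somewhat different estimator than the plain formulas.

```python
import math

# Theoretical speedup range (log2 n, n/log2 n) quoted from the
# 1984 tutorial, evaluated for several processor counts.
for n in (4, 16, 64, 1024):
    lower = math.log2(n)
    upper = n / math.log2(n)
    print(f"n={n}: speedup between {lower:.1f} and {upper:.1f}")
```

The pessimism of these bounds explains the era's skepticism: even 1024 processors were predicted to yield only about a 100-fold speedup.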
Two developments facilitated the introduction of the Massively Parallel Processor systems:
1. The introduction of fast and cheap processor chips, such as the Pentium varieties.
2. The discovery that many very important problems could be solved by algorithms that displayed near “linear speedup”. Thus, multiplying the number of processors by a factor of ten might well speed up the solution by a factor of nine.
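The "near linear speedup" claim can be made precise with Amdahl's law, which bounds speedup by the serial fraction of a program. A minimal sketch; the parallel fraction p below is chosen to match the nine-fold-from-ten-processors figure, not a value given in the text.

```python
# Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N),
# where p is the parallelizable fraction of the work.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

# A 9x speedup from 10 processors implies p of roughly 0.988.
p = 0.9877
print(round(amdahl_speedup(p, 10), 1))    # 9.0
print(round(amdahl_speedup(p, 1024), 1))  # 75.4 -- still far from 1024
```

Even a 98.8% parallel program tops out well short of linear scaling at large processor counts, which is why algorithms with very small serial fractions were such an important discovery.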
The Cray XT–5, one of the later and faster products from Cray, Inc., is an MPP (Massively Parallel Processor) system, launched in November 2007. It is built from a number of Quad–Core AMD Opteron™ processors, commodity x86 chips in the same class as the Intel Pentium line. At least one model of this computer, as well as an MPP model from IBM, has sustained a speed of 1 teraflop (10^12 floating point operations per second) on real problems of significance to research scientists. The supercomputer is back.
RIVIÈRE-DU-LOUP, QUÉBEC--(Marketwired - May 9, 2014) - With the arrival of the warmer weather more and more people are strolling along the shores of the St. Lawrence. During such an outing it is altogether possible that they may happen upon a young seal lying on a beach. A person who is unfamiliar with seals may find this situation alarming and could be moved to take action. It should be noted that, in most cases, the animal will be in good health and the situation perfectly normal; any intervention may actually be fatal for the young seal!
Harbour seals give birth in May and June. Consequently, during this period beached or free-swimming seal pups can be heard crying out. They are calling for their mothers that have either left them alone temporarily, in order to fish for food offshore (during the nursing period, that lasts from May to early July), or permanently (after weaning takes place, from mid-June to late July). Seal pups spend a lot of time resting out of the water, often on beaches, to conserve energy for growth. Most are unaware of danger and do not flee at the approach of humans.
If you see a young seal on the shore maintain your distance, keep domestic animals away and, most importantly, do not touch or manipulate the pup. If it has not yet been weaned, its mother is probably in the water nearby, waiting for high tide in order to recover her offspring. Human presence or the scent of a human on the small animal could incite its mother to permanently abandon it. In this case, the pup would be condemned to death. Weaned seals go through a normal adaptation period during which time they search out their mothers. They must learn to live on their own as wild animals; any human presence could prove harmful. Keep in mind that seals are wild animals; they can bite, and there is the possibility of the transmission of infectious diseases.
If, on the other hand, you see a seal that remains in a given area on a beach near humans without returning to the water at high tide, it could be in trouble. In a case like this, contact the Quebec Marine Mammal Emergency Response Network by dialling, toll-free, 1-877-7BALEINE (1-877-722-5346).
The Quebec Marine Mammal Emergency Response Network is made up of a dozen private and governmental organizations. It has been mandated to organize, coordinate and implement measures to reduce the accidental death of marine mammals, help animals in trouble and gather information in cases of beached or drifting carcasses in waters bordering the province of Quebec.
The Network is counting on people who live along, or navigate on, the St. Lawrence to rapidly report all cases of marine mammals either in trouble or dead to 1-877-7BALEINE (1-877-722-5346). Thank you for your precious collaboration! | <urn:uuid:7762ba77-dc3c-4595-93ac-d6d0c77297b2> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/do-beached-seal-pups-require-assistance-1908587.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00525-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954073 | 605 | 2.8125 | 3 |
Despite current economic difficulties, the companies that construct and operate the data centers that run the Internet and store vast amounts of corporate and government data are expecting growth this year of about 19%. (Source: Datacentre Dynamics survey 2011).
This is not surprising when you look at current trends in the information technology sector. This year, mobile Internet handsets are estimated to send more than eight trillion text messages, process over $240 billion in mobile commerce payments, and transmit over six million terabytes of data. (Source: DEA-543 © 2012 General Electric Company) In the next decade, global digital content is forecast to increase some 30 times to 35 zettabytes (that’s 35 trillion gigabytes) and cloud computing will grow to a $240 billion market. (Source: DEA-543 © 2012 General Electric Company)
The cost and availability of energy for our power-hungry computers is a top concern, and national governments and industry agree that we must find ways to control energy demand and at the same time reduce our carbon footprint. Data centers are estimated to consume 2% of the U.S. electrical grid capacity, resulting in $2 billion per month in utility bills. A McKinsey & Company study estimates carbon dioxide (CO2) emissions from data centers will quadruple to exceed emissions from the airline industry by 2020, owing to the rapid growth in global demand for computing power. (Source: DEA-543 © 2012 General Electric Company )
The US Department of Energy is focused on “green” initiatives to improve energy efficiency and has set a target to create energy savings of 10% overall in U.S. data centers, which is 10.7 billion kWh, or the equivalent of the electricity consumed by one million typical U.S. households. Although most of the industry effort has been on improving the efficiency of information technology (IT) and cooling equipment, the uninterruptible power supply (UPS) is now in the spotlight for energy optimization, with manufacturers such as General Electric leading the way. (Source: DOE 2011)
The U.S. Department of Energy (DOE), the U.S. Environmental Protection Agency (EPA), The Green Grid and other organizations are developing metrics to measure data center efficiency and track improvements. The EPA has new programs for Energy Star ratings for data centers, IT servers and UPSs in various stages of development. The Green Grid consortium has established the power usage effectiveness (PUE) measurement to define and track data center efficiency.
PUE, the ratio of total facility power consumption to IT power consumption, measures the energy efficiency of the electrical and cooling infrastructure supporting the IT loads. A typical PUE is around 2.0, with the IT load, cooling system and UPS system being the largest energy consumers. Recent energy optimization of the cooling system has resulted in a PUE of 1.5 or less for new data centers. To push PUE below 1.5, optimization of UPS efficiency is required.
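The PUE arithmetic is simple enough to sketch directly; the kilowatt figures below are illustrative, not taken from the article.

```python
# PUE = total facility power / IT equipment power.
# A perfect facility (no overhead at all) would score 1.0.
def pue(total_kw, it_kw):
    return total_kw / it_kw

# Illustrative: a facility drawing 3,000 kW to support a 2,000 kW IT load.
print(pue(3000, 2000))  # 1.5
```

A facility at the "typical" PUE of 2.0 burns one watt of overhead for every watt delivered to IT equipment, which is why cooling and UPS efficiency dominate the savings discussion.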
The UPS is a critical part of the data center power infrastructure that improves the reliability of power supplied by an energy utility company from a typical 99.9% to the 99.999% required by a data center facility. Double conversion UPS systems have become the preferred technology for data center applications.
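The jump from 99.9% to 99.999% availability is easier to appreciate as annual downtime; a quick sketch:

```python
HOURS_PER_YEAR = 8760

def downtime_per_year(availability):
    """Hours of expected downtime per year for a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

print(round(downtime_per_year(0.999), 2))         # 8.76 hours ("three nines")
print(round(downtime_per_year(0.99999) * 60, 2))  # 5.26 minutes ("five nines")
```

The UPS's job, in other words, is to close the gap between almost nine hours of utility outage per year and about five minutes.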
There are significant improvements in UPS energy efficiency available today, based on new technology, a better understanding of IT server power-quality requirements and the reliability of the utility grid. Recent technology improvements in the high-efficiency operating mode (previously called “eco mode”) for double-conversion UPSs reduce UPS energy losses by almost 75%. The data center industry will be able to adopt this high-efficiency mode for double-conversion UPSs without compromising operational reliability.
The power quality of the utility grid determines the availability of the UPS high-efficiency operating mode. The more reliable the utility power, the higher the availability of the UPS high-efficiency mode. EPRI developed the System Average RMS Variation Frequency Index (SARFI) to measure the actual utility voltage variation at more than 200 substation sites around the U.S. over a two-year period.
EPRI's data show that, on average, only about 25 utility voltage events per year fall outside the ITI (CBEMA) curve. Although most of these events are of very short duration (less than 10 seconds), a conservative assumption of one hour of double-conversion operation per voltage event means the UPS need only operate in double-conversion mode about 25 hours per year; the availability of the high-efficiency operating mode would exceed 99%.
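Under that conservative one-hour-per-event assumption, the availability of the high-efficiency mode follows directly:

```python
HOURS_PER_YEAR = 8760
events_per_year = 25   # EPRI: events outside the ITI (CBEMA) curve
hours_per_event = 1.0  # conservative assumption from the article

double_conversion_hours = events_per_year * hours_per_event
availability = 1 - double_conversion_hours / HOURS_PER_YEAR
print(f"{availability:.2%}")  # 99.71%
```

So the UPS would spend over 99.7% of the year in its efficient mode, reserving the lossier double-conversion path for the rare voltage disturbances.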
The eco mode UPS is included as an energy-efficiency recommendation in the recent Green Grid Data Center Maturity Model, and The Green Grid will be offering a presentation entitled “Evaluation of Eco Mode” at the upcoming Green Grid tech forum in March. Also, the U.S. EPA has included eco mode in the upcoming EPA Energy Star specification for UPSs, and it even stated that using eco mode is an innovative strategy for saving energy in UPS systems.
The cost savings of the high-efficiency UPS operating mode are significant when evaluated over a 10-year life cycle. For a data center with 5 MW of IT load, operating the UPS in traditional double-conversion mode will entail power costs exceeding $4.5 million over 10 years, assuming $0.10/kWh. The same data center operating in UPS high-efficiency mode will only have power costs of about $1.2 million, providing a savings of $3.2 million over 10 years.
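The article's 10-year figures can be roughly reconstructed from UPS losses. The efficiencies below are my assumptions, chosen to approximate the article's stated costs; the article itself gives only the resulting dollar figures.

```python
IT_LOAD_KW = 5000      # 5 MW IT load
RATE = 0.10            # $/kWh, per the article
HOURS = 8760 * 10      # ten years of continuous operation

def ups_loss_cost(efficiency):
    # Power burned inside the UPS itself to deliver the IT load.
    loss_kw = IT_LOAD_KW / efficiency - IT_LOAD_KW
    return loss_kw * HOURS * RATE

double_conversion = ups_loss_cost(0.91)   # assumed ~91% efficient
high_efficiency = ups_loss_cost(0.973)    # assumed ~97.3% efficient
print(round(double_conversion / 1e6, 1))  # ~ $4.3M over 10 years
print(round(high_efficiency / 1e6, 1))    # ~ $1.2M over 10 years
```

The roughly $3M spread between the two modes is in line with the article's claimed $3.2 million savings.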
If the data center industry adopted UPS high-efficiency mode as normal operation, the impact would be significant. According to Frost & Sullivan UPS World Reports, the UPS installed base from the last 10 years is between 60,000 and 70,000 MW, and the efficiency of this legacy UPS double-conversion technology is probably 90% or less. If this installed base were replaced with UPSs operating in high-efficiency mode, the data center industry would be able to realize at least 4,000 MW of energy savings, producing annual energy cost savings of more than $3 billion per year. (Source: Frost & Sullivan 2011)
UPS manufacturers such as GE have responded with a new range of energy-efficient models, and although the conversion of the legacy UPS installed base to high-efficiency mode will not happen overnight, the technology is proven, the business case is clear and the opportunity for new data centers to install and operate UPSs in high-efficiency mode is attainable today.
About the Author
Brad Thrash is a product manager at GE Energy working in the power-quality business and is responsible for the global three-phase UPS product line. He has over 25 years of experience in GE power generation and power-quality businesses, with leadership roles in application engineering, service engineering, sales and product management.
Brad has a degree in mechanical engineering and is a licensed professional engineer. He is also a member of IEEE and ASME and is on the Power Sub Work Group of The Green Grid. He will be presenting a paper on the Evaluation of Eco Mode for UPS at the Green Grid Forum 2012 on March 6 in San Jose, California. | <urn:uuid:5b8c828d-f4d5-49d7-a8c7-4d22fc1b1f7a> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/the-green-data-center-opportunity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929392 | 1,454 | 2.546875 | 3 |
Over the course of 20 years cell phones have evolved from status symbols for the very few to the primary means of communication for the very many.
In the process they've helped change our memories (why remember a friend's number if it's in your speed dial?), our sense of direction (GPS is easier than maps, even for places we go frequently) and the way we communicate (with phones in our pockets we can chat with anyone, anytime, but increasingly prefer to text or email because it's more convenient for us and more polite to those who share our physical space, rather than our digital networks).
As cell phones have become ubiquitous they've also become the go-to method of tracing social and behavioral patterns we may have suspected but were never able to confirm.
Studying data showing who phones whom within an organization, how quickly those calls were returned and what secondary calls they set off identifies the paths of real influence, not just titular authority, even in large companies, for example.
Now those records may change the way we understand the nature of male-female relationships and changes in what each gender wants from the other at different times in individual lives.
In the most unlikely but by far most interesting possibility, they may also explain more about how natural selection works in humans today than all the evolution studies ever published on the sex life of fruit flies.
Following up on earlier studies showing that most people tend to pick friends who are similar to themselves, University of Oxford anthropologist Robin Dunbar and colleagues talked an unnamed telco in an unnamed European country into turning over seven months' worth of call records for about 3.2 million subscribers.
Looking only at calls for which the age and gender of the caller was known, researchers analyzed patterns of contact in 1.95 billion cell phone calls and 489 million text messages, sifting for patterns that indicate who those callers had chosen for one-on-one friendships and how those relationships were maintained.
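The study's core measurement, ranking each subscriber's contacts by call volume to identify "best friends", can be sketched in a few lines. The call records below are invented for illustration; the real dataset, of course, spanned billions of events.

```python
from collections import Counter

# Toy call records: (caller, callee) pairs.
calls = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "carol"),
    ("bob", "alice"), ("bob", "dave"), ("alice", "bob"),
]

def top_contacts(subscriber, records, n=3):
    # Rank a subscriber's callees by how often they were called.
    counts = Counter(callee for caller, callee in records
                     if caller == subscriber)
    return counts.most_common(n)

print(top_contacts("alice", calls))  # [('bob', 3), ('carol', 1)]
```

From rankings like these, aggregated by the callers' age and gender, the researchers inferred who occupied the best, second-best and third-best friend positions at each stage of life.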
Previous studies showed that, in established organizations, women's communication styles tended to be overlooked in favor of men's. Those studies involved communication networks and norms created largely by men, for men, without consideration for the development of romantic relationships, even though romance is a high priority for men as well as women.
Studying patterns of contact through mobile-phone use identifies actual behavior patterns formed according to the abilities and priorities of each gender, researchers said, giving a more accurate picture of whose voice has the most impact and which gender holds real power in a technological society.
Scientific evidence overturns the same-gender-friendship rule 'bros before 'hos'
The most surprising finding was that, during their 20s and 30s, both men and women spent a lot of energy maintaining at least one extremely close friendship with a member of the opposite sex of similar age, usually a romantic partner, but not inevitably.
Both sexes peak young-ish in romantic activity as reflected in their phone traffic. For men, 32 was the peak of phone activity with a woman of the same age; for women it was 27.
After that, women's relationships remain far more dynamic than men's; men tend to make and keep about the same number of friends of both sexes as earlier in their lives, and to keep the same people in the positions of best, second-best and third-best friend, as judged by volume of calls.
For men a spouse most often remains in the No. 1 spot during the 30s, 40s and even into the 50s, while women exhibit a far greater preference for female friends and to shift the positions of their top three friends, or even replace them.
After age 50 both men and women shift toward companionship as the main criterion for friendship, rather than romance or reproduction.
Men's existing relationships remain relatively stable; their relationships with younger people show a lack of gender bias that, if the younger-generation friends are offspring, "reflect a strong lack of discrimination" in the choice of whom to contact.
Men tend not to play favorites among their children, in other words.
After age 50 women tend strongly toward same-sex friendships, especially toward a strong relationship with at least one woman a generation younger, which Dunbar interpreted as a real or ersatz mother-daughter relationship.
Women's networks define social norms, requirements and conventions
Those relationships are a continuation of what the researchers describe as a matrilineal social order – priorities, behavioral norms and requirements of romantic partners determined by women and acceded to by men – the reverse of the patrilineal order they expected to find.
The mechanism for all that tends to reinforce gender stereotypes.
Men in their 20s and 30s, researchers found, spend far less time talking to one another about their relationships and far more talking to a woman defined as a best friend.
Women of the same age spend a lot more time and effort talking to each other one-on-one using voice, text and email than men, who used the phone far less and tended to bond in groups during activities rather than conversations.
Women's phone records showed "intense, one-on-one friendships maintained and shaped through frequent communication."
Their conclusion was that women's networks weren't only about friendship and mutual emotional support. They were focused on choosing, developing and maintaining the women's romantic relationships.
'Hen party' or Committee on Evolution of the Species?
Those focused discussions amount to a cooperative analysis of the prospective mates for each woman and the best way to go about choosing a mate.
That complex analysis plays much the same role in humans that mating displays or battles for dominance among males do among other animals, the paper concludes.
Rather than judging males on the redness of a baboon butt or size of the rock nest built by a hopeful penguin, human females judge prospective mates using a complex analysis of their physical appearance, social skills, potential for growth in social and financial circumstances, emotional aptitude and compatibility and other factors.
Hence, women in their 20s and 30s define the social requirements for both sexes of the same age, largely through decisions reached through long discussions and consensus among a network of female friends, often advised or guided by women over 50 providing advice.
Men, whose volume of phone conversation doesn't peak until 32, rather than 27 in women, appear to be separate from this process during their 20s, but gradually accede to it in order to form long-term pair bonds of their own.
“We have been able to demonstrate striking patterns in mobile phone usage data that reflects shifts in relationship preferences as a function of the way the reproductive strategies of the two sexes change across the lifespan," researchers concluded. "We have been able to demonstrate a marked sex difference in investment in relationships during the period of pairbond formation, suggesting that women invest much more heavily in pairbonds than do men. Though previously suspected, this suggestion has been difficult to test."
So, if you're one of those people others hound about talking too much on your cell or texting while they'd prefer you pay attention to them, now you have an answer: you're not gossiping or wasting time. You're guiding the path of human evolution and developing the rules that govern how members of a society should conduct themselves and specifically why, given the application of the right criteria, men can be demonstrated scientifically to be complete jerks.
Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld.
For a long time, the copper network, a massive and revolutionary web of copper wires, has held the dominant position in network architecture, enabling us to talk with each other through these newfangled telephones. Given people's growing desire for more transcontinental telecom traffic, the proper application of data cabling and wiring, also referred to as network cabling and wiring, has become imperative for successful business, government and academic network infrastructure installations.
There are two main types of network cabling and wiring methods: copper networks and fiber optic networks.
Copper data network systems can be divided into several categories, or standards, by cabling standards organizations, which use bandwidth requirements to determine the proper customer application of each category of cabling. The frequency bandwidth for each category of data cabling is listed below:
Category 5e: 1 – 100 MHz
Category 6: 1 – 250 MHz
Category 6A: 1 – 500 MHz
Category 7: 1 – 600 MHz
Category 7A: 1 – 1000 MHz
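The category list above amounts to a simple lookup table; a short sketch shows how one might pick the lowest category that meets a bandwidth requirement. The 300 MHz figure below is purely illustrative.

```python
# Frequency bandwidth ceiling per copper cabling category, in MHz,
# taken from the list above.
CATEGORY_MHZ = {
    "5e": 100, "6": 250, "6A": 500, "7": 600, "7A": 1000,
}

def min_category(required_mhz):
    # Lowest category whose bandwidth meets the requirement
    # (dict insertion order runs lowest to highest).
    for cat, mhz in CATEGORY_MHZ.items():
        if mhz >= required_mhz:
            return "Category " + cat
    return None

print(min_category(300))  # Category 6A
```

Note that category bandwidth (MHz) is a signaling ceiling, not a data rate; the achievable data rate also depends on the Ethernet standard run over the cable.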
Copper networks deliver important performance in television, video, internet and telephone applications. Copper data cabling can be further divided into three sub-types: unshielded twisted pair (UTP), screened twisted pair (F/UTP) and shielded twisted pair (S/FTP).
Fiber optic networking, or fiber optic cabling, is the second type of network data cabling system. In this system, data signals are transmitted over thin glass-core fibers in the form of laser light pulses. Fiber optic cabling allows data signals to be transmitted much faster, at a higher bandwidth and over much greater distances than copper data cabling systems. It provides greater signal capacity than a copper network and is largely impervious to interference, with low transmission loss.
The advantages of fiber optics over copper wires make fiber attractive for many companies deploying networks today. But the reality is that in most countries total fiber miles are still dwarfed by copper miles, and it will take many years and much money before fiber replaces the copper networks. The sheer scale of the existing copper network forces us to consider a more cost-effective and efficient technology: Ethernet over Copper (EoC), which delivers high bandwidth and the latest features over that existing copper connection at an affordable cost.
Fiber optics would have been a good option, but it may also be less attractive because of the technological requirements and the cost implications. By using existing copper connections combined with state-of-the-art equipment, including copper Ethernet cable, Ethernet over Copper service companies can help deliver the necessary bandwidth without compromising quality, building on the foundation of the existing copper data cabling networks.
If your company is using a T1 line, EoC will provide you with the same bandwidth at a higher level of reliability, and it is cheaper to install than a T1 line. Upgrading a bonded T1 to partial or full DS3 service costs a lot of money; with EoC, switching to partial or full DS3 can be done at a much lower price.
Ethernet over Copper is ideal for companies with bandwidth requirements that do not exceed 15 Mbps, which is adequate to meet the needs of most small and medium-sized companies. If you need higher bandwidth, you will have to go for a fiber optic network.
Today is the 19th of January, 2013. Which means 19th of January, 2038 is now exactly 25 years away from us.
Why does it matter? Because at 03:14:07 UTC on 19th of January 2038 we will run into the Year 2038 Problem.
Many Unix-based systems can't handle dates beyond that moment. For example, common Unix-based phones today won't let you set the date beyond 2038. This applied to all iPhones and Androids we tried it on (iOS is based on BSD and Android is Linux). Obviously this does not apply to Windows Phones, which let you set the date all the way to year 3000.
Yes, 25 years is a long time. But Unix-based systems will definitely still be in use at that time. And some things can start failing way before 2038. For example, if your Unix-based system calculates 25-year interests today, it better not be using time_t for the calculations. | <urn:uuid:d924bbcb-e7ed-4e3f-8740-8fbcc553de37> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00002489.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948031 | 205 | 3.4375 | 3 |
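The exact rollover moment follows from the width of a signed 32-bit time_t; a quick check:

```python
import datetime

# A signed 32-bit time_t counts seconds since the Unix epoch
# (1970-01-01 00:00:00 UTC) and tops out at 2**31 - 1 seconds.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
rollover = epoch + datetime.timedelta(seconds=2**31 - 1)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second later the counter wraps to a large negative number, which naive code interprets as a date in December 1901.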
Video analytics — the automated analysis of terabytes of video content — has a proven track record helping investigators to glean information from surveillance cameras, recognize faces in a crowd, or zoom in on the license plates of suspects. However, researchers know they need more advanced capabilities and software algorithms to go beyond detection and tracking and really understand the relationships between objects in video footage.
Does this open the door for VARs and ISVs? Primary obstacles to the successful use of aggregated surveillance data remain geolocation and processing, both of which provide opportunities for IT solutions providers.
The abundance of video data requires a new system of analytics, with the capability to extract information, generate data sets and identify patterns.
One opportunity is in software development that is in line with the advances in video camera technology, according to Jie Yang, program director at the National Science Foundation, who states that there is now much more digital video data in nonstandard formats (petabytes of it) coming from cell phones, digital cameras, tablets, surveillance systems and unmanned aerial vehicles.
So-called computer vision research, which focuses on image processing, combined with computer graphics and natural language expertise, will move video analysis beyond object detection to help analysts determine the relationships between the objects, Yang says.
One such forward-looking project is the Visual Cortex on Silicon, an NSF-backed research project led by Penn State University, which seeks to design a machine-vision system that operates much like human vision, allowing computers to record and understand visual content much faster and more efficiently than current technologies.
The Intelligence Advanced Research Projects Activity (IARPA) is involved in a five-year research project called Aladdin Video. Using a mix of audio and video extraction, knowledge representation and search technologies Aladdin processes video, starting with an automatic tagger that looks at the video in the analyst’s queue and creates a sophisticated search index for that information library.
Jill Crisman, IARPA program manager for both Finder and Aladdin Video, explains: “It is sort of like a giant card catalog of information about the videos.” Aladdin applies metatags to online video in order to describe its content and assist in multimedia event detection, creating what Crisman calls “a text document for video.”
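The "card catalog" idea can be sketched as an inverted index mapping content tags to the videos that contain them. Everything below, including the tag vocabulary and clip names, is hypothetical; the real Aladdin tagger derives its tags automatically from audio and video analysis.

```python
from collections import defaultdict

# Inverted index: tag -> set of video ids containing that tag.
index = defaultdict(set)

def tag_video(video_id, tags):
    for tag in tags:
        index[tag].add(video_id)

def search(*tags):
    # Videos matching every requested tag.
    results = [index[t] for t in tags if t in index]
    return set.intersection(*results) if results else set()

tag_video("clip_001", ["parade", "outdoor", "crowd"])
tag_video("clip_002", ["crowd", "indoor", "speech"])
print(search("crowd", "outdoor"))  # {'clip_001'}
```

An analyst's event query then reduces to set intersections over the index rather than a replay of the raw footage, which is what makes search over petabytes of video tractable.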
Another IARPA program, called Finder, is designed to help analysts locate non-geotagged imagery, whether photographs or video. The Aladdin Video program seeks to improve search capabilities for specific events so that analysts can more quickly find the videos most relevant to their needs.
“The goal of the Finder program is to develop tools to help analysts locate where in the world images or video were taken,” Crisman explains.
Ultimately, researchers want to be able to manipulate video the same way they manipulate other data. “We’re looking to exploit pixels and turn them into data that can be combined with (other sources),” said Ken Rice, chief of the ISR integration division at the National Geospatial-Intelligence Agency, at a recent National Institute of Standards and Technology symposium.
One major hurdle is the variety of algorithms currently used to process video. Rice says there could be as many as 20 algorithms to process video coming from Predator unmanned aerial vehicles.
“What we really need is an open framework for video processing,” which should include a way of characterizing what has been done to the pixels during processing, Rice says. That way, experts can factor into their analysis any uncertainty introduced by changes to the video.
The eyes may deceive, but advanced research into video analytics is closing the gap between what analysts see and what they know, and it is creating a new avenue for solutions providers. | <urn:uuid:39ee77eb-06b5-405b-a471-00dc3474a098> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/is-video-data-a-new-opportunity-for-vars-and-isvs-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930846 | 761 | 2.515625 | 3 |
Cyber-criminals targeting the Internet of Things (IoT) could result in disasters in the physical world, according to security expert Bruce Schneier.
Writing in Motherboard magazine, Schneier said that the Internet of Things had given the internet “hands and feet” and the ability to “directly affect the physical world”.
“What used to be attacks against data and information have become attacks against flesh, steel, and concrete,” he said.
He said that threats such as hacking cars while they drive on a motorway, remotely killing a person by hacking a medical device, or taking control of missile systems were all possibilities.
“The Internet of Things will allow for attacks we can’t even imagine,” he warned.
Government action required on IoT
Schneier, who is CTO of Resilient Systems, which is part of IBM, said that security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement.
“This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organisations to understand.”
He said that governments needed to play a larger role in combatting the threat of hacking within IoT.
“The next president will probably be forced to deal with a large-scale internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen,” he said.
Roy Fisher, a security consultant at MWR InfoSecurity, told Internet of Business that IoT in an enterprise environment – i.e. deploying multiple connected systems across a large or potentially multinational organisation – has multiple implications in terms of security controls.
“For this to be viable, organisations may need to segregate off environments to localise the risk posed through a potential flaw in one of the components. This will not only require a large overhead from implementation but could also hinder the benefits of IoT,” he said. | <urn:uuid:ce24ef37-adb6-40dc-a9bd-80b9b9f738d2> | CC-MAIN-2017-04 | https://internetofbusiness.com/schneier-warns-cybersecurity-threat-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954083 | 439 | 2.78125 | 3 |
A computer program has successfully passed the Turing test by pretending to be a 13-year-old Ukrainian boy named Eugene Goostman. During a series of five-minute-long keyboard-based conversations, the chat bot convinced 33 percent of the judges that it was human. The results are just above the 30 percent mark, commonly cited as the passing threshold for the eponymous test, devised by famed mathematician Alan Turing over six decades ago.
The competition was organized by the University of Reading and conducted at the Royal Society in London. The date of the event, June 7, 2014, marked the 60th anniversary of Turing’s death and came nearly six months after his posthumous royal pardon.
As outlined in Alan Turing’s 1950 paper, the test was designed to assess a machine’s ability to exhibit intelligence indistinguishable from that of a human. In its basic form, a human judge asks the computer a series of questions. If the judge cannot distinguish the computer from a human interlocutor, then the machine is said to have passed the test.
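The pass/fail bookkeeping behind the headline is simple: tally how many judges mistook the machine for a human and compare the rate against the commonly cited 30 percent bar. The sketch below illustrates that arithmetic; the panel of 30 judges and the individual verdicts are hypothetical, chosen only to reproduce the reported 33 percent figure.

```python
def run_turing_test(judge_verdicts, threshold=0.30):
    """Tally judge verdicts from a series of five-minute conversations.

    judge_verdicts: list of booleans, True where a judge mistook the
    machine for a human. The commonly cited bar is fooling more than
    30% of judges.
    """
    fooled = sum(judge_verdicts)
    rate = fooled / len(judge_verdicts)
    return rate, rate > threshold

# Hypothetical panel of 30 judges, 10 of whom were convinced (~33%).
verdicts = [True] * 10 + [False] * 20
rate, passed = run_turing_test(verdicts)
print(f"{rate:.0%} of judges fooled -> {'pass' if passed else 'fail'}")
```

Note that the threshold itself is a convention drawn from Turing's 1950 prediction, not part of his original imitation-game protocol.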
Until today, no computer program had successfully passed the 30 percent confidence threshold, although several, including Elbot and ELIZA, came pretty close.
Eugene’s developers, Vladimir Veselov and Eugene Demchenko, attribute the success to the program’s personality (a know-it-all teenager) and to the dialog system that is capable of handling both vague and direct questions.
“Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything. We spent a lot of time developing a character with a believable personality,” said Veselov. “This year we improved the ‘dialog controller’ which makes the conversation far more human-like when compared to programs that just answer questions. Going forward we plan to make Eugene smarter and continue working on improving what we refer to as ‘conversation logic’.”
Some of Eugene’s responses betray his “artificial” nature. For example, when asked how he felt about the accomplishment, Eugene remarked: “I feel about beating the turing test in quite convenient way. Nothing original.”
The announcement from the University of Reading describes the program as a supercomputer, but it really doesn’t take any sort of specialized hardware to run a chat bot. Real supercomputers are making breakthroughs that benefit humanity and expand our horizons. The kind of AI that Turing envisioned, something akin to actual human intelligence that shows creativity and broad problem-solving ability, has been cast aside in favor of more specialized intelligence tests, like winning a chess match or a television game show. Today’s breed of AI programs (Watson, Siri, etc.) can’t, as Gary Marcus puts it, “come close to doing what any bright, real teenager can do: watch an episode of ‘The Simpsons,’ and tell us when to laugh.”
As the name implies, this new feature highlights the top research stories of the week, hand-selected from prominent science journals and leading conference proceedings. This week brings us a wide-range of topics from stopping the spread of pandemics, to the latest trends in programming and chip design, and tools for enhancing the quality of simulation models.
Heading Off Pandemics
Virginia Tech researchers Keith R. Bisset, Stephen Eubank, and Madhav V. Marathe presented a paper at the 2012 Winter Simulation Conference that explores how high performance informatics can enhance pandemic preparedness.
The authors explain that pandemics, such as the recent H1N1 influenza, occur on a global scale and affect large swathes of the population. They are closely aligned with human behavior and social contact networks. The ordinary behavior and daily activities of individuals operating in modern society (with its large urban centers and reliance on international travel) provides the perfect environment for rapid disease propagation. Change the behavior, however, and you change the progression of a disease outbreak. This maxim is at the heart of public health policies aimed at mitigating the spread of infectious disease.
Armed with this knowledge, experts can develop effective planning and response strategies to keep pandemics in check. According to the authors, “recent quantitative changes in high performance computing and networking have created new opportunities for collecting, integrating, analyzing and accessing information related to such large social contact networks and epidemic outbreaks.” The Virginia Tech researchers have leveraged these advances to create the Cyber Infrastructure for EPIdemics (CIEPI), an HPC-oriented decision-support environment that helps communities plan for and respond to epidemics.
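The core idea behind contact-network epidemic models of this kind can be illustrated with a toy SIR (susceptible–infectious–recovered) simulation. The sketch below is not CIEPI itself, and the small contact network and transmission parameters are invented for illustration; real systems operate on synthetic populations with millions of individuals.

```python
import random

def simulate_sir(contacts, p_transmit=0.3, t_recover=2, seed_node=0, rng_seed=42):
    """Toy SIR epidemic on a social contact network.

    contacts: dict mapping person -> list of daily contacts.
    Each day, every infectious person transmits to each susceptible
    contact with probability p_transmit, and recovers after
    t_recover days. Returns the total number ever infected.
    """
    rng = random.Random(rng_seed)
    state = {p: "S" for p in contacts}       # S, I, or R
    days_infected = {p: 0 for p in contacts}
    state[seed_node] = "I"
    while any(s == "I" for s in state.values()):
        newly_infected = []
        for person, s in state.items():
            if s != "I":
                continue
            for other in contacts[person]:
                if state[other] == "S" and rng.random() < p_transmit:
                    newly_infected.append(other)
            days_infected[person] += 1
            if days_infected[person] >= t_recover:
                state[person] = "R"
        for person in newly_infected:
            if state[person] == "S":
                state[person] = "I"
    return sum(1 for s in state.values() if s == "R")

# A small hypothetical contact network (e.g. household + workplace links).
net = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
print("total ever infected:", simulate_sir(net))
```

Changing the network structure or lowering `p_transmit` (a stand-in for interventions such as school closures or vaccination) changes the outbreak size, which is precisely the lever public-health policies pull.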
Help for the “Average Technologist”
Another paper in the Proceedings of the Winter Simulation Conference addresses methods for boosting the democratization of modeling and simulation. According to Justyna Zander, Senior Technical Education Evangelist at The MathWorks (who is also affiliated with Gdansk University of Technology in Poland), and Pieter J. Mosterman, Senior Research Scientist at The MathWorks (who is also an adjunct professor at McGill University), the list of practical applications associated with computational science and engineering is expanding. It’s common for people to use search engines, social media, and aspects of engineering to enhance their quality of life.
The researchers are proposing an online modeling and simulation (M&S) platform to assist the “average technologist” with making predictions and extrapolations. In the spirit of “open innovation,” the project will leverage crowd-sourcing and social-network-based processes. They expect the tool to support a wide range of fields, for example behavioral model analysis, big data extraction, and human computation.
In the words of the authors: “The platform aims at connecting users, developers, researchers, passionate citizens, and scientists in a professional network and opens the door to collaborative and multidisciplinary innovations.”
OpenStream and OpenMP
OpenMP is an API that provides shared-memory parallel programmers with a method for developing parallel applications on a range of platforms. A recent paper in the journal ACM Transactions on Architecture and Code Optimization (TACO) explores a streaming data-flow extension to the OpenMP 3.0 programming language. In this well-structured 26-page paper, OpenStream: Expressiveness and data-flow compilation of OpenMP streaming programs, researchers Antoniu Pop and Albert Cohen of INRIA and École Normale Supérieure (Paris, France) present an in-depth evaluation of their hypothesis.
The work addresses the need for productivity-oriented programming models that exploit multicore architectures. The INRIA researchers argue the strength of parallel programming languages based on the data-flow model of computation. More specifically, they examine the stream programming model for OpenMP and introduce OpenStream, a data-flow extension of OpenMP that expresses dynamic dependent tasks. As the INRIA researchers explain, “the language supports nested task creation, modular composition, variable and unbounded sets of producers/consumers, and first-class streams.”
“We demonstrate the performance advantages of a data-flow execution model compared to more restricted task and barrier models. We also demonstrate the efficiency of our compilation and runtime algorithms for the support of complex dependence patterns arising from StarSs benchmarks,” notes the abstract.
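OpenStream itself extends OpenMP's C pragmas, but the data-flow model of computation it builds on can be sketched independently: tasks are connected by streams, and a task may fire whenever a value is waiting on its input stream, regardless of what the other tasks are doing. The minimal sequential sketch below illustrates only that firing rule, not OpenStream's syntax or its parallel runtime; the three-stage pipeline is hypothetical.

```python
from collections import deque

def run_dataflow(source, stages):
    """Tasks connected by FIFO streams; a task fires whenever data is
    available on its input stream, decoupling producers from consumers.
    """
    streams = [deque(source)] + [deque() for _ in stages]
    while any(streams[i] for i in range(len(stages))):
        for i, stage in enumerate(stages):
            if streams[i]:  # firing rule: a value is waiting on the input
                streams[i + 1].append(stage(streams[i].popleft()))
    return list(streams[-1])

# Hypothetical three-stage pipeline: scale, offset, format.
result = run_dataflow(range(5), [lambda x: x * 2, lambda x: x + 1, str])
print(result)  # -> ['1', '3', '5', '7', '9']
```

In OpenStream the same dependence structure is declared with `input`/`output` clauses on OpenMP tasks, and the runtime extracts pipeline parallelism from it; here the point is only that execution order is driven by data availability rather than by program order.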
The Superconductor Promise
There’s no denying that the pace of Moore’s Law scaling has slowed as transistor sizes approach the atomic level. Last week, DARPA and a semiconductor research coalition unveiled the $194 million STARnet program to address the physical limitations of chip design. In light of this recent news, this research out of the Nagoya University is especially relevant.
Quantum engineering specialist Akira Fujimaki has authored a paper on the advancement of superconductor digital electronics that highlights the role of the Rapid Single Flux Quantum (RSFQ) logic circuit. Next-generation chip design will need to minimize power demands and gate delay, and this is the promise of RSFQ circuits, according to Fujimaki.
“Ultra short pulse of a voltage generated across a Josephson junction and release from charging/discharging process for signal transmission in RSFQ circuits enable us to reduce power consumption and gate delay,” he writes.
Fujimaki argues that RSFQ integrated circuits (ICs) have advantages over semiconductor ICs, and energy-efficient single flux quantum circuits have been proposed that could yield additional benefits. And thanks to advances in the fabrication process, RSFQ ICs have a proven role in mixed signal and IT applications, including datacenters and supercomputers.
Working Smarter, Not Harder
Is this a rule to live by or an overused maxim? A little of both maybe, but it’s also the title of a recent journal paper from researchers Susan M. Sanchez of the Naval Postgraduate School, in Monterey, Calif., and Hong Wan of Purdue University, West Lafayette, Ind.
“Work smarter, not harder: a tutorial on designing and conducting simulation experiments,” published in the WSC ’12 proceedings, delves into one of the scientist’s most important tasks, creating simulation models. Such models not only greatly enhance scientific understanding, they have implications that extend to national defense, industry and manufacturing, and even inform public policy.
Creating an accurate model is complex work, involving thousands of factors. While realistic, well-founded models are based on high-dimensional design experiments, many large-scale simulation models are constructed in ad hoc ways, the authors claim. They argue that what’s needed is a solid foundation in experimental design. Their tutorial includes basic design concepts and best practices for conducting simulation experiments. Their goal is to help other researchers transform their simulation study into a simulation experiment. | <urn:uuid:a89933eb-f6ae-4df8-b11e-ab44045dcef1> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/01/24/the_week_in_hpc_research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00498-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904443 | 1,421 | 2.515625 | 3 |
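One standard tool from the experimental-design foundation the authors advocate is the space-filling design, which spreads simulation runs evenly across the factor space instead of picking settings ad hoc. The sketch below builds a simple Latin hypercube design; the two queueing-style factors and their ranges are hypothetical, chosen only for illustration.

```python
import random

def latin_hypercube(n_runs, factors, rng_seed=0):
    """Latin hypercube design: each factor's range is cut into n_runs
    equal slices and each slice is sampled exactly once, so no region
    of any factor's range goes unexplored (unlike ad hoc settings).

    factors: dict of name -> (low, high).
    Returns a list of n_runs design points (dicts of factor settings).
    """
    rng = random.Random(rng_seed)
    design = [{} for _ in range(n_runs)]
    for name, (low, high) in factors.items():
        slices = list(range(n_runs))
        rng.shuffle(slices)              # decorrelate the factors
        width = (high - low) / n_runs
        for run, s in zip(design, slices):
            run[name] = low + (s + rng.random()) * width
    return design

# Hypothetical simulation factors: arrival rate and mean service time.
plan = latin_hypercube(5, {"arrival_rate": (0.1, 1.0), "service_time": (2.0, 10.0)})
for point in plan:
    print(point)
```

Each row of `plan` is one simulation run; with five runs and two factors, every fifth of each factor's range is visited exactly once, which is what lets a modest number of runs support trustworthy response-surface analysis.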