text stringlengths 234 589k | id stringlengths 47 47 | dump stringclasses 62 values | url stringlengths 16 734 | date stringlengths 20 20 ⌀ | file_path stringlengths 109 155 | language stringclasses 1 value | language_score float64 0.65 1 | token_count int64 57 124k | score float64 2.52 4.91 | int_score int64 3 5 |
|---|---|---|---|---|---|---|---|---|---|---|
Definition: A variant of a linked list in which each item has a link to the previous item as well as the next. This allows easily accessing list items backward as well as forward and deleting any item in constant time.
Also known as two-way linked list, symmetrically linked list.
See also jump list.
Note: See [Stand98, p. 91].
Binary search may be effective with an ordered, doubly linked list. It makes O(n) traversals, as does linear search, but it only performs O(log n) comparisons. For more explanation, see Tim Rolfe's Searching in a Sorted Linked List.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 23 May 2011.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "doubly linked list", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 23 May 2011. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/doublyLinkedList.html | <urn:uuid:82afebea-1a15-40b6-b35c-eeb629222727> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/doublyLinkedList.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903292 | 260 | 3.578125 | 4 |
Ryan J.W.,University of South Australia |
Ryan J.W.,Chemical Pathology Directorate and Hanson Institute |
Anderson P.H.,University of South Australia |
Anderson P.H.,Chemical Pathology Directorate and Hanson Institute |
And 4 more authors.
Clinica Chimica Acta | Year: 2013
Vitamin D activity requires an adequate vitamin D status as indicated by the serum level of 25-hydroxyvitamin D and appropriate expression of genes coding for vitamin D receptor and 25-hydroxyvitamin D 1α-hydroxylase, the enzyme which converts 25-hydroxyvitamin D to 1,25-dihydroxyvitamin D. Vitamin D deficiency contributes to the aetiology of osteomalacia and osteoporosis. The key element of osteomalacia, or rickets in children, is a delay in mineralization. It can be resolved by normalisation of plasma calcium and phosphate homeostasis independently of vitamin D activity. The well characterised endocrine pathway of vitamin D metabolism generates plasma 1,25-dihydroxyvitamin D and these endocrine activities are solely responsible for vitamin D regulating plasma calcium and phosphate homeostasis and protection against osteomalacia. In contrast, a large body of clinical data indicate that an adequate serum 25-hydroxyvitamin D level improves bone mineral density protecting against osteoporosis and reducing fracture risk. Recent research demonstrates that the three major bone cell types have the capability to metabolise 25-hydroxyvitamin D to 1,25-dihydroxyvitamin D to activate the vitamin D receptor and modulate gene expression. Dietary calcium intake interacts with vitamin D metabolism at both the renal and bone tissue levels to direct either a catabolic action on bone through the endocrine system when calcium intake is inadequate or an anabolic action through a bone autocrine or paracrine system when calcium intake is sufficient. © 2013 Elsevier B.V. Source | <urn:uuid:6bb8d5c8-cf10-43ad-9b37-d32c6c00f96a> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/chemical-pathology-directorate-and-hanson-institute-361656/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.81719 | 404 | 2.546875 | 3 |
While a lot of the talk surrounding productivity in the HPC space has to do with parallel programming models and language compilers, for coders in the trenches, the most important productivity tool is their text editor. A good source code text editor can make even a poor programming environment seem tolerable. If you doubt the significance of a developer’s relationship with their editor, just suggest he or she ditch their beloved Emacs or vi for Brand X.
Bespin already seems to be getting a lot of praise in the press, and with Mozilla behind it, this may be a tool with a real future. The obvious advantage of coding in the cloud is that you’ve freed yourself of maintaining your editor tools — licenses, updates, custom configurations, etc. – on all your computers. Also, the online nature of the tool makes real-time collaboration of source code a no-brainer, although this capability doesn’t exist in the prototype.
The developers also paid a good deal of attention to the user interface and strived to make it as intuitive as possible. The fact that they used canvas to implement the UI graphics enabled them to incorporate a lot of intelligence in the layout and navigation of the source code files. A nice video demonstration of Bespin from two of the developers is provided below.
If you want to give Bespin a spin, you can register at https://bespin.mozilla.com/. But since the tool uses HTML 5, you’ll need to install Firefox 3.0 or WebKit Nightly to test it out. | <urn:uuid:fb4af196-c25b-4317-86cd-aa62baeea1d1> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/02/17/programming_in_the_cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93705 | 321 | 2.5625 | 3 |
IPv6 Anycast Address
Anycast is basically the same on IPv4 and IPv6 so this part below refers to both.
As the name says it’s an address that can exist more than once anywhere in the network. If we look public IP space that’s available on the Internet, anycast IPv6 address can exist on multiple places all over the Internet. This kind of address is basically enabling us to have servers and services physically closer to us as they would be if the unicast address was used. It this way we are able to have, for example, a server with one anycast IP address somewhere in US and other server with same service and same IP address somewhere is Europe. If I am in Europe, closest server with that IP address will handle my request. Without to much additional technology solutions my service will automatically be resolved to server who is closer to me and it will probably also improve service security and speed. All that is called load balancing and can be accomplished by different networking solutions and technology designs but anycast addressing is basically the simplest method possible to enable this kind of “geo” localization for a service.
How anycast works?
As said before anycast addresses are called anycast because one address can be assigned to multiple interfaces inside the same network. Packets that are going to anycast IP destination address will be caught by nearest device. Today’s anycast IP addresses are used on some special routers and the most important thing that runs them is Global Internet’s DNS root servers service. Google also rely on anycast for all his different solutions and apps like gmail, search and so on.
If you imagine how DNS works you can see why anycast would be used on root DNS servers. You can then have one copy of the same DNS server on each continent. BGP will by himself bring your DNS query to server near you and in that way save you some delay time and bandwidth usage and thus some time.
In IPv6 world, what changes?
IPv6 had from the development phase the intention to support anycast just like described from RFC 1546. (RFC 1546 mentioned below in history section). IPv6 anycast has no special prefix and IPv6 anycast addresses are basically normal global unicast addresses. Each IPv6 configured interface on some device needs to be configured with one anycast address.
There is a big chance that anycast interfaces have no defined region, in that case every anycast entry would need to be propagated throughout the whole Internet. That would probably not scale well so support for that kind of global anycast addresses will be more or less impossible to handle.
If there are regions defined, inside the region devices with same anycast address will only need a separate entry in the routing table.
The only this that we need to know here and that can possibly be an issue is that anycast gives us no way to choose which device of more with the same anycast IP we will get our packet sent. The decision is done by the routing protocol and it is basically random router or the fastest or physically closest one. If we are sending multiple packets to an anycast address, the packets can also arrive at different destinations. If our communication is using a series of requests and replies this can be a problem. There is the thing with packets that are fragmented too. Fragments can be sent to different destinations and lost because they will not have a chance to get assembled back to real packets.
Subnet-router anycast address
The subnet-router anycast address is a special type of IPv6 anycast address that is required. That means that every router needs to support the subnet-router anycast address for all locally connected subnets on their interfaces. The important fact is that some data sent to subnet-router anycast address will be delivered to only one router of that subnet. Subnet-router anycast address is like a regular unicast address with a prefix specifying the subnet and special identifier bite range set to all zeros.
The “subnet prefix” in anycast address it identifies a particular link. To get this you can imagine a LAN segment with three routers that are having anycast address and they are all three gateways for going out to the Internet. If we have an applications which needs to communicate with any one of the routers available in order to have a closed connection state we should use subnet-router anycast address so that we are sure that all the communication pieces will go to the same router untill the specific communication is closed. In normal anycast configuration every piece can select different (from three) same anycast router interfaces.
Some more details and history stuff
We can deduce from the upper text that anycast is basically the simplest way to implement redundancy and load balancing in situations where more devices are running same services.
Anycast was actually present before the IPv6 so we cannot say that is IPv6 technology only. It’s defined before that in RFC 1546. It was years ago in 1993 and it was experimental IPv4 technology. In that specification it is intended to use special prefix for anycast so it would be recognized based on the prefix. Something like special prefix 22.214.171.124/4 for Multicast or 127.0.0.0/8 for Loopback addresses to the local host.
Anycast was intended to be good implementation of redundancy and load balancing for DNS and HTTP. Although the idea was practical and very innovative, anycast was not implemented as it was described in RC 1546. Shared unicast address way of implement redundancy and load balancing was used at the end.
Shared unicast address in IPv4
This is something similar but in IPv4 it removes the requirement of any new prefix or TCP changes. It’s implemented by assigning normal unicast address on more interfaces and then creating multiple entries in the routing table. Whole network on L3 and L4 thinks that this address is globally unique.
There are some exceptions but root DNS servers across the Internet are set up with shared unicast addresses and this is working fine. | <urn:uuid:5bc8156c-ddef-4ca5-a291-b1d41ac6fc0b> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2014/ipv6-anycast-addresses | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94487 | 1,265 | 2.71875 | 3 |
Agency: Cordis | Branch: FP7 | Program: CP-IP-SICA | Phase: OCEAN.2011-4 | Award Amount: 11.32M | Year: 2012
Environmental policies focus on protecting habitats valuable for their biodiversity, as well as producing energy in cleaner ways. The establishment of Marine Protected Area (MPA) networks and installing Offshore Wind Farms (OWF) are important ways to achieve these goals. The protection and management of marine biodiversity has focused on placing MPAs in areas important for biodiversity. This has proved successful within the MPAs, but had little impact beyond their boundaries. In the highly populated Mediterranean and the Black Seas, bordered by many range states, the declaration of extensive MPAs is unlikely at present, so limiting the bearing of protection. The establishment of MPAs networks can cope with this obstacle but, to be effective, such networks must be based on solid scientific knowledge and properly managed (not merely paper parks). OWF, meanwhile, must be placed where the winds are suitable for producing power, but they should not have any significant impact on biodiversity and ecosystem functioning, or on human activities. The project will have two main themes: 1 - identify prospective networks of existing or potential MPAs in the Mediterranean and the Black Seas, shifting from a local perspective (centred on single MPAs) to the regional level (network of MPAs) and finally the basin scale (network of networks). The identification of the physical and biological connections among MPAs will elucidate the patterns and processes of biodiversity distribution. Measures to improve protection schemes will be suggested, based on maintaining effective exchanges (biological and hydrological) between protected areas. The national coastal focus of existing MPAs will be widened to both off shore and deep sea habitats, incorporating them into the networks through examination of current legislation, to find legal solutions to set up transboundary MPAs. 2 - explore where OWF might be established, producing an enriched wind atlas both for the Mediterranean and the Black Seas. OWF locations will avoid too sensitive habitats but the possibility for them to act as stepping-stones through MPAs, without interfering much with human activities, will be evaluated. Socioeconomic studies employing ecosystem services valuation methods to develop sustainable approaches for both MPA and OWF development will also be carried out, to complement the ecological and technological parts of the project, so as to provide guidelines to design, manage and monitor networks of MPAs and OWF. Two pilot projects (one in the Mediterranean Sea and one in the Black Sea) will test in the field the assumptions of theoretical approaches, based on previous knowledge, to find emerging properties in what we already know, in the light of the needs of the project. The project covers many countries and involves researchers across a vast array of subjects, in order to achieve a much-needed holistic approach to environmental protection. It will help to integrate the Mediterranean and Black Seas scientific communities through intense collective activities, combined with strong communications with stakeholders and the public at large. Consequently, the project will create a permanent network of excellent researchers (with cross fertilization and further capacity building) that will also work together also in the future, making their expertise available to their countries and to the European Union.
Agency: Cordis | Branch: FP7 | Program: CSA-SA | Phase: KBBE.2013.1.2-11 | Award Amount: 2.15M | Year: 2014
The overall vision of the OrAqua project is the economic growth of the organic aquaculture sector in Europe, supported by science based regulations in line with the organic principles and consumer confidence. OrAqua will suggest improvements for the current EU regulatory framework for organic aquaculture based on i) a review of the relevant available scientific knowledge, ii) a review of organic aquaculture production and economics, as well as iii) consumer perceptions of organic aquaculture. The project will focus on aquaculture production of relevant European species of finfish, molluscs, crustaceans and seaweed. To ensure interaction with all relevant stakeholders throughout the project a multi stakeholder platform will be established. The project will assess and review existing knowledge on fish health and welfare, veterinary treatments, nutrition, feeding, seeds (sourcing of juveniles), production systems, including closed recirculation aquaculture systems (RAS), environmental impacts, socio-economic and aquaculture economic interactions, consumer aspects, legislations and private standards for organic aquaculture. The results will be communicated using a range of media and techniques tailored to involve all stakeholder groups. Further, Multi Criteria Decision Analysis (MCDA) and SWOT analysis will be used to generate relevant and robust recommendations. A wide range of actors from several countries will participate and interact through a participatory approach. The 13 OrAqua project partners form a highly qualified and multidisciplinary consortium that includes four universities, five aquaculture research institutes, three research groups in social science, a fish farmer organisation, a fish farmer and two organic certification/control bodies. The main outcomes of the project will be recommendations on how to improve the EU regulation, executive dossiers and a Policy Implementation Plan (PIP). Further the project will deliver recommendations on how to enhance economic development of the European organic aquaculture sector. | <urn:uuid:8e720466-1cdd-4681-9c9b-ac8c2d40c12d> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/coispa-tecnologia-ricerca-2430897/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00259-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920195 | 1,080 | 3.046875 | 3 |
Data modeling for the masses
Site offers data sets for anyone who wants them
- By Henry Kenyon
- Nov 26, 2010
The Internet is a vast sea of data, and one of the major information streams contributing to it comes from the U.S. government. Much of this material is in the form of datasets, such as regional economic figures, weather charts and geological survey data.
Last year, the Obama administration launched the Data.gov site as a part of its Open Government Initiative. The goal of the site is to make it much easier for the public to access, download and use federal government datasets. Data.gov provides descriptions of government datasets, information about how to access the information and the tools to make the best use of the datasets.
To meet its goal of promoting public participation and collaboration in government, the site provides downloadable federal datasets to build applications, conduct analysis and perform research. The site features three catalogs of downloadble datasets organized under raw data, tools and geodata. Users can scroll through the catalog lists and select searches by subject and/or federal agency. There is also a tool for ranking the usefulness of a given metadata set. Searches can also be conducted by keywords.
For an example of what a completed dataset looks like, Data.gov also features a page displaying 51 complete datasets and tools, including a U.S. Geological Survey Global Visualization Viewer for Aerial and Satellite Data and FBI-compiled national crime statistics for 2007.
The site’s newest tool is the GEO Viewer, an interactive mapping application designed to allow users to preview geospatial data available through Data.gov’s catalogs. The tool lets users view datasets on an interactive map, overlay datasets with other datasets and explore the underlying information.
Data.gov is also actively involved in the International Open Government Data Conference that is wrapping up today in Washington D.C. As the New York Times reported, at the heart of Data.gov’s efforts is a team of data curators at the Rensselaer Polytechnic Institute. Led by James Hendler, the team is responsible for the datasets featured on the website. Hendler, Tetherless World Professor of Computer and Cognitive Science, and the Assistant Dean for Information Technology and Web Science at R.P.I, is one of the conference speakers.
Hendler told the Times that one of the goals behind Data.gov’s efforts is to make the design and building of interactive data sites as easy as setting up a website. He noted that while the capability is not quite there yet, it is only a few years away. | <urn:uuid:ca2b2474-fde7-4ef6-9929-bf8ed4f441d5> | CC-MAIN-2017-04 | https://gcn.com/articles/2010/11/26/federal-site-offers-data-modeling-for-the-masses.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918022 | 541 | 2.53125 | 3 |
NASA again stepped up its plan to mitigate the asteroid threat to Earth by announcing two significant new programs that call on a multitude of scientists and organizations to help spot, track and possibly alter the direction of killer space rocks.
First off, the agency announced the latest in its series of Grand Challenges where it dares public and private partnerships to come up with a unique solution to a very tough problem, usually with prize money attached for the winner. In the past NASA has sponsored such challenges regarding green aircraft and Mars/Moon rovers.
[RELATED: The sizzling world of asteroids]
Specifics of this asteroid challenge were spotty, but NASA said it will be a large-scale project "focused on detecting and characterizing asteroids and learning how to deal with potential threats. We will also harness public engagement, open innovation and citizen science to help solve this global problem," according to NASA Deputy Administrator Lori Garver. The challenge will involve a variety of partnerships with other government agencies, international partners, industry, academia, and citizen scientists, NASA said.
In combination with the Grand Challenge, NASA put out a request for information (RFI) that invites industry and potential partners to offer ideas on accomplishing NASA's goal to locate, redirect, and explore an asteroid, as well as find and plan for asteroid threats. The RFI is open for 30 days, and responses will be used to help develop public engagement opportunities and a September industry workshop.
The National Aeronautics and Space Administration (NASA) is seeking information for system concepts and innovative approaches for the agencies recently announced Asteroid Initiative. That mission involves redirecting an asteroid and parking it near the moon for study, possibly by 2021, as well as an increased study of how we can better defend the against the threat of catastrophic asteroid collisions, NASA said.
The RFI is looking for a variety of input, including:
- Asteroid Observation: NASA is interested in concepts for augmenting and accelerating ground and space-based capabilities for detecting all near-Earth asteroids (NEAs) - including those less than 10 meters in size that are in retrievable orbits - determining their orbits, and characterizing their shape, rotation state, mass, and composition as accurately as possible.
- Asteroid Redirection Systems: NASA is interested in concepts for robotic spacecraft systems to enable rendezvous and proximity operations with an asteroid, and redirection of an asteroid of up to 1,000 metric tons into translunar space. a. Solar electric propulsion system concepts available for launch as early as 2017, but no later than June 2018, that have the following general characteristics: Capable of launch on a single Space Launch System (SLS) or preferably a smaller launch vehicle, as part of the complete asteroid redirect vehicle, which includes power generation, propellants, spacecraft bus, and asteroid capture system. Propulsion system power output approximately 40 kW to 50 kW. Deliver thrust required to propel a robotic spacecraft to a target near-Earth asteroid and redirect the captured asteroid to a distant lunar retrograde orbit.
- Integrated sensing systems to support asteroid rendezvous, proximity operations, characterization, and capture. The sensing systems should be capable of characterizing the asteroid's size, shape, mass and inertia properties, spin state, surface properties, and composition. Some of the same sensors will also be needed in closed-loop control during capture.
- Refinements of the Asteroid Redirect Mission concept such as removing a piece (boulder) from the surface of a large asteroid, and redirecting the piece into translunar space, and other innovative approaches. For a description of early asteroid redirect approaches, see the Keck Institute for Space Studies Asteroid Retrieval Feasibility Study on the references website listed later in this RFI.
- Applications of satellite servicing technology to asteroid rendezvous, capture, and redirection, and opportunities for dual use technology development are also of interest.
- Asteroid Deflection Demonstration: NASA is interested in concepts for deflecting the trajectory of an asteroid using the robotic Asteroid Redirection Vehicle (ARV) that would be effective against objects large enough to do significant damage at the Earth's surface should they impact (i.e. > 100 meters in size). These demonstrations could include but not limited to: a. Use of the ARV to demonstrate a slow push trajectory modification on a larger asteroid. b. Use of the ARV to demonstrate a "gravity tractor" technique on an asteroid. c. Use of ARV instrumentation for investigations useful to planetary defense (e.g. sub-surface penetrating imaging) d. Use of deployables from the ARV to demonstrate techniques useful to planetary defense (e.g. deployment of a stand alone transponder for continued tracking of the asteroid over a longer period of time).
- Asteroid Capture Systems: NASA is interested in concepts for systems to capture and de-spin an asteroid with the following characteristics: a. Asteroid size: 5 m < mean diameter < 13 m; aspect ratio < 2/1 b. Asteroid mass: up to 1,000 metric tons c. Asteroid rotation rate: up to 2 revolutions per minute about any axis or all axes. d. Asteroid composition, internal structure, and physical integrity will likely be unknown until after rendezvous and capture.
- Crew Systems for Asteroid Exploration: NASA is interested in concepts for lightweight and low volume robotic and extra-vehicular activity (EVA) systems, such as space suits, tools, translation aids, stowage containers, and other equipment, that will allow astronauts to explore the surface of a captured asteroid, prospect for resources, and collect samples.
Check out these other hot stories: | <urn:uuid:c2473ac9-aad8-4e53-84c5-d5a781419b39> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224817/security/nasa-issues-grand-challenge--calls-for-public--scientific-help-in-tracking-threatening-aste.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00471-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928028 | 1,159 | 3.328125 | 3 |
How data center cooling systems have evolved
Thursday, Jan 5th 2017
Data centers have changed a lot in the last few decades. A big reason for their modernization has been the sharp uptick in IP network traffic. Networking vendor Cisco has estimated that the amount of traffic passing through data centers will triple between 2014 and 2019. A big chunk of this increase will be from public and private clouds, which offer the on-demand infrastructure that enterprises, governmental organizations and others require to run their most demanding workloads.
"As data centers evolved, power usage effectiveness became an increasingly important metric."
Facilities have been modernized with new hardware and software, updated floor layouts and refreshed cooling systems. The changes in cooling infrastructure have been particularly notable over the years. With more traffic coming into data centers, server racks and other pieces of equipment are under increasing pressure, which leads to rising temperatures. Left unchecked, this heat can cause mechanical failure.
Understanding the evolution of data center cooling
The earliest cooling technologies in data centers were optimized for density. They were mostly similar to corporate cooling infrastructures, but were modified to fit into the particular dimensions of a data center. To meet the standards set by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, many facilities included massive air handlers capable of generating sufficient airflow across their vast expanses.
As data centers evolved, power usage effectiveness became an increasingly important metric. The pursuit of PUE led to the introduction of innovations such as variable-speed fans, along with liquid-cooling alternatives to air conditioning as well as advanced environmental monitoring systems, such as the solutions available from ITWatchDogs. These monitors help keep tabs on a wide range of conditions, including airflow, humidity and electrical current. Through timely notifications, they help technicians spot potential issues early and take appropriate actions.
The most recent movement in data center cooling has been the use of indirect air cooling. Data center giants such as Facebook have been at the forefront of this movement, which has also spilled over many other data center operators. The idea is to take advantage of outside air – often in cool locales such as Sweden or the various Rocky Mountains states in the U.S. – to keep infrastructure cool and reduce the strain on mechanical systems, which have many moving parts and can easily fail.
"Today we are in the middle of the third generation of data center cooling – environmental," explained James Leach of RagingWire Data Centers, in a round table discussion at Data Center Frontier. "Our goal is to minimize the environmental impact of the high-capacity, high-efficiency cooling systems deployed in generations one and two. This is the generation of economization. The idea is to use outside air when possible to keep the data center floor cool."
It is essential for data centers to be environmentally friendly and energy-efficient. Using environmental solutions from ITWatchDogs in conjunction with well-designed cooling systems and floor plans, data center operators can ensure that they minimize downtime and sustain even the most demanding workloads around the clock. | <urn:uuid:00f163ed-9055-4584-bec3-660cf258d886> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-data-center-cooling-systems-have-evolved-40149344 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00195-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952388 | 619 | 2.71875 | 3 |
Send a manned mission to Mars. Create electric conversion kits for gas vehicles. Cure addictions and eradicate cancer. However futuristic, lofty or realistic the aforementioned goals are, the government is seeking such ambitious science and technology notions.
Incorporating Twitter, Facebook and e-mail, the White House Office of Science and Technology Policy (OSTP) and the National Economic Council recently launched a program dubbed "Grand Challenges of the 21st Century," which aims to solicit the knowledge and future-thinking ways of the public.
Specifically policymakers want ideas on the following:
In concert with Expert Labs -- a project of the American Association for the Advancement of Science -- the OSTP will make responses publicly available, said Expert Labs project director Gina Trapani. "The data set will be available for public analysis, and many academics and technologists have indicated an interest in creating presentations and visualizations of it," she wrote.
Trapani also heads development of Think Tank, a technology platform that is aiding the White House in gathering responses to the challenge. While there isn't a final number (the deadline for submissions is Thursday, April 15), Trapani said there have been several hundred responses so far.
"I am delighted by the level of enthusiasm and excitement that has recently grown around the concept of grand challenges, and the chance to build on some fantastic work that has already been done," OSTP Deputy Director for Policy Thomas Kalil wrote in a February blog.
In what appears to be a last-minute push for more public feedback, the OSTP announced Tuesday, April 13, the use of Twitter as a medium to send ideas by replying to @whitehouse and including the #whgc hashtag. Several phone calls to an OSTP spokesman went unanswered.
Some of the ideas flowing in via Twitter include, "Solar power on all homes," "Internet everywhere, fastest in the world," "Honor NASA's goals and make them a reality, improve our planet, explore the stars, and inspire Americans to higher dreams."
Of course, some ideas are more practical than others: "Solar-powered Star Trek replicators to feed the world." | <urn:uuid:4d104ec8-a516-4188-ba56-a70ee87274cc> | CC-MAIN-2017-04 | http://www.govtech.com/technology/White-House-Musters-Grand-Science-and.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00214-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945742 | 438 | 2.734375 | 3 |
Definition: A recursive algorithm, especially a sort algorithm, where dividing (splitting) into smaller problems is quick or simple and combining (merging) the solutions is time consuming or complex.
Generalization (I am a kind of ...)
divide and conquer.
Aggregate parent (I am a part of or used in ...)
merge sort, strand sort, insertion sort.
See also hard split, easy merge.
Note: Although the notion is wide spread, I first heard this term from Doug Edwards about 1994.
Called "Divide form" of using divide and conquer in [ATCH99, page 3-3].
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 27 October 2005.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "easy split, hard merge", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 27 October 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/easySplitHardMerge.html | <urn:uuid:0b8802c4-30e6-432b-98f5-4055a8422958> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/easySplitHardMerge.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.832467 | 254 | 3.34375 | 3 |
An experiment using IoT to reduce water consumption in agriculture has seen a decrease by 75 percent in the water used to grow avocados.
The research was carried out by Kurt Bantle, farmer and senior solution manager at Spirent Communications. He has 900 young avocado trees planted in his “back garden” in Southern California. He decided to experiment into how avocados could be grown using less water through soil moist monitoring and automated irrigation.
Bantle divided his farm into 22 irrigation blocks and inserted two soil moisture measurement units into each block. The units contain a LoRa unit for narrow-band data communication to a LoRa gateway which has broadband cellular uplink connectivity functionality.
The gateway also contains a Spirent partner Oasis reprogrammable SIM which becomes the enabler in remote water provisioning. All soil moisture data is collected from the avocado trees into a cloud and visualised by a presentation layer.
When a tree needs to be watered, the solution turns the sprinklers on automatically to get the correct level of soil moisture for each tree. It then turns them off when the correct moisture levels are reached. The connected trees are monitored constantly day and night.
“Avocado trees typically take 4-acre feet (1-acre foot = 326000 gallons) of water per acre per year. This is not only to supply the needed water but also to leach the salts which build up in the soil,” said Bantle.
“The soil moisture sensors let me drastically reduce water usage by telling me when to water and how deep to water to push the salts past the bulk of the rooting zone. The majority of the roots are in the top eight inches of soil so there is a sensor there and one at 24 inches so I can see when I’ve watered deep enough to get the salts out of the rooting zone,” added Bantle.
“The case study showed water usage reduction by 75 percent, but the usage will climb as the trees get bigger. The goal is to reach a 50 percent reduction of water usage when fully grown. By keeping the salts in check along with keeping nutrients supplied, stress on the trees is reduced and they are able to have better crop production,” said Bantle.
IoT has its downsides too
The downside for Bantle in harnessing the power of IoT to reduce water consumption was that he was placed under state surveillance for meter tampering.
Corry Brennan, Globalstar regional sales manager at Simplex, told Internet of Business that IoT in agriculture will become more popular.
“The advantages of the ability to remotely track, monitor and then report on the condition of herd, couple with the ability to remotely gauge various other dependant factors such as soil quality introduces huge efficiencies for the modern farmer, they can be alerted to various scenarios in advance and save both time and money by not having to patrol and survey, using satellite technology to receive various information in a proactive fashion,” he said. | <urn:uuid:2e50773c-b883-4a76-b89e-d41b32e5701e> | CC-MAIN-2017-04 | https://internetofbusiness.com/farmer-uses-iot-slash-water-consumption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954364 | 613 | 3.015625 | 3 |
Imagine you’re a retailer and you’re trying to plan your next line of products. What information do you need to know? A useful way to look at it is by exploring attributes—the variables of the product and the customer base. Do wealthy suburban women prefer blue or green purses, and do they like them to be traditional or fashion-forward? Which purses do you already carry in blue or green, and just what the heck is meant by traditional, fashion-forward, and everything in between?
The value of attribute analysis expands across industries. An entertainment company—say an HBO or a Netflix—needs to know what current movies and TV shows its customers like so it can better decide which future movies and shows to buy or create. Do they prefer longer movies or shorter ones, happy endings or sad ones, scary or comedic or dramatic themes? If you want to know these things, it’s very useful to know the attributes of entertainment offerings.
A roadblock to use is that, in many industries, there is no widely accepted taxonomy of product attributes, and many manufacturers don’t classify their products. So companies that want to do some analytics need to create their own.
Apparel manufacturers, for example, don’t classify their products in any systematic way. So leading retailers are spending considerable time and effort classifying product attributes on their own. Zappos, an Amazon subsidiary specializing in shoes and leather goods, involves three different departments in product classification so that it can optimize customers’ searches and create the most effective offers. The classification involves product type, style, color, pattern, brand, and price. This can get complex: Customers can choose from more than 40 different material patterns—pearlized, patchwork, pebbled, pinstripes, paisley, polka dot, plaid—alone. You need to know that a customer has bought patchwork-patterned goods in the past to be comfortable recommending them in targeted offers.
In entertainment, the king of attribute analysis is Netflix. There is a decent classification system available from IMDb, but Netflix thought it could derive competitive advantage from a more detailed classification structure—with almost 80,000 categories of movie types, as well as their actors, directors, and so forth. The company uses human classifiers to do this work, and has a 36-page guideline to attribute classification.
Netflix, of course, uses the attributes for its movie recommendation engine, but it doesn’t stop there. The company has also used the attributes to predict commercial success, classifying attributes of shows before creating them. In doing so, Netflix has been able to substantially improve its success rate in developing and buying entertainment products.
For example, in the case of the very popular series House of Cards, Netflix increased the likelihood of its commercial success by classifying its likely competitors and the popularity of its actors and director. The closest competitor was a UK series with the same name. Kevin Spacey, a popular actor in Netflix shows, plays the evil president in the show. David Fincher is the producer. Netflix observed high correlations between all three attributes and commercial success.
This approach works. In addition to House of Cards, Netflix has produced many other original shows that gained loyal viewers, including Orange Is the New Black and Unbreakable Kimmy Schmidt. More than 90% of Netflix’s original shows were renewed after their first seasons—well over the recent 35% success rate of the company’s TV network competitors.
Attribute classification and analysis is now being used to predict the success of another entertainment category—the novel. A new book, The Bestseller Code, describes an algorithmic approach to identifying best-selling novels. The approach, developed by a professor and an editor, employs 2,800 attributes, including theme, style, vocabulary, and punctuation. The developers claim an 80% level of prediction accuracy.
So if your company’s promotions aren’t succeeding, or if your new product or service success rate is low, take a cue from these aggressive adopters of attribute-based analytics. Classify some of the key attributes of your past and current products or services. Figure out what customers prefer what attributes. Or analyze the past relationship between those attributes and the commercial success of the offerings. You’ll have a predictive model that should give you some sense of how likely a customer is to buy a particular offering, or how likely a new product or service is to be successful. With these types of attributes and analytics you’ll better understand your own offerings and how customers will feel about them.
Tom Davenport, the author of several best-selling management books on analytics and big data, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Initiative on the Digital Economy, co-founder of the International Institute for Analytics, and an independent senior adviser to Deloitte Analytics. He also is a member of theData Informed Board of Advisers.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise. | <urn:uuid:2a52d493-8cac-417c-b5a9-7ebe33b6e313> | CC-MAIN-2017-04 | http://data-informed.com/the-critical-importance-of-classifying-attributes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940169 | 1,054 | 2.5625 | 3 |
Later this week at the Usenix security conference in Austin, a team of researchers from the University of Birmingham and the German engineering firm Kasper & Oswald plan to reveal two distinct vulnerabilities they say affect the keyless entry systems of an estimated nearly 100 million cars. One of the attacks would allow resourceful thieves to wirelessly unlock practically every vehicle the Volkswagen group has sold for the last two decades, including makes like Audi and Skoda. The second attack affects millions more vehicles, including Alfa Romeo, Citroen, Fiat, Ford, Mitsubishi, Nissan, Opel, and Peugeot.
The researchers are led by University of Birmingham computer scientist Flavio Garcia, who was previously blocked by a British court, at the behest of Volkswagen, from giving a talk about weaknesses in car immobilisers.
At the time Volkswagen argued that the research could "allow someone, especially a sophisticated criminal gang with the right tools, to break the security and steal a car." That researchers finally got to present their paper a year ago, detailing how the Megamos Crypto system – an RFID transponder that uses a Thales-developed algorithm to verify the identity of the ignition key used to start motors – could be subverted.
The team's latest research doesn't detail a flaw that in itself could be exploited by car thieves to steal a vehicle, but does describe how criminals located within 300 feet of the targeted car might use cheap hardware to intercept radio signals that allow them to clone an owner's key fob.
The researchers found that with some "tedious reverse engineering" of one component inside a Volkswagen’s internal network, they were able to extract a single cryptographic key value shared among millions of Volkswagen vehicles. By then using their radio hardware to intercept another value that’s unique to the target vehicle and included in the signal sent every time a driver presses the key fob’s buttons, they can combine the two supposedly secret numbers to clone the key fob and access to the car. "You only need to eavesdrop once," says Birmingham researcher David Oswald. "From that point on you can make a clone of the original remote control that locks and unlocks a vehicle as many times as you want."
Sounds to me like it's time to turn to the car manufacturers to ask what on earth they are going to do to fix the millions of potentially vulnerable vehicles they have sold in the last couple of decades.
Read more, including the researcher's paper, on Wired. | <urn:uuid:aade6f32-36cb-4288-ad45-ad644937066b> | CC-MAIN-2017-04 | http://www.cso.com.au/vendor_blog/?page=134 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00564-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952554 | 507 | 2.515625 | 3 |
Curling — similar to shuffleboard, but on ice — is a sport most people only see on TV during the Winter Olympics. But in Blaine, Minn., the sport is in high demand, so much so that the city could use a dedicated curling facility.
Ironically hybrid geothermal technology could make the icy endeavor a reality.
The private, nonprofit Fogerty Arena originally introduced plans to build a curling facility to the Blaine City Council in 2008, but the facility wasn’t built because the arena couldn’t finance the project, according to Dave Clark, councilman and the council’s arena liaison.
Mark Clasen, the arena’s manager, reintroduced the plan to the City Council after learning a similar project using hybrid geothermal technology in Brooklyn Park, Minn. Last fall, the Brooklyn Park project got Clasen’s attention. Over the course of the winter, Clasen and his staff have been tracking the project’s positive results.
“The improvements will use geothermal heat from the city’s water system to efficiently cool the rinks and heat portions of the building,” according Brooklyn Park’s official government website.
Clark said although Fogerty Arena’s project couldn’t get off the ground in 2008, the arena may qualify for grants by using the hybrid geothermal technology.
“What is different with this opportunity is they’ve come across some geothermal heating technology that basically allows them to draw heat or cooling from the city water tower located across the street,” Clark said. “By tapping that source of geothermal energy, it’s possible that the building will qualify for grants and different programs that might put them over the top in terms of getting the funding to work out.”
Clasen said he would like to combine the curling facility project with renovations to Fogerty’s 30-year-old south rink. By combining the two projects, there would be only one refrigeration room for the arena and curling facility, instead of constructing two rooms. From the refrigeration room, the arena could access the city’s water tower located 150 feet away.
“We’re not using the water at all,” Clasen said. “It’s a closed loop — a percentage of the water is being routed through the refrigeration room where we’re literally either adding or subtracting energy from that water as needed.”
Clasen said renovating the south arena and building the curling facility could cost between $600,000 to $1 million.
The City Council has tentatively approved the arena’s use of the city’s water tower, but first the city plans to do research and due diligence to ensure the approach won’t negatively impact water quality.
“First and foremost, it’s a city well,” Clark said. “It’s going to provide clean drinking water and anything that prevents that operation from happening is obviously not going to be allowed.” | <urn:uuid:4a3ba4eb-ec44-4d3b-bbd4-681543e03429> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Ice-Rink-City-Hot-Water.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00472-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94188 | 650 | 2.515625 | 3 |
Wi-Fi, also spelled Wifi or WiFi, is a term for certain types of wireless local area network (WLAN) technology that uses 2.4 GHz UHF and 5 GHz SHF radio waves to provide wireless high-speed Internet and network connections. Wi-Fi is a trademarked name but not an acronyms. Many devices, such as computers, smartphones, digital cameras etc. can connect to the Internet or communicate with one another wirelessly within a particular area where there is a Wi-Fi access point. The Wi-Fi Alliance, the organization that owns the Wi-Fi registered trademark term specifically defines Wi-Fi as any “wireless local area network (WLAN) products that are based on the Institute of Electrical and Electronics Engineers’ (IEEE) 802.11 standards”. Only Wi-Fi products that complete Wi-Fi Alliance interoperability certification testing successfully may use the “Wi-Fi CERTIFIED” trademark. As a popular WLAN technology, Wi-Fi is widely used in businesses, campus, and homes as well as many airports, hotels, and fast-food restaurants.
Convenience – Users can access network resources from nearly any convenient location within their home, office or any other places.
Mobility – People can access the internet even outside their normal work environment as most chain coffee shops, restaurants or public places offerring theircustomers a wireless connection to the internet at little or no cost.
Lower Cost – Wi-Fi allows cheaper deployment of local area networks (LANs). The price of chipsets for Wi-Fi continues to drop, making it an economicalnetworking option included in even more devices.
Expandability – Wi-Fi can serve a suddenly-increased number of clients with the existing equipment. In a wired network, additional clients would require additionalwiring.
Security – The security of Wi-Fi is a main problem which people more concern. Though there is an encryption key to keep the Wi-fi safe, it has still been utilized bysome unwanted users.
Range – The Wi-Fi range is limited. To obtain additional range, repeaters or additional access points will have to be purchased. Costs for these items can add upquickly.
Interference – Wi-Fi signals are subject to a wide variety of interference, as well as complex propagation effects that are beyond the control of the networkadministrator.
Speed – Comparied to the wired networks, the speed of Wi-Fi (typically 1-54 Mbps) is far slower than the slowest common wired networks (100Mbps up to severalGbps). | <urn:uuid:0cb8e1f0-130b-4f9f-8a72-52c662d44911> | CC-MAIN-2017-04 | http://www.fs.com/blog/wifi.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923223 | 535 | 3.359375 | 3 |
The Bad: Continued
- 6. The new Finder defeats spatial orientation.
- Explanation: The Finder has been the single most important and influential application in the Mac OS user experience. Since 1984, the foundation of the classic Mac OS Finder has been the concept of spatial orientation. To quote Bruce Tognazzini again, "spatial orientation" includes "not only the nesting of folders within folders, but being able to recognize a window based on its size, shape, and the layout and coloring of the objects within."This idea is directly analogous to how people organize and find items in real life. If you fold up a newspaper and place it on the upper-left corner of your desk when you arrive at the office, three hours later when you're about to go on your lunch break you expect to look at the upper-left corner of your desk and find a folded newspaper.Another example: every house has an organizer, someone who cleans up junk drawers, arranges canned goods in cabinets, files important documents, etc. If you are not that person but you know that the scissors are "in the kitchen," you may have some difficulty finding them. But ask "the organizer" where the scissors are and she (it's usually a "she" in my experience) will immediately say "in the third drawer from the left, in the back of the drawer, under the tape measure." Why does she know this? Because that's where she put it, and in a sane world, things stay where you put them.Now then, there's no reason that computer interfaces must exactly duplicate real life. In fact, that is often exactly the wrong thing to do. (See the "thumb-wheel" volume control on the QuickTime 4 Player for an example.) But in the case of spatial orientation in the Finder, it has proven to be a resounding success.That's not to say that there aren't problems. As the number of items the user is expected to manage has increased over the years, spatial orientation in the Finder has become strained. To help manage the increased complexity, Apple has steadily improved the Finder, adding pop-up tabbed folders, spring-loaded folders, a customizable Apple Menu, new views and sorting patterns, and so on. But the concept of spatial orientation has remained firmly entrenched: windows and icons go where you move them and stay where you put them.With Mac OS X, Apple has decided that spatial orientation is no longer up to the task of helping users manage the complexity of the modern computer. They added a file browser interface inherited from NeXT and have steadily removed the traditional spatial functionality. The Mac community has complained, and Apple has responded by putting a few minor spatial orientation feature back in the Finder: the default action when dragging to the desktop now matches the behavior in classic Mac OS, double-clicking a folder can optionally open a new window, removable media may optionally appear on the desktop, etc.
The problem is, the OS X Finder as of Public Beta is still predominantly non-spatial. Even with all the classic-like options enabled, the behavior is still not even close to the spatial orientation of the classic Mac OS Finder. Windows try to remember their view modes and screen positions, but this feature is easily, frequently, and often accidentally defeated through the use of basic Finder functionality. For example, the folder hierarchy pop-up menu and the back button in the Finder toolbar always replace the contents of the current window when used, regardless of your preference settings for the double-click behavior. An example scenario:
- Your home directory is open in a square icon-view window in the upper-right of your screen.
- Your documents folder is open in a tall rectangular list-view window in the lower left of your screen.
- You minimize your home directory window.
- Some time later, you're working with your documents folder and you want to go up one level to your home directory, so you select it from the hierarchy pop-up menu in the window toolbar.
- This causes contents of your list-view window to be replaced with the contents of your home directory. They're in icon view, and the icon positions are the same, but the size and position of the window are that of the documents window whose contents they replaced. Your minimized home directory window remains minimized in the Dock.
- You maximize your home directory window. It appears in its former position and size in the upper-right of your screen. Now you've got two windows showing the contents of the same folder in two different places on your screen.
This is but one of the many confusing, spatially inconsistent scenarios that is possible with the OS X Finder. I was tempted to just chalk it up to understandable beta bugs, but the unavoidable question is this: how is it supposed to work then? The combination of in-place browser-style functions and the totally new concept of creating a "New Finder Window" (which is incidentally bound to command-n, destroying 16 years worth of muscle memory in the millions of Mac users who expect that sequence to create a new folder) necessarily compromises the spatial nature of the Finder.
Again, that's not necessarily a bad thing, provided this time-tested and heavily evolved functional interface is replaced with something better (it's not enough just to equal the old interface since a shift in something as fundamental as the Finder had better be worth throwing away almost two decades of familiarity for.)
Unfortunately, the OS X Finder as it exists in Public Beta does not achieve this goal. Worse, it not only removes all the evolutionary additions to the classic spatial Finder (pop-up tabbed folders, spring-loaded folders, etc.), it entirely compromises the spatial metaphor itself. Just imagine if that folded newspaper you put on the corner of your desk wasn't there a few hours later, or was there but unfolded, or was under your chair. You'd start to think someone was toying with you, or maybe that you're going senile. For a spatially oriented system to be effective, it must be absolutely consistent.
Replacing the spatial Finder is OS X Beta is an extremely limited browser interface that has an entire problem set of its own.
Even within the realm of the limited spatially oriented features of the new Finder, there are more problems to be found. The most glaring is the insane minimum window size discussed earlier. It makes common window aspect ratios impossible, primarily the "tall, skinny list-view window" and the "wide, thin icon-view window," both of which are often left open and used as quick-access points by Mac OS 9 users. (More on quick-access points in Mac OS X later.) There's also the excessively wide, unadjustable icon grid spacing, and the inability to toggle grid behavior on a per-drag basis via a modifier key (although these may just be beta bugs and/or unimplemented features.)
- Solutions:Either create a non-spatial Finder so intuitive, efficient, and powerful that we'll all forget about the classic Mac OS Finder, or be sure to implement all of the spatial features of the classic Finder in OS X. Guess which one of those options I think is the easiest to accomplish. People have been trying to out-do the classic spatial Finder for years with little success. It doesn't appear that Apple is doing any better so far.In my opinion, Apple should stick to the old spatial metaphor, at least in the short term during the transition to Mac OS X. Bruce Tognazzini sums it uplike this:
With the Mac, you have always had the power to move around and organize applications and documents in your own virtual space, maintaining a neat or cluttered workspace, as is your habit. Other desktop systems, from Windows to Unix, have depended more on abstraction, forcing users to remember the location of objects in complex hierarchies. In theory, all of this reduced clutter, but it really only moved the clutter from the visible desktop to the back of your mind. Since most of us work better with visible clutter than with rote memorization, our efficiency drops.
- 7. The Finder's "Column View" lacks flexibility.
- Explanation: The new Finder's "Column View" browser mode is handicapped by a lack of flexibility. We'll start from the top: in the screenshot above you can see the row of large buttons. Those buttons can actually appear on any Finder window, but I'm addressing them here because they're most associated with the browser-like functionality of column view. The buttons replace the contents of the window they're attached to with the contents of several common folders: the user's home directory, the applications folder, etc. They are also assigned the key sequence equivalents command-1 through command-6. Shortcuts are convenient, but it's shortsighted to think that every user will want shortcuts to the 6 locations Apple has chosen for them, and unfortunately that toolbar is not configurable (without some resource hacking, anyway.) Furthermore, it's also short-sighted to assume that the same set of shortcut buttons should be on every window. The NeXT file browser from which the interface is derived was much more flexible in this respect.
The next problem involves the columns themselves: they're not resizable. Well, they do expand and shrink as the window size changes, but they do so in unison, and when they reach certain widths the number of columns either increases or decreases (to a minimum of 2) depending on whether the window is getting smaller or larger. This system assumes that every column of information needs the same width as every other column, which is not the case in practice. The maximum width of the columns is also too narrow for "long" file names (greater than 30 characters or so) which seems silly considering that one of Mac OS X's touted features is support for 255 character file names.
Finally, the only column sorting choice is alphabetical. I don't think I've ever seen a list-like view in a file navigation application that wasn't able to be sorted based on more than one criterion.
Solutions: Make the buttons in the toolbar customizable on a per-directory basis, make the column widths independently resizable, add the usual sorting choices, and include a setting to define the maximum column width before columns multiply and the minimum column width before columns disappear.
Explanation: Font sizes in the Finder are not adjustable. This is especially vexing in list-view windows. The default font size is considerably larger than that found in classic Mac OS. This in and of itself is actually a good idea. Screen resolutions have increased over time, and 10-point Geneva (the Mac OS 9 default) may be too small for a lot of users on a modern 1024x768 or larger monitor. But advanced users typically want to jam the most information possible into every corner of the screen. The new, larger default font does not allow nearly as many visible characters in a given screen area.
Solutions: Make font sizes adjustable.
Explanation: In the screenshot above you can see the potition of the menu that pops up when the highlighted area of the Finder's toolbar pop-up menu widget is clicked. It's about an inch from the cursor, which is much too far away in my opinion. Obviously that pop-up menu doesn't need to be as wide as it is.
Solutions: Make pop-up menus only as wide as they need to be to promote easier operation. | <urn:uuid:d9bef868-44ba-40bc-b6ec-0cdc4131ea4b> | CC-MAIN-2017-04 | http://arstechnica.com/apple/2010/09/macos-x-beta/14/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00069-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9442 | 2,349 | 2.578125 | 3 |
We’ve all had some experience in Microsoft Word, perhaps the most popular program in the Office Suite (many would argue). But many still don’t realize that there are quite a few hidden features in Word that, when learned, will help make you into a master of the globally-instituted document composition platform. Here are 10 key ways to master your use of Microsoft Word and make your working life that much more enjoyable.
- Enjoy the use of more of Word’s symbols as you type. Normally, when you are typing in a Word doc you see a lot of empty space between the words and lines, but there is a lot more going on than what is visible. If you want to see what you’re missing in terms of helpful formatting symbols, Go to File, Options, then Display, then Always Show These Formatting Marks on the Screen. Under that heading, you will see a list of options that will allow things like paragraph signs and dots marking the amount of space between words to become visible:
- How many ways can you format a paragraph? The answer is: There are many ways to format paragraphs, and you can easily master this and take your Word authorship to a new level. By allowing the paragraph symbol to be shown (as in step 1), this will allow you to copy over the formatting along with the text to wherever you want to next paste that text.
- Know Thy Word sections. Learn to organize your Word docs better by utilizing the different breaks found in the use of sections. Access the Breaks portion on the Page Layout menu, and see your document as Microsoft Office sees it. By setting up your Word doc in sections, you can independently format each section and attain a level of mastery over your document not otherwise found.
- Master the use of Styles. You can create style templates in Word which can be used again and again for future documents. For example, if you write a lot of memos, you can create a style template for memos, and so on. You can go to Design >> Themes for some good style ideas.
- Format your document prior to writing. Formatting your doc prior to beginning the writing of it is a good idea, so you can get a well-formed idea of the format before commencing the actual writing part. Many of us have experienced the frustration of wording a document only to have to format and perhaps reformat it in a different setting because we didn’t establish (and save) the formatting from the get-go.
- Customize your paste options. You can control how MS Office pastes your text by clicking on the Office logo (the button at the top left of the screen), going to Word Options, then to Advanced. You should then see a Cut, Copy, and Paste option that lets you configure customized options. This will do things like disable hyperlinking when pasting, along with other handy things to make your use of Word more enjoyable.
- Use fully justified formatting. This is perhaps one of the better-known Word formatting options – fully justified formatting will give you equally-aligned margins without the ragged edge on the right side that’s so commonplace in writing. It appeals to those who want a tidy, clean, and perhaps more professional look to their text, though “there’s no arguing taste” but with the beholder (or writer) in this case. Nevertheless, if you want to access this option, click the Office logo >> Word Options >> Advanced, then expand the Layout Options and set fully justified formatting there.
- Hide the Ribbon. This is another common option used by Word aficionados. For those who get a bit too distracted by the visual busy-ness of their ribbon toolbar, there is a shortcut to hiding it: Click CTRL+F1. Do it again to make it reappear.
- Clear all formatting. Here’s one many may not know of: The Clear All Formatting option, which does exactly what it says. This will give you a chance to clear the formatting slate and start over again. Select however much text you want to clear, and click the button that looks like the letter A holding an eraser right beneath References on the main ribbon interface.
- Spike your copy and pasting. Here’s a special way to copy and paste that allows you to copy from different places in a document and then paste them all together elsewhere. The CTRL+F3 command will allow you to cherry-pick the various places in your doc and put them all together in another area, or new document. The spike-pasted text will also display where the original cuts were, for comprehensive editing purposes.
Talk to a Software and Office Specialist
If you need further help with Microsoft Office programs like Word, you can speak to a specialist at Apex, which is a proven leader in providing IT consulting and software support in Central and Northern California. Contact us at (800) 310-2739 or send us an email at firstname.lastname@example.org today, and we can help you with all your questions or needs. | <urn:uuid:98cebaa9-ccc4-4e4f-a00b-bc1bc66be0f7> | CC-MAIN-2017-04 | https://www.apex.com/10-ways-to-master-your-use-of-microsoft-word/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00005-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915883 | 1,045 | 2.5625 | 3 |
This course introduces Node.js to the experienced developer who wants more control, higher performance, effective security, and cross-platform support. You will learn how Node.js is built from a small but powerful core and how these low-level constructs can be used together to build complete, modern Web applications. You will learn how to use Express and Passport frameworks to build secure Web servers. Learn multiple ways of structuring large code bases and automating the development and operations tasks so that maintenance and deployments are as repeatable and consistent as possible.
This course uses MongoDB, Mongoose ODM(Object Document Mapper), and Mocha unit testing framework.
Note: You are required to bring your own laptop. | <urn:uuid:cb0f8eb9-4c16-4b4d-aefd-d2b57e55317e> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/120447/essential-nodejs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00427-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898454 | 145 | 2.6875 | 3 |
Someday the number of scientists -- and I'm using the word loosely here -- who actually believe human activity has had no impact on global warming, and who might even believe that global warming is a myth, will dwindle down to one. And when it does, the professional climate-change deniers and their financial backers will still insist that there is fierce disagreement about global warming. Except there won't be, and there really isn't now. At least not among rational people. The truth is that the vast majority of qualified climate scientists have concluded that humans have caused the Earth's temperatures to rise. Drafts of a report by the Intergovernmental Panel on Climate Change (IPCC) due out in September "say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s," Reuters reports. Just 12 years ago, the same panel said there was a 66% chance that humans were causing global warming. Back in 1995, when there actually was a debate (but shouldn't have been), the number was 50%. None of this will matter to the people who are paid to obstruct efforts to counteract climate change, or to the anti-regulation, Prison Planet crowd. But for the real scientists, the question now is not what causes global warming, but how to assess its impact on a local level. And that has the scientists stumped, according to Reuters:
Drew Shindell, a NASA climate scientist, said the relative lack of progress in regional predictions was the main disappointment of climate science since 2007.
"I talk to people in regional power planning. They ask: 'What's the temperature going to be in this region in the next 20-30 years, because that's where our power grid is?'" he said."We can't really tell. It's a shame," said Shindell.
Or as Reto Knutti, a professor at the Swiss Federal Institute of Technology in Zurich, responded when asked how global warming could affect nature, "You can't write an equation for a tree." Now read this: | <urn:uuid:be6ae23b-8d23-429f-bd2d-9d518bb1724f> | CC-MAIN-2017-04 | http://www.itworld.com/article/2708231/enterprise-software/science-panel-only-95--certain-that-humans-are-causing-global-warming--so-obviously-the-debate-rages.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00271-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974803 | 427 | 2.796875 | 3 |
This page describes some of the details on how email is sent to and from your PC
Email clients communicate with a specific email server via well defined protocols such as SMTP and POP3 (need help with acronyms or jargon - go to whatis.com.). Your email server is often provided by your Internet provider or employer. Your email client sends new emails out through the SMTP server. The SMTP server may keep the email locally if the addressee is a fellow user of the same server. Otherwise, the mailserver will forward the email directly to the appropriate mail server, based on the DNS records of the addressee's domain name. ( see connection detective for more information about identifying the mail server associated with a domain name). Email accumulates for each user at the email server until the user client initiates an email download. Email is typically retrieved via the POP3 protocol.
Note: Most email servers will be properly configured to only send email from their own users. This way, if the user is "mis-behaving" (sending spam, etc) than the system administrator can cancel the user's account. Unfortunately, some mail servers are configured to allow anybody to forward their email through the mail server. This is called an "open SMTP relay" and can be heavily exploited by spammers. See email_persona for more information.
|For More Information:|
Contact me at 703-729-1757 or Russ
If you use email, put "internet training" in the subject of the email.
Copyright © Information Navigators | <urn:uuid:aea5223e-f9ac-4e41-a3f0-f16e4c043d09> | CC-MAIN-2017-04 | http://navigators.com/email_details.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00169-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.881871 | 323 | 3.0625 | 3 |
Definition: Find a solution by trying one of several choices. If the choice proves incorrect, computation backtracks or restarts at the point of choice and tries another choice. It is often convenient to maintain choice points and alternate choices using recursion.
Note: Conceptually, a backtracking algorithm does a depth-first search of a tree of possible (partial) solutions. Each choice is a node in the tree.
Explanation with the 8-queens problems. maze solving Java applet. James D. Allen's short explanation.
An early exposition of this technique:
Solomon W. Golomb and Leonard D. Baumert, Backtrack Programming, Journal of ACM, 12(4):516-524, Oct 1965.
"This rather universal method was named 'backtrack' by Professor D. H. Lehmer of the University of California at Berkeley."
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 10 November 2008.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "backtracking", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 10 November 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/backtrack.html | <urn:uuid:bd04677b-06e4-41d4-ae7b-8db0e4694bbf> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/backtrack.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.839931 | 295 | 4 | 4 |
Monday, March 9, 1998
Michelangelo may be dead, but viruses remain real threat to computer data
--Tips to protect your valuable information--It’s been six years since Michelangelo first frightened computer users around the world. According to legend, Michelangelo – not the famed artist – but rather the most famous virus of the same name, allegedly destroys data with one sadistic brushstroke when the clock reads March 6. But even though Michelangelo didn’t become the worldwide disaster many predicted, that doesn’t mean users can breathe any easier.
Michelangelo is still only one of 10,000 reported viruses that cost businesses more than $1 billion per year. Viruses, self-replicating programs that invade a host file on your system, can cause any number of strange side effects, from simple messages announcing their presence and keyboard chirps to destroying much of your valuable data and locking your computer. So, how do you protect your computer and your invaluable data from being infiltrated and destroyed?
Ontrack advises computer users to safeguard their valuable data by checking computers regularly. Stuart Hanley, vice president of worldwide operations for Ontrack, a company specializing in protecting and recovering data, encourages computer users to be proactive in the fight against viruses.
"While Michelangelo may be dead, a new generation of more destructive viruses continue to pose a very real threat to users," says Hanley. "We’re seeing new, stronger viruses every day; it’s important to remember that while many people assume viruses are a thing of the past because of the existing technologies to fight them, data is too valuable to risk losing on an assumption," Hanley says.
Hanley offers the following tips for protecting your valuable data, fighting viruses and avoiding the panic and frustration often evoked when faced with computer problems.
· BEWARE OF FILES BEARING VIRUSES: Whether you know you have a virus or not, Hanley cautions users about where they get their data. "Many users don’t take the precautions necessary to protect their data. They freely download applications or files from the Internet or put unchecked diskettes into their computers." Hanley warns users to be cautious of all programs and files they put into their computers, whether from a reputable source or not. "You can never be sure if software is virus-free; don’t be fooled by brand names. It only takes a few moments to safeguard your data by running a virus scanner." · NEVER PIRATE SOFTWARE: Legal considerations aside, copying files is a great way to spread viruses and when the duplication is done without quality control, the chance of infection is multiplied. The same goes for your other software. "Make sure to check all incoming software, regardless of its name," Hanley advises.
· GUARD AGAINST INFECTION WITH ANTI-VIRUS SOFTWARE: Hanley recommends using a reputable anti-virus product to keep your computer running in tip-top shape. "Running an anti-virus program will tell you if your system has an unexpected visitor and, if used at the right time, will be able to destroy the virus and restore your computer to working order." After running a virus scanner, you'll know if anti-virus software, such as VetÔAnti-Virus, is needed to eradicate the virus and/or if you need to contact a qualified data recovery engineer. Vet Anti-Virus is now available from Ontrack.
· GET RELIABLE BACK-UP: The safety of your data is only as reliable as its back-up. Thus, Hanley suggests investing in, using, and testing the restore capabilities of backups regularly. "Though not every problem can be solved by backing up a system, it is one of the most effective ways to protect yourself from losing precious data."
Hanley cautions that even the best anti-virus programs can’t help you if a virus has already worked its black magic. "You may find it’s too late and the virus was able to destroy or damage some of your data. In these situations, the best thing you can do is contact a qualified data recovery expert."
· BEGIN A REGULAR COMPUTER CHECK-UP PROGRAM: Hanley says beginning a data protection regimen immediately is the only way to adequately safeguard against viruses and other sources of data loss. "The best thing you can do for peace of mind is to use Ontrack Data Advisor™ software regularly. Data Advisor diagnostic software quickly assesses the health of your system, tells you what’s wrong and offers real-time solutions. And, because it’s self-booting, it will even work when your computer doesn’t." Hanley reminds users that these tips are only part of the solution to total computer health and not to rely solely on them. "Not every problem users encounter is foreseeable or immediately fixable, but if they need additional assistance, professional Ontrack data recovery experts are there to help." Ontrack (Nasdaq: ONDI), the world leader in data recovery, specializes in software and services that help computer users protect their valuable data. Ontrack uses hundreds of proprietary tools and techniques to recover lost or corrupted data from any storage device and operating system. Ontrack can be reached through its World Wide Web site at http://www.ontrack.com or by calling 800-872-2599. In addition to its Minneapolis headquarters, Ontrack operates data recovery labs in Los Angeles, San Jose, Washington, D.C., Tokyo, London and Stuttgart.### | <urn:uuid:28dbd7f5-af47-427d-b7ee-9e4e3a2666f3> | CC-MAIN-2017-04 | https://www.krollontrack.com/resources/press/details/61050/michelangelo-may-be-dead-but-viruses-remain-r/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00527-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929966 | 1,149 | 2.828125 | 3 |
This week Cray announced an Exascale Research Initiative, in which the supercomputer maker will team with a number of European HPC groups to research and develop technologies to support exaflop computing. This mirrors a June announcement by IBM that talks about an exascale research center in Ireland. No big surprises here. Everyone expects Cray and IBM to be pushing the exascale envelope.
But when it comes to talking about exascale applications, I wonder why the prospect of developing more accurate climate models and accelerating energy research is being used as a rationale for why we need such systems. In the Cray press release this week, company CEO Peter Ungaro stated: “We know there are scientific breakthroughs in important areas such as new energy sources and global climate change that are waiting for exascale performance, and we are working hard on building next-generation supercomputers that will be capable of it.” It is certainly not the first time the selling of exascale has been linked with climate and energy research, as even a cursory Google search will demonstrate.
Surely I’m not the only one who sees the cognitive disconnect here. The first sustained exaflop machines aren’t expected to boot up until the end of the next decade. I hope we’re not counting on “scientific breakthroughs” in 2019 to solve our 2009 energy and climate crisis. In case you haven’t picked up a newspaper in the last five years or so, a consensus has formed that we’re already more than fashionably late to the global housewarming party, the recent “Climategate” dust-up notwithstanding.
A February 2009 article in Scientific American warns that “the risk of catastrophic climate change is getting worse,” according to a recent study by United Nations Intergovernmental Panel on Climate Change (IPCC). There’s a real possibility that it’s already too late to reverse some of the damage resulting from rising sea levels, ocean acidification, and more extreme weather patterns. Quoting Stanford University climatologist Stephen Schneider from the Scientific American piece: “We’ve dawdled, and if we dawdle more it will get even worse. It’s time to move.” Notice he didn’t say: “Let’s run the numbers again with more fidelity and see what gives.”
Likewise, relying on exascale computing to help with the development of non-carbon based energy sources seems like a doomed strategy. If we’re not well on our way to kicking the oil and gas habit by the end of the next decade, I can’t imagine some amped up simulation of wind turbines is going to save us 10 years hence.
It’s disheartening to realize how long we’ve actually known about this problem compared to how little we’ve done. In watching a several-year-old rerun of a “The West Wing” episode the other day, a discussion of global warming came up that was depressingly similar to the ones we hear today. Let’s face it: there are all sorts of low-tech approaches (e.g., conservation, electric vehicles, carbon taxing, etc.) that require nary a FLOP of computing power, but will do a lot to put us on the road to climate redemption. For the past 10 years, the lack of action wasn’t related to technological shortcomings, just a lack of political will.
Part of the problem has to be the way we treat the climate and energy research itself, as if it’s some sort of lab experiment divorced from reality. We certainly don’t demand the same level of scientific scrutiny about decisions related to our personal well-being.
Let’s say 9 out of 10 doctors told you that you had a heart condition that will incapacitate you (if not kill you) in ten years, adding that the condition can be remedied by changing your lifestyle. The lifestyle changes would be onerous, but nothing that you wouldn’t be able to adapt to. Would you a) demand better proof of the heart condition from the nine doctors in agreement, b) wait for technology that would allow you to eat deep-fried twinkies without the deleterious side-effects or c) suck it up? Only a fool would choose a or b. Yet, so far, those are the two types of options we’ve chosen in response to our global crisis.
Don’t get me wrong. We should certainly continue to employ cutting-edge HPC to drive climate and energy research, from now until forever. The payoff from fusion research alone would be worth it. But to peg exascale computing as a technology lynchpin for our current predicament seems completely misplaced. For the time being we’re going to have to make due with our teraflops and petaflops, and hope that when exaflops systems come online we’ll still be around for yet grander challenges. | <urn:uuid:3e72e23e-dbcf-470e-aac0-5a0e430af945> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/12/10/exaflops_needed_to_solve_climate_crisis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00308-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941357 | 1,056 | 2.734375 | 3 |
When Schrodinger Materials Science tools wanted to test out 200,000 different organic compounds to see which ones could be a good fit to be used in photovoltaic electricity generation, the amount of data it had to deal with was an inhibiting factor, to say the least.
The company wanted to design, synthesize and experiment various combinations to find just the right fit. The job would have taken about $68 million worth of infrastructure, or almost 200 years if ran on a single machine. Instead, Schrodinger hired Cycle Computing, which specializes in large-scale distributed high performance computing to do it all in Amazon's public cloud.The job ran across 156,000 virtual cores, and exceeded 1.21 petaflops of computing capacity. Using a distributed system of virtual machines across eight regions of Amazon Web Servcie's public cloud around the world for a total of 18 hours.
Cycle says it was a record-breaking petabyte-scale analytics job. So big it has dubbed it the "Megarun."
Cycle Computing has a software management platform that controls the hundreds of thousands of virtual machines that are needed to run these types of jobs. Life science testing is a perfect fit for this software because of the massive amounts of options that are available to scientists to test a broad range of theories.
Cycle uses it software to make the job as least expensive as possible. Using cloud-based resources that are spun up and then deprovisioned as soon as the job is finished, the total cost came to just $33,000. Cycle used more than 16,700 AWS Spot Instances, which are virtual machines that are not reserved or dedicated resources, but instead are made available to customers when they are available. The Cycle software also schedules data movement, encrypts the data and automatically detects and troubleshoots some errors, such as failures to a machines, zones or regions.
While the 156,000 core run is an impressive accomplishment for Cycle, the company has been doing this sort of thing before. Between 2010 and 2013, it has run analytic jobs of 2,000; 4,000; 10,000; 30,000 and 50,000 cores. This one was so big that Cycle calls it its "MegaRun." In addition to using the Cycle software, named Jupiter, it also used Chef automated configuration tools. | <urn:uuid:e79a704b-dba6-4844-b2e6-59aed6d5e0ea> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2225781/-68-million--200-year--150-000-core-analytics-job-run-on-amazon-s-cloud-in-18-hours-for--33-000.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00428-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947888 | 479 | 2.71875 | 3 |
A bunch of bad user passwords. That was all it took for cybercriminals to expose data on a site that they didn’t even directly break into. The site from which user information was compromised was Tesco.com, but according to CBR, Tesco itself was not breached.
Instead, hackers had taken advantage of a user trend that unfortunately is all too common: weak user passwords. By infiltrating other websites that Tesco users also frequented, the cybercriminals were able to determine and then compromise passwords for 2,239 online Tesco patrons.
How did they do it? Because user passwords were the same across different sites. This is not completely the fault of the customers themselves. Companies like Tesco should have strong authentication strategies in place for its patrons to protect them against such attacks.
However, as one expert pointed out, it is never a good idea for any computing user to have the same password across different platforms.
“This is about consumer behavior,” security expert Trey Ford told CBR. “People continue to reuse passwords and other credentials across multiple sites, making it easy for attackers to compromise them.”
Malware Attacks Continue to Strike Across the Board
Unfortunately, malware and cybercrime are not going away anytime soon. A recent Security Threat Report conducted by Sophos for 2013 found that the threat landscape for malware is constantly expanding.
The mounting nature of the threats is primarily due to a growing sophistication within the hacking community. Whereas attacks were once conspicuous, the malware criminal of today lurks in the shadows, compromising enterprise security covertly and often slipping out of a company’s system before they are even detected.
“Cybercriminals have become more adept at eluding identification,” the Sophos report found.
This new tendency was illustrated recently by an attack on retail giant Neiman Marcus, which resulted in compromised information for millions of customers. Reportedly launched in July 2013, the breach was not fully stopped until nearly half a year later, in January 2014, Reuters reported. But by then it was too late — credit card information had already been stolen.
There is Money in Malware — Big Money
So why does cybercrime pose an unprecedented degree of threat? According to the Sophos report, it is not only because malicious incursions are easier to carry out, it is also because they are bringing in more profits.
Just like any criminal operation, malware is emboldened by the green. It is not incidental that cybercriminals tend to target things like Social Security numbers and credit cards: these things present a means of accessing a person’s identity. And once that identity is assumed, it can be exploited.
Based on a separate report carried out by the RAND Corporation — which posits that the cybercrime “Black Market” may be more profitable than international drug crime — the threat of malware necessitates identity protection on the part of computing users everywhere.
To learn about how the malware Black Market is raking in profits — and how users can defend their identity against cybercriminals — tune in to Part 2. | <urn:uuid:b7779788-0589-4020-9484-b2b2f3b7eca3> | CC-MAIN-2017-04 | https://www.entrust.com/cybercrime-compromising-identity-reaping-profits-slowing-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00244-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964706 | 635 | 2.53125 | 3 |
The use of laptop encryption in higher education, especially by faculty and staff, seems like a no-brainer to me. After all, such computers are full of personal information, not only of the devices' owners themselves but also of the student body (they still use SSNs as student IDs, don't they?).While the Department of Education may not openly require the use of encryption software, it's always a good idea. Even if you think that your computer is properly protected behind locked doors. Why? As the University of South Carolina shows us, doors can be busted.
According to databreaches.net and a breach notification letter posted at abccolumbia.com, the physics and astronomy departments at the University of South Carolina experienced a data breach when a laptop was stolen from a locked room. The data breach affected 6,000 students who were enrolled in physics and astronomy classes at SC between January 2010 and today.The breached data involved full names, SSNs, and other personally identifiable information. While disk encryption for student data was not employed, password recovery was used (which is tantamount to applying leaches to a massive melanoma – in other words, less than useless) and the laptop was stored in a locked room.Considering the type of information that was being stored in that room, however, it surprises me (well, maybe it doesn't. I've heard of worse, actually) that these were the only things between a sensitive data and a burglar. One wonders: if the Department of Education also had a policy of issuing monetary fines – like the Department of Health and Human Services, which can impose a penalty of up to $1 million – for preventable data breaches, would the University of South Carolina relied only on a door for their security needs?You know what's really surprising, though? That in the past three years, 6,000 students were enrolled in physics and astronomy courses. (And, personally, this is music to my ears.)
Many universities and small colleges have undergone the process of replacing student ID numbers with something other than SSNs. This is a great first step towards data security. After all, you can't have a data breach on what you don't collect.However, personal information encompasses more than SSNs alone. A student's grades, for example, are also subject to protection. Naturally, these scores have to be linked to some form of identifier, be it a first and last name, a student ID number, or whatever.In fact, that such information has to be linked to an identifier means that the potential for a data breach is always there. Not using proper protection, then, is an invitation for future data breaches. | <urn:uuid:4b83c311-a2b9-4bd6-a515-92806737922a> | CC-MAIN-2017-04 | http://www.alertboot.com/blog/blogs/endpoint_security/archive/2013/07/05/education-encryption-laptop-encryption-beats-locked-rooms-shows-university-of-south-carolina-data-breach.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00060-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973085 | 543 | 2.734375 | 3 |
When it comes to the transition to digital broadcasts of television programming, the elephant in the room has been the fact that many people who can receive analog transmissions just fine may not get digital transmissions. As I’ve discussed here many times, a weak analog signal produces a snowy image, but a weak digital signal results in a blank screen. The problem is that not enough people are aware of this, or what they can do about it.
The FCC has addressed this issue with some new online resources. Some might say that it’s a little bit late for this information — especially if the original transition date of yesterday had been upheld — but we’ll be generous and file this under the Better Late than Never category.
Go to http://www.dtv.gov/fixreception.html. There, you’ll find two publications (available as Web pages or PDF downloads) that discuss how to fix reception problems. Some of the tips are excellent, such as the fact that you can move a rabbit-ear antenna just inches and it can make a huge difference in your reception. I live in an area of moderate to weak signals, so I tried playing with some rabbit ears that I have connected to a secondary TV set. They work okay for analog reception, but when I tried them with a converter box, I only got two stations and the signal was too weak to watch because the picture kept breaking up.
I tried moving the antenna about two feet away, and scanned again. This time I got a dozen stations, and most of them were strong enough to watch. I set the converter box control to show the signal strength, and then I tried tweaking the settings. The result was a noticeable improvement. So it’s worth spending some time making adjustments to the location and angle of your antenna. Remember that the change in the signal strength meter is not instantaneous, so make a small change, then wait a few seconds to see if it is better or worse before you make the next small change.
The other major improvement is that the FCC has added a site that predicts your signal strength based on the FCC database of information about the broadcast stations and terrain: http://www.fcc.gov/mb/engineering/maps/. Enter your address, and it will show your location on a Google map, and a list the stations you should be able to receive in order of signal strength. It’s not perfect, because it’s based on theoretical calculations, but it’s a good start. And like www.antennaweb.org, it gives you the compass heading from your location to the transmitter, which can help you aim a directional antenna. (Some antennas are omni-directional, which means they work in all directions, so you don’t need to aim them.)
This new information would have been good to have a year ago, but now that we have it, we may be able to find a way to make that elephant in the room a little smaller. | <urn:uuid:01763412-760b-4427-8c3f-de69563b1880> | CC-MAIN-2017-04 | https://hdtvprofessor.com/HDTVAlmanac/?p=887 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00208-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953824 | 617 | 2.5625 | 3 |
Chapter 18 – Parallel Processing
True parallel processing has been the goal of quite a few computer designers. Here, we give a brief history of parallel processing as a way of discussing the problems associated with this new way of computing. We then cover some of the classical ways of characterizing parallel processing. We should admit at the first that one big problem with parallel processing is the human doing the programming; we are essentially serial thinkers.
The Origin of Parallel Computing
The basic parallel computing organization dates from the 19th century, if not before. The difference is that, before 1945, all computers were human; a “computer” was defined to be “a person who computes”. An office dedicated to computing employed dozens of human computers who would cooperate on solution of one large problem. They used mechanical desk calculators to solve numeric equations, and paper as a medium of communication between the computers. Kathleen McNulty, an Irish immigrant, was one of the more famous computers. As she later described it:
“You do a multiplication and when the answer appeared, you had to write it down and reenter it. … To hand compute one trajectory took 30 to 40 hours.”
from the time of U.S. participation in the Second World War illustrates
the important features of parallel computing. The problem was large, but could be broken into a large number of independent pieces, each of which was rather small and manageable. Each part of the problem could be assigned to a single computer, with the expectation that communication between independent computers would not occupy a significant amount of the time devoted to solving the problem.
An Early Parallel Computer
Here is a picture, probably from the 1940’s.
Note that each computer is quite busy working on a mechanical adding machine. We may presume that computer–to–computer (interpersonal) communication was minimal and took place by passing data written on paper. Note here that the computers appear all to be boys. Early experience indicated that grown men quickly became bored with the tasks and were not good computers.
Consider a computing system with N processors, possibly independent. Let C(N) be the cost of the N–processor system, with C1 = C(1) being the cost of one processor. Normally, we assume that C(N) » N·C1, that the cost of the system scales up approximately as fast as the number of processors. Let P(N) be the performance of the N–processor system, measured in some conventional measure such as MFLOPS (Million Floating Operations Per Second), MIPS (Million Instructions per Second), or some similar terms.
Let P1 = P(1) be the performance of a single processor system on the same measure. The goal of any parallel processor system is linear speedup: P(N) » N·P1. More properly, the actual goal is [P(N)/P1] » [C(N)/C1]. Define the speedup factor as S(N) = [P(N)/P1]. The goal is S(N) » N.
Recall the pessimistic estimates from the early days of the supercomputer era that for large values we have S(N) < [N / log2(N)], which is not an encouraging number.
It may be that it was these values that slowed the development of parallel processors. This is certainly one of the factors that lead Seymour Cray to make his joke comparing two strong oxen to 1,024 chickens (see the previous chapter for the quote).
The goal of parallel execution system design is called “linear speedup”, in which the performance of an N–processor system is approximately N times that of a single processor system. Fortunately, there are many problem types amenable to algorithms that can be caused to exhibit nearly linear speedup.
Linear Speedup: The View from the Early 1990’s
Here is what Harold Stone [R109] said in his textbook. The first thing to note is that he uses the term “peak performance” for what we call “linear speedup”. His definition of peak performance is quite specific. I quote it here.
multiprocessor is operating at peak performance, all processors are engaged in
useful work. No processor is idle, and
no processor is executing an instruction that would not be executed if the same
algorithm were executing on a single processor.
In this state of peak performance, all N processors are contributing to
effective performance, and the processing rate is increased by a factor of
N. Peak performance is a very special
is rarely achievable.” [R109].
Stone notes a number of factors that introduce inefficiencies and inhibit peak performance.
1. The delays introduced by inter–processor communication.
2. The overhead in synchronizing the work of one processor with another.
3. The possibility that one or more processors will run out of tasks and do nothing.
4. The process cost of controlling the system and scheduling the tasks.
Motivations for Multiprocessing
Recalling the history of the late 1980’s and early 1990’s, we note that originally there was little enthusiasm for multiprocessing. At that time, it was thought that the upper limit on processor count in a serious computer would be either 8 or 16. This was a result of reflections on Amdahl’s Law, to be discussed in the next section of this chapter.
In the 1990’s, experiments at Los Alamos and Sandia showed the feasibility of multiprocessor systems with a few thousand commodity CPUs. As of early 2010, the champion processor was the Jaguar, a Cray XT5. It had a peak performance of 2.5 petaflops (2.5·1015 floating point operations per second) and a sustained performance in excess of 1.0 petaflop. As of mid–2011, the Jaguar is no longer the champion.
is designed and intended to facilitate parallel processing. Remember that there are several classes of parallelism,
only one of which presents a significant challenge to the computer system
designer. Job–level parallelism or process–level
parallelism uses multiple processors to run multiple independent programs
simultaneously. As these programs do not
communicate with each other, the design of systems to support this class
of programs presents very little challenge.
The true challenge arises with what might be called a parallel processing program, in which a single program executes on multiple processors. In this definition is the assumption that there must be some interdependencies between the multiple execution units. In this set of lectures, we shall focus on designs for efficient execution of solutions to large single software problems, such as weather forecasting, fluid flow modeling, and so on.
Most of the rest of this chapter will focus on what might be called “true parallel processing.”
Here is a variant of Amdahl’s Law that addresses the speedup due to N processors.
Let T(N) be the time to execute the program on N processors,
T1 = T(1) be the time to execute the program on 1 processor.
The speedup factor is obviously S(N) = T(1) / T(N). We consider any program as having two distinct components: the code that can be sped up by parallel processing, and the code that is essentially serialized. Assume that the fraction of the code that can be sped up is denoted by variable X. The time to execute the code on a single processor can be written as follows: T(1) = X·T1 + (1 – X)·T1 = T1
states that the time on an N–processor system will be
T(N) = (X·T1)/N + (1 – X)·T1 = [(X/N) + (1 – X)]·T1
The speedup is S(N) = T(1) / T(N) = 1 / [(X / N) + (1 – X)]
It is easy to show that S(N) = N if and only if X = 1.0; there is no part of the code that is essentially sequential in nature and cannot be run in parallel. Let’s examine the two most interesting cases. Suppose that X = 1. Then S(N) = 1 / [ 1/N + 0] = 1 / [1/N] = N. Suppose that X = 0.0, then S(N) = 1 / [0/N + 1] = 1/1 = 1; no speedup.
Suppose that 0.0 < X < 1.0. Then [(X / N) + (1 – X)] > (1 – X) and 1 / [(X / N) + (1 – X)] is less than 1 / (1 – X). So we have the maximum speedup for 0.0 < X < 1.0; it is 1 / (1 – X).
Some Results Due to Amdahl’s Law
Here are some results on speedup as a function of number of processors.
Note that even 5% purely sequential code really slows things down. For much larger processor counts, the results are about the same.
In the 1980’s and early 1990’s, the N/log2(N) was thought to be the most likely speedup , with log2(N) a second candidate. Each of these was discouraging. For N/log2(N) speedup, S(1024) = 102 and S(65536) = 4096. For log2(N) speedup, S(1024) = 10 and S(65536) = 16. Who would want to pay 65,536 times the dollars for 16 times the performance?
Taxonomy is just a way of organizing items that are to be studied. Here is the taxonomy of computer designs developed by Michael Flynn, who published it in 1966.
SISD (Standard computer)
SIMD (SSE on x86)
MISD (No examples)
MIMD (Parallel processing)
The classification focuses on two of the more characterizations of processing: the nature of the data streams and the number of processors applied to those data streams. The simplest, of course, is the SISD design, which characterizes most computers. In this design, a single CPU is applied to processing a single data stream. This is the classic von Neumann architecture studied in the previous chapters of this textbook. Even if the processor incorporates internal parallelism, it would be characterized as SISD. Note that this class includes some processors of significant speed.
The two multiple–data–stream classifications, SIMD and MIMD, achieve speedup by processing multiple data streams at the same time. Each of these two classes will involve multiple processors, as a single processor can usually handle only one data stream. One difference between SIMD and MIMD is the number of control units. For SIMD, there is generally one control unit to handle fetching and decoding of the instructions. In the MIMD model, the computer has a number of processors possibly running independent programs. Two classes of MIMD, multiprocessors and multicomputers, will be discussed soon in this chapter. It is important to note that a MIMD design can be made to mimic a SIMD by providing the same program to all of its independent processors.
This taxonomy is still taught today, as it continues
to be useful in characterizing
and analyzing computer architectures. However, this taxonomy has been replaced for serious design use because there are too many interesting cases that cannot be exactly fit into one of its classes. Note that it is very likely that Flynn included the MISD class just to be
complete. There is no evidence that a viable MISD computer was ever put to any real work.
Vector computers form the most interesting realization of SIMD architectures. This is especially true for the latest incarnation, called CUDA. The term “CUDA” stands for “Compute Unified Device Architecture”. The most noteworthy examples of the CUDA are produced by the NDIVIA Corporation (see www.nvidia.com). Originally NVIDIA focused on the production of GPUs (Graphical Processor Units), such as the NVIDIA GeForce 8800, which are high–performance graphic cards. It was found to be possible to apply a strange style of programming to these devices and cause them to function as general–purpose numerical processors. This lead to the evolution of a new type of device, which was released at CUDA by NVIDIA in 2007 [R68]. CUDA will be discussed in detail later.
SIMD vs. SPMD
Actually, CUDA machines such as the NVIDIA Tesla M2090 represent a variant of SIMD that is better called SPMD (Single Program Multiple Data). The difference between SIMD and SPMD is slight but important. The original SIMD architectures focused on amortizing the cost of a control unit over a number of processors by having a single CU control them all. This design leads to interesting performance problems, which are addressed by SPMD.
Parallel execution in the SIMD class involves all execution units responding to the same instruction at the same time. This instruction is associated with a single Program Counter that is shared by all processors in the system. Each execution unit has its own general purpose registers and allocation of shared memory, so SIMD does support multiple independent data streams. This works very well on looping program structures, but poorly in logic statements, such as if..then..else, case, or switch.
SIMD: Processing the “If statement”
Consider the following block of code, to be executed on four processors being run in SIMD mode. These are P0, P1, P2, and P3.
(x > 0) then
y = y + 2 ;
y = y – 3;
Suppose that the x values are as follows (1, –3, 2, –4). Here is what happens.
y = y + 2
y = y + 2
y = y + 2
y = y – 3
y = y – 3
y = y – 3
Execution units with data that do not fit the condition are disabled so that units with proper data may continue, causing inefficiencies. The SPMD avoids these as follows.
y = y + 2
y = y – 3
y = y + 2
y = y – 3
SIMD vs. SPMD vs. MIMD
The following figure illustrates the main difference between the SIMD and SPMD architectures and compares each to the MIMD architecture.
In a way, SPMD is equivalent to MIMD in which each processor is running the same high–level program. This does not imply running the exact same instruction stream, as data conditionals may differ between processors.
Multiprocessors, Multicomputers, and Clusters
We shall now investigate a number of strategies for parallel computing, focusing on MIMD. The two main classes of SIMD are vector processors and array processors. We have already discussed each, but will mention them again just to be complete.
There are two main classes of MIMD architectures [R15]:
a) Multiprocessors, which appear to have a shared memory and a shared address space.
which comprise a large number of independent processors
(each with its own memory) that communicate via a dedicated network.
Note that each
of the SIMD and MIMD architectures call for multiple independent
processors. The main difference lies in the instruction stream. SIMD architectures comprise a number of processors, each executing the same set of instructions (often in lock step). MIMD architectures comprise a number of processors, each executing its own program.
It may be the case that a number are executing the same program; it is not required.
Overview of Parallel Processing
Early on, it was discovered that the design of a parallel processing system is far from trivial if one wants reasonable performance. In order to achieve reasonable performance, one must address a number of important questions.
1. How do the parallel processors share data?
2. How do the parallel processors coordinate their computing schedules?
3. How many processors should be used?
is the minimum speedup S(N) acceptable for N
What are the factors that drive this decision?
In addition to the above question, there is the important one of matching the problem to the processing architecture. Put another way, the questions above must be answered within the context of the problem to be solved. For some hard real time problems (such as anti–aircraft defense), there might be a minimum speedup that needs to be achieved without regard to cost. Commercial problems rarely show this dependence on a specific performance level.
There are two main categories here, each having subcategories.
Multiprocessors are computing systems in which all programs share a single address space. This may be achieved by use of a single memory or a collection of memory modules that are closely connected and addressable as a single unit. All programs running on such a system communicate via shared variables in memory. There are two major variants of multiprocessors: UMA and NUMA.
In UMA (Uniform Memory Access) multiprocessors, often called SMP (Symmetric Multiprocessors), each processor takes the
same amount of time to access every
memory location. This property may be enforced by use of memory delays.
In NUMA (Non–Uniform Memory Access) multiprocessors, some memory accesses are faster than others. This model presents interesting challenges to the programmer in that race conditions become a real possibility, but offers increased performance.
Multicomputers are computing systems in which a collection of processors, each with its private memory, communicate via some dedicated network. Programs communicate by use of specific send message and receive message primitives. There are 2 types of multicomputers: clusters and MPP (Massively Parallel Processors).
Coordination of Processes
operating on parallel processors must be coordinated in order to insure proper
access to data and avoid the “lost update” problem associated with stale
data. In the stale data problem, a processor uses an old copy of a data item
that has been updated. We must guarantee
that each processor uses only “fresh data”.
We shall address this issue
head–on when we address the cache coherency problem.
Classification of Parallel Processors
Here is a figure from Tanenbaum [R15]. It shows a taxonomy of parallel computers, including SIMD, MISD, and MIMD.
Note Tanenbaum’s sense of humor. What he elsewhere calls a cluster, he here calls a COW for Collection of Workstations. Here is another figure from Tanenbaum (Ref. 4, page 549). It shows a number of levels of parallelism including multiprocessors and multicomputers.
parallelism, b) An attached coprocessor (we shall discuss these soon),
c) A multiprocessor with shared memory, d) A multicomputer, each processor having
its private memory and cache, and e) A grid, which is a loosely coupled multicomputer.
This is a model discussed by Harold Stone [R109]. It is formulated in terms of a time–sharing model of computation. In time sharing, each process that is active on a computer is given a fixed time allocation, called a quantum, during which it can use the CPU. At the end of its quantum, it is timed out, and another process is given the CPU. The Operating System will move the place a reference to the timed–out process on a ready queue and restart it a bit later. This model does not account for a process requesting I/O and not being able to use its entire quantum due to being blocked.
Let R be the length of the run–time quantum, measured in any convenient time unit. Typical values are 10 to 100 milliseconds (0.01 to 0.10 seconds). Let C be the amount of time during that run–time quantum that the process spends in communication with other processes. The applicable ratio is (R/C), which is defined only for 0 < C £ R.
In course–grain parallelism, R/C is fairly high so that computation is efficient.
In fine–grain parallelism, R/C is low and little work gets done due to the excessive overhead of communication and coordination among processors.
UMA Symmetric Multiprocessor Architectures
Beginning in the later 1980’s, it was discovered that several microprocessors can be usefully placed on a bus. We note immediately that, though the single–bus SMP architecture is easier to program, bus contention places an upper limit on the number of processors that can be attached. Even with use of cache memory for each processor to cut bus traffic, this upper limit seems to be about 32 processors [R15]. Here is a depiction of three classes of bus–based UMA architectures: a) No caching, and two variants of individual processors with caches: b) Just cache memory, and c) Both cache memory and a private memory.
In each architecture, there is a global memory shared by all processors. The bus structure is not the only way to connect a number of processors to a number of shared memories. Here are two others: the crossbar switch and the omega switch. These are properly the subject of a more advanced course, so we shall have little to say about them here.
The basic idea behind the crossbar and the omega is efficient communication among units that form part of the computer. A typical unit can be a CPU with an attached cache, or a memory bank. It is likely the case that there is little memory–to–memory communication.
The figure above shows three distinct UMA architectures. The big issue in each of these is the proper sharing of items found in a common global memory. For that reason, we do not consider the third option (cache memory and private memory), as data stored only in private memories cannot cause problems. We ignore the first option (no cache, just direct connect to the global memory) because it is too slow and also because it causes no problems. The only situation that causes problems is the one with a private cache for each CPU.
A parallel processing computer comprises a number of independent processors connected by a communications medium, either a bus or more advanced switching system, such as a crossbar switch. We focus this discussion on multiprocessors, which use a common main memory as the primary means for inter–processor communication. Later, we shall see that many of the same issues appear for multicomputers, which are more loosely coupled.
A big issue with the realization of the shared–memory multiprocessors was the development of protocols to maintain cache coherency. Briefly put, this insures that the value in any individual processor’s cache is the most current value and not stale data. Ideally, each processor in a multiprocessor system will have its own “chunk of the problem”, referencing data that are not used by other processors. Cache coherency is not a problem in that case as the individual processors do not share data. We shall see some situations in which this partitioning of data can be achieved, but these are not common.
In real multiprocessor systems, there are data that must be shared between the individual processors. The amount of shared data is usually so large that a single bus would be overloaded were it not that each processor had its own cache. When an individual processor accesses a block from the shared memory, that block is copied into that processors cache. There is no problem as long as the processor only reads the cache. As soon as the processor writes to the cache, we have a cache coherency problem. Other processors accessing those data might get stale copies. One logical way to avoid this process is to implement each individual processor’s cache using the write–through strategy. In this strategy, the shared memory is updated as soon as the cache is updated. Naturally, this increases bus traffic significantly. Reduction of bus traffic is a major design goal.
Logically speaking, it would be better to do without cache memory. Such a solution would completely avoid the problems of cache coherency and stale data. Unfortunately, such a solution would place a severe burden on the communications medium, thereby limiting the number of independent processors in the system. This section will focus on a common bus as a communications medium, but only because a bus is easier to draw. The same issues apply to other switching systems.
The Cache Write Problem
Almost all problems with cache memory arise from the fact that the processors write data to the caches. This is a necessary requirement for a stored program computer. The problem in uniprocessors is quite simple. If the cache is updated, the main memory must be updated at some point so that the changes can be made permanent.
It was in this context that we first met the issue of cache write strategies. We focused on two strategies: write–through and write–back. In the write–through strategy, all changes to the cache memory were immediately copied to the main memory. In this simpler strategy, memory writes could be slow. In the write–back strategy, changes to the cache were not propagated back to the main memory until necessary in order to save the data. This is more complex, but faster.
The uniprocessor issue continues to apply, but here we face a bigger problem.
The coherency problem arises from the fact that the same block of the shared main memory may be resident in two or more of the independent caches. There is no problem with reading shared data. As soon as one processor writes to a cache block that is found in another processor’s cache, the possibility of a problem arises.
We first note that this problem is not unique to parallel processing systems. Those students who have experience with database design will note the strong resemblance to the “lost update” problem. Those with experience in operating system design might find a hint of the theoretical problem called “readers and writers”. It is all the same problem: handling the issues of inconsistent and stale data. The cache coherency problems and strategies for solution are well illustrated on a two processor system. We shall consider two processors P1 and P2, each with a cache. Access to a cache by a processor involves one of two processes: read and write. Each process can have two results: a cache hit or a cache miss.
Recall that a cache hit occurs when the processor accesses its private cache and finds the addressed item already in the cache. Otherwise, the access is a cache miss. Read hits occur when the individual processor attempts to read a data item from its private cache and finds it there. There is no problem with this access, no matter how many other private caches contain the data. The problem of processor receiving stale data on a read hit, due to updates by other independent processors, is handled by the cache write protocols.
Cache Coherency: The Wandering Process Problem
This strange little problem was much discussed in the 1980’s (Ref. 3), and remains somewhat of an issue today. Its lesser importance now is probably due to revisions in operating systems to better assign processes to individual processors in the system. The problem arises in a time–sharing environment and is really quite simple. Suppose a dual–processor system: CPU_1 with cache C_1 and CPU_2 with cache C_2. Suppose a process P that uses data to which it has exclusive access. Consider the following scenario:
1. The process P runs on CPU_1 and accesses its data through the cache C_1.
process P exceeds its time quantum and times out.
All dirty cache lines are written back to the shared main memory.
some time, the process P is assigned to CPU_2.
It accesses its data
through cache C_2, updating some of the data.
4. Again, the process P times out. Dirty cache lines are written back to the memory.
P is assigned to CPU_1 and attempts to access its data. The cache C_1
retains some data from the previous execution, though those data are stale.
In order to avoid the problem of cache hits on stale data, the operating system must flush every cache line associated with a process that times out or is blocked.
Cache Coherency: Snoop Tags
Each line in a cache is identified by a cache tag (block number), which allows the determination of the primary memory address associated with each element in the cache. Cache blocks are identified and referenced by their memory tags.
In order to maintain coherency, each individual cache must monitor the traffic in cache tags, which corresponds to the blocks being read from and written to the shared primary memory. This is done by a snooping cache (or snoopy cache, after the Peanuts comic strip), which is just another port into the cache memory from the shared bus. The function of the snooping cache is to “snoop the bus”, watching for references to memory blocks that have copies in the associated data cache.
Cache Coherency: A Simple Protocol
We begin our consideration of a simple cache coherency protocol. After a few comments on this, we then move to consideration of the MESI protocol. In this simple protocol, each block in the cache of an individual processor can be in one of three states:
the cache block does not contain valid data.
This might indicate that the
data in the cache are stale.
(Read Only): the cache block contains
valid data, loaded as a result of a
read request. The processor has not written to it; it is “clean” in that it is not “dirty”
(been changed). This cache block may be shared with other processors; it may
be present in a number of individual processor caches.
(Read/Write): the cache block contains valid
data, loaded as a result of
either a read or write request. The cache block is “dirty” because its individual
processor has written to it. It may not be shared with other individual processors,
as those other caches will contain stale data.
A First Look at the Simple Protocol
Let’s consider transactions on the cache when the state is best labeled as “Invalid”. The requested block is not in the individual cache, so the only possible transitions correspond to misses, either read misses or write misses.
Note that this process cannot proceed if another processor’s cache has the block labeled as “Modified”. We shall discuss the details of this case later. In a read miss, the individual processor acquires the bus and requests the block. When the block is read into the cache, it is labeled as “not dirty” and the read proceeds.
In a write miss, the individual processor acquires the bus, requests the block, and then writes data to its copy in the cache. This sets the dirty bit on the cache block. Note that the processing of a write miss exactly follows the sequence that would be followed for a read miss followed by a write hit, referencing the block just read.
Cache Misses: Interaction with Other Processors
We have just established that, on either a read miss or a write miss, the individual processor must acquire the shared communication channel and request the block. If the requested block is not held by the cache of any other individual processor, the transition takes place as described above. We shall later add a special state to account for this possibility; that is the contribution of the MESI protocol.
If the requested block is held by another cache and that copy is labeled as “Modified”, then a sequence of actions must take place: 1) the modified copy is written back to the shared primary memory, 2) the requesting processor fetches the block just written back to the shared memory, and 3) both copies are labeled as “Shared”.
If the requested block is held by another cache and that copy is labeled as “Shared”, then the processing depends on the action. Processing a read miss only requires that the requesting processor fetch the block, mark it as “Shared”, and execute the read.
On a write miss, the requesting processor first fetches the requested block with the protocol responding properly to the read miss. At the point, there should be no copy of the block marked “Modified”. The requesting processor marks the copy in its cache as “Modified” and sends an invalidate signal to mark all copies in other caches as stale.
The protocol must insure that no more than one copy of a block is marked as “Modified”.
Write Hits and Misses
As we have noted above, the best way to view a write miss is to consider it as a sequence of events: first, a read miss that is properly handled, and then a write hit. This is due to the fact that the only way to handle a cache write properly is to be sure that the affected block has been read into memory. As a result of this two–step procedure for a write miss, we may propose a uniform approach that is based on proper handling of write hits.
At the beginning of the process, it is the case that no copy of the referenced block in the cache of any other individual processor is marked as “Modified”.
If the block in the cache of the requesting processor is marked as “Shared”, a write hit to it will cause the requesting processor to send out a “Cache Invalidate” signal to all other processors. Each of these other processors snoops the bus and responds to the Invalidate signal if it references a block held by that processor. The requesting processor then marks its cache copy as “Modified”.
If the block in the cache of the requesting processor is already marked as “Modified”, nothing special happens. The write takes place and the cache copy is updated.
The MESI Protocol
This is a commonly used cache coherency protocol. Its name is derived from the fours states in its FSM representation: Modified, Exclusive, Shared, and Invalid.
This description is taken from Section 8.3 of Tanenbaum [R15].
Each line in an individual processors cache can exist in one of the four following states:
1. Invalid The cache line does not contain valid data.
2. Shared Multiple caches may hold the line; the shared memory is up to date.
3. Exclusive No other cache holds a copy of this
the shared memory is up to date.
4. Modified The line in
this cache is valid; no copies of the line exist in
other caches; the shared memory is not up to date.
The main purpose of the Exclusive state is to prevent the unnecessary broadcast of a Cache Invalidate signal on a write hit. This reduces traffic on a shared bus. Recall that a necessary precondition for a successful write hit on a line in the cache of a processor is that no other processor has that line with a label of Exclusive or Modified. As a result of a successful write hit on a cache line, that cache line is always marked as Modified.
a requesting processor processing a write hit on its cache. By definition, any copy of the line in the
caches of other processors must be in the
1. Modified The protocol does not specify an action for the processor.
2. Shared The
processor writes the data, marks the cache line as Modified,
and broadcasts a Cache Invalidate signal to other processors.
3. Exclusive The processor writes the data and marks the cache line as Modified.
If a line in the cache of an individual processor is marked as “Modified” and another processor attempts to access the data copied into that cache line, the individual processor must signal “Dirty” and write the data back to the shared primary memory.
Consider the following scenario, in which processor P1 has a write miss on a cache line.
1. P1 fetches the block of memory into its cache line, writes to it, and marks it Dirty.
2. Another processor attempts to fetch the same block from the shared main memory.
snoop cache detects the memory request.
P1 broadcasts a message “Dirty” on
the shared bus, causing the other processor to abandon its memory fetch.
4. P1 writes the block back to the share memory and the other processor can access it.
Events in the MESI Protocol
There are six events that are basic to the MESI protocol, three due to the local processor and three due to bus signals from remote processors [R110].
Local Read The individual processor reads from its cache memory.
Local Write The individual processor writes data to its cache memory.
Eviction The individual processor must
write back a dirty line from its
cache in order to free up a cache line for a newly requested block.
Read Another processor issues a
read request to the shared primary
memory for a block that is held in this processors individual cache.
This processor’s snoop cache detects the request.
Write Another processor issues a
write request to the shared primary
memory for a block that is held in this processors individual cache.
Upgrade Another processor signals
that a write to a cache line that is shared
with this processor. The other processor will upgrade the status of
the cache line from “Shared” to “Modified”.
The MESI FSM: Action and Next State (NS)
Here is a
tabular representation of the Finite State Machine for the MESI protocol.
Depending on its Present State (PS), an individual processor responds to events.
BU – Bus
Yes: NS = S
No: NS = E
NS = M
NS = I
NS = I
NS = I
NS = I
NS = M
NS = I
NS = S
NS = I
Should not occur.
Write data back.
NS = I.
Write data back
NS = S
Write data back
NS = I
Should not occur.
Here is an
example from the text by Andrew Tanenbaum [R15]. This describes three individual processors,
each with a private cache, attached to a shared primary memory. When the
multiprocessor is turned on, all cache lines are marked invalid. We begin
with CPU reading block A from the shared memory.
CPU 1 is the first (and only) processor to request block A from the shared memory. It issues a BR (Bus Read) for the block and gets its copy. The cache line containing block A is marked Exclusive. Subsequent reads to this block access the cached entry and not the shared memory.Neither CPU 2 nor CPU 3 respond to the BR.
We now assume that CPU 2 requests the same block. The snoop cache on CPU 1 notes the request and CPU 1 broadcasts “Shared”, announcing that it has a copy of the block.
Both copies of
the block are marked as shared. This
indicates that the block is in two or more caches for reading and that the copy
in the shared primary memory is up to date.
CPU 3 does not respond to the BR. At this point, either CPU 1 or CPU 2 can issue a local write, as that step is valid for either the Shared or Exclusive state. Both are in the Shared state. Suppose that CPU 2 writes to the cache line it is holding in its cache. It issues a BU (Bus Upgrade) broadcast, marks the cache line as Modified, and writes the data to the line.
CPU 1 responds to the BU by marking the copy in its cache line as Invalid.
CPU 3 does not respond to the BU. Informally, CPU 2 can be said to “own the cache line”.
Now suppose that CPU 3 attempts to read block A from primary memory. For CPU 1, the cache line holding that block has been marked as Invalid. CPU 1 does not respond to the BR (Bus Read) request. CPU 2 has the cache line marked as Modified. It asserts the signal “Dirty” on the bus, writes the data in the cache line back to the shared memory, and marks the line “Shared”. Informally, CPU 2 asks CPU 3 to wait while it writes back the contents of its modified cache line to the shared primary memory. CPU 3 waits and then gets a correct copy. The cache line in each of CPU 2 and CPU 3 is marked as Shared.
We have considered cache memories in parallel computers, both multiprocessors and multicomputers. Each of these architectures comprises a number of individual processors with private caches and possibly private memories. We have noted that the assignment of a private cache to each of the individual processors in such architecture is necessary if we are to get acceptable performance. We have noted that the major issue to consider in these designs is that of cache coherency. Logically speaking, each of the individual processors must function as if it were accessing the one and only copy of the memory block, which resides in the shared primary memory.
We have proposed a modern solution, called MESI, which is a protocol in the class called “Cache Invalidate”. This shows reasonable efficiency in the maintenance of coherency.
The only other class of protocols falls under the name “Central Database”. In this, the shared primary memory maintains a list of “which processor has which block”. This centralized management of coherency has been shown to place an unacceptably high processing load on the shared primary memory. For this reason, it is no longer used.
Loosely Coupled Multiprocessors
Our previous discussions of multiprocessors focused on systems built with a modest number of processors (no more than about 50), which communicate via a shared bus. The class of computers we shall consider now is called “MPP”, for Massively Parallel Processor”. As we shall see, the development of MPP systems was resisted for a long time, due to the belief that such designs could not be cost effective. We shall see that MPP systems finally evolved due to a number of factors, at least one of which only became operative in the late 1990’s.
1. The availability of small and
inexpensive microprocessor units (Intel 80386, etc.)
that could be efficiently packaged into a small unit.
2. The discovery that many very important
problems were quite amenable to
3. The discovery that many of these
important problems had structures of such
regularity that sequential code could be automatically translated for parallel
execution with little loss in efficiency.
Early History: The C.mmp
While this chapter will focus on multicomputers, it is instructive to begin with a review of a paper on the C.mmp, which is a shared–memory multiprocessor developed at Carnegie Mellon University in the early 1970’s. The C.mmp is described in a paper by Wulf and Harbinson [R111], which has been noted as “one of the most thorough and balanced research–project retrospectives … ever seen”. Remarkably, this paper gives a thorough description of the project’s failures.
The C.mmp is described [R111] as “a multiprocessor composed of 16 PDP–11’s, 16 independent memory banks, a crosspoint [crossbar] switch which permits any processor to access any memory, and a typical complement of I/O equipment”. It includes an independent bus, called the “IP bus”, used to communicate control signals.
As of 1978, the system included the following 16 processors.
5 PDP–11/20’s, each rated at 0.20 MIPS (that is 200,000 instructions per second)
11 PDP–11/40’s, each rated at 0.40 MIPS
3 megabytes of shared memory (650 nsec core and 300 nsec semiconductor)
The system was observed to compute at 6 MIPS.
The Design Goals of the C.mmp
The goal of the project seems to have been the construction of a simple system using as many commercially available components as possible. The C.mmp was intended to be a research project not only in distributed processors, but also in distributed software. The native operating system designed for the C.mmp was called “Hydra”. It was intended as an OS kernel, intended to provide only minimal services and encourage experimentation in system software. As of 1978, the software developed on top of the Hydra kernel included file systems, directory systems, schedulers and a number of language processors.
Another part of the project involved the development of performance evaluation tools, including the Hardware Monitor for recording the signals on the PDP–11 data bus and software tools for analyzing the performance traces. One of the more important software tools was the Kernel Tracer, which was built into the Hydra kernel. It allowed selected operating system events, such as context swaps and blocking on semaphores, to be recorded while a set of applications was running. The Hydra kernel was originally designed based on some common assumptions. When experimentation showed these to be false, the Hydra kernel was redesigned.
The C.mmp: Lessons Learned
The researchers were able to implement the C.mmp as “a cost–effective, symmetric multiprocessor” and distribute the Hydra kernel over all of the processors. The use of two variants of the PDP–11 was considered as a mistake, as it complicated the process of making the necessary processor and operating system modifications. The authors had used newer variants of the PDP–11 in order to gain speed, but concluded that “It would have been better to have had a single processor model, regardless of speed”. The critical component was expected to be the crossbar switch. Experience showed the switch to be “very reliable, and fast enough”. Early expectations that the “raw speed” of the switch would be important were not supported by experience.
The authors concluded that “most applications are sped up by decomposing their algorithms to use the multiprocessor structure, not by executing on processors with short memory access times”. The simplicity of the Hydra kernel, with much system software built on top of it, yielded benefits, such as few software errors caused by inadequate synchronization.
The C.mmp: More Lessons Learned
Here I quote from Wulf & Harbison [R111], arranging their comments in an order not found in their original. The PDP–11 was a memory–mapped architecture with a single bus, called the UNIBUS, that connected the CPU to both memory and I/O devices.
(un)reliability was our largest day–to–day disappointment … The
aggregate mean–time–between–failure (MTBF) of C.mmp/Hydra fluctuated
between two to six hours.”
two–thirds of the failures were directly attributable to hardware
There is insufficient fault detection built into the hardware.”
3. “We found the PDP–11 UNIBUS to be especially noisy and error–prone.”
crosspoint [crossbar] switch is too trusting of other
components; it can be
hung by malfunctioning memories or processors.”
My favorite lesson learned is summarized in the following two paragraphs in the report.
“We made a serious error in not writing good diagnostics for the hardware. The software developers should have written such programs for the hardware.”
“In our experience, diagnostics written by the hardware group often did not test components under the type of load generated by Hydra, resulting in much finger–pointing between groups.”
The Challenge for Multiprocessors
As multicore processors evolve into manycore processors (with a few hundreds of cores), the challenge remains the same as it always has been. The goal is to get an increase in computing power (or performance or whatever) that is proportional to the cost of providing a large number of processors. The design problems associated with multicore processors remain the same as they have always been: how to coordinate the work of a large number of computing units so that each is doing useful work. These problems generally do not arise when the computer is processing a number of independent jobs that do not need to communicate. The main part of these design problems is management of access to shared memory. This part has two aspects:
cache coherency problem, discussed earlier.
2. The problem of process synchronization, requiring the use of lock
variables, and reliable processes to lock and unlock these variables.
Task Management in Multicomputers
The basic idea behind both multicomputers and multiprocessors is to run multiple tasks or multiple task threads at the same time. This goal leads to a number of requirements, especially since it is commonly assumed that any user program will be able to spawn a number of independently executing tasks or processes or threads.
According to Baron and Higbie [R112], any multicomputer or multiprocessor system must provide facilities for these five task–management capabilities.
1. Initiation A process must be able to spawn another process;
that is, generate another process and activate it.
2. Synchronization A process must be able to suspend itself or another process
until some sort of external synchronizing event occurs.
3. Exclusion A process must be able to monopolize a shared
such as data or code, to prevent “lost updates”.
4. Communication A process must be able to exchange messages with any
other active process that is executing on the system.
5. Termination A process must be able to terminate itself and release
all resources being used, without any memory leaks.
These facilities are more efficiently provided if there is sufficient hardware support.
One of the more common mechanisms for coordinating multiple processes in a single address space multiprocessor is called a lock. This feature is commonly used in databases accessed by multiple users, even those implemented on single processors.
These must use explicit synchronization messages in order to coordinate the processes. One method is called “barrier synchronization”, in which there are logical spots, called “barriers” in each of the programs. When a process reaches a barrier, it stops processing and waits until it has received a message allowing it to proceed. The common idea is that each processor must wait at the barrier until every other processor has reached it. At that point every processor signals that it has reached the barrier and received the signal from every other processor. Then they all continue.
Other processors, such as the MIPS, implement a different mechanism.
A Naive Lock Mechanism and Its Problems
Consider a shared memory variable, called LOCK, used to control access to a specific shared resource. This simple variable has two values. When the variable has value 0, the lock is free and a process may set the lock value to 1 and obtain the resource. When the variable has value 1, the lock is unavailable and any process must wait to have access to the resource. Here is the simplistic code (written in Boz–7 assembly language) that is executed by any process to access the variable.
GETIT LDR %R1,LOCK LOAD THE LOCK VALUE INTO R1.
CMP %R1,%R0 IS THE VALUE 0? REMEMBER THAT
REGISTER R0 IS IDENTICALLY 0.
BGT GETIT NO, IT IS 1. TRY AGAIN.
LDI %R1,1 SET THE REGISTER TO VALUE 1
STR %R1,LOCK STORE VALUE OF 1, LOCKING IT
The Obvious Problem
suppose a context switch after process 1 gets the lock and before it is able to
the revised lock value back to main memory. It is a lost update.
Event Process 1 Process 2
LOCK = 0
%R1 has value 0
Compares OK. Continue
%R1 has value 0
LOCK = 1
LOCK = 1
Context switch LDI %R1,1
LOCK = 1
Each process has access to the resource and continues processing.
Hardware Support for Multitasking
Any processor or group of processors that supports multitasking will do so more efficiently if the hardware provides an appropriate primitive operation. A test–and–set operation with a binary semaphore (also called a “lock variable”) can be used for both mutual exclusion and process synchronization. This is best implemented as an atomic operation, which in this context is one that cannot be interrupted until it completes execution. It either executes completely or fails.
Atomic Synchronization Primitives
What is needed
is an atomic operation, defined in the original sense of the word to be an
operation that must complete once it begins.
Specifically it cannot be interrupted or suspended by a context switch. There may be some problems associated with
virtual memory, particularly arising from page faults. These are easily fixed. We consider an atomic
instruction that is to be called CAS, standing for either Compare and Set, or
Compare and Swap
Either of these takes three arguments: a lock variable, an expected value (allowing the resource to be accessed) and an updated value (blocking access by other processes to this resource). Here is a sample of proper use.
LDR %R1,LOCK ATTEMPT
CAUSING A PAGE FAULT
CAS LOCK,0,1 SET TO 1 TO LOCK IT.
Two Variants of CAS
Each variant is
atomic; it is guaranteed to execute with no interrupts or
context switches. It is a single CPU instruction, directly executed by
Compare_and_set (X, expected_value, updated_value)
If (X == expected_value)
X ¬ updated_value
Else Return False
Compare_and_swap (X, expected_value, updated_value)
If (X == expected_value)
Swap X « updated_value
Else Return False
Such instructions date back at least to the IBM System/370 in 1970.
What About MESI?
Consider two processes executing on different processors, each with its own cache memory (probably both L1 and L2). Let these processes be called P1 and P2. Suppose that each P1 and P2 have the variable LOCK in cache memory and that each wants to set it.
Suppose P1 sets the lock first. This write to the cache block causes a cache invalidate to be broadcast to all other processes. The shared memory value of LOCK is updated and then copied to the cache associated with process P2. However, there is no signal to P2 that the value in its local registers has become invalid. P2 will just write a new value to its cache. In other words, the MESI protocol will maintain the integrity of values in shared memory. However, it cannot be used as a lock mechanism. Any synchronization primitives that we design will assume that the MESI protocol is functioning properly and add important functionality to it.
CAS: Implementation Problems
The single atomic CAS presents some problems in processor design, as it requires both a memory read and memory write in a single uninterruptable instruction. The option chosen by the designers of the MIPS is to create a pair of instructions in which the second instruction returns a value showing whether or not the two executed as if they were atomic. In the MIPS design, this pair of instructions is as follows:
LL Load Linked LL Register, Memory Address
SC Store Conditional SC Register, Memory Address
either fails or swaps the value in register $S4 with the value in
the memory location with address in register $S1.
TRY: ADD $T0,$0,$S4 MOVE VALUE IN $S4 TO REGISTER $T0
LL $T1,0($S1) LOAD $T1 FROM MEMORY ADDRESS
SC $T0,0($S1) STORE CONDITIONAL
BEQ $T0,$0,TRY BRANCH STORE FAILS, GO BACK
ADD $S4,$0,$T1 PUT VALUE INTO $S4
More on the MESI Issue
Basically the MESI protocol presents an efficient mechanism to handle the effects of processor writes to shared memory. MESI assumes a shared memory in which each addressable item has a unique memory address and hence a unique memory block number.
But note that the problem associated with MESI would largely go away if we could make one additional stipulation: once a block in shared memory is written by a processor, only that processor will access that block for some time. We shall see that a number of problems have this desirable property. We may assign multiple processors to the problem and enforce the following.
memory block can be read by any number of processors,
provided that it is only read.
a memory block is written to by one processor, it is the “sole property”
of that processor. No other processor may read or write that memory block.
Remember that a processor accesses a memory block through its copy in cache.
The High–End Graphics Coprocessor and CUDA
We now proceed to consider a type of Massively Parallel Processor and a class of problems well suited to be executed on processors of this class. The main reason for this match is the modification to MESI just suggested. In this class of problems, the data being processed can be split into small sets, with each set being assigned to one, and only one, processor. The original problems of this class were taken from the world of computer graphics and focused on rendering a digital scene on a computer screen. For this reason, the original work on the class of machines was done under the name GPU (Graphical Processing Unit). In the early development, these were nothing more than high performance graphic cards.
Since about 2003, the design approach taken by NVIDIA Corporation for these high end graphical processing units has been called “many–core” to distinguish it from the more traditional multicore designs found in many commercial personal computers. While a multicore CPU might have 8 to 16 cores, a many–core GPU will have hundreds of cores. In 2008, the NVIDIA GTX 280 GPU had 280 cores [R68]. In July 2011, the NVIDIA Tesla C2070 had 448 cores and 6 GB memory [R113].
The historic pressures on the designers of GPUs are well described by Kirk & Hwu [R68].
“The design philosophy of the GPUs is shaped by the fast growing video game industry, which exerts tremendous economic pressure for the ability to perform a massive number of floating–point calculations per video frame in advanced games. … The prevailing solution to date is to optimize for the execution throughput of massive numbers of threads. … As a result, much more chip area is dedicated to the floating–point calculations.”
“It should be clear now that GPUs are designed as numeric computing engines, and they will not perform well on some tasks on which CPUs are designed to perform well; therefore, one should expect that most applications will use both CPUs and GPUs, executing the sequential parts on the CPU and the numerically intensive parts on the GPUs. This is why the CUDA™ (Compute Unified Device Architecture) programming model, introduced by NVIDIA in 2007, is designed to support joint CPU/GPU execution of an application.”
Game Engines as Supercomputers
It may surprise students to learn that many of these high–end graphics processors are actually export controlled as munitions. In this case, the control is due to the possibility of using these processors as high–performance computers.
In the figure below, we present a high–end graphics coprocessor that can be viewed as a vector processor. It is capable of a sustained rate of 4,300 Megaflops.
The NVIDIA Tesla C870
Data here are from the NVIDIA web site [R113]. I quote from their advertising copy from the year 2008. This is impressive enough; I did not bother with the figures for this year.
The C870 processor is a “massively multi–threaded processor architecture that is ideal for high performance computing (HPC) applications”.
This has 128 processor cores, each operating at 1.35 GHz. It supports the IEEE–754 single–precision standard, and operates at a sustained rate of 430 gigaflops (512 GFlops peak).
The typical power usage is 120 watts. Note the dedicated fan for cooling.
The processor has 1.5 gigabytes of DDR SDRAM, operating at 800 MHz. The data bus to memory is 384 bits (48 bytes) wide, so that the maximum sustained data rate is 76.8 Gigabytes per second.
Compare this to the CRAY–1 supercomputer of 1976, with a sustained computing rate of 136 Megaflops and a peak rate of 250 Megaflops. This is about 3.2% of the performance of the current graphics coprocessor at about 500 times the cost. The Cray Y–MP was a supercomputer sold by Cray Research beginning in 1988. Its peak performance was 2.66 Gigaflops (8 processors at 333 Megaflops each). Its memory comprised 128, 256, or 512 MB of static RAM. The earliest supercomputer that could outperform the current graphics processor seems to have been the Cray T3E–1200E™, a MPP (Massively Parallel Processor) introduced in 1995 (Ref. 9). In 1998, a joint scientific team from Oak Ridge National Lab, the University of Bristol (UK) and others ran a simulation related to controlled fusion at a sustained rate of 1.02 Teraflops (1020 Gigaflops).
Sample Problem: 2–D Matrix Multiplication
We now turn to a simple mathematics problem that illustrates the structure of a problem that is well suited to execution on the GRU part of the CUDA. We shall begin with simple sequential code and modify it in stages until it is structured for parallel execution.
Here we consider the multiplication of two square matrices, each of size N–by–N, having row and column indices in the range [0, N – 1]. The following is code such as one might see in a typical serial implementation to multiply square matrix A by square matrix B, producing square matrix C.
For I = 0 to (N – 1) Do
For J = 0 to (N – 1) Do
Sum = 0 ;
For K = 0 to (N – 1) Do
SUM = SUM + A[I][K]·B[K][J] ;
C[I][J] = SUM ;
Note the use of SUM to avoid multiple references to C[I][J] within the inner loop.
Memory Organization of 2–D Arrays
In order to write efficient array code of any sort one has to understand the organization of multiply dimensioned arrays in computer memory. The most efficient way to handle two dimensional arrays will be to treat them as one dimensional arrays. We are moving towards a parallel implementation in which the computation of any number of matrix functions of two square N–by–N matrices can be done very efficiently in parallel by an array of N2 processors; each computing the results for one element in the result array.
Doing this efficiently means that we must reduce all arrays to one dimension, in the way that the run–time support systems for high–level languages do. Two–dimensional arrays make for good examples of this in that they represent the simplest data structures in which this effect is seen. Multiply dimensioned arrays are stored in one of two fashions: row major order and column major order. Consider a 2–by–3 array X.
In row major
order, the rows are stored contiguously.
X, X, X, X, X, X
Most high–level languages use row major ordering.
In column major
order, the columns are stored contiguously
X, X, X, X, X, X
Old FORTRAN is column major.
We shall assume that the language for implementing this problem supports row major ordering. In that case, we have two ways to reference elements in the array.
Two indices X X X X X X
One index X X X X X X
The index in the “one index” version is the true offset of the element in the array.
Sample Problem Code Rewritten
The following is
code shows the one–dimensional access to the 2–dimensional
arrays A, B, C. Each has row and column indices in the range [0, N – 1].
For I = 0 to (N – 1) Do
For J = 0 to (N – 1) Do
Sum = 0 ;
For K = 0 to (N – 1) Do
SUM = SUM + A[I·N + K]·B[K·N + J] ;
C[I·N + J] = SUM ;
Note that the [I][J] element of an N–by–N array is at offset [I·N + J].
Some Issues of Efficiency
The first issue is rather obvious and has been assumed. We might have written the code as:
For K = 0 to (N – 1) Do
C[I·N + J] = C[I·N + J] + A[I·N + K]·B[K·N + J] ;
But note that this apparently simpler construct leads to 2·N references to array element C[I·N + J] for each value of I and J. Array references are expensive, because the compiler must generate code that will allow access to any element in the array.
Our code has one reference to C[I·N + J] for each value of I and J.
Sum = 0 ;
For K = 0 to (N – 1) Do
SUM = SUM + A[I·N + K]·B[K·N + J] ;
C[I·N + J] = SUM ;
Array Access: Another Efficiency Issue
discussion, we have evolved the key code statement as follows. We began with
SUM = SUM + A[J][L]·B[L][K] and changed to
SUM = SUM + A[I·N + K]·B[K·N + J]
The purpose of this evolution was to make explicit the mechanism by which the address of an element in a two–dimensional array is determined. This one–dimensional access code exposes a major inefficiency that is due to the necessity of multiplication to determine the addresses of each of the two array elements in this statement. Compared to addition, multiplication is a very time–consuming operation.
As written the key statement SUM = SUM + A[I·N + K]·B[K·N + J] contains three multiplications, only one of which is essential to the code. We now exchange the multiplication statements in the address computations for addition statements, which execute much more quickly.
Addition to Generate Addresses in a Loop
Change the inner loop to define and use the indices L and M as follows.
For K = 0 to (N – 1) Do
L = I·N + K ;
M = K·N + J ;
SUM = SUM + A[L]·B[M] ;
For K = 0 L = I·N M = J
For K = 1 L = I·N + 1 M = J + N
For K = 2 L = I·N + 2 M = J + 2·N
For K = 3 L = I·N + 3 M = J + 3·N
The Next Evolution of the Code
This example eliminates all but one of the unnecessary multiplications.
For I = 0 to (N – 1) Do
For J = 0 to (N – 1) Do
Sum = 0 ;
L = I·N ;
M = J ;
For K = 0 to (N – 1) Do
SUM = SUM + A[L]·B[M] ;
L = L + 1 ;
M = M + N ;
C[I·N + J] = SUM ;
Suppose a Square Array of Processors
Suppose an array of N2 processors, one for each element in the product array C. Each of these processors will be assigned a unique row and column pair. Assume that process (I, J) is running on processor (I, J) and that there is a global mechanism for communicating these indices to each process.
Sum = 0 ;
L = I·N ;
M = J ;
INJ = L + M ; // Note we have I·N + J computed here.
For K = 0 to (N – 1) Do
SUM = SUM + A[L]·B[M] ;
L = L + 1 ;
M = M + N ;
C[INJ] = SUM ;
For large values of N, there is a significant speedup to be realized.
Another Look at the NVIDIA GeForce 8800 GTX
Here, your author presents a few random thoughts about this device. As noted in the textbook, a “fully loaded” device has 16 multiprocessors, each of which contains 8 streaming processors operating at 1.35 GHz. Each streaming processor has a local memory with capacity of 16 KB, along with 8,192 (213) 32–bit registers.
The work load for computing is broken into threads, with a thread block being defined as a number of intercommunicating threads that must execute on the same streaming processor. A block can have up to 512 (29) threads.
Conjecture: This division allows for sixteen 32–bit registers per thread.
Fact: The maximum
performance of this device is 345.6 GFLOPS
(billion floating point operations per second)
4/17/2010, the list price was $1320 per unit, which was
discounted to $200 per unit on Amazon.com.
1995, the fastest vector computer was a Cray T932. Its maximum
performance was just under 60 GFLOPS. It cost $39 million.
Some Final Words on CUDA
As has been noted above in this text, the graphical processing units are designed to be part of a computational pair, along with a traditional CPU. Examination of the picture of the NVIDIA Tesla C870 shows it to be a coprocessor, to be attached to the main computer bus. It turns out that the device uses a PCI Express connection.
The best way to use a CUDA is to buy an assembled PC with the GPU attached. Fortunately these are quite reasonable. In July 2011, one could buy a Dell Precision T7500 with a Quad Core Xeon Processor (2.13 GHz), 4 GB of DDR3 RAM, a 250 GB SATA disk, and an attached NVIDIA Tesla C2050, all for about $5,000.
Clusters, Grids, and the Like
There are many applications amenable to an even looser grouping of multicomputers. These often use collections of commercially available computers, rather than just connecting a number of processors together in a special network. In the past there have been problems of administering large clusters of computers; the cost of administration scaling as a linear function of the number of processors. Recent developments in automated tools for remote management are likely to help here.
It appears that blade servers are one of the more recent adaptations of the cluster concept. The major advance represented by blade servers is the ease of mounting and interconnecting the individual computers, called “blades”, in the cluster. In this aspect, the blade server hearkens back to the 1970’s and the innovation in instrumentation called “CAMAC”, which was a rack with a standard bus structure for interconnecting instruments. This replaced the jungle of interconnecting wires, so complex that it often took a technician dedicated to keeping the communications intact.
Clusters can be placed in physical proximity, as in the case of blade servers, or at some distance and communicate via established networks, such as the Internet. When a network is used for communication, it is often designed using TCP/IP on top of Ethernet simply due to the wealth of experience with this combination.
A Few Examples of Clusters, Grids, and the Like
In order to show the variety of large computing systems, your author has selected a random collection. Each will be described in a bit of detail.
The E25K NUMA Multiprocessor by Sun Microsystems
Our first example is a shared–memory NUMA multiprocessor built from seventy–two processors. Each processor is an UltraSPARC IV, which itself is a pair of UltraSPARC III Cu processors. The “Cu” in the name refers to the use of copper, rather than aluminum, in the signal traces on the chip. A trace can be considered as a “wire” deposited on the surface of a chip; it carries a signal from one component to another. Though more difficult to fabricate than aluminum traces, copper traces yield a measurable improvement in signal transmission speed, and are becoming favored.
Recall that NUMA stands for “Non–Uniform Memory Access” and describes those multiprocessors in which the time to access memory may depend on the module in which the addressed element is located; access to local memory is much faster than access to memory on a remote node. The basic board in the multiprocessor comprises the following:
1. A CPU and memory board with four UltraSPARC IV processors, each with an 8–GB
memory. As each processor is dual core, the board has 8 processors and 32 GB memory.
2. A snooping bus between the four processors, providing for cache coherency.
3. An I/O board with four PCI slots.
4. An expander board to connect all of these
components and provide communication
to the other boards in the multiprocessor.
A full E25K configuration has 18 boards; thus 144 CPU’s and 576 GB of memory.
The E25K Physical Configuration
Here is a figure from Tanenbaum [R15] depicting the E25K configuration.
The E25K has a centerplane with three 18–by–18 crossbar switches to connect the boards. There is a crossbar for the address lines, one for the responses, and one for data transfer.
The number 18 was chosen because a system with 18 boards was the largest that would fit through a standard doorway without being disassembled. Design constraints come from everywhere.
Cache Coherence in the E25K
How does one connect 144 processors (72 dual–core processors) to a distributed memory and still maintain cache coherence? There are two obvious solutions: one is too slow and the other is too expensive. Sun Microsystems opted for a multilevel approach, with cache snooping on each board and a directory structure at a higher level. The next figure shows the design.
The memory address space is broken into blocks of 64 bytes each. Each block is assigned a “home board”, but may be requested by a processor on another board. Efficient algorithm design will call for most memory references to be served from the processors home board.
The IBM BlueGene
The description of this MPP system is based mostly on Tanenbaum [R15]. The system was designed in 1999 as “a massively parallel supercomputer for solving computationally–intensive problems in, among other fields, the life sciences”. It has long been known that the biological activity of any number of important proteins depends on the three dimensional structure of the protein. An ability to model this three dimensional configuration would allow the development of a number of powerful new drugs.
The BlueGene/L was the first model built; it was shipped to Lawrence Livermore Lab in June 2003. A quarter–scale model, with 16,384 processors, became operational in November 2004 and achieved a computational speed of 71 teraflops. The full model, with 65,536 processors, was scheduled for delivery in the summer of 2005. In October 2005, the full system achieved a peak speed on 280.6 teraflops on a standard benchmark called “Linpack”. On real problems, it achieved a sustained speed of over 100 teraflops.
The Custom Processor Chip
IBM intended the BlueGene line for general commercial and research applications. Because of this, the company elected to produce the processor chips from available commercial cores. Each processor chip has two PowerPC 440 cores operating at 700 MHz. The configuration of the chip, with its multiple caches is shown in the figure below. Note that only one of the two cores is dedicated to computation, the other is dedicated to handling communications.
The connection topology used in the BlueGene is a three–dimensional torus. Each processor chip is connected to six other processor chips. The connections are called “North”, “East”, “South”, “West”, “Up”, and “Down”. Think of a three–dimensional cube with 6 faces.
In a recent upgrade (June 2007), IBM upgraded this chip to hold four PowerPC 450 cores operating at 850 MHz. In November 2007, the new computer, called the BlueGene/P achieved a sustained performance of 167 teraflops. This design obviously has some “growing room”.
The BlueGene/L Hierarchy
The 65,536 BlueGene/L is designed in a hierarchical fashion. There are two chips per card,
16 cards per board, 32 boards per cabinet, and 64 cabinets in the system.
We shall see that the MPP systems manufactured by Cray, Inc. follow the same design philosophy. It seems that this organization will become common for large MPP systems.
The AMD Opteron
Before continuing with our discussion of MPP systems, let us stop and examine the chip that has recently become the favorite for use as the processor, of which there are thousands. This chip is the AMD Opteron, which is a 64–bit processor that can operate in three modes.
In legacy mode, the Opteron runs standard Pentium binary programs unmodified.
In compatibility mode, the operating
system runs in full 64–bit mode, but applications
must run in 32–bit mode.
In 64–bit mode, all programs can issue
64–bit addresses; both 32–bit and 64–bit
programs can run simultaneously in this mode.
The Opteron has an integrated memory controller, which runs at the speed of the processor clock. This improves memory performance. It can manage 32 GB of memory. The Opteron comes in single–core, dual–core, or quad–core processors. The standard clock rates for these processors range from 1.7 to 2.3 GHz.
The Red Storm by Cray, Inc.
The Red Storm is a MPP system in operation at Sandia National Laboratory. This lab, operated by Lockheed Martin, doe classified work for the U.S. Department of Energy. Much of this work supports the design of nuclear weapons. The simulation of nuclear weapon detonations, which is very computationally intensive, has replaced actual testing as a way to verify designs.
In 2002, Sandia selected Cray, Inc. to build a replacement for its current MPP, called ASCI Red. This system had 1.2 terabytes of RAM and operated at a peak rate of 3 teraflops. The Red Storm was delivered in August 2004 and upgraded in 2006 [Ref. 9]. The Red Storm now uses dual–core AMD Opterons, operating at 2.4 GHz. Each Opteron has 4 GB of RAM and a dedicated custom network processor called the Seastar, manufactured by IBM. Almost all data traffic between processors moves through the Seastar network, so great care was taken in its design. This is the only chip that is custom–made for the project.
The next step in the architecture hierarchy is the board, which holds four complete Opteron systems (four CPU’s, 16 GB RAM, four Seastar units), a 100 megabit per second Ethernet chip, and a RAS (Reliability, Availability, and Service) processor to facilitate fault location. The next step in the hierarchy is the card cage, which comprises eight boards inserted into a backplane. Three card cages and their supporting power units are placed into a cabinet. The full Red Storm system comprises 108 cabinets, for a total of 10,836 Opterons and 10 terabytes of SDRAM. Its theoretical peak performance is 124 teraflops, with a sustained rate of 101 teraflops (1012 floating–point operations per second).
Security Implications of the Architecture
In the world on
national laboratories there are special requirements on the architecture of
computers that might be used to process classified data. The Red Storm at Sandia routinely processes
data from which the detailed design of current
The Cray XT5h
The Cray XT3 is a commercial design based on the Red Storm installed at Sandia National Labs. The Cray XT3 led to the development of the Cray XT4 and Cray XT5, the latest in the line. The XT5 follows the Red Storm approach in using a large number of AMD Opteron processors. The processor interconnect uses the same three–dimensional torus as found in the IBM BlueGene and presumably in the Cray Red Storm. The network processor has been upgraded to a system called ‘Seastar 2+”; each switch having six 9.6 GB/second router–to–router ports.
The Cray XT5h is a modified XT5, adding vector coprocessors and FPGA (Field Programmable Gate Array) accelerators. FPGA processors might be used to handle specific calculations, such as Fast Fourier Transforms, which often run faster on these units than on general purpose processors. We may expect to see a number of heterogeneous processors.
In April 2008,
Cray, Inc. was chosen to deliver an XT4 to the
As of July 2011, the computer showed a peak seed of 2.33 petaflops (2.33·1015 floating point operations per second), and a sustained performance in excess of 1.0 petaflop. The upgraded system has an 84–cabinet Cray XT4 and a200–cabinet upgraded Cray XT5. Each XT4 has 8 gigabytes per node, and each XT5 has 16 gigabytes per node, for a total of 362 terabytes of high speed memory. The total processor count is 37,376 six–core AMD Opterons (in the XT5) and 7,832 quad–core AMD Opterons (in the XT4). The XT4 part of the system is air cooled, but the XT5 part must be liquid cooled, using the commercial refrigerant R–134. The Jaguar requires 12.7 megawatts of electric power and a continuous supply of chilled air and refrigerant.
In every sense of the word, this is a big computer. | <urn:uuid:d2867146-836e-4bf6-ae47-fd63e8c0fd68> | CC-MAIN-2017-04 | http://edwardbosworth.com/My5155Text_V07_HTM/MyText5155_Ch19_V07.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00024-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933685 | 17,617 | 3.59375 | 4 |
HDLC: High Level Data Link Control
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss an HDLC Overview.
The High Level Data Link Control (HDLC) protocol, an ISO data link layer protocol based on the IBM SDLC, is to ensure that data passed up to the next layer has been received exactly as transmitted (i.e error free, without loss and in the correct order). Another important function of HDLC is flow control, which ensures that data is transmitted only as fast as the receiver can receive it. There are two distinct HDLC implementations: HDLC NRM (also known as SDLC) and HDLC Link Access Procedure Balanced (LAPB), the later is a more popular implementation. HDLC is usually used by X.25.
LAPB is a bit-oriented synchronous protocol that provides complete data transparency in a full-duplex point-to-point operation. It supports a peer-to-peer link in that neither end of the link plays the role of the permanent master station. HDLC NRM, on the other hand, has a permanent primary station with one or more secondary stations.
HDLC LAPB is a very efficient protocol, which requires a minimum of overhead to ensure flow control, error detection and recovery. If data is flowing in both directions (full duplex), the data frames themselves carry all the information required to ensure data integrity.
The concept of a frame window is used to send multiple frames before receiving confirmation that the first frame has been correctly been received. This means that data can continue to flow in situations where there may be long “turn-around” time lags without stopping to wait for an acknowledgement. This kind of situation occurs, for instance in satellite communication.
There are three categories of frames:/p>
- Information framestransport data across the link and may encapsulate the higher layers of the OSI architecture.
- Supervisory framesperform the flow control and error recovery functions.
- Unnumbered framesprovide the link initialization and termination.
Protocol Structure – HDLC: High Level Data Link Control
Flag – The value of the flag is always (0x7E).
Address field – Defines the address of the secondary station which is sending the frame or the destination of the frame sent by the primary station. It contains Service Access Point (6bits), a Command/Response bit to indicate whether the frame relates to information frames (I-frames) being sent from the node or received by the node, and an address extension bit which is usually set to true to indicate that the address is of length one byte. When set to false it indicates an additional byte follows.
Extended address – HDLC provides another type of extension to the basic format. The address field may be extended to more than one byte by agreement between the involved parties.
Control field – Serves to identify the type of the frame. In addition, it includes sequence numbers, control features and error tracking according to the frame type.
FCS – The Frame Check Sequence (FCS) enables a high level of physical error control by allowing the integrity of the transmitted frame data to be checked.
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. I am sure you will quickly find out that hands-on real world experience is the best way to cement the CCNA concepts in your head to help you pass your CCNA exam! | <urn:uuid:d43445fd-2934-4e8e-8183-a6698cbb7c88> | CC-MAIN-2017-04 | https://www.certificationkits.com/hdlc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00510-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900768 | 740 | 3.03125 | 3 |
On January 27, 2014, when the United States Congress adopted S. Res. 337, a nonbinding resolution expressing support for the designation of a “National Data Privacy Day” to be observed on January 28, there wasn’t a lot of time to get the word out, even though the event had been around awhile.
But that’s the date in which I first became aware of Data Privacy Day, and in the years since the bill’s passage, I’ve been thinking of ways to champion its existence.
January 28 was chosen because on that date back in 1981, the Council of Europe held the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Luckily, they shortened the name a bit for the event. There they signed Convention 108, the first legally binding international treaty dealing with privacy and data protection.
Data Privacy Day began in the U.S. and Canada on January 2008 as an extension of the Data Protection Day celebration in Europe. The international event promotes awareness of privacy and data protection best practices. Recognized in the U.S., Canada and 27 European countries, Data Privacy Day’s educational initiative is to focus on raising awareness among users and businesses of the importance of protecting the privacy of their data online. This has become even more important as social networking has increased in popularity over the years, as have security breaches.
Data Privacy Day’s goal is to educate and empower businesses, consumers and families with the knowledge and best practices to better protect themselves from hackers, viruses and malware that can put their information as risk. Data Privacy Day brings together not just technology folks, but government officials, educators, those involved with nonprofits, as well as leaders from all industry sectors.
The National Cyber Security Alliance coordinates the promotion of Data Privacy Day activities. The agency’s goals are to:
- Encourage consumers to consider the implications of their online actions for themselves and others on social media channels such as Facebook, Twitter, Instagram, etc.
- Educate consumers to better understand how their personal information may be collected and the benefits and risks of sharing personal data.
- Empower consumers to express their expectations for the use, protection and management of their personal data.
- Educate consumers by sharing simple and actionable tips to more actively manage their own online footprints.
- Encourage businesses to be better protectors of consumer information by being more transparent about how they collect, use and even share personal information.
- Encourage businesses to better communicate any available privacy and security controls when dealing with consumer information.
So what can you do about data privacy? If the security of your data and privacy matters to you, Data Privacy Day is a great time to start actively protecting your info. Target, Sony and Lowe’s all learned the hard way. It’s in the best interest of every business to practice good data stewardship or they’ll be the next lead story on CNN or the next big headline in The New York Times. Whether it’s your bank, doctor, pharmacy or even workplace, encourage them to protect your data sufficiently. Don’t feel powerless to do so.
Here are some of the things you can do:
- Socialize it. To protect your personal data, you don’t have to be afraid of social media. Tweet privacy tips. Post messages on your Facebook and LinkedIn accounts. You can use the official Data Privacy Day hashtag #DPD15 and follow @DataPrivacyDay to stay up to date on all of the latest Data Privacy Day tips to share with your connections and followers.
- Make it official. You can suggest your organization show its support of Data Privacy Day by becoming an official Data Privacy Day Champion. Last year, more than 220 organizations enrolled as Data Privacy Day Champions. It’s quick and easy to sign up.
- Make it personal. Data privacy starts at home, so make sure your loved ones know the risks to their personal information, especially children and teenagers who may be more likely to overshare on social media channels. Secure your information if you have a shared accounts on your PC, tablets or smart TVs that are connected to multimedia outlets like Netflix, Hulu, Amazon Prime and iTunes. | <urn:uuid:2a23949e-02ab-4eff-aa30-fe372a4899d2> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2015/01/28/how-to-do-your-part-on-data-privacy-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932015 | 869 | 2.9375 | 3 |
NSF seeds cloud research test beds
The National Science Foundation recently announced two $10 million projects to create cloud computing test beds – to be called Chameleon and CloudLab – that will help develop novel cloud architectures and new applications.
The awards complement private sector efforts to build cloud architectures that can support real-time and safety-critical applications like those used in medical devices, power grids, and transportation systems, NSF said in its announcement. They are part of the NSFCloud program that supports research into novel cloud architectures to address emerging challenges including real-time and high-confidence systems.
Chameleon, to be co-located at the University of Chicago and the University of Texas at Austin, will consist of 650 cloud nodes with 5 petabytes of storage. Researchers will be able to configure slices of Chameleon as custom clouds to test the efficiency and usability of different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction.
The test bed will allow "bare-metal access," an alternative to virtualization technologies currently used to share cloud hardware, allowing for experimentation with new virtualization technologies that could improve reliability, security and performance.
Chameleon is unique for its support for heterogeneous computer architectures, including low-power processors, general processing units and field-programmable gate arrays, as well as a variety of network interconnects and storage devices, NSF said.
Researchers can therefore mix and match hardware, software and networking components and test their performance. This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems or the Internet of Things, which integrates computation into physical infrastructure.
"Like its namesake, the Chameleon test bed will be able to adapt itself to a wide range of experimental needs, from bare metal reconfiguration to support for ready-made clouds," said Kate Keahey, a scientist at the Computation Institute at the University of Chicago and principal investigator for Chameleon.
"Furthermore, users will be able to run those experiments on a large scale, critical for big data and big compute research.”
The CloudLab test bed is a large-scale distributed infrastructure based at the University of Utah, Clemson University and the University of Wisconsin, on top of which researchers will be able to construct many different types of clouds. Each site will have unique hardware, architecture and storage features, and will connect to the others via 100 gigabit/sec connections on Internet2's advanced platform. CloudLab will also support OpenFlow (an open standard that enables researchers to run experimental protocols in campus networks) and other software-defined networking technologies.
CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage at its three data centers. Each center will comprise different hardware, facilitating additional experimentation. In that capacity, the team is partnering with HP, Cisco and Dell to provide diverse platforms for research. Like Chameleon, CloudLab will feature bare-metal access.
Over its lifetime, CloudLab is expected to run dozens of virtual experiments simultaneously and to support thousands of researchers. "CloudLab will be a facility where researchers can build their own clouds and experiment with new ideas with complete control, visibility and scientific fidelity,” said Robert Ricci, a research assistant professor of computer science at the University of Utah and principal investigator of CloudLab.
Ultimately, the goal of the NSFCloud program and the two new test beds is to advance the field of cloud computing broadly. The awards will help researchers develop new concepts, methods and technologies to enable infrastructure design and execution.
Connect with the GCN staff on Twitter @GCNtech. | <urn:uuid:b02b47ec-1259-4f88-a011-0a786593502a> | CC-MAIN-2017-04 | https://gcn.com/articles/2014/08/25/nsf-cloud-test-beds.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00052-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928233 | 752 | 2.875 | 3 |
The number of 'things' connected to the internet is already bypassing the number of people on the planet. This Internet of 'things' is changing the way we live and work: from the way food is grown and produced on farms through automated temperature and feeding controls, to the way we check prices and buy through connected terminals, to the vehicles we drive, the security cameras at work, and automated gates at the entrance. Connected 'things' are everywhere. All these 'things' are helping us to be more productive and efficient while also offering more and more convenience.
The demand for these connected 'things' is creating an exploding demand for the M2M (Machine to Machine) communication system since the 'things' need to communicate among themselves or with a central controller, most often wirelessly. This M2M wireless communication demand is changing wireless operators' business models. Human users normally use high data rates and yield high ARPU, while most M2M communications generate much less data but also yields much lower ARPU. Wireless carriers now need to support many more of these low ARPU 'things' than they had designed their network to handle.
Although data demands from humans and M2Ms differ, further complicating the matter is the fact that each creates about the same amount of signaling load with current network designs. Since signaling traffic is generally not monetized, this load creates immense pressure on mobile operators. Hence, mobile operators need to rethink signaling traffic and implement lightweight policy-based controls for the M2M communications.
The 3GPP (3rd Generation Partnership Project) is creating standards for M2M that state that devices should not need MSISDNs (Mobile Subscriber ISDN Numbers), and thus the traditional way of addressing mobile devices through their phone numbers is no longer applicable. Although the IMSI (International Mobile Subscriber Identity) can be used by an operator to locate a desired device, it is undesirable to use this identifier by anyone outside the operator's domain because eavesdroppers can potentially identify and misuse customer information. Therefore, to identify devices outside of the operator's domain it is required that all devices be assigned static host names that can be used to always reach the device. This creates requirements for DDNS to store the mapping between these names and IP addresses while also mandating the use of DNS queries to locate the communication end points. Considering the sheer amount of M2M devices and projected traffic, it is important that the DNS is able to handle billions of records and thousands of DDNS updates per second while maintaining low latency and high performance to allow for fast and reliable M2M communications.
Other features important for M2M are policies to control the enormous volume of M2M communications and how to handle operational issues, such as misbehaving or stolen devices. Implementation of these policies at the DNS server can provide a lightweight and efficient way for operators to control traffic and access patterns without any in-line overhead and extra equipment. Further, DNS policies can easily be made flexible and extensible to allow for various controls (such as a time based control, an access control, etc.)
There are many other issues with M2M communications (such as heavy traffic demands on P-GWs (Packet Gateways)) that can benefit from a proper DNS-based APN (Access Point Name) architecture. We will address those and dive deeper into the DNS guidelines for M2M communications in future posts, so stay tuned.
By Manjari Asawa, Mobile Product Manager at Nominum
|Data Center||Policy & Regulation|
|DNS Security||Regional Registries|
|Domain Names||Registry Services|
|Intellectual Property||Top-Level Domains|
|Internet of Things||Web|
|Internet Protocol||White Space|
Afilias - Mobile & Web Services | <urn:uuid:406b01fc-294c-49b8-8ae6-a004d284c781> | CC-MAIN-2017-04 | http://www.circleid.com/posts/20130123_benefits_of_dns_based_architecture_for_m2m_communications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00262-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900517 | 784 | 2.6875 | 3 |
This week's announcement of the Asus Transformer Prime marks the start of a new era for Android tablets. Aside from the Prime's sleek design and PC-like transformation potential, the product will be the first tablet to run on a quad-core processor -- specifically, the new Tegra 3 chip made by Nvidia.
Let's face it, though: For most folks, things like processor cores ultimately boil down to a bunch of gobbledygook geek speak. So what does having a quad-core tablet really mean from a user perspective, and will it make a significant difference in your day-to-day life?
Transformer Prime and the Quad-Core Advantage
The short answer is that having a quad-core processor lets your tablet do more stuff simultaneously and more efficiently. Nvidia cites its new Tegra 3 processor as delivering up to five times the performance of its dual-core predecessor, the Tegra 2 -- the chip in many current high-end Android tablets, including the Motorola Xoom, Samsung Galaxy Tab, and original Asus Transformer. The Tegra 3's graphics processing unit is also said to be three times as fast as the previous model's.
More geek speak -- I know. So let's put this into real-world terms. Some things you'll notice when using a tablet with a quad-core processor:
Multitasking will be smoother and more responsive. The extra cores in a chip like Nvidia's Tegra 3 let Android figure out which tasks need the most resources and assign computing power accordingly. With more cores to spread out the work, the system can keep up with high amounts of activity without stuttering or slowing down.
Resource-intensive apps will have better performance. Something like Words with Friends runs fine on any ol' system, more or less, but when you start getting into apps that handle tasks like photo and video editing and graphical gaming, a quad-core chip will allow for a higher level of performance. It'll also enable developers to create new kinds of resource-intensive games that simply wouldn't be able to run (or run well, anyway) on a less powerful processor.
Quad-Core Tablets In Action
That's all fine and dandy to discuss, but how 'bout an actual example? The video below, produced by Nvidia, shows side-by-side performance of a tablet running the new Tegra 3 chip compared to various dual-core systems. Nvidia's Tegra team tells me they used devices with comparable specs (aside from the chip, of course) to eliminate as many variables as possible. The devices on the left side of the screen are running either Tegra 2 chips or Qualcomm 1.5GHz dual-core chips; the devices on the right are running the new Tegra 3 quad-core processors.
And as for the quad-core gaming potential? This next video shows some of the advancements being made by developers to take advantage of the added cores:
Quad-Core Tablets: What About Battery Life?
The first thing most people ask when they hear about quad-core tablets is how all that added processing power will affect their device's battery life. In what seems to be a paradoxical twist, a quad-core processor actually runs at a lower frequency and tends to use less power than a dual-core equivalent.
The reason? All those cores aren't just lighting up willy-nilly; instead, the system spreads out the workload and uses only the minimum amount of processing power needed at any given time. The Tegra 3 chip actually has five cores (confusing, I know), one of which is dedicated to handling low-frequency tasks like keeping the tablet humming in active-standby mode, playing music in the background, or playing a video. The four main cores, then, handle the heavier stuff, switching on and off automatically as they're needed.
Case in point: With the Transformer Prime, Asus is promising 12 hours of battery life -- and that's based on tests where the tablet was continuously playing 720p-quality video.
Quad-Core Tablets: The Bottom Line
Look, at this point, a quad-core tablet is a top-of-the-line luxury item. If you're a casual user who wants a tablet mainly for surfing the Web, checking email, and playing some basic games, a dual-core device will probably be fine for your needs. Pretty much every quality tablet on the market right now is dual-core; most of them have excellent performance for typical everyday use.
If you're a power user, though -- someone who does a lot of multitasking, heavy-duty gaming, or high-intensity application usage -- a quad-core unit might be something worth your while. You'll pay a premium for it, no doubt, but if technology is a passion and you like having the latest and greatest stuff, the benefits could be well worth the cost.
The Asus Transformer Prime is scheduled to come out sometime next month; we'll likely see other quad-core tablets (and then phones) shortly thereafter. I'll be reviewing the Prime in much greater detail once Asus makes review units available, so I'll be able to give you more personal hands-on impressions of the performance then.
UPDATE: The full review is now online. Follow the link below:
Article copyright 2011 JR Raphael. All rights reserved. | <urn:uuid:6f4324e4-4263-46fe-8f20-5973fe9b9267> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2471629/mobile-apps/asus-transformer-prime--does-quad-core-really-matter-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00347-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936811 | 1,099 | 2.578125 | 3 |
SSG is a revolutionary new storage technology that is ready to be incorporated in graphics card for tomorrow.
- SSG is the latest technology in graphics card which is going to make revolutionary changes in the field of graphics and virtual reality.
- SSG is basically an SSD (Solid State Drive) inside a Graphics card.
SSD is a storage technology where data is stored inside flash memory. In traditional hard disks, data is stored on spinning platters and read by a read write head which keeps moving to read data. In SSD, flash memory is used to save data. This is the same memory that is used in pen drives, camera etc. Data transfer using flash memory is very quick. They don’t make any sound as there are no moving parts involved for reading and writing data. In short, SSD is much faster and work quieter than traditional hard disks.
- Based on the advantages of SSD, research has been going on for some time to incorporate SSD inside graphics card. The benefits are more storage space inside graphics card, faster data transfer to achieve faster data processing.
- AMD released Radeon Pro Graphics Card which is the first SSG card released in market
- Radeon Pro comes 1TB of memory which is a 100 times more memory than some of the best graphics card in market. This is the first time graphics card came with TB memory.
- 1TB of graphics card memory is an obscene amount of memory. Graphics with the most memory in the market today are Nvidia Quadro M6000 comes with 24 GDDR4 and AMD FirePro S9170 comes with 32GB GDDR5. When these models are considered obscene amount of memory, imagine what it feels like to have a graphics card with 1TB memory.
- The Graphics card will now directly take data from SSG instead of RAM. Previously, when a video card wants to process certain information, it directly looks into GDDR5 or GDDR3. If not present, it then checks in RAM and then ask CPU to give the information for processing. In the technology with SSG, it first checks GDDR5 or GDDR3 space for processing data, then it checks SSG. Since SSG have lot of storage space, frequently used data can easily be stored inside.
- Graphics card will be able to able effortlessly churn out ultra-high definition 8k video. In a recent SIGGRAPH presentation, AMD demonstrated 8k video at 90fps.
- AMD Radeon Pro developer kit is priced at around $10000. If you are filthy rich and totally into graphics, go for it. For others, you need to wait for some time for the technology to ripen and AMD will be able to offer this graphics card at affordable price.
- You will be able to install Radeon Pro on any computer with a PCI Express x16 slot. Every PC released in the last 5yrs will have this slot.
- Organizations which are into the business of graphics and multimedia can hugely benefit from this technology. This is a dream come true for game developers and special effects creators. According to AMD, the technology is also very beneficial in the field of medicine as doctors will be able to see the live 3D view of every organ in the body leading to accurate diagnosis and treatment. CAD designers don’t have to wait long time for their machines to boot up with the finished model. VR technology, one of the fastest growing is also going to benefit a lot by using faster streaming videos and special effects. | <urn:uuid:8d385f72-fd16-4d08-9132-d6467dd74c24> | CC-MAIN-2017-04 | http://atechjourney.com/ssg-solid-state-graphics-card-facts-details.html/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940865 | 700 | 3.046875 | 3 |
Mad Dog 21/21: Pilots Of The Carob Bean
December 9, 2013 Hesh Wiener
Since biblical times, Mediterranean people have cultivated the carob tree. Its seeds, called carats, are remarkably consistent in mass, about 200 milligrams each. For ages, the carat weight has been a measure of gemstones and other valuable items. The Roman pure gold solidus coin weighed 24 carats; it was the original 24-carat gold. Mobile device displays may soon be described in carats along with pixels and inches. This is because Apple is getting into the sapphire sheet business.
It might make a monkey of Corning’s Gorilla Glass, because wherever Apple leads, the rest of the technology world usually follows.
Sapphire has a number of uses in high technology devices. For instance, some semiconductors are built on a sapphire rather than silicon substrate. But the future of sapphire for Apple is expected to be a different use of the material. Sapphire can be used for the face of scratch-resistant displays, much the way it is currently used to make top quality watch crystals and other optical parts. All these applications are based on the key physical characteristics of sapphire: its hardness, its ability to provide electrical insulation and its ability to carry heat, even as it blocks the flow of electricity.
While sapphire as a gemstone is usually blue, in optical applications it is clear. The difference stems from impurities in the crystalline aluminum oxide that we call sapphire. The same type of crystal with different impurities can be red rather than blue, and when we find this crystal in nature we call it a ruby. The name of the family of minerals made of this material is corundum, and it includes yellowish and orange stones as well as red and blue ones.
Whether sapphire or ruby, natural or manmade, aluminum oxide crystals are very hard. They are rated 9 on the Mohs hardness scale. Diamond, rated 10, is harder, but Corning’s family of Gorilla Glass products, is not. Gorilla Glass display faces have has some physical and economic characteristics that give them an advantage over sapphire . . . for now. But that could change. While it is likely that sapphire will always be more costly than Gorilla Glass, it is just as likely that in some applications the superior hardness of sapphire will make it a winner, much the way sapphire watch crystals are so often used in fine timepieces.
Additionally, sapphire display surfaces seem particularly suitable for Apple products for a number of reasons. Apple devices have long technical and economic lives, not only with their initial owners but also in the aftermarket. (So do Kindle ebooks and tablets, but I can’t think of any other client devices with comparable persistence.) Consumers seem to trust Apple to make products that don’t require pampering. While many people get cases, bumpers, or neoprene pockets for their iThings, many do not; iPhones are often carried in pockets or purses alongside keys, and iPads are often just tossed into cluttered backpacks. Those are some practical reasons for Apple to enhance its products with superior display faces. But that isn’t all to this matter. Apple’s customers really love their gadgets. A sapphire face would boost the allure of an iPhone, iPad or, if one were to emerge, an iWatch.
Sometimes it seems as if Apple products are held in such reverence that the most enthused Apple fans constitute some kind of cult. At the extreme, users’ lives seem to revolve around their iPhones and, to a lesser extent, their iPads and iPods. If Apple were a religion rather than a maker of high-tech products, its vaunted devices would be candidates for canonization. And the most fervent fanbois might adorn their gadgets with sapphires and other jewels, the way passionate admirers of saints have embellished and bejeweled relic skulls and bones.
Apple relics have already been established as objects of considerable value. An Apple I, forerunner of the Apple II computer that first put Apple on the map, was sold for nearly $400,000 at auction last summer. Somewhere in medicine’s physical archives are relics of Steve Jobs, whose medical history included a number of biopsies, some or all of them very likely preserved for scientific posterity. If a religion ever pops up in which Jobs is a saint or something more, those relics or items purporting to be Jobs’ relics could end up in Jobsian cathedrals.
There must be a dozen purported heads of St. John the Baptist in North Africa, the Middle East, Europe, and possibly elsewhere. The Pope seems to believe the real thing is in Rome, in an ancient church near the big post office. John is venerated as the saint who baptized Christ. John’s wanderings in the desert may well have involved a diet of carob, which is why the plant’s pods are sometimes called St. John’s Bread. It turns out that carob and St. John are woven together in many ways and in many places. On the island of Malta, where carob not only grows well but served as a vital foodstuff during World War II, for instance. A thousand years ago on Malta, knights of the Hospitallers collected relics they sincerely believed were the bones of St. John.
As entertaining as some of the stories surrounding relics of St. John might be, few are as exotic as the real scientific story behind the sapphire-cutting technology that has captivated Apple’s attention and a pile of its money, too. In order to get a lock on very thin sheets of sapphire, Apple is putting more than a half billion dollars into a factory that will peel very thin layers of sapphire off a chunk of the material. The chunk is called a boule. The company doing the sapphire splitting is called GT Technology and its process, originally developed for use in conjunction with the fabrication of solar panels on a sapphire substrate, is close to magic.
In 2012, an outfit called Twin Creeks found a way to beam hydrogen ions at a sapphire boule in a way that made them build up in a layer about 20 micrometers below the surface of the sapphire. Once this had been done, the ions could, with the addition or some electrons, be turned into hydrogen atoms. At that point the layer, which had fit within the sapphire crystal lattice very nicely, became real hydrogen gas and its presence caused the crystal to split. The thin layer above the hydrogen, a 20-micrometer layer, could be harvested. The process could be repeated, with each cycle yielding a very thin sheet of sapphire. Twin Creeks was acquired by GT Technology, and GT cut the deal with Apple that will give GT a very nice factory for making sapphire sheets and give Apple access, presumably exclusive access, to the sheets made at that factory.
So now Apple is just a few steps and several months away from making (or getting a supplier to make) device screens that have as their external surface an ultra-thin, incredibly hard, optically clear, electrically insulating layer of sapphire. The sapphire will most likely be bonded to some kind of glass to form a complete screen that provides both display and touch-sensing features.
Notwithstanding all the positive reports in the press, this whole venture could turn out to be a bust. Maybe the process won’t work as expected. Maybe the costs will be so high that the sapphire display screen will remain impractical, even for Apple, which can command premium prices for its products. But even so, such an unfortunate development wouldn’t stop Apple from moving forward. It has plenty of cash on hand. It could cut this kind of deal ten times, if that’s what it took to obtain a key technological advantage, and still have more than enough money in the bank to fight off the insipid Icahnbirds it attracts.
Steve Jobs is gone, but Apple’s current management seems to be plenty smart and its corporate culture is lively and as creative as ever. Apple is quite capable of taking a long view. It has that kind of vision. And these days it is looking at the future through a sapphire lens others are only beginning to fully appreciate. | <urn:uuid:b209d6fa-9195-4b77-bc31-b543f3137abb> | CC-MAIN-2017-04 | https://www.itjungle.com/2013/12/09/tfh120913-story04/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00557-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957472 | 1,766 | 3.03125 | 3 |
What email address or phone number would you like to use to sign in to Docs.com?
If you already have an account that you use with Office or other Microsoft services, enter it here.
Or sign in with:
Signing in allows you to download and like content, which the author will be aware of.
Embed code for: Lesson 10 - Job
Select a size
Lesson 10November 26–December 2
The Wrath of Elihu
Read for This Week’s Study: Job 13:28, Job 28:28, Job 32:1–5, Job 34:10–15, Ezek. 28:12–17, Job 1–2:10.
Memory Text: “ ‘For as the heavens are higher than the earth, so are My ways higher than your ways, and My thoughts than your thoughts’ ” (Isaiah 55:9, NKJV).
And so it goes, the battle of words between Job and these three men, words that at times are profound, beautiful, deep, and true. How often people will quote from the book of Job, even quotes from Eliphaz, Bildad, or Zophar. And that’s because, as we have seen over and over, they did have a lot of good things to say. They just didn’t say them in the right place, at the right time, in the right circumstances. What this should teach us is the powerful truth of these texts in Proverbs 25:11–13:
A word fitly spoken is like apples of gold in settings of silver.
Like an earring of gold and an ornament of fine gold is a wise rebuker to an obedient ear.
Like the cold of snow in time of harvest is a faithful messenger to those who send him, for he refreshes the soul of his masters (NKJV).
Unfortunately, those weren’t the words that Job was hearing from his friends. In fact, the problem was going to get worse because, instead of just three people telling him he’s wrong, a new one comes on the scene.
Study this week’s lesson to prepare for Sabbath, December 3.
Even after Job’s powerful expression of faith (Job 13: 15, 16), the verbal sparring continued. Over the course of many chapters, the men go back and forth, arguing many deep and important questions about God, sin, death, justice, the wicked, wisdom, and the transient nature of humanity.
What truths are being expressed in the following texts?
Job 13:28 We go as we came from the dust of the ground once we have died. God promised the first man, Adam, that of the dust I created you in my image, you will go back. Decay, because of sin, is the finality of death or eternal sleep until Jesus’ 2nd coming.
Job 15:14–16 Only God has the ability to be righteous, as we have been born into sin of this world because of Adam and Eve sinning in the Garden of Eden. Neither man nor woman is righteous. We have to earn our place and righteousness through our love for God and be true to Him.
Job 19:25–27 God has promised us a place in heaven when Jesus died on the cross for our sins of the people in his time and those ancestors who have followed. Our redeemer lives in heaven making a place, a home, for those who truly love God as he promised. Although we have decaying bodies and our souls are asleep in the grave until Jesus’ 2nd coming, Jesus will come to bring everyone true to God will be resurrected and taken home. Our eyes will see Him with all of His glory while the wicked will see a beautiful sight that will frightened them away from righteousness.
Job 28:28 God wants us to fear Him to the point that we turn away from our evil ways and allow Him into our hearts so we can have wisdom and understanding of His power and love.
Through all these chapters the arguments continued, neither side conceding its position. Eliphaz, Bildad, and Zophar each in their own way, each with their own agenda, didn’t let up in their argument about how people get what they deserve in life; and thus, what came upon Job had to be just punishment for his sins. Job, meanwhile, continued to lament the cruel fate that had befallen him, certain that he did not deserve the suffering. Back and forth they sparred, each “comforter” accusing Job of uttering empty and vain words, and Job doing the same to them. Job was not uttering empty and vain words. He knew that he was suffering for some reason but did not understand why. He was going to get through his afflictions one way or another without ending his own life before God whom he had great love for despite his afflictions.
In the end, none of them, including Job, understood all that was going on. How could they? They were speaking from a very limited perspective, which all humans have. If we can get any lesson from the book of Job (one that should be obvious by now, especially after all the speeches of these men), it is that we as humans need humility when we profess to talk about God and the workings of God. We might know some truth, maybe even a lot of truth, but sometimes—as we can see with these three men—we might not necessarily know the best way to apply the truths that we know. As humans we have a limited understanding of God’s reasoning for things. We need to be thankful for what we have and what God has given us in the short time we have on earth. We are living in the last days today and we see what has been going on to both the good and not so good people. There is wicked people out there today wanting everything and there are good people who are suffering at times because of some wicked people.
Look around at the natural world. Why does this alone show us how limited we are in what we know about even the simplest of things? This is one question that has many answers. We are constantly at war right now because of radical religious groups killing Christians who are not of the same religious belief. We are also still having racial tension between the men and women of color and the whites since slavery, and men are murdering policemen and women of color and the white men and women are murdering blacks. The simplest things today are becoming very complicated. The beauty of nature is becoming less beautiful because of man destroying the beauty God once created before sin and after sin became what it is today because of Adam and Eve sinning in the Garden of Eden.
The Entrance of Elihu
From Job 26 to 31, the tragic hero of this story, Job, gives his final speech to the three men. Though eloquent and passionate, he basically repeats the argument he has been making all along: I do not deserve what has been happening to me. Period.
Again, Job represents so much of humanity in that many people suffer things that they don’t deserve. And the question, in many ways the hardest question of all, is—why? In some cases, the answer to suffering is relatively easy. People clearly bring the trouble on themselves. But so often, and especially in the case of Job, that’s not what happened, and so the question of suffering remains.
As chapter 31 comes to a close, Job has been talking about the kind of life he led, a life in which nothing he had done justified what was happening to him now. Then the final verse of the chapter reads: “The words of Job are ended” (Job 31:40). bGenesis 3:18
Job 31:40 states: 40Then let
http://biblia.com/bible/nkjv/Job%2031.40bthistles grow instead of wheat,
And weeds instead of barley.”
The words of Job are ended.
Genesis 3:18 states: 18 Both thorns and thistles it shall
http://biblia.com/books/nkjv/biblenkjv.Ge3.186bring forth for you,
http://biblia.com/books/nkjv/biblenkjv.Ge3.18yyou shall eat the herb of the field. yPsalms 104:14 states: He causes the grass to grow for the cattle,
And vegetation for the service of man,
That he may bring forth
http://biblia.com/books/nkjv/biblenkjv.Ps104.14ifood from the earth,
Read Job 32:1–5. What is happening here, and what is Elihu’s charge against Job and the other men? Elihu showed his wrath toward Job who remained faithful to God despite his affliction.
Here is the first time that this man, Elihu, is mentioned in the book of Job. He obviously heard some of the long discussions, though we are not told just when he appeared on the scene. He must have come later, because he was not mentioned as being with the other three when they first came. What we do know, however, is that he wasn’t satisfied with the answers he had heard during whatever part of the dialogue he heard. In fact, we’re told four times in these five verses that his “wrath” had been kindled over what he had heard. For the next six chapters, then, this man Elihu seeks to give his understanding and explanation of the issues that all these men confronted because of the calamity that struck Job. Just like Elihu, we become angry and emotional about things in our own lives we do not understand. Being in a world of sin today to Jesus’ 2nd coming we will always not understanding some things that are happening around even though we may know why.
Job 32:2 said that Elihu was angry with Job because he “justified himself rather than God,” a distortion of Job’s true position. What should this tell us about how we need to be careful in the ways that we interpret the words of others? We do not want to alienate the people around us because we have said something we should not have said. We can hurt the people around us as hurt ourselves in this case. How can we learn to try to put the best construction rather than the worst on what people say? We can ask questions and pray for an answer or two from God because He will show us the way in His Time when we go to Him for help.
Elihu’s Defense of God
A lot of commentary has been written over the ages about Elihu and his speech, some seeing it a major turning point in the direction of the dialogue. Yet, it’s really not that easy to see where Elihu adds anything so new or so groundbreaking that it changes the dynamic of the dialogue. Instead, he seems largely to be giving the same arguments that the other three had done in their attempt to defend the character of God against the charge of unfairness in regard to the sufferings of Job.
Read Job 34:10–15. What truths is Elihu expressing here? How do they parallel what the other men have said before? And though his words were true, why were they inappropriate for the current situation?
Perhaps what we can see with Elihu, as with these other men, is fear—the fear that God is not what they think Him to be. They want to believe in the goodness and the justice and the power of God; and so, what does Elihu do but utter truths about the goodness, the justice, and the power of God?
“ ‘For His eyes are on the ways of man, and He sees all his steps. There is no darkness nor shadow of death where the workers of iniquity may hide themselves’ ” (Job 34:21, 22, NKJV).
“ ‘Behold, God is mighty, but despises no one; He is mighty in strength of understanding. He does not preserve the life of the wicked, but gives justice to the oppressed. He does not withdraw His eyes from the righteous; but they are on the throne with kings, for He has seated them forever, and they are exalted’ ” (Job 36:5–7, NKJV).
“ ‘As for the Almighty, we cannot find Him; He is excellent in power, in judgment and abundant justice; He does not oppress. Therefore men fear Him; He shows no partiality to any who are wise of heart’ ” (Job 37:23, 24, NKJV).
If all this is true, then the only logical conclusion one must draw is that Job is getting what he deserves. What else could it be? Elihu, then, was trying to protect his own understanding of God in the face of such terrible evil befalling such a good man as Job.
Have you ever faced a time when something happened that made you fearful for your faith? How did you respond? Looking back, what might you have done differently?
The Irrationality of Evil
All four of these men, believers in God, believers in a God of justice, found themselves in a dilemma: how to explain Job’s situation in a rational and logical manner that was consistent with their understanding of the character of God. Unfortunately, they ended up taking a position that turned out basically wrong in their attempt to understand evil, or at least the evil that befell Job.
Ellen G. White offers a powerful comment in this regard. “It is impossible to explain the origin of sin so as to give a reason for its existence. . . . Sin is an intruder, for whose presence no reason can be given. It is mysterious, unaccountable; to excuse it is to defend it. Could excuse for it be found, or cause be shown for its existence, it would cease to be sin.” —
http://ssnet.org/lessons/16d/helps/lesshp10.htmlThe Great Controversy, pp. 492,493.
Though she uses the word sin, suppose we replaced that word with another word, one that has a similar meaning: evil. Then the quote could read: It is impossible to explain the origin of evil so as to give a reason for its existence. . . . Evil is an intruder, for whose presence no reason can be given. It is mysterious, unaccountable; to excuse it is to defend it. Could excuse for it be found, or cause be shown for its existence, it would cease to be evil.
So often when tragedy strikes, people will say or think: “I don’t understand this.” Or “This doesn’t make sense.” This is precisely what Job’s complaint had been about all along.
There is a good reason that Job and his friends can’t make sense of it: evil itself doesn’t make sense. If we could understand it, if it made sense, if it fit into some logical and rational plan, then it wouldn’t be that evil, it wouldn’t be that tragic, because it would serve a rational purpose.
Look at these verses about the fall of Satan and the origin of evil. How much sense does his fall make? (Ezek. 28:12–17).
Here’s a perfect being, created by a perfect God, in a perfect environment. He’s exalted, full of wisdom, perfect in beauty, covered in precious stones, an “anointed cherub” who was in the “holy mountain of God.” And yet, even with all that and having been given so much, this being corrupted himself and allowed evil to take over. What could have been more irrational and illogical than the evil that came to infect the devil?
What is your own experience with how irrational and inexplicable evil is?
The Challenge of Faith
Certainly the primary characters in the book of Job, as mere mortals seeing “through a glass darkly” (1 Cor. 13:12), were working from a very limited perspective, a very limited understanding of the nature of the physical world, much less the spiritual one. Interesting, too, that in all these debates about the evil that befell Job, none of the men, Job included, discussed the role of the devil—the direct and immediate cause of all of Job’s ills. And yet, despite their own confidence about how right they were, especially Elihu (see Job 36:1–4), their attempts to explain Job’s suffering rationally all fell short. And, of course, Job knew that their attempts failed.
Even with our understanding of the story’s cosmic background, how well are we able to rationalize and explain the evil that befell Job? Read Job 1–2:10 again. Even with all this revealed to us, what other questions remain?
With the opening chapters of Job before us, we have a view of things that none of these men did. Nevertheless, even now the issues remain hard to understand. As we saw, far from his evil bringing this suffering to him, it was precisely Job’s goodness that caused God to point him out to the devil. So, the man’s goodness and desire to be faithful to God led this to happen to him? How do we understand this? And even if Job had known what was going on, wouldn’t he have cried out, “Please, God, use someone else. Give me back my children, my health, my property!” Job didn’t volunteer to be the guinea pig. Who would? So, how fair was all this to Job and to his family? Meanwhile, even though God won His challenge with the devil, we know the devil has not conceded defeat (Rev. 12:12); so, what was the purpose? And also, whatever good ultimately came out of what happened to Job, was it worth the death of all these people and all the suffering that Job went through? If these questions remain for us (though more answers are coming), imagine all the questions that Job had!
And yet, here’s one of the most important lessons we can take from the book of Job: that of living by faith and not by sight; that of trusting in God and staying faithful to Him even when, like Job, we cannot rationalize or explain why things happen as they do. We don’t live by faith when everything is fully and rationally explained. We live by faith when, like Job, we trust and obey God even when we cannot make sense of what is happening around us.
What are the things you have to trust God for even though you don’t understand them? How can you continue to build that trust even when you don’t have answers?
Further Thought: In a discussion concerning the question of faith and reason, author John Hedley Brooke wrote about the German philosopher Immanuel Kant (1724–1804) and his attempt to understand the limits of human knowledge, especially when it came to the working of God. For Kant, “the question of justifying the ways of God to man was one of faith, not of knowledge. As his example of an authentic stance in the face of adversity, Kant chose Job, who had been stripped of everything save a clear conscience. Submitting before a divine decree, he had been right to resist the advice of friends who had sought to rationalize his misfortune. The strength of Job’s position consisted in his knowing what he did not know: what God thought He was doing in piling misfortune upon him.”—Science and Religion (New York: Cambridge University Press, 2006), pp. 207, 208. These men in the book of Job, and now Elihu, thought they could explain what happened to Job in a simple cause-and-effect relationship. The cause was Job’s sin; the effect was his suffering. What could be more clear-cut, theologically sound, and rational than that? However, their reasoning was wrong, a powerful example of the fact that reality and the God who created and sustains that reality don’t necessarily follow our understanding of how God and the world He created work.
As we saw, in all the long speeches about poor Job’s situation and why it happened, the devil was not once mentioned. Why is that so? What does it tell us about how limited these men were in their understanding, despite all the truths that they had? What could their ignorance teach us about our own, despite all the truths that we have?
“When we take into our hands the management of things with which we have to do, and depend upon our own wisdom for success, we are taking a burden which God has not given us, and are trying to bear it without His aid. . . . But when we really believe that God loves us and means to do us good we shall cease to worry about the future. We shall trust God as a child trusts a loving parent. Then our troubles and torments will disappear, for our will is swallowed up in the will of God.” — Ellen G. White,
http://ssnet.org/lessons/16d/helps/lesshp10.htmlThoughts From the Mount of Blessing, pp.100, 101. How can we learn this kind of trust and faith? That is, what choices are we making now that will make our faith either stronger or weaker?ngs that none of these men did. Nevertheless, even now the issues remain hard to understand. As we saw, far from his evil bringing this suffering to him, it was precisely Job’s goodness that caused God to point him out to the devil. So, the man’s goodness and desire to be faithful to God led this to happen to him? How do we understand this? And even if Job had known what was going on, wouldn’t he have cried out, “Please, God, use someone else. Give me back my children, my health, my property!” Job didn’t volunteer to be the guinea pig. Who would? So, how fair was all this to Job and to his family? Meanwhile, even though God won His challenge with the devil, we know the devil has not conceded defeat (Rev. 12:12); so, what was the purpose? And also, whatever good ultimately came out of what happened to Job, was it worth the death of all these people and all the suffering that Job went through? If these questions remain for us (though more answers are coming), imagine all the questions that Job had! | <urn:uuid:c8ab5a8c-bafe-4fa0-8f40-dfa7efb30362> | CC-MAIN-2017-04 | https://docs.com/kristi-karnopp/8532/lesson-10-job | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00465-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973571 | 4,768 | 2.5625 | 3 |
The Internet of Things (IoT) is the latest technology buzzword doing the rounds but what does it mean exactly, and why is it exciting consumers and businesses?
The Internet of Things is already a strange phenomenon. As a term, it often confuses consumers, excites developers, and rouses and terrifies end-user businesses in equal measure.
There are a number of reasons for this, as we’ll detail shortly, but there’s little doubting at least that IoT has a huge future as our lives become increasingly digitalised.
Some market figures back this up; Cisco has labelled IoT as an $19 trillion opportunity for companies and industries, while a TechNavio report estimates that the global IoT market will grow at a CAGR of 31.72 percent from 2014-2019.
IDC and Intel, meanwhile, predict that there will be over 200 billion ‘things’ in circulation by 2020, with Gartner’s acclaimed Hype Circle putting IoT at the ‘peak of inflated expectations’ – the time where new technology starts to see real business benefits.
So what is it?
These figures are impressive and yet, there does seem confusion around the Internet of Things and what the term means. To different people it means different things, from expensive ‘connected’ toys to the ability to collect huge datasets from legacy enterprise systems connected to the Internet, and using this information for numerous different purposes.
According to the Oxford dictionary, the Internet of Things refers to “the interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data”. Wikipedia rather surprisingly adds that the Internet of Things (IoT) “is the network of physical objects or “things” embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data.”
SAS offers a more simplified version, saying that IoT is a network of “everyday objects – from industrial machines to consumer goods– that can share information and complete tasks while you are busy with other activities, like work, sleep or exercise.”
In truth, IoT is all of the above; It’s about ‘smart’ everyday objects connecting to the Internet, identifying themselves to other devices through communication protocols like RFID, Wi-Fi, Bluetooth, QR codes, and acting upon that information without human intervention.
This increased connection is a sign of increased machine-to-machine (M2M) communication which is built on cloud computing and networks of data-gathering sensors. It’s mobile, virtual, and instantaneous connection.
British entrepreneur Kevin Ashton coined the term in 1999, and in a recent interview with Diginomica said that the future is bright.
“What the Internet of Things is really about is information technology that can gather its own information. Often what it does with that information is not tell a human being something, it [just] does something.
“It’s the difference between, ‘Oh, my fridge is empty, my fridge is going to tell me, ‘I’m empty,’ and a system which observes things and takes action based on those observations and doesn’t need to trouble you with that information. That’s really where we’re headed.”
It’s not new
Despite all this early buzz, much of the technology behind IoT has arguably been around for years.
Near-Field-Communication (NFC), Bluetooth, robotics and artificial intelligence, as well as telemetry (a precursor of telematics) can be traced back decades – while the first IoT hardware goes back to 1982. The first connected-toaster came to market in 1989.
Some industry verticals could also argue that they’ve been doing IoT under a different name for years; manufacturing has had automated, predictive maintenance for years, insurance has adopted telematics and retailers have looked to harness social and smartphone data.
Benefits for business
A lot of the media attention has focused on the consumer benefits of IoT and rightly so. Wearables, in particular, offer a massive potentially to keep us fit, while Nest and Hive thermostats help us save money on heating. Cameras like Dropcam let us monitor the home. IoT is ultimately about user convenience.
But it’s clear this is the very tip of the iceberg, especially as far as business is concerned.
There have been a number of early IoT case studies across all industries, from retailers and banks using Apple iBeacons to offer tailor ads, and airlines with sensors for predictive maintenance, to insurers determining insurance premiums by tracking data from black boxes in cars or smartphones.
This move to IoT is changing business models, revenue streams and how companies interact with their customers.
It’s clear that this is just the start, as innovation expert Daniel Burrus suggested recently in a piece for Wired magazine.
“It’s about upending old models entirely, creating new services and new products. There is no one sector where the Internet of Things is making the biggest impact; it will disrupt every industry imaginable, including agriculture, energy, security, disaster management, and healthcare, just to name a few.”
The Internet of Things age is here – are you ready for the disruption? | <urn:uuid:f77812ba-1eb9-475e-8098-795f49861bc9> | CC-MAIN-2017-04 | https://internetofbusiness.com/what-exactly-is-the-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00465-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93952 | 1,110 | 3.03125 | 3 |
With some of the most advanced manufacturing industries in the world, the U.S. and Germany in particular stand to gain much from the new ‘digital manufacturing era’ or Industry 4.0 — that is, the integrated application of computer technology to all phases of the production process. Certainly, technological advancements are creating huge opportunities for individual manufacturers to improve the quality of production while streamlining internal efficiencies and reducing overheads.
Yet a number of crucial challenges remain to realizing the full benefits of digital manufacturing. Firstly, manufacturers must identify and invest in the “right” technology — that will increase production capacity and flexibility, all the while delivering a clear business case. At the same time, the required investment must be funded while maintaining reasonable levels of capital expenditure and liquidity.
Specialist financiers can help overcome such challenges; guiding manufacturers through new technology and providing bespoke, yet flexible, financing solutions. And by using smart financing techniques — such as tying payments to the performance or longevity of the underlying technology — manufacturers can be more confident in the cost and value of their investment.
Why Join the Race?
At a macroeconomic level, digitalizing the manufacturing process means that more products can be produced closer to the customer base, thus reducing the need to import from abroad. Boston Consulting Group, for instance, predicts that as much as 30% of America's imports from China could be produced domestically by 2020 as a result of Industry 4.0. Given the U.S. trade deficit hit a record $367 billion last year, cutting transportation costs would be a huge benefit.
But what about the individual companies that will shoulder the burden of implementing new technology?
First and foremost, Industry 4.0 stands to boost product quality. Innovative technology such as robotics, for instance, can improve maintenance and fault diagnostics for products such as cars, giving manufacturers a competitive edge in the market.
At the same time, new technologies are boosting productivity and efficiency. For instance, so-called ‘digital twin’ technology, which uses a computerized copy of a real product, means that manufacturers can conduct performance tests digitally and save on material wastage. This cuts costs and improves product development time – which enhances speed to market.
What is more, as the U.S. shows, it is not just high-tech sectors that stand to benefit from advanced manufacturing processes, but also a range of industries: from energy and aircraft, to healthcare, and food.
Yet, while both the U.S. and Germany have made huge strides forward in their digitalization journey, each country is taking different steps to approaching Industry 4.0.
The U.S., for its part, is taking a revolutionary approach, with rapid changes in the country’s manufacturing industry being driven by the world’s densest hub of innovative startup companies in Silicon Valley. The German approach, on the other hand, is more evolutionary, with the country’s strong industrial base incrementally improving and refining existing technology and processes through increased digitalization.
The Challenge: Funding the “Right” Technology
Despite different approaches, challenges remain for all. If manufacturers are to fully adapt their production lines to Industry 4.0 and keep up-to-date with new technologies, they must source significant investment. In Germany alone, an additional €250 billion (or $284 billion) will be needed over the next 10 years. This represents about 1% to 1.5% of manufacturers’ total revenues.
The story is similar in the U.S. To modernize its factories, American multinational corporation General Motors (GM), for instance, is planning to invest $1.8 billion (€1.6 billion) every year over the next three years: were all U.S. manufacturers to make an equivalent investment, the total expenditure would amount to around $100 billion (€88 billion) a year.
That said, capital outlays only make business sense if they invested in the “right technology.” Manufacturers, rightly, are often concerned that technology is advancing at such a rate that technology will become obsolete before they can profit from it — rendering their investment redundant.
Overcoming the Challenges with Innovative Financing
So how can these key challenges be overcome? Crucially, manufacturers need access to tailored, well-structured financing that is innovative and flexible enough to cope with the ever-evolving technological environment.
This finance usually resembles traditional ‘asset finance’ — and, more often than not, a variation of leasing or renting. First and foremost, this type of financing can remove the need for an initial capital outlay — as discussed, a key barrier for many manufacturers looking to implement digital solutions. Take the example of Friedrich A. Kruse jun. International Logistics, a German logistics company which was keen to modernize its facilities by switching to Siemens control and drive technology. In this instance, Siemens Financial Services provided a leasing solution which protected the company’s liquidity — by removing the need for a purchase — while the long-term nature of the contract allowed the company to finance the installments from operating cash-flow.
There are also a number of financing solutions that can overcome the second key challenge; ensuring investment in the “right” technology. Our view is that manufacturers should start by setting out what they want to achieve in their concrete business case — whether it be increased productivity, better quality products for clients etc. — and then work backwards to decide how the necessary investments can be funded. The structuring of such solutions is, of course, helped by working alongside a financier that has detailed knowledge of the technologies involved, how they are applied, and the operating, capability or efficiency outcomes that they will likely deliver.
This approach opens up a number of innovative financing solutions. “Performance-based financing,” for one, allows to provide contracts which match payments to defined and measurable business benefits.
Another option in this respect is “usage-based financing”, such as pay-per-use programs for equipment investments. Again, the manufacturer benefits from not having to splash out on new technology at day one, instead paying as per its usage — ensuring that it receives value from any expenditure. Such a solution also provides manufacturers with the flexibility to upgrade machinery or switch to next-generation technology as and when it becomes available and attractive.
Of course, the cost of purchase is only one cost element associated with investing in new machinery; there are also costs relating to service, software and maintenance. “Total cost of ownership” financing therefore accounts for the full cost of digital technology, providing a financially reliable package that ensures running costs will not escalate unpredictably over the technology’s lifetime. This approach works well to highlight when it makes more financial sense to upgrade technology rather than continue with an older system.
Financing packages such as these are crucial if manufacturers are to overcome the challenges associated with implementing new technology, and thus seize the opportunities with respect to increased productivity, efficiency and quality of product. They will also play a significant role in helping Germany and the U.S. over the line in the transatlantic race to digital manufacturing.
This article was originally published in our sister publication IndustryWeek. | <urn:uuid:6a46f7c1-bd7d-463f-8122-ea28f61baf54> | CC-MAIN-2017-04 | http://www.ioti.com/industrial-iot/germany-and-us-transatlantic-race-digital-manufacturing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944861 | 1,482 | 2.5625 | 3 |
Imagine that there were three different varieties of Wi-Fi style connectivity. Whenever you ordered a new Wi-Fi device you would have to select the right option – which might not always be available. Only some hotspots would be accessible to you – those with the same technology as in your laptop. Some places, like planes, might offer multiple technologies but because of lack of spectrum this reduced the data rate for each. Others chose not to deploy Wi-Fi until the confusion over the preferred technology was over. Such a world would be much more annoying and much less productive than the one we currently inhabit.
This is exactly the world of unlicensed IoT – the machine equivalent of Wi-Fi connectivity. Technologies include Sigfox, LoRa, Ingenu, Telensa, Weightless and others. Devices from one are incompatible with others and they mostly compete for the same spectrum, often interfering with each other. Why do we put up with it?
Some claim that these technologies target different market segments – for example Sigfox targets the lowest-cost devices that only transmit data, while LoRa is for more complex devices that need to receive as well. But Wi-Fi covers a wide breadth of use cases with a single standard, as does cellular.
We do not have one cellular technology for the high-bandwidth gamer and a different one for the low-usage pensioner. It is invariably less expensive to have a single standard, able to meet the needs of most users, which can deliver economies of scale and encourage widespread deployment of infrastructure.
Others claim that the market is large enough for multiple technologies – but it is much smaller by value than cellular. Or that competition is needed between different technologies at this early stage of its evolution – but most standards evolve as lessons are learnt rather than compete.
These are arguments designed to perpetuate the status quo, rather than ones with substance. A quick glance at the current wireless connectivity we use throughout our lives shows that there are no exceptions to the rule that (1) we have a single preferred connectivity solution in each different space – eg Bluetooth, Wi-Fi and cellular – and (2) only open standards succeed. There are no reasons why IoT should be any different. And more fundamentally, there will not be widespread success of IoT connectivity until these conditions are met.
For our cell-phones we are used to wide-area connectivity being provided by mobile network operators (MNOs) using 3GPP cellular standards and local connectivity self-provided through Wi-Fi using IEEE standards under the auspices of the Wi-Fi Alliance. It seems likely that a very similar outcome will transpire for IoT.
Wide-area connectivity will come from MNOs deploying 3GPP standards, most likely NB-IoT. Local connectivity will come from an ETSI standard under the auspices of the Weightless SIG. Some devices will have dual-purpose chipsets, others will have perhaps only Weightless-certified connectivity – in the same way that some devices have dual cellular-Wi-Fi chipsets and others Wi-Fi only (but it is rare to have a cellular chipset that does not also have Wi-Fi connectivity).
While few would disagree with the prognosis of NB-IoT deployment, many might question the prediction of unlicensed deployment. After all, isn’t it the case that Sigfox and LoRa have significant deployments already? The answer is emphatically “no”. If we are to reach the predicted 50 billion devices in, say, a decade, then we need to deploy 13 million per day.
Sigfox have around 10 million users, less than a day’s worth of deployment. These are more akin to early trials than mass deployment. And this is exactly what would be predicted based on the observation that markets only succeed when there is a clear single open standard. (Sigfox are part of a process to develop a standard within ETSI so it is possible that they will find a route to become that open standard).
The situation of competing proprietary technologies can only be resolved through the wider industry getting together and collectively putting its weight behind a single unlicensed standard. The Weightless SIG is providing such a forum and welcomes membership from all those who wish to see this untenable situation resolved quickly in the interests of all who want to see IoT succeed. Or would you prefer the world of multiple incompatible Wi-Fi variants described above? | <urn:uuid:8495b09b-534b-4888-b9db-f562fd8ae4e5> | CC-MAIN-2017-04 | https://internetofbusiness.com/standards-iot-problem-sigfox/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955257 | 901 | 2.96875 | 3 |
SQL data types differ from those used in COBOL.
SQL has a standard set of data types, but the exact implementation of these varies between databases and many databases do not implement the full set.
Within your COBOL program a host variable can act as a COBOL program variable and as a SQL database variable and so the preprocessor must convert, or map, COBOL data types to the appropriate SQL data type. This means that you need to declare your host variables with the correct COBOL picture clause so that the preprocessor maps it to the correct SQL data type. To do this, you need to know the SQL data types used by the data source to which you are going to connect.
The following sections describe the different SQL data types and how to declare host variables that map directly onto them.
When using either Sybase or Oracle with COBSQL, the database engine will be able to perform some sort of conversion to change the data from a COBOL data type to a database data type. A general rule of thumb is that for numeric or integer data types host variables should be defined as:
while character or text data types should be defined as:
Both Oracle and Sybase allow you to define the database data type for a given host variable. This can be useful for the more complex data types.
For Oracle, this is done as follows:
EXEC SQL BEGIN DECLARE SECTION END-EXEC. * * Define data item as Oracle data type DISPLAY * 01 emp-comm pic s9(6)v99 DISPLAY SIGN LEADING SEPARATE * EXEC SQL VAR emp-comm IS DISPLAY(8,2) END-EXEC. EXEC SQL END DECLARE SECTION END-EXEC.
For Sybase, this is done as follows:
EXEC SQL BEGIN DECLARE SECTION END-EXEC. * * Define item as Sybase specific data type * 01 money-item CS-MONEY. * EXEC SQL END DECLARE SECTION END-EXEC.
For more information about defining the database type of a host variable, refer to the COBOL precompiler manual supplied by your database vendor.
A tiny integer (TINYINT) is a 1-byte integer SQL data type that can be declared in COBOL as
PIC S9(4) COMP-5.
The tiny integer data type is not supported by DB2.
Sybase supports the use of tiny integer host variables. The definition for Sybase is:
03 tinyint1 PIC S9(2) COMP-5. 03 tinyint2 PIC S9(2) COMP. 03 tinyint3 PIC S9(2) BINARY.
These map onto the Sybase data type TINYINT.
A small integer (SMALLINT) is a 2-byte integer SQL data type that can be declared in COBOL with usage BINARY, COMP, COMP-X, COMP-5 or COMP-4.
For example, all of the following definitions are valid for host variables to map directly onto the SMALLINT data type.
03 shortint1 PIC S9(4) COMP. 03 shortint2 PIC S9(4) BINARY. 03 shortint3 PIC X(2) COMP-5. 03 shortint4 PIC S9(4) COMP-4. 03 shortint5 PIC 9(4) USAGE DISPLAY. 03 shortint6 PIC S9(4) USAGE DISPLAY.
OpenESQL currently supports signed small integers, but not unsigned small integers.
With Oracle, it is best to define the host variable as
shortint2 or as:
03 shortint7 PIC S9(4) COMP-5.
These map onto the Oracle data type NUMBER(38).
With Sybase all except
shortint3 should be accepted. You can
03 shortint7 PIC S9(4) COMP-5.
These map onto the Sybase data type SMALLINT.
An integer (INT) is a 4-byte integer SQL data type that can be declared in COBOL with usage BINARY, COMP, COMP-X, COMP-5 or COMP-4.
All of the following definitions are valid for host variables to map directly onto the INT data type.
03 longint1 PIC S9(9) COMP. 03 longint2 PIC S9(9) COMP-5. 03 longint3 PIC X(4) COMP-5. 03 longint4 PIC X(4) COMP-X. 03 longint5 PIC 9(9) USAGE DISPLAY. 03 longint6 PIC S9(9) USAGE DISPLAY.
OpenESQL currently supports signed integers, but not unsigned integers.
With Oracle, it is best to define integer host variables as
longint2 or as:
03 longint7 PIC S9(8) COMP-5.
These map to the Oracle data type NUMBER(38).
With Sybase, all except
longint3 should be accepted. You can
03 longint7 PIC S9(8) COMP-5.
These map to the Sybase data type INT.
A big integer (BIGINT) is an 8-byte integer SQL data type that can be declared in COBOL as:
PIC S9(18) COMP-3.
OpenESQL supports a maximum size of S9(18) for COBOL data items used as host variables to hold values mapped from the SQL data type BIGINT. You should be aware, however, that a BIGINT data type can hold a value that is larger than the maximum value that can be held in a
PIC S9(18) data item
and ensure that your code checks for data truncation.
The big integer data type is not supported by DB2.
Neither Oracle nor Sybase support big integers.
Fixed-length character strings (CHAR) are SQL data types with a driver defined maximum length. They are declared in COBOL as PIC X(n) where n is an integer between 1 and the maximum length.
03 char-field1 pic x(5). 03 char-field2 pic x(254).
This maps to the Oracle data type CHAR(n) and to the Sybase data type CHAR(n). For both Oracle and Sybase the largest supported fixed length character string is 255 bytes.
OpenESQL and DB2
Variable-length character strings (VARCHAR) are SQL data types that can be declared in COBOL in one of two ways:
03 varchar1. 49 varchar1-len pic 9(4) comp-5. 49 varchar1-data pic x(200). 03 Longvarchar1. 49 Longvarchar1-len pic 9(4) comp. 49 Longvarchar1-data pic x(30000).
If the data being copied to a SQL CHAR, VARCHAR or LONG VARCHAR data type is longer than the defined length, then the data is truncated and the SQLWARN1 flag in the SQLCA data structure is set. If the data is smaller than the defined length, a receiving CHAR data type may be padded with blanks.
For Oracle, the host variable is defined using the Oracle keyword VARYING. An example of its use is as follows:
EXEC SQL BEGIN DECLARE SECTION END-EXEC. 01 USERNAME PIC X(20) VARYING. EXEC SQL END DECLARE SECTION END-EXEC.
Oracle will then expand the data item USERNAME into the following group item:
01 USERNAME 02 USERNAME-LEN PIC S9(4) COMP-5. 02 USERNAME-ARR PIC X(20).
Within the COBOL code, references must be made to either
USERNAME-ARR but within SQL statments the group name
USERNAME must be used. For example:
move "SCOTT" to USERNAME-ARR. move 5 to USERNAME-LEN. exec sql connect :USERNAME identified by :pwd using :db-alias end-exec.
This maps to the Oracle data type VARCHAR(n) or VARCHAR2(n). For very large character items, Oracle provides the data type LONG.
For Sybase the host variable must be defined with a PIC X(n) picture clause as the Sybase precompiler does not support the use of group items to handle VARCHAR SQL data types.
These map to the Sybase data type of VARCHAR(n).
The 32-bit SQL floating-point data type, REAL, is declared in COBOL as usage COMP-2.
The 64-bit SQL floating-point data types, FLOAT and DOUBLE, are declared in COBOL as usage COMP-2.
01 float1 usage comp-2.
Both 32-bit and 64-bit floating-point data types are mapped to COMP-2 COBOL data items because single-precision floating point is not supported in embedded SQL by OpenESQL.
DB2 Universal Database supports single-precision floating point (REAL) as COMP-1 and double-precision floating point (FLOAT or DOUBLE) as COMP-2.
DB2 Version 2.1 only supports double-precision floating point (FLOAT or DOUBLE) as COMP-2.
Oracle supports the use of both COMP-1 and COMP-2 data items. These both map to the Oracle data type NUMBER.
Sybase supports the use of both COMP-1 and COMP-2 data items. COMP-1 data items map to the Sybase data type REAL. COMP-2 data items map to the Sybase data type FLOAT.
The exact numeric data types DECIMAL and NUMERIC can hold values up to a driver-specified precision and scale.
They are declared in COBOL as COMP-3, PACKED-DECIMAL or as NUMERIC USAGE DISPLAY.
03 packed1 pic s9(8)v9(10) usage comp-3. 03 packed2 pic s9(8)v9(10) usage display.
For Oracle, these map to the data type NUMBER(p,s). For Sybase, they map to either NUMBER(p,s) or to DECIMAL(p,s)
For more information on the difference between the NUMERIC and DECIMAL data types, refer to the chapter on Using and Creating Datatypes in the Sybase Transact-SQL Users Guide
COBOL does not have date/time data types so SQL date/time columns are converted to character representations.
If a COBOL output host variable is defined as PIC X(n), for a SQL timestamp value, where n is greater than or equal to 19, the date and time will be specified in the format yyyy-mm-dd hh:mm:ss.ff..., where the number of fractional digits is driver defined.
For DB2, the TIMESTAMP data type has a maximum length of 26 characters.
Oracle date items have a unique data definition and Oracle provides functions to convert date, time and datetime fields when used within a COBOL program. These functions are:
Converts from Oracle's date format to a character string.
Converts a character string into an Oracle date.
Both functions take an item to be converted followed by the date, time or datetime mask to be applied to that data item. An example of this is as follows:
exec sql select ename, TO_CHAR(hiredate, 'DD-MM-YYYY') from emp into :ename, :hiredate where empno = :empno end-exec. exec sql insert into emp (ename, TO_DATE(hiredate, 'DD-MM-YYYY')) values (:ename, :hiredate) end-exec.
This maps to the Oracle data type of DATE. For more information about the DATE data type, refer to the Oracle SQL Language Reference Manual. More information about the use of functions within Oracle SQL statements can be found in this manual.
Sybase provides a function, called convert, to change the format of a data type. Using the Oracle examples above, the SQL syntax would be:
exec sql select ename, convert(varchar(12) hiredate, 105) from emp into :ename, :hiredate where empno = :empno end-exec. exec sql insert into emp (ename, hiredate) values (:ename, convert(datetime :hiredate, 105) end-exec.
This maps to the Sybase data type of either SMALLDATETIME or DATETIME. For more information on the difference between the SMALLDATETIME and the DATETIME data types, refer to the chapter Using and Creating Datatypes in the Sybase Transact-SQL User's Guide.
For more information on the Sybase convert function, refer to the Sybase SQL Server Reference Manual: Volume 1 Commands, Functions and Topics.
SQL BINARY, VARBINARY and IMAGE data are represented in COBOL as PIC X (n) fields. No data conversion is performed. When data is fetched from the database, if the COBOL field is smaller than the amount of data, the data is truncated and the SQLWARN1 field in the SQLCA data structure is set to "W". If the COBOL field is larger than the amount of data, the field is padded with null (x"00") bytes. To insert data into BINARY, VARBINARY or LONG VARBINARY columns, you must use dynamic SQL statements.
With DB2 use CHAR FOR BIT DATA to represent BINARY, VARCHAR(n) FOR BIT DATA to represent VARBINARY and LONG VARCHAR FOR BIT DATA to represent LONG VARBINARY. If you use the IBM ODBC driver, BINARY, VARBINARY and LONG VARBINARY are the data types returned instead of the IBM equivalent. The IMAGE data type can be represented by BLOB. DB2 uses LOBs (Character Large Object, Binary Large Object or Graphical Large Object to define very large columns (2 Gigabytes maximum). You can use static SQL with these data types.
Oracle provides support for binary data. The difference between binary and character data is that Oracle will do codeset conversions on character data, but will leave binary data untouched.
The two Oracle data types are RAW and LONG RAW. There are some restrictions on the use of RAW and LONG RAW - consult your Oracle documentation for further details.
Sybase provides three binary data types: BINARY, VARBINARY and IMAGE. IMAGE is a complex data type and as such, host variables can be defined as CS-IMAGE, for example:
EXEC SQL BEGIN DECLARE SECTION END-EXEC. * * Define item as Sybase specific data type. * 01 image-item CS-IMAGE. * EXEC SQL END DECLARE SECTION END-EXEC.
Note: For more information on using the Sybase data types of BINARY, VARBINARY and IMAGE, please refer to the chapter Using and Creating Datatypes in the Sybase Transact-SQL User's Guide.
Copyright © 1998 Micro Focus Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law. | <urn:uuid:9568c174-0044-466c-a193-31742887a34d> | CC-MAIN-2017-04 | https://supportline.microfocus.com/documentation/books/nx30books/dbdtyp.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.735455 | 3,338 | 3 | 3 |
The research, made possible by cutting-edge AAO instrumentation, means that astronomers can now classify galaxies according to their physical properties rather than human interpretation of a galaxy's appearance. For the past 200 years, telescopes have been capable of observing galaxies beyond our own galaxy, the Milky Way. Only a few were visible to begin with but as telescopes became more powerful, more galaxies were discovered, making it crucial for astronomers to come up with a way to consistently group different types of galaxies together. In 1926, the famous American astronomer Edwin Hubble refined a system that classified galaxies into categories of spiral, elliptical, lenticular or irregular shape. This system, known as the Hubble sequence, is the most common way of classifying galaxies to this day. Despite its success, the criteria on which the Hubble scheme is based are subjective, and only indirectly related to the physical properties of galaxies. This has significantly hampered attempts to identify the evolutionary pathways followed by different types of galaxies as they slowly change over billions of years. Dr Luca Cortese, from The University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR), said the world's premier astronomical facilities are now producing surveys consisting of hundreds of thousands of galaxies rather than the hundreds that Hubble and his contemporaries were working with. "We really need a way to classify galaxies consistently using instruments that measure physical properties rather than a time consuming and subjective technique involving human interpretation," he said. In a study led by Dr Cortese, a team of astronomers has used a technique known as Integral Field Spectroscopy to quantify how gas and stars move within galaxies and reinterpret the Hubble sequence as a physically based two-dimensional classification system. "Thanks to the development of new technologies, we can map in great detail the distribution and velocity of different components of galaxies. Then, using this information we're able to determine the overall angular momentum of a galaxy, which is the key physical quantity affecting how the galaxy will evolve over billions of years. "Remarkably, the galaxy types described by the Hubble scheme appear to be determined by two primary properties of galaxies–mass and angular momentum. This provides us with a physical interpretation for the well known Hubble sequence whilst removing the subjectiveness and bias of a visual classification based on human perception rather than actual measurement." The new study involved 488 galaxies observed by the 3.9m Anglo Australian Telescope in New South Wales and an instrument attached to the telescope called the Sydney-AAO Multi-object Integral-field spectrograph or 'SAMI'. The SAMI project, led by the University of Sydney and the ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), aims to create one of the first large-scale resolved survey of galaxies, measuring the velocity and distribution of gas and stars of different ages in thousands of systems. "Australia has a lot of expertise with this type of astronomy and is really at the forefront of what's being done," said Professor Warrick Couch, Director of the Australian Astronomical Observatory and CAASTRO Partner Investigator. "For the SAMI instrument we succeeded in putting 61 optical fibres within a distance that's less than half the width of a human hair. "That's no small feat, it's making this type of work possible and attracting interest from astronomers and observatories from around the world." Future upgrades of the instrument are planned that will allow astronomers to obtain even sharper maps of galaxies and further their understanding of the physical processes shaping the Hubble sequence. "As we get better at doing this and the instruments we're using are upgraded, we should be able to look for the physical triggers that cause one type of galaxy to evolve into another—that's really exciting stuff," Dr Cortese said. More information: The SAMI Galaxy Survey: the link between angular momentum and optical morphology. arxiv.org/abs/1608.00291
Bernardi G.,Harvard - Smithsonian Center for Astrophysics |
Greenhill L.J.,Harvard - Smithsonian Center for Astrophysics |
Mitchell D.A.,University of Melbourne |
Ord S.M.,Curtin University Australia |
And 57 more authors.
Astrophysical Journal | Year: 2013
We present a Stokes I, Q and U survey at 189 MHz with the Murchison Widefield Array 32 element prototype covering 2400 deg2. The survey has a 15.6 arcmin angular resolution and achieves a noise level of 15 mJy beam-1. We demonstrate a novel interferometric data analysis that involves calibration of drift scan data, integration through the co-addition of warped snapshot images, and deconvolution of the point-spread function through forward modeling. We present a point source catalog down to a flux limit of 4 Jy. We detect polarization from only one of the sources, PMN J0351-2744, at a level of 1.8% ± 0.4%, whereas the remaining sources have a polarization fraction below 2%. Compared to a reported average value of 7% at 1.4 GHz, the polarization fraction of compact sources significantly decreases at low frequencies. We find a wealth of diffuse polarized emission across a large area of the survey with a maximum peak of 13 K, primarily with positive rotation measure values smaller than +10 rad m-2. The small values observed indicate that the emission is likely to have a local origin (closer than a few hundred parsecs). There is a large sky area at α ≥ 2h30 m where the diffuse polarized emission rms is fainter than 1 K. Within this area of low Galactic polarization we characterize the foreground properties in a cold sky patch at (α, δ) = (4h, -27.°6) in terms of three-dimensional power spectra. © 2013. The American Astronomical Society. All rights reserved. Source | <urn:uuid:82a0dcea-e51a-47d2-89ca-e41c6ebaa68a> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/caastro-2437900/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00301-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924087 | 1,192 | 3.953125 | 4 |
Joined: 19 Feb 2005 Posts: 27 Location: hyderabad-ap-india
My dear friends i have the following doubt in Cobol..
1.What is meant by Reference Modofication..can anybody give me explaination with an example..
2. Where do we cannot use MOVE verb
a) OCCURS clause
b) USAGE is COMP
c) USAGE is POINTER
3. Can we use $ and Z together to a field?
4.What is INITIAL ? ( we use it program name).
quick reply is highly appriciated..
1) Ref/Mod allows you to reference any byte or bytes in a field of data when using a COBOL stmt, such as move. Some stmts don't allow its use. Check the stmt's description in the COBOL ref manual before using it. An example: Suppose you wanted to move bytes 4 thru 8 of frm-fld to positions 3 thru 7 of to-fld. The move stmt would look like this:
move frm-fld(4:5) to to-fld(3:5)
2) Where can we not use MOVE verb? C
Data is moved to and from POINTER data items via the SET stmt.
3) Can we use $ and Z together to a field? No $ is a floating fill char and serves a similar purpose to Z. If you code $$$$9.99 a value of 142.98 wotld print as $149.98 with 1 leading space; a value of .98 would print as $0.98 with 3 leading spaces.
So you see there's no reason to include Zs too.
4) What is INITIAL ? ( we use it program name). This tells the compiler to generate code that results in Working Starage being reinitialized whenever it is CALLed. That is the values you originally coded in the VALUE clauses in the pgm are reinserted in the data fields. This means that whatever values were there when the previous execution ended are overlaid and no longer available. | <urn:uuid:1ae62547-dbbc-42ee-b516-8c340c374626> | CC-MAIN-2017-04 | http://ibmmainframes.com/about1828.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00053-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903695 | 447 | 2.640625 | 3 |
Definition: The smallest set of vertices in an undirected graph which separate two distinct vertices. That is, every path between them passes through some member of the cut.
See also minimum cut.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 19 April 2004.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "minimum vertex cut", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 19 April 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/minvertexcut.html | <urn:uuid:a5414b9b-9cce-4587-9226-556e74bf756a> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/minvertexcut.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894378 | 159 | 2.96875 | 3 |
Astrophysicists at the CEA (the French Alternative Energies and Atomic Energy Commission) and CNRS (the French National Center for Scientific Research) have achieved a major breakthrough. Thanks to a set of highly precise supercomputer simulations, the scientists have a much keener understanding of the turbulence that is generated when two galaxies collide. The study used very high resolution numerical simulations in which the disordered motions of the gas contained in galaxies is seen at extremely small-scale resolutions.
Appearing in the May 2014 edition of the Monthly Notices of the Royal Astronomical Society, Letters, the study resolves a long-standing cosmic mystery, a phenomenon called “starbursts.” Stars form when gas that is contained in a galaxy becomes dense enough to collapse in on itself, most often as a result of gravity. As they form, they emit very intense ultraviolet and infrared light. When two galaxies collide, the effect is multiplied and a large number of stars blink into existence. The peak emission of light that results is called a “starburst.”
Astrophysicists had observed such galactic collision light shows before, but could not explain why the stars formed. When galaxies collide, the galactic gas becomes more disordered, and the vortices of turbulence that result should theoretically prevent the gas from condensing due to gravity. In this scenario, turbulence would actually slow down, putting the breaks on star formation. The reasoning seemed solid, except for the fact that it the exact opposite of what actually happens. For the first time, the new simulations, some of the most sophisticated sky studies yet, fill in the missing details.
The simulations show that the collision changes the nature of the turbulence at a very small scale. The vortex effect is replaced by a gas compressive mode that enables the turbulence to facilitate the collapse of the gas by compressing it. This compressive turbulence effect sets off an excess of dense gas that causes multiple stars to form all throughout the galaxies. Compressive turbulence not only explains the mystery of star formation, it also sheds light on why some galaxies form more stars than others.
The research would not have been possible without the assistance of some of the most powerful supercomputers in the world. These high resolution models – which represent two real-life galaxies: the Milky Way and the “Antennae Galaxies” – employed two supercomputers that are part of the European research infrastructure, PRACE: GENCI’s Curie supercomputer, housed at the CEA’s Computing Center, and the SuperMUC supercomputer, located in Leibniz-Garching, Germany.
The Milky Way simulation, carried out on the Curie supercomputer, covered a span of about 300,000 light-years, with a resolution of 0.1 light-year. This part of the study used the equivalent of 12 million computing hours over a period of 12 months.
The galactic collision was simulated on the SuperMUC supercomputer, which has 4,096 processors running in parallel. It took 8 million computing hours and a period of eight months to simulate a cube that is 600,000 light-years on each side, with resolution of 3 light-years. According to CEA officials, these are the most realistic simulations to date of the observed events.
“These new simulations have achieved a level of precision never seen before, making it possible to resolve structures with a mass 1,000 times smaller than ever before,” notes a press release. “This has enabled the astrophysicists to track the evolution of the galaxies over hundreds of thousands of light-years, and to explore a mere fraction of a light-year in detail. Thanks to this decisive advantage, new physical effects emerged, revealing the complex nature of turbulence.” | <urn:uuid:e13fe9ab-9db0-430b-9a3f-22bdb7dfe1f6> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/05/19/mega-simulations-resolve-starburst-puzzle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938672 | 779 | 4.21875 | 4 |
You'd need an umbrella made of kryptonite if you were to go walking on Mars apparently.
NASA scientists using images from the space agency's Mars Reconnaissance Orbiter (MRO) have estimated that the planet is bombarded by more than 200 small asteroids or bits of comets per year forming craters at least 12.8 feet (3.9 meters) across.
Using MRO's High Resolution Imaging Science Experiment (HiRISE) camera, NASA researchers spotted 248 new impact sites on parts of the Martian surface in the past decade, using images from the spacecraft to determine when the craters appeared - MRO has been looking at Mars since 2006. The 200-per-year planet wide estimate is a calculation based on the number found in a systematic survey of a portion of the planet, NASA stated.
These asteroids or comet fragments typically are no more than 3 to 6 feet (1 to 2 meters) in diameter. NASA noted that space rocks too small to reach the ground on Earth cause craters on Mars because the Red Planet has a much thinner atmosphere. NASA also added that the meteor over Chelyabinsk, Russia, in February was about 10 times bigger than the objects that dug the fresh Martian craters.
The rate is equivalent to an average of one each year on each area of the Martian surface roughly the size of the U.S. state of Texas. Earlier estimates pegged the cratering rate at three to 10 times more craters per year. They were based on studies of craters on the Moon and the ages of lunar rocks collected during NASA's Apollo missions in the late 1960s and early 1970s.
Counting the rate at which new craters appear serve as researcher's best way to estimate the ages of exposed landscape surfaces on Mars and other worlds.
HiRISE operations are based at The University of Arizona in Tucson. According to the school, the HiRISE camera is the most powerful camera ever to orbit another planet. It has taken thousands of black-and-white images, and hundreds of color images, since it began science operations in 2006. A single HiRISE image will often be a multigigabyte image that measures 20,000 pixels by 50,000 pixels, which includes a 4,000-by-50,000 pixel region in three colors. It can take a computer up to three hours to process such an image.
Check out these other hot stories: | <urn:uuid:b0a87faa-4276-4b28-81bd-fde4b69bb491> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224643/data-center/nasa--mars-hit-by-some-200-small-asteroids-or-bits-of-comets-per-year.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00475-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946677 | 494 | 3.921875 | 4 |
Wavelength Division Multiplexing (WDM) technology provides an effective approach to the rapid increase of bandwidth and capacity requirement in communication systems and networks. In WDM systems. all kinds of photonics devices based on arrayed waveguide gratings (AWG) have been the key components used as wavelength division multiplexing/demultiplexing.
What’s Arrayed Waveguide Grating (AWG)?
Arrayed waveguide grating (AWG), also known as the optical phased array (PHASAR), phased-array waveguide grating (PAWG), or waveguide grating router (WGR), is a device built with silicon planar lightwave circuits (PLC), that allows multiple wavelengths to be combined and separated in a dense wavelength-division multiplexing (DWDM) system. It has become increasingly popular as a wavelength multiplexer and demultiplexer (MUX/DeMUX) for dense wavelength division multiplexing (DWDM) and very high density wavelength division multiplexing (VHDWDM) applications.
AWG Structure & Working Principle
Based on the substrate, an AWG consist of an array of waveguides (also called phased array) and two couplers (also called the free propagation region – FPR). One of the input waveguides carries an optical signal consisting of multiple wavelengths λ1 – λn into the first (input) coupler, which then distributes the light amongst an array of waveguides, as the following picture.
The light subsequently propagates through the waveguides to the second (output) coupler. The length of these waveguides is chosen so that the optical path length difference between adjacent waveguides, dL equals an integer multiple of the central wavelength λc of the demultiplexer. For this wavelength the fields in the individual arrayed waveguides will arrive at the input of the output coupler with equal phase, and the field distribution at the output of the input coupler will be reproduced at the input of the output coupler. Linearly increasing length of the array waveguides will cause interference and diffraction when light mixes in the output coupler. As a result, each wavelength is focused into only one of the N output waveguides (also called output channels).
Or we can simply understand it by take the following picture for example: The incoming light (1) traverses a free space (2) and enters a bundle of optical fibers or channel waveguides (3). The fibers have different length and thus apply a different phase shift at the exit of the fibers. The light then traverses another free space (4) and interferes at the entries of the output waveguides (5) in such a way that each output channel receives only light of a certain wavelength. The orange lines only illustrate the light path. The light path from (1) to (5) is a demultiplexer, from (5) to (1) a multiplexer.
Types of AWG
Various AWGs are available on the market. In general, they can be divided into two main groups according to the material used: low-index and high index AWGs. Low-index AWGs with a typical refractive index contrast of 0.75% have the advantage of their compatibility with optical fibers, and hence very low coupling losses between output waveguides and optical fibers. The disadvantage of such AWGs is their size, which corresponds with the waveguide curvature that may not lie below a critical value. As a result, increasing the channel counts and narrowing the channel spacing leads to a rapid increase in the AWG size and this, in turn; causes the deterioration in optical performance like higher insertion loss and, in particular, higher channel crosstalk. In contrast to this, high-index AWGs feature a much smaller size but also much higher coupling losses.
As the number of waveguides used to carry the information in DWDM systems is generally a power of 2, the AWGs are designed to separate two different wavelengths, or 4, 16, 32, 64 etc. In addition to this, 40- and 80-channel AWGs are also available. Systems being deployed at present usually have no more that 40 wavelengths, but technological advancements will continue to make higher numbers of wavelengths possible.
The wavelengths being used to transmit the information are usually around the 1550 nm region, the wavelength region in which optical fiber performs best (it has very low loss and low attenuation). Each wavelength is separated from the previous one by a multiple of 0.8 nm (also referred to as 100 GHz spacing, which is the frequency separation). However, they can be also separated by 1.6 nm (i.e. 200 GHz) or another spacing as long as it is a multiple of 0.8 nm. These channel spacings refer to WDM systems. On the other hand, increasing capacity demands mean the present aim is to squeeze even more wavelengths into an even tighter space, which may result in as little as half the regular spacing, i.e. 0.4 nm (50 GHz) or even a quarter, 0.2 nm (25 GHz). Such narrow channel spacings are being used in DWDM systems. However, the recent rapid growth in network capacity has meant that even higher capacity transmission is required in DWDM systems. To meet the growing capacity demands, it is necessary to continue increasing the channel counts of these AWGs as far as possible, i.e. decreasing their channel spacing going down to 10 GHz or less. Such AWGs play a key role in the very high density WDM applications.
The optical signals transmitted can have different shapes. The most common is the Gaussian passband (or Gaussian shape) which features very low insertion loss. In contrast to this, the flat-top passband suffers far higher insertion losses but features much better detection conditions. Somewhere between these two shapes lies so-called semi-flat passband, this is also often used in DWDM systems.
A special part of the AWG family creates so-called “cyclic” or “colorless” AWG with an usual 100 GHz or 50 GHz channel spacing and 8 (or 16) output channels. Here applying a special design such AWG will repeat its orders and can work in any predefined channel band. In other words, the same colorless AWG can work on channels 1 to 8 or 9 to 16 or 17 to 24, and so on.
Temperature-Insensitive (Athermal) AWG vs Thermal AWG
In order to use AWG devices in practical optical communications’ applications, precise wavelength control and long-term wavelength stability are needed. However, if the temperature of an AWG fluctuates, the channel wavelength will change according to the thermal coefficient of the material used. Using the thermo-optic effect, a temperature controller can be build into the AWG to control and tune the device to the ITU grid or any other desired wavelength. With the technology developed, there is launching a new type of AWG, which called Atermal AWG. This kind of AWG is based on the silica on silicon technology and no electrical power is required. Here is a comparison between Athermal AWG and Thermal AWG.
AWG Advantages & Applications
The key advantage of the AWG is that its cost is notdependent on wavelength count as in the dielectric filter solution. Therefore it suits metropolitan applications thatrequire the cost-effective of large wavelength counts. Other advantage of the AWG is the flexibility of selectingits channel number and channel spacing and, as a result,various kinds of AWG’s can be fabricated in a similar manner.
You may have a question that where can I use AWGs in my optical network. Generally AWG devices serve as multiplexers, demultiplexers, filters, and add-drop devices in optical WDM and DWDM applications: | <urn:uuid:bcaabfeb-8ed8-46bb-8b7a-c62b2a67deb5> | CC-MAIN-2017-04 | http://www.fs.com/blog/brief-introduction-of-arrayed-waveguide-gratings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00383-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924954 | 1,663 | 3.765625 | 4 |
There's been talk for years of a more accurate Global Positioning System. The current GPS system tells you roughly where you are, but it's only accurate to within a few feet. That vagueness means that although it's fine for mapping, it isn't good enough for narrowly targeted proximity or geo-fencing that can be used in e-commerce.
Existing GPS has been used in toll-road billing, and has been fine-tuned for surveying with large, expensive antennas, but it's currently not much good for tracking customers as they choose a concert seat, for example.
The European Space Agency is building a new, highly-accurate system called Galileo that they say will be fully functional by about 2020.
Galileo, if you scan its fine print, includes authentication. That's an accurately timed, trusted location factor that will allow advertisers to not only pitch you down to the micro-location, but also let you perform financial transactions.
You'll be able to choose your seat at an open-air sporting event and pay for it too. Turnstiles become redundant, among other things. You should even be able to place your hand in a vending machine and get billed.
But, as I say, this is a ways away and Galileo still has to get more of its birds into the sky for it to work properly—a 2014 launch placed two satellites in the wrong orbit, which hasn't helped progress.
In the meantime, a University of Texas at Austin project to improve GPS accuracy using existing tech is definitely interesting.
The researchers say that they've gotten GPS location errors "from the size of a large car to the size of a nickel - a more than 100 times increase in accuracy."
And they've done it with software in existing smartphone-quality chipsets, antennas, and the existing GPS constellation.
The process involves extracting accurate data, called carrier phase measurements, carried by GPS anyway, and used by surveyors and special interests, like mining. Previously, collecting those accurate measurements required special antennas.
UT Austin's software-defined GPS receiver ideas don't have an authentication factor like Galileo, but do have accuracy and use GPS's global reference frame. It could be in a small form factor, ultimately.
That improvement in accuracy, with cost savings and size, could make the system suitable for automobile collision prevention, outdoor virtual reality headset games, the Internet of Things, and drone deliveries.
The Cockrell School of Engineering scientists at UT Austin say poor multipath suppression in existing smartphone-grade gear is causing ambiguity in processing, and that degrades accuracy.
Multipath is where signals travel along many paths—not always for the best results. It's what caused ghosting on old analog television broadcasts, if you remember those. Reflection can cause multipath.
The scientists say that it isn't the antennas, which is what was previously thought to be the impediment. In fact, they say that chips in smartphones can be better quality than those found in costly surveying GPS receivers.
You don't need expensive, bulky gear to get accuracy, they reckon. You can do it all with software and a form of random antenna motion where the smartphone is gently moved in wavelength-sized increments to reduce multipath.
The researchers have written about their tests and findings in a GPS World article, if you want to read more.
Next up is to figure out a way to further reduce the error-creating multipath—possibly by estimating trajectories for the accuracy-enhancing antenna motion.
A snap-on smartphone accessory is currently being developed by the group, which created a Samsung-deal financed startup called Radiosense.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:270bb54e-9929-477a-8dc3-a714361832c6> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2920157/internet-of-things/gps-breakthrough-with-low-cost-centimeter-accuracy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958958 | 769 | 2.765625 | 3 |
A groundbreaking Theoretical Linguistic Framework
The Meaning-Text Theory is a unique AI method that uses Lexical Functions to compute Semantics. We’ve implemented software based on the the way Meaning-Text Theory (MTT) conceives natural language from lexicon to semantics, which has led our linguistic team to create detailed and specific descriptions of the lexical units in a number of different languages.
The two men behind the MTT model
First put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk, MTT is a theoretical linguistic framework for the construction of models of natural language. A theory that provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer AI applications.
The power of Lexical Function
One important discovery of meaning–text linguistics was the recognition that the elements in the lexicon (lexical units) of a language can be related to one another in an abstract semantic sense. These relations are represented in MTT as Lexical Functions (LF). Thus, the description of the lexicon in a crucial aspect of our software.
Lexical Functions are a tool designed to formally represent the relations between lexical units. This allows us to formalize and describe — in a relatively simple manner — the complex lexical relationship network that languages present and assign a corresponding semantic weight to, for each element in a sentence. Most importantly — they allow us to relate analogous meanings no matter which form they’re presented.
The meaning in MTT
Natural languages are more restrictive than they may seem at first glance. In the majority of cases, we encounter frozen expressions sooner or later. And although these have varying degrees of rigidity, ultimately they are fixed, and must be described according to some characteristic, for example:
Obtain a result
Do a favor
Ask / pose a question
Raise a building
All of these examples show us that it’s the lexicon that imposes selection restrictions, since we would hardly find “do a question” or “raise a favor” in a text. Indeed the most important factor when analyzing these phrases is that, from a meaning point of view, the elements don’t have the same semantic value. As illustrated in the examples above, the first element provides little information, with all of the meaning or semantic weight provided by the second element.
The crucial matter here is that the semantic relationship between the first and second element is exactly the same in every example. Roughly, what we’re saying is “make X” (a result, a joke, a favor, a question, a building). This type of relation can be represented by the “Oper” Lexical Function.
Complexity is easy for our linguists
MTT collects around 60 different types of Lexical Functions. This allows, among other things, the description of relations such as synonymy (buying and purchasing are identical actions), hypernymy and hyponymy (a dog is a type of animal) and other relations among lexical units at the sentence level. This includes the Oper that we mentioned before, or ones expressing the concept “a lot”, i.e., if you smoke a lot you are a heavy smoker, but if you sleep a lot, you are not a “heavy sleeper”. All we can say is that you sleep like a log.
Our linguists adapt the principles of the Meaning-Text Theory while describing the languages supported. User questions may be completely different on the surface but the questions underlying meaning is the same, and thus correctly understood by our semantic-based searches. The upshot is that users get fast, accurate results from their queries.
Another example of MTT in action
Let’s take these user questions:
Purchasing a ticket for an overweight person
I want to buy a ticket for someone who is obese
Even though the words are different, the meaning conveyed is the same way in both cases. So both will get the same answer from a Virtual Assistant. At Inbenta, our Semantic Search Engine is built within a rich and complex network of lexical relations so that it understands what users mean with their queries, regardless of the exact words they use to pose their questions. | <urn:uuid:6e2edb20-de16-45ce-aa83-ada359ba436c> | CC-MAIN-2017-04 | https://www.inbenta.com/en/technology/the-meaning-text-theory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00437-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929625 | 895 | 3.1875 | 3 |
APIs are the ‘glue of the connected web’ and all its IoT extensions, but their own interoperability is key to their existence…
Use of so-called Application Programming Interfaces (APIs) is on the rise. Discussion surrounding API technology is no longer the sole preserve of the developer community and APIs are now entering business-tech language conversations.
But what are APIs and why is their interoperability so important and who is responsible for the connection factor?
Application Programming Interfaces for dummies
Every IoT device has a degree of software ascribed to it. In order to connect to central IT systems, other devices and various other management controllers these devices use APIs.
As clarified here, APIs exist to form an irreplaceable communications bond between different software program elements and data streams. APIs describe and demarcate the route for a developer to code a program (or program component) so that it is able to request services from an operating system (OS) or other application.
As explained at the above link, “APIs have the ability to ‘speak’ to and ‘glue’ together any required information components around a system. They are often ‘released’ to third-party programmers who will want to connect application elements and services together. APIs have a required syntax and are implemented by function calls composed of verbs and nouns — simple, well mostly.”
The API is the backbone of the IoT
The ProgrammableWeb.com writes this week to say that the API in now the backbone of the IoT.
“If you work with Application Programming Interfaces (and if you’re reading this you probably do), beyond the dollar signs, there is one more staggering number you should be thinking about: McKinsey finds that forty percent of the whole value of IoT hinges on its interoperability. For the IoT to reach its full potential, privacy and security concerns will have to be addressed, but they won’t even matter if the devices can’t even connect,” writes Jennifer Riggins.
So who is managing all the APIs?
TIBCO isn’t the only company in the API management space (the aptly named Apigee is a key player, IBM is strong, Intel and MuleSoft do a good job and we also have to mention 3scale, Axway, CA Technologies, Informatica, Intel Services, SOA Software and WSO2… but it’s TIBCO that we will mention here due a new product release.
The firm validates its claim for a CAPS name moniker by explaining that TIBCO stands for The Information Bus COmpany … and the firm has just released Mashery Enterprise API as a piece of API management software supplied as a SaaS subscription.
What is API management?
TIBCO’s approach to API management is to explain that APIs need to be created, published, integrated, secured and ‘choreographed’ such that they are in the right place for the right IoT device at the right time.
“API usage is evolving and becoming increasingly sophisticated; hence the need to evolve the API management platform to offer all necessary capabilities such as API creation, integration and management, within a single cloud-based service,” said Matt Quinn executive vice president, products and technology and chief technology officer, TIBCO. “To deliver the diverse and growing needs of digital business, an API management platform now needs to provide capabilities, such as advanced routing and transformation, that have historically been the domain of API development and integration middleware. Mashery Enterprise brings these capabilities together to provide a single, modern API platform.”
TIBCO Mashery Enterprise users can build and test APIs, define run-time governance policies, migrate APIs between environments, and monitor and report on API usage.
The firm says that Mashery Enterprise platform allows users to expose data and services for sharing with developers to expand market reach and generate new revenue streams.
APIs will eventually (very soon) enter the public consciousness at a business-tech level if they haven’t done so already. If you know what an ‘app’ is, then you should know what an API is. | <urn:uuid:1a61bc16-fa39-4a47-96a5-87efb4a8371a> | CC-MAIN-2017-04 | https://internetofbusiness.com/iot-value-comes-interoperability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00348-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935075 | 879 | 3.046875 | 3 |
Last week Japanese IT company Fujitsu used the K supercomputer to conduct the world’s first simulation of the magnetization-reversal process in a permanent magnet. According to an announcement from the firm, “this opens up new possibilities in the manufacture of electric motors, generators and other devices without relying on heavy rare earth elements.”
The process of magnetization reversal is a worthy avenue of scientific study, but to accurately model magnetic materials requires an enormous amount of computing power. The technique that Fujitsu developed combines a finite-element method with micromagnetics, the process of dividing magnets into regions the size of a few atoms. This technology makes it possible to compute magnetization processes with complex microstructures on a nanometer scale, many times smaller than conventional methods.
The research is viewed as a stepping stone toward the development of new magnetic materials, including strong magnets free from heavy rare earth elements. This is important because the supply for these elements is limited. State-of-the-art motors like the ones used in hybrid and electric vehicles rely on these heavy rare earth elements, so the advent of new super magnets would be a boon to this growing sector.
The simulations of magnetization reversal in rare-earth magnets were performed on the K supercomputer in cooperation with Japan’s National Institute for Materials Science (NIMS). On September 5, the results of this simulation were presented jointly by Fujitsu and NIMS at the 37th Annual Conference on Magnetics in Japan being held at Hokkaido University.
Developed by Fujitsu, the K supercomputer is installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan. It is the number four system on the most recent TOP500 list with a performance of 10.51 petaflops (Linpack).
The next step is for researchers to perform ultra-large-scale computations on the K system and develop a “multi-scale magnetic simulator.” | <urn:uuid:8a7d3411-c0b1-481f-b5aa-96569b101511> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/09/09/supercomputing_for_super_magnets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00062-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927167 | 409 | 3.328125 | 3 |
This Big Data training gives one the background necessary to start doing analyst work on Big Data. It covers - areas like Big Data basics, Hadoop basics and tools like Hive and Pig - which allows one to load large data sets on Hadoop and start playing around with SQL Like queries over it using Hive and do analysis and Data Wrangling work with Pig. The Big Data online course also teaches Machine Learning Basics and Data Science using R and also covers Mahout briefly - a Recommendation, Clustering Engine on Large data sets. The course includes hands-on exercises with Hadoop, Hive , Pig and R with some examples of using R to do Machine Learning and Data Science work
What am I going to get from this course?
- Students will get a good idea of Big Data Landscape, Learn basics of Big Data and Hadoop and HDFS.
- Students will also learn to use tools like - Hive and Pig - both from a theoretical aspect as well as Hands on.
- Students will Learn some amount of R and SparkR ( a big data processing framework )
- Students will learn about Mahout and also about Data Science and where it is used
- Students will learn basics of some Data Science Algorithms like - Decision Trees, Naive Bayes and Clustering algorithms and do hands on work with them
- Students will learn about R on Hadoop - tools and solutions
- Students will also learn how to use Hadoop Virtual Machines on their laptop | <urn:uuid:86aef2e1-cbb0-415d-8779-e327b3100394> | CC-MAIN-2017-04 | https://www.experfy.com/training/courses/big-data-analyst?code=BIGDATA25 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913011 | 306 | 2.859375 | 3 |
This is the first of I don't know how many articles about "Big Data". I was going to start the first article by saying that big data is not the same as Hadoop but then I realised that I had better describe what Hadoop is first, as otherwise that statement wouldn't make sense. So you can take this as a sort of preface to the whole series.
Hadoop is a) an Apache Software Foundation development project and b) the original developer's son's stuffed toy elephant (hence the Hadoop logo and the menagerie of complementary projects named after animals). It has three bits to it: a storage layer, a framework for query processing and a function library. The last is not terribly important so I'll concentrate on the first two. There are also loads of things built on top of or around Hadoop (Hive, Pig, Zookeeper et al) that are not part of Hadoop per se - I'll come back to these in a further article.
The storage layer is HDFS (Hadoop distributed file system) and it stores data in blocks. Unlike a relational database, in which block sizes are typically 32Kb or less, in HDFS block sizes are, by default, 64Mb. This supports serial processing, as opposed to the random I/O that you find in a relational database. It is good for reading large volumes of data but useless for transaction processing or operational business intelligence.
Further, HDFS has redundancy built-in. Designed to run across a cluster of hundreds or thousands of low cost servers, HDFS expects those servers and their disk drives to break down on a regular basis. For this reason each data block is stored, by default, three times within a Hadoop environment. Note the implication of this: however much data you want to store you will require at least three times as much capacity as data (bearing in mind factors such as compression). This also has implications for processing functions, which are despatched to the servers on which the data resides rather than bringing the data to the processing, which is the norm in relational environments. A further point to note is that new data is always appended to what is already present: you cannot insert data.
OK, enough about HDFS for the time being, what about MapReduce? MapReduce is a programming framework for parallelising queries, which greatly speeds up the (batch - this is not suitable for real-time) processing of large datasets running, in conjunction with HDFS, on low-cost hardware. While it is usually thought of as having two steps (Map and Reduce) it actually has three. The first, the Map phase, reads the data and converts it into key/value pairs (a discussion of key/value databases will be the subject of a further article or articles). An example of a key/value pair might be "Name: Howard" "Employee number: 666" (just kidding) where Name is the key and Employee number is the value. Imagine that you had multiple employee files, then you would have a separate map task for each file.
Secondly, there is a process called Shuffle, which assigns the output from mapping tasks to specific Reduce tasks, which combine these results. All keys with the same value must be sent to the same reduce task.
There is obviously more to it than I have outlined here but these are the essentials. So, to return to the initial question: what is Hadoop? The short answer is that it is a support framework for MapReduce. The key word is "a": you can run MapReduce on Aster Data (Teradata) so you don't need Hadoop at all, you can deploy HBase, which is a column-oriented but non-relational database management system on top of HDFS, or you can deploy RainStor alongside HDFS, or you can replace HDFS with IBM's GPFS-SNC or there are various other options. Why you might want to do any of these things are subjects for another day but the important thing to remember at this stage is that the key element is the query processing provided by MapReduce rather than anything else. | <urn:uuid:a191472f-d402-472c-adc7-e8c5ff6e9853> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/hadoop/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00356-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958078 | 865 | 2.84375 | 3 |
Computer memory is currently undergoing something of an identity crisis. For the past 8 years, multicore microprocessors have been creating a performance discontinuity, the so-called memory wall. It’s now fairly clear that this widening gap between compute and memory performance will not be solved with conventional dynamic random-access memory (DRAM) products. But there is one technology under development that aims to close that gap, and its first use case will likely be in the ethereal realm of supercomputing.
About a year and half ago, memory-maker Micron Technology came up with the Hybrid Memory Cube (HMC), a multi-chip module (MCM) device specifically designed to scale the memory wall. The goal was to offer a memory technology that matches the needs of core-happy CPUs and GPUs and do so in a way that is attractive to computer makers.
In a nutshell, HMC glues a logic control chip to a 3D memory stack, all of which are connected with Through Silicon Vias (TSVs). The technology promises not only to deliver an order of magnitude performance increase, but also to keep pace with future microprocessors as those designs continue to add cores. Micron claims a single HMC device can deliver 15 times the performance of today’s DDR3 modules and can do so with 70 percent less energy and in 90 percent less space. Latency is expected to decrease as well, although no specific claims are being made in that regard.
According to Dean Klein, VP of Micron’s Memory System Development, the problem with conventional DRAM technology is that they’ve pushed CMOS technology about as far as it’s going to go under the DDR model. Although DDR4 products are slated to ship before this end year, there is currently no DDR5 on the drawing board. That’s a problem, especially considering that DDR5 would probably be coming out toward the end of the decade, just when the first exascale supercomputers are expected to appear.
But even if DDR evolution is maintained through 2020, it would almost certainly fall short of the needs of exascale computing. Such machines are expected to require per-node memory bandwidth in excess of 500 terabytes/second. Klein says they just can’t boost the signal rates much more on the DDR design, and if they tried, power consumption would go in the wrong direction.
The HMC design gets around those limitations by going vertical and using the TSV technology to parallelize communication to the stack of memory chips, which enables much higher transfer rates. Bandwidth between the logic and the DRAM chips are projected to top a terabit per second (128 GB/second), which is much more in line with exascale needs.
Another important aspect of the design is that the interface abstracts the notion of reads and writes. That means a microprocessor’s memory controller doesn’t need to know about the underlying technology that stores the bits. So one could build an HMC device that was made up of DRAM or NAND flash, or even some combination of these technologies. That frees up the microprocessor and other peripheral devices from being locked into a particular memory type and, in general, should make system designs more flexible.
To move HMC beyond a science project, Micron put together a consortium and attracted key players, including competitors, to back the technology. Today the Hybrid Memory Cube Consortium consists of some of the industry’s heaviest hitters: Samsung, Microsoft, IBM, ARM, HP, IBM, Altera, Xilinx, Open-Silicon, and SK hynix. The group’s immediate goal is to develop a standard interface for the technology so that multiple manufacturers can build compliant HMC devices. The formal standard is due out later this year.
A key partner with Micron has been Intel, a vendor with a particular interest in high-performance memory. The chipmaker’s immediate motivation to support HMC is its Xeon line (including, soon, the manycore Xeon Phi), which is especially dependent on performant memory. In fact, without such memory, the value of high-end server chips is greatly diminished, since additional cores doesn’t translate into more performance for the end user. The relative success of future multicore and manycore processors will depend, to a large extent, on memory wall-busting technology.
Further out, Intel is looking at HMC as a technology to support its own aspirations to develop components for exascale supercomputers. Last year Intel helped Micron build an HMC prototype, which CTO Justin Rattner talked up at last September’s Intel Developer Forum. Although the chipmaker will presumably assist Micron if and when it starts churning out commercial silicon, neither company has offered a timeline for an HMC product launch. Klein did say that its prototype has been in the hands of select customers (HPC users and others) for several months, and their intent is to commercialize the technology.
And not just for high performance computing market. Although supercomputing has the greatest immediate need for such technology, other application areas, like networking, could also benefit greatly from HMC’s high bandwidth characteristics. And because of the promised power savings, even the high-volume mobile computing market is a potential target.
The biggest challenge for HMC is likely to be price. In particular, the use of TSV and 3D chip-stacking is in its infancy and by all accounts, will not come cheaply — at least not initially. And when you’re talking about 10PB of memory for an exascale machine or 1MB for a mobile phone, cost is a big consideration.
Other technologies like HP’s memristor, Magneto-resistive Random-Access Memory (MRAM), or Phase Change Memory (PCM) could come to the fore in time for the exascale era, but each one has its own challenges. As Klein notes, there is no holy grail of memory that encapsulates every desired attribute — high performance, low-cost, non-volatile, low-power, and infinite endurance.
The nice thing about HMC is that it can encapsulate DRAM as well as other memory technologies as they prove themselves. For the time being though, dynamic random-access memory will remain as the foundation of computer memory in the datacenter. “DRAM is certainly going to with us, at least until the end of the decade,” admits Klein. “We really don’t have a replace technology that looks as attractive.” | <urn:uuid:e64ceb4a-2402-43f0-95a0-809f53581cb8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/07/10/hybrid_memory_cube_angles_for_exascale/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00356-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943028 | 1,366 | 2.734375 | 3 |
Introducing IPv6 | Classifying IPv6 Addresses
Previously, in part 1 of this series, you learned the basics of IPv6 address construction and format. Now you can complement that information with this peek into the various types of IPv6 addresses; critical knowledge for anyone looking to understand the mechanics of the new Internet protocol. Go ahead, I know you want to get started:
Classifying IPv6 Addresses
As with IPv4, an IPv6 address serves as an identifier for an interface or group of interfaces. Also like IPv4, IPv6 addresses come in several types, based on how they represent those interfaces. IPv6 has three types of addresses:
- Unicast: An IPv6 unicast address is used to identify a single interface. Packets sent to a unicast address are delivered to that specific interface.
- Anycast: IPv6 anycast addresses identify groups of interfaces, which typically belong to different nodes. Packets destined to an anycast address are sent to the nearest interface in the group, as determined by the active routing protocols.
- Multicast: An IPv6 a multicast address also identifies a group of interfaces, again typically belonging to different nodes. Packets sent to a multicast address are delivered to all interfaces in the group.
There are no broadcast addresses in IPv6. The functions served by broadcast addresses in IPv4 are provided by multicast addresses in IPv6.
The high-order (left-most) bits of an IPv6 address are used to identify its type, as shown here:
Address Type Binary Prefix Hex Prefix Unspecified 0000...0 (128 bits) ::/128 Loopback 000...01 (128 bits) ::1/128 IPv4 Mapped 00...01111111111111111 (96 bits) ::FFFF/96 Multicast 11111111 FF00::/8 Link-Local Unicast 1111111010 FE80::/10 Unique Local Unicast (ULA) 1111110 FC00::/7 Global Unicast (everything else)
Anycast addresses are taken from the global unicast pool. Anycast and unicast addresses cannot be distinguished based on format.
One of the primary changes from IPv4 to IPv6 is that multicast addressing support is improved and expanded in IPv6. Here’s a figure that illustrates the format for IPv6 multicast addresses:
All network operators need to have a basic understanding of multicast address format and function when working with IPv6. Because there are no broadcast addresses in IPv6; multicast is used in its place, in addition to all of the ways multicast was used in IPv4. The three fields that you must be most familiar with are indicator, scope, and Group ID. These are the fields used by all IPv6 multicast traffic; including routing protocol messages.
The indicator is always 11111111 (FF in hex notation) because this is the high-order bit pattern that indicates that an IPv6 address is a multicast address.
Scope limits the transmission of multicast packets to one of the defined IPv6 scopes. The four possible values are:
- node-local (1)
- link-local (2)
- site-local (5)
- global (E)
Group ID refers to a multicast group within the given scope. Some examples of assigned multicast groups are:
- all nodes (1) – valid scope of 1 or 2
- all-routers (2) – valid scopes are 1, 2 or 5
- OSPF Designated Routers (6) – only valid with scope of 2
- NTP (101) – valid in any scope
Here are a few examples of multicast addresses, notice both the scope and group ID of each:
FF02::1 All nodes on the same link as the sender, this address replaces the broadcast function in IPv4. FF02::6 All OSPF DRs on the same link as the sender. FF05::101 All NTP servers on the same site as the sender.
To learn more about IPv6 multicast addresses, see: RFC 2375 “IPv6 Multicast Address Assignments,” RFC 3306 “Unicast-Prefix-based IPv6 Multicast Addresses,” and RFC 3307 “Allocation Guidelines for IPv6 Multicast Addresses.”
Global Unicast Addresses
As with IPv4, unicast addresses are the most common type of IPv6 address you will work with. Because of the abundance of addresses available with IPv6, it is very likely that virtually every machine attached to your network will have at least one global unicast address assigned to each interface. (Read that sentence again, if you don’t mind.)
Because of this, all IPv6 address space not currently specified for another purpose is reserved for use as global unicast addresses. Only a single /3 is currently allocated for use however. The IETF (Internet Engineering Task Force) has assigned binary prefix 001 (hex prefix 2000::/3) to IANA (Internet Assigned Numbers Authority) for use on the Internet. This means that all valid global unicast addresses begin with the 2000::/3 prefix, for now.
The format for a typical IPv6 Global Unicast address is illustrated here:
- Global Routing Prefix: The prefix assigned to a site. Typically this is hierarchically structured as it passes from IANA (Internet Assigned Numbers Authority) to the Regional Internet Registry (RIR) to an ISP (Internet Service Provider) or LIR (Local Internet Registry) and then to a customer or a specific customer location. In each of these transactions, a smaller prefix is assigned downstream – creating the hierarchy.
- Subnet ID: The prefix assigned to a particular link or LAN within the site. In the case of a /48 being assigned to a site, there are 16 bits available for Subnet IDs; this allows a maximum of 65,535 /64 subnet prefixes at that location!
- Interface ID: All unicast IPv6 addresses (except those which begin with 000) are required by RFC 4291 “IPv6 Addressing Architecture” to have a 64-bit interface identifier in Modified Extended Unique Identifier-64 (MEUI-64) format. Interface IDs must be unique within a subnet prefix and are used to identify interfaces on a link. Because of this, /64 prefixes are the smallest common subnet you will use in IPv6.
For more information on Modified EUI-64 formatted interface IDs, see section 2.5.1 and Appendix A in RFC 4291 “IPv6 Addressing Architecture.
Special IPv6 Addresses
If you re-examine the table of address types near the beginning of this post, you can see there are several special addresses and address groups within IPv6. Some of these will be familiar to you from your work with IPv4 addressing, and some are new in IPv6:
- Unspecified address (::/128): This all-zeros address refers to the host itself when the host does not know it’s own address. The unspecified address is typically used in the source field by a device seeking to have its IPv6 address assigned.
- Loopback address (::1/128): IPv6 has a single address for the loopback function, instead of a whole block as is the case in IPv4.
- IPv4-Mapped addresses (::FFFF/96): A /96 prefix leaves 32 bits, exactly enough to hold an embedded IPv4 address. IPv4-Mapped IPv6 addresses are used to represent an IPv4 node’s address as an IPv6 address. This address type was defined to help with the transition from IPv4 to IPv6.
- Link-Local unicast addresses (FE80::/10): As the name implies, Link-Local addresses are unicast addresses to be used on a single link. Packets with a Link-Local source or destination address will not be forwarded to other links. These addresses are used for neighbor discovery, automatic address configuration and in circumstances when no routers are present.
- Unique local unicast addresses (FC00::/7): Commonly known as ULA, this group of addresses is for use locally, within a site or group of sites. Although globally unique, these addresses are not routable on the global Internet. This author looks at ULA as a kind of upgraded rfc1918 (private) address space for IPv6.
For some background on IPv4-mapped IPv6 addresses, see RFC 4038 “Application Aspects of IPv6 Transition.” Also, be aware that there may be security risks associated with using IPv4-mapped addresses. See draft-itojun-v6ops-v4mapped-harmful-02, “IPv4-Mapped Addresses on the Wire Considered Harmful” for more information.
For more information on ULA, read RFC 4193 “Unique Local IPv6 Unicast Addresses.”
But Wait, There’s More!
That’s right, we’re only half-way through this 4-part Introducing IPv6 series, so go check out Part 3: IPv6 Headers (you guessed it, didn’t you). Or you can always take a look at some other IPv6 posts here on dp, or read the book. | <urn:uuid:e1dadd5c-17a3-401a-bcdb-d9cf481fe756> | CC-MAIN-2017-04 | https://chrisgrundemann.com/index.php/2012/introducing-ipv6-classifying-ipv6-addresses/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00568-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.880856 | 1,971 | 3.3125 | 3 |
Futurists Explain Why
Learn which technologies will completely transform and which will completely disappear as IoT continues to expand
Tens of billions of devices, sensors, vehicles and people will become interconnected over the next 10 to 15 years as the so-called Internet of Things (IoT) expands from about 11 billion connections today, to 30 billion by 2020, to 80 billion by 2025. And in fact, those estimates may prove low.
But the good news is, estimates will become increasingly easier to make.
“In the future, we’ll be able to better predict the future,” said Tom Bradicich, vice president and general manager of Servers and Internet of Things Systems for Hewlett Packard Enterprise. Algorithms will build on algorithms, with every prediction smarter than the last. Bradicich isn’t alone in his opinion on how interconnectedness will transform transportation.
“In the future, we’ll be able to better predict the future.”
We spoke to a number of futurists from across industries who forecasted how the Internet of Things will help shape what many are calling the next Industrial Revolution. And be prepared, these predictions are sufficiently bold.
Driver’s licenses will not exist—you won’t need one, because you won’t be driving
Through the first half of 2016, road fatalities in the U.S. climbed more than 10 percent compared with the same period last year, which saw an overall spike in traffic deaths of more than seven percent, the biggest increase in almost a half century. Tens of thousands die every year.
In 10 or 15 years, people who own cars will be thought of the same way as people today who own their own planes: They must have too much money or be an obsessed hobbyists or both. Whether you want to drive or not—even as a hobby—the decision might not be up to you.
“It’s possible it will be illegal to drive a car,” said Bradicich. Driverless cars could quickly result in, say, 90 percent fewer accidents, at which point we’ll start hearing, “I don’t want my neighbor down the road driving because he’s possibly nine times more likely to hit me than an autonomous vehicle,” Bradicich added.
In interviews with more than a dozen prominent futurists, academics and consultants on the effects of IoT, it was unanimous that people will drive less, and everyone will benefit. “Losing 30,000 people a year is really unacceptable, but we live with that,” said Cindy Frewen, a professor and board chair of the Association of Professional Futurists. “We’ve become numb to that fact, yet we take that risk every day. We talk about other things that are high risk, and they are, but cars are one we really don’t talk about.”
“In the future, it will likely be illegal to drive a car.”
Humans, historically, have proven they aren’t the best at understanding or mitigating risks. That will quickly change, and it will be one of the single greatest benefits of IoT, according to Marti Ryan, a consultant and the former CEO of Telematic, a cloud-based platform that provides auto insurance. “Personalization is what I see coming with IoT, and by that I mean personalized, location-based, just-in-time marketing, and personalized risk profiling,” she said. “You’ll be paying for the risks that you choose to take. I don’t know if it’ll be on a per-second, per-minute or per-day basis, but we’ll get to a point where when we make choices, we’ll pay for those choices.”
In other words, people who drive only sporadically won’t need full time auto coverage. Being conscious of more decisions means people, ultimately, will begin making better decisions. Insurance will be peer-to-peer, spreading by word of mouth, and people will team up to pool their own collective risks and save money. That shared data will lead to even greater efficiencies, as people will be able to collect and sell their own data, Ryan said.
“Humans aren’t the best at understanding or mitigating risks.”
“I’ll own all of my own data and get to shop my own data for insurance or financial services,” she said. “Companies that realize it’s all about the consumer and put the power of the data back in the consumer’s hands will rise to the top. The more companies take our data and do something useful, the more we’ll be conscious that our data is being shared, and the more we’ll want to share.”
Data will become more like currency
Ryan wants her life optimized. She wants a device to explicitly tell her things including how much time she should spend looking at a screen or going to a museum or exercising on a given day. Exactly how beneficial, from a health standpoint, would an extra 20 minutes of jogging be on a given day? “Provide some value to me,” Ryan said. “Save me time or money, and continue to do that. Don’t be a bad steward of my data, and I’ll continue to give it to you.”
Christopher Bishop, a board member of Teach the Future and TEDx TimesSquare who spent 15 years at IBM, agrees that data will become more of a currency. “People will want to buy your data for a survey or to participate in a focus group,” he said. “There will be chips that have all that data stored. You’ll monitor and manage what goes in there.” But speaking of currency, futurists say you also won’t use cash.
People will visit doctors less often
For the day-to-day health needs, sensors on your clothing will monitor your vitals and provide constant biometric updates—eliminating the need for annual in-person check-ups.
For more extreme healthcare needs, futurists say that in 2025 or 2030, your driver’s license will no longer say whether you’re an organ donor, because no one will be an organ donor anymore. Imperceptibly fast connections between 3D printing devices and medical data repositories will build new organs on demand, possibly before those who need them even know that they need them. A microchip in a device on that person’s arm or ear—or in that person’s arm or ear—may buzz to alert him or her, similar to receiving an email or text today, that it’s time to swap out a kidney with a new one perfectly crafted for their body, by their body. And it will be replaced with one that can’t possibly be rejected.
“In the future, no one will be an organ donor.”
Food waste will be nearly eliminated
Sensors will be everywhere, even on our food. Edible sensors will prevent spoilage and optimize global food deliveries, Bradicich said. “We’ll be able to know where there’s food shortages,” he said. “When food can have sensors on it, its spoilage rate can drop. If you’re shipping it, it can be routed in a way to prevent fruit from spoiling.”
We’ll also be able to simply grow more food, a tremendous boon for farmers in developing nations, especially given the potential effects of climate changes, said Christopher Kent, a partner and founder of Foresight Alliance. “There’s a scientist who’s figuring out a way to print paper sensors that you can plant in the soil to measure alkalinity, salinity and how much water there is,” Kent said. “That may not be life changing for farmers in the developed world, but it’s also a huge development for farmers in developing countries.” Printable sensors will be cheap and also tell farmers optimal times to plant optimal quantities of optimal crops, with up-to-the-nanosecond climate forecasts.
Money saved from energy efficiencies will be used to revamp urban infrastructure
IoT will be crucial in urban areas, too, as it will save large companies billions, if not trillions, of dollars. “You’re going to see amazing savings in commercial buildings,” said Lee Mottern, a member of the Association of Professional Futurists and former Defense Department civilian intelligence analyst. “They’ll save a fortune with smart buildings. I do see a lot of adoption in industry. It cuts manpower, it cuts energy costs and you can even disengage the building from the grid,” similar to unplugging an unused charging device from a wall socket, saving additional electricity.
Much of the power that’s saved will be invested in building and maintaining infrastructure networks that most people haven’t yet imagined. Cars can’t drive themselves without smart roads and smart signs and smart intersections.
Paying for that infrastructure will prove more challenging. With fewer people driving—both because of driverless transportation and with so many more working from home on high-speed connections—municipalities won’t get money from traffic tickets. People may not buy as much locally, crushing the tax base. If more people share housing or don’t buy homes, or move around more frequently, local revenues will plummet and many legacy organizational structures found in cities will crumble from lack of funds. Governments will have to think of other ways to provide goods or services, perhaps through private-public partnerships. Maybe that partnership will focus on creating the necessary set of protocols for everything to talk to everything else simultaneously. “Standards and policies are going to be critical to making that all work as the deluge of data continues,” Bishop said.
Cyber attackers will be more motivated than ever
Security of IoT also will be critically important. Several futurists interviewed said devices may reach a point where they only respond to us—perhaps requiring a constant, pulsing connection to our DNA—but many were concerned that hackers will have more incentives than ever before to break down those barriers. One recent hack attack using baby monitors, connected cameras and home routers took down several major web sites, an innovative exploit the Department of Homeland Security had anticipated with a warning just a week before.
“One thing you’re already seeing is that the IT security consulting industry is growing pretty significantly, and that’ll continue to happen,” said David Stehlik, a consultant, professor and certified ethical hacker. “The successful players in the next 10 years are going to be the ones who’ve already established the framework within their own organizations to manage those issues and their brand going forward. Those who create the standards will have a leg up.”
“Devices may reach a point where they only respond to our personal DNA.”
Timothy Dolan, a principal at Policy Foresight, is less worried about long-term security issues for personal devices. “When you come up with a secure system, there’s always a hack, there’s always an innovation and then a counter response to it, but in terms of it being a personal device, I think it’ll be more secure,” he said. “I can imagine that if there is tampering there would be any number of security protocols that would lock people out or destroy data located within the device itself.”
Those high-tech contact lenses in sci-fi movies will exist
Dolan said he probably favors personalized earpieces to watches or implanted sensors. Others, including Bradicich, think all of our personal data will be implanted in contact lenses that will give us the supreme combination of security and functionality.
“Your entire smartphone will be in contact lenses,” he said. The fluids in our eyes, and perhaps in nearby blood vessels, will feed data back into the lenses to give us constant health updates. The lenses may even provide personal security, assuming we don’t already have sufficient sensors in our clothing “like those extra buttons tacked onto the inside of a dress shirt,” he said.
We’ll create robots that create robots
These devices will have to be created by people—or by robots created by people, at least until robots are smart enough to create manufacturing robots without human input. That might not be far off. The entire nature of work will be transformed through IoT, and maybe John Maynard Keynes’ decades-old prediction of a 15-hour workweek will finally come true.
The best jobs of the IoT era don’t yet exist, just as many of the best jobs of the ‘90s didn’t exist in the ‘60s. According to a survey conducted by Visual Capitalist, people in the future will make a living working as neuro-implant technicians, virtual-reality designers and 3D-printing engineers.
Humans will be able to do more good
Life will be less about stuff, even though all of your stuff will be constantly talking to all of the other stuff. People will be living longer—perhaps, according to some surveyed, 150 or 200 years—and spending more of their newfound free time helping others, Bishop said. “I think we’re going to see increased leisure time, but it’ll allow for the addressing of more global problems as global awareness increases,” he said. “With the tremendous value and wealth created by these companies, we’re going to see Gen-Z even more focused on the common good and corporate sustainability.”
“In the future, there won’t be communication barriers. Translations will happen as we talk.”
That focus will increase as everything gets faster and easier because of IoT. There won’t be communication barriers because different languages will be instantly translated as we talk. People otherwise too young or old or infirm to operate a vehicle will get anywhere they want safely and quickly. Your clothing will talk to your refrigerator after you eat half a pizza for lunch and perhaps the two devices will politely suggest having a salad for dinner. Evidence from a crime scene will be collected and recorded automatically. In fact, most crimes could plummet. More people will be doing things they actually want to do.
“One thing IoT does is increase the opportunity for individuals to be more self-reliant,” said Unique Visions’ Joe Tankersley, a 20-year Disney veteran and imagineer. “I could run a craft factory in my own garage. We’ll have new entrepreneurship because of that.” Similarly, low barriers to entry will result in more people growing their own food.
“As humans,” he said, “we tend to overvalue our unique contributions, but it doesn’t mean that we won’t see a situation where, for instance, as people prefer craft beer to beer brewed by a giant corporation, you might see a future where people prefer a product made by a human vs. automation, because it’s human made. For everyday items, we’ll care less and less.”
Technology will become invisible
Tankersley thinks we’ll reach a point where we’ll be surrounded by so much technology we’ll then be surrounded by none.
“The ultimate goal of all this technology is for it to disappear,” he said. “The only people who really think a cell phone is a good form factor are the people who manufacture cell phones. You’ve got to carry it, it’s bulky, it breaks. People seem to think that these devices are what we’re obsessed with—we’re really not. We’re obsessed with the fact that these devices provide a new kind of connection for us. And if we can make that connection less obtrusive, why wouldn’t we?” | <urn:uuid:ce6553d2-10ec-422d-a227-9abd4deda60b> | CC-MAIN-2017-04 | https://www.hpematter.com/iot-issue/ask-the-futurists-10-bold-predictions-for-2030 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953114 | 3,396 | 2.953125 | 3 |
TTL (Time To Live) And Fun With Mikrotik TTL Mangle
Wikipedia will be happy to explain.
In a nutshell TTL is a field in the IP header that sets how many routers a packet can traverse. If you have a routing loop and TTL didn’t exist, then the packets could indefinately loop. What’s the problem with that? Packets can travel between two routers with virtually no delay. As the packet races in a loop from router to router it consumes bandwidth and CPU resources…no good-o!
So here’s my test setup:
As a packet moves through a router the forwarding router reads the packets TTL, subtracts one from it and then forwards it on. If a packet has a TTL of one as it reaches a router it will be dropped before it has an opportunity to forward the packet. In the Mikrotik, the TTL is decremented first thing in the forward chain.
This would be the normal operation, but Mikrotik has a little trick up it’s port. There is a mangle rule that can be created to adjust the TTL to whatever value you want! I was thinking how fun it would be to create a routing loop…so I did. You might be surprised at how adversely a single ICMP packet can affect a router in an infinite loop. (Excuse the screaming children and the random cat noises hehe)
Here’s the code version:
/ip firewall mangle add action=change-ttl chain=prerouting comment="" disabled=no new-ttl=set:10 protocol=icmp
So what would be an advantage of adjusting the TTL manually? You can adjust the TTL down to a lower value so that specific traffic won’t have the opportunity to travel any farther than you want. You could set the TTL on a packet to 1 as it exits the router so that only a host can exist behind your equipment.
Have fun kids! | <urn:uuid:4c9e300c-ce99-4fdf-95db-3f5c3ee1ad1f> | CC-MAIN-2017-04 | http://gregsowell.com/?p=2139 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00375-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908134 | 407 | 2.6875 | 3 |
Building Resilient Public Health Infrastructure
CHHS Extern Lisa Bowen also contributed to this blog
noun re·sil·ience \ri-ˈzil-yən(t)s\
: the ability to become strong, healthy, or successful again after something bad happens
: the ability of something to return to its original shape after it has been pulled, stretched, pressed, bent, etc.
Several recent public health crises – Ebola, Measles, and the 2014-2015 Flu Season to name a few – make apparent the need for public health partners to develop and maintain resilient public health infrastructures. These well-publicized incidents and others like them occur against the backdrop of the essential services provided by public health partners every day. Local health departments (LHDs) provide a variety of services that contribute to creating and maintaining healthy communities, such as immunizations, food safety, and maternal/child services.
All of these public health functions require an infrastructure of resources and relationships, which is put under strain by both the recent trend of substantial budget cuts for public health programs and by providing for continued operations during emergency response activations. In the face of this strain, resilient infrastructure bolsters the capacity of public health partners to “withstand disruption, absorb disturbance, act effectively in a crisis, adapt to changing conditions, and grow stronger over time.”
Building community resilience has been one of the major focuses of the U.S. Department of Health and Human Services’ (DHHS) National Health Security Strategy. DHHS and other policy makers believe that resilience is a fundamental capability for disaster preparedness, response, and recovery.
According to DHHS’s National Health Security Strategy, the core components of community resilience are:
- Social connectedness for resource exchange, cohesion, response, and recovery;
- Effective risk communication for all populations, including at-risk individuals;
- Integration and involvement of government and non-governmental entities in planning, response, and recovery;
- Physical and psychological health of the population; and
- Social and economic well-being of the community.
When applying these core components to public health infrastructure, two categories emerge: resources and relationships. Using the Incident Command System (ICS) definition of resources, this category refers to both the materials and the people needed to maintain public health functions.
In order to create a resilient community, materials and people need to be ready for deployment the moment an emergency happens. While it is not feasible to build a cache of supplies for every conceivable event, having response plans for a variety of events, communication systems, and the laboratory capacity to rapidly test and confirm infectious agents can greatly improve the ability for health departments to efficiently respond and manage public health emergencies. Having a tiered list of cross-trained individuals and alternate people who could lead public health operational response efforts is also vital to maintaining public health functions and building a resilient community.
However, resilient public health infrastructure goes beyond the materials and people directly involved in the emergency response. The core of resilient infrastructure is a strong healthcare system. A healthcare system that is capable of scaling up everyday operations during emergencies decreases the vulnerability of the impacted population.
An important part of community resilience is human resilience and social capital. Even during strictly public health emergencies, public health responders will receive assistance from police, fire/ems, emergency management, etc. Emergency responders must work together to build strong relationships among their different agencies and with the community.
FEMA recognizes a “whole community” approach to emergency management. Community engagement helps local governments understand the unique and diverse needs of their communities and builds social capital through community empowerment.
By investing in the relationships among individuals and organizations and having a public-private partnership that can address emergencies as a unified front, communities can build social capital through improved trust between government and citizens and can better overcome damage to infrastructure and lack of aid during emergencies.
Local governments can build resilient communities through efforts to build strong relationships and ensure resources are in place. By working together before an emergency occurs, public health infrastructure can withstand disruption and better respond when disaster strikes. | <urn:uuid:ba98bcaa-90ea-4fad-a94f-e73cb296c484> | CC-MAIN-2017-04 | http://www.mdchhs.com/building-resilient-public-health-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00247-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934935 | 840 | 3.234375 | 3 |
Why Aren’t We Still Making History?
“History is a guide to navigation in perilous times. History is who we are and why we are the way we are.”
– David C. McCullough, historian & author
This month’s column is more than a history lesson on certification testing in IT programs. It asks why things aren’t more different today.
Computers were introduced to high-stakes testing when Novell began conducting certification tests via diskettes in 1989. Test questions were written by Novell instructors, compiled into word processing files and mailed to certification candidates in the fledgling program. The candidate would open the files, supply the answers directly in the file, save it and return the diskette. Graders then opened the files and scored the responses. The process took way too long and was difficult to manage. Plus, Novell was a networking company—sending diskettes around the world ran against the grain.
Novell figured that networked computers could be used to deliver tests and serve as the technical backbone of a new certification program. But no one knew exactly how to do it. There were no “experts” on the topic. There were only about 16 computerized testing centers in the entire United States, and none elsewhere. Novell found that a lack of locations and old technology limited its goal to certify hundreds of thousands of people to support a growing array of networking products.
Most of today’s computerized testing innovations came from Novell’s program. Here are just a few:
- First large-scale use of computerized adaptive testing, improving security, keeping test pricing low and making testing simpler (1991).
- Computerized beta tests to try out exams with actual candidates and publish IT tests when needed (1991).
- First global testing center networks, beginning with Drake, then Sylvan Prometric (later Thomson Prometric) and then VUE (starting in 1990 and still going strong today).
- First computerized support for languages other than English (1992), including dual languages available during the same test (1993).
- Use of CDs (new at the time) to support performance testing of a candidate’s ability to use technical libraries and support encyclopedias (1993).
- First large-scale use of simulations of networking software in a certification test to measure actual network installation and administration skills (1995).
- New question types to measure IT skills better: multiple-choice questions that include more than one correct answer (1992), drag-and-drop, hot area (1995) and short-answer free-response (1993).
Now, it seems we’ve hit a dry spell. There has been little innovation recently in any part of the testing process. Testing costs continue to rise, while research efforts are almost nonexistent. Many programs that continue to use paper-and-pencil tests are unwilling to switch to computerized testing for cost and security reasons. The computerization of testing should have resulted in a decreased cost, greater security, greater reach and more convenience. Instead, the opposite happened.
For fun, I re-read my first column for Certification Magazine (October 1999). It talked about the future and the benefits of computerized testing for IT certification. I wrote:
Accurate measurement of knowledge and skills is really the goal of the whole testing effort and of the use of computer technology in testing. If the test can identify the competent individual as efficiently as possible, and help certify that individual, then everyone wins. The competent candidate gets certified properly and quickly. The less competent candidate learns that he or she needs more training or experience, and in some cases finds out the exact areas of strengths and weaknesses. The certifying organization gains confidence in the certified individual and can recommend him or her to customers and hiring departments. And companies all over the world improve their procedures, and their products and services, by relying on the competence of these certified individuals.
As we begin to rely again on technology in testing, you should see the value of technology reflected in better measurement of skills, more respect for IT certifications, more convenience, better security and lower costs. Training should follow suit, helping you prepare more effectively, more quickly and at a lower cost.
I’ve been racking my brain to suggest ways you can help stimulate these new innovations. Maybe you can offer to provide feedback on exams or sit on a program advisory council for free. Certainly, communicating with your program on things you’d like to see can’t hurt. Above all, stay positive—sometimes the tail does wag the dog.
David Foster, Ph.D., is president of Caveon and is a member of the International Test Commission, as well as several measurement industry boards. David can be reached at email@example.com. | <urn:uuid:3307444a-2e36-4f5d-9509-339f8252053c> | CC-MAIN-2017-04 | http://certmag.com/why-arent-we-still-making-history/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00275-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944939 | 995 | 2.765625 | 3 |
Wisconsin consumers have had to say goodbye to the days of dumping electronic waste in landfills. Under a new law that moves the financial burden from local governments to manufacturers, users now have to recycle their old computers, cell phones and other electronic devices instead of tossing them in the trash.
The state's new e-waste recycling program, E-Cycle Wisconsin, took effect Wednesday, Sept. 1, authorized by the legislation Gov. Jim Doyle signed into law in 2009. Many electronic devices contain harmful materials such as lead, mercury and other heavy metals, which harm the environment when improperly disposed of. But the plastic, steel, copper and glass can be used to make new devices. The law seeks to conserve valuable resources, prevent pollution from improper e-waste disposal and give a boost to the state's recycling industry.
"Electronic devices contain harmful materials and by recycling them, we can ensure that they're handled properly and don't contaminate the air," said Sarah Murray, communications specialist with the state's Department of Natural Resources.
Devices covered by the law include computers, printers, TVs and computer monitors, keyboards, mice, hard drives, DVD players, VCRs and cell phones. The new rules require consumers to bring discarded electronics to collection sites. At the moment, Murray said, the state has about 300 sites registered as collection sites under the program. Drop-off fees vary.
Based on a product stewardship approach, the law gives the primary responsibility for collection and recycling to the manufacturer. According to the bill, manufacturers had to register with the DNR starting Jan. 1. This arrangement takes the stress off local governments, which previously had to handle disposal of electronics, a waste stream growing faster than ever.
For example, in Milwaukee, from 2001 to 2005, the city had 72,000 pounds a year of discarded computer equipment coming from the entire city. Between 2006 and 2009, the average spiked to 428,000 pounds. And taking these discarded devices to a facility for processing cost the city $100,000 a year, according Rick Meyers, a recycling specialist for the city.
"Nothing's changed in terms of costs," he said," but with this bill, instead of all these costs falling on the taxpayers, the manufacturer bears the primary financial responsibility. Now we're actually getting paid a few cents a pound."
At this time, state officials do not plan to issue any individual citations if consumers don't comply. Local and state governments, Murray said, want to focus on the educating the public, not fining individuals.
"The government always seems to be the bad guy anyway," said Chris Pirlot, operations director for Green Bay, Wis. "So we did public service announcements and got the word out as a goodwill gesture. We didn't want to leave our residents high and dry."
For a complete list of all recycling and collection sites, visit the DNR's website. | <urn:uuid:957e5663-1c42-40d5-9f39-1f326753ed77> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Wisconsin-Bans-Trashing-of-Electronics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95147 | 595 | 2.796875 | 3 |
Do-it-yourself (DIY) electronics is entering a golden age with the help of powerful, cheap, programmable devices like the Arduino micro controller and Raspberry Pi mini computer. Hobbyists and technology enthusiasts have flocked to those and other platforms to make everything from talking alarm clocks to robots to tablet computers.
But the DIY potential of these new platforms isn't limited to consumer applications. In the panoply of home-built tools is a wide range of security products, from malware scanners to virtual private network devices. Here is a look at some we like:
Virtual private network (VPN) software used to be reserved for government employees and folks who worked for security conscious corporations. These days even the most casual web surfer on Starbucks free Wi-Fi needs one. With its small size and low power consumption, the Raspberry Pi platform makes a great choice for a portable VPN you can stick in your laptop bag for use on the road, or to protect always on connections like your home network. The folks at Lifehacker have a great tutorial on making your Raspberry Pi VPN by combining the minicomputer hardware with free and open source software (Hamachi from LogMeIn for the VPN and Privoxy for a web browsing proxy). Check it out!
Penetration testing is something of a dark art. While commercial tools exist from companies like Core Security and Immunity, most professionals still roll their own tools, often using a combination of proprietary and open source tools. In May, the folks over at Pwnie Express made that a lot easier, releasing Raspberry Pwn, an open source tool that lets enthusiasts turn their Raspberry Pi into a penetration testing and audit tool. Their software, released under the GNU public license, was built on DEBIAN and compiles a small arsenal of common pen testing tools including netcat, wireshark, kismet, cryptcat and others. Bluetooth and wireless connectivity mean the device can be remotely controlled once deployed on a target network.
Malware isn't just for Internet connected devices. With the spread of inexpensive, high capacity portable drives, malware can easily jump over "air gaps" that separate stand alone devices like PCs, servers and embedded systems from malware-prone network- and Internet connected systems. How do you figure if a given USB device you want to use is infected? One easy way is a portable scanner that you can plug the portable device into prior to using it. The folks at Icarus labs tapped the Raspberry Pi platform to build just such a system as a proof of concept, and say make a high-powered scanner that leverages 44 separate AV engines to interrogate portable media.
Cellular intrusion detection system (CIDS)
As far back as DEFCON 20, security experts demonstrated how so-called "Evil Twin" attacks that are common in the Wi-Fi world can be ported to GSM networks used by cellular phones, while existing network security tools can't inspect cell traffic. Fortunately, fighting back against attacks transmitted over cellular networks doesn't have to be costly. Researchers from LMG Security showed how to make a low-cost cellular intrusion detection system using a Verizon Samsung Femtocell and the SNORT open-source intrusion detection software. The CIDS developed by LMG was able to detect and alert on command and control traffic sent to a nearby infected mobile device.
Network backdoor/Trojan horse
The small form factor for Raspberry Pi devices make them ideal for the most straight-forward kind of hacking tool: the rogue device or Trojan horse. This tutorial shows how even a technically unsophisticated hacker could disguise a wi-fi enabled Raspberry Pi device inside a standard laptop power cord. Once planted on a target network, the device will create an SSH encrypted tunnel that would enable an external attacker to send and receive data, including malicious payloads, to the target network. | <urn:uuid:a03d930b-60db-4c09-b51c-c2949e190563> | CC-MAIN-2017-04 | http://www.itworld.com/article/2704713/security/diy-security--cool-tools-you-can-build-yourself.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91665 | 781 | 2.796875 | 3 |
When we talk about “3DTV” we generally mean “stereoscopic” images. This approach presents different views to the left and right eyes. The brain then deciphers information from the small differences between the two images, and constructs a three-dimensional image from the combination. This can deliver the illusion of depth, where some objects appear to be closer and others farther away.
This does not successfully mimic all aspects of real-world vision, however. The most glaring omission is that it does not provide “motion parallax” effects. This is the effect that you see when you move your head from side to side; closer objects will “move” relative to the background, and elements of the background will either be covered up or revealed by this motion. It turns out that this is a very strong effect; it allows you to “see around” objects so that you can see their sides.
You can’t do this with stereoscopic imagery because there are only two images available. If you have more images from additional angles, you could create a hologram that would appear to have volume, but this is expensive and difficult and generally requires lasers. But now a group from Osaka University has demonstrated a system that uses simple front projectors to show multiple images. Instead of projecting onto a flat surface, the images are projected into a cloud of water droplets. Depending on your viewing angle, you will see a different image. As you walk around the cloud of mist, the image changes and it appears as though you are looking at a three-dimensional object.
Unlike stereoscopic imagery, you can capture this effect in a simple video. Here’s a recording of the demonstration:
This approach certainly could be applied to larger, full color displays. A stable “cloud screen” would be required, and it would probably have to rely on interpolation to create enough different images to give it a natural look, but it would definitely solve the multiple-viewer problems inherent in auto-stereoscopic displays. | <urn:uuid:291eb165-8a84-48e8-b48c-da51f9460801> | CC-MAIN-2017-04 | https://hdtvprofessor.com/HDTVAlmanac/?p=1447 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94408 | 423 | 2.859375 | 3 |
Kozak J.,Electrotechnical Institute Warsaw |
Kozak J.,Laboratory of Superconducting Technology |
Majka M.,Electrotechnical Institute Warsaw |
Majka M.,Laboratory of Superconducting Technology |
And 4 more authors.
IEEE Transactions on Applied Superconductivity | Year: 2013
This article presents a comparison of inductive and resistive superconducting fault current limiter built with the same length of high temperature superconducting (HTS) tape. The resistive limiter is constructed as a noninductive bifilar winding. The inductive coreless limiter consists of primary winding and secondary shorted winding. Both limiters are connected parallel to the additional Cu primary winding, which helps to reduce the power dissipated in the HTS windings during and after a fault. It also ensures that in cases of an HTS tape failure, the protected circuit will not be disrupted. The limiters are very fast and the first peak is almost equally limited by both types of limiters. © 2002-2011 IEEE. Source | <urn:uuid:19500ca3-5872-4211-b0bf-a96bc2538eed> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/laboratory-of-superconducting-technology-236760/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00413-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916061 | 224 | 2.8125 | 3 |
What is software testing?
Delphix transforms software testing methodologies
Software testing is a method of assessing the functionality of a product or service under test. The intent is to evaluate various properties of interest from efficient design to program usability. Typically, software testing involves finding bugs - errors or other defects - in the software; however, the overall process can provide insights into not only the quality of the software but also the risk factors for its users and sponsors. Software testing is a necessary component for firms to guarantee the reliability and subsequent end-user satisfaction of their software.
What is software quality assurance?
Software quality assurance involves the set of activities that guarantee the implementation of processes, standards, and procedures for developed software. This encompasses the entirety of the software development life cycle and represents a range of preventive measures. Many times, the concepts of quality assurance and quality control are evaluated against each other. While they both involve a level of assessment, software quality assurance is process-oriented as opposed to product-oriented, and focuses on creating the foundations for which developed software meets standardized quality specifications.
Types of software testing
Software testing is a crucial step in the development cycle, and depending on the need, different types of testing are needed when evaluating the software. Among these types, the most common include the following: unit, integration, performance, quality assurance, and user acceptance software testing.
Unit Testing – a process performed by software developers, where units of source code are tested (e.g. statements and functions). This approach tests the smallest, testable parts of an application to evaluate its internal structure or inner workings. Unit software testing is commonly a candidate for automation.
Integration Testing – a process that tests the connectivity or interaction of the individual components of an application. This approach necessarily builds upon the results of unit software testing.
Performance Testing – a process that focuses on testing the quality metrics of software. In this case, this type of testing differs from functional software testing in that it prioritizes evaluation of attributes beyond base functionality (e.g. stability and reliability).
User Acceptance Testing – a process performed by end users on developed features or components of an application. This approach focuses on whether the software is aligned with both present business operational needs and with other previously dictated requirements.
What is test data?
Test data is the data, or input, provided to a software program for usage in a variety of testing purposes. These purposes can be either confirmatory or destructive. In the case of confirmatory software testing, test data can be used and evaluated to generate an expected result. On the other hand, destructive testing with test data intends to test application performance in situations that demand unusual or extreme responses.
In reference to the test data above, the various phases of software testing require their respective types of test data. These phases can include subsetting, data masking, and synthetic data creation.
Subsetting – takes full production data and creates smaller representative sample of the data. The intent is to save on storage requirements and theoretically, smaller datasets are more portable and easier to test.
Synthetic Data – creates artificial data that resemble what a production application will encounter in a real environment.
Data Masking – produces structurally similar but masked real-life data without introducing unsafe levels of risk.
Automated software & data testing tools
Automated software testing tools provide a level of maintenance and repeatability of software upkeep. The testing tool software is separate from the software being tested, and controls the execution of phases in the testing cycle with previous iterations. Steps in the overall testing process can be exhaustive or require significant user effort, but test automation helps automate some of these more time-intensive, repetitive tasks. This is especially important in organizations hoping to move towards agile or continuous delivery. The most popular offerings for automated data testing include the following: HP, IBM, Micro Focus, Microsoft, Sauce Labs, ThoughtWorks, and Tricentis.
While these tools provide different levels of automation in the software testing cycle, the delivery of the test data itself still requires both significant time and manual effort. However, data virtualization, or virtual data, provides this missing automation. By masking test data in non-production and provisioning virtual copies, virtual data allows for the delivery of full, masked copies of data into downstream environments. Additionally, these copies represent complete, fresh copies of production, so teams no longer need to rely on subsetting and synthetic data for their testing. With virtual data, teams can improve their software testing cycles and accelerate application development. | <urn:uuid:05e82e38-b36c-49af-925b-756a760f5385> | CC-MAIN-2017-04 | https://www.delphix.com/solutions/test-data-management/what-is-software-testing-methodologies | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909891 | 922 | 3.0625 | 3 |
With cyber threats around the globe evolving and becoming more sophisticated, evidenced by the recent data breach at retailer Target, it is important that consumers protect themselves when using technology for sensitive information or data.
There are a number of steps that individuals can take to help improve their security both on and off the internet, including a few suggestions from Condor Capital:
One of the first and most important steps that you can take towards better security is improving your password management. One effective method is to think in terms of “pass phrases,” instead of pass “words.” Using a phrase that is personal to you creates a password that is not only easier to remember, but harder to crack. This method works to counteract one of the most popular password breaking techniques – a dictionary attack.
As its name would suggest, this is when an attacker uses a dictionary to guess your password. If your password is a word or only a slight variation of a word, where simply an “O” is chanced to a “0” for instance, it is most likely guessable via a dictionary attack.
Secondly, it is essential that you use different passwords across all of your critical services so that in the event that one is compromised, an attacker does not immediately have access to all of your accounts. Although many claim that this process makes it too difficult to remember passwords, there are several services aimed at making password management easier. Some examples include password management systems that are built into Google’s Chrome and Mozilla’s Firefox web browsers, as well as third party offerings, such as LastPass. Further, you should not hesitate to utilize the “forgot password” prompt for lesser used accounts.
Finally, take advantage of dual factor authentication logins whenever they are available. Under processes such as this, a user is required to present two data points to verify their identity. One popular method involves the user entering a password, then a separate, one-time code which can be sent via text message or phone call. It is particularly important to use two factor authentication when utilizing password management systems, such as the ones we mentioned above.
While these can be an inconvenience, they are a small price to pay for significantly improved security for your critical personal and financial information.
Connections to wireless networks can also be a source of security vulnerability. When setting up a wireless network at home, it is important to always require a security key to enable connection. In addition to potentially letting others use your connection for free or slowing down your service, an unprotected network can allow them to intercept data that you transmit over the network.
Going a step further, when connecting to a public network such as a coffee shop, you should be cautious regarding the type of information that you access using such a portal.
With the increasing proliferation of mobile devices, they have become an increased target for attackers. In addition to connecting to secure networks, as stated above, it is important to always utilize a security PIN to lock your phone. This is particularly important when the device is used to access email, which likely contains a trove of personal information. When downloading apps, be wary of those from lesser known or obscure developers, as these apps can sometimes pull personal information from your device.
Finally, be sure to keep your operating software as up-to-date as possible. In addition to fixing bugs, many updates patch security flaws that have been discovered in the programs. This came to the forefront recently after a severe security flaw was found in iOS 7.
When using a credit card to make a purchase, be sure that you do not leave it out in the open. For example, if it is left face up on a table as you enjoy coffee, an attacker could easily snap a picture of your card, then use the numbers to make fraudulent purchases. With that said, it is important to check your monthly statements for irregularities so that any potential security breach is identified as soon as possible. If you receive statements via hard copy, it is then essential that they are shredded or stored in a secure location, such as a lockbox.
On the Web
Posting personal information to social networks such as Facebook or LinkedIn can provide another avenue for attackers to gather information on you. When using these sites, turn your security feature and privacy settings to their most strict level to prevent unauthorized people from viewing your personal information. Subsequently, you should think carefully before adding a new person to your “network.”
Most social networks also have tools that will allow you to limit what content certain groupings of connections can see. With an ever increasing amount of commerce and general interaction with services transitioning to the web, it is also crucial to ensure that the websites you are using are secure. Whenever connecting to a website that collects personal information or facilitates payment or the transfer of funds, you should ensure that it is being done over an encrypted connection. Some signs that the website is secure are a web address beginning with “https” or the presence of a padlock when connected to the site.
Sending and receiving documents or information via email has become increasingly popular for its ease and speed, but you should take caution when transmitting information using this method. Try to utilize email providers that support both encrypted connections and encrypted sending/receiving of messages. Older and legacy email addresses (e.g. Hotmail) tend to be some of the least secure. Surprisingly, most email services provided by internet service providers (e.g. Comcast or Verizon) do NOT support encrypted transmission.
Even if you are taking precautions to secure your email, there is no guarantee that the person on the other end is. If they have a weak password or use an email service that does not use encrypted connection portals, any personal information that you send them is at risk, so use caution when sending sensitive information via email.
One of the most frequent forms of attack is called phishing, which occurs when an attacker attempts to acquire personal or financial information by impersonating a trustworthy source such as your bank or credit card provider. To guard against such schemes, always be cautious when someone contacts you regarding your accounts, whether it be via phone or email. It is standard protocol that such organizations will not call you and then ask you to provide information such as account numbers or passwords.
In regards to emails, always be wary of links directly in an email prompting you to enter personal information. One effective way to help identify fraudulent emails is to look for any obvious spelling or grammatical errors, which would be unlikely in an official document. If you feel that you may be a target of such an attack, hang up and call back via a verified number or visit the company’s website directly. You can always obtain more information from your providers regarding their security practices, how they will communicate with you, and what personal information that they may ask. | <urn:uuid:a6191313-063d-4b1c-b7c1-2ea0ff93ecda> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/03/31/tips-for-improving-your-cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00432-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949561 | 1,396 | 2.78125 | 3 |
When researchers in Germany sat down nearly a decade ago to create a brand new parallel file system for HPC clusters, they had three goals: maximum scalability, maximum flexibility, and ease of use. What they came up with was the Fraunhofer Parallel File System (FhGFS), which is now in use on supercomputers.
The initial design considerations and inner workings of FhGFS are described in a ClusterMonkey paper on the file system by Tobias Götz, a researcher at the Fraunhofer Institute for Industrial Mathematics (ITWM).
Götz, who now lives and works in Berkeley, California, says ITWM researchers were frustrated with the limitations of existing parallel file systems. “There has to be a better way!” was the rallying cry of a group led by Franz-Josef Pfreundt, head of ITWM’s Competence Center High-Performance Computing (CC-HPC).
Pfreundt’s team started from scratch to create an ideal file system that used a “scalable, multi-threaded architecture that distributes metadata and doesn’t require any kernel patches, supports several network interconnects including native InfiniBand, and is easy to install and manage,” Götz writes.
The distributed metadata architecture is a key component of FhGHS, and contributes to the high level of scalability and flexibility that FhGHS was designed to provide HPC applications. “The metadata is distributed over several metadata servers on a directory level, with each server storing a part of the complete file system tree. This approach allows much faster access on the data,” he writes.
Similarly, the storage system breaks the storage content into “chunks” and distributes them across several storage servers using striping, according to Götz’s paper. The size of the chunks can be defined by the file system administrator.
There is no requirement in FhGHS to have dedicated hardware for the file and metadata servers. In fact, they can reside on the same physical server if necessary, Götz writes. This virtual approach also enables users to add as many storage and metadata servers as needed, without requiring any downtime.
Administrators can easily create a new FhGHS instance over a set of nodes, which makes it easy to set up a new test environment, either on physical hardware or in the cloud. A Java-based GUI is provided for management and monitoring tasks. The FhGHS file system itself runs on the Linux kernel, and is commercially supported by Fraunhofer.
FhGHS was officially unveiled in November 2007 at the SC07 conference in Reno, Nevada. Since then, it has been put to use on several systems, including the Top 500 system at Goethe University in Frankfurt, Germany.
Benchmark tests for FhGHS show near linear scalability (94 percent of maximum) on read/write operations on clusters of up to 20 storage servers. Tests of the metadata server demonstrate the capability to generate up to 500,000 files per second. In other words, the creation of 1 billion files would take about half an hour.
In head to head competition against Lustre and GPFS on 37-mile and 250-mile 100Gigabit Ethernet test tracks in Dresden, Germany, the group backing FhGHS was one of a few to publish results. In those tests, FhGHS demonstrated throughput of 89.6 percent of theoretical maximum on the 250-mile track in bi-directional mode, and 99.2 percent of maximum in uni-directional mode, according to Götz’s paper.
As the HPC community moves towards exascale computing, the folks behind FhGHS think the new file system can provide part of the solution, especially as it has to do with power consumption, fault tolerance, and software scalability. “Fraunhofer has experience that can be used to attack the exascale problem from several directions, the parallel file system being one of them,” Götz writes. | <urn:uuid:cdcc1fb8-21f4-4129-aa42-b507bd0b44d8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/07/24/fhgfs_designed_for_scalability_flexibility_in_hpc_clusters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00432-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944083 | 849 | 2.625 | 3 |
How Smartphones Could Transform Mobile Medical Lab Testing
An iPhone connected to a special "dongle" that turns the phone into a low-cost and mobile blood sampling laboratory is being used successfully to test the blood of sick patients in developing nations, bringing needed diagnostic services to people in remote areas.
Presently the devices, which were developed by researchers at Columbia University in New York City, are being used to diagnose HIV and syphilis in patients in Rwanda in the continent of Africa, at low cost and with high rates of reliability, according to a Feb. 6 report by The Los Angeles Times.
The dongle is a small handheld device that plugs into an iPhone and can diagnose HIV and syphilis in as little as 15 minutes, the story reported. Existing tests can take as long as 2.5 hours, and patients who are tested often don't stay around to see their results, which means they can't begin treatment if they are infected with the diseases, the report continued. The initial tests of the devices involved 96 patients in three community health centers in Kigali, Rwanda. A small blood sample is taken by the device, and it is then analyzed and tested. When testing is complete, the results are displayed on the iPhone's screen.
The devices use very little power because instead of drawing the patient's blood with a small electrical pump that could have been powered by the phone, the dongles use a squeezable rubber bulb to draw the blood using vacuum, according to the Times report. About 41 tests can be done using one charge on a phone. And as significant, the researchers estimate that the dongles would cost $34 each to build, as opposed to $18,450 for traditional lab equipment for such tests.
While these mobile lab testing devices were created to help health care workers test patients for life-threatening sexually transmitted diseases, all I could think about as I read about the devices was the promise of such devices and testing concepts for a myriad of other diseases and health issues around the world.
How about if dongles like these could be built and re-engineered for other kinds of testing as needed, such as malaria and Ebola?
And what about development of a handheld device that could test for the E. coli bacteria on fresh fruits and vegetables in the field or in prepared foods that are swabbed by such a device? Imagine how many people would avoid E. coli symptoms and horrors if such on-location and cost-effective testing were possible. The mother of a dear college friend of mine died in September 2006 after she ate fresh-bagged spinach that was infected with a virulent strain of E. coli. If testing with a specially equipped and inexpensive dongle could prevent another E. coli death, it would be a good thing.
The promise of these testing devices is amazing. I hope researchers are already dreaming up new ways to use them. | <urn:uuid:58512d1e-fa89-4770-8aba-7a2043074dbc> | CC-MAIN-2017-04 | http://www.eweek.com/blogs/first-read/how-smartphones-could-transform-mobile-medical-lab-testing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00460-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973932 | 592 | 3.328125 | 3 |
What email address or phone number would you like to use to sign in to Docs.com?
If you already have an account that you use with Office or other Microsoft services, enter it here.
Or sign in with:
Signing in allows you to download and like content, which the author will be aware of.
Embed code for: Session 14 Exam review
Select a size
Bio 101 Exam II Review Session 14 October 18, 2016
SI Leader: Janet Van De Stroet
How to be successful tonight
Go through the learning objectives and write out what you know about each. Then, search in the notes for clarification and other details that you may have missed or did not know.
Pay close attention to anything in bold, underlined or italicized
Ask questions if you do not understand.
Which chapter do you need the most review on?
7 Cell division
9 Patterns of Inheritance
10 Chromosomes and Human Genetics
How many SI sessions have you attended since exam I?
A sunburn causes skin cells to be replaced as they are lost from the surface of the skin. This replacement process represents an example of
meiosis I and II.
none of the above
I don’t know
Describe the difference between
1. Chromatin vs chromatid
2. haploid vs. diploid
3. Somatic cells vs. sex cells
4. Autosomes vs. sex chromosomes
Consider a cell that begins meiosis with 30 total chromosomes. How many chromosomes will be present in each resulting daughter cell by the time the division process reaches metaphase II?
___________ is more efficient, while ________ creates more genetic diversity.
Binary fission, mitosis
Binary fission, meiosis
Chromosomes connected by a centromere are called a homologous pair
When examining the rapidly dividing cells of the onion root tip, you see a cell whose replicated chromosomes are visible and are arranged in approximately the same size and shape as the nucleus, but there is no nuclear envelope. What stage of mitosis is this cell in?
Cells undergo mitosis for many reasons. Which is not a valid cause
Describe crossing over
What is it
When does it occur
What is the benefit of it occurring
The alleles of a gene are found at ____ chromosomes
The same locus on non-homologous
Different loci on homologous
Different loci on non-homologous
The same locus on homologous
In a cell where 2n=4. How many pairs of homologous chromosomes would be present in a cell undergoing metaphase II?
I don’t know
During the first meiotic division (Meiosis I) what happens?
Homologous chromosomes separate.
The chromosome number becomes haploid.
Crossing over between homologous pairs occurs.
Paternal and maternal chromosomes assort randomly
All of the above occur
What are the advantages and disadvantages of sexual and asexual reproduction?
This is a somatic cell. What is it’s haploid number?
An animal whose 2n=32 is always more complex than an animal whose 2n= 16
One minute paper
Explain how the chromosomal content changes through the main stages of meiosis
Most of the cell cycle is spent in _______.
Which is not a function of meiosis?
Reduce the chromosomal content
Separate sister chromosomes
Create new genetic combinations
All of the above are functions of meiosis
Describe the relationship of positive and negative growth regulators in regards to controlling cell growth and division.
What factors can increase human susceptibility to cancer?
exposure to hormones
all of the above
Describe the three main steps of cancer development
What are the steps
What is happening in the steps
When is the tumor benign and when is it malignant
When does the tumor become cancerous
Which of the following would not lead to increased risk of cancer?
Mutations that increase the activity of proto-oncogenes
Mutations that decrease the activity of tumor suppressor genes
Loss of p53 activity
All of the above increase cancer risk
Why is it more common for a person beyond middle age to have an increased risk for cancer?
Older cells have a harder time producing new cells.
Proto-oncogenes and tumor suppressor genes will have a longer time to accumulate mutations.
Genes can no longer repair themselves and will fall to mutations.
Hereditary mutations will occur after this age to cause cancer.
Oncogenes and tumor suppressor genes reduce the number of mutations.
Which of the following statements regarding genes is false
Genes are located on chromosomes.
Genes consist of a long sequence of DNA.
Genes contain information for the production of a single protein.
In sexually reproducing species, each cell contains a single copy of every gene.
A pea plant that is heterozygous for the flower color gene makes gametes. What is the probability that a specific gamete contains the recessive allele for flower color for this plant?
The observable traits of an organism are its ________.
Albinism is caused by a recessive autosomal allele. A woman and man, both normally pigmented, have an albino child together. For this trait, what is the genotype of the parents?
It depends on the sex of the child
It is unknown because not enough information provided.
Both Homozygous recessive
A true breeding purple flower plant is crossed with a true breeding white flowered plant, you would expect to see (purple is dominant to white) ____________.
offspring with all white flowers
offspring with all purple flowers
Mixed offspring 50% purple and 50% white
Mixed offspring, with about 3 purple plants to 1 white plant
The genotypic ratio of offspring of a monohybrid cross is ______________.
75% LI 25% ll
50% Ll and 50 % ll
25% LL, 50% Ll, 25 % ll
What are the 4 types on non-Mendelian traits?
Explain each and give an example
If a child has blood type O, she could not have been produced by which set of parents?
Type A mother and type B father
Type A mother and type O father
Type AB mother and type O father
Type O mother and type O father
Coat color in one breed of mice is controlled by incompletely dominant alleles so that yellow and white are homozygous, while cream is heterozygous. The cross of two cream individuals will produce
all cream offspring.
equal numbers of white and yellow mice, but no cream.
equal numbers of white and cream mice.
equal numbers of yellow and cream mice.
equal numbers of white and yellow mice, with twice as many creams as the other two colors.
A child has blood type AB his father knows that he has blood type A what are the possibilities for the mothers genotype?
BB or AA
BB or Bi
BB or AB
BB , AB or Bi
BB or ii
In humans, the genetic commonality of height and skin tone is that they are both
regulated by the same pleiotropic gene.
strictly environmentally induced with little or no genetic component.
clear violations of Mendel’s basic laws of genetic inheritance.
controlled by multiple genes with a strong environmental influence.
cases of genes exhibiting incomplete dominance.
Genetically identical plant clones can exhibit dramatic phenotypic variation depending on the environmental conditions under which they are grown.
The observation that individuals afflicted with albinism also always have vision problems is an example of
List the phenotypic and genotypic ratios for the following scenarios
1. A purple heterozygote is crossed with a true breeding purple plant.
Assume Mendelian inheritance
2. A homozygous white chicken is crossed with a black and white chicken who exhibits co-dominance.
Which is a cause of new alleles?
Law of segregation
What is the chromosomal theory of inheritance?
Mendel's law of independent assortment states that
One allele “hides” the other allele
Alleles of the same gene are separated during gamete formation
Alleles from each parent are blended in the offspring
Each gene is inherited separately from other genes
Which is not a method that increase genetic variation in a population?
Law of independent assortment
Explain how crossing over occurs. Assume that genes A,B, and C are arranged in that order along a chromosome. From your understanding of crossing-over, do you think that A will be inherited more often with B or with C?
Why will crossing over occur more between A and C than A and B?
A and C are farther apart so the chances of crossing over somewhere between them are greater than the chances of crossing over somewhere between A and B.
What is genetic linkage?
When 2 gene alleles are often inherited together because they are close together on the chromosome
What process disrupts genetic linkage?
Crossing over, when crossing over occurs it randomly separates two genes on an allele that were linked together
Genetic linkage and Crossing over
Crossing over disrupts genetic linkage
Genes that are further apart are more likely to independently assort (be separated from each other though crossing over)
Two genes are known to be on the same chromosome, yet analysis of genetic crosses involving these genes suggests that they often assort independently. The most plausible explanation for this observation is that they are
the genes involved in causing Huntington’s disease.
genes that are especially prone to mutation.
on opposite ends of the same chromosome.
located adjacent to one another on the same chromosome.
Both members of a couple are carriers for a recessive autosomal disease allele. If the couple has four children, which of the following statements must be true?
One of the children has the disease
Two of the children are carriers of the disease
All female children have the disease
Fifty percent of the children could be carrier of the disease
Sickle-cell anemia is an inherited chronic blood disease caused by an autosomal recessive allele. Suppose a man who is homozygous recessive for the sickle-cell gene fathers a child by a woman who is a carrier for sickle-cell. What are the chances their children will exhibit the disease?
Genes located right next together on the same chromosomes are referred to as ________ and generally ____________.
Linked…sort independently during meiosis
Homologous…are inherited together
Linked…do not sort independently during meiosis
Codependent…do not sort independently during meiosis
Why are Autosomal Recessive genetic disorders common in a population?
Because they can continue on in a population because carriers can pass on the disorder without being affected by it
Huntington’s disease is an autosomal dominant disease. How is it possible that this disease persists in a population?
The deadly allele can hide in heterozygous carriers.
the disease causing gene is highly affect by the environment
The disease is not controlled by one specific gene, rather there are many genes that affect whether the allele is expressed
The disease takes effect later on in life allowing affected individuals to have children and pass on the affect alleles
Which of the following statements regarding crossing-over is false?
Crossing-over disrupts the linkage between genes on the same chromosome.
Crossing-over disrupts the linkage between genes on different chromosomes.
Crossing-over produces new allelic combinations.
Crossing-over produces non-parental chromosomes.
Which of the following is not true regarding x-linked disorders?
They are found on sex chromosomes
They can be inherited by both males and females.
Females are more susceptible than males
All of the above are true
An XX individual develops as a male. Which of the following statements offers the most likely explanation?
The XX inheritance pattern represents male
The sperm did not contribute any genetic material
A piece of the Y chromosome attached to one of the X chromosomes
The egg contributed twice as many sex chromosomes compared to the average situationrease genetic variation in a population?
The XX inheritance p | <urn:uuid:dac45a51-614f-4a00-8acf-d6d78f39f9eb> | CC-MAIN-2017-04 | https://docs.com/janet-van-de--1/9484/session-14-exam-review | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90804 | 2,579 | 2.9375 | 3 |
Power over Ethernet -- Ready to Power On?
Consider the following scenarios:
You're planning to install the latest Voice over IP (VoIP) phone system to minimize cabling build-out costs when your company moves into new offices next month. Just imagine if you didn't need to be concerned about losing your telecommunications system every time there's a power outage.
- The company staff has been clamoring for a wireless access point in the picnic area behind the building so they can work on their laptops through lunch, but the cost of running electrical power to the outside is prohibitive.
- Management has been asking for security cameras and business access systems throughout the facility, but you would rather avoid yet another electrician's bill.
Have you ever imagined what you could do if the network cabling you already have in place (or that you're going to have to install anyway) could also support electrical power? Think of the possibilities for simplifying your infrastructure support.
You no longer need to imagine such a scenario, because with the forthcoming new IEEE standard 802.3af, also known as Power over Ethernet (PoE), your dreams of power over network wiring are now a gigantic step closer to reality. Power over Ethernet promises to enable these applications and many more by providing up to 12.95 watts of power (at 48 volts) over the same Category 5 cable that already delivers your standard 10/100/1000Mb Ethernet service.
You're probably thinking this all sounds so great that there must be a catch somewhere, when in fact this technology is real and is rapidly becoming more widely available. With the new standard's official approval expected in the next few months, let's explore the technological frontier and find out what PoE is all about and how it can make your life easier.
What is Power over Ethernet?
A version of electrical power over network connections has been utilized in the telecommunications industry for many years. It's what allows your telephone service to continue when you experience power outages. So what is so exciting and unique about Power over Ethernet?
Power over Ethernet extends the reliability that the telecommunications industry has enjoyed for years and enables lifeline service for IP telephones. It has the ability to connect and power wireless access points and web-based security cameras. Even more exciting, PoE opens the door to a new generation of networked appliances. Because there is no need for the PoE appliance (called a "Powered Device" or PD in the standard) to be anywhere near a wall socket, the PoE vendors foresee a plethora of innovative applications, from building access systems and retail point-of-information systems to "smart" signs and vending and gaming machines.
There are two system components in PoE -- the Power Sourcing Equipment (PSE) initiates the connection to the second component, the Powered Device (PD). The current is transmitted over two of the four twisted pairs of wires in a Category-5 cable. The standard defines two choices for which pairs of wires are used to transmit the power. In one method, the power goes over the spare pairs that are not used by 10BASE-T and 100BASE-T. In the other method, the data pairs are used (without negatively affecting data transfer performance). The Power Sourcing Equipment (PSE) can take either approach. All Powered Devices must support both.
When used in conjunction with a UPS system and integrated with network management tools, mission-critical network-based facilities can maintain high availability, even though the electrical power may be down. You may be able to unplug devices with sufficiently low power requirements and rely on the network to provide power with UPS reliability.
Any network manager understands the pain of those midnight visits to the data center or wiring closet to reset some piece of equipment. Power over Ethernet eliminates the need to push a reset or power switch on remote, possibly difficult-to-reach PoE-powered devices. They can be turned on or off or reset by a network manager sitting at his or her desk. This has the potential to save your company the huge overhead costs of on-site service calls, the maintenance of dispatch centers, and late night administration trips.
The Internet Engineering Task Force (IETF) has been working in parallel with the IEEE to extend its Simple Network Management Protocol (SNMP) to apply to PoE ports as well. It has developed an Internet Draft that extends the Ethernet-like Interfaces MIB (RFC 2665) with a set of objects for managing Power Source Equipment and Powered Devices. IEEE 802.3af defines the hardware registers that would be used by a management interface. The IETF draft defines management data objects based on the information read from and written to these registers.
The real beauty of the standard is that Power over Ethernet is completely compatible with existing Ethernet switches and networked devices. Because the Power Sourcing Equipment (PSE) tests whether a networked device is PoE-capable, power is never transmitted unless a Powered Device is at other end of the cable. It also continues to monitor the channel. If the Powered Device does not draw a minimum current, perhaps because it has been unplugged or physically turned off, the PSE shuts down the power to that port. Optionally, the standard permits Powered Devices to signal to the PSEs exactly how much power they need.
At this point, you might be saying to yourself, "All this sounds fantastic! How can I start using this technology?" Introducing Power over Ethernet to your network is trivial. All of the equipment goes into your wiring closet, where the edge switches connect to networked devices. If you are expanding or upgrading your network equipment, you can purchase Ethernet switches or modules that integrate PoE into their 10/100 or 10/100/1000 ports. These ports are said to have "inline power."
Alternatively, you can add power to existing ports using a mid-span insertion device, sometimes alternatively called a mid-span insertion panel or a "power hub." When using a power hub, a patch cable connects the switch port to an input port on the power hub. The matching output port on the power hub is connected to the Powered Device. A Power over Ethernet adaptor is similar to a power hub. It adds PoE capability to a single existing Ethernet port or networked appliance.
When do you use switch ports with inline power, and when do you use a mid-span insertion device? The correct choice depends on how many powered ports you will be deploying and how much flexibility you need or want to locate or move the powered devices. You will also need to work within the space, power, and cooling constraints in your wiring closet. Of course, the biggest factor, as always, will be what your budget can support.
If you will only need to administer a handful of PoE ports in an existing or new wiring closet, a power hub may be the simplest and most cost-effective approach. You pay for PoE only where you need it, and you maintain your investment in your current switches. You have the flexibility to connect one mid-span device to ports on multiple switches. However, using mid-span insertion devices results in having three ports for every one that you would need if you used a switch that integrates PoE on its ports. This means more rack space and higher power requirements. You should take care, for example, by tagging or using color-coded cables when making the connections from switch port to the mid-span insertion device to ensure that the correct connections are made between switch ports and networked devices, and to speed and simplify the process the next time the configuration is changed.
On the other hand, if you plan to support many PoE ports -- or if space, power, or cooling in the wiring closet is at a premium -- purchasing new switch ports with inline PoE may be preferable. Note that due to the limitations of the available power and cooling, not all existing chassis-based switches can support modules with inline power. Similarly, some systems may limit the number of powered ports for the same reasons. Be sure to check the product specifications before purchasing any additional devices.
Your UPS configuration may need to be updated to support Power over Ethernet. You might discover the need to expand your existing UPS capacity or to add UPS capability to wiring closets where it is not currently present. You will want to coordinate UPS configuration with PoE configuration to ensure that power is available to the switches and power hubs where it is most needed. Likewise, you must take care to match configurations of switches and mid-span devices if you are managing power on a port-by-port basis during a power outage.
Vendors are releasing management tools to complement their hardware devices and enterprise solutions. For small PoE deployments, device-level management may be adequate. However, for large installations of Power over Ethernet, such as IP telephony, network managers may prefer a management solution that integrates PoE with the management of the application itself. This approach reduces the total configuration required and eliminates configuration errors due to inconsistency between the application and the network ports. Enterprise solution vendors have a natural advantage over device vendors in this area because they are able to fully integrate the hardware and software.
What Is the Catch?
Power over Ethernet equipment has been available for a number of years, so what has changed? The most important change is that a stable, official specification will soon exist that the vendors can build to. Many current products claim to already comply with the forthcoming standard; however, as with any emerging technology, these claims should be taken cautiously, especially for products that have been released significantly in advance of the final version of the standard.
The newly established Power over Ethernet consortium conducted its first round of interoperability testing on a matrix of Powered Devices and Power Sourcing Equipment in April 2003, at the University of New Hampshire InterOperability Laboratory. 3Com, Extreme Networks, Nortel Networks, PowerDsine, and Texas Instruments are members of the consortium. Additional companies that participated in the event included Avaya and Foundry Networks. The results were only made available to the participants.
When considering the purchase of Power over Ethernet equipment, be sure to ask your vendor about interoperability with other vendors' equipment. Even if all your data switches come from a single vendor, the Powered Devices that you and your users want to deploy may come from other manufacturers.
As with Gigabit Ethernet over copper, deploying Power over Ethernet depends on the proper use of Category 5 cable. Some older networks may still have remnants of Category 3 cable or connections in them. Another gotcha to watch for is, on occasion, some installations have economized on cable by "splitting" Cat-5 cable in two, connecting two end-devices with a single cable. Power over Ethernet will not work in such deployments. This is an opportunity to bring your cable infrastructure up to standard.
While the IEEE standard for Power over Ethernet has yet to be completely formalized, a final draft is available for review and the last step in the ratification process is expected in June of this year. The final ratification of the standard will quickly lead to the introduction of many new types and varieties of network appliances. Numerous PoE products have already been announced by all major Enterprise network equipment vendors, making now an ideal time to evaluate the advantages that PoE can bring to your organization. With the capital cost of a pilot deployment running as little as the cost of a few adaptors or a power hub and some powered devices, what are you waiting for?
http://www.iol.unh.edu/consortiums/poe/ - The Power over Ethernet Consortium website
http://www.ietf.org/internet-drafts/draft-ietf-hubmib-power-ethernet-mib-04.txt - Most recent IETF documentation
http://www.ieee802.org/3/af - IEEE Power over Ethernet web pages
Beth Cohen is president of Luth Computer Specialists, Inc., a consulting practice specializing in IT infrastructure for smaller companies. She has been in the trenches supporting company IT infrastructure for over 20 years in a number of different fields including architecture, construction, engineering, software, telecommunications, and research. She is currently consulting, teaching college IT courses, and writing a book about IT for the small enterprise.
Debbie Deutsch is a data networking industry veteran with 25 years experience as a technologist, product manager, and consultant, and has participated in the development of national and international data communications standards. Her expertise spans wired and wireless technologies for Enterprise, Carrier, and DoD markets. She is currently a freelance writer and consultant. | <urn:uuid:cefb2486-8709-4003-aabf-bb9177873228> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/2208781/Power-over-Ethernet--Ready-to-Power-On.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00514-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931856 | 2,578 | 2.546875 | 3 |
From Sketches on Napkins to Border Gateway Protocol
Cisco Fellow Kirk Lougheed’s Border Gateway Protocol (BGP) is one of many Cisco contributions to Internet standards. BGP is a path vector protocol for exchanging routing information between independent networks and has been adopted by most Internet Service Providers (ISPs).
At Internet Engineering Task Force (IETF) meetings in January 1989, Lougheed (a Cisco founder and employee number four) and Yakov Rekhter of IBM sketched their first draft of the protocol “on cafeteria napkins,” which are preserved as framed artifacts of Cisco’s role in standards development in one of the headquarters buildings in San Jose.
BGP facilitates scalable, fully decentralized routing, replacing the Exterior Gateway Protocol (EGP) routing protocol. Len Bosack, a Cisco founder, suggested reusing Transmission Control Protocol (TCP) as a reliable way to carry routing information. Lougheed commented that this suggestion “was considered heresy at the time.” Lougheed, Rekhter, and their IETF colleagues incorporated this suggestion and other refinements into BGP, eventually producing three successive versions of the protocol. Later, Rekhter collaborated with Tony Li on BGP-4, which supports classless inter-domain routing to allocate network addresses more efficiently than the original network address assignment scheme. Released in 1995, this latest version of BGP continues to be used today by all large ISPs.
Each instance of a BGP router maintains a table of networks, and each network is associated with a path of autonomous systems (roughly, groups of networks in a single administrative domain) that must be traversed to reach that network. BGP specifies how these tables of networks are exchanged between routers, subject to administrator-defined policies. The result is a global view of all networks within the Internet. BGP is the standard for communicating network reachability information within the Internet core.
In 1997, Cisco recognized Lougheed’s achievement by naming him a Cisco Fellow, chosen for innovative technical contributions and leadership and for advancing the networking industry. Many Cisco Fellows significantly influenced the evolution of IP networking and are widely recognized as thought leaders in the networking industry. | <urn:uuid:423499e9-f575-440a-9db7-9a4b47611695> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/open-source/open-standards/border-gateway-protocol-bgp.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932866 | 456 | 2.8125 | 3 |
Big data is in a state of constant evolution, with new technologies and software regularly emerging in the field.
There are increasing numbers of companies that are turning to big data in order to control the vast amounts of information controlled on a variety of devices, ranging from laptops and tablets to mobile phones and even smartwatches.
It was noted by Information Age that the result of this is that big data can no longer be considered a passing trend. Enterprises need to accept that the technology is here to stay – as when internal communications, cloud computing and social media interactions are considered, organisations are having to handle more information than ever before.
With so much data now readily available to companies, organising it is harder than ever before. Information Age noted it is increasingly challenging to put big data into a unified format, especially when it comes to governing information and making sure it does not fall into the wrong hands.
As well as this, big data remains a relatively new technology and companies that have just begun using it will be more likely to encounter difficulties. Accessing logs and audit trails could also be challenging for businesses that are looking to check harder-to-reach information.
Big data has become such a phenomenon in the IT industry that likely that some companies will feel they have to investigate the technology, even though it may not necessarily be suited to them.
Prior to investing any time or money into big data, companies should fully consider what they are hoping to get out of big data. It may be that workers need to complete more training on data mining and machine learning, or they could struggle to interpret data properly. | <urn:uuid:ca840b32-9a7d-498b-b131-cd5ec2668be4> | CC-MAIN-2017-04 | http://kognitio.com/what-will-the-big-data-challenges-for-2016-be/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963406 | 323 | 2.734375 | 3 |
The network and security operations team needed to know if their load-balancers, servers and applications deployed with TLS were vulnerable to the Heartbleed bug. They needed to know which systems were affected, where they were, and the origins of the attacks so they could block those attempts while they patched their systems. Of primary concern was whether any internal systems had become compromised. Although firewalls were updated and servers were patched a growing concern was that internally compromised systems could be used to launch Heartbleed attacks from the inside.
Heartbleed, a vulnerability in the OpenSSL cryptography library, affected roughly 17% of all secure web servers at the time of its disclosure in April 2014. The bug was named "Heartbleed" after the TLS Heartbeat extension that it exploited. This extension is often enabled by default, making both clients and servers vulnerable. Vendors quickly pushed out security updates, but the ubiquitous use of OpenSSL in TLS-enabled websites and services meant that 1.5% of the 800,000 most popular websites were still vulnerable a full month after Heartbleed's disclosure. Given the prevalence of SSL for web applications and the fact that TLS could be enabled on servers as well as other infrastructure like load-balancers, tracking which applications and supporting infrastructure that is affected seemed daunting.
With thousands of servers and infrastructure systems, they did not want to perform an extremely inefficient prevention and analysis by packet-filtering and customizing logs at the firewall. Filtering and logging are computationally expensive and can impact user performance and application availability as well as blocking legitimate uses of the TLS Heartbeat extension.
- Automatically discover which hosts and services across all of their applications and infrastructure were vulnerable to Heartbleed
- Determine whether malicious entities were trying to access the site and identify them by IP and geolocation
- Continuously monitor after remediation to be sure that previously decommissioned servers and infrastructure didn't accidentally come online introducing the vulnerability
Vendors quickly pushed out security updates, but the ubiquitous use of OpenSSL in TLS-enabled websites and services meant that 1.5% of the 800,000 most popular websites were still vulnerable a full month after Heartbleed's disclosure. Given the prevalence of SSL in HTTP-based deployments, tracking which applications might be affected becomes a difficult task.
Early morning the day of the Heartbleed announcement, the financial organization's Principal Security Analyst and Senior Network Architect wondered if they could use ExtraHop to rapidly detect Heartbleed attempts and discover all vulnerable systems. They had been a customer for over six months and had used the ExtraHop platform to discover, view, and audit all SSL activity across their application portfolio of over 800 applications mainly to provide audit reports for PII data.
They rapidly built a dashboard with a view of all SSL and TLS version session analysis by host and client as well as all certificates and ciphers in use which are native "out of the box metrics" provided by ExtraHop. They were able to immediately identify which hosts in the network could be vulnerable to malicious traffic. That same morning a two hours after the announcement, ExtraHop published the industry's first Heartbleed Detection bundle free to the ExtraHop Community site. A notification went out to all customers and the Security Analyst downloaded and applied it to their ExtraHop platform. The Heartbleed Detection bundle's dashboard built upon ExtraHop's native TLS / SSL analysis but included an Application Inspection (AI) Trigger to record whenever a TLS Heartbeat record was observed and to store the client IP and the Common Name (CN) from the x509 certificate so a customer would know where each attempt came from and where it was destined. The bundle also incorporated ExtraHop's Geomap feature to visualizing malicious access attempts and target systems on a world map.
ExtraHop natively collects many SSL and encryption attributes, like the TLS Heartbeat. The customer used the Heartbleed Detection bundle to see both current and past Heartbleed attempts on their systems.
They watched in real time as dozens of attempts targeted their applications. The security incident team used this information to immediately set blocking policies for those clients while they patched their vulnerable systems.
It took less than three hours from the time of the Heartbleed announcement to identification of all vulnerable systems, real-time analysis of past and current attempts by client, to remediation efforts - all at no additional product cost to the organization. The Principal Security Analyst said it best, "I didn't realize we had purchased a platform for pervasive security monitoring when we first bought ExtraHop. We originally bought you for Citrix analysis. This is the the most extensible monitoring platform I have ever seen."
The most important value measurement for this team was time. "In the security world, time is the currency that counts. The longer it takes to identify and act, the higher probability your business gets hit big and you lose your job." They also said that no other platform, that they are aware of, would be able to provide a real-time and globally observed view of all client, network, and application behavior and encryption analysis across their entire application and infrastructure portfolio. They indicated this unique perspective based on wire data analysis allowed much better situational awareness which translated to a fast and focused response.
Packet filtering at the firewall was seen as a last resort and would have cost them dearly in time and user performance. Instead, the ExtraHop platform automatically detected and classified all traffic sources and SSL server targets and continues to perform that analysis to this day.
Both the Principal Security Analyst and Network Architect were promoted a few months afterward for demonstrated leadership, agility and innovation at their company. | <urn:uuid:3fa4d370-bbff-4781-b8df-4521e3c00cf8> | CC-MAIN-2017-04 | https://www.extrahop.com/solutions/heartbleed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00450-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967376 | 1,144 | 2.515625 | 3 |
Phishing: Electronic Social Engineering
In the book The Art of Deception, Kevin Mitnick defines the term “social engineering” as the use of influence and persuasion to deceive people into divulging information. While there are many ways to glean information from unsuspecting and trusting people, advances in technology make it even easier.
One particularly widespread form is phishing. Many individuals and organizations fall prey to these surreptitious, yet perilous attacks. In most cases, the victims of phishing are unaware of the attack. | <urn:uuid:7bebafd6-d39c-478d-a9aa-8d1d831abe09> | CC-MAIN-2017-04 | http://certmag.com/phishing-electronic-social-engineering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94251 | 106 | 2.875 | 3 |
Social media: A mixed blessing for disaster response
- By Alice Lipowicz
- Sep 16, 2011
When it comes to disaster response, social media has proven to be a popular and effective tool for sharing information --except when the information is incorrect or malicious, in which case it hinders response efforts.
That conundrum is one of the drawbacks that limit the usefulness of social media in emergency situations, according to a new report from the Congressional Research Service, which was released publicly Sept. 13 by the Federation of American Scientists.
Networks such as Facebook and Twitter have been used for sharing warnings and disaster information, contacting friends during a crisis and raise funds for disaster relief.
Government agencies use such tools primarily for pushing information to the public, such as links to hurricane forecasts and evacuation routes. Some emergency management agencies are using social tools to help gather and share information in real-time, such as locations of trapped survivors.
However, using social media in such situations has risks, the service warned.
“While there may be some potential advantages to using social media for emergencies and disasters, there may also be some potential policy issues and drawbacks associated with its use,” the report said.
For example, studies show that outdated, inaccurate or false information has been disseminated via social media forums during disasters, the report said. In some cases, the location of the hazard or threat was inaccurately reported, or, in the case of the Japanese tsunami, some requests for help were retweeted repeatedly even after victims are rescued.
To reduce the possibility of false information, responders can use additional methods and protocols to help ensure the accuracy of the incoming information. Even so, response time might be hindered.
Another concern is that some individuals or organizations might intentionally provide inaccurate information to “confuse, disrupt, or otherwise thwart response efforts,” the report said. This could be for a prank or as part of a terrorist act.
Technology limitations may limit the usefulness of social media, because power outages may be widespread and many smart phones and tables have battery lives of less than 12 hours.
“Although social media may improve some aspects of emergency and disaster response, overreliance on the technology could be problematic under prolonged power outages,” the report said.
Also, the costs to the federal government of establishing and maintaining a social media emergency response program are unclear, the authors wrote. Estimates of how many personnel would be required, and with what skills, to carry out a successful program were uncertain.
The privacy and security of personal information collected in the course of a disaster response through social media also are concerns, the report concluded.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. | <urn:uuid:ee7ef0bf-cace-4665-b6a7-8802f6b69fee> | CC-MAIN-2017-04 | https://fcw.com/articles/2011/09/16/social-media-for-disasters-has-good-and-bad-aspects-crs-report-says.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00018-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94676 | 565 | 2.890625 | 3 |
With User Mode Linux you can create virtual Linux machines within a Linux computer and use them to safely test and debug applications, network services, and even kernels.
You can try out new distributions, experiment with buggy software, and even test security. The author covers everything from getting started through running enterprise-class User Mode Linux servers.
- What User Mode Linux is, how it works, and its uses in Linux networks
- Key applications, including server consolidation, development, and disaster recovery
- Booting and exploration: logins, consoles, swap space, partitioned disks, and more
- Copy-On-Write (COW): UML’s efficient approach to storing filesystem changes
- In-depth discussion of User Mode Linux networking and security
- Centrally managing User Mode Linux instances, and controlling their hardware resources
- Implementing clusters and other specialized configurations
- Setting up User Mode Linux servers, step-by-step: small-scale and large-scale examples
- The future of virtualization and User Mode Linux. | <urn:uuid:791ff07f-725a-4fbe-8b64-772df8f7cd1f> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/09/22/ebook-user-mode-linux/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.862842 | 214 | 2.828125 | 3 |
In todays computing world everyone is looking for speed. In addition to speed is space, meaning how much space do I have on that hard drive and how fast can it read my data.
Solid State Drives are drives with no moving parts which make it less suspectibale to damage. SSD’s can read data up to 500mb per second compared to regular drives that read at about 200mb per second. The cost of a solid state drives is generally double what you would pay for a regular drive, the advantage again is speed. Boot up times for many of the operating systems is just seconds rather than minutes we have all learned to hate.
We also have Hybrid drives that blend regular hard drive capacity with a bit of SSD speed. This works by by placing traditional rotating platters and a small amount of high-speed flash memory on a single drive. Basically the drive monitors what is accessed the most frequently and basically learns which actions to perform but putting this information in flash memory. Since this is drive learns, that means new data access will perform slower than if you were using just a solid state drive.
So if you can afford it SSD’s really are the way to go for speed and performance. However if you are on a budget, consider a hybrid drive. This will give you a boost in boot up speed and actions that you peform on a regular basis. | <urn:uuid:29c8f073-f423-4611-8807-091aeae98c21> | CC-MAIN-2017-04 | http://www.bvainc.com/solid-state-drives-vs-hybrid-drives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969904 | 281 | 2.96875 | 3 |
A scribbled signature may have been enough to verify your identity 20 years ago, but today’s online world requires more advanced — and authenticated or encrypted — methods of proving who, or what, you are online or within a digital environment.
Enter digital certificates — an authentication method that has an increasingly widespread role in today’s online world. Found in e-mails, mobile devices, machines, websites, advanced travel documents and more, digital certificates are the behind-the-scenes tool that helps keep identities and information safe.
What are digital certificates?
Developed during the eCommerce boom of the 1990s, digital certificates are electronic files that are used to identify people, devices and resources over networks such as the Internet.
Digital certificates also enable secure, confidential communication between two parties using encryption. When you travel to another country, your passport provides a way to establish your identity and grant you entry. Digital certificates provide similar identification in the electronic world.
Certificates are issued by a certification authority (CA). Much like the role of the passport office, the responsibility of the CA is to validate the certificate holder’s identity and to “sign” the certificate so that it is trusted by relying parties and cannot be tampered with or altered.
Once a CA has signed a certificate, the holders can present their certificate to people, websites and network resources to prove their identity and establish encrypted, confidential communication. A standard certificate typically includes a variety of information pertaining to its owner and to the CA that issued it, such as:
- The name of the holder and other identification information required to identify the holder, such as the URL of the Web server using the certificate, or an individual’s e-mail address
- The holder’s public key, which can be used to encrypt sensitive information for the certificate holder or to verify his or hers digital signature
- The name of the certification authority that issued the certificate
- A serial number
- The validity period (or lifetime) of the certificate (i.e., start and end date)
- The length and algorithm of any keys included.
In creating the certificate, the identity information is digitally signed by the issuing CA. The CA’s signature on the certificate is like a tamper-detection seal on packaging — any tampering with the contents is easily detected.
Digital certificates are based on public-key cryptography, which uses a pair of keys for encryption and decryption. With public-key cryptography, keys work in pairs of matched “public” and “private” keys.
In cryptographic systems, the term key refers to a numerical value used by an algorithm to alter information, making that information secure and visible only to individuals who have the corresponding key to recover the information.
The public key can be freely distributed without compromising the private key, which must be kept secret by its owner. Since these keys only work as a pair, an operation (e.g., encryption) executed with the public key can only be undone or decrypted with the corresponding private key, and vice versa. A digital certificate can securely bind your identity, as verified by a trusted third party, with your public key.
Core to a digital world
At one point, the use of digital certificates was limited to secure sockets layer (SSL) implementations and public key infrastructure (PKI) environments. And while those remain two cornerstones for the technology, their value has been realized and expanded to help secure people, machines, devices and environments alike.
The SSL start
The use of SSL digital certificates to encrypt transmissions between Web browsers and Web servers remains a monumental development of the eCommerce boom. From Internet shopping to online-banking to Web-based stock trading, SSL certificates were the catalyst for innovation that made today’s online world possible.
Based on a publicly trusted certificate, SSL technology was created to help prevent theft, fraud and other criminal activity within the new online frontier. Personal data had to be protected, credit card numbers secured, and transactions safeguarded. And while SSL technology has advanced since, the understanding gained from its development has helped extend digital certificates to secure all aspects of today’s connected world.
In your everyday devices
An electronic document that is embedded into a hardware device and can last as long as the device is used, a device certificate’s purpose is similar to that of a driver’s license or passport: it provides proof of the device’s identity and, by extension, the identity of the device owner.
Popular examples of devices that are secured by certificates include cable-ready TVs, smart meters, mobile smartphone devices, wireless routers, satellite receivers and others. Using device certificates helps protect services from unauthorized access, possibly by cloned devices. Typically, an organization injects certificates into devices that are then distributed across a large user base.
Protecting your identity
A technology that is rarely seen but always relied upon, digital certificates help secure important identity aspects of everyday lives. Specialized digital certificates authenticate identities everywhere from typical office environments to border security checkpoints.
Also, as the backbone of the ePassport trust infrastructure, PKI and digital certificates help secure domestic and international borders by implementing technology that makes it difficult for criminals to duplicate, deceive or circumvent identity documents.
Securing the machines
By issuing certificates to machines, organizations permit authorized machines to access a network by authenticating to other machines, devices or servers — typically in either Microsoft Windows or UNIX environments — using a certificate. This allows authorized machines to access and share confidential data. Many other solutions for securing networks, including firewalls or network isolation (which prevents access to the Intranet/Internet), are either susceptible to attack or are not practical. Using certificate-based authentication for machines is the best way to secure a network.
This approach prevents unauthorized machines from accessing a network; encrypts machine-to-machine communication; and permits machines, both attended and unattended, to authenticate to the network over a wired or wireless network connection. Typical deployment scenarios include hospitals, law enforcement, government and more.
Popular with enterprises, desktop certificates enable secure e-mail, file and folder encryption, secure remote access (VPN) and the secure use of electronic forms and PDFs. As data breaches, identity theft and information loss continue being commonplace occurrences, digital certificates in the enterprise enable organizations to solve security challenges quickly, easily and in a cost-effective manner.
While there are many factors that contribute to the increase of use of digital certificates, one of the most compelling is the widespread presence of mobile devices. From 8-year-olds to retired grandparents, many people have now access to or use mobile devices daily. And many of those devices are embedded with a digital certificate that authenticates its identity and ties it to the owner.
According to a recent Gartner report, global mobile phone end-user sales grew 35 percent in Q3 2010 over Q3 2009, accounting for 417 million devices sold. The report also noted that smartphone growth increased 96 percent in the third quarter compared to 2009. With many of these brands and models either including digital certificates out of the box or providing the option to install them, the increase in digital certificate use is easy to understand.
Of course, the ubiquity of mobile devices isn’t the only catalyst. As digital certificate products and capabilities become available from different vendors, the cost of implementing them decreases.
But this raises an important question: is it all happening too fast? The answer is yes – in some cases. As organizations rely more and more on digital certificates, they can be overwhelmed with the day-to-day management of large certificate pools.
It’s really not an arduous chore if you have only a handful of digital certificates, but many organizations deploy thousands of digital certificates with their products, services and even within the organization itself. Without a proven system in place, it’s easy to lose track of thousands of expiry dates, deployment locations and certificate copies, not to mention errors introduced by the human element.
So, what’s the best approach for mitigating these difficulties? To date, one of the most relied upon methods is to employ a two-pronged strategy — certificate discovery and management.
The most trusted and successful security vendors offer certificate discovery services that use network scans to search for certificates on both internal and external hosts. This solution can typically be configured to scan given IP addresses or IP ranges, looking for certificates, with a goal of exposing potential problems on the network.
Certificate discovery solutions often highlight pending issues such as certificates about to expire or certificates from unauthorized vendors.
Once an organization fully understands its certificate environment, it’s best to employ a proven tool or service to help streamline the day-to-day management of large certificate pools. These services range from simple (and often limited) software products to robust hosted services that provide more functionality, customization and control.
The more advanced services — whether deployed on-site or realized via a cloud-based model — enable organizations to easily circumvent the issues that plague unmanaged certificate environments (e.g. self-signed certificate creation, certificate copies, expiring certificates, etc.).
Cryptography and compliance
Organizations that are subject to regulations typically implement a security policy concerning the use of digital certificates. This often results in certificate-reporting and audit requirements. Typically, organizations provide a list of certificates issued from their known CAs to adhere to these requirements. In most cases, however, these lists are incomplete because some CAs are unknown and certificates have been copied.
That might present a problem. An organization’s policy might require 2048-bit keys, and it’s likely enforced with known CAs. But with unknown CAs, organizations could have weak cryptography deployed and be unaware of the oversight, leaving them vulnerable to a data breach.
The potential presence of unknown CAs or copied certificates also means IT departments cannot provide a complete list of all certificates — leaving an organization non-compliant and at risk during an audit.
Side with a security expert
As digital certificates become a more critical component in our daily lives, security experts are available to help organizations leverage the technology, regardless of their current deployment status.
Proven security companies are available to help organizations understand which certificates are best suited to meet their business objectives. And they also provide the tools and service to manage all certificates — regardless of type, purpose or environment.
If not properly managed from the onset, large certificate pools can quickly become unorganized. This may lead to higher costs, non-compliance and the unnecessary use of workforce bandwidth. And this doesn’t even account for the negative effect that may occur to a brand, product or service in the consumer’s eyes.
And even if an organization didn’t deploy certificates via a management tool or service, it’s not too late to partner with a provider that can help deploy the necessary discovery and management tools to make sense of all digital certificates — no matter how many are deployed. | <urn:uuid:1f4be5da-4730-49ce-baa8-40cde29a0f93> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/03/21/the-expanding-role-of-digital-certificates-in-more-places-than-you-think/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00138-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930659 | 2,263 | 3.8125 | 4 |
It’s one of the biggest challenges in computing — getting a machine to think like a human. This long-standing computational problem is one that captivates public interest, as evidenced by the much-hyped 1997 chess match between IBM’s Deep Blue supercomputer and world champion Garry Kasparov. The machine won a six-game match by two wins to one with three draws, but more importantly, the game brought about an international love affair with supercomputing. It’s actually been a long-time since supercomputing has so moved the world. Arguably, not even the 2008 accomplishment of breaking the petaflop barrier created such an intense international stir. There’s something about a machine being able to do something so seemingly human, like playing a centuries old game of strategy, that touches hearts and minds more than achieving some remote-sounding number of computations per minute.
But it turns out that playing chess is actually not such a great predictor of “human-ness” for a machine. It’s actually pretty easy for computers to beat humans at well-defined tasks such as playing rule-oriented games or predicting weather changes. What’s not so easy is for machines to understand language — indeed semantics is one area where humans still have the clear edge.
In a recent New York Times article, author Steve Lohr covers current advances in the field of computational semantics being undertaken by a group of researchers at Carnegie Mellon University.
Team leader Tom M. Mitchell, a computer scientist and chairman of the machine learning department, outlines the nature of the challenge: “For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term.”
The researchers are working on a project, called the Never-Ending Language Learning system, or NELL. NELL is fed facts, which are grouped into semantic categories, such as cities, companies, sports teams, actors, universities, plants and 274 others. Examples of category facts are “San Francisco is a city” and “sunflower is a plant.” NELL has been able to glean 390,000 facts by scanning hundreds of millions of Web pages. The larger the pool of facts, the more refined the system will get.
So much of language understanding is predicated on an underlying knowledge base and that’s what NELL is developing. In the sentence: “The girl caught the butterfly with the spots,” a human reader innately understands that “spots” refers to the butterfly because the human knows that butterflies are likely to be spotted whereas girls are not. Such “basic” knowledge that we take for granted confounds the computer. This general knowledge can only be learned, and that’s why NELL was programmed to learn so many facts.
There have been similar attempted artificial learning programs, but NELL is different in that the system is being taught to learn on its own with little assistance from researchers, although if they notice NELL has gotten something blatantly wrong, like classifying an Internet cookie as a baked good, they will correct those errors. | <urn:uuid:980f4ba3-7283-4204-a202-63a8359c502c> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/10/05/machine_learns_language_starting_with_the_facts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00532-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949533 | 660 | 3.171875 | 3 |
The password. It really gives you power doesn’t it? You’re the only one that has the “key” to the workstation or something else that has to be kept away from prying eyes. If you’re using a password than there must be something worth protecting, so why not make this protection a good one?
Choosing a good password
There are two ways to choose a password. You can either use a password generation utility or you can make a password by yourself. If you’re going to do it by yourself than there are several things you have to keep in mind.
Some of the things you should not use include: your name (as well as names of family members, friends, etc.), phone number, address, nickname, computer name, words from a dictionary, name of the company you work for, etc. The idea is to basically not use any kind of information that may be linked with you directly.
A good password includes the following: upper-case and lower-case letters mixed together with special characters, and is at least six characters long. Also, never repeat the same character within a password. Example of a good password: y_R6t*n!b
Using a password generator and safekeeping
A random password generator utility is a wise choice when making hard-to-crack passwords. Also, when you generate a good password, it will be pretty hard to remember so a password manager is a good thing to use. There are many software titles that do this job and two of them are presented below – one for Linux and one for Windows.
Figaro’s Password Manager – [ Download ]
Figaro’s Password Manager is a GNOME application that allows you to securely store your passwords which are encrypted with the blowfish algorithm. If the password is for a website, FPM can keep track of the URLs of your login screens and can automatically launch your browser. In this capacity, FPM acts as a kind of bookmark manager. The program is extremely easy to use and is open source free software.
Included with the program comes a nifty password generator, here’s how it looks:
myPasswords Professional – [ Download ]
myPasswords Professional is a password manager for Windows that uses Blowfish encryption to ensure your information is safe. It can export your databases to Microsoft Excel worksheets, HTML, text, and CSV files. It can import your existing Critical Mass and myPasswords databases and your sensitive information can be masked. The program is very configurable and it’s interface is simple which makes accessing information fast and easy. After a swift installation I doubt you’ll have any problems getting around the program, if you do – there’s a good help file to learn from.
Also included with the program comes a random password generator that makes your password creation extremely easy.
To make users create strong passwords, and in that way improve the security of a system, it’s a good idea to define the type of password that can be created. There are several ways to do this:
- make them use a password generator
- setup some guidelines like how much the password has to be long, what characters have to be used, etc.
- check the integrity of existing passwords with a cracking program and alert users with a weak password.
There are various cracking programs that you can use, some of them are:
It’s wise to change the password frequently as well as avoiding having people look at you when you type your password. There’s never enough paranoia when it comes to protecting your data.
Many applications, that need identification in order to be used, have a default password. Although this password may be easy to remember, you should change it as soon as possible. Lists of default passwords can be found all over the net and that’s probably one of the first things an attacker is going to try using. The same thing applies for any situation when a password is assigned to you, login and change it, right away.
An example of a list of default passwords can be found here.
For much more information on passwords and other methods of authentication, I recommend reading the excellent Authentication: From Passwords to Public Keys by Richard Smith.
As it says on the Addison-Wesley book page:
“[This book] gives readers a clear understanding of what an organization needs to reliably identify its users and how different techniques for verifying identity are executed.”
And, to close this article, here are two interesting articles you might be interested in: | <urn:uuid:9919c34e-5040-4e8f-bb29-0a80a7fa7fb6> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2002/05/24/basic-security-with-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921082 | 952 | 2.625 | 3 |
Former U.S. defense secretary Donald Rumsfeld might well have been speaking to chief security officers when he made a head-scratching statement that immediately entered the realm of famous quotations: “There are known knowns. There are things that we know we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns.”
Word salad though it may be, this quote turns out to be a good description of the daunting challenges CSOs face. CSOs do know about many cybersecurity threats, and can confidently mount defenses against them. They also understand the nature, if not the specifics, of many “unknown” threats – everything from outside attackers exploiting zero-day bugs to disgruntled employees stealing or corrupting proprietary information. Then, of course, there are the true “unknown unknowns” – entirely new and unexpected forms of cyberassaults that could materialize at any moment.
A starting point for every cybersecurity discussion is the sobering reality that no defense against even known threats can guarantee 100 percent security. Still, there are many ways in which CSOs and their teams can increase their odds of success, even when cyberattacks deviate from known and expected patterns.
Whether threats fall into the known or unknown category, the defenses against them break into three core areas: education, security controls, and incident response.
Employee awareness. In many ways, an educated employee base represents the foundation of any cyberdefense program. Organizations can hire top-notch security professionals and deploy cutting security technologies, but can still suffer breaches if employees unwittingly click on a link in a phishing email or visit a risky website. In many ways, solid cybersecurity builds from the ground up, with well-educated and cautious employees forming a critical line of defense.
The right technology. CSOs can deploy a wide range of security controls — from firewalls and spam filtering systems to sophisticated behavioral analytics solutions — to protect an organization. These latter systems can flag deviations from known usage and traffic patterns, and may even incorporate machine-learning techniques to continuously improve their effectiveness. Such solutions can often protect organizations against new forms of threats – or can at least warn the security team that some suspicious activity warrants further investigation.
Breach management. Incident response comes into play when a cyberbreach has occurred – whether from a known or unknown attack vector. On one level, the nature of the breach is immaterial. The organization victimized still needs to isolate, contain and eliminate the threat, determine what assets may have been compromised, and inform employees, customers, regulatory agencies, and others about any personal or corporate data that was exposed.
A critical element of any incident response process, however, involves performing computer forensics to identify how the breach succeeded, and to close that vulnerability to future attacks. Among other things, successful forensics requires access to extensive and comprehensive log records. Far too many companies still fail to keep adequate usage and traffic logs, making it near impossible to analyze and defend against new types of threats even after they’ve occurred.
“Unknown unknowns” will always be out there in the cybersecurity world. But there are many steps organizations can take to protect themselves against both known and unknown cyberthreats.
Dwight Davis has reported on and analyzed computer and communications industry trends, technologies and strategies for more than 35 years. All opinions expressed are his own. AT&T has sponsored this blog post. | <urn:uuid:0fbce5fa-b3d6-4d1d-9b3d-ab4fae813a27> | CC-MAIN-2017-04 | http://www.csoonline.com/article/3143634/internet/combatting-cybersecurity-unknowns.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00405-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961362 | 715 | 2.578125 | 3 |
SSL and TLS are not enough to secure your email
A very common marketing ploy involves companies advertising “secure” services .. where that security consists of only SSL- or TLS-encrypted connection to their servers. While use of TLS and SSL is a critical part of web and email security, it is only one small aspect of security. Below, we will talk about some of the other aspects of what you should be looking for in terms of an actual secure solution so you can be more saavy of simplistic marketing claims in the future.
What SSL and TLS do for you
SSL and TLS are critical to any security solution. They work by encrypting the connection from your computer to your email provider’s servers when you check your email, send email, or use their web based interface. For details see: How does Secure Socket Layer (SSL or TLS) work? This encrypted connection ensures that no one can eavesdrop on your communications and read them, intercept your username or password, change the data in transit, or perform other malicious actions.
What is not secured with just SSL and TLS
There is a whole lot more to the flow of inbound and outbound email than your connection to your email service provider’s systems. In short and sweet terms, this includes:
- Messages travelling to and from your email service provider’s servers and your correspondent’s email servers
- If your correspondent’s connections to their servers are secure or not
- What happens to temporary storage of messages and backups of them on your provider’s or your correspondent’s provider’s servers?
There are many steps here that are not secured simply because you connected over SSL to your provider. When you send a message, even if you use SSL or TLS:
- That message could be transmitted insecurely (in plain text) to your recipient from your email provider.
- That message could be accessed insecurely by your recipient at their provider
- That message could be changed/modified at your provider, in transit, or at your recipient’s provider
Similarly, for inbound email, you have no control over the security or integrity of the message before it hits your email provider’s servers. Any of the same things could happen.
For an in depth discussion on how email flows and what the security implications are, see: The Case for Email Security.
How can you augment SSL and TLS to have any kind of real security?
What you are really looking for is “end to end email encryption“. This implies that email messages are encrypted all the way from the sending to the reading so that any possibility of eavesdropping or message modification is eliminated. You also need to trust the email service provider that is enabling this encryption service — a malicious or incompetent provider could leave security holes that allow them or attackers access to your data.
Levels of End-to-End Email Encryption
There are various ways to accomplish end-to-end encryption of email. They vary from more secure to less secure … with the security tradeoff coming in terms of usability as is typical. More security often means less convenience (at least in terms of initial setup).
Most Secure: PGP or S/MIME Encryption
The most secure method of end-to-end encryption involves packing up your email content in an encrypted block at the time of sending. In order to read your message, the recipient needs to decrypt that block. This keeps the content encrypted at all times and only the recipient with his special key and password can access it.
It is the hardest to use in general as the recipient and sender both need to be set up to use the same technology and need to trade security “keys” ahead of time. This is doable for like minded people that you frequently communicate with. This is a no-go for general communications with “just anyone”.
Escrow: Secure Message Pickup Services
The next level of security involves encrypting the message (e.g. with PGP or S/MIME or some other technology) and saving that in a secured system at your email service provider. Your recipient is then sent a regular insecure email with none of the sensitive content in it — it’s a note and link to come to a special secured web site to pick up the message.
This works well because it allows you to communicate with anyone who has an email address.
It is less secure because it involves your email service provider holding the “keys” to your message data and requires you to find ways to authenticate your recipients (so not just anyone can intercept these notices and get the secure email messages in their stead).
LuxSci’s Escrow system allows you to choose how you want your recipients to verify their identity — either by:
- Answering a security question that you provide, or
- Signing up for a free account that verifies their access to the email address in question
Option #2 is quick and easy, but not as secure as #1 … assuming that you choose good security questions for your recipients!
An Escrow-type service is also nice in that it uniquely provides:
- The ability to retract messages after being sent
- Auditing of the access of messages
The simplest, though least secure, method of end-to-end email encryption is the use of SMTP TLS. This mechanism extends the use of SSL to the sending of your message from your email provider’s servers to your recipient’s servers. It only works for some recipients — those whose email providers actually support SMTP TLS.
It is simple … because the message appears to the recipient like a regular email message — they can open and read it without any passwords or special steps. Also, the sending on your part does not require any special work. It does provide transport-level encryption from you to your recipient’s email service provider (the minimum level of security needed for HIPAA compliance). However, it is less secure than previous methods because:
- You cannot be sure that the message remains secure while on your recipient’s email server
- You cannot be certain that your recipient uses a secure connection to download the message
- The message is not encrypted when stored on disk at your email provider or your recipient’s email provider. It is thus also more susceptible to possible malicious modification.
There are other solutions available on the market as well, such as:
- Special plugins to your email client that assist in email encryption
- Systems that send executable files to recipients that they must open and enter a password to access
- Use of encrypted ZIP files
However, we find that most customers prefer it if they do not have to install anything and their recipients can access the messages without any special software or work. They also prefer it if they do not have to communicate with their recipients ahead of time, if possible, when sending secure messages. Of course, it all depends on the level of security you are trying to achieve and the goal of your communications. This is why there are so many options.
For more details, see LuxSci’s SecureLine end-to-end email encryption service, which supports PGP, S/MIME, Escrow and SMTP TLS, as well as a free secure portal for people to login and send secure messages to you, so that you can send and receive securely … in the most appropriate way for your business needs. | <urn:uuid:9215c305-1404-41b9-9e67-04470421fbae> | CC-MAIN-2017-04 | https://luxsci.com/blog/ssl-and-tls-are-not-enough-to-secure-your-email.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00313-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937315 | 1,538 | 2.5625 | 3 |
Table of Contents
The Snipping Tool is a program that is part of Windows Vista, Windows 7, and Window 8. Snipping Tool allows you to take selections of your windows or desktop and save them as snips, or screen shots, on your computer. In the past if you wanted a full featured screen shot program you needed to spend some money to purchase a commercial one. If you needed basic screen shot capability, past versions of Windows enabled you to take screen shots by pressing the the PrintScreen button to take a picture of your entire screen or Alt-Printscreen to take a screen shot of just the active window. This screen shot would be placed in your clipboard that you can then paste in another image program of your choice.
What makes the Snipping Tool so attractive is that:
This guide will walk you through the steps necessary to use the Snipping Tool to save screen shots of your running programs, portions of other pictures, and your desktop as images on your computer.
Please note, unless you have the Tablet PC Optional Components feature enabled in Windows Vista, the Snipping Tool will not be available on your computer. To enable this feature please follow the instructions in our Windows Vista Feature Guide. When you have enabled the feature, come back and follow the rest of these steps. The Snipping Tool is automatically installed in Windows 7 and Windows 8.
Before we go into more detail on how to use the tool and it's options, I want to explain how the tool works. The Snipping Tool allows you to capture portions of your screen using four methods and then save these snips as a JPG, GIF, PNG, or MHT file. The capture methods that can be used to take snips are free-form, rectangular, window, and full-screen. We will go into more information about these different methods later in the tutorial. What is important to know, though, is that when you start the Snipping Tool, it automatically goes into capture mode using the last selection type that was selected. What this means is that while Snipping Tool is capture mode, you will not be able to use Windows normally unless you either cancel the capture by pressing the Capture button or by Alt-Tabbing out of the tool. Now that we understand this, lets move on to finding and starting the Snipping Tool.
To start the Snipping Tool please follow these steps:
If you are running Windows 8, you can just search for Snipping Tool at the Windows 8 Start Screen.
The snipping tool should now be started and you will see a screen similar to the one below.
Snipping Tool Main Screen
As we are not going to take a snip right now in this portion of the tutorial, press the Cancel button to exit capture mode. Now, let's move on to learn about the different selection types available to us.
When you start the Snipping Tool you can click on the Options button to set the preferences on how you want the program to operate. Below we have a table that explains what each of these options do and how they affect the snips, or screen shots, that you create. The options are broken up into Application and Selections groups.
My suggestion is to enable all the application options other than Include URL below snips (HTML only) and Show screen overlay when Snipping Tool is active. For the selection options I would disable the Show selection ink after snips are captured option for better looking snips.
Now that we understand the options, lets learn about the different types of snips that can be taken.
There are four different selection types that you can use to take a snip using the Snipping Tool. In order to change the type of of selection the Snipping Tool will use to create a snip you would click on the small down arrow menu next to the New button. This is shown by the arrow in the image below.
A description of each selection type and an example snip is shown below.
Free-form Snip: This method allows you to draw a shape around your selection using a mouse or a stylus. Once the selection shape is drawn and you close the shape so there are no open sides, the snip will be created and shown to you. An example of a free-form snip is below. Notice how it is a circular snip because I drew a circular selection.
Free-Form Snip Example
Rectangular Snip: This method simply allows you to create a rectangular selection around a portion of your screen and anything in that rectangle will be used to create the snip. An example of a rectangular snip is below.
Rectangular Snip Example
Window Snip: When you use this method, the Snipping Tool will capture the contents of the entire window that you select. An example of a window snip is below.
Windows Snip Example
Full-screen Snip: This method will capture the entire screen on your computer. An example of this type of snip is below.
Full-Screen Snip Example
Now that we know all there is to know about the Snipping Tool, let's learn how to use it.
In this portion of the tutorial I will walk you through taking a rectangular snip. In my example, I will be using the picture of the babies that you we used previously, but any picture will work just as well. So pick a picture and let's get started!
The first step is to open the picture we want to snip, and then start the the Snipping Tool as explained previously.
Once the program is opened we want to select the Rectangular Snip type by clicking on the down arrow next to the New button and selecting Rectangular Snip. This is shown in the image below.
Rectangular Snip Selection
Once the Rectangular Snip option is selected, we click on the picture and drag a rectangular selection around the boy's faces by clicking somewhere on the picture, and while holding the left mouse button, dragging a rectangular box around the area we want to create. This selection is shown below.
Make a rectangular selection
Once the selection is made, we release the left mouse button and the rectangular region will now be sent to the snipping tool. When a snip is created, the Snipping Tool will show the snip in a small window where you can save it as an image, write some text on it using your mouse or stylus, highlight areas of the snip. The snip we just took is shown in the Snipping Tool below.
The newly created snip
Now that the snip is created, if you want to draw on the picture with your mouse or stylus you can click on the Tools menu and then select the Pen you would like to draw with. If you want to highlight certain parts of the picture you can click on the Tools menu and select Highlighter. Last, but not least, if you want to remove anything that you drew with the pen or highlighted, you can click on the Tools menu and select the Eraser to do so.
Finally, when you are happy with how the snip will appear you can:
You have now finished making your first snip. Now start sending your snips to your friends and family or embed them in web sites like this!
Now that you know how to create snips using the Windows Snipping Tool, there is nothing stopping you from making great looking screen shots of your pictures, your work, or even your desktop. As always, if you want to learn more about, or discuss with your peers, the various features available in Windows , then feel free to talk about it in our forums.
A basic, but important, concept to understand when using a computer is cut, copy and paste. These actions will allow you to easily copy or move data between one application and another or copy and move files and directories from one location to another. Though the procedures in this tutorial are considered to be basic concepts, you would be surprised as to how many people do not understand these ...
A Windows Vista feature is simply a set of programs or a particular capability of the operating system that can be enabled or disabled by an administrator. It is important to note that in Windows Vista, when you remove or disable a feature, you are not actually removing files from your hard drive, but rather just deactivating them. Therefore disabling a feature should not be used as a method of ...
Windows gives you the ability to take a snapshot of what is shown on your computer screen and save it as a file. You can then view this image at a later date to see what your screen looked like or share this image with other people to view. You may be asking why this is important and why you would want to share screen shots of your computer.
Have you ever had an experience where you are using a lot of programs in Windows, or a really memory intensive one, and notice that your hard drive activity light is going nuts, there is lots of noise from the hard drive, and your computer is crawling? This is called disk thrashing and it is when you have run out of physical RAM and instead Windows is using a file on your hard drive to act as a ...
When purchasing a new computer one of the most frustrating experiences is moving existing data to the new computer from the older one. In the past when you wanted to transfer data you had to copy the data via a network, store it onto a DVD/CD/Floppy and then copy it back onto the new PC, or physically take the hard drive out of the old machine and install it into the new machine. The main problem ... | <urn:uuid:d83a283d-0bf9-4bba-b92f-a46fb7ab2d64> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/how-to-use-the-windows-snipping-tool/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00313-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928635 | 1,971 | 2.734375 | 3 |
The world wide web has quietly passed its 15th anniversary.
The web began as a project dubbed ENQUIRE, started by Sir Tim Berners-Lee in 1989 at the CERN physics laboratory on the France Switzerland border.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
The project aimed to help researchers share information across computers, using the concept of hypertext links.
Links to the code behind the web were first posted to the alt.hypertext discussion board in August 1991. The first website went online later that year.
In 1993, CERN declared that the world wide web would be free for use by anyone. The html web page programming language was released the same year.
But despite the 15-year anniversary, Berners-Lee told this year’s World Wide Web Conference in Edinburgh that these are still early days. “We are at the embryonic stages of the web. The web is going to be more revolutionary,” he said.
He predicted “a huge amount of change to come” highlighting recent developments such as the Google search algorithm, the blogging online diary phenomenon and collaborative wikis.
Vote for your IT greats
Who have been the most influential people in IT in the past 40 years? The greatest organisations? The best hardware and software technologies? As part of Computer Weekly’s 40th anniversary celebrations, we are asking our readers who and what has really made a difference?
Vote now at: www.computerweekly.com/ITgreats | <urn:uuid:091172fe-a22e-4e46-bd7b-0f8b0aba68b2> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240078065/World-wide-web-is-15-years-old | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942088 | 324 | 2.78125 | 3 |
3D printing has grown in popularity throughout the manufacturing industry as it provides a way to create highly customizable products rapidly. One drawback to 3D printing, however, is finishing that products require after they are manufactured.
That’s where computer numerical controlled (CNC) machining comes in. Products are milled by machining in order to be ready for use.
“As these two technology bases have improved, so too have peripheral devices such as 3D scanners, robots, and metrology systems. For manufacturers this is all good, but more often than not the additive/subtractive techniques are viewed as mutually exclusive”, according to Joshua Johnson of 3D Printing Industry.
Hybrid manufacturing, though, is a process that combines the speed of 3D printing with the finishing of milling or subtractive manufacturing. 3D printers create products by laying down material layer by layer, while CNC technologies rapidly cut or mill metals or wood into smooth, finished products. Zach “Hoeken” Smith, co-founder of Makerbot Industries and sole proprietor of Hoektronics, explains the differences: “The key difference between the mechanical systems for additive manufacturing (3D printing) and subtractive manufacturing (CNC) is in the positioning system and the requirements each one has. For 3D printing, the requirement tends towards lower accuracy and faster speeds. For CNC, it tends to be on the opposite end of the spectrum: lower speeds, but higher accuracy.”
Companies have begun to recognize the advantages of combining these processes in one machine. Johnson writes that though the two processes are quite different, “one commonality that most open source 3D printers and almost all CNC devices share is their use of G-code and simple extrapolation from 2D vectors to physical shapes and motions. Because of this common language many controllers, as well as linear motion components, can be used for both processes in the same machine.”
Hybrid manufacturing appears to be gaining ground as evidenced by the positive reaction it’s received. As new developments in hybrid manufacturing continue, doubtless production techniques will improve and the technology will become more accessible. | <urn:uuid:84b3c1c1-2c35-4e05-9cbf-7238c72accba> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/hybrid-manufacturing-combining-two-processes-in-one-machine-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950534 | 445 | 3.34375 | 3 |
nesting wildlife and ecosystems that live and produce there. At times, it makes more sense to let nature break it down over time, or let the tides capture it where it can be cleaned from the water.
Either way, responders are left to make tough choices. At best, they need to be able to make informed and educated decisions.
Lehmann said in cases like those, and others where scientific data and prediction models are crucial to an effective response, the Coast Guard leans heavily on the SSC.
He adds that when a spill happens, the conch calls on nearly every branch of science to respond. "Big oil spills are the most scientifically interdisciplinary event you'll ever run across."
He describes a concert of scientists including marine biologists, ecologists, physicists, chemists, geographers and others.
Once all of the components - scientific, government, private, community - are in place, the challenge Lehmann sees time after time is getting consensus. "One man's ceiling is another man's floor," he said.
In some cases, there are environmentalists screaming on one side and the responsible party, or spiller, screaming on the other. Lehman said that the Coast Guard and the SSCs are usually wedged tightly in the middle.
"When you have two scientific or two political groups that are screaming really hard, there's probably a little truth in what each of them is saying, and it's our job to meter in between."
It then becomes a goal of achieving balance.
"If you look at an ecosystem, nobody set it up that way," said Lehmann. "It's that way because it's balanced. Sometimes it gets out of balance, but it usually gets itself back into balance. It may take some time, but it finds its own balance - it finds its equilibrium."
Command structures, he observes, are the same way. He said he frequently refers to the organizational structure of a command as being organic.
"You can impose a system on top of it. You can impose this Incident Command System - which is a very good system - but if you try to hold that system too tightly, it's going to finally break around the edges because it's not allowed to grow. If you give it some flexibility, which is how it's designed to be, then slowly and sometimes rapidly, that spill command structure morphs into what is needed."
He said it almost always happens.
"When it doesn't happen is when it's not allowed to do that - when there is some human force preventing that from happening," he said.
That force, he said, is rigidity.
And sometimes the response is not successful. As he illustrates, there are also ecosystem that don't make it at first, but they evolve until they're productive.
The same is true of an oil spill response using the ISC.
"A spill that is going to go on for a while - it evolves into its own management structure," he said. "It finds an equilibrium with the spill."
Is it any wonder, that our man-made systems and our structures and our models and methods mirror those of ecosystems, and the natural order of the planet in general?
It shouldn't be. Given the chance to step back and observe from a clearer viewpoint, one can easily see that we're all part of one big picture.
And, as Briggs noted, it's a pretty picture and a precious one. "It's amazingly productive, as long as we don't abuse it." | <urn:uuid:d15920b7-09ac-4010-816d-0eeb87462c45> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/A-Science-in-the-System.html?page=3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972176 | 722 | 3.109375 | 3 |
The Defense Advanced Research Projects Agency showed off prototypes Monday, Sept. 10, of new robots that climb rough terrain by mimicking the gait of pack mules. Testing on the prototypes, called the Legged Squad Support System (LS3) began earlier this year, and the robots could eventually carry gear for soldiers and other personnel.
“We’ve refined the LS3 platform and have begun field testing against requirements of the Marine Corps,” said Army Lt. Col. Joe Hitt, DARPA program manager. “The vision for LS3 is to combine the capabilities of a pack mule with the intelligence of a trained animal.”
The agency showed the prototypes jogging and running.“The LS3 has demonstrated it is very stable on its legs, but if it should tip over for some reason, it can automatically right itself, stand up and carry on," Hitt said. LS3 also has the ability to follow a human leader and track members of a squad in forested terrain and high brush.”
The pack mule robot isn't the only DARPA creation breaking new ground.
A robot developed by the DARPA broke two records the agency announced last week. The Cheetah robot, which is the fastest "legged" robot in history, broke its own record of 18 mph when it reached 28.3 mph running on a treadmill. The robot also broke the human speed record, beating sprinter Usain Bolt's peak speed record of 27.44 mph, which he set at the 2009 Berlin World Championships while setting the current 100 meter world record of 9.58 seconds.
Legged robots are used for rough terrain, where wheels or tracks are more likely to get stuck. The agency intends to test the Cheetah robot, which is being developed by Boston Dynamics, as part of DARPA's Maximum Mobility and Manipulation, to be contested next year on natural terrain. The speed increase over last year's results are the result of improved control algorithms and a more powerful pump, the agency reported.
“Modeling the robot after a cheetah is evocative and inspiring, but our goal is not to copy nature. What DARPA is doing with its robotics programs is attempting to understand and engineer into robots certain core capabilities that living organisms have refined over millennia of evolution: efficient locomotion, manipulation of objects and adaptability to environments,” DARPA Program Manager Gill Pratt said. “Cheetahs happen to be beautiful examples of how natural engineering has created speed and agility across rough terrain. Our Cheetah bot borrows ideas from nature’s design to inform stride patterns, flexing and unflexing of parts like the back, placement of limbs and stability. What we gain through Cheetah and related research efforts are technological building blocks that create possibilities for a whole range of robots suited to future Department of Defense missions.” | <urn:uuid:9aa48b37-302e-4c7f-99f8-1206a6a5e4f3> | CC-MAIN-2017-04 | http://www.govtech.com/technology/DARPA-Shows-Four-Legged-Pack-Mule-Robots.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00093-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942671 | 592 | 2.703125 | 3 |
The US Navy and Army this week talked up new laboratories they are opening that promise to develop all manner of cutting edge technologies for the future.
In the Navy's case, the service today cut the ribbon on its Laboratory for Autonomous Systems Research (LASR) in Washington, DC, that it says will "support cutting-edge research in robotics and autonomous systems such as unmanned underwater vehicles, autonomous firefighting robots, and sensor networks."
"The LASR will also advance the goals of the President's National Robotics Initiative, a multi-agency effort to strengthen U.S. leadership in robotics and to enable human-robot teams to solve important challenges in defense, space, health, and manufacturing," the Navy stated.
The LASR also includes: A facility that contain the world's largest space for real-time motion capture, improving our ability to measure and control the motion of autonomous air and ground vehicles and monitor the movements of humans interacting with them; and Electrical and machine shops that will allow researchers to "print" parts directly from electronic drawings.
Meanwhile, the Army's newest lab will open in April at its Detroit Arsenal. The eight-in-one lab known as the Ground Systems Power and Energy Laboratory (GSPEL) offers numerous testing capabilities and an unmatched combination of resources in a single lab will offer what the Army says will be "shared access to industry and academia to facilitate the exchange of information and ideas to develop emerging energy technologies and validate ground vehicle systems - research that could also help the Nation achieve energy security goals."
The eight individual labs are:
- Power and Energy Vehicle Environmental Lab is the centerpiece lab and features one of the world's largest environmental chambers enabling testing at temperatures from minus 60°F to 160°F, relative humidity levels from 0 to 95 % and winds up to 60 mph. The lab's dynamometer and environmental chamber combination allows for full mission profile testing of every ground vehicle platform in the military inventory in any environmental condition.
- Air Filtration Lab is capable of testing the air flow characteristics of various-sized media at four different flow benches using varying flows up to 12,000 standard cubic feet per minute. Each flow stream is equipped with an automated dust feeder enabling simulations from zero visibility to four times zero visibility for evaluation of air filters, cleaners and other components.
- Calorimeter Lab is the world's largest and is capable of testing radiators, charge air coolers, oil coolers individually or all three simultaneously.
- Thermal Management Lab handles work testing thermally-managed mechanical and electrical components in varying environments. It is comprised of a wide variety of chiller and heat systems for use with test bench heat exchanges for evaluating components and systems.
- The Power Lab will evaluate major vehicle electrical systems including charging systems, air conditioning systems, hydraulic systems and associated components. The lab's two explosion-proof environmental chambers allow for expanded technical research.
- The Fuel Cell lab will test future fuel cell capabilities for tactical vehicles. The lab enables the development and evaluation of fuel cell components and systems, including systems to reform JP-8 fuel, various fuel cell media and power conditioning. This work will help vehicles become quieter and more efficient.
- The Hybrid Electric Components lab will look at electric powertrains with the principal emphasis on developing hybrid motor technology and contributing to the increased electrification of vehicles. Equipment used in this lab will potentially regenerate 80 percent power back into the building, making it possible to re-use the electricity.
- The Energy Storage Lab will test and evaluate advanced chemistry battery vehicle modules. Explosion-proof battery test chambers enable safe testing of 10 - 60 kW advanced chemistry battery packs.
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:4d295308-5699-4e69-ab2e-24aa1ffbfc80> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2221930/security/navy--army-open-labs-looking-for-robot--energy--fuel--mechanical-inventions.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904539 | 754 | 2.625 | 3 |
Cornice doesn't call it a hard drive, but their tiny "storage element" refines spinning platter technology to the nth degree. Extreme Tech reviewers take a look at the technology and what it means for consumers.
Imagine a spinning glass platter overlaid with an ultra-thin magnetic film. Then imagine a magnetic head flying above the platter, based on GMR (giant magnetoresistive) head technology originally developed for desktop and server hard drives. As that little mental picture develops, it sounds suspiciously like were talking about a hard drive.
In reality, Cornice -- a startup formed by former Maxtor engineering VP Kevin Magenis -- doesnt call their tiny storage device a "hard drive." Theyve dubbed their new product a "storage element." Its likely the company believes that "hard drive" is often associated with reliability issues. The Cornice device is specifically designed as a compact storage subsystem for portable consumer electronics hardware.
The first products on the market supporting the Cornice technology are digital music players, including the Rio Nitrus
, Creative Labs MuVo2
, and the iRiver IGP-100
(one of the few players to natively support the open-source Ogg Vorbis audio compression standard).
Other products built around the Cornice drive include the Samsung ITCAM-9 (an MPEG-4-based miniature camcorder) and the MPIO HS100 1.5GB USB portable drive from Korean manufacturer Digitalway.
As we hinted at in our introduction, Cornices storage element is fundamentally based on hard drive technology. The device utilizes longitudinal magnetic recording using GMR heads, thin film, glass discs, and PRML read channels, much like current generation desktop hard drives.
Cornice specs the minimum read/write transfer rate at 4MB/sec, which is good enough for audio and even compressed video use. The tiny drive has two key attributes that make it useful to its target market: low power usage and ruggedness. Spin-up, for example, takes 207ma, roughly 1/10th that of a typical 2.5" notebook hard drive. Although idle current is a scant 30ma, the drive simply shuts down most of the time, spinning up only when the application demands it. Additionally, the storage element has no buffer, unlike traditional hard drives. This means that hardware that uses a Cornice storage element must implement its own buffering if its required.
The drive connects directly to the host via a proprietary, 20-pin parallel bus. Cornice is reluctant to release the areal density or spin rate of the drive, but 1.5GB in a 1-inch diameter platter is fairly high density, if not on the bleeding edge. Since ruggedness is another key parameter, the design shouldnt be overly aggressive in its areal density spec. Given that competing 2GB-and-up devices spec 30 gigabits per square inch, the Cornice drive probably comes in at a bit less than that.
Other companies have tried to ship one-inch hard drives, most notably MarQlin Corp and the GS Magicstor. MarQlin has yet to actually produce anything yet but Magicstor has been shipping 2.2GB and 2.4GB compact flash type II cards. Another similar product, Hitachis (formerly IBM Storage) Microdrive, is also available in a compact flash format.
To read the full story, click here. | <urn:uuid:78c33c9e-a2b4-4553-8245-197eb70b46f7> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Data-Storage/Cornices-Tiny-Hard-Drive | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93471 | 707 | 2.5625 | 3 |
Known Errors – repetitio est mater studiorum? Not in this case.
“Reinventing the wheel.” – We often use this saying in everyday life, but did you know that it’s valid in IT Service Management, i.e. ITIL, as well? Let me recall a common situation: your users open an incident; you find a workaround or solution to the incident and resolve it. A few days (or months) later, the same situation happens again. And you (or some other technician) work hard to rediscover that (same) workaround again. Sometime later – it’s the same thing all over again. Waste of time, isn’t it? That’s why ITIL installed Known Error (KE).
Known Error – a definition
According to ITIL (Service Operation), a Known Error is “a problem that has a documented root cause and a workaround.” Documented means recorded. Records are common in ITIL. Just like, e.g., incident records, a Known Error exists in the form of a record and it is stored in the Known Error Database (KEDB). Such record exists throughout the lifecycle of a Known Error, which means that the Known Error is recorded from its creation until “retirement” (if the KE record will be ever deleted at all).
A Known Error record contains (these are general parameters common to all tools. I saw different tools with various additional content):
- status (e.g. “Archived” or “Recorded Problem” when Known Error is created, but root cause and workaround are not known yet)
- error description – content of this field is used for searching through Known Errors (e.g. by Service Desk staff or users) while searching for incident/problem resolution (e.g. “Printer does not print after sending a document to the printer. However, when printing a status page locally on the printer, everything works fine.”)
- root cause – entered by Incident/Problem Management staff (e.g. “Since printer does not accept documents to be printed from user computers, but prints out status report – a faulty network card is the cause of the problem.”)
- workaround – e.g. “Closest printer to the user should be set as default printer or user should be instructed which device to use until new printer is provided.”
Figure: “Mathematical” definition of Known Error
Where does it belong?
Officially, Known Errors belong to Problem Management, but it’s not unusual for Service Desk to resolve an incident with a permanent solution, or find a workaround and create a Known Error record. The aim of the Problem Management is to find a root cause of one or more incidents. Problems are created because the root cause (the real cause of the incident) and its resolution need to be identified. The result of the problem investigation and diagnosis is identification of the root cause of the problem, and a workaround (temporary fix) or (final) resolution. These are valuable pieces of information and need to be recorded – so, a Known Error is created.
Timing – when to raise a Known Error record?
Certainly, this should be done when you identify the root cause and workaround. But, it can be recorded earlier, e.g. when the problem is recorded. This is done for informational purposes or to record every step of workaround creation. Also, if a Known Error is recorded and it takes a long time to find a workaround or resolve the problem, someone who faces the same problem has the information that someone is working on the problem resolution and which temporary workarounds are available. I saw some situations when an IT organization used the KEDB to provide users with a self-help tool (i.e. as a Knowledge Base) while creating a Known Error record tool, which Service Desk used to allow them to choose whether the Known Error record would be published publicly (honestly, not all information is useful to everyone, so sometimes some workarounds are not applicable for users). In such a way, IT provided users with knowledge where they could search for a solution before opening an incident (or, since the tool supported such functionality, to search through the KEDB while typing incidents’ subject or description).
ISO 20000 view
The Problem Management process is one of the processes required by ISO 20000 (remember, everything written down in ISO 20000-1 must be implemented). ISO 20000 requires that Known Errors shall be recorded and that up-to-date information on Known Errors (and problem resolution) is provided to the Incident and Service Request Management process (as opposed to ITIL, this is a single process in ISO 20000). So, if you are thinking about ISO 20000 implementation, it’s better to seriously considering building a KEDB.
Let’s make life easier.
It’s not necessary to have mighty tools for IT Service Management to provide KEDB functionality and gain the advantage of Known Errors. For some organizations (I noticed that some small organizations are doing it this way), a spreadsheet will be enough. It may not be a perfect solution, but it will do.
The bottom line is that everyone gains the advantage of the KEDB:
- Users – they have a tool to help themselves. Or, they can speed up incident resolution, e.g. they don’t have to wait until Service Desk staff resolves the incident, because Service Desk will most probably use the same database, i.e. KEDB.
- Service Desk, people involved in Incident Management or Problem Management – they have a body of knowledge, which saves a complete history of their work. They can do reporting, incident and problem resolution is much faster (no re-work and no unnecessary transfer of incidents to problem management)…
It’s a fact that Known Errors and the KEDB are usually forgotten. IT has a IT Service Management tool and uses recorded incidents and problems to look for workarounds or solutions. This is the hard way. The pace at which everything moves is looking for a simple, yet powerful solution. Known Errors and the KEDB are exactly that.
You can download this Known Error template to see an example of an Known Error record. | <urn:uuid:e0cae26c-7c24-4bda-9cad-5c2b78bcc632> | CC-MAIN-2017-04 | https://advisera.com/20000academy/blog/2014/02/04/known-errors-repetitio-est-mater-studiorum-case/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936096 | 1,323 | 2.734375 | 3 |
Yesterday, the Federal Trade Commission announced that it would hold a public workshop on November 21, 2013 on “the growing connectivity of consumer devices, such as cars, appliances, and medical devices”―also known as, “the Internet of Things.” The FTC will accept public comments (due June 1, 2013) in advance of the workshop.
In describing the Internet of Things, the FTC noted that consumers can already use mobile phones to adjust thermostats and open car doors and that these types of services and technologies are rapidly developing. While the FTC recognized that these functionalities may have benefits for consumers, the FTC is seeking input on the “unique privacy and security concerns associated with smart technology and its data.” For example, in a blog entry on the workshop, the FTC’s Business Center Blog asks, “What if when we drive near a grocery store, our refrigerator lets us know we’re low on milk? Would that be convenient? Disconcerting? Or maybe a little bit of both?”
Among the questions that the FTC is seeking specific input are the following:
- What are the significant developments in services and products that make use of this connectivity (including prevalence and predictions)?
- What are the various technologies that enable this connectivity (e.g., RFID, barcodes, wired and wireless connections)?
- What types of companies make up the smart ecosystem?
- What are the current and future uses of smart technology?
- How can consumers benefit from the technology?
- What are the unique privacy and security concerns associated with smart technology and its data? For example, how can companies implement security patching for smart devices? What steps can be taken to prevent smart devices from becoming targets of or vectors for malware or adware?
- How should privacy risks be weighed against potential societal benefits, such as the ability to generate better data to improve health-care decisionmaking or to promote energy efficiency? Can and should de-identified data from smart devices be used for these purposes, and if so, under what circumstances?
The Internet of Things is not just a trending topic in the United States. The European Commission recently published the results of a public consultation it conducted last year on the Internet of Things (which it short-hands to “IoT”). Last year’s consultation “sought views on an a policy approach to foster a dynamic development of Internet of Things in the digital single market while ensuring appropriate protection and trust of EU citizens.” However, the Commission’s report concludes that, at least at this time, there is no consensus on the need for public intervention in the area, especially in light of the existing legal framework of data protection and competition rules and safety and environmental legislation. The Commission has published a series of fact sheets, including one on privacy and security issues. | <urn:uuid:05b40897-2cab-4fa1-9063-abd00f4d1ca4> | CC-MAIN-2017-04 | https://www.insideprivacy.com/united-states/ftc-announces-workshop-on-the-internet-of-things-european-commission-publishes-report/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939512 | 588 | 2.765625 | 3 |
Like most humans, our mobile gadgets really don't appreciate sub-zero weather. Try to use your cell phone in the freezing cold and it's likely to die or give you errors. How cold is too cold? Let's take a look.
PCWorld Finland did a study in 2012 comparing smartphones in 32 degree weather and lower. They found most of the 15 phones died when the temperatures were 5 to -4 degrees, whereas feature phones (you know, "dumb phones") actually worked down to -13 degrees.
The Apple iPhone 4s and Nokia N9 were the first to fail starting at 23 degrees, while the Samsung Galaxy S2 held out to an impressive -22 degrees. (The phones aren't the latest models, but you can still get an idea of temperature limits.)
The takeaway may be that if you need a phone that can stand the cold when you do, it might make sense to have a dumb phone around.
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:6d2c7a6b-561e-4d5f-9f51-273cfa8b7a8e> | CC-MAIN-2017-04 | http://www.itworld.com/article/2701683/consumerization/this-is-how-low-the-temperature-can-get-before-your-phone-starts-acting-up.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00470-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92437 | 249 | 2.53125 | 3 |
Phishing is a common Internet scam that uses official-looking email messages and websites to get you to share personal information. The people behind these kinds of activities want you to fall for their scams so they can steal your identity and commit fraud.
Be cautious of any email or website that asks for sensitive information and watch for these red flags before sharing information electronically.
Common red flags:
- Mistakes in grammar or spelling. Real organizations do mess up once in a while, but if the message is so full of errors your elementary school teacher wouldn't accept it, it's likely a scam.
- A TO/FROM address that seems fishy (so to speak). FROM addresses can be easily forged, so pay attention to the TO field. Is your email address listed? If not, the message is likely a phishing attempt.
- No personal information in the email. Most legitimate institutions have your information on file and will address you by name. A "Dear Valued Customer" salutation is suspect. However, phishers can mine public records and social networking sites for your personal details, so don't assume a message is safe just because it contains your name or other trivia.
- Requests for personal information. Sensitive information such as passwords, bank account numbers and social security numbers should never be sent via email. CenturyLink, PayPal and your bank are examples of companies that would never ask for personal information in an email.
If you receive a phishing email in your CenturyLink email inbox, forward it to firstname.lastname@example.org. Or, you can report it as spam, by simply clicking the Spam button that is located on your CenturyLink webmail toolbar. | <urn:uuid:2e927b63-ecbb-48be-af7d-95631eac3883> | CC-MAIN-2017-04 | http://www.centurylink.com/home/help/safety/whats-phishing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931355 | 345 | 3.1875 | 3 |
They’re the workhorses of computing, including in the data center: processors. This term covers a wide variety of silicon chips designed for a variety of purposes, but broadly, they are collections of transistors (and a few other components) that perform operations on electronic data. Since the advent of the first integrated circuits in the late 1950s, processors have increased in complexity and ubiquity to the point that they can be found in a variety of machines and gadgets beyond just what we might consider traditional computers (PCs, laptops, and servers, for instance). Today, these devices have achieved remarkable capabilities in tiny packages, but can the steady innovation of the past half century be maintained, and what will the future look like?
A virtual slogan of the semiconductor industry, Moore’s Law (a 1965 prediction by Intel cofounder Gordon Moore) states that the number of transistors in a microchip will double every two years. (The precise details and best way to phrase Moore’s Law may be up for debate, but this simple statement captures the general spirit of it.) The semiconductor manufacturing industry has largely kept pace with or exceeded Moore’s Law, taking technology from simple integrated circuits with a few transistors to complex processors with billions of transistors in just a few square millimeters.
Predictions of the upcoming failure of Moore’s Law are perennial. And although these prognostications have all failed thus far, one day they will be correct—particularly for silicon processor technology. At some point, the size of transistors will reach a minimum (on the order of the size of one or several atoms), and the only option will be to scale outward. But even this approach runs into a fundamental problem: the speed of light.
Since electronic signals cannot (at least if you believe Einstein) exceed the speed of light in a vacuum—about 300,000,000 meters per second, or about 186,700 miles per second—processor sprawl limits the device’s capabilities. Hence, the need to pack more components into smaller areas is a primary consideration in designing faster processors.
Regardless of the technology, whether traditional silicon or something else (like quantum computing), computing in general may have a fundamental speed limit of its own. According to Popular Science (“Scientists Find Fundamental Maximum Limit for Processor Speeds”), “scientists say that processor speeds will absolutely max out at a certain point, regardless of how hardware or software are implemented.” Citing research by Boston University researchers, the article suggests that computer speeds will reach their maximum in about 75 years.
So, will processor research reach a dead end at that time? Who knows. The idea of an infinitely fast processor, however—even if it takes a very long time to achieve—has some disturbing philosophical implications. But, setting aside potential revolutionary technologies like quantum computing and photonic computing, what about the near future of semiconductor processors?
Semiconductor Process Technology
As mentioned above, a necessary part of making processors faster is packing more transistors into a smaller area, and this in turn means making transistors smaller. Current semiconductor process technologies have feature widths in the 20nm range—that’s 0.02 microns, or 0.00002 millimeters. The leader in this arena, Intel, has already introduced 22nm processors (“Ivy Bridge”) and is building a manufacturing plant for 14nm chips in Chandler, Arizona, according to EE Times (“Update: Intel to build fab for 14-nm chips”).
But don’t expect process technologies to continue scaling down at this rate forever. A single hydrogen atom—the smallest of the elements—is only about a tenth of a nanometer in diameter. Larger atoms (such as silicon) are, well, larger, so building transistors at the atomic level already has a fundamental limit. Furthermore, as transistors are built using thinner layers (approaching a single atom), the consequences of defects become more substantial. Thus, the process of fabricating chips becomes more difficult and, hence, more expensive.
In another innovative step, however, Intel also introduced its “FinFET” transistors in the 22nm generation. Instead of involving only planar structures, FinFETs build upward, making the transistors three-dimensional structures (part of the transistor structure has the appearance of a fin, hence the name). This new technology allows the company to increase the speed of its transistors and to pack more of them into a smaller space and into a lower power budget.
But are FinFETs and similar three-dimensional approaches to semiconductor manufacturing in line with the spirit of Moore’s Law, or are they a cheat that artificially keeps Moore’s Law alive? Since Moore’s Law is not a strictly scientific law (from a physics standpoint), this distinction may not be worth quibbling over. Clearly, however, FinFETs are a technology aimed at maintaining technological momentum in the face of an approaching barrier to further innovation.
Energy Efficiency and Future Technologies
Power consumption is a growing concern for data centers in particular but also for other industries, and for consumers. Processor technology is making strides toward greater performance per watt; for instance, smaller process technology generations generally provide more processing performance for less power than previous generations, so some increase in efficiency follows with more densely packed integrated circuits. With process technologies approaching their minimum size limit, however, this avenue for greater efficiency is also reaching its end.
The question is whether a new technology will come along to create more room for innovation—both in performance and in efficiency. One such possibility under intense study is quantum computing. Whether a quantum computer can be realized in a form that can be practically deployed in consumer and business products remains to be seen, but many scientists and engineers have high hopes for this avenue of research.
Regardless of the processing technology, however, the finite speed of light still creates problems for computer implementations. Imagine, for example, a processor with no peripherals (like a data storage device, input and output interfaces, and so on): what good is it? Processors must be supplied with data, and the output data must be stored or transmitted to some other device. This transfer of data is limited to the speed of light. Thus, for instance, if the processor must access a storage device like a hard drive, it must wait for the data to propagate over the connection. Even an infinitely fast processor would be hobbled by this limitation. And again, only so much equipment can be crammed into a given volume, meaning that the speed of an actual computer system—not just a processor—is limited. Unless, that is, you can find a way to transmit data faster than the speed of light!
The physics of materials and light is not the only limit on processor innovation. Developing a new process technology requires a large amount of capital, and building the facilities to manufacture actual chips is expensive as well. (Intel plans to invest $5 billion in its new 14nm manufacturing plant.) Economic factors may therefore place a limit on innovation, even if physical limits haven’t been reached.
Whatever the limits, the next 5 to 10 years are may see the end of Moore’s Law unless some new technology beyond standard planar (or even the new three-dimensional) semiconductor manufacturing is realized. At least that’s what the approaching fundamental limits (such as the size of atoms) would indicate.
For data centers, new process technologies mean more processing power in smaller and more power-efficient devices. But the ever-growing demand for IT resources means this pace of innovation may not be enough. Hence, companies are looking to lower-performance processors (for instance) at the heart of their servers in an attempt to increase efficiency.
The past half-century or so (or century, if you want to include the theoretical foundations) has seen unbelievable progress in computer technology. From wimpy computers that filled entire rooms to tiny handheld devices that put to shame desktop PCs of just a few years ago, innovation has been a train with remarkable momentum. But will that train begin to slow as the limits of traditional semiconductor technology are reached, or will it simply hop the tracks to another technology and continue moving forward? Only time will tell. | <urn:uuid:b26b9a94-394b-4d40-870f-c198413675f8> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/the-now-and-later-of-processors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00462-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922287 | 1,706 | 3.765625 | 4 |
Data.gov, the federal government's clearinghouse of downloadable information, plans to release new gadgets that will enable the public to easily create mashups of maps and statistics, according to officials working on the enhancements.
Mashups are a fusion of information and images that can illustrate relationships or patterns and, in this case, provide transparency into the business of Washington. Data.gov is the brainchild of federal Chief Information Officer Vivek Kundra, who has said he envisions the website becoming an online marketplace where people worldwide can exchange entire databases and reuse content in ways the federal government could never imagine.
Within the next month, the site will offer the public a chance to preview a so-called viewer that will let them combine many of the 270,000 data sets posted on Data.gov with maps, said Jerry Johnston, geospatial information officer at the Environmental Protection Agency. For the past couple of months, representatives from various agencies, including EPA, the General Services Administration, U.S. Geological Survey, Health and Human Services Department, and NASA, have assisted in the effort to add more interactive features to the site.
"Vivek Kundra wanted to make sure there was agency involvement in the project," Johnston said. "When we first stood up [Data.gov], he said what was in my mind at the time: 'It's great that you have geospatial data in the catalog, but it doesn't mean anything to me if I can't see it.' "
With the new tools, anyone will be able to diagram in one place official statistics from across the federal government -- on everything from mortality rates to houses with substandard plumbing. Individuals won't need special technical skills to create the mashups.
The feature is made possible by Geodata.gov, a separate catalog of geographic data that USGS operates. The website will power part of Data.gov through a connection that is invisible to the user, Johnston explained. Internet users can permanently download federal maps to their own computers through Data.gov, or view them with the new mashup tool for as long as they are on the site.
He said the next goal is to make the maps available as services, which are Web applications users access through the network of the agency that provides the map.
At present, "Data.gov focuses on storing data for downloads in files," and federal officials "want to move to the next step of visualizing data," said Jack Dangermond, president of ESRI, which supplies nearly every federal agency with geographic information system software. The company is providing the viewer and linking Data.gov to the maps through Geodata.gov. The work is part of a competitive contract to build Geodata.gov, which USGS awarded to ESRI in 2004.
The new mapping capabilities will allow third parties, including nonprofit government watchdogs, the press, private software providers and citizens to discover interesting or suspicious trends and correlations such as, perhaps, a high death rate in a region where a large proportion of the population is employed by mining companies.
GIS companies, including ESRI and its competitor FortiusOne, will be able to combine the maps with their products to create custom applications that they can sell to clients. In addition, open government organizations such as the Sunlight Foundation in Washington will be able to use the services to distribute free apps. School children also will be able to create and print maps for class projects.
With its viewer, Data.gov has the potential to fulfill the three objectives of President Obama's open government initiative, said Andrew Turner, chief technology officer of FortiusOne, a mapping firm that helps federal agencies and companies visualize their business data to aid in decision-making. Obama has committed his administration to achieving greater transparency, more citizen participation in government, and increased collaboration between the public and private sectors.
"Transparency is about opening the data, and Data.gov did a really good job of that at first," when the site launched in May 2009, Turner said. "Participation -- that's pulling things off for my social networking group. Collaboration -- how do I feed this back to the government? The success from the data to the tools, along the entire way, will be dependent upon making sure that entire chain stays open."
FortiusOne and ESRI offer free consumer sites, respectively called GeoCommons and ArcGIS.com, that let users create mashups with publicly available geographic data. They work similarly to the way Data.gov will function with the viewer. Johnston said he encourages this kind of repurposing of government information, but also noted that, unlike commercial or nonprofit sites, Data.gov "gets the authoritative stamp of being a .gov site and a high-profile site."
Turner said the Data.gov tool sounds like it could turn maps into social objects -- items that instigate conversation -- in the same way the photo-sharing site Flickr has turned photos into social objects.
The initiative also could become a performance management tool by enabling agencies to push out studies and asking the public to review them on easy-to-read maps, said T. Jeff Vining, a research vice president for Gartner Research. The challenge will be ensuring agencies are reporting accurate and timely data, he said.
"I think it's knowledge management concepts meeting geospatial concepts," Vining added. In the future, he expects people will be able to download the mashup maps to smart phones and broadcast their creations using the mass text-messaging service Twitter.
"Looking at how we can add mobile applications to the mix is certainly a logical next step for the team," Johnston said. | <urn:uuid:d59e494d-a7db-40eb-85cc-2818cb3e4733> | CC-MAIN-2017-04 | http://www.nextgov.com/mobile/2010/06/datagovs-next-big-thing-mashing-up-federal-stats-with-maps/46973/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947418 | 1,153 | 2.609375 | 3 |
I have some predictions as to what will happen with the current Ebola epidemic over its lifecycle. They are not pleasant.
- The virus will accelerate its advance throughout Africa for the next few months
- The virus will spread globally, but will only get true penetration into the geographies/continents/countries that are most vulnerable
- Africa, India, and parts of South America and Southeast Asia will be devastated by it
- In the more prepared countries through the end of 2014 and early 2015 there will be a number of small outbreaks (tens or dozens of people) which will be contained within days of their occurrence
- A viable vaccine (for countries with money and access) will surface around mid 2015
- By the end of 2015 we will have lost around 10 million people from around the world, mostly in Africa and India
- By the end of 2016 most of the modern world will be vaccinated, but we will have lost 20-40 million people worldwide—nearly all of which will be from large, urban, poor areas
Two years from now Ebola will be a bad memory for first-world countries, but the poor and suffering will have been absolutely slaughtered. Even worse, the isolation the poor already experiences will increase dramatically because they’ll be associated with sickness and disease.
- Use caution when taking epidemiology predictions from someone who breaks into computers for a living.
- This is not taking into account the possibility of Ebola mutating to become airborne, for example. That would change everything. | <urn:uuid:8a24a2ae-9a54-43da-a4a8-3cb054b4964b> | CC-MAIN-2017-04 | https://danielmiessler.com/blog/predictions-ebolas-spread-2016/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00122-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956686 | 308 | 2.890625 | 3 |
Back in 2005 Sun Microsystems announced the company planned to build the world's first online compute exchange. More then three years later there has been no other mention of this supposed compute exchange. In the original press released they described the offering as a plan to introduce a new electronic trading environment that will allow customers to bid on CPU usage cycles. They went on to say that being able to dynamically bid for open compute cycles will provide companies across the globe with unprecedented flexibility in planning for the purchase and use of compute power. This is a new paradigm in computing where companies can access an unlimited number of CPUs as they need them. Today as cloud computing begins to take off and regional cloud utilities start to come online the idea of a cloud exchange is again beginning to be discussed.
Back in April at the Interop conference several attendees mentioned they wanted to see the creation of such an exchange platform. The reasoning was that as new regional clouds come online having a uniform point of entry to a world wide cloud ecosystem will make this type of transition more efficient.
Right now most clouds have there own set of APIs, interfaces and unique technologies. An open compute exchange may provide a centralized point where cloud consumers and providers would be able to make decisions based upon which cloud resources they may want to utilize as well as a clearing house for providers with excess capacity. Variables may include metering based on actual use of the resources in CPU hours, gigabits (Gbs) consumed, load, network I/o, peak vs off peak time frames, geographical location, SLA's, and quality of service rules could be just some of the metrics that determine the price of a cloud providers resources.
One usage example might be in terms of a green or eco-centric point of view. Let's say Cloud A uses cheaper coal based energy source and Cloud B uses a more expensive Hydro source. Although more expensive, choosing Cloud B may help offset an enterprises carbon credits and therefor actually be a bit cheaper from a carbon point of view.
Another example may be based on geographical cloud computing. Let's say a UK cloud and a North American cloud. Rather scaling based on system load, a cloud user may want to monitor application response times based on geographical location and scale according to an end users experience. By have the option to access compute capacity through an exchange, cloud consumers who are running global network services would no longer have to signup for several cloud services. This would also effectively render edge based CDN services like Akamai irrelevant.
What are your thoughts? | <urn:uuid:8081da7e-4d73-4a4b-8504-a03abceb7a25> | CC-MAIN-2017-04 | http://www.elasticvapor.com/2008/07/global-cloud-exchange.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00424-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956125 | 512 | 2.65625 | 3 |
Teaching should be about transferring passion and enthusiasm for a subject, not just knowledge. Imagine excellent teaching as resonance in Physics — where passion for the subject material is manifested as powerful sound waves emanating from the teacher. Each student is tuned to vibrate most intensely at a certain frequency, and each subject in school (biology, mathematics, art, music) has a wavelength associated with it.
The goal of teachers should be to belt out their particular frequency and watch to see who starts to vibrate.
The goal should not be to coldly convey enough information to prepare for an exam. The world is full of A-students with no interest in the subjects they supposedly “excelled” in while in school. Human advancement is not forged by these people. True breakthroughs come from those who love what they do, and that love is what teachers are there to ignite and nurture.
Students of great teachers become infected by their subject, like a magical spell that lasts a lifetime. They can’t turn away from television programs related to the field and they can’t read enough books about it. This is exactly how it should be. This is the standard that teachers should strive for.: | <urn:uuid:e2be9ac7-f49c-44fa-a3d2-622258102f5e> | CC-MAIN-2017-04 | https://danielmiessler.com/blog/redefining-the-goal-of-teaching/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00424-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972837 | 245 | 2.609375 | 3 |
WAN – The phrase “wide area network” in a word is known as WAN. That wide ranging network works collectively with the help of hardware such as computers and related resources. Cisco has introduced a lot of devices such as modem and routers for WAN, protocols and technologies such as ATM, Cisco frame relay etc so to provide a better wide area network (WAN) environment.
What is network? A computer network in simple form can be consisted on just two computers that are allied to share available resources as variety of hardware, files and to correspond with each other. But in broad sense, this network can enclose thousands of computers in it because running a big business in traditional way can be a difficult task without a network and that provides an easy way to be in touch and cooperate with employees too. That means computer’s networking is referred to the assemblage of various kinds of hardware components plus computer’s interconnection with the help of communications mediums. The key purpose of networking is to allow the sharing of an organization’s resources as well as information. | <urn:uuid:1f9393c4-0bf9-4c33-aef1-1a0991611626> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/wan | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00150-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954399 | 222 | 3.65625 | 4 |
You may recall that in 2008, rather than risk the chance of a large piece of a failing spy satellite would fall on populated areas the government blasted it out of the sky. The physics of such a shot were complicated and the Navy had a less than 10 second window to hit the satellite as it passed overhead of its ships in the Pacific Ocean. But it worked.
Now word comes that a five year-old Intelsat TV satellite is meandering across Earth's orbit and any attempts to control it have proven futile. At issue now is that the satellite could smash into other satellites or ramble into other satellite orbits and abscond with its signals. That is possible because while the satellite is uncontrollable it is still broadcasting.
Intelsat said its Galaxy 15 satellite " experienced an anomaly on 5 April 2010" and that the system's traffic, which provides transmission capacity for cable programmers in was moved to Intelsat's Galaxy 12 satellite. Published reports say no major service problems have occurred yet from the likes of DirecTV and Comcast. Intelsat and Orbital Sciences Corporation, the manufacturer of the G-15, are conducting a technical investigation, Intelsat says.
Whether or not this rogue satellite hits something or has to be destroyed it again raises the topic of what to do about the growing amounts of space junk orbiting the Earth.
The chief of US Strategic Command last year said the US was decades behind where we should be to protect assets in space from such debris. The most serious debris comes from a couple of large events in the past few years. The intentional destruction of the Chinese Fengyun-1C weather satellite in January of 2007 and the accidental collision of American and Russian spacecraft in February of 2009 have increased the cataloged debris population by nearly 40%, in comparison with all the debris remaining from the first 50 years of the Space Age, experts say.
The United States Space Surveillance Network, managed by U.S. Strategic Command, is tracking more than 19,000 objects in orbit about the Earth, of which approximately 95% represent some form of debris. However, these are only the larger pieces of space debris, typically four inches or more in diameter. The number of debris as small as half an inch exceeds 300,000. Due to the tremendous energies possessed by space debris, the collision between a piece of debris only a half-inch in diameter and an operational spacecraft, piloted by humans or robotic, has the potential for catastrophic consequences, experts say.
More recently problems have been avoided by proactive action. In January for example, French space scientists said they had moved one of the key Earth-observing satellites out of its orbit with four NASA satellites to avoid potential collisions.
The French satellite, known as PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Science coupled with Observations from a Lidar) was flying in a constellation of satellites known as the A Train. The A-Train satellite formation consists of NASA's Aqua, CloudSat, CALIPSO, and Aura.
The French Space Agency (CNES) said that after collecting observations synchronous with the other satellites from the A-Train for almost 5 years, PARASOL was moved to a lower orbit 2.4 miles (3.9 km) under the A Train after CNES noted PARASOL orbit tracks slowly drifting eastward over the past these past few months.
CNES's said its decision to position PARASOL to a lower orbit was motivated by safety reasons to minimize the risk of collision, should PARASOL begin to fail (PARASOL, flew within about 10 minutes of the others). While the expected duration of the PARASOL mission was 2 years, it will reach 5 years in March 2010.
In the new orbit, observations from PARASOL will no longer be simultaneous with the others, except for only a few days at regular intervals, NASA said.
The A Train has already had experience with avoiding space junk. NASA has noted that the Aqua satellite in November successfully performed its first ever Debris Avoidance Maneuver to avoid a piece of the Chinese Anti-Satellite missile test debris from January 2007.
Follow Michael Cooney on Twitter: nwwlayer8
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:7abb2c7b-0272-480d-90a2-00e59723aaec> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2230710/security/call-in-the-military-to-blast-rogue-satellite-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00296-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960469 | 872 | 3.0625 | 3 |
At first sight, Android seems a rather simple operating system; however, it contains a lot of hidden functions and settings (especially in the latest versions) which can make your life much easier. So, before you hurry to get root rights and install tons of software on your smartphone, you should learn about this functionality.
Until recently, based on the results of surveys and personal experience, I had the impression that users believe that the value of data stored on a device greatly exceeds the cost of the device itself. Why until recently? Well, the current US dollar exchange rate means that I haven't seen such surveys among new iPhone users :).
Everyday, new vulnerabilities are discovered in mobile devices that can be exploited by intruders. They can send an SMS to a pay-per-call number, they can collect and sell a large database of contact details, and they can also compromise a specific individual. Successful exploitation of a vulnerability requires that a whole range of conditions are met. There is another way, however! Provide the user with a really useful application (a game with birds), whose manifest contains a list of device information that we are interested in. In this article, we will look at ways of obtaining and saving important information from an Android device.
Active Directory is a phenomenon that comes about quite often during the security testing of large companies. It is all too common to come across not a single domain in a single forest, but rather a more interesting structure with more branches. So today we are going to focus on how to perform reconnaissance and study forest structures. We will also look at possibilities for increasing privileges. Then we will conclude by compromising an enterprise's entire forest!
According to cvedetails.com, more than 1,305 vulnerabilities have been found in the Linux core since 1999. Sixty-eight of these were in 2015. Most of them don't cause many problems (they are marked as Local and Low), and some may cause problems only if they are attached to certain applications or OS settings. In reality these numbers are not that big, but the core is not the entire OS. There are also vulnerabilities found in GNU Coreutils, Binutils, glibs and, of course, user applications. Let's take a look at the most interesting of the bunch.
The phrase "hacking utilities" has gradually come to acquire a negative meaning. Antivirus software teams curse them out, and users look down on them, placing them on a par with potential threats. But one can perform an audit and other relatively significant tasks simply from the browser, if it is prepared properly. In this article we take a look at the respective add-ons to Chrome, but one can find similar additions for Firefox as well.
So you've decided to jailbreak your device, downloaded a proper utility from the website pangu or taig, connected your smartphone to your computer, and launched the application. After several reboots, a message was displayed on the screen confirming the jailbreak's success and the Cydia application was installed on the device. It seems that everything worked fine, but what's next? If you've ever asked yourself this question, this article is for you. | <urn:uuid:f0142ded-7a57-4cad-846c-8541ed9fb35d> | CC-MAIN-2017-04 | https://hackmag.com/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00204-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964724 | 646 | 2.65625 | 3 |
A Californian winemaker is harnessing IoT sensors and drone technology to collect data regarding their crops.
The Hahn Estate Winery, which is home to 1,100 acres of grapes, has decided to employ the cutting-edge methods as a response to the fourth consecutive year of drought in the US state.
The drones are manufactured by PrecisionHawk and contain a number of sensors, including visual, multispectral, thermal and hyperspectral, which collect data and upload it to the cloud in real time. The unmanned aircraft is able to self-monitor its performance levels while in flight and reportedly has a number of benefits for agricultural firms. As well as analysing drainage, pathogens and yield estimates, the drones also help prevent birds from eating crops.
Commercial drone use has been a controversial topic, particularly in the US where the Federal Aviation Administration has issued prohibitive regulations. The likes of Amazon and Google have encountered difficulties when it came to launching their own drone projects, but the FAA does seem to be relenting somewhat. As well as being used at Hahn Estate Winery, a number of insurers have also been granted permission to use drones for risk assessment.
IoT and drones
Information gathered by the drones at the Californian winery is combined with data collected from ground-based IoT sensors in order to get a more holistic view of crop health. The data is then processed by Verizon’s IoT analytics platform to provide round-the-clock monitoring and hopefully, generate valuable insights.
“In agriculture IoT analytics can help drive up revenues – we’re seeing drones being used to map crop yield and gather data which can be analysed to help the farmer determine how to increase the harvest,” Greg Hanson, vice president of business operations EMEA at Informatica, told Internet of Business.
With potential applications in other industries, including search and rescue, geology, and infrastructure surveying, it might not be long before drones achieve broader commercial success. | <urn:uuid:80334457-fcce-423a-a35d-7e743b160402> | CC-MAIN-2017-04 | https://internetofbusiness.com/winemaker-uses-drones-and-iot-to-boost-crop-production/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955099 | 405 | 2.578125 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.