While media reports are doing a great job of educating the public about the energy consumption of data centers – a subject that not many people might think about day-to-day – a recent study shows that even as data center loads continue to grow exponentially, their energy consumption has not grown at the same rate.
A recent Bloomberg article, on the flip side, indicates that investors are becoming wary of data centers due to their increasing water consumption, a hot button topic in popular data center markets like California.
The data center industry, in other words, dodges one efficiency and sustainability crisis only to step into another.
By now, even those outside of the industry have heard it: data centers consume 2% of all energy in the USA. They have as big a carbon footprint as the airline industry. Your Snapchats are killing the planet via data use. And so on.
It’s true that data center demand continues and will continue to skyrocket. More and more devices are being connected to the internet and all of them have to link to a data center somewhere. Businesses are increasingly decommissioning their on-premise server rooms and moving to colocation or the cloud.
Somewhere around the turn of the decade, data center energy consumption continued to increase, but the rate of growth slowed. Some theorized the recession limited data center demand, but facilities were also improving their efficiency dramatically. Facebook and Google are redesigning their data centers every few years with new cooling advances. Facilities are opening strategically in locations that allow free cooling or access to direct renewable energy.
Ultimately, the more efficiently a data center can operate, the better it is for data center service providers and users alike. This recent report from the Department of Energy and Lawrence Berkeley National Lab demonstrates that if data centers did not improve their energy efficiency, they would use an additional 600 billion kWh by 2020.
The report further theorizes that if more businesses aggressively migrate to cloud computing, energy consumption from data centers could actually decrease by 33 billion kWh by 2020, which would be 45% more energy efficient than if current trends continue.
But meanwhile, another wrinkle in data center resource consumption is catching the attention of the mainstream. Water consumption has been a rising concern among data center professionals in the past few years, with the Green Grid introducing a Water Efficiency Metric and more talk about recycling gray water, immersion cooling methods, and reporting on water use spreading throughout the industry. A large data center can pump hundreds of millions of gallons of water through cooling units, humidifiers, and support infrastructure every year — or even millions of gallons every month.
The report in Bloomberg shows that the water consumption of large scale data center facilities may finally have an impact on the bottom line of facilities beyond simply paying those hefty water bills.
Bloomberg claims that investors have taken notice of the millions of gallons used by data centers, especially in drought-prone states like California, which also happens to be one of the biggest data center markets in the world. The California governor recently signed an emergency executive order to further conserve water statewide.
Ultimately, money talks. While desert and drought-stricken states may restrict data center water consumption when things get dire, data center services are too essential to disappear entirely. What may happen is a shift, as investors reward companies that place data centers in areas with abundant resources.
The increased scrutiny could help push data center designers to embrace more efficient models, too. Greenpeace and the New York Times were some of the first to draw attention to data center electricity consumption, and while efficiencies are a natural side effect of the business of operating a data center, those reports may have helped lead to PUE and other energy efficient improvements. Water consumption in data centers might have its own watershed moment soon. | <urn:uuid:4a215435-4c7d-468c-8464-76e6188f8e5c> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/one-data-center-efficiency-crisis-averted-another-pops-up | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00195-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948223 | 760 | 2.796875 | 3 |
I've come across a scenario where I need to divide a number by 365.25 in assembler. Let's say we have a variable VARA whose value ranges from 0 to 36525. I need to divide VARA by 365.25. The quotient part of the result is needed for further processing in my logic.
Note that the dividend will always be a whole number; only the divisor has a fractional part.
Please help me find a solution for this, and let me know in case additional information is needed.
Yes...I was actually using it for "age calculation logic". Here the scenario is to calculate the age when birthdate is given.
What I've thought of doing is this (high-level logic):
TODAYDATE = (Current Year * 365) + (Current Month in days) + (Current Day)
BIRTHDATE = (B'day year * 365)+ (B'day month in days)+ (day)
AGE is calculated as (TODAYDATE - BIRTHDATE) / 365
The results are nearly okay, but they will not always be correct because leap years are not taken care of. From googling, I understand that if 365.25 days are used (instead of 365) in the above calculation, the result will be almost accurate. Please advise whether there is a better way to calculate age from a given birthdate. I would be grateful if anyone has a sample handy. Thanks in advance.
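A common way to handle this without fractional arithmetic is to scale both sides by 100: since 365.25 = 36525/100, the integer quotient of (days × 100) ÷ 36525 equals days ÷ 365.25. For age specifically, a straight calendar comparison avoids the 365.25 approximation altogether. The sketch below shows both ideas in Python purely to illustrate the arithmetic (not assembler, and the names and sample values are invented for the example):

```python
def age_by_scaled_division(days_difference: int) -> int:
    # 365.25 == 36525 / 100, so dividing by 365.25 is the same as
    # multiplying by 100 and doing an integer divide by 36525.
    return (days_difference * 100) // 36525

def age_by_calendar(birth_ymd: tuple, today_ymd: tuple) -> int:
    # Exact age: subtract the years, then subtract one more if the
    # birthday has not yet occurred this year. No approximation needed.
    (by, bm, bd), (ty, tm, td) = birth_ymd, today_ymd
    age = ty - by
    if (tm, td) < (bm, bd):
        age -= 1
    return age

print(age_by_scaled_division(7305))                    # 7305 days -> 20
print(age_by_calendar((1990, 6, 15), (2016, 6, 14)))   # 25 (birthday not reached yet)
print(age_by_calendar((1990, 6, 15), (2016, 6, 15)))   # 26
```

In assembler the same scaled-division trick applies: multiply the day count by 100, then divide by the constant 36525 and keep the quotient.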
_______ represent various attributes of an object, such as height, color, font size, sentence length, and so forth.
A server named www.cobbwebdesign.com can set a cookie for the domain name www.microsoft.com.
_________ is the process of creating new copies of an object
Cookies are commonly used to store information about a user to maintain state.
A _______ is the specific color, width or height that belongs to the property of an object. | <urn:uuid:ceb0ac39-e584-44a7-9b36-dfb7183e2184> | CC-MAIN-2017-04 | http://www.aiotestking.com/ciw/category/exam-1d0-435-javascript-fundamentals-exam/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.831897 | 100 | 3.640625 | 4 |
Since the launch of vSphere 3.5, ESXi has been the default hypervisor for VMware environments. Here’s a quick description of how to get your ESXi environment up and running.
ESXi uses approximately 2 GB of disk space and 1 GB of RAM. It requires a 64-bit x86 CPU, 2 GB of RAM, and a network adapter of at least 1 Gbps. There is a free version of ESXi, but it lacks many of the features of a licensed version. Be sure to check your system for compatibility before continuing: http://www.vmware.com/resources/compatibility/search.php.
Register for a license or download from your existing VMware account. Download the ESXi ISO file and burn it to a disc. Boot from this CD on the server you would like to run virtual machines on. When the installer appears, press enter to display the licensing agreement. Press F11 to accept and choose a disk to install ESXi. Accept the warning that your disks will be repartitioned (read: erased). You will then eject the disk, press F11 to install, and, when complete, press enter to restart the machine.
Before you can start building virtual machines left and right, you’ll need to configure ESXi. After your server reboots, press F2 to launch the Direct Console User Interface (DCUI). By default, “root” is the username, with no password. Once you’ve logged in, press F2 to view the System Customization menu, and hit Configure Password to set a password.
The first thing to tackle: network configuration. ESXi attempts to obtain an IP address automatically through DHCP (Dynamic Host Configuration Protocol). This will fail if you don't use DHCP, but you'll probably want to check some of the other settings anyway.
Press F2 after logging in to view the menu. Select Configure Management Network to set up your networking options. To start, select Network Adapters, choosing the NIC you want. Select IP configuration, then static IP to set an IP address, net mask and gateway. After saving, choose DNS configuration to set your DNS and host name.
There are several other options under System Configuration, most of which are self-explanatory. Some helpful things to know are:
Configure Lockdown Mode: blocks direct remote logins to the host so that it can only be managed through vCenter; local console access remains available.
Restart Management Network: when a VM isn’t responding or has other issues, this is a good place to start troubleshooting, as it restarts the network interface.
Disable Management Network: turning off the network will not affect local VMs.
Restore Standard Switch: only relevant if you are using a distributed vSwitch, this option creates a virtual network adapter and moves the network from your distributed switch to a standard vSwitch.
View Support Information: choose this to see the server serial number, license number, and Secure Shell thumbprint.
Troubleshooting Options: From here you can enable/disable the local command line and remote SSH command line, set the time-out length for the command line, and restart VMware agents.
Once you have your network configured and root password set, it’s into the vSphere client to finish up configuration. Download and install the vSphere client on another computer, then enter your IP address, username and password. Ignore any security warnings. Now you’re into vSphere and can start setting up and managing VMs!
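If you prefer to script that last step instead of using the vSphere client, the same IP address and credentials work with VMware's pyVmomi Python bindings. This is only a minimal sketch, assuming pyVmomi is installed; the host address and password shown are placeholders, not values from this guide:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder management IP and credentials -- substitute your own.
si = SmartConnect(
    host="192.168.1.50",
    user="root",
    pwd="your-password",
    sslContext=ssl._create_unverified_context(),  # lab use only: skips certificate checks
)
try:
    content = si.RetrieveContent()
    print(content.about.fullName)  # e.g. "VMware ESXi ... build-..."
finally:
    Disconnect(si)
```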
Posted By: Joe Kozlowicz | <urn:uuid:1b3e8a5d-c4e6-4e1d-ab6a-605d4897bc55> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/installing-vmware-esxi-and-initial-configuration | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.836613 | 730 | 2.609375 | 3 |
Cookies are files stored temporarily on a World-Wide Web browser's
computer, which allow the World-Wide Web server to store persistent
information associated with the browsing user. A World-Wide Web
server is able to store any information in a cookie, but generally
can only retrieve information that it stored in the first place. | <urn:uuid:185f21fc-212e-4b7f-913d-97d687cb73f3> | CC-MAIN-2017-04 | http://hitachi-id.com/concepts/cookies.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00489-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.770607 | 68 | 2.625 | 3 |
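As a rough illustration of that store-and-retrieve cycle, the snippet below uses Python's standard-library http.cookies module to build a Set-Cookie response header and then read the value back from the Cookie header a browser would return on a later request; the cookie name and value are invented for the example:

```python
from http.cookies import SimpleCookie

# Server side: build a cookie to include in the HTTP response headers.
outgoing = SimpleCookie()
outgoing["session_id"] = "abc123"
outgoing["session_id"]["path"] = "/"
outgoing["session_id"]["max-age"] = 3600   # keep it around for an hour
print(outgoing.output())                   # emits a "Set-Cookie: session_id=abc123; ..." line

# Server side, on a later request: parse the Cookie header the browser sent back.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)        # -> abc123
```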
Online political debates over modern media (the computer nets) won't replace face-to-face debates, but they have significant advantages and few disadvantages -- at least, not for honorable candidates and advocacy that can stand the light of intense public scrutiny.
Local campaigns can use local civic nets. State and national campaigns can use the nonprofit, nonproprietary Internet -- that is also accessible to those who use higher-priced services such as America Online, CompuServe and Prodigy.
Thousands of debates -- or discussions -- occur online, with worldwide participation. The process begins when someone posts a message in a public forum identifying the subject and making an opening statement.
As others electronically visit the forum -- minutes or days later -- they may simply watch ("lurk"), or may add their own comments, if permitted by the forum "owner."
Many of the best online forums are moderated by their owners, through which all forum postings are channeled. Moderators usually exercise restrained control, merely filtering out content-free or off-topic comments. (Otherwise, they are vigorously criticized in other public online discussions, and quickly earn ill repute.)
An online campaign debate could operate with the moderator limiting questions, responses and rebuttals only to those who are legitimate candidates (by mutually agreed criteria). However, the moderator's control would only be to verify that comments came from the campaigner (avoiding forgeries), and assure that comments meet agreed-upon size and timing limits.
An online debate costs almost nothing, but can reach thousands of journalists and voters within minutes or hours, who -- in turn -- can and often do cascade interesting content on to thousands of their online contacts, at the stroke of a key.
It has a maximum potential audience of 30 million to 60 million people who are, on the average, upscale and often more affluent (translation: potential donors). Net users are also -- more often than the average -- in policy, management and leadership positions in their work and their communities (translation: can influence voters).
It has no schedule conflicts, nor schedule excuses. Candidates and their trusted advisors can draft their questions, proposals, responses and rebuttals from anywhere, at any time.
(Campaigners could type their own responses -- which makes about as much sense as them doing their own typing after being elected to office -- i.e. generally, they should let some staffer's fingers do the walking.)
It allows candidates to make their statements -- presumably under time and space limits. But in addition, they can reference more comprehensive information about any given topic, simply by giving an access pointer to where that information is located online, where anyone who is interested can retrieve it almost immediately, essentially without cost or effort. (However, this is only of value to those candidates who have substantive positions on the issues.)
It allows campaigners to consult with their advisors -- rather than studying play books and training with speech coaches and handlers as is the case for face-to-face debates.
Candidates have ample opportunity to consult and develop thoughtful responses, since online debates typically occur over a period of days -- or weeks -- with each participant posting only one or two comments per day. Thus, their responses can accurately and completely present their views.
Some members of the press dislike this. They want more spontaneous interaction -- more entertaining and
with greater likelihood of some inadvertent slip.
But the fact that someone is entertaining is a rather poor criterion for selecting political leaders. We don't want to elect clowns (or so we say), and we want our policy makers to seek the best advice and give each decision careful consideration. Don't we?
So why ask candidates to do less when they present their positions to us prior to an election?
* It avoids the need for "a pretty face." It's been said that homely Abraham Lincoln probably could not win a presidential election in this era of television. Perhaps one of the more serious impacts that television has had on politics is that -- essentially -- a candidate who is visually or verbally unattractive has little chance of winning an election, or of persuading voters in a televised pitch or debate -- regardless of how well qualified they are.
In contrast, it's often said online that, "There are no bad haircuts in cyberspace." (It's also been said that "no one knows you're a dog" -- but that, of course, is not applicable in this situation.)
Online, only two things count -- the substance of what is presented, and the clarity and persuasiveness of its written presentation.
* An online debate easily facilitates free, immediate polling and feedback from the audience, regardless of size or geographic distribution. There are numerous ways to structure volunteer or active-outreach polling, conducted entirely by (free) e-mail or other mechanisms. With appropriate design, it can even permit automated counting of the responses.
BENEFITS TO THE PUBLIC
Citizens -- and the press -- can "watch" the debates online at any time, from any place in the world. They can easily save the full text of the candidates' exact statements for later retrieval, review, excerpting and redistribution whenever desired. (Perhaps we shouldn't mention this aspect of it to the politicians.)
For the first time, independent of time and location, citizens and candidates can bypass the editor's scissors and the news-casters voice-over "translation" of candidates' actual statements.
There is another benefit: For the first time in history, everyone who is interested in a political debate can also have equal access to simultaneous, parallel online discussions -- including national and global experts, pundits and activists -- in the moderated and "immoderate" worldwide forums about every conceivable issue, that occur continuously, online.
Statewide debates have already occurred in Minnesota and surely elsewhere.
And in January, I proposed an online debate by Republican presidential candidates that would have been hosted by mainstream media including U. S. News & World Report and the Mercury Center at Knight-Ridder's San Jose Mercury News.
However, only three candidates accepted -- Lugar, Taylor and Collins (Dornan agreed in writing; then reneged) -- although all say they are competent to "lead" us into the Information Age and the 21st century.
The question is not if substantive online candidates' debates will become part of all major political campaigns -- the only question is whether it will happen this year.
Jim Warren has served on the California Secretary of State's Electronic Filings Advisory Panel, received John Dvorak's Lifetime Achievement Award, the Northern California Society of Professional Journalists' James Madison Freedom of Information Award, the Hugh M. Hefner First Amendment Award, and the Electronic Frontier Foundation Pioneer Award in its first year.
He founded the Computers, Freedom & Privacy conferences and InfoWorld magazine. He lives near Woodside,
Calif. E-mail: firstname.lastname@example.org | <urn:uuid:8bcaa3c8-0fc0-4c58-886c-5599b3e44b41> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/100555864.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955172 | 1,416 | 2.796875 | 3 |
If you’re considering using GPUs to speedup compute-intensive applications, it’s important to understand which algorithms work best with GPUs and other vector-processors. As HPC expert and founder of StreamComputing Vincent Hindriksen puts it, you want to know “what kind of algorithms are faster when using accelerators and OpenCL.”
Professor Wu Feng and a team of researchers at Virginia Tech elucidate this topic with their 2011 manuscript, titled “The 13 (computational) dwarfs of OpenCL.” The authors of that seminal paper explain that “each dwarf captures a pattern of computation and communication that is common to a class of important applications.”
This paper became an important resource for StreamComputing, and it remains a good starting point when considering the benefits of GPUs and OpenCL.
Hindriksen explains that the 13 dwarfs framework was inspired by Phil Colella, who identified seven numerical methods important for science and engineering, aka seven dwarfs. To this list, Feng and his team added six more application areas well-suited to GPUs and other vector-accelerated processors. That’s how the 13 “dwarfs” came to be. To place this in literary context, there are seven dwarfs in “Snow White,” and 13 in Tolkien’s “The Hobbit.”
Hindriksen continues with his overview of the computational dwarfs of OpenCL. “Each part has a description of the ‘computational dwarf,’ examples of application areas and some words from the OpenCL perspective,” he writes. “It is not intended to be complete, but to be a starting point. You will notice overlap, as some algorithms have aspects of two or more – this also implies some problems have more solutions.”
The 13 computational dwarfs are as follows:
Dense Linear Algebra
Sparse Linear Algebra
Spectral Methods
N-Body Methods
Structured Grids
Unstructured Grids
Map-Reduce & Monte Carlo
Combinational Logic
Graph Traversal
Dynamic Programming
Backtrack & Branch-and-Bound
Probabilistic Graphical Models
Finite State Machines
Hindriksen leaves the reader with a note on the importance of categorization. “Understanding which type of applications perform well, makes it easier to decide when to use GPUs and other accelerators or when to use CPUs,” he writes. If the candidate algorithm does not map to one of the 13 camps, then there’s a good chance it’s not suitable for OpenCL.
“[Furthermore] the right hardware can be better selected when you know the right job category,” notes Hindriksen. “So don’t just buy a Tesla when you want to start with GPGPU, as others have done. For example combinational logic is much faster on AMD GPUs. Not all of above algorithms work best on a GPU by definition – OpenCL on a CPU is a good choice when memory-bandwidth is not the bottleneck.”
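To make the data-parallel pattern behind several of these dwarfs concrete, here is a minimal element-wise kernel written with the PyOpenCL bindings — the kind of workload (dense linear algebra, map-style operations) that generally suits GPUs. It assumes PyOpenCL and a working OpenCL runtime are available, and it is an illustration rather than code from the paper or the StreamComputing series:

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()      # picks any available OpenCL device (CPU or GPU)
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int gid = get_global_id(0);     // one work-item per array element
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```

Workloads that look like this — independent operations over large arrays — sit comfortably in the dense linear algebra and map-reduce dwarfs; algorithms dominated by branching and irregular data access need a closer look before committing to an accelerator.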
StreamComputing is an international software development company based in the Netherlands that specializes in speeding up software using the power of GPGPU computing. | <urn:uuid:707f2a2e-63ec-4765-9015-58bdb42c9cc6> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/10/14/reprising-13-dwarfs-opencl/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912041 | 642 | 2.765625 | 3 |
NASA said its orbiting Kepler telescope has spotted two planets the size of Earth orbiting another star, although both are far too hot to sustain life. The discovery does, however, bring scientists a big step closer to finding an Earth-like planet that could harbor life.
The planets, called Kepler-20e and Kepler-20f, appear to be rocky, like Earth, but orbit too closely to their star for liquid water to persist on their surfaces, a team of scientists reported in the journal Nature.
"These new planets are significantly smaller than any planet found up till now orbiting a Sun-like star," said Francois Fressin of the Harvard-Smithsonian Center for Astrophysics, who led the study.
Astronomers were excited about the discovery because other so-called exoplanets found orbiting stars outside our own solar system have all been bigger than Earth. These two compare in size and structure to Venus and Earth, and suggest strongly that planet-hunters will eventually find planets that look more like Earth, perhaps even with liquid water on the surface.
"In the cosmic game of hide and seek, finding planets with just the right size and just the right temperature seems only a matter of time," said Natalie Batalha, a professor of astronomy and physics at San Jose State University who works on the Kepler team. "We are on the edge of our seats knowing that Kepler's most anticipated discoveries are still to come."
The key would be for planets to be in the "habitable zone" - a distance from the parent star that would allow temperatures compatible with life.
"The primary goal of the Kepler mission is to find Earth-sized planets in the habitable zone," Fressin said. "This discovery demonstrates for the first time that Earth-size planets exist around other stars, and that we are able to detect them."
The $591 million Kepler space telescope is watching about 170,000 stars along a stretch of the Milky Way - the galaxy that includes our own solar system. It finds planets by measuring changes in light coming from stars, which indicate that a planet has passed in front of the star as it orbits.
What's unusual about the Kepler system is the way the planets mix it up. In our own system, the small, rocky planets - Earth, Mercury, Venus, and Mars - are closer to the star and the big gassy planets, like Jupiter, are farther out. In the Kepler system, the big planets alternate with the smaller ones. "We were surprised to find this system of flip-flopping planets," said David Charbonneau of the Center for Astrophysics. "It's very different than our solar system." | <urn:uuid:bcc48e71-a42a-426e-92db-382ac34e1074> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/2011/12/kepler-telescope-spots-two-earth-sized-planets/50340/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00057-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955216 | 543 | 3.875 | 4 |
Bell Labs, the research arm of Alcatel-Lucent, demonstrated the ability to carry 10Gbps over traditional copper telephone lines over a distance of 30 meters. Bell Labs also announced a prototype technology that can achieve 1Gbps symmetrical over existing copper access networks.
The Bell Labs tests used a prototype technology called XG-FAST, which is an extension of the G.fast standard being finalized by the ITU. When it becomes commercially available in 2015, G.fast will use a frequency range for data transmission of 106 MHz, giving broadband speeds up to 500 Mbps over a distance of 100 meters.
Bell Labs said XG-FAST uses an increased frequency range up to 500 MHz to achieve higher speeds but over shorter distances. Bell Labs achieved 1 Gbps symmetrical over 70 meters on a single copper pair. 10 Gbps was achieved over a distance of 30 meters by using two pairs of lines (bonding). Both tests used standard copper cable provided by a European operator.
Marcus Weldon, President of Bell Labs: “Our constant aim is to push the limits of what is possible to ‘invent the future’, with breakthroughs that are 10 times better than are possible today. Our demonstration of 10 Gbps over copper is a prime example: by pushing broadband technology to its limits, operators can determine how they could deliver gigabit services over their existing networks, ensuring the availability of ultra-broadband access as widely and as economically as possible.” | <urn:uuid:04160896-cbeb-4cee-8e74-1e316d5ced4e> | CC-MAIN-2017-04 | http://www.convergedigest.com/2014/07/alu-achieves-10g-over-copper-phone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00168-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91664 | 303 | 3.3125 | 3 |
The term "smart machine" defines a broad category of programming and machinery that learns and adapts to situations, and processes data at speeds that puts older gadgets to shame. It encompasses the wide array of robots, artificial intelligence, smart appliances and wearables, neuromorphic chips, and next-generation supercomputers that the tech media can't get enough of.
Is the Workforce Ready for the Rise of Smart Machines?
June 27, 2014
It also encompasses a computing category that can process data faster than people can, and can "think," more or less, at a level that's closer to human cognition than machines were previously capable of. Society and the human workforce may feel this machinery's impact before most decision-makers realize what's happening — for good or for ill.
Gartner predicted in fall 2013 that smart machines would increasingly threaten jobs through 2020. Research director Kenneth Brant painted a grim scenario in a company press release. "Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market's ability to create valuable new ones," he said.
But the bulk of Gartner's content wasn't quite as extreme. Researchers claimed that smart machines might become a threat as they proliferate, but it's too early to determine how soon that will happen or how severe it will become. Even so, workplace leaders should stay vigilant on the technology's development. Sixty percent of CEOs surveyed said that it's far-fetched to think that smart machines could threaten middle-class jobs — but Gartner warns against complacency.
"The bottom line is that many CEOs are missing what could quickly develop to be the most significant technology shift of this decade," Brant said.
Gartner's analysis applies nationally, but people also notice at the local level. The Deseret News, a Salt Lake City newspaper, reported in March 2014 that tabletop menu-ordering computers such as Ziosk exemplify a potential problem for human servers. Customers order customized meals on the Ziosk tablet, and human servers are able to serve more tables because of it.
That sounds pretty good, but the Deseret News implied that automation like this comes with a downside. Machines like IBM's Watson computer, which is used to diagnose cancer, and programs lawyers use to analyze hundreds of thousands of legal documents will displace many human employees who used to contribute to the process.
The 2013 British research report The Future of Employment: How Susceptible are Jobs to Computerisation claimed that 47 percent of American jobs are at risk because of sophisticated computing.
But there may be a roadblock ahead for smart machines' possible workforce dominance, according to Gartner's research. Smart machines in general are in the first iteration, so the "scarier," more productive machines are possibly quite far in the future, and the cost required for their mass production is impeded by weak global revenue and economical states. The need for existing human labor is still quite strong. | <urn:uuid:f377b748-e985-4f45-818d-e890f78dcc10> | CC-MAIN-2017-04 | http://www.govtech.com/videos/Is-the-Workforce-Ready-for-the-Rise-of-Smart-Machines.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944589 | 635 | 2.78125 | 3 |
Leadership-class supercomputers, like the Department of Energy systems, are instrumental in meeting the challenges of the 21st century. A team of scientists and mathematicians at the DOE’s Lawrence Berkeley National Laboratory are using these powerful number crunchers together with sophisticated algorithms to create cleaner combustion technologies.
The United States relies on the combustion of fossil fuels for more than 80 percent of its energy needs. The national economic engine as well as the standard of living are tied to this combustion process to drive all manner of transportation and provide home energy needs. But the burning of fossil fuels is also the number one source of anthropogenic climate change, so advances in cleaner combustion are critical to our future.
Despite the prevalence of combustion, the chemical process is not very well understood, but that’s changing thanks in part to the work of researchers at Berkeley and other DOE labs. Applied mathematicians and combustion scientists at Berkeley are using supercomputers to model this complex process with the intent of developing cleaner-burning, more efficient devices.
A recent article at Berkeley Lab by Jon Bashor discusses one such technology, called the low-swirl burner, which was developed by Robert Cheng in Berkeley Lab’s Environmental Energy Technologies Division. The device imparts a gentle spin to the fuel and air mixture, which causes it to spread out and burn at a lower temperature than in conventional burners. Lower flame temperatures are associated with increased efficiency and reduced levels of nitrogen oxides (NOx) and greenhouse gases. Reducing NOx compounds is a worthy goal since they have been implicated in emphysema, bronchitis, asthma and heart disease.
The article explains the difficulty with simulating practical-scale combustion devices like the low-swirl burner. “The fuel is often turbulent, the combustion process may involve hundreds of species and thousands of chemical reactions, and the processes involved can span milliseconds to minutes and microns to meters,” Bashor writes.
Just several years ago, the tools that were available for this purpose failed to support the necessary complexity. So scientists and mathematicians at Berkeley Lab's Center for Computational Sciences and Engineering (CCSE) developed new software tools and algorithms that cut computational costs for combustion simulations by a factor of 10,000. At the same time, the number of variables used to represent the solution has increased from hundreds of thousands to more than a billion. The upshot is that it's now possible to produce 3D simulations with a remarkable level of complexity and fidelity.
The new software tools are based on adaptive mesh refinement (AMR), a grid-based system that rations computing by directing maximum processing power to where it’s needed most.
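The core idea of AMR can be sketched in a few lines: scan the solution on a coarse grid, flag the cells where it changes sharply (for a flame, the thin reaction front), and spend the fine resolution only there. The toy 1-D example below, with invented numbers, is meant only to illustrate that principle, not the Berkeley Lab codes themselves:

```python
import numpy as np

def refinement_flags(u, threshold):
    """Flag coarse cells whose jump to a neighbour exceeds the threshold."""
    jumps = np.abs(np.diff(u))
    flags = np.zeros(u.size, dtype=bool)
    flags[:-1] |= jumps > threshold   # cell on the left of a steep jump
    flags[1:] |= jumps > threshold    # cell on the right of a steep jump
    return flags

# Coarse 1-D grid with a sharp, flame-front-like transition near x = 0.5.
x = np.linspace(0.0, 1.0, 65)
u = np.tanh((x - 0.5) / 0.02)

flags = refinement_flags(u, threshold=0.1)
print(f"refining {flags.sum()} of {flags.size} coarse cells")  # only the front is refined
```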
Now the research team is focused on adapting the design of the device to burn hydrogen, which while not a renewable resource as it is currently obtained, does not itself release greenhouse gases. Burning hydrogen does still release a very low amount of NOx, however, and the project seeks to reduce the amount further. The team is using the DOE’s National Energy Research Scientific Computing (NERSC) Center supercomputers to help achieve this goal.
“In order to develop clean, energy-efficient systems, we need a continuous feedback loop from the flame to the lab and back again,” Cheng said. “This is the missing link that computation at NERSC provides.”
Combustion science has benefitted from an increase in processing power as well as the development of better algorithms. In addition to the work being done at Berkeley lab, there are similar projects at Argonne National Laboratory (led by scientists from General Electric) and at Sandia National Laboratories (using a supercomputer at Oak Ridge National Laboratory). All these teams are studying combustion with the intention of reducing fuel consumption and pollutants. | <urn:uuid:40144958-cace-4f6d-8a63-10d01834e9e8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/10/01/supercomputing_targets_cleaner_combustion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00554-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938059 | 763 | 3.59375 | 4 |
The world went into panic mode over Swine Flu when it began spreading like wildfire early last month, first in Mexico, then the United States and beyond. Then it became evident that most cases were mild -- no worse than garden-variety seasonal flu. People moved on in search of something else to worry about.
And so went another textbook example of how we panic too much when a threat is in the news and plan too little when the headlines dissipate. [See: Swine Flu: To Fear is to Fail]
The reality, at least in the case of Swine Flu, is that the threat was low in spring but could morph into something more sinister in the fall and winter. Emergency preparedness experts say there's no cause for panic, but that this is a reminder that organizations should always be thinking about how to keep the machinery moving in the event something big and unexpected happens. [See: Now That the Hype Is Over, Keep Planning]
For emergency planners, there are both physical and cyber security challenges to think about regarding Swine Flu and other potential pandemic viruses.
On the physical side, private entities should be hammering out a game plan for who would do what and where if the government decided to restrict our movements to contain an outbreak, says Kevin Nixon, an emergency planning expert who has testified before Congress and served on infrastructure security boards and committees including the Disaster Recovery Workgroup for the Office of Homeland Security, and the Federal Trade Commission.
"Companies and employers that have not done so are being urged to establish a business continuity plan should the government direct state and local governments to immediately enforce their community containment plans," Nixon says. [Podcast: How to Prepare for a Swine Flu Pandemic]
If the Federal government does direct states and communities to implement their emergency plans, recommendations, based on the severity of the pandemic, may include:
- Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals may be treated with influenza antiviral medications, as appropriate, if these medications are effective and available).
- Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed).
- Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community, to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider, and small family childcare homes that provide care to six or fewer children in the home of the provider.
- Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. [Source: Swine Flu: How to Make Biz Continuity Plans, by Kevin Nixon]
On the IT security side, organizations need to be thinking about how to stay on top of things like log monitoring and patch management in the event of sickness among the IT security staff.
Kevin Coleman, a strategic management consultant at Technolytics, says companies should also plan for limitations on business travel and even bringing in extra cleaning crews and keeping employees at home if they complain of so much as a sniffle.
"Encourage anyone who feels the least bit sick to stay home," Coleman says. "If an employee can do all the work from home on company laptops and VPNs that they do in the office, there's no reason to have them come in. If you can limit exposure from the get-go, why wouldn't you?"
Meantime, Coleman said, companies should ramp up the cleaning crew activity that's already going on, mostly after office hours. Bringing in extra cleaning crews to wipe down heavily-touched surfaces like doors, walls, phones and keyboards is money well spent, he said.
"Employees can also do their part to limit the spread of flu by carrying around antibacterial hand wipes," he said, noting that some of his clients have already pulled back on the amount of business travel employees can do.
It's far from certain that we're in for a deadly 1918-style pandemic. Either way, security experts say going over the scenarios and building a game plan is time well spent. | <urn:uuid:0e16e4a1-45c7-4ac9-9838-9907995aeb33> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2124059/pandemic-preparedness/swine-flu--a-wake-up-call-for-emergency-planners.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00278-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964916 | 1,004 | 2.59375 | 3 |
Backed by government and industry, the project aims to create a better understanding of the technical and social implications for citizens who hook into the digital economy.
Local authorities in the West Midlands will select 300 representative volunteer households from the Sutton Coldfield, Lichfield and Tamworth areas to take part in the scheme.
E-commerce minister Douglas Alexander believes the project to be the first step in understanding how televisions will be used in the digital environment of the future. "The consumers taking part in these real-world tests will play a vital role in ensuring that the benefits of digital TV can be made more accessible to all," Alexander said.
Culture minister Tessa Blackstone added, "Digital television will transform the communications services available in the home. The pilot project will be vital in assessing the impact on viewers adopting these new services."
The government started to address the widespread lack of access among the UK population to computers, the Internet and the e-revolution back in 1999. Use of digital television was highlighted as an initiative for tackling the "information poor" along with IT for All centres, IT Learning Centres and the recycling of computers to deprived families. | <urn:uuid:e6b1dcb7-a980-48f2-9c5f-f7ba080d0b3c> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240041949/Volunteer-families-to-shape-tomorrows-digital-world | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00122-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920423 | 260 | 2.515625 | 3 |
How does one get a key pair?
A user can generate his or her own key pair, or, depending on local policy, a security officer may generate key pairs for all users. There are tradeoffs between the two approaches. In the former, the user needs some way to trust his or her copy of the key generation software, and in the latter, the user must trust the security officer and the private key must be transferred securely to the user. Typically, each node on a network should be capable of local key generation. Secret-key authentication systems, such as Kerberos, often do not allow local key generation, but instead use a central server to generate keys.
Once a key has been generated, the user must register his or her public key with some central administration, called a Certifying Authority (CA). The CA returns to the user a certificate attesting to the validity of the user's public key along with other information (see the related Questions on certificates). If a security officer generates the key pair, then the security officer can request the certificate for the user. Most users should not obtain more than one certificate for the same key, in order to simplify various bookkeeping tasks associated with the key.
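For readers curious what local key generation looks like in practice today, here is a brief sketch using the widely used Python `cryptography` package to create an RSA key pair; registering the public key with a Certifying Authority to obtain a certificate is a separate step not shown, and the key size and encoding are just illustrative defaults:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the key pair locally, on the user's own machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The public key is what gets registered with a Certifying Authority;
# the private key should never leave the user's control.
public_pem = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())
```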
Recent headlines are testament to the growing popularity of ransomware attacks on businesses and consumers alike. In January, for example, Lincolnshire County Council saw its computer systems shut down for four days after it received demands for a £1 million ransom. An attack on Hollywood Presbyterian Medical Centre in the United States the following month netted at least $17,000 in Bitcoin for the “data kidnappers” responsible while, just recently, millions of Microsoft Office 365 users were exposed to a massive ransomware attack.
Indeed, during the first three months of 2016, the Infoblox DNS Threat Index, which tracks the creation of malicious domains worldwide, recorded a 3,500 per cent increase in observations of domains that either hosted malicious ransomware downloads, or communicated with them once installed.
Low risk to reward ratio
According to independent researchers there are now over 120 families of ransomware. Like most malware, ransomware often establishes itself through phishing or spear-phishing, leading a user to download an email attachment or click through to a malicious domain. Increasingly nowadays, it can also spread through infected online advertising networks, affecting users of “clean” sites.
First documented in 1989, ransomware is by no means a new technique. But its popularity has risen significantly recently, particularly in the first three months of this year, where the 35-fold increase in ransomware-related domains accounted for 60 per cent of all malware observed.
A major factor in this growth is undoubtedly the size of reward available to attackers using ransomware. Where once it was used to target consumers for a few pounds here and there, it is now regularly used to carry out more lucrative attacks on businesses, as illustrated by the earlier examples. And, as these increasingly profitable attacks continue to hit the headlines, so other criminals are inspired to carry out similar activity themselves.
The low risk to reward ratio is also an attraction. In the past, the use of real-world transfer mechanisms such as PayPal were fairly straightforward for law enforcement agencies to track. Today though, the ubiquitous nature of crypto currency such as Bitcoin means that criminals can reliably receive payments from their victims in complete anonymity.
Creating a perfect storm
In addition to increased profitability and lower risk, it is now far simpler for more people to participate in launching ransomware attacks. The commoditisation of online crime toolkits, for example, which offer services such as hosting, spamming and targeting, has created an industrial-scale marketplace for Crimeware as a Service.
Furthermore, with the wealth of data widely available on potential targets online, it has become easy for criminals to hit a lot of potential victims simultaneously. Indeed, the crypto malware itself will typically provide the criminal with some sort of information on their potential victim, allowing them to pick and choose who to hit. They are therefore able to easily target high-risk victims such as SMBs, hospitals, or accountants, where the value of the data held on the targeted computer(s) is so high the attackers can demand a substantial ransom.
Attacks will be more likely to continue over a long period of time, as they become simpler to carry out at scale, even with occasional inevitable setbacks. Indeed, the relative cost of malicious infrastructure is now so low that it makes complete sense, from the criminal’s point of view, to scale up those activities that prove to have a return on their investment.
Taking defensive steps
In common with any malware, there are relatively straightforward defensive steps that businesses need to put in place to protect themselves against ransomware. They need to ensure that their security measures are as tight as they can be, for instance, that all their software is up to date, that their users observe best practice, and that their data is clean, protected, and backed-up as often as possible. After all, without a clean back-up copy available, data is perpetually at risk.
Ransomware is clearly working. Lucrative, low-risk and easy to use, it’s highly likely that it will continue to grow in popularity. There’s no doubt we’ll see more instances of successful attacks hitting the headlines over the coming months and this, in itself, will continue to fan the flames.
If they hope to stem this growth, businesses must now take the steps necessary to prevent against attacks and, more importantly, avoid rewarding the attackers. | <urn:uuid:71cb93b5-8873-45ec-8428-f7bcfa2c6bfd> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2016/08/01/ransomware-lucrative-low-risk-easy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00480-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96078 | 883 | 2.640625 | 3 |
Data Analytics in Genetic Research
September 28, 2012
We’re pleased to see this excellent example of the use of analytics. ScienceDaily reveals, “Information Theory Helps Unravel DNA’s Genetic Code.” Specifically, scientists at the Indian Institute of Technology in Delhi were working on one of today’s biggest biology challenges—predicting the distribution of coding and noncoding regions (exons and introns, respectively) in a previously unannotated genome. The researchers were able to speed the process using information theory techniques. The brief write up explains:
“The researchers were able to achieve this breakthrough in speed by looking at how electrical charges are distributed in the DNA nucleotide bases. This distribution, known as the dipole moment, affects the stability, solubility, melting point, and other physio-chemical properties of DNA that have been used in the past to distinguish exons and introns.
“The research team computed the ‘superinformation,’ or a measure of the randomness of the randomness, for the angles of the dipole moments in a sequence of nucleotides. For both double- and single-strand forms of DNA, the superinformation of the introns was significantly higher than for the exons.”
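The phrase "randomness of the randomness" can be unpacked with a toy calculation: measure the Shannon entropy of a property in short windows along a sequence, then measure the entropy of how those window entropies are distributed. The sketch below applies the idea to nucleotide letters rather than dipole-moment angles, on an invented sequence, so it only illustrates the shape of the computation the paper describes:

```python
import numpy as np
from collections import Counter

def shannon_entropy(symbols):
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def superinformation(seq, window=20, bins=8):
    # Entropy of each fixed-length window along the sequence...
    entropies = [shannon_entropy(seq[i:i + window])
                 for i in range(0, len(seq) - window + 1, window)]
    # ...then the entropy of the distribution of those window entropies.
    hist, _ = np.histogram(entropies, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Invented sequence: a repetitive, low-variability stretch followed by a more mixed one.
seq = "ATGCGT" * 50 + "AATTAATTGGCC" * 25
print(superinformation(seq))
```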
Studying DNA regions helps scientists better understand diseases and develop more effective treatments. Just one of the many ways data analytics can be used for something other than boosting a corporations’ bottom line.
Cynthia Murrell, September 28, 2012 | <urn:uuid:fce37f71-0f87-4297-a327-cb9b147b7571> | CC-MAIN-2017-04 | http://arnoldit.com/wordpress/2012/09/28/data-analytics-in-genetic-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00169-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926503 | 317 | 3.0625 | 3 |
Classroom barricade devices can cause more harm than they prevent, the Security Industry Association (SIA) is warning in support of an effort launched by the Door Security & Safety Foundation.
As school administrators consider ways to protect students and staff from violence, particularly active shooters, some are purchasing barricade devices that prevent all entry into a classroom when deployed.
These devices, however, often violate fire codes. In addition, the possibility that a bully or violent student who is already in a classroom could use the device to prevent school staff or first responders from entering could put students at greater risk.
“We’re all seeking the best way to protect children, but we can’t focus only on countering the specific—and, fortunately, highly unlikely—threat of an active shooter, while making other dangers much worse,” SIA CEO Don Erickson said.
“Classroom door locks provide a high level of security in all situations, while the net effect of barricade devices would be to reduce the safety and security of students.”
The Door Security & Safety Foundation has produced a short video (see below) and a white paper explaining the dangers created by these devices — especially since bullying and violence perpetrated by students, rather than intruders, occur far more often than active shooter incidents — and urging that “no door locking device that also compromises life safety should be approved by any jurisdiction.”
(The Dangers of Classroom Door Barricade Devices. What are the consequences of installing barricade devices on classroom doors? Find out from the experts. Courtesy of Door Security & Safety Foundation and YouTube)
Code-compliant classroom door locks that permit authorized access from outside are the best way to ensure the security and safety of students and staff, SIA and foundation officials said. In fact, they noted, there appear to be no documented incidents of an active shooter breaching a locked classroom door.
The Door Security & Safety Foundation’s effort is also being supported by the Partner Alliance for Safer Schools (PASS), which was co-founded by SIA and the National Systems Contractors Association.
PASS has released its own white paper on classroom barricade devices, which is available at http://www.passk12.org, and PASS Steering Committee Member Guy Grace, the director of security and emergency planning for Littleton, Colo., Public Schools, warns in the foundation video of the “unintended consequences” of such devices.
“When it comes to the security and safety of students and teachers, especially when it involves classroom doors, politics and emotions should never override professional opinions and accepted best practices reinforced by building and fire codes,” Grace says in the video.
The Security Industry Association (SIA) is the leading trade association for global security solution providers, with roughly 700 innovative member companies representing thousands of security leaders and experts who shape the future of the security industry.
SIA protects and advances its members’ interests by advocating pro-industry policies and legislation at the federal and state levels; creating open industry standards that enable integration; advancing industry professionalism through education and training; opening global market opportunities; and collaboration with other like-minded organizations.
As a proud sponsor of ISC Events expos and conferences, SIA ensures its members have access to top-level buyers and influencers, as well as unparalleled learning and network opportunities.
SIA also enhances the position of its members in the security marketplace through SIA Government Summit, which brings together private industry with government decision makers, and Securing New Ground®, the security industry’s top executive conference for peer-to-peer networking. | <urn:uuid:8b3fa3d6-7c0a-4863-bd6e-126737ac57e7> | CC-MAIN-2017-04 | https://americansecuritytoday.com/sia-opposes-use-classroom-barricade-devices-see-video/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00077-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944157 | 747 | 2.6875 | 3 |
http://www.newscientist.com/news/news.jsp?id=ns99992250

Will Knight, 17:21 03 May 02, NewScientist.com news service

Programmers the world over will next week have the chance to "reverse engineer" a mysterious and malicious computer program. They must determine its intentions and test their programming skills.

The idea is to simulate the crisis network administrators face whenever a rogue program, also known as a Trojan or zombie, is uploaded into a computer system by an intruder. These programs are designed to capture passwords or probe the system for further weaknesses on the intruder's behalf. An administrator must work out what the program does, but without seeing the source code used to build it.

"In specific cases, you may encounter something you don't recognise," says Job de Haas, managing director of Dutch company ITSX Security, and one of the competition's judges. "It is important that you can get a feeling for the extent of the compromise and how serious it is."

Back to the source

The program will be released next week at the link below, but no further information will be provided, not even the language it was written in. Competitors must not only determine the purpose of the program but also figure out ways it could be stopped in its tracks. They will even be asked to guess what kind of person wrote the program. A panel of judges will mark all the entries.

The Reverse Challenge is the brainchild of a consortium of computer researchers from different companies and universities known as the Honeypot Project.

Reverse engineering involves effectively going backwards through the process of building a computer program. Some programming tools will help with this task but, says De Haas, the process also requires good programming skills. "It's been a very secluded skill that has become more and more mainstream," he says. "An explosion of these [hacking] tools will make this a very needed skill for people in this field."

Ian Brown, a computer security researcher at University College London, says this skill is useful for combating all sorts of malicious programs, including computer viruses and worms. "When a new virus, Trojan or zombie is discovered in the wild, its mode of operation, and hence how to defeat it, can be derived without the need for its source code," he explains.

But programmers will be competing for more than just kudos. They can win computer security books and entry to the Black Hat Briefings, a US computer security conference. The Honeypot Project has in the past organised competitions requiring competitors to analyse a computer system after a simulated break-in.
Arboretum tours in New Jersey just went high-tech.
As NJ.com reported, it's still common to see brown or green plaques with information about various trees and plants in parks around the country, but in Union County, N.J., visitors to one park can use their smartphones to learn more about trees. Some trees along Lenape Park's mile-and-a-half trail, which is part of the East Coast Greenway, have received signs, each bearing the name of the tree and a QR code that links to a smartphone and tablet application.
The application, called Leafsnap, was developed by the Smithsonian Institution; it has been downloaded more than 1 million times and has about 500,000 active users. In addition to reading QR codes, the program's main function is to let users snap photographs of leaves and discover what type of tree each leaf belongs to.
"We are also hoping that this convergence of nature and technology may spur more interest from children," said Freeholder Bette Jane Kowalski. "All you have to do is look at young people today and that phone seems to always be in their hands. This is a way to get them to learn more about the environment around them through a device that has become a major part of their lives."
Find the full report on NJ.com.
The Internet of Things (IoT) is, obviously, heavily reliant upon cloud computing services typically delivered from connected datacenters to provide the intelligence and analytics needed for the spiralling number of devices out there.
The problem with cloud computing
The problem with cloud computing, some analysts and commentators will argue, is that it is centralized, weighty and almost monolithic in some senses. Its very presence in a datacenter means that a communications stream will always need to be established, and so the Input/Output factor (or, more accurately, the upload/download factor) will always need to be accommodated.
This 'transport' requirement inevitably introduces latency… and latency is never good for data exchange, especially in environments where we increasingly move toward what we like to call 'real-time' computing.
Close to the edge
This need for proximity and power has given rise to the term ‘edge’ computing i.e. putting the compute power source (and data storage prowess and network intelligence and so on) closer to the device where it is needed.
According to Chris Raphael writing on Quora, “Edge computing refers to data processing power at the edge of a network instead of holding that processing power in a cloud or a central data warehouse. There are several examples where it’s advantageous to do so. For example, in industrial Internet of Things applications such as power production, smart traffic lights, or manufacturing… the edge devices capture streaming data that can be used to prevent a part from failing, reroute traffic, optimize production, and prevent product defects.”
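To make that concrete, here is a minimal, hypothetical sketch of the pattern Raphael describes: an edge node keeps a short window of sensor readings, makes the latency-critical decision locally, and sends only alerts and summaries upstream. The sensor, the thresholds and the upload function are all stand-ins for illustration, not part of any particular product.

```python
import random
import statistics
from collections import deque

WINDOW = 60            # samples the edge node keeps locally
ALERT_Z = 3.0          # z-score that triggers an immediate cloud alert

def read_sensor():
    # Stand-in for a real vibration or temperature reading on the device
    return random.gauss(0.0, 1.0)

def push_to_cloud(payload):
    # Stand-in for an MQTT/HTTPS upload to the central data center
    print("uploading:", payload)

def edge_loop(samples=1000):
    window = deque(maxlen=WINDOW)
    for _ in range(samples):
        x = read_sensor()
        window.append(x)
        if len(window) < WINDOW:
            continue
        mean, stdev = statistics.mean(window), statistics.stdev(window)
        if stdev and abs(x - mean) / stdev > ALERT_Z:
            # The latency-critical decision is made locally, with no round trip
            push_to_cloud({"event": "anomaly", "value": round(x, 3)})
    # Only a small summary leaves the device, never the raw sample stream
    push_to_cloud({"event": "summary",
                   "mean": round(statistics.mean(window), 3),
                   "max": round(max(window), 3)})

if __name__ == "__main__":
    edge_loop()
```

The point is simply that the raw stream never has to cross the network; only a few bytes of alerts and summaries do.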
Going one further than edge computing then while still retaining a close family relationship with cloud is so-called fog computing.
The very term fog computing was coined by Cisco to denote cloud computing power closer to the physical place where the data is being generated and acted upon. So it's not quite edge computing, in the sense that edge is not necessarily cloud… and fog is a smaller, thinner version of cloud (in the real world… and in technology terms), hence the name.
Fog computing even has its own working group, the OpenFog Consortium.
According to the consortium itself, “The growth in IoT is explosive, impressive – and unsustainable under current architectural approaches. Many IoT deployments face challenges related to latency, network bandwidth, reliability and security, which cannot be addressed in cloud-only models. Fog computing adds a hierarchy of elements between the cloud and endpoint devices, and between devices and gateways, to meet these challenges in a high performance, open and interoperable way.”
Hype and more hyperbole? Perhaps… but fog is a term that we need to at least include in our technical vocabulary now it seems. | <urn:uuid:85808bc7-6cda-4e1d-9cfb-43e0fd698610> | CC-MAIN-2017-04 | https://internetofbusiness.com/forget-cloud-iot-needs-fog-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00435-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931164 | 575 | 3.109375 | 3 |
Cloud Computing: Scientists Interested, but find the Cloud isn't Ready for 'Heavy' Scientific Computing
Two government laboratories, Argonne and Lawrence Berkeley National Laboratory (LBNL), recently decided to kick the tires of public cloud computing to see whether they'd be able to use the cloud to handle some of their scientific computing. The National Energy Research Scientific Computing Center (NERSC) at LBNL is home to the second most powerful computer in the US, a 153,408-processor machine capable of running at petaflop speeds. The scientific computations and simulations that researchers and scientists at these laboratories perform are demanding, to say the least.
In 2009, the labs set up a special project to begin looking into cloud computing and how it could best be used to help with solving scientific problems. The Department of Energy (DOE) set aside $32 million for the project, called Magellan, to run the investigation. Pete Beckman, director of the Argonne Leadership Computing Facility and leader of the ALCF Magellan team, said that "The question the Department of Energy has is pretty straightforward: what kind of science can be done on clouds, and are there specializations or customizations that we can do on the software to get more science out of clouds?"
The report found that the public cloud isn't yet competitive with the existing computing infrastructure at the labs. It says that it's hard to beat the efficiencies already built into the government laboratory computing facilities, where utilization levels are over 85% and Power Usage Effectiveness (PUE) ratings are in the range of 1.2 to 1.5.
“Many of the cost benefits from clouds result from increased consolidation and higher average utilization. Because existing DOE centers are already consolidated and typically have high average utilization, they are usually cost effective when compared with public clouds. Our analysis shows that DOE centers can range from 2-13x less expensive than typical commercial offerings. These cost factors include only the basic, standard services provided by commercial cloud computing, and do not take into consideration the additional services such as user support and training that are provided at supercomputing centers today. These services are essential for scientific users who deal with complex software stacks and dependencies and require help with optimizing their codes to achieve high performance and scalability.”
But the report goes on to say that the labs' computing facilities have much they can learn from cloud computing, particularly the cloud computing 'business model'. "Users with applications that have more dynamic or interactive needs could benefit from on-demand, self-service environments and rapid elasticity through the use of virtualization technology, and the MapReduce programming model to manage loosely coupled application runs."
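The MapReduce model the report mentions is easy to sketch. The toy example below (illustrative only, not code from the Magellan project) splits a data set into independent slices, has worker processes summarize each slice in parallel, and then merges the partial results; because the workers never communicate with one another, the runs stay loosely coupled and can be scheduled elastically.

```python
import random
from functools import reduce
from multiprocessing import Pool

def mapper(chunk):
    # Each worker independently summarizes its own slice of the data set:
    # a count of readings and of values above an arbitrary threshold.
    hits = [x for x in chunk if x > 0.9]
    return {"n": len(chunk), "hits": len(hits), "hit_sum": sum(hits)}

def reducer(a, b):
    # Merge partial results; order does not matter and workers never talk
    # to each other, which is what keeps the runs loosely coupled.
    return {k: a[k] + b[k] for k in a}

if __name__ == "__main__":
    data = [random.random() for _ in range(1_000_000)]
    chunks = [data[i::8] for i in range(8)]     # eight independent slices
    with Pool(8) as pool:
        partials = pool.map(mapper, chunks)     # "map" phase, in parallel
    total = reduce(reducer, partials)           # "reduce" phase
    print(total)
```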
The report recommends that the labs continue to run frequent comparisons of their environment against the current state of the public cloud. The public cloud is evolving at a rapid pace and it is likely that the gap between the two will begin to close. | <urn:uuid:ef7410fd-ed5f-4757-8f02-a6017ce2aba7> | CC-MAIN-2017-04 | http://formtek.com/blog/cloud-computing-scientists-interested-but-find-the-cloud-isnt-ready-for-heavy-scientific-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00555-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945049 | 630 | 2.796875 | 3 |
Providing cell phones to some 17,000,000 of the United States' poorest households without such devices could help breadwinners find new work and generate as much as $11 billion in wages, as well as boost those families' overall safety, according to a new study.
The Cell Phones Provide Significant Economic Gains for Low-Income American Households study, by author and Massachusetts Institute of Technology (MIT) Research Fellow Nicholas P. Sullivan, was released this week by the New Millennium Research Council (NMRC) think tank. It is based on the findings of two separate surveys: a random telephone poll of 1,005 U.S. residents 18 years or older, conducted by Opinion Research Corporation (ORC); and an e-mail survey of more than 110,000 prepaid TracFone mobile customers. Both surveys were conducted in the fall of 2007.
The ORC survey found that all Americans who employed cell phones to find new employment or to make additional money in 2007 made an average of $748.50 that could be directly attributed to the use of their mobile devices. Of the 45.2 million U.S. households that bring in less than $35,000 a year--of which 38 percent, or 17,176,000 do not have mobile phones--the average amount of cash earned that could be directly attributed to the use of mobile phones was more than $200 less than the overall average, at $530, according to the study. If those households without mobile phones were able to earn money at the same rate as cell-phone-owning families after obtaining a cell phone, household income would increase by an estimated $2.9 billion, Sullivan says.
Based on data from the survey of TracFone users, in which 30 percent of working participants claimed monetary gain from cell phone use in 2007, the average annual gain attributed to mobile phones was much larger, at $2,361. Using this number, households without cell phones and with yearly incomes of $35,000 or less, earning at the same rate, could make up to $11.1 billion more were they to acquire mobile phones, according to Sullivan. So potential income gains for low-income American households without cell phones would be between $2.9 billion and $11.1 billion, the study found.
"The cell phone is particularly important to blue-collar, minority, less-educated and low-income segments of Americans, even though those groups are far less likely to own cell phones," Sullivan said, in a release. The study is meant to show the benefits of stepping up state and federal Lifeline or Link-up programs, which provide affordable phones service to income-eligible consumers.
Additional noteworthy study findings include:
- The majority of every major demographic U.S. segment thinks mobile phones are "extremely important" for "emergency use" and would take a cell phone over a landline during an emergency or a crime.
- Fifty-eight percent of Americans would pick a cell phone if they had to choose between a landline and a mobile phone.
- Forty-eight percent of Americans have placed a mobile phone call or sent a text message during an emergency; 20 percent have received an emergency cell phone call or text; and 32 percent have purchased a mobile phone for a loved one in case of emergency.
- Thirty-seven percent of Americans without cell phones are older/retired; 29 percent have a high school education or less; and 30 percent are unemployed.
Army puts lightweight battery chargers into the field
Even when soldiers are deployed into places with no existing power grid, they need to have their notebooks, tablets and smart phones charged up and ready to go. But batteries are heavy, and requiring soldiers to carry their own batteries for recharging might not make the most sense.
A team of Army engineers at the U.S. Army Research, Development and Engineering Command in Aberdeen Proving Ground, Md., is on the case.
Engineers are working on chargers with both USB ports and AC plugs that connect to military standard batteries. Since these batteries are likely already going to be requisitioned at a forward base for a variety of uses, having a much lighter recharging hub that uses them makes a whole lot of sense.
The team is making models with two, four or eight USB ports -- and they’ll be able to charge as many devices simultaneously as they have ports -- in addition to a single AC power plug. Because the chargers will weigh mere ounces compared to the pounds that a battery weighs, they are more portable. An eight-port charger for smart phones weighs 2.5 ounces; a two-port charger that works with both smart phones and tablets weighs 1.8 ounces. And a battery such as the BB-2590 can recharge a smart-phone battery 37 times before it needs recharging itself.
The team has developed chargers for smart phones and tablets and is working on a 150-watt charger with an AC adapter for charging all commercially available laptops, the Army said.
Posted by Greg Crowe on Dec 03, 2012 at 9:39 AM | <urn:uuid:636a71b4-17b7-4b66-8291-c6706cbf2c68> | CC-MAIN-2017-04 | https://gcn.com/blogs/mobile/2012/12/army-lightweight-battery-chargers.aspx?admgarea=TC_Mobile | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00463-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960708 | 332 | 2.5625 | 3 |
Kaspersky Lab data shows that the number of malicious programs targeting mobile devices has more than doubled between August 2009 and December 2010. In 2010, over 65% more new threats targeting mobile devices were detected than in the previous year; and over 1,000 variants from 153 different families of mobile threats were included in Kaspersky Lab’s databases by the end of 2010.
As noted by Denis Maslennikov, a Senior Malware Analyst at Kaspersky Lab and author of Mobile Malware Evolution: An Overview, Part 4, “The list of platforms targeted by malicious programs expanded considerably in 2010.”
The growing popularity of the Android platform has inevitably drawn cybercriminals' attention: in August 2010, the first malicious program targeting Android was detected, and since then, that number has reached 15 programs from a total of 7 families. The first threats targeting Apple's iPhone OS also appeared during this last reporting period, but infected only devices that had been jailbroken in order to install third-party games and other software not manufactured by Apple. Most mobile threats continue to target the Java 2 Micro Edition (J2ME) platform, which is supported by a huge number of mobile devices. That means it is not only smartphones that are at risk of infection, but basic mobile phones as well. The second most-targeted platform is Symbian, with Python in third place.
The distribution of variants of detected threats, by platform
“The use of SMS Trojans is still the easiest and most effective means by which malicious users can earn money. The reason is relatively simple: any mobile device, be it a smartphone or a basic mobile phone, has a direct connection to its owner’s money via their mobile account. It is this ‘direct connection’ that cybercriminals actively exploit,” explains Denis Maslennikov.
From 2010 onwards, sending fee-based text messages ceased to be the sole illegal money-making scheme for virus writers developing threats targeting different platforms. Other unlawful schemes such as redirecting mobile Internet banking users to phishing sites and stealing passwords sent by banks to mobile phones were also used. Mobile threats have become more complex than ever and include the emergence of mobile bots and other remotely-controlled software. According to Denis Maslennikov “This means that attacks launched by mobile threats have reached a completely new level.”
Kaspersky Lab predicts an increase in the number of vulnerabilities found on mobile platforms, as well as an increase in the number of threats for Android and the continued use of short numbers by cybercriminals.
You can find the full text of Mobile Malware Evolution: An Overview, Part 4 on www.securelist.com/en. Kaspersky Lab gives its consent to reprint our articles as long as it is properly attributed (citation of the author, the company and the primary source of publication). This text may not be republished without the consent of the company’s Information Service. | <urn:uuid:66e5a285-46c6-4c92-a9c3-02eb39e09489> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2011/Mobile_Threats_Double_in_Number | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00271-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96008 | 612 | 2.65625 | 3 |
The Digital Accountability and Transparency Act (DATA Act) was signed into law by President Obama on May 9, 2014, after unanimously passing both The House of Representatives and The Senate. Since its founding in February 2012, the Data Transparency Coalition has made the creation and successful implementation of the DATA Act its top priority.
The DATA Act requires the federal government to standardize and publish reports and data compilations related to spending. These relate to topics including financial management, payments, budget actions, procurement, and assistance. By standardizing the way this information is published, information on federal spending will be readily available and clearer, making steps toward putting a stop to duplication, waste, and fraud.
“In the digital age, we should be able to search online to see how every grant, contract, and disbursement is spent in a more connected and transparent way through the federal government,” said Sen. Mark Warner (D-VA), a co-sponsor of the bill, after its passage.
Standardization will undoubtedly have benefits, but it is a large hurdle for federal agencies to surmount. The community of federal grant recipients has identified as many as 1,100 different data elements that could be included in standard reporting. The implementation of the bill is another hurdle. The final language requires everything the federal government spends at the appropriations account level to be published on USASpending.gov, with the exception of classified material and information that wouldn’t be revealed in response to a Freedom of Information Request.
VARs should familiarize themselves with the DATA Act to make themselves aware of opportunities and challenges that result from the new law that will regulate how the government reports its spending information. | <urn:uuid:e64eafe5-7dbe-4e03-9c27-c3361502ebd9> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/what-the-data-act-means-for-vars-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00573-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959485 | 344 | 2.6875 | 3 |
Generally, most Americans are optimistic about the impact of technology on the future, specifically in regards to its impact on healthcare. A recent Pew Research report shows that 81% of responding Americans expect that patients in need of a transplant will be able to receive lab-grown organs within the next 50 years, for example.
However, concerns over technology that is already controversial today trump the optimism overall. From the report:
- 66% think it would be a change for the worse if prospective parents could alter the DNA of their children to produce smarter, healthier, or more athletic offspring.
- 65% think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health.
- 63% think it would be a change for the worse if personal and commercial drones are given permission to fly through most U.S. airspace.
- 53% of Americans think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them. Women are especially wary of a future in which these devices are widespread.
Other interesting findings: 48% of respondents said they’d be interested in riding in a driverless car, while 50% said they would not be, which is just further evidence of the uphill battle the autonomous vehicle faces. And while most are excited about lab-grown human organs, just 20% said they’d eat meat that was grown in a lab.
The concern over wearable devices echoes the growing opposition to Google Glass, whose backlash has ranged from online mockery to physical violence and theft against those wearing the device in public. As I’ve written and as Google has recently tried to clarify, much of this concern is the result of misconceptions about the technology, particularly in regards to facial recognition. Although some Glass “explorers” have developed functional facial-recognition apps for the device, Google has adamantly refused to support the technology.
That, however, doesn’t change the fact that a YouTube video shows people wearing Glass devices that recognize people’s faces, matches them with their names, and searches their information on social networks and criminal databases. Nor does it address the potential for a Glass competitor emerging that elects to support facial recognition, which the facial-recognition app developers said they'd embrace if Google doesn't change its policy. It’s not entirely surprising to see that the majority thinks “it would be a change for the worse” to encounter more wearable devices that reveal information about other people in their surroundings.
The negativity around drones isn’t much of a surprise either, considering the privacy concerns about the devices. One small town in Colorado even permitted residents to shoot down airborne drones at will, as long as they’d obtained the proper paperwork. The vote on the ordinance has since been postponed “while a district court decides whether the ordinance is legal,” according to CNN. But the fact that it got enough support to get to a vote suggests just how people feel about drones.
Aside from the extreme drone opponents, consider the everyday people worried about safety. Just last week, a drone fell out of the sky and struck a triathlete in Australia. An ambulance crew had to treat her for a laceration and reportedly removed a piece of the drone’s propeller from her head. Compare this problem with Amazon’s vision of constant drone deliveries and you have a recipe for a country full of concerned parents.
The disdain for “lifelike” robots as caregivers was a bit surprising. I suppose I wouldn’t like a robot as the only person taking care of me, but it doesn’t sound too bad as a replacement for orderlies in some cases. We’ve all heard stories of abuse at elderly care facilities, and outside of science fiction movies I’ve never heard of a robot committing a senseless act of violence or neglect. But I don’t know a whole lot about the subject, and I’m sure it’ll have to be sorted out.
One issue I was surprised not to see was concern over the impact of robots and drones on jobs for humans. A 2013 Oxford study (PDF) estimated that as many as 47% of human jobs in the U.S. can be automated, taken over by robots or drones that don't require a wage (let alone a minimum wage) and can work round-the-clock. I'd long considered myself exempt from this threat, and then a robot started writing news stories for the LA Times and a software program started writing better sports articles than humans could. Nobody is really safe from the impending robot jobs war. Lifelike robots aren't the only kind that should cause concern for the future.
5.2.4 What is SecurPC?
RSA SecurPC is a software utility that encrypts disks and files on both desktop and laptop personal computers. SecurPC extends the Windows™ File Manager or Explorer to include options for encrypting and decrypting individually selected files or files within selected folders. Each file is encrypted using RC4 (see Question 3.6.3) with a randomly generated 128-bit key (40 bits for some non-U.S. users). The random key is encrypted under the user's secret key, which is encrypted under a key derived from the user's passphrase. This allows the user's passphrase to be changed without decrypting and reencrypting all encrypted files.
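The layered key scheme described above is easier to see in code. The sketch below is illustrative only: it is not SecurPC's implementation, the passphrase-to-key derivation shown is an assumption, and RC4 itself is long obsolete and should not be used in new designs; it appears here only because it is the cipher the product used.

```python
import hashlib
import os

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 stream cipher (encryption and decryption are identical)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                            # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Layered keys: passphrase -> user key -> per-file key -> file contents.
passphrase = b"correct horse battery staple"
kek = hashlib.sha256(passphrase).digest()[:16]   # assumed KDF, 128-bit key
user_key = os.urandom(16)                        # user's long-term secret key
file_key = os.urandom(16)                        # fresh random key per file

wrapped_user_key = rc4(kek, user_key)            # stored once, per user
wrapped_file_key = rc4(user_key, file_key)       # stored alongside each file
ciphertext = rc4(file_key, b"quarterly figures")

# Changing the passphrase only re-wraps user_key; files stay untouched.
new_kek = hashlib.sha256(b"new passphrase").digest()[:16]
wrapped_user_key = rc4(new_kek, user_key)

# Decryption walks the chain back down.
assert rc4(file_key, ciphertext) == b"quarterly figures"
```

Because each file key is wrapped under the user's secret key rather than directly under the passphrase, changing the passphrase means re-wrapping one key, not re-encrypting every file.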
SecurPC provides for optional emergency access to encrypted files, based on a k-of-n threshold scheme. The user's secret key may be stored, encrypted with the RSA algorithm, under an emergency access public key. The corresponding private key is given, in shares, to any number of trustees. A designated number of these trustees must present their shares in order to decrypt the encrypted files.
SecurPC has been superseded by RSA Security's Keon Desktop, but some information about the product may still be found at http://www.solwaycomms.co.uk/. | <urn:uuid:58dccb85-be90-4041-b2a6-f4051b0607d1> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/securpc.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.899168 | 269 | 2.890625 | 3 |
The effect of five decades of exponential progress in silicon chips, with transistor counts doubling roughly every couple of years as Intel cofounder Gordon Moore observed in 1965, cannot be overstated. As silicon-based transistors push against the limits of physics, the death of Moore's law could pack a devastating blow to the industry and even the global economy. It's a big problem that has chip makers, like IBM, Intel and others, scrambling for a workaround. One of the most promising strategies for extending Moore's law involves using carbon nanotube-based transistors.
Currently, Intel makes most of its CPUs on a 22nm manufacturing process, and its smallest silicon transistor measures 14 nanometers. The semiconductor industry group, ITRS, anticipates that the five-nanometer “node” will debut in 2019. It’s a point that may very well spell the death of silicon from a practical standpoint. That’s the opinion of Wilfried Haensch, who heads up IBM’s nanotube project at the T.J. Watson research center in Yorktown Heights, New York.
“That’s where silicon scaling runs out of steam, and there really is nothing else,” says Haensch in an article on MIT’s Technology Review.
When this day comes, IBM wants to have its carbon nanotube-based processors ready to roll out. It’s a plan that’s been many years in the making.
IBM’s history with carbon nanotube transistors dates back to 1998, when company researchers showed that it was a viable approach by building one of the first working prototypes. Now IBM is working to bring the technology to commercialization.
According to simulations carried out at T.J. Watson research center, the design that IBMers are implementing will be five times faster than silicon-based microprocessors using the same amount of power. The technology, while very real, is still in the design stage, however, and there are no guarantees it will pan out.
IBM obviously has a lot of investment sunk into the silicon-based manufacturing process so naturally the company is focusing on building a carbon-based transistor using similar design and manufacturing methods. The research group recently made chips with 10,000 nanotube transistors, using six-packs of nanotubes, each 1.4 nanometers wide and 30 nanometers long. The ends of the tubes make contact with electrodes which supply current, while a third electrode runs underneath and acts as a switch.
At this stage of the design, the researchers cannot place the nanotubes close enough together, because existing chip-making technology doesn't operate at that scale. They are working on a chemical self-assembly approach in which helper compounds coax the tubes into position; the compounds would then be removed, leaving the nanotubes in the proper configuration, ready for the electrodes and other circuitry to be added.
A lot is riding on the research. If the nanotube transistors are not ready in time to meet the post-silicon demand, they may miss their market opportunity, according to IBM's James Hannon, head of the company's molecular assemblies and devices group. But there aren't a lot of other options out there. Possibilities like spintronics exist, but they're less mature, and don't have the advantage of behaving like silicon transistors, so they wouldn't be compatible with existing semiconductor manufacturing techniques.
BEAVERTON, OR--(Marketwire - Dec 5, 2012) - The amounts of sustained noise people are subjected to in everyday life have reached unsafe levels, according to a new report authored by leading sound experts and published today. Building in Sound, developed by Biamp Systems in collaboration with acoustics expert and TED speaker Julian Treasure, reports that everyday noise levels regularly exceed World Health Organization's (WHO) recommended levels. The study draws clear links between excessive noise and poor acoustics and ill-health, distraction and loss of productivity, even disruption to educational development.
Drawing on a variety of academic, government and industry body sources, the paper has identified the economic and social impacts noise can have on everyday life -- whether in a city, at work, in a classroom or hospital.
Examples of the sort of noise levels urban populations are regularly exposed to include:
- An air conditioning unit puts out sounds of 55 decibels. At this level, sleep is impaired and the risk of heart disease increases. Yet an average busy office has been recorded at 65 decibels.
- Street traffic has been recorded at 70 decibels. Regular unprotected exposure to the same level of noise can lead to permanent hearing loss.
- The average noise of a motorway is around 85 decibels, the same point at which US Federal Law mandates hearing protection for prolonged exposure.
The study also looks at much-needed solutions to the issues -- given that road traffic noise is estimated to cost between 30 and 46 billion euros a year ($39 billion to $60 billion USD), or 0.4% of GDP, in the European Union [1]. It calls for an integrated approach to acoustic design that incorporates cutting-edge sound technology with a more thoughtful approach to architectural design and construction. Properly executed, managing sound can lead to higher employee productivity and job satisfaction, lower crime rates in urban environments, and increased sales in business.
"Noise is a major threat to our health and productivity -- but until now we have been largely unconscious of its effects because of our obsession with how things look," says Julian Treasure, chairman of The Sound Agency. "We need to start designing with our ears, creating buildings and public spaces that sound as good as they look. If we do that, we can transform the productivity and wellbeing of office workers, patients in hospitals and children in schools, among many others."
"This isn't a call for silence, but an appeal to start considering the effects poorly managed sound can have," says Graeme Harrison, vice president of marketing at Biamp Systems. "The right sound and acoustics can transform education, healthcare and work, but we have to address the problem now because it's only going to become more difficult in the future. We have the technology and expertise to manage the acoustics of new and existing environments, but now's the time to act and build in sound."
The full report and infographic are available for download here.
About Biamp Systems
Biamp Systems is a leading provider of innovative, networked media systems that power the world's most sophisticated audio/video installations. The company is recognized worldwide for delivering high-quality products and backing each product with a commitment to exceptional customer service. Industry collaboration and education lie at the heart of Biamp's philosophy. The company is a founding member of the AVnu Alliance, the industry body dedicated to developing standards for professional-quality networked audio and video systems, and it was the first US manufacturer to certify a networked audio solution as EN 54-16 compliant.
The award-winning Biamp product suite includes the Tesira® media system for digital audio networking, Audia® Digital Audio Platform, Nexia® digital signal processors, Sona™ AEC algorithm and Vocia® Networked Public Address and Voice Evacuation System. Each has its own specific feature set that can be customized and integrated in a wide range of applications, including corporate boardrooms, conference centers, performing arts venues, courtrooms, hospitals, transportation hubs, campuses and multi-building facilities.
Founded in 1976, Biamp is headquartered in Beaverton, Oregon, USA, with additional engineering operations in Brisbane, Australia. For more information on Biamp, please visit www.biamp.com.
About Julian Treasure
Julian Treasure is chairman of The Sound Agency, a UK-based consultancy that helps clients achieve better results by optimising the sound they make in every aspect of business. He is also the author of the book Sound Business, the first map of the exciting new territory of applied sound for business. Mr. Treasure has been widely featured in the world's media and conferences, including TED. His four TED talks have been viewed an estimated four million times. His latest talk is on why architects need to use their ears.
[1] SILENCE - Recommendations: Practitioner Handbook for Local Noise Action Plans, European Commission Sixth Framework Programme
In 1956, now nearly sixty years ago, IBM introduced the first iteration of the piece of technology we know as the hard disk drive. The 305 RAMAC system came equipped with fifty 24-inch platters and had a total capacity of 5MB. Using only a single read/write assembly sporting two heads in order to access each platter, the 305 RAMAC had an access time of nearly one second. It was also about the size of two household refrigerators, side by side.
1961 saw the advent of air bearings, or read/write heads that fly on a cushion of air caused by the spinning of the platters. This technology is still in use today, though much refined. Bryant Computer Products also produced their 4000 series drives in 1961. Upping the ante to nearly 205MB, these drives were physically massive but afforded greater storage with access times of around 50-200 milliseconds.
Just a year later, IBM launched the first removable hard drive called the IBM 1311. Each removable disk pack held six disks, weighed ten pounds, and had a capacity of about two million characters. In 1962, these disks only rotated at 1,500 revolutions-per-minute with one head per disk. The average contemporary hard drive rotates at around 7,200 rpm’s. They come in different speeds of course, but 7,200 is the most common.
IBM introduced the first wound-coil ferrite read/write head.
Memorex came onto the scene in 1968 with their Memorex 630, a hard drive compatible with IBM systems. This is very important because it’s the first time there was real competition within IBM’s relative dominance of the industry. In order to play the game, Memorex had to start by playing by IBM’s rules.
General Digital Corporation was founded in 1970 in the sunny state of California. Renamed Western Digital in 1971, they are one of the top hard drive manufacturers in the world today and currently maintain a data recovery partnership with Gillware.
IBM released the first ‘Winchester’ HDD in 1973. Winchester drives came standard with lubricated platters, low-mass heads, and all housed within a sealed assembly. The fundamentals of this design were standard until about 2011. The project lead originally named it after the Winchester 30-30 rifle because it was supposed to come with two 30MB spindles, though this was not the final design.
In 1977, the first RAID assembly patent was filed. The patent described a system that we would today refer to as RAID 4. The first RAID system would not be physically assembled until the late 1980s.
Seagate Technology was founded. 1979 also saw the release of IBM’s 3370 system, the first drive to use thin-film heads.
Seagate released the ST-506, the first 5.25 inch HDD for use in microcomputers. IBM also released the first 1 Gigabyte capacity HDD in 1980, though it was about 550 pounds and cost $40,000.
In 1981, Shugart Associates announced the SASI interface, or Shugart Associates Systems Interface. This would be the predecessor to SCSI. Fun fact: Alan Shugart is also responsible for the creation of Seagate Technologies.
Western Digital announced the first single-chip Winchester hard drive controller.
In 1983, Rodime launched the first 3.5 inch hard drive, complete with two platters and a capacity of 10MB.
Control Data, Compaq, and Western Digital came together in 1985 to create the first 40-pin IDE interface. At the time, it stood for Intelligent Drive Electronics, though it is now referred to as Integrated Drive Electronics. Imprimis also released the first integrated hard drive controller in 1985.
1986 saw the standardization of SCSI, or Small Computer System Interface. Many consumers are familiar with this interface through the use of 15,000 rpm hard drives.
In 1988, Conner produced the first 1-inch high, 3.5-inch hard drive, a common form factor today. Western Digital also acquired Tandon Corporation in 1988, marking their entry into the IDE hard drive market.
Seagate produced the first shock-sensing hard drive. Due to the fragility of hard drives, this was a very useful invention. Seagate also brought the first 7200 rpm hard drive to market in 1992, their 2.1GB Barracuda.
In 1994, Western Digital launched Enhanced IDE, allowing for greater than 528MB throughput.
Seagate acquired Conner Peripherals in 1996. IBM also officially produced a billion bits per square inch on a hard drive platter.
IBM produced their 16.8GB "Deskstar 16GP Titan," the first drive to use Giant Magnetoresistance (GMR) heads. Seagate also released the first hard drive using fluid bearings, a technology still in use today.
IBM created the Microdrive, a 1-inch 170MB or 340MB HDD that has long since been rendered obsolete by flash memory.
Maxtor acquired Quantum, making them the largest hard drive manufacturer in the world. Seagate also launched the first 15,000 rpm hard drive. In just six short years, Maxtor would be completely bought out by Seagate.
The Serial ATA, or SATA Interface was released in 2003. The original model allowed up to 1.5Gb/sec data transfer. The current model, SATA 3, allows for up to 6Gb/sec data transfer. IBM also sold their hard drive division to Hitachi in 2003, marking their exit from the HDD development industry.
In 2005, the first 500GB HDD was unveiled by Hitachi GST. SATA 2 was also standardized, allowing for up to 3Gb/sec data transfer.
In 2006, Seagate acquired Maxtor and produced the first 750GB HDD.
In 2007, we see the first 1 TB hard drive, once again by Hitachi GST. This is when storage capacities really start to take off.
First 1.5TB hard drive, Seagate once again taking the lead.
First 2TB hard drive, Western Digital.
First 3TB hard drive, this time a collaboration between Seagate and Western Digital. The advanced format was also developed, allowing for 4,096 bytes per block rather than the standard 512 bytes per block.
First 4TB hard drive by Seagate. Flooding in Thailand also caused many hard drive manufacturing plants to close, causing HDD prices to double globally. This price increase also caused many consumers to purchase solid-state drives (SSDs) in lieu of HDDs.
After being acquired by Western Digital, HGST announced the first helium-filled HDDs, causing running temps to be much cooler and an increase from five to seven platters in a 3.5-inch form factor. TDK also managed to fit 2TB on a single platter. Finally in 2012, due to a decision by the U.S. Federal Trade Commission requiring Western Digital and HGST to give assets and intellectual property to Toshiba, Toshiba re-entered the 3.5-inch HDD market.
Seagate announced 5TB HDDs with overlapping data technology while HGST announced a 6TB helium drive for enterprise applications.
Seagate released the first 6TB and 8TB non-helium drives, which are more cost-effective than their counterparts. HGST (now a WD subsidiary) also began work on a 10TB helium HDD.
This year, HGST officially shipped the world’s first 10TB HDD.
In recent history, every year has yielded some improvement to modern hard drives. Whether that be a new interface, greater storage capacity, or a completely innovative new technology, it is exciting to imagine what we will see next year.
If I have left out anything important, please leave a comment below describing the event and when it occurred. Also, don’t be afraid to check out our main website here. Thank you! | <urn:uuid:7e733152-7640-41fa-954e-85aa4f3d6622> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery/a-brief-history-of-hard-drives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00124-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951388 | 1,647 | 3.3125 | 3 |
Understanding Impulse Noise
One of the most common issues related to impulse noise is bad IPTV performance, such as slower frame rates or frame loss, which can cause the image to freeze or introduce pixelization.
Impulse noise is a type of electromagnetic interference (EMI) that typically emanates from power transmission, radio and TV, electronics, and even cell phones.
The latest high-speed DSL technologies (such as VDSL2) typically operate at critical SNR margins using a very broad part of the frequency spectrum, and are more susceptible to impulse noises than legacy voice or ADSL.
Impulse noise, like other types of noise, reduces the SNR margin—which is critical to DSL performance (and services such as IPTV and VoIP)—and must be characterized, and its source identified, in order to mitigate it.
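To see why the margin matters, consider the standard bit-loading approximation for DMT-based DSL, in which each tone carries roughly b = log2(1 + SNR/Γ) bits, where Γ is the SNR gap plus the configured noise margin. The figures below are illustrative assumptions, not values from this article, but they show how quickly lost SNR translates into lost capacity.

```python
import math

def bits_per_tone(snr_db, gap_db=9.75, margin_db=6.0):
    """Approximate DMT bit loading: b = log2(1 + SNR / (gap * margin))."""
    effective_db = snr_db - gap_db - margin_db   # divide out gap and margin in dB
    return max(0.0, math.log2(1 + 10 ** (effective_db / 10)))

# The same tone before and after impulse noise eats 10 dB of its SNR:
for snr in (40, 30):
    print(f"{snr} dB SNR -> {bits_per_tone(snr):.2f} bits per tone")
```

Dropping 10 dB of SNR costs roughly three bits on that tone; multiplied across the thousands of tones a VDSL2 line uses, that is a substantial share of the line's rate, which is why the source of the noise must be found and mitigated.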
However, noise emitters—particularly those from non-continuous sources—are very difficult to detect and isolate because they are intermittent, very fast, and different from typical telecom signals.
Therefore, specialized tools designed to detect and characterize impulse noise should be used earlier in the troubleshooting process to understand impulse noise and make a plan to mitigate it.
Every telecom technician who installs and maintains FTTN/B networks knows this: one of the most difficult steps while troubleshooting is to pinpoint the source, type and impact of the noise affecting the performance and capacity of the xDSL line.
Seeing the Problem
- Impulse noise is intermittent by nature, and much harder to detect and analyze than traditional interference.
- Its fast burst nature defies traditional troubleshooting methods.
- Nowadays, high-end bench-type oscilloscopes can help see, capture and analyze impulse noises during field tests.
- Bursty, high-frequency impulses cause packet and pixel loss (video and audio) and degrade or freeze services like IPTV or over-the-top video (e.g., Netflix).
- Repetitive electrical impulse noise (REIN), prolonged electrical impulse noise (PEIN), short impulse noise event (SHINE) and electromagnetic interference (EMI) all affect DSL services in their own way. The sources of these various types of noise are also different. Understanding impulse duration, repetition and frequency helps to identify the source (a rough classification sketch follows this list).
- Time of day: knowing when each type of impulse noise occurs is key to targeting the source.
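As a rough illustration of the classification step referred to above, the sketch below groups threshold-crossing samples from a captured noise trace into impulse events and then labels them by duration and repetition rate. The thresholds are arbitrary teaching values, not figures from any standard or test set, but they capture the idea: long isolated bursts look SHINE-like, while short bursts repeating at a steady mains-related rate look REIN-like.

```python
def find_impulses(samples, sample_rate_hz, threshold):
    """Group consecutive above-threshold samples into (start_s, duration_s) events."""
    events, start = [], None
    for n, v in enumerate(samples):
        if abs(v) >= threshold and start is None:
            start = n
        elif abs(v) < threshold and start is not None:
            events.append((start / sample_rate_hz, (n - start) / sample_rate_hz))
            start = None
    if start is not None:
        events.append((start / sample_rate_hz, (len(samples) - start) / sample_rate_hz))
    return events

def classify(events):
    """Rough labels based on burst duration and repetition (illustrative thresholds)."""
    if not events:
        return "no impulses detected"
    durations = [d for _, d in events]
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    if max(durations) > 0.01:                   # a long isolated burst (> 10 ms)
        return "SHINE-like: long, isolated burst"
    if gaps and max(gaps) - min(gaps) < 0.001:  # short bursts, evenly spaced
        rate_hz = 1.0 / (sum(gaps) / len(gaps))
        return f"REIN-like: ~{rate_hz:.0f} Hz repetition (mains-related if near 100/120 Hz)"
    return "PEIN-like or irregular impulse activity"
```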
The Test Approach
Signs of impulse noise degradation
- Bad voice quality
- Pixelization on IPTV
- DSL not synchronizing
The outcome: low QoE
- Validate the full range of triple-play services
- Analyze low QoS on the DSL line
- Use of a powerful hybrid field-testing tool is required
Identify the type of noise
- Narrow-band noise
- Wideband noise
- Disruptive impulse noise
Capture and visualize the noise
- Simultaneous time and frequency domain analysis
- Recognize the signature and the source of the noise
- Also works in the case of complex EMI, REIN or SHINE issues.
Impulse Noise Scope
Analyze the noise duration and disruption
- See the variations of the noise over time
- See the distribution of fast and slow impulses over time
Impulse Noise Duration Disruption (IDD) | <urn:uuid:5389ff23-0aa6-436f-92d0-cb9314d25903> | CC-MAIN-2017-04 | http://exfo.com/solutions/fttx-access-networks/bu4-fttn-networks/understanding-impulse-noise | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00454-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.887304 | 681 | 2.75 | 3 |
An international team of scientists today said the third-largest near-Earth object, believed for 30 years to be an asteroid, is actually a comet.
Using the Spitzer Space Telescope, operated by NASA's Jet Propulsion Laboratory, the team -- led by Michael Mommert of Northern Arizona University and Joshua Emery, assistant professor of Earth and planetary sciences at the University of Tennessee -- looked at images of the rocky object known as 3552 Don Quixote taken in 2009, when its orbit brought it closest to the Sun, and found it had a coma and a faint tail.
About 5% of near-Earth objects are thought to be "dead" comets that have shed all the water and carbon dioxide ice that gives them their coma -- the cloud surrounding a comet's nucleus -- and tail. The team found that Don Quixote is not one of them: it is, in fact, an active comet, and thus likely contains water ice and not just rock, the team concluded.
Emery said the researchers also reexamined images from 2004, when it was at its farthest distance from the Sun and found that the surface is composed of silicate dust, which is similar to comet dust.
He also determined that Don Quixote did not have a coma or tail at this distance, which is common for comets because they need the Sun's radiation to form the coma and the Sun's charged particles to form the tail. The researchers also confirmed Don Quixote's size and the low, comet-like reflectivity of its surface.
"Don Quixote has always been recognized as an oddball," said Emery in a statement. "Its orbit brings it close to Earth, but also takes it way out past Jupiter. Such a vast orbit is similar to a comet's, not an asteroid's, which tend to be more circular -- so people thought it was one that had shed all its ice deposits."
What all of this means is that carbon dioxide and water ice might be present within other near-Earth asteroids, as well. It also may have implications for the origins of water on Earth as comets may be the source of at least some of it, and the amount on Don Quixote represents about 100 billion tons of water -- roughly the same amount that can be found in Lake Tahoe, according to Emery.
The article below is excerpted from the book Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information (Morgan Kaufmann, 2013) with permission from the publisher. The book covers methods that permit data to be shared and integrated among different big data resources.
Imagine using a restaurant locater on your smartphone. With a few taps, it lists the Italian restaurants located within a 10-block radius of your current location. The database being queried is big and complex (a map database, a collection of all the restaurants in the world, their longitudes and latitudes, their street addresses, and a set of ratings provided by patrons, updated continuously), but the data that it yields is small (e.g., five restaurants, marked on a street map with pop-ups indicating their exact address, telephone number, and ratings). Your task comes down to selecting one restaurant from among the five and dining thereat.
In this example, your data selection was drawn from a large data set, but your ultimate analysis was confined to a small data set (i.e., five restaurants meeting your search criteria). The purpose of the big data resource was to proffer the small data set. No analytic work was performed on the big data resource—just search and retrieval. The real labor of the big data resource involved collecting and organizing complex data so that the resource would be ready for your query. Along the way, the data creators had many decisions to make (e.g., should bars be counted as restaurants? What about take-away only shops? What data should be collected? How should missing data be handled? How will data be kept current?).
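A small, hypothetical sketch makes the shape of that query clear: the "big" side is a listing that could hold millions of rows, while the result handed back to the user is a handful of records filtered by cuisine, distance and rating. The field names and sample data below are invented for illustration.

```python
from math import asin, cos, radians, sin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(restaurants, here, cuisine="Italian", radius_km=1.0, top=5):
    """Reduce a big listing to a small, ranked result set."""
    lat, lon = here
    hits = [r for r in restaurants
            if r["cuisine"] == cuisine
            and km_between(lat, lon, r["lat"], r["lon"]) <= radius_km]
    return sorted(hits, key=lambda r: r["rating"], reverse=True)[:top]

# Tiny stand-in for the "big" side of the query
listing = [
    {"name": "Trattoria Roma", "cuisine": "Italian", "lat": 40.7420, "lon": -73.9890, "rating": 4.6},
    {"name": "Noodle Bar",     "cuisine": "Chinese", "lat": 40.7431, "lon": -73.9901, "rating": 4.4},
    {"name": "Piccola Cucina", "cuisine": "Italian", "lat": 40.7402, "lon": -73.9922, "rating": 4.2},
]
print(nearby(listing, here=(40.7415, -73.9900)))
```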
Big data is seldom, if ever, analyzed in toto. There is almost always a drastic filtering process that reduces big data into smaller data. This rule applies to scientific analyses. The Australian Square Kilometre Array of radio telescopes, WorldWide Telescope, CERN’s Large Hadron Collider, and the Panoramic Survey Telescope and Rapid Response System array of telescopes produce petabytes of data every day. Researchers use these raw data sources to produce much smaller data sets for analysis.
Here is an example showing how workable subsets of data are prepared from big data resources. Blazars are rare super-massive black holes that release jets of energy moving at near-light speeds. Cosmologists want to know as much as they can about these strange objects. A first step to studying blazars is to locate as many of these objects as possible. Afterward, various measurements on all of the collected blazars can be compared and their general characteristics can be determined. Blazars seem to have a gamma ray signature not present in other celestial objects. The Wide-field Infrared Survey Explorer (WISE) collected infrared data on the entire observable universe. Researchers extracted from the WISE data every celestial body associated with an infrared signature in the gamma ray range that was suggestive of blazars—about 300 objects. Further research on these 300 objects led researchers to believe that about half were blazars (about 150).
This is how big data research typically works—by constructing small data sets that can be productively analyzed. The table below identifies key differences between small and big data.
| | Small Data | Big Data |
|---|---|---|
| Goals | Answer a specific question or serve a particular goal. | There is a vague goal, but there really is no way to completely specify what the big data resource will contain and how the various types of data held in the resource will be organized, connected to other data resources, or usefully analyzed. |
| Location | Typically, small data is contained within one institution, often on one computer, sometimes in one file. | Typically spread throughout electronic space, typically parceled onto multiple Internet servers, located anywhere on earth. |
| Data Structure and Content | Ordinarily contains highly structured data. The data domain is restricted to a single discipline or subdiscipline. The data often comes in the form of uniform records in an ordered spreadsheet. | Must be capable of absorbing unstructured data (e.g., free-text documents, images, motion pictures, sound recordings, physical objects). The subject matter of the resource may cross multiple disciplines, and the individual data objects in the resource may link to data contained in other, seemingly unrelated, big data resources. |
| Data Preparation | In many cases, the data user prepares her own data, for her own purposes. | The data comes from many diverse sources, and it is prepared by many people. People who use the data are seldom the people who have prepared the data. |
| Longevity | When the data project ends, the data is kept for a limited time (seldom longer than 7 years, the traditional academic life span for research data) and then discarded. | Big data projects typically contain data that must be stored in perpetuity. Ideally, data stored in a big data resource will be absorbed into another resource when the original resource terminates. Many big data projects extend into the future and the past (e.g., legacy data), accruing data prospectively and retrospectively. |
| Measurement | Typically, the data is measured using one experimental protocol, and the data can be represented using one set of standards. | Many different types of data are delivered in many different electronic formats. Measurements, when present, may be obtained by many different protocols. Verifying the quality of big data is one of the most difficult tasks for data managers. |
| Reproducibility | Projects are typically repeatable. If there is some question about the quality of the data, reproducibility of the data, or validity of the conclusions drawn from the data, the entire project can be repeated, yielding a new data set. | Replication of a big data project is seldom feasible. In most instances, all that anyone can hope for is that bad data in a big data resource will be found and flagged as such. |
| Stakes | Project costs are limited. Laboratories and institutions can usually recover from the occasional small data failure. | Big data projects can be obscenely expensive. A failed big data effort can lead to bankruptcy, institutional collapse, mass firings, and the sudden disintegration of all the data held in the resource. Though the costs of failure can be high in terms of money, time, and labor, big data failures may have some redeeming value. Each failed effort lives on as intellectual remnants consumed by the next big data effort. |
| Introspection | Individual data points are identified by their row and column location within a spreadsheet or database table. If you know the row and column headers, you can find and specify all of the data points contained within. | Unless the big data resource is exceptionally well designed, the contents and organization of the resource can be inscrutable, even to the data managers. Complete access to data, information about the data values, and information about the organization of the data is achieved through a technique herein referred to as introspection. |
| Analysis | In most instances, all of the data contained in the data project can be analyzed together, and all at once. | With few exceptions, such as those conducted on supercomputers or in parallel on multiple computers, big data is ordinarily analyzed in incremental steps. The data are extracted, reviewed, reduced, normalized, transformed, visualized, interpreted, and reanalyzed with different methods. |
Table. General differences that can help distinguish big data and small data.
Jules Berman, Ph.D., M.D., is a free-lance author, writing extensively in his three areas of expertise: informatics, computer programming, and cancer biology. | <urn:uuid:93728a90-89ec-4121-8af9-1f9c17173d06> | CC-MAIN-2017-04 | http://data-informed.com/common-purpose-big-data-produce-small-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00171-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921234 | 1,579 | 3.5625 | 4 |
IBM today announced it is investing $100 million over the next 10 years on rolling out its Watson supercomputer system across Africa.
The initiative, dubbed "Project Lucy", will enable scientists to access Watson and other cognitive computing technologies developed by IBM. The project has been named in honour of the earliest known human ancestor fossil, which was found in east Africa.
The Watson supercomputer uses artificial intelligence to quickly analyse vast amounts of data and understand human language to the extent where it can hold sophisticated conversations. It even beat humans on the TV quiz show "Jeopardy" in February 2011.
The US firm said Watson's big data capabilities can be used to help fuel development and spur business opportunities across Africa.
According to IBM, big data technologies have a major role to play in Africa's development challenges: from understanding food price patterns, to estimating GDP and poverty numbers, to anticipating disease.
The group added that Watson will provide researchers with a powerful set of resources to help develop commercially-viable solutions in areas such as healthcare, education, water and sanitation, human mobility and agriculture.
IBM's research director for Africa, Kamal Bhattacharya, said: "In the last decade, Africa has been a tremendous growth story - yet the continent's challenges, stemming from population growth, water scarcity, disease, low agricultural yield and other factors are impediments to inclusive economic growth."
"With the ability to learn from emerging patterns and discover new correlations, Watson's cognitive capabilities hold enormous potential in Africa - helping it to achieve in the next two decades what today's developed markets have achieved over two centuries."
Big Blue has so far failed to convert Watson's intelligence into substantial revenue growth, with the system contributing just $100 million (£61 million) over the past three years.
In a bid to address this, the firm is investing $1 billion (£614 million) in the Watson Business Group, including $100 million (£61 million) to fund start-ups developing cognitive apps.
IBM will also establish a new pan-African Centre of Excellence for Data-Driven Development (CEDD), where it hopes to work with universities, development agencies and start-ups.
Prof Rahamon Bello, vice chancellor at the University of Lagos, said: "For Africa to join, and eventually leapfrog, other economies, we need comprehensive investments in science and technology that are well integrated with economic planning and aligned to the African landscape."
IBM said it is also opening new Innovation Centres in Lagos, Nigeria; Casablanca, Morocco; and Johannesburg, South Africa. These new centres aim to spur local growth and fuel an ecosystem of development and entrepreneurship around big data analytics and cloud computing.
This story, "IBM Takes Watson on a $100 Million Trip to Africa" was originally published by Techworld.com. | <urn:uuid:ae7aabc3-1119-4df5-b71c-e4c2a37fba34> | CC-MAIN-2017-04 | http://www.cio.com/article/2378912/supercomputers/ibm-takes-watson-on-a--100-million-trip-to-africa.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00475-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932886 | 582 | 2.9375 | 3 |
Military scientists are looking to ramp up research and development of a flying military vehicle that will hold up to 4 people and have the ability to launch vertically and soar when necessary.
The Defense Advanced Research Projects Agency (DARPA) will this month hold its first Proposers' Day Workshop in support of a flying car program it will begin this year known as the Transformer (TX). The goal of the TX will be to build a flying vehicle that will let military personnel avoid water, difficult terrain, and road obstructions as well as IED and ambush threats by driving and flying when necessary.
DARPA said the vehicle will need to be able to drive on prepared surface and light off-road conditions, as well as support Vertical Takeoff and Landing (VTOL) features.
The TX will also support range and speed efficiencies that will allow for missions to be performed on a single tank of fuel. DARPA said the TX will "provide the flexibility to adapt to traditional and asymmetric threats by providing the operator unimpeded movement over difficult terrain. In addition, transportation is no longer restricted to trafficable terrain that tends to make movement predictable."
DARPA said current transport systems present operational limitations where the warfighter is either anchored to the ground with a Humvee and thus vulnerable to ambush, or reliant on helicopters, which are limited in flight speed and availability. The TX will let soldiers approach targets from directions opportune to them and not the enemy, DARPA stated.
Key requirements of the research and development will include:
- Develop a robust vehicle design that maximizes military utility at a reasonable cost
- Identify and mature the critical enabling technologies necessary to vehicle development
- Build a single prototype vehicle that demonstrates the program goals through ground and flight tests
- Examine adaptive wing structures, ducted fan propulsion, lightweight composite materials, advanced flight control technology for stable transition from vertical to horizontal flight, hybrid electric drive, advanced batteries, and others.
DARPA is not only looking to get a vehicle that flies; it is also developing one that's as capable of zipping through the sky as it is underwater.
Announced last year, the agency's Submersible Aircraft research project is exploring the possibility of making an aircraft that can maneuver underwater with the goal of revolutionizing the US Department of Defense's ability to, for example, bring military personnel and equipment to coastal locations or enhance rescue operations. DARPA said that the concept being evaluated here is for a submersible aircraft, not a flying submarine. It is expected that the platform will spend the bulk of its time in the air and will only spend short periods of time submerged according to the agency.
According to DARPA: "The difficulty with developing such a craft comes from the diametrically opposed requirements that exist for an airplane and a submarine. While the primary goal for airplane designers is to try and minimize weight, a submarine must be extremely heavy in order to submerge underwater. In addition, the flow conditions and the systems designed to control a submarine and an airplane are radically different, due to the order of magnitude difference in the densities of air and water."
The FCC expects TV makers to start building TVs that can recognize VI audio.
In late August, the FCC released new rules for “video description.” This is the audio track for visually impaired viewers that carries both the main program audio and a narrative description of the onscreen action of a TV program or movie. This was required by the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA). The FCC basically reinstated the rules it had enacted in 2000, rules that were thrown out by the U.S. Court of Appeals. But circumstances have changed since 2000, and delivery approaches that seemed reasonable then might seem awkward now.
Part of the problem is all of those pesky old analog TV sets out there. When the video description rules were first adopted by the FCC, that’s pretty much all that was out there. The most they could handle was two audio tracks: the main audio channel and the Second Audio Program (SAP) channel. The SAP channel was typically used to deliver a Spanish-language audio track, at least for programs that had two audio tracks. Users knew to select SAP to listen to Spanish.
So when some analog programs started to carry a video description track, it had to be delivered to the viewer on the SAP channel. There were only two audio channels available, so the Spanish audio track was bumped off. And everyone – viewers, broadcasters, cable operators – knew to expect that.
Then digital TV came along. Digital TV in the United States uses Dolby AC-3 audio, which has the capability to carry far more than just two audio tracks. It can carry numerous languages, and for each language, it can carry both the main audio and a descriptive video track. In AC-3 jargon, the main audio is denoted as CM (complete main), and the video description track is denoted as VI (visually impaired).
But during the transition from analog to digital, broadcasters were delivering the same programming on their digital channel as on their analog channel. So they never carried more than two audio tracks, either English and Spanish or English and video description. And for their digital channel, when they carried video description, they called it Spanish, partly because viewers were trained from analog TV to select video description by selecting the Spanish audio, and partly because the digital TVs did not have a menu selection for VI.
And, as a result of this broadcaster practice, digital TV set makers never created user interfaces that allowed users to select the VI video description audio track. Virtually all of the digital TVs on the market expect to receive audio marked as CM (either English or Spanish or both) and do not expect to receive audio marked as VI. Similarly, those off-air adapter boxes that the government subsidized with coupons can receive Spanish, but not VI.
And cable set-top boxes followed the same pattern, for the same reason.
Now comes the CVAA and the FCC Order that implements that law’s requirement to reinstate the video description rules. The four commercial network affiliate broadcasters in the top 25 markets and the five largest cable programmers will have to deliver 50 hours per calendar quarter of video-described programming. That’s about four hours per week per network. And TV stations and cable operators are required to pass through any video description audio tracks supplied by the networks. But there’s the rub. If the programming comes in with three audio tracks – English CM, English VI and Spanish CM – but your customers can only receive two, which one do you bump?
So the FCC Order recognizes the problem of equipment that can only receive two channels of audio and the impact it has on broadcasters and cable operators. The FCC created an exception called “other program-related content,” and it said that VI did not have to pass through but could be bumped if there was “other program-related content” using the second audio channel.
The FCC said: “Thus, if we were to eliminate the exception for other program-related content, one of two things would likely happen. Stations and systems would replace some other program-related content with video description to comply with the pass-through requirement, potentially depriving audiences, including in many instances non-English-speaking communities who use the second audio stream to receive Spanish-language programming, of a valuable service. Alternatively, stations and systems would provide the passed-through video description on an audio stream tagged ‘VI,’ making it difficult, if not impossible, for the target audience to access it.”
The FCC expects TV makers to start building TVs that can recognize VI audio and display it as a menu choice. The FCC expects broadcasters to start delivering video description correctly labeled as VI rather than labeling it as Spanish. The FCC envisions a transition, but it has no idea when there will be enough of those new digital TVs to make this transition practical. And neither do I. | <urn:uuid:4c1e9f9c-c66c-4dc4-b2b1-a0860e954560> | CC-MAIN-2017-04 | https://www.cedmagazine.com/article/2011/11/capital-currents-new-video-description-rules | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963012 | 1,002 | 2.6875 | 3 |
Amr Ibrahim Enan is a Global Knowledge instructor who teaches and blogs from Global Knowledge Egypt.
Previously I gave you a brief overview of the differences between Fiber Channel switching and Ethernet switching. The major difference between FC and Ethernet is that FC is lossless while Ethernet is lossy. In this post, I'll explain how FC achieves lossless behavior.
First, let me ask you: what is the difference between the TCP and UDP protocols? TCP provides you with extra services that UDP doesn't, like lost frame recovery and flow control. But, you might ask, how? Before any two hosts communicate using TCP, they will first go through some sort of negotiation on parameters for those services to be possible. And that is exactly what happens in FC switching.
Whenever you connect a storage device or a server to a storage switch, before they exchange any data frames, they will do some negotiations to guarantee lossless behavior. Also, when you connect an FC switch to another FC switch they will perform some sort of negotiation to guarantee lossless behavior. But the negotiations a switch performs with an end device are completely different from the negotiations it performs with another switch.
The port type needs to be configured on FC switches but not on Ethernet switches. You’ll need to tell the switch what type of device is on the other end so it knows which set of negotiations it should use (a switch-to-switch negotiation or a server/storage-to-switch negotiation).
In the above figure, you can see that when you connect a switch with a server you need to configure the switch port as F_port, and if you’re connecting a storage device to a switch you should configure it as FL_port. If you are connecting a switch to another switch, then you should configure it as an E_port. Now let’s examine the negotiation that takes place between the server and the switch.
One thing that makes storage communication easy to understand is that we only have one protocol we need to understand at FC level 4, which is the SCSI protocol. SCSI has two main operations, SCSI read and SCSI write, which are always initiated from the server to the storage.
As you can see in the figure above, before the server can send a data frame to storage both devices need to go through three negotiation phases (sketched in the toy example after this list). Two phases are done between the server and the switch:
- FLOGI (Fabric login)
- PLOGI (Port login)
The final phase is done during the SCSI process, running on both devices:
- PRLI (Process login)
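To make the ordering concrete, here is a minimal toy sketch in Python. This is my own illustration, not actual Fibre Channel code or any real FC stack: a tiny state machine that refuses to send SCSI data frames until FLOGI, PLOGI, and PRLI have completed in order.

```python
from enum import Enum, auto

class Phase(Enum):
    OFFLINE = auto()
    FLOGI_DONE = auto()   # fabric login: the port is now known to the switch
    PLOGI_DONE = auto()   # port login: a session is established with the peer port
    PRLI_DONE = auto()    # process login: the SCSI processes have agreed on parameters

class FcPort:
    """Toy model of the login sequence described above -- not a real FC implementation."""
    def __init__(self):
        self.phase = Phase.OFFLINE

    def flogi(self):
        self.phase = Phase.FLOGI_DONE

    def plogi(self):
        if self.phase != Phase.FLOGI_DONE:
            raise RuntimeError("PLOGI is only accepted after a successful FLOGI")
        self.phase = Phase.PLOGI_DONE

    def prli(self):
        if self.phase != Phase.PLOGI_DONE:
            raise RuntimeError("PRLI is only accepted after a successful PLOGI")
        self.phase = Phase.PRLI_DONE

    def send_scsi_data(self):
        if self.phase != Phase.PRLI_DONE:
            raise RuntimeError("No data frames until all three login phases complete")
        return "SCSI data frame sent"

port = FcPort()
port.flogi()
port.plogi()
port.prli()
print(port.send_scsi_data())
```

The point of the sketch is simply the ordering: skip any phase and the next step fails, which is the lossless-by-negotiation behavior the post describes.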
In the next post, we'll talk about FLOGI and PLOGI requests and the purposes they serve.
Quantum computing holds the promise of improving cyber security infrastructure. Security algorithms such as those used in online banking require basic primitive operations such as factoring numbers. To improve security, larger numbers must be used resulting in a substantial increase in computational requirements.
Duncan Steel, professor of Electrical Engineering and Computer Science, Physics, and Biophysics at the University of Michigan, is developing quantum computer technologies in the context of cyber security. “These kinds of issues are critical,” Duncan notes, “If we want more protection, we need bigger numbers.”
In traditional computing, bits (0 and 1) are the atomic level of representation. In a quantum computer, quantum bits (or qubits) are the analogous representation. A qubit takes on the value of 0 or 1, or a superposition of 0 and 1. That is, rather than being limited to exactly 0 or exactly 1, a qubit's state can be any weighted combination of the two, collapsing to a definite 0 or 1 only when it is measured.
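As a rough illustration of superposition (a hypothetical numpy sketch, not something described in the article), a single qubit can be written as a pair of amplitudes whose squared magnitudes give the probabilities of reading out 0 or 1:

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1
alpha, beta = 3/5, 4/5            # an arbitrary superposition (real amplitudes for simplicity)
state = np.array([alpha, beta])

p0, p1 = np.abs(state) ** 2       # Born rule: probabilities of measuring 0 or 1
print(p0, p1)                     # about 0.36 and 0.64

# Each measurement still yields a definite 0 or 1; repeating shows the distribution.
rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=[p0, p1]))
```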
Duncan analogizes the relationship of qubits to playing notes on violin strings. When a violinist plays a single note on the violin, a pure tone is produced. Equivalently, playing a second note produces another pure tone. If these two notes are sufficiently close on the musical scale and are played simultaneously, each note is produced, accompanied by the phase relationship between the two notes.
Instead of using wires to propagate information, Duncan and his team leverage lasers to manipulate information. Duncan notes the very delicate nature of manipulating qubits on the timescale of a trillionth of a second.
As the technology is relatively immature, Duncan doesn't anticipate the integration of quantum computers into desktop or laptop computers. Quantum computers, Duncan remarks, are built for factoring numbers and not for general applications such as video games.
The Department of Natural Resources, operating under the FIP applied title Natural Resources Canada, is the ministry of the government of Canada responsible for natural resources, energy, minerals and metals, forests, earth science, mapping and remote sensing. It was created in 1995 by amalgamating the now-defunct Departments of Energy, Mines and Resources and Forestry. Natural Resources Canada works to ensure the responsible development of Canada's natural resources, including energy, forests, minerals and metals. NRCan also uses its expertise in earth science to build and maintain an up-to-date knowledge base of Canada's landmass and resources. To promote internal collaboration, NRCan has implemented a department-wide wiki based on MediaWiki. Natural Resources Canada also collaborates with American and Mexican government scientists, along with the Commission for Environmental Cooperation, to produce the North American Environmental Atlas, which is used to depict and track environmental issues from a continental perspective. Under the Canadian constitution, responsibility for natural resources belongs to the provinces, not the federal government. However, the federal government has jurisdiction over off-shore resources, trade and commerce in natural resources, statistics, international relations, and boundaries. The current Minister of Natural Resources is Greg Rickford as of March 2014. The department is governed by the Resources and Technical Surveys Act, R.S.C., c. R-7 and the Department of Natural Resources Act, S.C. 1994, c. 41. It is "structured along business lines according to types of natural resources and areas of interest" and currently has these sectors: Canadian Forest Service; Corporate Management and Services Sector; Earth Science Sector; Energy Sector; Innovation and Energy Technology Sector; Minerals and Metals Sector; Science and Policy Integration; Public Affairs and Portfolio Management Sector; Shared Services Office; and the Geographical Names Board of Canada. (Source: Wikipedia)
Natural Resources Canada | Date: 2014-05-27
A system using hybrid Rankine cycles is provided. The system includes a first Rankine cycle system using a first working fluid, the first system producing exergy loss and residual energy from at least one of turbine extraction, turbine condensation and boiler flue gas; and a second Rankine cycle system using a second working fluid to recover the exergy loss and residual energy. The second working fluid comprises a first stream and a second stream, wherein the first stream exchanges heat with the first system via at least one first heat exchanger, and the second stream exchanges heat with the first system via the at least one first heat exchanger and at least one second heat exchanger. A turbine of the first system is configured to allow the first working fluid to exit at a sufficiently high pressure and temperature to provide heat to the second system instead of expanding to a low pressure and temperature and discharging heat to ambient using a condenser.
Natural Resources Canada | Date: 2012-10-04
The invention relates to a method and apparatus for measuring lay length of a wire rope having a number of external strands to form a rope having spiral grooves in the surface between the strands. A magnetic flux circuit is generated, part of which is formed within a region of the advancing wire rope. Variations of magnetic field around the region of the rope or variations of magnetic flux entering or leaving the rope are sensed by at least two sensors arranged around the rope. Signals from the sensors are subtractively combined to eliminate variations due to off-axis movements of the rope, and the combined signals reveal an oscillating pattern due to the undulating surface of the rope. Linking the oscillating pattern to distance along the rope reveals the lay length, which corresponds to a number of oscillations which is the same as the number of strands at the surface.
Natural Resources Canada | Date: 2013-11-12
The invention relates to a hybrid reverse flow catalytic apparatus having two reaction zones: a homogeneous reaction zone in porous ceramic and a heterogeneous reaction zone with catalyst, arranged in two different catalyst beds. A first catalytic bed located in a central region of the reactor is provided with a low activity catalyst and a second catalyst bed located in a peripheral region of the reactor is provided with a high activity catalyst. The provision of two catalyst beds containing different catalysts reduces the effect of radial temperature gradients in the reactor and improves the overall efficiency of the reactor. The invention also relates to a method of performing catalytic and thermochemical reactions in said apparatus.
Natural Resources Canada | Date: 2012-08-14
The invention relates to modifications of a non-ammoniacal thiosulfate process of leaching precious metals (e.g. gold or silver) from precious metal-containing ores. The process involves leaching the ore with an aqueous lixiviant containing a soluble thiosulfate other than ammonium thiosulfate, a copper compound and an organic compound that serves as a copper ligand (i.e. a ligand-forming compound). Four modifications of this process are effective for increasing the amount of precious metal that can be extracted, reducing the consumption of materials, or for improving the rate of extraction. These four processes, which may be used singly or in any combination, include (a) additions of soluble lead (e.g. as lead nitrate), (b) additions of thiourea, (c) increases in dissolved oxygen, and (d) increases of temperature at ambient pressure. This avoids the use of environmentally harmful chemicals and allows for extraction from a variety of ores, e.g., containing substantial amounts of sulfides and/or quartz.
Natural Resources Canada | Date: 2014-04-17
Disclosed is a method for improving a heavy hydrocarbon, such as mined bitumen, to a lighter more fluid product and, more specifically, to a hydrocarbon product that is refinery-ready and that meets pipeline transport criteria without requiring the addition of diluent. The invention is suitable for enhancing recovery from mined Canadian bitumen, but has general application for processing any heavy hydrocarbon, converting the heavy hydrocarbon to a product that is more suitable for pipeline transport. The invention is directed to a process for converting a heavy hydrocarbon stream into a pipelineable product, said process comprising: (a) using a froth treatment process to separate bitumen present in the heavy hydrocarbon stream from water creating a solvent/bitumen stream and a water-rich stream; (b) extracting the solvent/bitumen stream to generate multiple product streams comprising: i) a bitumen bottoms stream; ii) a virgin heavy vacuum gas oil stream; iii) a light virgin vacuum gasoil stream; and iv) a light virgin atmospheric gas oil stream; (c) converting, in a conversion unit, a portion of the heavy vacuum gas oil stream and/or bitumen bottoms obtained from step (b) to produce a stream of lighter hydrocarbons; and (d) blending a portion or all of the virgin heavy vacuum gas oil stream, the light virgin vacuum gasoil stream, the light virgin atmospheric gas oil stream from step (b) and the stream of lighter hydrocarbons produced in step (c) to create a pipelineable product. | <urn:uuid:770aa8d7-fc85-4048-b2cf-0fc67cd97d7e> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/natural-resources-canada-18254/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00189-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916401 | 1,447 | 2.828125 | 3 |
How Does VoIP work?
VoIP, or Voice Over Internet Protocol allows you to take a standard analog telephone signal and turn it into a digital signal that is transmitted over the Internet. By plugging any standard telephone into a special device called an ATA (analog telephone adapter) you can use your internet connection to make telephone calls. VoIP telephone calls can also be made directly from your computer using software and a headset. The ability to be able to make and receive phone calls from a wireless "hot spot" in locations such as airports, cafes and hotels is of great benefit to people who are always on the move.
VoIP is an amazing new technology that has the potential to reshape traditional telephone networks or even replace them. Many telephone carriers are now selling VoIP services such as our Hosted PBX, also called Cloud PBX. There are currently a number of different ways that you can use VoIP to make and receive telephone calls:
ATA (Analog Telephone Adapter)
An ATA is a simple device which lets you connect any standard telephone or fax machine so it can use VoIP through your internet connection. The ATA converts the analog signal from your telephone into digital data that can be transmitted over the internet. Providers usually bundle this device with their service so that you can start making calls right away.
IP Phones are special telephones which look and work like normal phones but connect directly to your internet connection without the use of an ATA device (to convert analog signals to digital signals). An IP Phone plugs directly into your internet router and comes in both wireless and corded models. Business VoIP users generally opt for IP Phones because they have special buttons which allow calls to be transferred or put on hold and support multiple lines.
Using software installed on your computer and a headset you can make and receive VoIP telephone calls right on your desktop or laptop. You can even place callers on hold, transfer them to another extension, or answer multiple telephone lines. Some software also allows you to host conference calls. | <urn:uuid:7550770f-f179-4282-ba03-7b55efb2f2d1> | CC-MAIN-2017-04 | http://ca.jive.com/resources/how-voip-works | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930085 | 414 | 3.046875 | 3 |
Minimizing the Risk of Business
By Larry Dignan | Posted 2005-08-31
The UPS Brown Voyager? It could happen if private companies take over low-earth space travel and free up NASA to shoot for the stars.
But tourism can only go so far, Meyers says. Space Island Group plans to use the insides of the current shuttle's external fuel tanks as facilities that will be leased for research, tourism, and even sponsored launches and sporting events. Using solar space sails developed by NASA, the company hopes to build power stations that would beam energy to Earth via a weak microwave signal. Meyers is pitching officials in China, India and California on space power to fund his shuttle development.
The building block for all of those business cases, however, is cheap launches, Edwards says. Forecast International estimates that a rocket launch can cost from $25 million to $150 million, depending on payload.
What could make a better business case? Less expensive rockets. Space Exploration Technologies, based in El Segundo, Calif., is developing a rocket that would launch for $6 million. "If that works, it would open space up quite a bit," Edwards says.
Solution: Communicate the risks of space exploration and embrace them.
Perhaps the most daunting challenge facing NASA and commercial providers is the basic risk of flying into space. Will travel outside the atmosphere ever be completely safe? Should it be? How many deaths can be allowed?
With little margin for error, techies in a space operation have to unfailingly put the right information and analysis in front of the people who can act on it.
Changing the Fate of Those in Space
Elon Musk, CEO of Space Exploration, says the risks need to be communicated clearly, and the public then will know enough to accept or reject space travel.
"NASA overstated the safety [aspect], and now space travel is held up to an unreasonable standard," Musk says. "Space travel is dangerous and as long as we accept that risk, we shouldn't be overly concerned about it."
Meyers notes that handing off low-orbit space to the private sector would allow more risk-taking—and possibly more technology breakthroughs. As for selling risk, Meyers looks to NASCAR for inspiration: "With NASCAR, you know it's dangerous and you know something can go wrong. And a lot of people are attracted to that." | <urn:uuid:06428fb9-9339-4098-a6be-d94e28d16821> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Business-Intelligence/Should-NASA-Open-LowOrbit-Space-to-Business/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00245-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95458 | 491 | 2.515625 | 3 |
The Stuxnet worm has highlighted that direct-attacks on critical infrastructure are possible and not just movie plotlines, say researchers.
The real-world implications of Stuxnet are beyond any threat the world has seen in the past, according to a report by the Symantec Security Response team.
The goal of Stuxnet appears to be to reprogram industrial control systems (ICS) by modifying code on programmable logic controllers (PLCs) to make them work in a manner the attacker intended and to hide those changes from the operator of the equipment, the report said.
To achieve this goal, researchers said the creators amassed a vast array of components to increase their chances of success.
These include zero-day exploits, a Windows rootkit, the first ever PLC rootkit, anti-virus evasion techniques, complex process injection and hooking code, network infection routines, peer-to-peer updates, and a command and control interface.
Stuxnet is the first piece of malicious code to exploit at least four zero-day vulnerabilities, use two digital certificates, inject code into industrial control systems and hide the code from the operator.
Stuxnet is of such complexity, requiring significant resources to develop, that few attackers will be capable of producing a similar threat, the report said.
For these reasons, Symantec's researchers do not expect masses of threats of similar sophistication to suddenly appear.
But they warn that while Stuxnet may be a once-in-a-decade occurrence, it could also usher in a new generation of malicious code attacks on real-world infrastructure, overshadowing the vast majority of current attacks affecting virtual or individual assets. | <urn:uuid:28584942-8ccf-4a13-aaf7-0290e7919ea1> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280093951/Stuxnet-proves-cyber-attacks-on-critical-infrastructure-are-possible-say-researchers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917716 | 371 | 2.765625 | 3 |
(MBR, OSboot, COM/EXE, exe, com)
The type of a virus is determined by the different kinds of targets the virus infects.
For example, possible targets are *.com files, *.exe files, boot sectors, and master boot records.
Some viruses infect both executables and master boot records. These are called multipartite viruses.
Overwriting viruses replace the code of the victim with the virus code, thus destroying the victim irreparably.
Companion type viruses create a same-named executable with a .com extension, in order to utilize the feature of DOS which dictates that if there are same-named executables in the same directory, the one with the .com extension is executed first.
(NON-) Resident Virus
A resident virus stays in memory after execution. Resident viruses are thus able to spread without the user executing them once they are in memory, whereas non-resident viruses are able to spread only when the user executes an infected program.
A stealth virus hides its presence so the user cannot detect the decrease in memory or the increase in infected files.
Green M.L. (University of Illinois at Urbana-Champaign), Monick K. (University of Illinois at Urbana-Champaign), Manjerovic M.B. (University of Illinois at Urbana-Champaign; Davee Center for Epidemiology and Endocrinology and the Urban Wildlife Institute), and 2 more authors.
Journal of Ethology | Year: 2015
Little is known about the behaviors river otters (Lontra canadensis) commonly exhibit when visiting latrine sites. By use of video data we constructed an ethogram to describe and quantify latrine behaviors. The most common behaviors were standing (20.5 %) and sniffing (18.6 %), lending support to the hypothesis that latrines are used for olfactory communication. Surprisingly, defecation was rarely observed (1.4 %); body rubbing occurred more than defecation (10.5 %). It is possible that, in addition to feces, urine, and anal jelly, river otters use body rubbing to scent mark. To monitor site use, we determined seasonal, monthly, and daily visitation rates and calculated visit duration. River otters most frequently visited the latrine in the winter (December and January) but the longest visits occurred in the fall. Very few visits were recorded during the summer. Latrines were most often visited at night, but nocturnal and diurnal visit durations were not different. River otters were more likely to visit the latrine and engage in a specific behavior rather than travel straight through the site. Our data supported the idea that river otters are primarily solitary mammals, with most latrine visits by single otters. However, we documented groups of up to 4 individuals using the area, and group visits lasted longer than solitary visits. Therefore, whether visits are solitary or social, latrine sites are likely to act as communication stations to transmit information between individuals. © 2015 The Author(s) Source | <urn:uuid:ed7837b0-9b13-4738-a347-eda785d3964c> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/davee-center-for-epidemiology-and-endocrinology-and-the-urban-wildlife-institute-116270/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953539 | 410 | 3.21875 | 3 |
A national effort to elicit "grand" science and technology ideas has taught a nonprofit White House partner some lessons in crowdsourcing -- namely streamline the message.
"The prompts used to drive a crowdsourcing initiative are perhaps the most important part of the effort," an Expert Labs blog post read. "If there is an area for improvement in our efforts, this is clearly an important one to focus on."
The finding could prove useful to other government agencies that are thinking about crowdsourcing questions to the general population.
The White House Office of Science and Technology Policy (OSTP) and the National Economic Council released a request for information in February to collect public input regarding the "grand challenges" identified in President Barack Obama's innovation strategy, along with other ideas and potential partners. The mid-April deadline passed and thousands of replies were garnered.
The OSTP partnered with Expert Labs -- a nonprofit project of the American Association for the Advancement of Science -- to gather responses, which were mainly prompted by Twitter and Facebook messages. The ThinkTank app was used to capture and organize these responses, which are available in a Google document. It was the first big crowdsourcing experiment Expert Labs took on.
Some of those prompts included, "What's the next moon shot or the next human genome sequencing?" "What are the ambitious goals that are going to generate jobs, improve security, drive innovation or inspire students to learn?" and "How do we get our best minds to work on solving these challenges?"
That said, the White House prompted its social network connections to respond in several different ways, the blog post said. "Messages varied in tone, timing and in how much background expository information was provided," it said. "Clearly the Grand Challenges initiative itself was an ambitious one."
Specifically the prompts broadcast via Twitter and Facebook addressed a large population of users -- who aren't necessarily science and technology experts -- with a fairly complicated question. And even those in the science and tech communities, who may have had responses, had to adapt to the idea of sharing such ideas using what have become common social networking tools, according to the blog post.
Perhaps even more significant is the social network environment. "... The terse wording and distracted attention environment of social networks can amplify ambiguities in a prompt," the blog said.
An example of this issue related to the Grand Challenges project is the initial prompt from the White House's Twitter feed -- "The next Apollo program or Human Genome Project?" Because of this, a significant number of responses took the question very literally and answered with one of the programs prioritized over the other.
While the refining the message may be the greatest lesson learned, there are a few other key findings and highlights of the crowdsourcing effort:
- More than 2,000 replies to the request for information were received via Twitter and Facebook within the approximately 48 hours that those mediums were available (Expert Labs announced it would use those tools about two days before the April 15 deadline).
- The initial prompts from the White House's social network accounts (such as @whitehouse on Twitter) were forwarded by enough people to nearly double the number of people who saw the prompts.
- Off-topic and on-topic responses can coexist on the network. One of the biggest fears of large-scale crowdsourcing is that "noise" will crowd out | <urn:uuid:224e488e-8db7-4de1-81c8-ec3d0f415a15> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/National-Crowdsourcing-Effort-Proves-the-Value.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00328-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959328 | 683 | 2.828125 | 3 |
Engineering & Simulation
A multinational manufacturing corporation moves simulation and design data between engineer workstations and high performance computing centers. To make the best use of idle resources, engineers often need to transfer gigabytes of data around the globe.
Traditional FTP proved to have poor performance over the Wide Area Network. It took over 90 minutes to transfer a four gigabyte file each way, but only if the connection held. Dropped connections could make it take longer. As a result engineers could only run one simulation per day.
WAN Acceleration Appliances offered little throughput advantage because the simulation data is already compressed and each run is unique. Some appliances could improve the reliability, but the expensive hardware was impractical to deploy globally and didn't address the needs of mobile users.
With DEI's ExpeDat software, engineers found they could now move a four gigabyte file in less than 15 minutes each way with no drop-outs. That cuts two-and-a-half hours of waiting out of each run.
It is now possible for each engineer to perform multiple runs per day. Adding to this productivity gain is a substantial cost savings. With greater access to remote computing centers, the company is able to consolidate and centralize their computing resources.
Because ExpeDat licenses are just a one-time cost per server, with no charge for client seats, the company's software costs are low and fixed, while infrastructure costs are greatly reduced. | <urn:uuid:42eac729-c955-460d-b52d-60dd3966b5f9> | CC-MAIN-2017-04 | http://www.dataexpedition.com/solutions/engineering.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00476-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953918 | 292 | 2.65625 | 3 |
Imagine floating on the empty blue Pacific Ocean, nothing but water in every direction, sunrise to sunset. Yet under the surface swim thousands of great white sharks.
That's not a bad dream - it's actually what happens around this time of year several thousand miles off the coast of Baja California, about halfway to Hawaii. The king predators congregate in a huge area of the ocean nicknamed the White Shark Cafe.
But why the big sharks favor this remote spot remains a mystery. The area is sometimes called "the desert of the ocean," and Sal Jorgensen, a research scientist at the Monterey Bay Aquarium, says there is little observable life to sustain a food chain. While there, the sharks assume bizarre behavior, sometimes diving thousands of feet at intervals as short as 10 minutes.
Scientists believe the white sharks congregate to find food and to find a mate, hence the idea of a "cafe" - but only better research can determine whether the shark gathering is more "restaurant" or "motel."
"We know very little," Jorgensen said, joking that it seems like Burning Man for white sharks.
A lack of sensory equipment makes it hard for researchers to find out what's happening below the surface of the water. Most data for studying sharks, or any ocean phenomenon, are gathered by buoys (which are immobile), satellites (which are inexact, usually confined to surface measurements and not always in range) or scientists on ships (which are expensive and time-consuming).
This is where drones come in.
Autonomous craft are reshaping the way scientists study the ocean, and two Bay Area companies, Liquid Robotics and upstart Saildrone, funded by the Marine Science and Technology Foundation (founded by Google Chairman Eric Schmidt), have been making waves with their unmanned gliders and sailboats. Saildrone recently completed a voyage around Hawaii and back to the Bay Area with its autonomous sailboats. But now the group must prove its crafts can do more than simply get from point A to point B - like gather critical ocean data.
"The next stage is to demonstrate that we can do real, valuable science," said Saildrone lead researcher Richard Jenkins.
The startup, which has a workshop in a hangar on Alameda's old Navy base, attaches shark sensors to its craft's keel. Getting sensors under the surface is key. As the drone passes within range of the shark, the sensor picks up its acoustic tag and beams the data back to mission control. Without the drone, researchers have to wait until the tag pops off (usually about a year) and then retrieve it via ship, which can cost tens of thousands of dollars per day. Then researchers must assemble the animal's activities retroactively.
The hope is that with drones periodically transmitting data as they traverse the White Shark Cafe, or any other area of interest, observations occur in real time - and at far less expense. Jorgensen and Stanford marine biologist Barbara Block, who is working with Liquid Robotics and Saildrone, hope for better data, such as the animal's exact positions in the water column at certain moments, giving them a 3-D perspective. This indicates whether (and what) the sharks are hunting, potentially helping scientists understand the purpose of the Cafe, not to mention other migratory and feeding habits. Block hopes for similar discoveries with bluefin tuna and other pelagic fish, which inhabit the open ocean away from shore and sea floor.
"You'd think we know, but we don't," she said. "It's a very inaccessible world."
The Stanford group has tried to open some of that world to the public with the Shark Net app for iPhones, which lets anyone monitor and see pictures of tagged fish, but continuous information is tricky. "This is a great concept, but the data is not up-to-date," notes a top comment in the iTunes Store. "Would be a great app if it was kept current."
Keeping information current is only one challenge. Saildrone must also make sure its instrumentation remains accurate in the brutal marine environment, where heat and cold can warp calibration. Jenkins and Co. are working with the National Oceanic and Atmospheric Administration to fine-tune the sensors. Even a slight deviation can make an entire data set meaningless.
"Just because you collect a number, doesn't mean it's right," Jenkins said.
Florida State University oceanographer Ian MacDonald hopes the drones will provide data to better predict tropical storms. Satellites, he says, can only measure surface temperatures - but temperature below the surface is vital for researchers.
"Anything that will give us a better handle would be very important," he said.
And drones could also map areas previously tricky for ships. Saildrone's crafts only cut 6 feet under the water, so it can navigate shallow areas - meaning it could take far more nuanced pictures of the ocean floor and save on the fuel compared with the boats currently performing the task.
The drones cost almost nothing to operate and are relatively simple to control and monitor. Provide destination coordinates, and off it goes - the command software is simple enough to run in a Web browser and Jenkins occasionally monitors the craft from his iPhone (via a private website). The vessel has small solar panels to power the onboard computers and sensors, but the drone moves completely on wind power.
But the drones can't stay at sea forever. Despite protective paint and a streamlined design, algae and other sea life will eventually coat the craft and slow it down, meaning it has to come back to shore for cleaning and a tune up.
The drone's hull is shaped something like a big pelagic fish, but Jenkins says sharks haven't mistaken it for prey yet. In fact, when they've sailed near marine life, the animals don't make much fuss. Crafts with engines usually have them scrambling to get away. Because the drones are silent, Jenkins believes animals don't pay much mind, viewing them as pieces of fast-moving driftwood.
This sort of detailed insight into the lives of sharks presents something of a double-edged sword. Researchers need to publish data so activists and governments know where to establish marine conservation zones. But that data also inform fisherman, many of whom disregard catch limits on threatened species or even brutalize animals by slicing off shark fins for soup and leaving them to die.
"We're always faced with this dilemma," said Jorgensen.
Yet, to add one more bullet to the list of tasks these machines could perform, Jenkins has worked with government agencies to test drones for patrolling protected fisheries and taking pictures of boats violating the rules. Those discussions are also in very early stages.
Drones haven't proven to be a panacea for answering marine science questions quite yet. But Block is hopeful.
"These are the modern-generation tools to study the ocean," she said.
©2014 the San Francisco Chronicle | <urn:uuid:a6202be2-fe68-4d98-be04-dcbc7b5807f6> | CC-MAIN-2017-04 | http://www.govtech.com/products/Scientists-Develop-Drones-to-Study-Habits-of-Sharks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00502-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951034 | 1,407 | 2.828125 | 3 |
The Curtiss-Wright Corporation is an American-based, global diversified product manufacturer and service provider for the commercial, industrial, defense and energy markets. Born in 1929 from the consolidation of Curtiss, Wright, and various supplier companies, by the end of World War II it was the largest aircraft manufacturer in the United States, supplying whole aircraft in large numbers to the U.S. Armed Forces. It has since evolved away from final assembly of finished aircraft, becoming a component manufacturer specializing in actuators, aircraft controls, valves, and surface treatment services. It also is a supplier to commercial nuclear power, nuclear navy systems, industrial vehicles and to the oil and gas industries. Wikipedia.
Curtiss-Wright | Date: 2014-06-24
A check valve apparatus includes at least one first test channel including a first pathway extending through a valve body and in communication with a flow path. The pathway is configured to receive a test stick for inserting through the pathway of the test channel to engage a disc such that the disc moves from the open position to the closed position. In further examples, methods for testing a check valve include the step of moving an end of a test stick to engage a downstream face of a disc such that the disc is moved against a bias of a biasing device from the open position to the closed position. The method further includes the step of inspecting an interior portion of the check valve apparatus.
Curtiss-Wright | Date: 2014-10-10
A circuit board comprises a plurality of layers, first and second reference conductive vias extending in a vertical direction through at least a portion of the plurality of layers, first and second signal conductive vias extending in the vertical direction between and spaced apart in a horizontal direction from the first and second reference conductive vias through at least a portion of the plurality of layers, and a dielectric region extending in the vertical direction between the first and second signal conductive vias. An air via extends in the vertical direction through the dielectric region between the first and second signal conductive vias. An anti-pad extends in the horizontal direction between the first and second reference conductive vias and surrounding in the horizontal direction the first and second signal conductive vias, the air via, and the dielectric region.
Curtiss-Wright | Date: 2015-07-17
Curtiss-Wright | Date: 2014-07-02
A skeleton rack for storing nuclear fuel rods, the rack having a rectangular array of vertically extending cells, the cells being formed by a plurality of elongated, relatively narrow rigid metal shafts, each disposed at a corner of a cell, rigid metal bridge members fixed to adjacent shafts proximal to upper ends of the shafts, apertured rigid metal end walls proximal to lower ends of the shafts and fixed to four shafts at corners of a respective cell.
Curtiss-Wright | Date: 2013-12-13
A piston of an apparatus is movable within a cavity between a first position wherein a pressure port is in fluid communication with the first fluid chamber and a second position wherein the pressure port is in fluid communication with the second fluid chamber. In further examples, apparatus comprise an expansion chamber that is isolated from the first fluid chamber in a first condition and the expansion chamber is in fluid communication with the first fluid chamber in a second condition. In further examples, methods of operating an apparatus include the step (I) of applying fluid pressure to at least one of the first fluid chamber and the second fluid chamber, and the step (II) of determining a position of the piston within the cavity based on the applied fluid pressure. | <urn:uuid:4d8a4c3d-bce9-4a23-be4e-9d853946230d> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/curtiss-wright-41179/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91709 | 756 | 2.8125 | 3 |
Four months after an explosion tore through its signature Falcon 9 rocket during fueling, destroying the rocket and its multimillion-dollar cargo in just 93 milliseconds, SpaceX says it has isolated what went wrong and is ready to fly again.
If the Federal Aviation Administration issues the company a license, likely following the completion of a full-scale engine test scheduled for Jan. 3, the company will launch 10 communications satellites for Iridium on Jan. 8, including an attempt to land the reusable first stage of the rocket on a sea-going robotic platform.
In a statement, Elon Musk's space company said the problem had to do with special tanks inside the rocket's engine, known as composite over-wrapped pressure vessels, or COPVs. Made of carbon fiber and lined with aluminum, the tanks are designed to hold cold helium under incredibly high pressure. They're fastened inside larger containers of super-cool liquid oxygen consumed to propel the rocket.
In a series of tests, SpaceX found “buckles”—spaces between the aluminum liner and the carbon-fiber wrapping of the COPVs—where the liquid oxygen could pool; at the super-low temperatures SpaceX is working in, the oxygen could even become a solid. As the pressure in the tank increases, oxygen trapped in those buckles could be ignited if the carbon fibers crack or rub together to generate friction, an even higher risk when the oxygen becomes a solid.
For upcoming launches, the company’s engineers will re-configure the helium COPVs to keep them warmer, and also load propellant according to a method the company has previously used without incident more than 700 times. But over the long term, SpaceX acknowledged it will need to re-design its tanks to keep these “buckles” from occurring at all.
This could pose a problem for the company's goal of making its rockets largely reusable to drive down the costs of space access. In 2016, SpaceX began a new pre-flight fueling process that allowed it to use even colder liquid oxygen in its rockets; because the cold liquid oxygen is so dense, more can be stored in a tank of the same volume, allowing the rocket to fly further, including returning to Earth after a mission.
The innovative fueling process was considered important to creating full reusability of the rocket, with Musk telling MIT students in 2014 that “when the propellants are cooled close to their freezing temperature to increase the density, we could definitely do full reusability.”
COPV technology has long been seen as useful for rocket construction, but engineers at NASA and other companies have encountered problems when the organic material in the carbon interacts and even combusts with the liquid oxygen frequently used as a propellant. SpaceX appeared to have solved these problems and takes great pride in its carbon-wrapping technology; Musk’s company is currently testing an enormous COPV intended for use in the company’s mooted inter-planetary vehicle.
Successfully tested the prototype Mars tank last week. Hit both of our pressure targets – next up will be full cryo testing. pic.twitter.com/GGTlgUQCRY— SpaceX (@SpaceX) November 16, 2016
A successful launch in the days ahead would be a boon for SpaceX, which has a crowded manifest of commercial launches to attend to after missing the last quarter of 2016. The company also promises to debut its new Falcon Heavy rocket this year and is planning for a fall test flight of its Dragon 2 spacecraft, which the company hopes to be the first private vessel to carry astronauts in 2018.
SpaceX watchers had been expecting the company to return to flight soon; it had originally anticipated a mid-December launch date. Over the weekend, there was another preview when Iridium reported its satellites had been loaded into the shell that will protect them when the rocket takes off: | <urn:uuid:43050175-26ee-4dc2-8cc3-2836effa68d1> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2017/01/spacex-says-it-figured-out-why-its-rocket-exploded-and-will-fly-again-within-days/134283/?oref=ng-trending | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00530-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954616 | 798 | 2.96875 | 3 |
In spite of one high-profile computer security breach after another, many people are still not employing even the most basic safeguards to protect their privacy and their data. Defence Intelligence has created the following seven computer security resolutions to help people protect their privacy, their data, and their wallets.
Stay up to date
- Keep everything updated. Your operating system, your web browser, anti-virus, Acrobat, Java, everything.
- Set programs to automatically update so it’s not as annoying.
- Before randomly clicking the “update” button, be sure you recognize the program and that it looks legitimate. If in doubt about an update pop-up, open the program itself and update from there.
Improve your passwords
- Stop using the same password in multiple places. Unless it’s a throwaway account that you care nothing about, have a unique password for everything you do.
- Strengthen your passwords by adding numbers, symbols and capital letters. Try using phrases instead of a single word.
- Do not store your passwords in your browser.
Check your messages
- If you don’t know who the email is from, don’t open it.
- Turn off the preview feature in your email program. Some malware can be executed simply by being opened in the preview pane.
- Don’t click on links in received emails. These can be faked and may lead you to bad places. Copy the address and then paste it in your browser instead.
- Don’t open any attachments that you aren’t expecting. If it’s from a friend, check with them to verify that they sent it.
- Don’t forward forwards.
Know your friends
- Don’t add “friends” that you don’t know.
- Keep your friend list up to date. If you’re not sure who the “friends” on your contact list are, delete them.
- Before clicking on any links or files sent to you, verify that your friend intended to send them to you.
Secure your mobile devices
- Require a password to unlock your phone or tablet and keep it locked when not in use.
- Don’t store anything on your mobile that you aren’t comfortable losing.
- Ensure that your device does not connect automatically to open Wi-Fi networks.
- Install an application capable of locking down and erasing your device in the event it is lost or stolen.
Watch what you click
- Be wary of third party applications available for your phone, facebook, etc. If you don’t need it, don’t install it.
- Don’t click on shortened links on Twitter or elsewhere. You have no idea where you might end up. To see where these links lead, use a service like http://www.longurl.com or http://www.unfurlr.com.
Share with care
- Whatever you share online will remain online. Once it’s out there, there is no way to remove it.
- Treat email like a postcard – potentially visible to all.
- Don’t insert random USB keys into your computer – you don’t know where they’ve been or what they may contain. | <urn:uuid:1f1abd27-22bd-4201-bff6-5d0ba52d11fe> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/01/11/easy-ways-to-protect-your-privacy-and-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00438-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.886639 | 691 | 2.65625 | 3 |
In this chapter
Accessing Shared Network Resources
Mapping a Network Folder to a Local Drive Letter
Creating a Network Location for a Remote Folder
Accessing a Shared Printer
Sharing Resources with the Network
Many home and small office networks exist for no other reason than to share a broadband Internet connection. The administrators of those networks attach a broadband modem to a router, configure the router, run some ethernet cable (or set up wireless connections), and then they never think about the network again.
There’s nothing wrong with this scenario, of course, but there’s something that just feels, well, incomplete about such a network. Sharing an Internet connection is a must for any modern network, but networking should be about sharing so much more: disk drives, folders, documents, music, photos, videos, recorded TV shows, printers, scanners, CD and DVD burners, projectors, and more.
This expanded view of networking is about working, playing, and connecting with your fellow network users. It is, in short, about sharing, and sharing is the subject of this chapter. You learn how to access those network resources that others have shared, and you learn how to share your own resources with the network.
Accessing Shared Network Resources
After you connect to the network, the first thing you’ll likely want to do is see what’s on the network and access the available resources. Vista gives you two ways to get started:
Select Start, Network.
In the Network and Sharing Center, click View Network Computers and Devices.
Either way, you see the Network window, which lists the main network resources, such as the computers and media devices in your workgroup. As you can see in Figure 8.1, Details view shows you the resource name, category, workgroup or domain name, and the name of the network profile.
Vista’s Network window displays the main resources on your network.
For a more detailed look at the types of items you see in the Network window, see “Viewing Network Computers and Devices,” p. 130.
Viewing a Computer’s Shared Resources
Your Network window will likely show mostly computers, and those are the network items you’ll work with most often. (The computers display an icon that shows a monitor and mini tower computer; if you’re not sure, select View, Details and look for the objects that have Computer in the Category column.) If you don’t see a particular computer, it likely means that the machine is either turned off or is currently in Sleep mode. You need to either turn on or wake up the computer.
You may be able to remotely wake up a computer that’s in Sleep mode; see “Using a Network Connection to Wake Up a Sleeping Computer,” p. 151.
If you see the computer you want to work with, double-click the computer’s icon. One of two things will happen:
If your user account is also a user account on the remote computer, Windows Vista displays the computer’s shared resources.
If your user account is not a user account on the remote computer, and the remote computer has activated password protected sharing (see “Using Password Protected Sharing,” later in this chapter), Windows Vista displays the Connect to Computer dialog box (where Computer is the name of the remote computer). You need to type the username and password of an account on the remote computer, as shown in Figure 8.2.
You may need to log on to the remote computer to see its shared resources.
Figure 8.3 shows a typical collection of shared resources for a computer.
Double-click a network computer to see its shared resources.
The computer shown in Figure 8.3 is sharing a folder named Data, two hard drives (Drive D and Drive G), a DVD drive, and a printer. The computer is also sharing two folders that many Vista computers automatically share:
Public: This folder is open to everyone on the network and usually provides users with full read/write access. However, it’s also possible to protect this folder by giving users read-only access, or by not displaying the Public folder at all. See “Sharing the Public Folder,” later in this chapter.
Printers: This folder contains the computer’s installed printers. Vista usually places an icon for each shared printer in the computer’s main folder, too. You can control whether Vista displays the Printers folder; see “Activating Printer Folder Sharing,” later in this chapter.
Double-click a shared folder to see its contents. For example, Figure 8.4 displays the partial contents of the Data folder shown earlier in Figure 8.3. What you can do with the shared folder’s contents depends on the permissions the computer owner has applied to the folder. See “Sharing a Resource with the File Sharing Wizard” and “Sharing a Resource with Advanced Permissions,” later in this chapter.
Caution - Double-clicking a network computer to see its shared resources works because the default action (which you initiate by double-clicking) for a network computer is to run the Open command, which opens the computer’s shared resources in a folder window. However, not all the devices you see in the Network window have Open as the default action. For example, with media devices, the default action is either Open Media Player or Open Media Sharing. Other devices have more dangerous default actions. On some routers, for example, the default action is Disable, which disconnects the router’s Internet connection! So, instead of just double-clicking any device to see what happens, it’s better to right-click the device and examine the list of commands. In particular, make note of the command shown in bold type, which is the default action.
Working with Network Addresses
In Figure 8.4, the Address bar shows the breadcrumb path to the shared folder:
Network > PAULSPC > Data
Double-click a shared folder to see its contents.
Clicking an empty section of the Address bar (or the icon that appears on the left side of the Address bar) changes the breadcrumb path to the following network address, as shown in Figure 8.5:
\\PAULSPC\Data
Click an empty section of the Address bar to see the network address.
As you can see, a network address uses the following format:
\\ComputerName\ShareName
Here, ComputerName is the name of the network computer, and ShareName is the name of the shared resource on that computer. This format for network addresses is known as the Universal Naming Convention (UNC). If the UNC refers to a drive or folder, you can use the regular Windows path conventions to access folders and subfolders on that resource. For example, if the resource Data on PAULSPC has a Documents folder, the network address of that folder would be as follows:
\\PAULSPC\Data\Documents
Similarly, if that Documents folder has a Writing subfolder, here’s the network address of that subfolder:
\\PAULSPC\Data\Documents\Writing
So, although you’ll most often use icons in folder windows to navigate through a computer’s shared resources, network addresses give you an alternative way to specify the resource you want to work with. Here are some examples:
In the Network Explorer, click an empty section of the Address bar, type the network address for a shared resource, and then press Enter.
Press Windows Logo+R (or select Start, All Programs, Accessories, Run) to open the Run dialog box. Type the network address for a shared resource, and then click OK to open the resource in a folder window.
In a program’s Open or Save As dialog box, you can type a network address in the File Name text box.
In a Command Prompt session (select Start, All Programs, Accessories, Command Prompt), type start, then a space, then the network address of the resource you want to open. Here’s an example:
start \\paulspc\data\documents
In a Command Prompt session, you can use a network address as part of a command. For example, to copy a file named memo.doc from \\PAULSPC\Data\Documents to the current folder, you’d use the following command:
copy "\\paulspc\data\documents\memo.doc"
Mapping a Network Folder to a Local Drive Letter
Navigating a computer’s shared folders is straightforward, and is no different from navigating the folders on your own computer. However, you might find that you need to access a particular folder on a shared resource quite often. That’s not a problem if the folder is shared directly—see, for example, the shared Data folder in Figure 8.3. However, the folder you want might be buried several layers down. For example, you may need to open the Data folder, then the Documents folder, then Writing, then Articles, and so on. That’s a lot of double-clicking. You could use the network address, instead, but even that could get quite long and unwieldy. (And, with Murphy’s law still in force, the longer the address, the greater the chance of a typo slipping in.)
Note - You might also find that mapping a network folder to a local drive letter helps with some older programs that aren’t meant to operate over a network connection. For example, I have a screen-capture program that I need to use from time to time. If I capture a screen on another computer and then try to save the image over the network to my own computer, the program throws up an error message telling me that the destination drive is out of disk space (despite having, in fact, 100GB or so of free space on the drive). I solve this problem by mapping the folder on my computer to a drive letter on the other computer, which fools the program into thinking it’s dealing with a local drive instead of a network folder.
You can avoid the hassle of navigating innumerable network folders and typing lengthy network addresses by mapping the network folder to your own computer. Mapping means that Windows assigns a drive letter to the network folder, such as G: or Z:. The advantage here is that now the network folder shows up as just another disk drive on your machine, enabling you to access the resource quickly by selecting Start, Computer.
Creating the Mapped Network Folder
To map a network folder to a local drive letter, follow these steps:
Select Start, right-click Network, and then click Map Network Drive. (In any folder window, you can also press Alt to display the menu bar, and then select Tools, Map Network Drive.) Windows Vista displays the Map Network Drive dialog box.
Caution - If you use a removable drive, such as a memory card or flash drive, Windows Vista assigns the first available drive letter to that drive. This can cause problems if you have a mapped network drive that uses a lower drive letter. Therefore, it’s good practice to use higher drive letters (such as X, Y, and Z) for your mapped resources.
The Drive drop-down list displays the last available drive letter on your system, but you can pull down the list and select any available letter.
Use the Folder text box to type the network address of the folder, as shown in Figure 8.6. (Alternatively, click Browse, select the shared folder in the Browse for Folder dialog box, and then click OK.)
Use the Map Network Drive dialog box to assign a drive letter to a network resource.
If you want Windows Vista to map the network folder to this drive letter each time you log on to the system, leave the Reconnect at Logon check box activated.
Click Finish. Windows Vista adds the new drive letter to your system and opens the new drive in a folder window.
To open the mapped network folder later, select Start, Computer, and then double-click the drive in the Network Location group (see Figure 8.7).
Tip - By default, Vista connects you to the network folder using your current username and password. If the network folder requires a different username and password, click the Different User Name link to open the Connect As dialog box. Type the account data in the User Name and Password text boxes, and then click OK.
After you map a network folder to a local drive letter, the mapped drive appears in the Computer window for easier access.
Mapping Folders at the Command Line
You can also map a network folder to a local drive letter by using a command prompt session and the NET USE command. Although you probably won’t use this method very often, it’s handy to know how it works, just in case. Here’s the basic syntax:
NET USE [drive] [share] [password] [/USER:user] [[/PERSISTENT:[YES | NO]] | /DELETE]
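For example, a command like the following maps the shared Data folder on PAULSPC (the computer used in this chapter's examples; substitute your own computer and share names) to drive Z and reconnects it at each logon:

NET USE Z: \\PAULSPC\Data /PERSISTENT:YES

To remove the mapping later, you would run NET USE Z: /DELETE. The individual parameters are described below.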
drive: The drive letter (followed by a colon) of the local drive to which you want the network folder mapped.
share: The network address of the folder.
password: The password required to connect to the shared folder (that is, the password associated with the username, specified next).
user: The username you want to use to connect to the shared folder.
/PERSISTENT: Add YES to reconnect the mapped network drive the next time you log on.
/DELETE: Deletes the existing mapping that’s associated with drive. | <urn:uuid:09c9605f-2545-46dd-9c4e-3800bf591ba6> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2276338/software/chapter-8--accessing-and-sharing-network-resources.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.88749 | 2,891 | 2.921875 | 3
CALC stands for "Calculated". When IDMS stores a CALC record, it calculates a target page from a logical/symbolic key using a hashing algorithm. So to get a CALC record, you need to supply the key value and issue an OBTAIN, which will fetch the record from the database.
Another thing to note is that OBTAIN is equivalent to FIND + GET. Let me explain what FIND and GET mean.
FIND - locates the occurrence of the record in the database and establishes currency on it.
GET - fetches the record on which currency has been established by a previous FIND command into the call area.
OBTAIN does both: it establishes currency on the record and retrieves it into the call area.
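As a rough illustration, a CALC retrieval from a COBOL/DML program might look like the sketch below (the record name CUSTOMER, its CALC key field CUST-ID, and the key value are made-up examples, not from the original question):

MOVE '123456' TO CUST-ID.
OBTAIN CALC CUSTOMER.
IF ERROR-STATUS NOT = '0000'
   PERFORM IDMS-STATUS.

The MOVE supplies the symbolic key, the OBTAIN hashes it to locate the target page and brings the record into the program's record area, and the status check traps conditions such as record-not-found (0326).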
Hope this helps...
For the remaining record types and related details, I would suggest going through the IDMS manuals or books. | <urn:uuid:a76c2be9-091b-40fc-90d6-83d255ca5d27> | CC-MAIN-2017-04 | http://ibmmainframes.com/about2181.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00218-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897936 | 175 | 2.609375 | 3
Accessibility does not just mean access by everybody to Information and Communication Technologies, it also means access to everything available through ICT. It is not sufficient that applications and websites are accessible, it is important that tools, widgets and add-ons are also accessible. The importance of tools being accessible has been highlighted by AVAST Software's recent announcement that it has upgraded its avast! anti-virus program to be fully accessible to the vision-impaired.
Many users of screen-readers, such as JAWS®, had been attracted to avast! because it included an audible alarm when a virus was detected, in addition to the pop-up window. In this way users were made aware of the alert, without JAWS losing focus on their current task, allowing them to deal with the virus alert at a time convenient to them, in just the same way that a sighted user could.
Essentially this meant that the day-to-day use of avast! was accessible. The problem was that the installation, configuration and operation were not accessible, and the user of a screen-reader was dependent on the help of a sighted user for installation, configuration and any special operations (e.g. requesting an immediate scan). People with vision impairments want to be as independent as possible and not impose on their friends or colleagues when it is not essential.
The push for this development came from vision-impaired IT geeks who wanted to use avast! Antivirus 5.0. "For the blind, the computer is an absolutely fantastic invention. And for some, it's even their hobby to adjust it," said Radek Seifert, work-team leader at the TEREZA Center, a support centre for the sight-impaired at the Czech Technical University in Prague.
These volunteers fine-tuned avast! so it worked with JAWS. "They said, ‘give us the beta' so we did," remembers Ondrej Vlcek, AVAST Chief Technical Officer. "It was also a complicated issue on our side as avast! does not use the standard Windows controls."
The user interface for avast! needed to be changed in two ways:
- All functions had to be accessible using the keyboard, this is a prerequisite to being able to use JAWS. It has a beneficial side-effect that users who cannot, or prefer not, to use a pointing device have full access as well.
- All the textual information had to be provided to JAWS in an accessible, logical and consistent manner.
AVAST developed a new framework for the user interface which means that other products and new versions will automatically be JAWS friendly.
All through the development the new functions were tested and improved by the vision-impaired geeks thus ensuring that it was not just accessible using JAWS but that it was easy to use with JAWS.
avast! 5.0 was generally available in January 2010 and the new functions came in an update in August 2010; with the new framework the next version of avast!, planned for January 2011, will be accessible at GA.
It is great to see a company reacting quickly to user pressure for accessibility. It is also good to see that the vision-impaired community was actively involved in the development and testing of the new product.
The products should now be accessible to all disabled users, including those with hearing impairments and musculoskeletal impairments. They are also available in 11 languages, making them easily accessible to users who prefer not to use English.
I hope that other developers of tools, widgets, add-ons and applications will take note and produce fully accessible versions of their products. | <urn:uuid:47da551e-c840-406e-8d89-f5d15b44875d> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/avast-shows-the-way-on-accessibility/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00218-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966506 | 773 | 2.875 | 3 |
Google to Eventually Turn Project Over to Language Experts
By recording endangered languages using YouTube, participants can preserve spoken languages for anyone to learn, hear and share, Rissman said. "That can be a great way to share your language with your kids." The effort was ripe for organizing because thousands of people around the world are already preserving and working on endangered languages but often are not collaborating to the fullest because they are doing their work on their own, Rissman said. "This can be a tool to help bring those people together. We feel that our contribution of technology is really just the start, but this is being driven forward by a coalition of endangered language experts and dedicated communities around the world. That's what is needed."
Google will eventually turn the project over to others who are "true experts in the field of language preservation," the blog post stated. When that happens, the project will be led by the First Peoples' Cultural Council and The Institute for Language Information and Technology (The LINGUIST List) at Eastern Michigan University.
The issue of disappearing languages has been a global concern for years. In 2007, University of Alaska Fairbanks professor emeritus Michael Krauss spoke about the issue at the annual meeting of the American Association for the Advancement of Science, according to a story by ScienceDaily.com. Over the years, humans lose sections of their languages as the populations of groups of people dwindle, Krauss said in his presentation. He compared it to losing sections of the Earth's biosphere due to pollution and other factors. "I claim that it is catastrophic for the future of mankind," Krauss said at that meeting. "It should be as scary as losing 90 percent of the biological species." Preventing the loss of languages should be important to us all, he said. "Every time we lose (a language), we lose that much also of our adaptability and our diversity that gives us our strength and our ability to survive."
The languages project is being supported by a new coalition, the Alliance for Linguistic Diversity, which will provide storage, research, advice and collaborations to assist in the efforts. The Alliance includes a diverse membership of groups, including the Alaska Native Language Archive, Association for Cultural Equity, CBC Radio, Center for American Indian Languages, Coushatta Tribe of Louisiana, First Peoples Cultural Council, Grassroots Indigenous Multimedia, Indigenous Language Institute, Laboratorio de Linguas Indigenas, Universidade de Brasilia and The Endangered Languages Catalogue team at the University of Hawaii at Manoa. | <urn:uuid:bb1dd165-24c1-4025-bdb4-b85b76321357> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Enterprise-Applications/Google-Begins-Effort-to-Help-Preserve-Languages-Nearing-Extinction-880618/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00218-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950276 | 517 | 3.265625 | 3 |
By Jennifer LeClaire / CIO Today. Updated May 12, 2008.
Powerset is offering a new way to search Wikipedia -- with natural-language technology and conversational phrasing instead of keywords like Google uses.
The Powerset tools unveiled Sunday are available in beta and based on patents licensed from PARC and Powerset's proprietary research. The technology, which can be applied to any topic and any domain, reads and extracts meaning from every sentence in Wikipedia.
Unlike traditional search engines, which look for words, Powerset matches the meaning of the user's query to the meaning of sentences. Powerset proclaimed the release is the first step in changing the way users search and use Web content.
"Powerset's understanding of content on pages and the way they are presenting results is interesting. Powerset is organizing context and content in helpful ways," said Greg Sterling, principal analyst at Sterling Market Intelligence. "But it only applies to Wikipedia -- Powerset is not indexing the broader Internet. So you can't really get an apples-to-apples comparison with broader search tools."
A Different Way to Search
While a direct comparison with Google may not be possible, Powerset offers some statistics to consider. The tool searches content from leading free-content providers, including more than 2.5 million Wikipedia topics in English. For many questions, Powerset returns answers from Freebase, an open, shared database of the world's information.
Powerset's search-results page includes an array of features, including Factz, dossiers, answers, semantic highlighting and a minibrowser. When users enter a topic query, Powerset assembles a summary of Factz extracted from pages across Wikipedia. Powerset also creates a summary of information found in Freebase and Wikipedia to give users a quick overview of a topic.
The most relevant search results are highlighted based on the meaning of a user's question, and a result can be expanded in a minibrowser to show the snippet in the context of the full Wikipedia article.
If a user types in a question, such as "What did Salvador Dali paint?" Powerset delivers up images of his paintings, along with a title link users can click to read more about a specific work of art. And a Powerset dossier for Benjamin Franklin, for example, offers a brief bio extract from Wikipedia, a fact sheet, and other key pieces of information in tab form.
The Next Search Frontier?
The rumor mill is already churning around Powerset. Microsoft has been named as a potential suitor in the wake of its failed Yahoo acquisition. Sterling said a Microsoft-Powerset merger makes sense for both companies.
"Powerset has better search for Wikipedia that will be useful for many people today and they may be able to take on similarly structured content databases and categories like that over time. But it's going to take a lot of investment for them to index the Internet," Sterling said.
"For Microsoft, this might be a very strategic acquisition that allows the Powerset team to do their business with the funding and freedom that Microsoft will bring," he concluded. "Microsoft could own something which arguably is the next generation of search capability." | <urn:uuid:10e26877-3dfd-4c6a-9fa5-693f648a01c2> | CC-MAIN-2017-04 | http://www.cio-today.com/article/index.php?story_id=021001XHBGA3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00430-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923027 | 657 | 2.765625 | 3 |
Governor Arnold Schwarzenegger today participated in the launch of Ausra's Kimberlina Solar Energy Facility in Bakersfield. The five megawatt (MW) solar thermal power plant, the first to come online in California in more than 15 years, is a demonstration facility for utility-scale thermal solar energy plants, such as the one Ausra is building in San Luis Obispo. That project will be a 177 MW solar thermal power plant whose energy PG&E has already agreed to purchase.
"We're proving that cost competitive solar thermal power at utilitiy scale is real. It works and is now reliably supplying power to California," Ausra chief executive officer, Bob Fishman said.
Ausra's steam production technology can save customers millions of dollars in fuel costs, according to Fishman. Pressurized steam can also augment power at conventional power plants, cutting costs and reducing their carbon footprint. The technology works by using large fields of mirrors to heat water in pipes, turning it into steam. The steam is then used to drive turbines that generate power. And in addition to being used to generate power, the steam from the Kimberlina solar-thermal energy plant can also be used in such industrial processes as oil recovery and refining, food processing and paper manufacturing. And these new solar-thermal energy plants use a fraction of the land of other solar-thermal technology implementations, Fishman said.
"This next generation solar power plant is further evidence that reliable, renewable and pollution-free technology is here to stay, and it will lead to more California homes and businesses powered by sunshine," Governor Schwarzenegger said. "Not only will this large-scale solar facility generate power to help us meet our renewable energy goals, it will also generate new jobs as California continues to pioneer the clean-tech industry."
Ausra's Kimberlina facility will employ seven full-time operators. When at full capacity, it will produce enough solar energy to power more than 3,500 homes. Ausra's larger, utility-scale San Luis Obispo facility will employ 350 Californians during construction and create 70 long-term jobs.
The Governor has set a goal of increasing California's renewable energy sources to 20 percent by 2010, and he supports reaching 33 percent by 2020. California's push to increase renewable energy and fight climate change will also boost our economy. According to an economic study released on Monday by the University of California at Berkeley and Next 10, California's policies will create as many as 403,000 jobs in the next 12 years and household incomes will increase by $48 billion. A separate economic study by Navigant Consulting, Inc. estimated that 214,000 permanent jobs in the solar energy sector alone will be generated in California.
"My vision is that when I fly up and down the state of California that I see every available space blanketed with solar-if it is parking lots, if it's on top of buildings, on top of prisons, universities, government buildings, hospitals. That is my goal," Gov. Schwarzenegger said at the launch of the power plant in Bakersfield.
On Tuesday, the governor announced that California has partnered with SunEdison to provide a zero-emission 8 MW solar photovoltaic power system to 15 California State University campuses. Further development is also under way by state departments, including the Department of General Services, Department of Corrections and Rehabilitation and Department of Mental Health, to generate approximately 7 MWs of solar power at five state prison sites and three state mental hospitals. Since 2006, 4.2 MWs of solar power have already been deployed at eight other state facilities through similar power purchase agreements.
To make solar power more accessible to California homeowners, the Governor signed his Million Solar Roofs Plan into law in August 2006. Now known as the California Solar Initiative, it will provide 3,000 MWs of additional clean energy and reduce the output of greenhouse gases by three million tons, equivalent to taking one million cars off the road. The $2.9 billion incentive plan for homeowners and building owners who install solar electric systems will lead to one million solar roofs in California by the year 2018. | <urn:uuid:b827ddc4-1b12-4d04-9525-fc1a1544168a> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Solar-Energy-Plant-Comes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947709 | 841 | 2.703125 | 3 |
It has been retired for 25 years but IBM will celebrate the 50th anniversary of the introduction of the iconic Selectric typewriter on July 31.
According to IBM the introduction of the Selectric on July 31, 1961 was seven years in the making. "With 2,800 parts, many designed from scratch, it was a major undertaking even for IBM, which had been in the typewriter business since the 1930s." (For a look at the "art" of the Selectric take a look at our slideshow)
More on iconic happenings: 25 tech touchstones of the past 25 years
According to IBM, some of the Selectric's unique characteristics and history included:
- Its "golf ball" head let typists' fingers fly across the keyboard at unprecedented speed. An expert typist could clock 90 words per minute versus 50 with a traditional electric typewriter.
- The golf ball moved across the page, making it the first typewriter to eliminate carriage return and reduce its footprint on office desks.
- Interchangeable golf balls equipped with different fonts, italics, scientific notations and other languages could easily be swapped in.
- With magnetic tape for storing characters added in 1964, the Selectric became the first (albeit analog) word-processor device.
- The Selectric formed the basis for early computer terminals and paved the way for keyboards to emerge as the primary way for people to interact with computers, as opposed to pressing buttons or levers. A modified Selectric, dubbed the IBM 2741 Terminal, could be plugged into IBM's System/360 computer, enabling engineers and researchers to interact with their computers in new ways.
- It was created by Eliot Noyes, the famed architect and industrial designer who served as IBM's consulting designer for 21 years. The Selectric is featured in the new "Pioneers of American Industrial Design" stamp series from the U.S. Postal Service, which cites Noyes as among 12 important industrial designers who helped shape the look of everyday American life in the 20th century. For the Selectric, Noyes drew on some of the sculptural qualities of Olivetti typewriters in Italy. The result was a patented, timeless shape, and a high-water mark for IBM's industrial design and product innovation.
- In 1971, the Selectric II was released, with sharper corners and squarer lines, as well as new features such as the ability to change "pitch" from 10 to 12 characters per inch and, starting in 1973, a ribbon to correct mistakes. The final model, the Selectric III, was sold in the 1980s with more advanced word processing capabilities and a 96-character printing element. But as personal computers and daisy-wheel printers began to dominate, the Selectric brand was retired in 1986.
- IBM sold 13 million Selectrics.
For an interesting take on all things typewriter, check out the "Adventures In Typewriterdom" blog. | <urn:uuid:488c83b9-419a-45b1-860f-4647f9cda78b> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2220282/data-center/the-ibm-selectric-typewriter-turns-50.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951298 | 629 | 2.546875 | 3
The T code extracts a character substring from a field value.
m specifies the starting column number.
n is the number of characters to be extracted.
If m is specified, the content of field 9 of the data definition record has no effect - characters are counted and extracted from left to right, for n characters.
If m is not specified, the content of field 9 of the data definition record will control whether n characters are extracted from the left or the right-hand end of the value. If field 9 does not contain an R, the first n characters will be extracted from the value. If field 9 does contain an R (right justify), the last n characters will be extracted from the value.
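For illustration (the field value and column positions here are made-up examples, not from this page): applying the conversion T3,2 to the value ABCDEF returns CD, because extraction starts at column 3 and takes 2 characters. Applying T2 to the same value returns AB when field 9 of the data definition record does not contain an R, and EF (the last 2 characters) when field 9 contains an R.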
Input conversion does not invert. It simply applies the text extraction to the input data. | <urn:uuid:df5abde9-1d3d-483e-a367-d52cdd08b249> | CC-MAIN-2017-04 | http://www.jbase.com/r5/knowledgebase/manuals/3.0/30manpages/man/jql2_CONVERSION.T.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.814242 | 164 | 2.84375 | 3 |
When criminals hack their way into the enterprise it is generally through a software vulnerability. Likewise when data is stolen from portals or Web sites, vulnerabilities are often to blame. Recent studies show that the problem is not getting better.
Vulnerability Trends Not All Bad, but Not All Good Either
While by no means as common a threat as malware, software vulnerabilities are still a significant threat for enterprises. It is these weaknesses in application and operating system code that allow hackers access to systems and data. Figure 1 shows a significant amount of vulnerability information including overall vulnerability count and severity. It also details the average window of exposure – the time difference between vulnerability discovery and commensurate patch release. | <urn:uuid:6588a881-b9f0-49f7-af77-7b30a6dbb588> | CC-MAIN-2017-04 | https://www.infotech.com/research/threat-trends-2k8-software-vulnerability | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00329-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944113 | 136 | 2.546875 | 3 |
CISSP Management Concentration: Environments
As corporations and governments adopt new operational models driven by the Internet, more networks are opened to partners, customers, employees, suppliers, vendors and even competitors, making it faster and easier to do business. Greater access also leads to increased threats to infrastructure and data protection, however, and the associated risks must be managed by qualified professionals.
We all know information is one of the most valuable assets of any organization, and as access has become more convenient, protecting data integrity and privacy has become more critical and complex. What you might not know is that as threats and regulations have evolved, the business benefits of information security have changed. Within this new context, organizations of all types are realizing they must hire the right people with the right expertise, or the potential negative impact on finances and resources could be enormous.
In the past, technology alone was considered sufficient to secure the information infrastructure, but experience has shown many times over that even the best firewall or intrusion-detection system is useless when an unsuspecting employee is tricked into divulging a system password or downloads free software that also contains SpyWare.
In addition to outsider threats, organizations also must protect themselves from insider threats. The proliferation of mobile devices presents the threat of employees, contractors, vendors or strategic partners accessing the network and possibly downloading vast amounts of data and then walking out the door without being detected.
Another type of external pressure being exerted on businesses is from government regulations and new corporate governance rules such as Sarbanes-Oxley. Compliance with government regulation requires management to review organizational processes, which means security practices, not simply protected IT systems, are at the heart of good governance.
The growing awareness of cyber-security threats has prompted most organizations to realize that securing information assets goes beyond tools and technology — highly trained, highly qualified personnel are necessary to protect information assets.
People are the only resource that can create and implement a security policy for an organization based on a balance between business risks and costs. Tried and trusted security practices with an emphasis on broad-based objectives must be implemented throughout the organization.
Twenty-five years ago, the information security profession was new and obscure, and information security was not a high priority. As access to information became more convenient, the need to protect access to data became more critical and information security issues more complex. Many early information security professionals fell into their jobs when their employers realized their businesses were at risk, and they needed to protect their information assets.
Today’s information security professional is faced with constantly changing legal requirements, business practices and generally accepted security standards. Online connectivity has dramatically changed the way corporations and governments around the world communicate and access information, conduct financial transactions and perform daily operations. The increased protection of intellectual property, employee data and company records has become a top priority.
The management of information assets and the recognition of the importance of information security have come a long way in the past 25 years, and as the information security industry continues to mature, the management of information assets continues to increase in importance.
According to the second annual Global Information Security Workforce Study, conducted in 2005 by global analyst firm IDC and sponsored by (ISC)2, the International Information Systems Security Certification Consortium, ultimate responsibility for information security has moved up the management hierarchy, with more respondents identifying the board of directors and CEO or a CISO/CSO as being accountable for their company’s information security.
IDC expects this accountability shift to continue as information security becomes more relevant in risk management and IT governance strategies. The study also found that security is becoming operationalized within organizations as they attempt to align their business and security strategies with the goal of establishing a comprehensive information risk management program.
Most respondents, 73 percent, expect their influence with executives and the board of directors to increase in the coming year, as dialogue between corporate executives and information security professionals has evolved from a technical security discussion to one of risk management strategies.
To meet this growing demand, (ISC)2 offers the Certified Information Systems Security Professional (CISSP) certification. The CISSP requires candidates to demonstrate a base level of knowledge in security best practices, policies and technologies by passing an examination, as well as have four years of validated experience in designated areas of information security (or three years plus a bachelor’s degree), be endorsed by a CISSP credential holder, abide by the (ISC)2 Code of Ethics and obtain audited continuing professional education credits to maintain their certification.
As the information security environment grows in size, complexity and specialization, (ISC)2 developed a management concentration for the advanced information security manager.
The CISSP-ISSMP (Information Systems Security Management Professional) reflects a deeper management emphasis and understanding built on the broad-based knowledge of the CISSP CBK domains.
The CISSP is a prerequisite for the management concentration, and it offers a career-enhancement strategy that spans a broad range of information security management positions, including information security, assurance and risk management for professionals who focus on enterprise-wide risk management.
The management concentration originated with (ISC)2’s job-analysis survey of its CISSP members in 2001, in which members requested additional concentrations to their credential, targeting their chosen career paths or job requirements. The management concentration is part of (ISC)2’s mission to ensure information security personnel are knowledgeable, experienced professionals in every phase of their careers.
As the information security profession continues to mature and expand, there is a need for professionals with specialized knowledge. The CISSP-ISSMP management concentration verifies knowledge, skills and abilities in the following areas:
- Expert understanding of relationships between security and business requirements of organizations. This ensures security is appropriately addressed and specifically included as part of the corporate governance process in making the day-to-day decisions affecting the risks of the business.
- Intimate understanding of risks and threats applicable in an organization’s environment, including applications, software languages, databases and operating platforms and countermeasures to mitigate these risks.
- Crucial knowledge to address control and coordination of operational networks and systems, including availability and integrity of systems, system processes and job executions.
- Proficiency in business-impact analysis, enterprise-recovery strategy, emergency planning, implementing and advocating business-continuity plans.
- Ability to identify appropriate and applicable laws related to risk management, understanding of investigation parameters and deep understanding of professional ethics in order to conduct investigations in a credible and effective manner.
Technology alone can’t stop future attacks — it’s up to qualified professionals to deploy tec | <urn:uuid:52854be0-afa1-4551-b0b3-8afd08b03254> | CC-MAIN-2017-04 | http://certmag.com/cissp-management-concentration-meeting-the-demands-of-specialized-environments/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942979 | 1,343 | 2.640625 | 3 |
What are chain certificates? Chain certificates are referred to by many names — CA certificates, subordinate CA certificates or intermediate certificates. Confused yet? Let’s break it down.
It all starts with something called a root certificate. The root certificate is generated by a certification authority (CA) and is embedded into software applications. You will find root certificates in Microsoft Windows, Mozilla Firefox, Mac OS X, Adobe Reader, etc. The purpose of the root certificate is to establish a digital chain of trust. The root is the trust anchor.
The presumption is that the application developer has pre-screened the CA, ensured it meets a minimum level of trust and has accepted the CA’s root certificate for use. Many application developers, including Adobe, Apple, Mozilla, Microsoft, Opera and Oracle, have root certificate programs. Others rely on the roots provided by the underlying operating system or developer toolkit.
One of the main functions of the root is to issue chain certificates to issuing CAs — the first link in the chain of trust. Your Web browser will inherently trust all certificates that have been signed by any root that has been embedded in the browser itself or in an operating system on which it relies.
Why do you need an issuing CA? The purpose of the issuing CA is to isolate certificate policy from the root. Issuing CAs can be used to issue many different certificate types: SSL, EV SSL, Code Signing, Secure Email, Adobe CDS, etc. These certificate types are subjected to different requirements and risks, and as such have different certificate policies. The certificates may have different assurance levels such as high, medium and low. Issuing CAs may also be controlled by an organization other than that which controls the root.
The last link of trust is that between the end entity certificate and the issuing CA. In the case of an SSL certificate, the end entity certificate represents the linkage between a website owner and the website domain name. The SSL certificate is installed on the Web server along with the chain certificate. When a user browses to the website protected by the SSL certificate, the browser initiates the verification of the certificate and follows the chain of trust back to the embedded root.
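If you want to inspect this chain yourself, one way (assuming a machine with the OpenSSL command-line tool installed; the host name is only an example) is to run:

openssl s_client -connect www.example.com:443 -showcerts

This prints every certificate the server presents, typically the end entity certificate followed by one or more chain certificates, so you can confirm that the chain leads back to a root your browser or operating system trusts.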
In some cases, the CA may have chosen to issue end entity certificates directly from the root CA. This is an outdated practice; issuing directly from the root increases risk and limits how certificate policy can be managed and enforced. Issuing directly from the root can also impact performance as the browser may have to verify a large certificate revocation list (CRL) during its chain validating process. Major public CAs are discontinuing or limiting this practice.
When you receive an Entrust certificate, we provide any required chain certificate complete with installation instructions. | <urn:uuid:bf30dbe8-8075-451e-846f-f9b9d7eba793> | CC-MAIN-2017-04 | https://www.entrust.com/chain-certificates/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00055-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936668 | 557 | 3.421875 | 3 |
Municipal managers and elected officials from cities, counties and states, besides dealing with the routine headaches running a jurisdiction can spontaneously generate, must also be ready to respond quickly to large-scale emergencies.
Two notable examples of such emergencies are the gassing of Tokyo's subway system in 1995 and the bombing of the Alfred P. Murrah federal office building in Oklahoma City, Okla., that same year.
According to the U.S. General Accounting Office, the number of terrorist attacks nationwide and worldwide has declined in recent years, but the level of violence and lethality of such attacks has increased. The U.S. Department of State's research reveals a continuing trend to more ruthless attacks on mass civilian targets and the use of more powerful bombs.
Emergency-planning officials must face the possibility of responding to nightmarish events occurring even in small towns like Littleton, Colo.
Research Planning Inc. (RPI) designs tabletop exercises for cities and jurisdictions and uses several types of software to present the situation and model how chemical or biological attacks can spread over a city. One of the software products is CAMEO (computer aided management of emergency operations). Another is MIDAS-AT (meteorological information and dispersion assessment system anti-terrorism).
CAMEO models planning and response to chemical emergencies and contains a chemical database of 4,700 hazardous chemicals. The database also contains specific information on each chemical, detailing the individual hazards of the chemical, firefighting techniques, cleanup procedures and protective clothing. The software contains a mapping application and models air dispersion of chemicals over an area.
MIDAS-AT allows users to model air dispersion over an area, but offers two other modeling capabilities. The inside building model allows users to model a terrorist attack using chemical or biological weapons inside a building and displays the spread of the agent in the building to other rooms or floors. The urban terrain model allows users to model a similar terrorist attack in a downtown environment where tall buildings create virtual canyons, affecting the spread of a chemical or biological agent.
The realities of terrorist attacks prompted Congress to pass the Defense Against Weapons of Mass Destruction Act in 1996. This act created the U.S. Domestic Preparedness Program, designed to bolster the abilities of cities across the country to withstand and manage a terrorist attack.
The program seeks to train the appropriate state, city and municipal personnel, called "first responders," to prepare the nation's largest 120 cities for the aftermath of a variety of terrorist attacks, be they nuclear, biological or chemical. Specialized teams from federal agencies train the personnel responsible for training first responders to any type of disaster or emergency event.
To ease state and local first responders' access to information about the program, the National Domestic Preparedness Office (NDPO) was created. It coordinates all federal efforts to help first responders with the necessary planning, exercises, training and equipment to respond to a terrorist attack of chemical, biological or nuclear weapons. The NDPO is also an information clearinghouse, providing details on federal assistance programs to state and local response agencies.
"Think of the NDPO as a big tool box," said Barbara Martinez, deputy director of the NDPO. "We're not trying to reinvent the wheel here. First responders from all across the country can share input received from other first responders and governmental agencies to get an idea of how others have responded to emergency situations."
The NDPO operates through weapons of mass destruction (WMD) coordinators in all FBI offices nationwide, said Rick Shapiro, NDPO deputy director. First responders call their state's FBI office with requests or questions about training or equipment. The WMD coordinator, with the NDPO's assistance, answers questions or arranges training for regional groups of first responders within the state.
"A lot of good things are happening and a large number of state and local first responders are signing up for our programs," Shapiro said. "We lead off our training classes by telling state and local responders that the federal government is eight to 14 hours away in response time. Local communities will be on their own."
"It's up to local leaders to be prepared," Martinez said. "Preparedness is the key to safe and effective response because if there's no plan, panic can ensue."
Getting the Tools
State and local emergency-planning officials can also access the research power of the federal government through the domestic preparedness program. The National Institute of Justice (NIJ) is researching and testing many high-tech tools designed to help response teams.
The NIJ's Office of Science and Technology (OST) is conducting the research and field-testing for a variety of such tools that may become available to state and local law enforcement agencies:
- The RTR-3 explosive diagnostic system, a computer-based, portable unit, allows users to examine an explosive device by the use of X-rays, which are displayed on a small computer screen. In addition, images of the bomb can be sent, via modem, to remote explosive experts for further analysis. Another device, called a percussion-activated nonelectric disrupter (PAN), is used in conjunction with the RTR-3. PAN disables explosive devices by hurtling a small slug into the bomb at a target picked out by bomb experts after viewing the X-ray images of the bomb.
- The personal alarm monitor is intended to be a wearable device, about the size of an ATM card, that alerts the wearer to sub-clinical exposure to hazardous chemical and biological agents. This device will alert first responders to the presence of as broad a range of agents as possible so the personnel arriving at a scene can don appropriate protective gear. The first prototypes of this device will warn the wearer of exposure to hazardous chemicals by means of a change in color of some portion of the device, and a biological-detection capability is further down the road. The card will have a strip of material that reacts to the presence of the chemical by changing color.
- The through-the-wall surveillance system allows users to locate and track a person through concrete or brick walls, and can track the activity of a person moving behind an eight-inch-thick concrete wall to a range of 75 feet.
- A chemical-agent warning system is being tested with a metropolitan transit authority. The system uses currently available chemical-agent detectors that, when sensing an agent, send an alarm to the transit authority's command center, allowing a response plan to be launched.
Spreading the Knowledge
St. Louis participated in the domestic preparedness program's training in August 1998. The training was built around a tabletop exercise centered on the Pope's visit to the city last January, said Michael Sullivan, director of St. Louis City Emergency Management Agency.
"It was amazing to see the training exercise come together. We called in people from the state. We had our people there. We had federal people there. It got everybody on the same page," Sullivan said. "The [training] set up a biological incident in which anthrax was released at the Kiel Auditorium, where the Pope was to make his address. Through the training, we learned that we weren't prepared for decontaminating a large number of people exposed to a biological agent. The trainers showed us how we could prepare ourselves."
Sullivan said the Public Health Service demonstrated how to resolve this problem by using decontamination tents, while other federal agencies also delivered specialized training. "You respect the expertise of the trainers because they've been at it for a long time," Sullivan said. "Our HAZMAT personnel, some with 10 or 12 years of experience in the field, said they were very impressed with the HAZMAT training. That tells me something."
Sullivan's agency is training other first responders, such as police officers from other municipalities, emergency management personnel from neighboring Cape Girardeau county, emergency-room personnel from St. Louis hospitals and even ushers from the city's sports arenas, who are trained to "be alert and be aware" of what doesn't look right.
Preparing for the Worst
Other jurisdictions may access grant money for equipment purchases from the Office of Justice Programs (OJP), part of the Department of Justice (DOJ). The OJP's Office for State and Local Domestic Preparedness Support is the agency awarding the grants.
Andy Mitchell, deputy director of the preparedness support office, said that training for fire, law enforcement and emergency medical technicians is also being offered through his office to any jurisdiction, regardless of size.
"We think it's critical that first responders have a basic idea of what these biological and chemical agents are," Mitchell said. "Since they are first on the scene, they need to be able to recognize what they are dealing with before calling a HAZMAT team. Our goal is to train as many first responders as we can." | <urn:uuid:ad0b0b98-049c-497e-9b96-647242b71c5d> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Turning-Cities-into-Citadels.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947147 | 1,809 | 2.78125 | 3 |
In today’s fast-paced world, backing up your files is of the utmost importance. Typically music, movies, films, data files, projects, and photos are all stored in one place – your computer. Laptops and desktops have decreased in cost, and the amount of storage inside them has increased greatly over the last few years. Unfortunately having all of your data in only one place is dangerous.
Computer loss, theft, natural disaster, and accidental deletion, are just some of the ways that you can lose the data you’ve spent so long creating and accumulating. The only way to prepare for the unexpected is to have a good backup strategy in place. There are many different ways to backup your computers, and using multiple forms of backup will minimize the risk of ever losing your valuable files.
The simplest definition of a computer backup is an exact copy. In the case of computer files, we are referring to copies of the original files that you have on your laptop, desktop, or external drive. Creating a backup of original content means having that data saved in two places, but it’s also important to make sure that those two places aren’t on the same type of device. For example, if you have 3 copies of a working document on your computer, if your computer crashes, you will still lose all three. This makes the backup method and medium, an important thing to consider for your backup strategy!
Technically, yes. In most cases, a simple thumb drive (or flash drive) is the first way most people save their data. Thumb drives are easy to transport, work with most computers, and are relatively small. That makes them a great way to save small amounts of data like presentations or working documents. It’s also easy to give them to others, making them great for collaborative projects.
The downside to thumb drives is that they are usually very small and often are not very dense (meaning, they cannot store a lot of data). This makes them problematic for a few reasons. If your thumb drive is small it’s easy to lose. If you lose your thumb drive then you’re no longer backed up! Not having a lot of storage density is also problematic, as typically a thumb drive will not be able to hold all of the data that is on your computer. For all of those reasons thumb drives are not an ideal solution for backing up your computer.
If it sounds overwhelming and complicated, then you’re starting to understand the complexity of the problem. That’s the bad news. The good news is that there are a lot of options out there to help, and they aren’t very complicated at all, once you are a little familiar with them. Making backups is much easier and less expensive than trying to recover files from a broken hard drive. Not to mention if your hard drive has been lost, damaged, or stolen, backups are the only way that you can recover the data that was on them.
So start backing up your computer today and save yourself a lot of hassle down the road. All hard drives will eventually fail; it’s just a question of when and whether or not you’ll be prepared.
There are many ways to backup your files. Even manual copies (like saving a copy to a USB drive) are a kind of backup, they just aren’t a very good kind, because you have to do it manually, you have to do it repeatedly, and you have to manage things like deleting and renaming files. A good backup system is as easy as possible (so you’re more likely to use it) but the best backup systems automatically perform incremental backups so you don’t need to think about it or remember to do anything about it once the system is set up.
A "bootable backup" (sometimes called a "clone") is like a spare tire for your car. If you get a flat tire, a spare will let you finish your trip or at least get you to the point where you can get more help. A clone is a complete copy of your computer’s primary hard drive (sometimes called a "boot drive"). If your computer’s primary drive died tomorrow you could hook up the clone, reboot your computer from it, and have immediate access to not only all of your files but also all of the software you use, including all of the settings and configuration changes that you have made. If you are in the middle of an important project or just don’t have time to replace the boot drive immediately, a clone can really save the day. A clone also has a copy of all your files as they were when the clone was last updated, which means that if you accidentally deleted a file, you can copy it back from the clone to your boot drive.
A spare tire isn’t meant to be used for very long, and it usually isn’t as good as the original. The same is true of a clone backup drive. With a car there are lots of other things that can go wrong besides the tire failing. With a computer there are lots of other things that can go wrong besides the hard drive failing. Your computer will run slower when booted from an external drive, and there may be some other shortfalls.
Still you’d definitely like to have a clone when your primary hard drive fails. Of course your clone will only be useful if it has been updated recently, because otherwise it will be missing files. However, because it has to examine every file on your computer, it can take awhile to create, and it is best not to use your computer while the clone is updating. Due to all of those factors, a clone is usually updated once a day at most and more often only once a week.
You can also use an external hard drive to create an archive of your changed and deleted files. An archive is different from a clone in a few ways: first, it isn’t meant to be a bootable backup; second, it isn’t limited to a "snapshot" of your entire drive at one point in time. Instead, it creates incremental backups, which keep up with you as you work.
External drive backups are mainly intended to provide a backup of your personal files, especially
irreplaceable things like pictures. Instead of looking at your entire hard drive, this type of backup
looks at certain folders, such as your home directory. The archive part of this type of backup means
files are on your primary hard drive are changed (or even deleted) you can go back to undo the changes
even recover those deleted files. If your computer dies you can simply plug the external backup drive
different computer and immediately have access to all of your files, as well as the history of changes
Most external drives for PC come with their own backup software. If you use a Mac you can buy an external hard drive and use Time Machine, which will run every hour and check for changes. It will save hourly backups for the previous 24 hours, daily backups for the previous month, and weekly backups for previous months. Time Machine is easy to use, both for backing up and restoring files, but it does have drawbacks. Having to connect an external hard drive to your computer is inconvenient for laptop users who want to be able to move their computer around. Apple also sells Time Capsule (a Wi-Fi router with a built-in hard drive), which can do Time Machine backups over Wi-Fi. However it is relatively expensive (currently US$300 for 2 TB, or $400 for 3 TB) compared to other backup solutions.
One advantage of a local archive is that there is no monthly or annual fee, and you have immediate access to all of your files in case your computer dies. However, the amount of file history (that is: how far back you can go to get previous versions of files, and how many deleted files you can recover) is limited by the size of the external drive. If you need to get a file back from yesterday or last week, you can probably do that. If you need to get a file back from several weeks ago, you might be able to or you might not, depending on how much other data is being backed up.
Last but not least: an external backup drive is still a hard drive, subject to wear and tear, it will eventually need to be replaced, at which point you will either have to migrate your data to a new drive (if possible) or start from scratch.
Having a backup (or two) next to your computer is a good start, but it still puts your data at risk for theft, fire, or other disaster. Your best protection against that type of loss is to keep another backup somewhere else. While you could make a clone and bring it to a friend’s house or your office, or even put it into a safe-deposit box, chances are that you would not remember to keep it updated because it would be inconvenient.
In fact, cloud backups are the easiest kind to create and maintain. To get started you simply need to create an account, download software, run it once to enter your account information, and (optionally) set any preferences that you might want. After the initial setup you don’t need to do anything, the software will automatically keep your computer backed up any time it is turned on and connected to the Internet.
Behind the scenes a lot is going on with your cloud backup. First your files are encrypted so no one else can read them. Then they are copied not just to one drive but to lots and lots of drives. One of the main benefits of cloud backup is that the "cloud" is made up of many distributed and redundant computers and drives, so the loss of any one will not cause you to lose any files. Once they are uploaded, you never have to worry about uploading them again unless they are changed. Even then, changes can be sent much faster because the software is smart enough to know what is different and only send the changed data.
The main downside people have with setting up an offsite backup is the initial upload, which can take a few weeks or even a month. This depends on how much data you are trying to upload and the speed of your connection. Most ISPs are more focused on download speeds than upload speeds. If your current plan has slow uploads, you might be able to call them and temporarily upgrade to a plan with more "upload bandwidth" during your initial backup. Another potential negative is if you lose your entire drive and need to get all of your files back. In that case you would be limited by the speed at which you can download from your cloud backup provider.
Ideally, a 3-2-1 backup strategy is recommended, this means 3 copies of your data, 2 on-site but on different mediums, and 1 offsite. A good way to start is by having the original copy of your data, along with an external hard drive or clone at home, and an off-site solution like an online backup provider. If you can only have one solution to start, online backup is recommended, for three reasons:
A bootable backup can really save the day, but if your computer dies because of something other than the hard drive, a bootable backup won’t help. Also, you have to remember to update it.
If you use a desktop computer a local external drive backup like Time Machine is great, assuming you
keep the drive connected and running all of the time. However, fewer and fewer people are using
computers today, and if you use a laptop you either have to remember to plug in the backup drive or
wireless backup system like Time Capsule, which can be expensive.
(For example: a $300 Time Capsule will only backup 2 TB. For the same amount you could use Backblaze for at least five years_ ($5 for 12 months = $60/year; $60 for 5 years = $300), and never have to worry about hardware failures or having enough space to backup everything.)
"Set it and forget it." Backblaze online backup will work anywhere you are connected to the Internet. Once you set it up all you have to do is leave your computer running until the initial upload is done, and after that it will update as needed during the normal use of your computer. You don’t have to buy any hardware or figure out where to set it up or even configure it. Just download the free trial software from Backblaze and install it. There are settings you can customize if you want, or if you’re one of those people who likes to peek under the hood. For most people the default unlimited backup is perfect and after you download and setup your account you can just sit back and relax as Backblaze goes to work.
That said whatever you do, do something. Even an outdated clone of your computer is better than no backup at all,
Remember: All hard drives will eventually fail - it’s just a question of when and whether or not you’ll be prepared. | <urn:uuid:dd908980-b916-4fda-9410-7d5f0c4ea80a> | CC-MAIN-2017-04 | https://www.backblaze.com/backup-your-computer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963763 | 2,701 | 2.765625 | 3 |
The race to build the world’s fastest supercomputer – and possibly even the first exascale-class system – just got more interesting. Russia and India are considering an alliance that would enable them to more effectively compete with rival supercomputing powers, in particular China.
Last month, Boris Shabanov of the Russian Academy of Sciences extended an invitation to the Indian Institute of Science and the Karnataka government to explore the possibility of setting up a joint supercomputing center in Bangalore, according to a report in the Economic Times.
“India has many skills for building supercomputers. It is very strong in software,” Alexey Shmelev, cofounder and chief operations officer of RSC group and delegate to the Russian Academy of Sciences, expressed to the paper. “I am ready to share technology with India. I guess there would not be many players who are willing to do so.”
By uniting forces, the two nations would be in a better position to take on elite supercomputing powers like the United States and Japan, and most notably China – which is home to the fastest supercomputer in the world by a signification margin.
China rose to the top of the supercomputing charts in June 2013 with its Tianhe-2 system, operated by the National University of Defense Technology. With 33.86 petaflops as measured by the LINPACK benchmark, the Chinese system beat out second place finisher Titan by nearly a 2-to-1 margin, and has retained its top spot ever since.
Titan, the 17.59 petaflop supercomputer installed at the University of Tennessee, was the list champ from November 2012 until China knocked it off its perch.
The US, EU, Japan, India, Russia and China have all expressed their intentions to reach exascale sometime around the year 2020. Many experts believe the odds are in China’s favor, but the outcome is far from decided. Most of these nations have the talent to get the job done, but the ultimate winner will be the nation that backs up its expressed intentions with a unwavering commitment to funding.
India and Russia should not be discounted. India made a run at supercomputing glory in 2007 with its Eka system (“eka” means number one in Sanskrit). When Eka debuted it was the fourth fastest supercomputer in the world and the fastest in Asia. Since then, China and Japan have pulled ahead.
India’s current top number-cruncher, the Indian Institute of Tropical Meteorology’s iDataPlex, has a benchmarked performance of 719 teraflops, earning it a 44 ranking on the TOP500 list. Ranked second with a speed of 386.7 teraflops is PARAM Yuva – II, unveiled by the Centre for Development of Advanced Computing in early 2013. Russia’s most powerful system, Lomonosov supercomputer, holds a 37th place ranking with 902 teraflops.
Says Vipin Chaudhary, former chief executive of Computational Research Laboratories, a subsidiary of Tata Sons that built the Eka supercomputer: “We need to catch up first before trying to leapfrog US and China. A lot of training and research needs to be supported for sustained period of time.”
To this end, India has committed about $2 billion dollars (Rs 12,000 crore) to the Indian Space Research Organisation and the Indian Institute of Science to develop a high-performance supercomputer by 2018. India’s government-backed computing agency, C-DAC, also announced a $750 million (Rs 4,500 crore) blueprint to set up 70 supercomputers over the next five years. | <urn:uuid:a0b1213d-c002-4fa4-a0d0-b811835a1863> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/04/09/russia-india-explore-joint-supercomputing-project/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00531-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941288 | 775 | 2.578125 | 3 |
Guest Commentary by
Wes Miller, VP at Directions on Microsoft
But what, exactly, is online privacy?
Ignoring, for the moment, the recent moves by the US Dept. of Justice to force ISP’s to retain customer activity logs, I believe that online privacy means giving the user of any Internet-connected application, on any device, the ability to know:
- How they are being identified,
- Whether their activity may be logged,
- Who will be logging it, and
- How that logged information will be used, monetized and shared.
There are really as many as three parties involved in every conversation that happens online:
- The information consumer: you, in addition to any other individuals allowed to view information you contribute, if a site is public.
- The communication provider: any Internet service providers/network providers between you and the host.
- The information provider: the host company / server you connect to.
Information providers may also work in concert with ad providers, analytics providers who track the activities of information consumers, external content providers, and other third parties.
A New Kind of Product
A 1993 cartoon by Peter Steiner ran in the New Yorker, with one dog using a computer proclaiming to another, “On the Internet, nobody knows you’re a dog.”
Eighteen years later that’s no longer true; a 2010 quote that became an Internet meme proclaims, “If you are not paying for it, you’re not the customer; you’re the product being sold.”
Innumerable websites provide services that rely on the consumption and contribution by information consumers in order to function. For example:
Google searches lead to ads, which lead to revenue from advertisers; at its heart Google is a search provider, but advertising pays the bills.
Facebook aims to foster a growth in individuals’ relationships, online contributions, and, in time, click-through on advertisements to keep it all working.
Twitter relies on ongoing user contributions, and without fresh, user-supplied content has little to differentiate itself.
If you’re accessing these or other online services for free you’re likely paying through “contextual contributions.” For example, when you search on Amazon you’re providing information about your product interests and allowing the company to build your profile.
When you search on Google the same thing happens. As a search engine, advertising media, and analytics provider the more Google can know about you the better the company and its advertisers can target their advertising – and the more revenue they can earn. And it isn’t just Google; Microsoft and many others provide ad services, plus analytics that might be free for basic capabilities but can become quite expensive for “private” versions.
Websites Know More Than You Think
As you access a website you can pass a surprising amount of information to its remote servers. The website operators know your ISP, your IP address (either the unique address of your computer or the home router in front of it), and as a result your approximate geographic location. And by storing a cookie – a small file with a unique identifier – your behavior and the content accessed by your system can be tied to you as well. That is the essence of how all Internet analytics and advertising software works.
Even if you turn off cookies, Adobe Flash can create its own tracking cookies called Local Shared Objects (LSOs) which can be very difficult to disable. A 2009 Wired.com article claimed that more than half the Web’s top sites use Flash cookies to track users and even re-spawn conventional cookies that users may delete.
Don’t believe me? Try Panopticlick from the Electronic Frontier Foundation.
On my own computer, instances of Google Chrome and Microsoft Internet Explorer provide all the information needed to make my system uniquely identifiable to the websites that I visit. And since the identifiers shown in Panopticlick together with other externally identifiable information such as IP address seldom change, websites that I visit are able to tie my online activity – all of it – together in their databases.
Perhaps even more surprising, HTTP status codes enable websites to know whether you’re currently logged into other sites like Gmail, Facebook, Twitter, and undoubtedly thousands of others; while history hijacking can provide websites with a comprehensive list of the other sites that you have visited.
Few browsers can protect against these tactics, and the corporate mainstream has already begun to take advantage. For example, a 2010 University of California at San Diego study identifies sites operated by ESPN, Morningstar, and numerous others as using history hijacking to extract visitors’ online history.
Through use of these exploits it’s also possible for websites to derive a visitor’s gender, age, cultural background, political affiliations, social tendencies, and numerous other details – both public and very private – with a precision that few could believe possible.
Protecting Your Privacy
I argue that it’s just too simplistic to say that anyone should be able to “opt out” of providing identifying information to online providers as some have suggested. How exactly is the checkout process supposed to work at any ecommerce site if no identifying information can be transmitted or stored? How can you identify yourself to Gmail without Google credentials?
The reality is that the transmission of personally identifying information is a fact of life for those of us using the Internet. But as an information consumer it’s critical to make informed choices, beyond just “not using any computer connected to the Internet.”
To safeguard your online privacy, I encourage you to treat every website you visit as though it’s interested in knowing your most intimate details – but do you care?
Read my next post to find out why you should definitely care.
And, to protect yourself I encourage you to treat all information that you access, write or share online, including web-based email, as publicly visible – as though your spouse, children, boss and worst enemy can see all of it today – and at any time in the future.
Perhaps Google’s outgoing chief executive Eric Schmidt put it most pointedly: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
It may sound menacing, but when it comes to the Internet it’s certainly true.
In part 2 of this article we’ll explore why pundits are so interested in online privacy, why so few Americans seem to care, and why we should all be concerned.
Wes Miller is a Research Vice President at Directions on Microsoft in Kirkland, Washington. Wes previously served as a product manager at several Austin, TX, Internet and security software companies, including Winternals Software, and spent seven years at Microsoft working in the MSN and the Windows Core OS divisions as a Program Manager. Wes has also contributed numerous articles to TechNet Magazine. | <urn:uuid:442695f6-a12f-45e6-8ca6-685c251bd498> | CC-MAIN-2017-04 | http://www.identityweek.com/the-truth-about-online-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00439-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922781 | 1,445 | 2.671875 | 3 |
A few months ago when I upgraded my home network, I moved two wireless routers and a NAS to my living room--to the corner when I have a "corn plant" or Dracaena Massangeana, a plant with sturdy canes said to tolerate neglect. I'm testimony to that plant's hardiness, since other plants seem to wither as soon as I bring them into my home, but this one has thrived for years in this spot. After surrounding the plant with Wi-Fi devices, though, it's surely been looking sicker, with only a few leaves left fighting.
Coincidence? Maybe not. A science experiment by a group of Danish 9th-graders suggests the radiation from the Wi-Fi routers may be to blame.
In the five girls' biology experiment, 400 cress seeds were divided into 12 trays and placed into two rooms with the same temperature, sunlight, and watering conditions. Half of the trays, however, were set in a room along with two routers. After 12 days of observation and measurement, the results were obvious: The seeds in the router room not only didn't grow, some mutated and died.
This experiment wasn't done in a controlled, professional environment, so you can take it with a huge grain of salt. However, as ABC News points out, a similar experiment conducted by Wageningen University associated heavy Wi-Fi signals with tree sickness.
We won't know for sure until this science experiment is repeated for scientific thoroughness, but since there's no harm in it, I'm going to just move my sad, Charlie Brown-ish tree to another room. If you have a plant you care about too, you might want to distance your Wi-Fi routers and mobile devices from it, just in case.
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:c83a03ac-f2ca-4906-96fd-dd39399ffd6c> | CC-MAIN-2017-04 | http://www.itworld.com/article/2706780/consumerization/are-wi-fi-signals-killing-your-plants-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975276 | 424 | 2.515625 | 3 |
The European Commission project, to be undertaken by UK connectivity measurement firm SamKnows, aims to depict levels of speed and reliability provided by internet service providers (ISPs) across the European member states. According to the BBC, once the research is completed the data will be accessible to both consumers and ISPs.
In order to collect the data SamKnows are looking to recruit 10,00 volunteers. These volunteers will need to plug in a small device, known as a whitebox, that is attached to their broadband connection. When the broadband connection is inactive the whitebox runs a series of tests to measure the speed and performance of the line.
SamKnows previously conducted a similar project with Ofcom in the UK and discovered that the actual performance of some major ISPs was well below the advertised speeds. According to SamKnows Executive, Alex Salter, the test would be “a large-scale version of the UK project” and would help everything from government investment drives through to enabling consumers to make informed decisions about choosing an ISP.
For more information on the experiment then please visit the SamKnows website.
(Image by Taco Witte) | <urn:uuid:37d5f6df-3ddf-4aef-94c4-f6f213d7b413> | CC-MAIN-2017-04 | https://www.gradwell.com/2011/09/29/european-broadband-put-to-the-test/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955898 | 234 | 2.578125 | 3 |
Meltdown: The Predictable Distortion of Global Warming
Patrick Michaels the author of Meltdown: The Predictable Distortion of Global Warming by Scientists, Politicians, and the Media joins Craig to discuss the truth about Global Warming.
There is no politically acceptable technological strategy at this time that would result in a significant change to the warming trajectory that surface temperatures are on.
If every nation on earth lived up to the Kyoto Protocol, the amount of warming that would be “saved” would be 0.07 degrees Celsius per half-century, and amount too small to measure.
Because instruments like Kyoto cost money, they destroy capital that could better be used for investment in future technologies. Ironically, these failing (and failed) attempts to “do” something about warming delay the time by which technologies that may really be effective can be implemented. All invention and distribution of technology requires capital.
Other, much less expensive technologies allows us to live with environmental and climate extremes, and this type of accidental “adaptation” to climate change will continue in the future, as long as, again, the capital is available.
Here’s an example of adaptation: people in hurricane-prone regions build their homes on pilings so that they are not destroyed by the storm surge. As an example, in the North Carolina Outer Banks (which experiences more hurricanes than just about anywhere on earth), used to build without this protection. But when they realized it was the water (not the wind) that was causing damage, they elevated their homes. As a result, visibility has dramatically improved, and many of these homes can see both the ocean (at sunrise) and the Albemarle Sound (sunset). Rental rates approach $15,000 a week in the homes with the best views. Consequently, people adapted to prospective sea-level rises of approximately 12 feet in 30 minutes, a typical strong storm surge. It is therefore strange to think that people can’t adapt to 12 inches of sea level rise over 50-100 years!
Death rates from tornadoes have dropped dramatically with the evolution of weather radar and modern communications. The same applies to flash floods. While the frequency of these events MAY change with warming, the technological adaptation via forecasting and preparation will almost certainly be greater than any climatic change.
Finally, heat-related death rates are dropping dramatically in North American cities. In fact, the more frequent heat waves are, the fewer people die. This is clear evidence of adaptation, as cities warm naturally, with or without global warming, and this warming has been accompanied by a decline in heat-related deaths. The reasons for the decline are infrastructural, technological, and political.
“What’s Hot, and What’s Not,” San Diego Union-Tribune (Online), March 11, 2007
“Inconvenient Truths,” National Review (Online), February 23, 2007
“New Climate for Global Energy Policy,” San Francisco Chronicle, February 2, 2007
“Live with Climate Change,” USA Today, February 2, 2007
“Global Warming: So What Else Is New?,” San Francisco Chronicle, February 2, 2007
Patrick Michaels Bio: He is a past president of the American Association of State Climatologists and was program chair for the Committee on Applied Climatology of the American Meteorological Society. Michaels is a contributing author and reviewer of the United Nations Intergovernmental Panel on Climate Change. His articles have appeared in the Washington Post, the Wall Street Journal, the Los Angeles Times, and USA Today, Houston Chronicle, and the Journal of Commerce. He holds A.B. and S.M. degrees in biological sciences and plant ecology from the University of Chicago, and he received his Ph.D. in ecological climatology from the University of Wisconsin at Madison.
About Cato: The Cato Institute is a non-profit public policy research foundation headquartered in Washington, D.C. The Institute is named for Cato’s Letters, a series of libertarian pamphlets that helped lay the philosophical foundation for the American Revolution. Learn more about Cato and Patrick Michaels by going to www.cato.org. | <urn:uuid:51f16e7f-de22-49e9-93a1-6441f2432726> | CC-MAIN-2017-04 | http://craigpeterson.com/environment/global-warming/meltdown-the-predictable-distortion-of-global-warming/18 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936224 | 871 | 3.046875 | 3 |
Optical fibers are long, thin strands of very pure glass about the diameter of a human hair. They are arranged in bundles called fiber optic cables and used to transmit light signals over long distances. Optical fibers are made up of two concentric cylindrical glasses. The inner core is surrounded by a concentric core made up of glass and of lower refractive index known as cladding. Protective layer with which the cladding is surrounded is called as protective sheath. The total internal reflection takes place at the cladding – core interface. The core diameter ranges in a few microns and is not much larger than the wavelength of light used. When high data transmission rates are not required, core with comparatively large diameters are used which may be of a few hundred microns.
- Cheap - Several miles of optical cable can be made cheaper than equivalent lengths of copper wire. This saves your provider (cable TV, Internet) and you money.
- Thinner - Optical fibers can be drawn to smaller diameters than copper wire.
- Higher carrying capacity – Because optical fibers are thinner than copper wires, more fibers can be bundled into a given-diameter cable than copper wires. This allows more phone lines to go over the same cable or more channels to come through the cable into your cable TV box.
- Less signal degradation – The loss of signal in optical fiber is less than in copper wire.
- Light signals – Unlike electrical signals in copper wires, light signals from one fiber do not interfere with those of other fibers in the same cable. This means clearer phone conversations or TV reception.
- Lightweight – An optical cable weighs less than a comparable copper wire cable. Fiber optic cable take up less space in the ground.
- Long Life: Optical fibers usually live long for about more than 100 years.
- Limited Application: Can only be used on ground, but cannot leave the ground or be associated with the mobile communication.
- Nuclear Radiations: On exposure to the nuclear radiations the glass darken and the harder the glass is easily it’ll lose its color.
- Low Power: Light emitting sources are limited to low power and tough high power emitters are available but are costly.
- Fragility : The optical fibers are easily broken.
- Distance: The distance between the transmitter and receiver must be short or if it is long signal repeaters are used to ensure the signals are not weak. | <urn:uuid:c73305d2-5f60-4832-a17c-35b1ef529bf8> | CC-MAIN-2017-04 | http://www.fs.com/blog/the-advantages-and-disadvantages-of-optical-fibers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00431-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915889 | 495 | 3.890625 | 4 |
NASA is looking for a few good experiments to run in space. The space agency this week said it was seeking research ideas from private entities to want to do research on board the International Space Station.
NASA said it was looking to expand the use of the ISS by providing access to the lab for the conduct of basic and applied research, technology development and industrial processing to private entities -- including, but not limited to, commercial firms, non-profit institutions, and academic institutions. US federal, state and local government entities, and could also propose research.
NASA said it was particularly interested in, but not limited to, two areas of ISS expansion.
1. Payload Integration and Operations Support Services: There is an emphasis on systems or process that would enable new areas or research or production not currently available on ISS. Support services may include project-specific payload integration and operations support on an as needed bases in response to specific requirements as they emerge, NASA stated.
2. Support Equipment and Instrumentation: NASA said it is interested in concepts that advance the capabilities of the ISS for utilization including providing standard interfaces that simplify and enable multiple research areas; expand the on orbit capabilities to allow for in-situ analysis and evaluation of payload results; and expand the on orbit capabilities to allow for more sophisticated operations on board.
NASA said using the ISS as a national lab could help develop a number of applications in areas such as biotechnology, energy, engineering and remote sensing.
Expanding the role of the ISS would be a welcome use of the facility as some experts have complained that the ISS, for all about $50 billion cost to NASA, is under-utilized.
The international team that runs the ISS which includes Canada, Europe, Japan, Russia, and the US says now that the ISS is mostly complete, there will be an expansion of space-based research. Nearly 150 experiments are currently under way on the station, and more than 400 experiments have been conducted since research began nine years ago, the group says. These experiments already are leading to advances in the fight against food poisoning, new methods for delivering medicine to cancer cells and the development of more capable engines and materials for use on Earth and in space.
NASA has identified 197 US-integrated investigations that have been conducted on orbit as of April 2009. According to NASA, as of February 2009, US ISS and research have resulted in over 160 publications, including articles on topics such as protein crystallization, plant growth, and human research. According to NASA, there have also been approximately 25 technology demonstration experiments flown on the ISS.
There was concern that NASA, because of budget concerns might only fund use of he ISS for the next five years. But in his address to NASA this month President Obama said he wanted to extend ISS support at least five years beyond the current 2015 end date.
And there has already been an uptick in new research for the ISS.
Defense Advanced Research Projects Agency (DARPA) said this month they are looking to develop advanced 3-D models, algorithms that control clustered flight and electromagnetic thrust technology all in the zero-gravity environment of the ISS. Specifically, they were looking for new research to conduct using the Synchronized Position, Hold, Engage, and Reorient Experimental Satellites (SPHERES) experiment on the ISS.
Scientists at the
The Massachusetts Institute of Technology Space Systems Laboratory developed the three SPHERES satellites and they have been onboard the ISS since 2006 to provide DARPA, NASA, and other researchers with a system that could help those agencies test technologies for use in formation flight and autonomous docking, rendezvous and reconfiguration algorithms, MIT stated.
NASA also is getting into the spirit saying it will send its newest humanoid robot known as Robonaut2 - or R2 -- capable of using the same tools as humans letting them work closely with people into space onboard the space shuttle's final mission.
NASA and General Motors built the 300lb R2 as a faster, more dexterous and more technologically advanced robot than past humanoid bots. R2 can use its hands to do work beyond the scope of prior humanoid machines and can easily work safely alongside people, a necessity both on Earth and in space, NASA stated. It is also stronger: able to lift, not just hold, a 20-pound weight (about four times heavier than what other dexterous robots can handle) both near and away from its body, NASA stated.
Follow Michael Cooney on Twitter: nwwlayer8
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:ad830bdf-195d-4b82-ab1f-a22b145da35f> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2230575/security/nasa-expands-lab-role-of-international-space-station.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955556 | 924 | 3.25 | 3 |
Sometime ago I wrote about Adapteva, a company that ran a very successful Kickstarter campaign to create an affordable supercomputer (I think $99 for a machine capable of 26 GFLOPS qualifies as an affordable supercomputer). Oh, and it's open source!
Not only have they shipped the first batch of machines but they have also started taking orders through their Web site.
The final product spec is:
- Zynq-7020 dual-core ARM A9 CPU (runs Linux)
- Epiphany Multicore Accelerator (16 cores for $99 now; 64 cores, price to be announced)
- 1GB SDRAM
- MicroSD Card
- USB 2.0 (two)
- Four expansion connectors [option]
- Ethernet 10/100/1000
- HDMI connection
- Ships with free open source Epiphany development tools that include C compiler, multicore debugger, Eclipse IDE, OpenCL SDK/compiler, and run time libraries.
- Dimensions are 3.4” x 2.1”
What I think is really exciting is that you can cluster these boards ... like this:
The picture above shows:
... the first large scale Parallella cluster. The system consisted of 42 Parallella boards (for a total of 756 CPUs), with the total power consumption coming in under 500 Watts. This makes it possibly one of the densest clusters in the world thanks to the Parallella board!
So, 42 boards ... that's $4,185 for a performance of 1,092 GFLOPS at less than half a watt per GFLOP ... not bad for something that's desktop scale. | <urn:uuid:9d1af24b-c0bf-4132-af82-814aea11d632> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2225048/data-center/supercomputers-for-everyone--adapteva-ships-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00183-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914114 | 346 | 2.53125 | 3 |
David Fifield, University of California, Berkeley
Chang Lan, University of California, Berkeley
Rod Hynes, Psiphon Inc
Percy Wegmann, Brave New Software
Vern Paxson, University of California, Berkeley and the International Computer Science Institute
[PDF version of this document.]
Some source code and data for this paper:
git clone https://repo.eecs.berkeley.edu/git-anon/users/fifield/fronting-paper.git
[Presentation video and slides.]
We describe “domain fronting,” a versatile censorship circumvention technique that hides the remote endpoint of a communication. Domain fronting works at the application layer, using HTTPS, to communicate with a forbidden host while appearing to communicate with some other host, permitted by the censor. The key idea is the use of different domain names at different layers of communication. One domain appears on the “outside” of an HTTPS request—in the DNS request and TLS Server Name Indication—while another domain appears on the “inside”—in the HTTP Host header, invisible to the censor under HTTPS encryption. A censor, unable to distinguish fronted and non-fronted traffic to a domain, must choose between allowing circumvention traffic and blocking the domain entirely, which results in expensive collateral damage. Domain fronting is easy to deploy and use and does not require special cooperation by network intermediaries. We identify a number of hard-to-block web services, such as content delivery networks, that support domain-fronted connections and are useful for censorship circumvention. Domain fronting, in various forms, is now a circumvention workhorse. We describe several months of deployment experience in the Tor, Lantern, and Psiphon circumvention systems, whose domain-fronting transports now connect thousands of users daily and transfer many terabytes per month.
Censorship is a daily reality for many Internet users. Workplaces, schools, and governments use technical and social means to prevent access to information by the network users under their control. In response, those users employ technical and social means to gain access to the forbidden information. We have seen an ongoing conflict between censor and censored, with advances on both sides, more subtle evasion countered by more powerful detection.
Circumventors, at a natural disadvantage because the censor controls the network, have a point working in their favor: the censor’s distaste for “collateral damage,” incidental overblocking committed in the course of censorship. Collateral damage is harmful to the censor, because the overblocked content has economic or social value, so the censor tries to avoid it. (Any censor not willing to turn off the Internet completely must derive some benefit from allowing access, which overblocking harms.) One way to win against censorship is to entangle circumvention traffic with other traffic whose value exceeds the censor’s tolerance for overblocking.
In this paper we describe “domain fronting,” a general-purpose circumvention technique based on HTTPS that hides the true destination of a communication from a censor. Fronting works with many web services that host multiple domain names behind a frontend server. These include such important infrastructure as content delivery networks (CDNs) and Google’s panoply of services—a nontrivial fraction of the web. (The section on fronting-capable web services is a survey of suitable services.) The utility of domain fronting is not limited to HTTPS communication, nor to accessing only the domains of a specific web service. It works well as a domain-hiding component of a larger circumvention system, an HTTPS tunnel to a general-purpose proxy.
The key idea of domain fronting is the use of different domain names at different layers of communication. In an HTTPS request, the destination domain name appears in three relevant places: in the DNS query, in the TLS Server Name Indication (SNI) extension, and in the HTTP Host header. Ordinarily, the same domain name appears in all three places. In a domain-fronted request, however, the DNS query and SNI carry one name (the “front domain”), while the HTTP Host header, hidden from the censor by HTTPS encryption, carries another (the covert, forbidden destination).
The censor cannot block on the contents of the DNS request nor the SNI without collaterally blocking the front domain. The Host header is invisible to the censor, but visible to the frontend server receiving the HTTPS request. The frontend server uses the Host header internally to route the request to its covert destination; no traffic ever reaches the putative front domain. Domain fronting has many similarities with decoy routing; it may be understood as “decoy routing at the application layer.” A fuller comparison with decoy routing appears in the section on related work.
This Wget command demonstrates domain fronting on Google, one of many fronting-capable services. Here, the HTTPS request has a Host header for maps.google.com, even though the DNS query and the SNI in the TLS handshake specify www.google.com. The response comes from maps.google.com.
$ wget -q -O - https://www.google.com/ --header 'Host: maps.google.com' | grep -o '<title>.*</title>'
<title>Google Maps</title>
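The same fronted request can be made programmatically. The following sketch, written here in Go purely for illustration, reproduces the Wget example: the request URL controls the DNS query and the TLS SNI, while the explicitly set Host field carries the hidden destination inside the encrypted payload. In a real circumvention client the Host header would of course name a covert proxy rather than Google Maps.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // The URL determines the DNS lookup and the TLS SNI: both say
    // www.google.com, which is all the censor can observe.
    req, err := http.NewRequest("GET", "https://www.google.com/", nil)
    if err != nil {
        panic(err)
    }
    // Setting req.Host overrides the HTTP Host header, which travels
    // inside TLS. The frontend server routes the request on this value.
    req.Host = "maps.google.com"

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Printf("%s, %d bytes from maps.google.com\n", resp.Status, len(body))
}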
A variation is “domainless” fronting, in which there is no DNS request and no SNI. It appears to the censor that the user is browsing an HTTPS site by its IP address, or using a web client that does not support SNI. Domainless fronting can be useful when there is no known front domain with sufficiently high collateral damage; it leaves the censor the choice of blocking an entire IP address (or blocking SNI-less connections entirely), rather than blocking only a single domain. According to our communication with the International Computer Science Institute’s certificate notary, which observes on the order of 50 million TLS connections daily, 16.5% of TLS connections in June 2014 lacked SNI, which is enough to make it difficult for a censor to block SNI-less TLS outright.
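As an illustration of the domainless style, the Go sketch below connects to an edge server by IP address, so there is no DNS query and the ClientHello carries no server_name extension (SNI may not carry an IP literal, so Go omits it). The IP address and Host value are placeholders, not real endpoints. Because the URL names an IP address, hostname-based certificate verification does not apply; a deployed client would verify the server's certificate itself rather than disabling verification as this abbreviated sketch does.

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    transport := &http.Transport{
        TLSClientConfig: &tls.Config{
            // Shortcut for the sake of brevity: a real client should
            // verify the certificate chain itself (for example in
            // VerifyPeerCertificate) against the name it expects the
            // edge server to present.
            InsecureSkipVerify: true,
        },
    }
    client := &http.Client{Transport: transport}

    // Connecting by IP address: no DNS query, no SNI in the ClientHello.
    req, err := http.NewRequest("GET", "https://192.0.2.1/", nil)
    if err != nil {
        panic(err)
    }
    // The covert destination still rides in the encrypted Host header.
    req.Host = "forbidden.example.net"

    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}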
Domain fronting works with CDNs because a CDN’s frontend server (called an “edge server”), on receiving a request for a resource not already cached, forwards the request to the domain found in the Host header (the “origin server”). (There are other ways CDNs may work, but this “origin pull” configuration is common.) The client issues a request that appears to be destined for an unrelated front domain, which may be any of the CDN’s domains that resolve to an edge server; this fronted request is what the censor sees. The edge server decrypts the request, reads the Host header and forwards the request to the specified origin, which in the circumvention scenario is a general-purpose proxy. The origin server, being a proxy, would be blocked by the censor if accessed directly—fronting hides its address from the censor.
On services that do not automatically forward requests, it is usually possible to install a trivial “reflector” web application that emulates an origin-pull CDN. In this case, fronting does not protect the address of the origin per se; rather it protects the address of the reflector application, which in turn forwards requests to the origin. Google App Engine is an example of such a service: against a censor that blocks the App Engine domain appspot.com but allows other Google domains, domain fronting enables access to a reflector running on appspot.com.
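To show how little logic a reflector needs, here is a minimal sketch in Go built on the standard library's reverse proxy. The upstream URL and listening port are hypothetical, and a reflector deployed on a particular platform (App Engine, for instance) would be written against that platform's own runtime and APIs; this program only illustrates the idea of emulating an origin-pull CDN in a few lines.

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Hypothetical covert destination: a long-lived, general-purpose
    // proxy whose address the censor never sees directly.
    upstream, err := url.Parse("https://proxy.example.net/")
    if err != nil {
        log.Fatal(err)
    }
    // NewSingleHostReverseProxy rewrites each incoming request to point
    // at the upstream and copies the response back to the client.
    reflector := httputil.NewSingleHostReverseProxy(upstream)
    log.Fatal(http.ListenAndServe(":8080", reflector))
}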
No matter the specifics of particular web services, as a general rule they do not forward requests to arbitrary domains—only to domains belonging to one of their customers. In order to deploy a domain-fronting proxy, one must become a customer of the CDN (or Google, etc.) and pay for bandwidth. It is the owner of the covert domain who pays the bandwidth bills, not the owner of the front domain, which need not have any relation to the covert domain beyond using the same web service.

The remainder of this paper is devoted to a deep exploration of domain fronting as we have deployed it in practice. We first explain our threat model and assumptions. We then give general background on the circumvention problem and outline its three grand challenges: address-based blocking, content-based blocking, and active probing. Domain-fronting systems are capable of meeting all three challenges, forcing censors to use more expensive, less reliable censorship techniques that have heretofore not been seen in practice. Next is a survey of CDNs and other services that are usable for fronting; we identify general principles as well as idiosyncrasies that affect implementation. The following sections are three case studies of deployment: for Tor, for Lantern, and for Psiphon. We sketch domain fronting’s resistance to statistical traffic analysis attacks. The final sections are general discussion and a summary.
Our threat model includes four actors: the censor, the censored client, the intermediate web service, and the covert destination (a proxy server). Circumvention is achieved when the client reaches the proxy, because the proxy grants access to any other destination. The client and proxy cooperate with each other. The intermediate web service need not cooperate with either, except to the extent that it does not collude with the censor.
The censor controls a (generally national) network and the links into and within it. The censor can inspect traffic flowing across all links under its control and can block or allow any packet. The censor can inject and replay traffic, and operate its own clients and servers. The client lies within the censor’s network, while the intermediate web service and proxy lie outside. The censor blocks direct communication between the client and the proxy, but allows HTTPS between the client and at least one front domain or IP address on the intermediate web service.
The client, intermediate web service, and destination proxy are uncontrolled by the censor. The censor does not control a trusted certificate authority: it cannot man-in-the-middle TLS without being caught by ordinary certificate validation. The client is able to obtain the necessary circumvention software.
In this section we survey a variety of web services and evaluate their suitability for domain fronting. Most of the services we evaluated support fronting in one form or another, but each has its own quirks and performance characteristics. The survey is not exhaustive, but it includes many of the most prominent content delivery networks; the paragraphs below summarize each service in turn.
Pricing across services varies widely, and depends on complicated factors such as geographical region, bandwidth tiers, price breaks, and free thresholds. Some services charge per gigabyte or per request, some for time, and some for other resources. Most services charge between $0.10 and $0.20 per GB; usually bandwidth is cheaper in North America and Europe than in the rest of the world.
Recall that even services that support domain fronting will front only for the domains of their own customers. Deployment on a new service typically requires becoming a customer, and an outlay of time and money. Of the services surveyed, we have at some time deployed on Google App Engine, Amazon CloudFront, Microsoft Azure, Fastly, and CloudFlare. The others we have only tested using manually crafted HTTP requests.
Google App Engine is a web application platform. Users can upload a web app with nothing more than a Google account. Each application gets a user-specified subdomain of appspot.com, for which almost any Google domain can serve as a front, including google.com, gmail.com, googleapis.com, and many others. App Engine can run only web applications serving short-lived requests, not a general-purpose proxy such as a Tor bridge. For that reason we use a tiny “reflector” application that merely forwards incoming requests to a long-lived proxy running elsewhere. Fronting through App Engine is attractive in the case where the censor blocks appspot.com but at least one other Google domain is reachable. App Engine costs $0.12/GB and $0.05 for each “instance hour” (the number of running instances of the app is adjusted dynamically to meet load, and you pay for each instance after the first). Applications are free of charge if they stay below certain usage thresholds, for example 1 GB of bandwidth daily, making possible a distributed, upload-your-own-app model in the style of GoAgent.
Amazon CloudFront is the CDN of Amazon Web Services. A CloudFront “distribution,” as a CDN configuration is called, associates an automatically generated subdomain of cloudfront.net with an origin server. The front domain may be any other cloudfront.net subdomain (all of which support HTTPS through a wildcard certificate), or any other DNS alias for them. CloudFront is easy to set up: one must only set the origin domain and no reflector app is needed. Pricing per GB ranges from $0.085 for the United States and Europe, up to $0.25 for South America, with price breaks starting at 10 TB/month. There is an additional charge per 10,000 HTTPS requests, ranging from $0.0075 in the United States to $0.0160 in South America. CloudFront has a usage tier that is free of charge for a year, subject to a bandwidth limit of 50 GB/month.
Microsoft Azure is a cloud computing platform that features a CDN. Like CloudFront, Azure assigns automatically generated subdomains of vo.msecnd.net, any of which can front for any other. There are other possible front domain names, like ajax.aspnetcdn.com, that are used as infrastructure by many web sites, lending them high collateral damage. Unlike CloudFront’s, Azure’s CDN forwards only to Azure-affiliated domains, so as with App Engine, it is necessary to run a reflector app that forwards requests to some external proxy. Bandwidth costs $0.087–0.138/GB, with price breaks starting at 10 TB/month.
Fastly is a CDN. Unlike most CDNs, Fastly validates the SNI: if SNI and Host do not match, the edge server returns an HTTP 400 (“Bad Request”) error. However, if the TLS ClientHello simply omits SNI, then the Host may be any Fastly domain. Fastly therefore requires the “domainless” fronting style. Fastly’s pricing model is similar to CloudFront’s. They charge between $0.12 and $0.19 per GB and $0.0075 and $0.009 per 10,000 requests, depending on the region.
CloudFlare is a CDN also marketed as protection against denial-of-service attacks. Like Fastly, CloudFlare checks that the SNI matches the Host header and therefore requires sending requests without SNI. CloudFlare charges a flat fee per month and does not meter bandwidth. There is a no-cost plan intended for small web sites, which is adequate for a personal domain-fronting installation. The upgraded “business” plan is $200/month.
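Fronting through Fastly or CloudFlare therefore means producing a ClientHello with no SNI at all. In Go this falls out of dialing the edge server by IP address with an empty ServerName; because built-in certificate verification is tied to ServerName, the sketch replaces it with an explicit VerifyPeerCertificate callback. The IP address and front domain are placeholders, and error handling is minimal.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
	"log"
)

func main() {
	front := "global-ssl.fastly.example" // hypothetical front, used only for verification
	conf := &tls.Config{
		// An empty ServerName (and dialing by IP) omits the SNI extension, but it
		// also disables automatic verification, so we verify the chain ourselves.
		InsecureSkipVerify: true,
		VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
			if len(rawCerts) == 0 {
				return errors.New("no certificate presented")
			}
			certs := make([]*x509.Certificate, 0, len(rawCerts))
			for _, raw := range rawCerts {
				c, err := x509.ParseCertificate(raw)
				if err != nil {
					return err
				}
				certs = append(certs, c)
			}
			opts := x509.VerifyOptions{DNSName: front, Intermediates: x509.NewCertPool()}
			for _, c := range certs[1:] {
				opts.Intermediates.AddCert(c)
			}
			_, err := certs[0].Verify(opts)
			return err
		},
	}
	conn, err := tls.Dial("tcp", "203.0.113.10:443", conf) // placeholder edge IP
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("TLS established without SNI; the Host header may now name any customer domain")
}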
Akamai is a large CDN. Requests may be fronted through the special HTTPS domain a248.e.akamai.net, or other customer-configured DNS aliases, though it appears that certain special domains get special treatment and do not work as fronts. Akamai has the potential to provide a lot of cover: in 2010 it carried 15–20% of all web traffic. Akamai does not publish pricing details, but it is reputed to be among the pricier CDNs. We found a reseller, Cache Simple, that charges $400 for 1000 GB/month, and $0.50/GB after that. The special domain a248.e.akamai.net began to be DNS-poisoned in China in late September 2014 (possibly because it had been used to mirror blocked web sites), necessitating an alternative front domain in that country.
Level 3 is a tier-1 network operator that has a CDN. Unlike other services in this section, Level 3 does not appear to support domain fronting. However, we mention it because it may be possible to build similar functionality using distinct URL paths under the domain secure.footprint.net (essentially using the path, rather than the Host header, as a hidden tag). Level 3 does not publish pricing data. We found a reseller, VPS.NET, that quotes $34.95 for the first 1000 GB and $0.10–0.25/GB thereafter. Level 3’s special HTTPS domain secure.footprint.net is also now DNS-poisoned in China.
There are other potential deployment models apart from CDNs. For example, there are cheap web hosts that support both PHP and HTTPS (usually with a shared certificate). These features are enough to support a reflector app written in PHP, which users can upload under their own initiative. In this do-it-yourself model, blocking resistance comes not from a strong front domain, but from the diffuseness of many proxies, each carrying only a small amount of traffic. The URLs of these proxies could be kept secret, or could be carefully disseminated by a proxy-distribution service like BridgeDB. Psiphon uses this approach when in “unfronted” mode.
Another alternative is deployment with the cooperation of an existing important web site, the blocking of which would result in high collateral damage. It is a nice feature of domain fronting that it does not require cooperation by the intermediate web service, but if you have cooperation, you can achieve greater efficiency. The important web site could, for example, reserve a magic URL path or domain name, and forward matching requests to a proxy running locally. The web site does two jobs: its ordinary high-value operations that make it expensive to block, and a side job of handling circumvention traffic. The censor cannot tell which is which because the difference is confused by HTTPS.
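With cooperation, the "magic URL path" idea needs only a few lines in the site's request handler. The following Go sketch is hypothetical: the path prefix, backend address, and certificate files are invented for illustration.

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// Local circumvention proxy (the "side job").
	backend, err := url.Parse("http://127.0.0.1:8000/")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/reserved-magic-path/") {
			proxy.ServeHTTP(w, r) // relay circumvention traffic
			return
		}
		// The day job: ordinary high-value content that makes the site costly to block.
		w.Write([]byte("ordinary page\n"))
	})
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}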
We implemented domain fronting as a Tor pluggable transport called meek. meek combines domain fronting with a simple HTTP-based tunneling proxy. Domain fronting enables access to the proxy; the proxy transforms a sequence of HTTP requests into a Tor data stream.
The components of the system appear in the above figure. meek-client acts as an upstream proxy for the client’s Tor process. It is essentially a web client that knows how to front HTTPS requests. When meek-client receives an outgoing chunk of data from a client Tor process, it bundles the data into a POST request and fronts the request through the web service to a Tor bridge. The Tor bridge runs a server process, meek-server, that decodes incoming HTTP requests and feeds their data payload into the Tor network.
The server-to-client stream is returned in the bodies of HTTP responses. After receiving a client request, meek-server checks for any pending data the bridge has to send back to the client, and sends it back in the HTTP response. When meek-client receives the response, it writes the body back into the client Tor.
The body of each HTTP request and response carries a small chunk of an underlying TCP stream (up to 64 KB). The chunks must be reassembled, in order, without gaps or duplicates, even in the face of transient failures of the intermediate web service. meek uses a simple approach: requests and responses are strictly serialized. The client does not send a second chunk of data (i.e., make another request) until it has received the response to its first. The reconstructed stream is simply the concatenation of bodies in the order they arrive. This technique is simple and correct, but inefficient, because it requires a full round-trip for every chunk sent. See the sections on Lantern and Psiphon for alternative approaches that increase efficiency.
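In code, the strict serialization is a single loop: read a chunk from the local stream, POST it, append the response body to the downstream, and only then read the next chunk. The Go sketch below is a simplification of the idea, not meek's actual implementation; the session header and fronting details are omitted here.

package meeksketch

import (
	"bytes"
	"io"
	"net/http"
)

// tunnel relays a local byte stream (for example, the connection from a local
// Tor client) through strictly serialized HTTP POST requests to url.
func tunnel(local io.ReadWriter, client *http.Client, url string) error {
	buf := make([]byte, 64*1024) // at most 64 KB of upstream data per request
	for {
		n, err := local.Read(buf)
		if err != nil {
			return err
		}
		// Exactly one request in flight: the next chunk is not sent until
		// this response has been received in full.
		resp, err := client.Post(url, "application/octet-stream", bytes.NewReader(buf[:n]))
		if err != nil {
			return err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		// The downstream is reconstructed as the concatenation of response bodies.
		if _, err := local.Write(body); err != nil {
			return err
		}
	}
}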
meek-server must be able to handle many simultaneous clients. It maintains multiple connections to a local Tor process, one for each active client. The server maps client requests to Tor connections by “session ID,” a token randomly generated by the client at startup. The session ID plays the same role in the meek protocol that the (source IP, source port, dest IP, dest port) tuple plays in TCP. The client sends its session ID in a special X-Session-Id HTTP header. meek-server, when it sees a session ID for the first time, opens a new connection to the local Tor process and adds a mapping from ID to connection. Later requests with the same session ID reuse the same Tor connection. Sessions are closed after a period of inactivity. This figure shows a sample of the protocol.
POST / HTTP/1.1
Host: forbidden.example
X-Session-Id: cbIzfhx1HnR
Content-Length: 517

\x16\x03\x01\x02\x00\x01\x00\x01\xfc\x03\x03\x9b\xa9...

HTTP/1.1 200 OK
Content-Length: 739

\x16\x03\x03\x00\x3e\x02\x00\x00\x3a\x03\x03\x53\x75...

POST / HTTP/1.1
Host: forbidden.example
X-Session-Id: cbIzfhx1HnR
Content-Length: 0

HTTP/1.1 200 OK
Content-Length: 75

\x14\x03\x03\x00\x01\x01\x16\x03\x03\x00\x40\x06\x84...
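On the server side, the session bookkeeping described above reduces to a map from session ID to an open connection, created on first sight and discarded after idleness. A skeletal Go version follows; the Tor address and timeout handling are simplified placeholders, not meek-server's real code.

package meeksketch

import (
	"net"
	"sync"
	"time"
)

type session struct {
	conn     net.Conn  // connection to the local Tor process
	lastSeen time.Time // updated on every request carrying this session ID
}

type sessionTable struct {
	mu       sync.Mutex
	sessions map[string]*session
}

// lookup returns the session for id, dialing a new connection to the local
// Tor process the first time the id is seen.
func (t *sessionTable) lookup(id string) (*session, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.sessions == nil {
		t.sessions = make(map[string]*session)
	}
	if s, ok := t.sessions[id]; ok {
		s.lastSeen = time.Now()
		return s, nil
	}
	conn, err := net.Dial("tcp", "127.0.0.1:9001") // placeholder address of the local Tor port
	if err != nil {
		return nil, err
	}
	s := &session{conn: conn, lastSeen: time.Now()}
	t.sessions[id] = s
	return s, nil
}

// expire closes and forgets sessions that have been idle longer than maxIdle.
func (t *sessionTable) expire(maxIdle time.Duration) {
	t.mu.Lock()
	defer t.mu.Unlock()
	for id, s := range t.sessions {
		if time.Since(s.lastSeen) > maxIdle {
			s.conn.Close()
			delete(t.sessions, id)
		}
	}
}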
HTTP is fundamentally a request-based protocol. There is no way for the server to “push” data to the client without having first received a request. In order to enable the server to send back data, meek-client sends occasional empty polling requests even when it has no data to send. The polling requests simply give the server an opportunity to send a response. The polling interval starts at 100 ms and grows exponentially up to a maximum of 5 s.
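The polling schedule is just a growing timer that resets whenever there is real data to send. A small sketch of the interval logic; the 100 ms floor and 5 s ceiling come from the text, while the growth factor of 2 is an assumption, since the text only says the interval grows exponentially.

package meeksketch

import "time"

const (
	initPollInterval = 100 * time.Millisecond
	maxPollInterval  = 5 * time.Second
	pollMultiplier   = 2 // assumed growth factor
)

// nextPollInterval returns how long to wait before sending the next empty
// polling request. sentData reports whether the last request carried real data.
func nextPollInterval(current time.Duration, sentData bool) time.Duration {
	if sentData {
		return initPollInterval // reset whenever there is actual traffic
	}
	next := current * pollMultiplier
	if next > maxPollInterval {
		next = maxPollInterval
	}
	return next
}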
The HTTP-based tunneling protocol adds overhead. Each chunk of data gets an HTTP header, then the HTTP request is wrapped in TLS. The HTTP header adds about 160 bytes, and TLS adds another 50 bytes or so (the exact amount depends on the ciphersuite chosen by the intermediate web service). The worst-case overhead when transporting a single encrypted Tor cell of about 540 bytes is about 40%, and less when more than one cell is sent at once. We can estimate how much overhead occurs in practice by examining CDN usage reports. In April 2015, the Amazon CloudFront backend for meek received 3,231 GB in 389 M requests, averaging about 8900 bytes per request. If the overhead per request is 210 bytes, then the average overhead is 210/(8900−210) ≈ 2.4%. meek-client reuses the same TLS connection for many requests, so the TLS handshake’s overhead is amortized. Polling requests also use bandwidth, but they are sent only when the connection is idle, so they do not affect upload or download speed.
This figure measures the effect of meek’s overhead on download speed. It shows the time taken to download an 11,536,384-byte file (http://speedtest.wdc01.softlayer.com/downloads/test10.zip) with and without meek, over the three web services on which we have deployed. We downloaded the file 10 times in each configuration. The time to download the file increases by about a factor of 3 when meek is in use. We attribute this increase to the added latency of an indirect path through the CDN, and the latency-bound nature of meek’s naive serialization.
[Table: App Engine, CloudFront, Azure (est.)]
meek’s primary deployment vehicle is Tor Browser, a derivative of Firefox that is preconfigured to use a built-in Tor client. Tor Browser features an easy interface for enabling meek and other pluggable transports. Deployment began in earnest in October 2014 with the release of Tor Browser 4.0, the first release to include meek as an easy selectable option. It runs over Google App Engine, Amazon CloudFront, and Microsoft Azure. The above figure shows the daily average number of concurrent users. (A value of 1,000, for example, means that there were on average 1,000 users of the system at any time during the day.) Also in the figure is a table of monthly costs broken down by web service. Our Azure service is currently running on a free research grant, which does not provide us with billing information. We estimate what Azure’s cost would be by measuring the bandwidth used at the backing Tor bridge, and assuming bandwidth costs that match the geographic traffic mix we observe for CloudFront: roughly 62% from North America and Europe, and 38% from other regions.
Without additional care, meek would be vulnerable to blocking by its TLS fingerprint. TLS, on which HTTPS is based, has a handshake that is largely plaintext and leaves plenty of room for variation between implementations. These differences in implementation make it possible to fingerprint TLS clients. Tor itself was blocked by China in 2011 because of the distinctive ciphersuites it used at the time. The first figure in the appendix shows how meek-client’s fingerprint would appear natively; it would be easy to block because not much other software shares the same fingerprint. The other figures show the fingerprints of two web browsers, which are more difficult to block because they also appear in much non-circumvention traffic.
In order to disguise its TLS fingerprint, meek-client proxies all its HTTPS requests through a real web browser. It looks like a browser, because it is a browser. We wrote extensions for Firefox and Chrome that enable them to make HTTPS requests on another program’s behalf. The browser running the extension is completely separate from the Tor Browser the user interacts with. It runs in the background in a separate process, does not display a user interface, and shares no state with the user’s browser. The extra cost of this arrangement is negligible in terms of latency, because communication with the headless browser occurs over a fast localhost connection, and in terms of CPU and RAM it is the same as running two browsers at once.
The client’s Tor process starts both meek-client and the headless browser, then configures meek-client to proxy its requests through the browser. The headless browser is the only component that actually touches the network. It should be emphasized that the headless browser only makes domain-fronted requests to the front domain; the URLs it requests have no relation to the pages the user browses.
Lantern has a centralized infrastructure for authenticating users and assigning them proxies. Its threat model assumes that the centralized infrastructure may be blocked by censors. Therefore, users must have a priori access to an unblocked proxy (a “fallback”) which they use to bootstrap into the rest of the network.
Lantern originally distributed the IP addresses of fallbacks by embedding them in customized software installers that we sent to users via email autoresponder. This method prevented users from directly downloading Lantern from our website and would have made it easy for censors to discover proxies simply by signing up for Lantern (though in practice we never saw this happen).
We rolled out domain fronting in July 2014, allowing users to download Lantern directly for the first time. The directly downloaded clients proxied all their traffic via domain fronting. After initial testing with Fastly, we changed to a different CDN, which has proven attractive because it has many unblocked front domains, it does not charge for bandwidth, and its API enables us to easily register and unregister proxies.
This figure shows user bandwidth since deployment. After experiencing steady growth, in October 2014 we started randomly assigning direct HTTPS proxies to users who had direct-downloaded Lantern. This diverted some traffic from domain fronted servers to more efficient direct servers. In December 2014 and January 2015, there was a dramatic surge in domain-fronted traffic, which jumped from 1 MB/s to 100 MB/s within those two months. Activity has remained at around the 100 MB/s level since then.
Lantern’s domain fronting support is provided by an application called flashlight, which uses library layers called enproxy and fronted. enproxy provides an abstract network connection interface that encodes reads and writes as a sequence of HTTP requests via a stateful enproxy proxy. enproxy allows flashlight to proxy any streaming-oriented traffic like TCP. Unlike Tor’s implementation of meek, enproxy supports full-duplex transfer, which is handy for bidirectional protocols like XMPP, which Lantern uses for P2P signaling. fronted uses domain fronting to transmit enproxy’s HTTP requests in a blocking-resistant manner. In practice, we configure fronted with several hundred host domains that are dialed via IP address (no DNS lookup).
Domain-fronted Lantern requests go to domain names, such as fallbacks.getiantem.org, that represent pools of servers. The CDN distributes requests to the servers in round-robin fashion. The domain-fronting protocol is stateful, so subsequent HTTP requests for the same connection are routed to the original responding proxy using its specific hostname (sticky routing), which the client obtains from a custom HTTP header. The proxy hostname serves the same request-linking purpose as the session ID does in meek.
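On the client side, sticky routing only requires remembering, per logical connection, the proxy hostname announced in the first response, and putting that hostname in the Host header of later requests. The Go fragment below is schematic: the front domain and the X-Lantern-Proxy-Host header name are invented for illustration and are not necessarily what Lantern uses.

package frontedsketch

import (
	"bytes"
	"net/http"
)

// stickyConn remembers which pool member answered the first request of a
// logical connection, so that later requests are routed back to it.
type stickyConn struct {
	client    *http.Client
	poolHost  string // round-robin pool, e.g. fallbacks.getiantem.org
	proxyHost string // specific proxy hostname learned from the first response
}

func (c *stickyConn) roundTrip(payload []byte) (*http.Response, error) {
	req, err := http.NewRequest("POST", "https://front.example.com/", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	host := c.poolHost
	if c.proxyHost != "" {
		host = c.proxyHost
	}
	req.Host = host // the CDN routes on this, to the pool or to the pinned proxy

	resp, err := c.client.Do(req)
	if err != nil {
		return nil, err
	}
	// Invented header carrying the hostname of the proxy that answered.
	if h := resp.Header.Get("X-Lantern-Proxy-Host"); h != "" && c.proxyHost == "" {
		c.proxyHost = h
	}
	return resp, nil
}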
The encoding of a stream as a sequence of HTTP requests introduces additional latency beyond that of TCP. In the case of flashlight with our chosen CDN, the additional latency has several causes. We describe the causes and appropriate mitigations.
Domain fronting requires the establishment of additional TCP connections. The client, the CDN, and the proxy between themselves introduce three additional TCP connections between the client and the destination. To reduce latency, the CDN pools and reuses connections to the Lantern proxy. Unfortunately, the Lantern client cannot do the same for its connections to the CDN because the CDN seems to time out idle connections fairly aggressively. We mitigate this by aggressively pre-connecting to the CDN when we detect activity.
Though enproxy is mostly full duplex, reads cannot begin until the first request and its response with the sticky-routing header have been processed. This is a basic limitation.
enproxy does not pipeline HTTP requests. Even though reads and writes are full duplex, a write cannot proceed until the flush of previous writes has been acknowledged with an HTTP response—a full round-trip is necessary between each flush. In the future, HTTP/2’s request multiplexing may reduce this latency.
enproxy has no way of knowing when the client is done writing. If the data were streaming directly from the client through to the proxy, this would not be a problem, but CDNs buffer uploads: small uploads aren’t actually forwarded to the proxy until the HTTP request is finished. enproxy assumes that a writer is finished if it detects inactivity for more than 35 ms, at which point it flushes the write by finishing the HTTP request. This introduces at least 35 ms of additional latency, and potentially more if the guess is wrong and the write is not actually finished, since we now have to wait for a full round trip before the next write can proceed. This latency is particularly noticeable when proxying TLS traffic, as the TLS handshake consists of several small messages in both directions.
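The 35 ms heuristic can be pictured as a coalescing write buffer: bytes accumulate, and a timer hands off the pending bytes (by finishing the current HTTP request) once no new write has arrived within the idle window. The sketch below illustrates the idea only; enproxy's real code is organized differently.

package enproxysketch

import (
	"bytes"
	"sync"
	"time"
)

const idleFlush = 35 * time.Millisecond // assume the writer is done after this much silence

// coalescer buffers writes and, once the writer has been idle for idleFlush,
// hands the buffered bytes to flush (for example, a function that finishes
// the pending HTTP request with this body).
type coalescer struct {
	mu    sync.Mutex
	buf   bytes.Buffer
	timer *time.Timer
	flush func([]byte)
}

func (c *coalescer) Write(p []byte) (int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.buf.Write(p)
	if c.timer != nil {
		c.timer.Stop() // new data arrived; restart the idle countdown
	}
	c.timer = time.AfterFunc(idleFlush, func() {
		c.mu.Lock()
		data := append([]byte(nil), c.buf.Bytes()...)
		c.buf.Reset()
		c.mu.Unlock()
		if len(data) > 0 {
			c.flush(data)
		}
	})
	return len(p), nil
}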
This last source of latency can be eliminated if enproxy can know for sure when a writer is finished. This could be achieved by letting enproxy handle the HTTP protocol specifically. Doing so would allow enproxy to know when the user agent is finished sending an HTTP request and when the destination is finished responding. However, doing the same for HTTPS would require a local man-in-the-middle attack on the TLS connection in order to expose the flow of requests. Furthermore, this approach would work only for HTTP clients. Other traffic, like XMPP, would require additional support for those protocols.
This figure compares download speeds with and without a domain fronting proxy. It is based on 10 downloads of the same 11 MB file used in the Tor bandwidth test in the previous section, though hosted on a different server close to the Lantern proxy servers: http://speedtest.ams01.softlayer.com/downloads/test10.zip. The fronted proxy causes download times to approximately double. Because of Lantern’s round-robin rotation of front domains, the performance of the fronted proxy may vary over time according to the route to the CDN.
The Lantern network includes a geolocation server. This server is directly registered on the CDN and the Lantern client domain-fronts to it without using any proxies, reducing latency and saving proxy resources. This sort of direct domain fronting technique could in theory be implemented for any web site simply by registering it under a custom domain such as facebook.direct.getiantem.org. It could even be accomplished for HTTPS, but would require the client software to man-in-the-middle local HTTPS connections between browser and proxy, exposing the plaintext not only to the Lantern client but also to the CDN. In practice, web sites that use the CDN already expose their plaintext to the CDN, so this may be an acceptable solution.
The Psiphon circumvention system is a centrally managed, geographically diverse network of thousands of proxy servers. It has a performance-oriented, one-hop architecture. Much of its infrastructure is hosted with cloud providers. As of January 2015, Psiphon has over two million daily unique users. Psiphon client software runs on popular platforms, including Windows and Android. The system is designed to tunnel a broad range of host traffic: web browsing, video streaming, and mobile app data transfer. Client software is designed for ease of use; users are not asked to perform any configuration.
Psiphon has faced threats including blocking by DPI—both blacklisting and whitelisting—and blocking by address. For example, in 2013, Psiphon circumvented HTTP-whitelisting DPI by sending an “HTTP prefix” (the first few bytes of an HTTP request) before the start of its regular upstream flow.
Psiphon strives to distribute its server addresses in such a way that most clients discover enough servers to have several options in the case of a server being blocked, while making it difficult to enumerate all servers. In February 2014, Psiphon was specifically targeted for address-based blocking, and this blocking was aggressive enough to have a major impact on our user base, though not all users were blocked. As part of our response we integrated and deployed meek-based domain fronting, largely based on Tor’s implementation, with some modifications. It was fully deployed in June 2014. The next figure shows the number of unique daily users of fronted meek with Psiphon.
In addition, Psiphon also employs meek in what we call “unfronted” mode. Unfronted meek omits the TLS layer and the protocol on the wire is HTTP. As fully compliant HTTP, unfronted meek supersedes the “HTTP prefix” defense against HTTP whitelisting. Unfronted meek is not routed through CDNs, and as such is only a defense against DPI whitelisting and not against proxy address enumeration. We envision a potential future fronted HTTP protocol with both properties, which requires cooperating with CDNs to route our HTTP requests based on, for example, some obfuscated HTTP header element.
Psiphon’s core protocol is SSH. SSH provides an encryption layer for communication between Psiphon clients and servers; the primary purpose of this encryption is to frustrate DPI. On top of SSH, we add an obfuscated-openssh layer that transforms the SSH handshake into a random stream, and add random padding to the handshake. The payload within the meek transport appears to be random data and lacks a trivial packet size signature in its initial requests and responses. Psiphon clients authenticate servers using SSH public keys obtained out of band, a process that is bootstrapped with server keys embedded in the client binaries.
Psiphon uses a modified version of the meek protocol already described. The session ID header contains extra information: a protocol version number and the destination Psiphon server address. As this cookie will be visible to the censor in unfronted mode, its value is encrypted in a NaCl crypto_box using the public key of the destination meek-server; then obfuscated; then formatted as an innocuous-seeming cookie with a randomly selected key. meek-server uses the protocol version number to determine if the connecting meek-client supports Psiphon-specific protocol enhancements. The destination address is the SSH server to which meek-server should forward traffic.
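The sealed value can be produced with NaCl's box construction: generate an ephemeral keypair, seal the session payload to the server's long-term public key, and transmit the ephemeral public key and nonce alongside the ciphertext. A compressed Go sketch using golang.org/x/crypto/nacl/box; the subsequent obfuscation and cookie-formatting steps are omitted, and the exact payload layout here is not Psiphon's.

package psiphonsketch

import (
	"crypto/rand"

	"golang.org/x/crypto/nacl/box"
)

// sealSessionInfo encrypts the Psiphon-specific session payload (protocol
// version, destination server address, and so on) to the meek-server's public
// key. The returned slice is ephemeralPub || nonce || ciphertext; a real
// client would additionally obfuscate it and wrap it as an innocuous cookie.
func sealSessionInfo(payload []byte, serverPub *[32]byte) ([]byte, error) {
	ephPub, ephPriv, err := box.GenerateKey(rand.Reader)
	if err != nil {
		return nil, err
	}
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		return nil, err
	}
	out := append([]byte(nil), ephPub[:]...)
	out = append(out, nonce[:]...)
	return box.Seal(out, payload, &nonce, serverPub, ephPriv), nil
}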
In Psiphon, meek-client transmits its chosen session ID on its first HTTP request, after which meek-server assigns a distinct ID to be used on subsequent requests. This change allows meek-server to distinguish new and existing sessions when a client sends a request after a long delay (such as after an Android device awakes from sleeping), when meek-server may have already expired and discarded its session.
We ported meek-client, originally written in Go, to Java for Android. On Android, we make HTTP and HTTPS requests using the Apache HttpClient component, in order to have a TLS fingerprint like those of other Android apps making web service requests.
The Psiphon meek-server inspects CDN-injected headers, like X-Forwarded-For, to determine the client’s IP address. The address is mapped to a geographic region that is used in recording usage statistics.
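Extracting the client address is simple in principle: the CDN appends each hop to X-Forwarded-For, so the left-most entry is the original client. A small Go sketch; a production version must also decide which upstream proxies it trusts before believing the header.

package meeksketch

import (
	"net/http"
	"strings"
)

// clientIP extracts the original client address from the X-Forwarded-For
// header injected by the CDN, falling back to the direct peer address.
func clientIP(r *http.Request) string {
	if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
		// "client, proxy1, proxy2": the left-most entry is the client.
		return strings.TrimSpace(strings.Split(xff, ",")[0])
	}
	return r.RemoteAddr
}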
When a user starts a Psiphon client, the client initiates connections to up to ten different servers simultaneously, keeping the first to be fully established. Candidate servers are chosen at random from cached lists of known servers and a mix of different protocols, both fronted and non-fronted, are used. The purpose of the simultaneous connections is to minimize user wait time in case certain protocols are blocked, certain servers are blocked by address, or certain servers are at capacity and rejecting new connections. This process also tends to pick the closest data center, and the one with lowest cost, as it tends to pick lower-latency direct connections over domain-fronted connections.
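The "connect to many, keep the first" strategy maps naturally onto goroutines. In the sketch below, establish stands for whatever counts as a fully established connection (for fronted candidates, that includes the hop through the CDN); the limit of ten candidates comes from the text, and the rest is an illustrative simplification rather than Psiphon's code.

package psiphonsketch

import (
	"context"
	"errors"
	"net"
	"sync"
)

// connectAny races connection attempts to up to ten candidate servers and
// returns the first one that becomes fully established, cancelling the rest.
func connectAny(ctx context.Context, candidates []string,
	establish func(ctx context.Context, addr string) (net.Conn, error)) (net.Conn, error) {

	if len(candidates) > 10 {
		candidates = candidates[:10]
	}
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	var (
		wg     sync.WaitGroup
		mu     sync.Mutex
		chosen net.Conn
	)
	for _, addr := range candidates {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			conn, err := establish(ctx, addr)
			if err != nil {
				return
			}
			mu.Lock()
			if chosen == nil {
				chosen = conn
				cancel() // a winner exists; tell the other attempts to stop
				mu.Unlock()
				return
			}
			mu.Unlock()
			conn.Close() // somebody else already won
		}(addr)
	}
	// Losing attempts return promptly once the context is cancelled, so this
	// wait ends shortly after the first success (or after all attempts fail).
	wg.Wait()
	if chosen == nil {
		return nil, errors.New("no candidate server could be reached")
	}
	return chosen, nil
}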
We made two modifications to server selection in order to accommodate fronting. First, we changed the notion of an “established connection” from TCP connection completion to full SSH handshake completion. This ensures that both hops are measured in the fronted case. Second, we adjusted our protocol selection schedule to ensure that, while we generally favor the fastest connection, we do not expose the system to an attack that would force us to use a degraded protocol. For example, a censor could use a DPI attack that allows all connections to establish, but then terminate or severely throttle non-whitelisted protocols after some short time period. If the client detects such degraded conditions, it begins to favor fronted and unfronted protocols over the faster obfuscated SSH direct connections.
We identified a need to improve the video streaming and download performance of meek-tunneled traffic. In addressing this, we considered the cost per HTTP request of some candidate CDNs, a lack of support for HTTP pipelining in our components, and a concern about the DPI signature of upstream-only or downstream-only HTTP connections. As a compromise between these considerations, we made a tweak to the meek protocol: instead of sending at most 64 KB in each HTTP response, responses stream as much as possible, as long as there is data to send and for up to 200 ms. This tweak yielded a significant performance improvement, with download speeds increasing by up to 4–5×, and 1080p video playback becoming smooth. Under heavy downstream conditions, we observe response bodies up to 1 MB, 300 KB on average, although the exact traffic signature is highly dependent on the tunneled application. We tuned the timeout parameter through subjective usability testing focused on latency while web browsing and simultaneously downloading large files.
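The change replaces "read at most 64 KB, then end the response" with "keep copying whatever arrives for up to 200 ms, or until a size cap". A rough Go sketch of the response loop; the 1 MB cap mirrors the largest bodies observed, while the buffer size and other details are invented.

package meeksketch

import (
	"io"
	"net"
	"net/http"
	"time"
)

const (
	streamWindow = 200 * time.Millisecond // keep the response open this long
	maxBody      = 1 << 20                // stop early at about 1 MB
)

// writeStreamingResponse copies pending downstream data from the Tor-side
// connection into a single HTTP response body, for up to streamWindow.
func writeStreamingResponse(w http.ResponseWriter, tor net.Conn) error {
	deadline := time.Now().Add(streamWindow)
	buf := make([]byte, 64*1024)
	total := 0
	for total < maxBody && time.Now().Before(deadline) {
		// Bound each read so the loop can re-check the deadline.
		tor.SetReadDeadline(deadline)
		n, err := tor.Read(buf)
		if n > 0 {
			if _, werr := w.Write(buf[:n]); werr != nil {
				return werr
			}
			total += n
		}
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Timeout() {
				break // no more data within the window
			}
			if err == io.EOF {
				break
			}
			return err
		}
	}
	return nil
}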
This figure compares the time taken to download a file both with and without meek, and with and without the streaming download optimization. The target is the same speedtest.wdc01 URL used in the Tor performance tests. The performance effect of meek is about a factor-4 increase in download time; streaming downloads cut the increase in half.
In developing domain fronting circumvention systems, we hope to deprive the censor of easy distinguishers and force the use of more expensive, less reliable classification tests—generally, to increase the cost of censorship. We believe that domain fronting, implemented with care, meets the primary challenges of proxy-based circumvention. It defeats IP- and DNS-based blocking because the IP- and DNS-layer information seen by the censor is not that of the proxy; content-based blocking because content is encrypted under HTTPS; and active probing because though a censor may be able to discover that a web service is used for circumvention, it cannot block the service without incurring significant collateral damage.
Our experience with deploying circumvention systems has led us to conclude that other potential means of censorship—e.g., identifying circumventing content by analyzing packet length distributions—do not currently have relevance when considering the practices of today’s censors. We speculate that censors find such tests unattractive because they require storing significant state and are susceptible to misclassification. More broadly, we are not aware of any nation-level censorship event that made use of such traffic features.
Nevertheless, we expect censors to adapt to a changing environment and to begin deploying more sophisticated (but also more expensive and less reliable) tests. The issue of traffic analysis is a general one, and mostly separable from domain fronting itself. That is, domain fronting does not preclude various traffic shaping techniques and algorithms, which can be developed independently and plugged in when the censors of the world make them necessary. This section contains a sketch of domain fronting’s resistance to certain traffic analysis features, through a case study of meek with Tor and a trace of non-circumvention traffic. While we identify some features that may give a censor leverage in distinguishing circumvention traffic, we believe that the systems we have deployed are sufficiently resistant to the censors of today, and do not block the way to future enhancements to traffic analysis resistance.

As domain fronting is based on HTTPS, we evaluate distinguishability from “ordinary” HTTPS traffic. We compare two traffic traces. The first is HTTPS traffic from Lawrence Berkeley National Laboratory (LBL), a large (≈ 4K users) research lab, comprising data to and from TCP port 443 on any Google server. Its size is 313 MB (packet headers only, not payloads) and it lasts 10 minutes. The IP addresses in this first trace were masked, replaced by a counter. The second trace is of meek in Tor Browser, browsing the home pages of the top 500 Alexa web sites over Google and App Engine. It is 687 MB in size and covers 4.5 hours.
The most common packet payload lengths in each trace:

LBL Google HTTPS            meek on App Engine
0 bytes       37.6%         1418 bytes   40.5%
1430 bytes     9.1%         0 bytes      37.7%
1418 bytes     8.5%         1460 bytes    7.2%
41 bytes       6.1%         396 bytes     2.0%
1416 bytes     3.1%         196 bytes     1.8%
1460 bytes     2.9%         1024 bytes    1.5%
A censor could attempt to block an encrypted tunnel by its distribution of packet lengths, if it is distinctive enough. The above figure compares the packet length distributions of the sample traces. Keeping in mind that the LBL trace represents many users, operating systems, and web browsers, and the meek trace only one of each, the two are not grossly different. In both cases, about 38% of packets are empty (mostly ACKs), with many packets near the usual TCP Maximum Segment Size of 1460 bytes. Conspicuous in the meek trace are small peaks at a few specific lengths, and a lack of short payloads of around 50 bytes. Both of these characteristics are probably reflections of the fixed cell size of the underlying Tor stream.
The total duration of TCP connections is another potential distinguisher. This figure shows the cumulative probability of connection durations in the two traces. The LBL trace has interesting concentrations on certain round numbers: 10/60/120/180/240 seconds. We hypothesize that they are caused by keepalive timeouts in web browsers and servers and periodic polling by web apps. The small rise at 600 seconds is an artifact caused by the 10-minute duration of the trace. We do not know how much longer than 10 minutes those connections lasted, but they are only 8% of observed connections.
The meek trace shows a propensity for longer connections. In 4.5 hours, there were only 10 connections, three of them lasting for an hour. The long connections are caused by the client browser extension’s aggressive use of long-lived HTTP keepalive connections, and by its being constantly busy, giving every opportunity for connection reuse. 60% of meek’s connections lasted five minutes or longer, while only 13% of ordinary traffic’s did. meek had essentially no connections lasting less than 24 seconds, but such short connections were over 42% of the LBL trace. 30% (3 out of 10) of meek’s connections lasted almost exactly one hour, evidently reflecting a built-in keepalive limit in either the client browser extension or in App Engine.
In light of these measurements, the censor may decide simply to terminate long-lived HTTPS connections. According to our traffic trace, doing so will not disrupt more than 8% of ordinary connections (although such long connections may be valuable large transfers with higher collateral damage). The censor can lower the timing threshold, at the cost of more false positives. In order to be effective, the censor must cut off the client completely; otherwise the client may start a new connection with the same session ID and begin where it left off.
We do not know of any obvious traffic characteristics that reliably distinguish domain fronting from other HTTPS traffic. Long-lived connections and packet lengths are potential targets for a more concerted attack. We are fundamentally trying to solve a problem of steganography, to make circumvention traffic fit some model of “normal” traffic. However, this can be regarded as an advantage. What is a challenge for the evaluator is also a challenge for the censor, simply because it is difficult to characterize just what normal traffic is, especially behind a CDN that may host a variety of services such as software updates, video streaming, and ordinary web pages. Circumvention traffic need not be perfectly indistinguishable, only indistinguishable enough that blocking it causes more and costlier false positives than the censor can accept.
Domain fronting derives its strength from the collateral damage that results from blocking the front domain. It should not—nor should any other circumvention technique—be thought of as unblockable; rather, one should think of what it costs the censor to block it. What is unblockable by one censor may be blocked by another that has different resources and incentives. Blocking resistance depends on the strength of the front domain and on the censor’s cost calculus, which has both economic and social components.
We can at least roughly quantify the cost of blocking any domain fronting system in general. It is the minimum cost of: blocking a domain; deploying traffic analysis to distinguish circumvention from other traffic; or conducting some attack outside our threat model, for example physical surveillance of Internet users. A censor could also, for example, block HTTPS entirely, but that is likely to be even more damaging than targeted blocking of a domain. The cost of blocking a domain—and the benefit of blocking circumvention—will vary by censor. For example, China can afford to block twitter.com and facebook.com partly because it has domestic replacements for those services, but not all censors have the same resources. In June 2014, the Great Firewall of China took the unprecedented step of blocking all Google services, including all potential fronts for App Engine. It is not clear whether the blocking targeted domain fronting systems like GoAgent; our own systems were only prototypes at that point. Since then, domain fronting to App Engine has been effectively stamped out in China, though it continues to work over other web services.
A censor could directly confront the operators of an intermediate web service and ask them to disable domain fronting (or simply get rid of customers like us who facilitate circumvention). The censor could threaten to block the service entirely, costing it business. Whether such an attack succeeds depends, again, on specific costs and motivations. A powerful censor may be able to carry out its threat, but others will harm themselves more by blocking a valuable service than the circumvention traffic is worth.
Reliance on paid web services creates the potential for a “financial denial of service” attack against domain fronting systems, in which the censor uses the service excessively in an attempt to drive up the operators’ costs. In March 2015, the anticensorship group GreatFire, which had used various cloud services for censorship circumvention in China, was the target of a distributed denial of service attack against their hosting on Amazon Web Services. The attack lasted for days and incurred tens of thousands of dollars in bandwidth charges. The attack against Amazon was followed shortly by one against GitHub, the largest in the site’s history. The second attack specifically targeted GreatFire’s accounts there. The available evidence indicates that both attacks were coordinated from within China, using an offensive network system dubbed the “Great Cannon”. Such an attack could be mitigated by active defenses that shut down a service when it is being used excessively, though this only protects against ruinous costs and will not defeat a long-term attack. It is noteworthy that a world-class censor’s first reaction was a disruptive, unsubtle denial of service attack—though we cannot say for sure that the censor did not have something better up its sleeve. GreatFire speculated that the attacks were precipitated by the publication of an article in the Wall Street Journal that described in detail domain fronting and other “collateral freedom” techniques. The interview associated with the article also caused CloudFlare to begin matching SNI and Host header, in an apparent attempt to thwart domain fronting.
Fronting shares a potential weakness with decoy routing, which is that the network paths to the overt and covert destinations diverge. The difference in paths may create side channels—different latencies for instance—that distinguish domain-fronted traffic from the traffic that really arrives at its apparent destination. For example, a CDN can be expected to have responses to some fraction of requests already in cache, and respond to those requests with low latency, while domain-fronted requests always go all the way to the destination with higher latency. Schuchard et al. (§ 5) applied latency measurement to decoy routing. The authors of TapDance (§ 5.1) observe that such an attack is difficult to carry out in practice, because it requires knowledge of the performance characteristics of many diverse resources behind the proxy, some of which are not accessible to the censor (login-protected web pages, for example). Domain fronting favors the circumventor even more, because of the variety of resources behind a CDN.
The intermediate web service has a privileged network position from which it may monitor domain-fronted traffic. Even though the censor does not know which client IP addresses are engaging in circumvention, the CDN knows. The risk is especially acute when a client browses a web site of the same entity that controls the intermediate web server, for example browsing YouTube while fronting through www.google.com. When this happens, the web service gets to see both entry and exit traffic, and is in a better position to attempt to correlate flows by timing and volume, even when the underlying channel is an encrypted protocol like Tor. This phenomenon seems hard to counter, because the front domain needs to be a popular one in order to have high collateral damage, but popular domains are also the ones that users tend to want to visit. It is in theory possible to dynamically switch between multiple fronts, so as to avoid the situation where the destination and front are under the same control, at the cost of leaking information about where the user is not going at a given moment.
A censor that can man-in-the-middle HTTPS connections can detect domain fronting merely by removing encryption and inspecting the Host header. Unless the censor controls a certificate authority, this attack is defeated by ordinary HTTPS certificate validation. Against a censor that controls a trusted certificate authority, certificate pinning is an effective defense. If the underlying transport is an authenticated and encrypted one like Tor, then the destination and contents of a user’s connection will remain secret, even if the user is outed as a circumventor.
We have presented domain fronting, an application-layer censorship circumvention technique that uses different domain names at different layers of communication in order to hide the true destination of a message. Domain fronting resists the main challenges offered by the censors of today: content blocking, address blocking, and active probing. We have implemented domain fronting in three popular circumvention systems: Tor, Lantern, and Psiphon, and reported on the experience of deployment. We begin an investigation into the more difficult, less reliable means of traffic analysis that we believe will be necessary to block domain fronting.
The meek pluggable transport has a home page at https://trac.torproject.org/projects/tor/wiki/doc/meek and source code at https://gitweb.torproject.org/pluggable-transports/meek.git. The source code of Lantern’s flashlight proxy is at https://github.com/getlantern/flashlight; other components are in sibling repositories. Psiphon’s source code is at https://bitbucket.org/psiphon/psiphon-circumvention-system.
We would like to thank Yawning Angel, George Kadianakis, Georg Koppen, Lunar, and the members of the tor-dev, tor-qa, and traffic-obf mailing lists who responded to our design ideas, reviewed source code, and tested our prototypes. Arlo Breault wrote the flashproxy-reg-appspot program mentioned in the section on related work, an early application of domain fronting. Leif Ryge and Jacob Appelbaum tipped us off that domain fronting was possible. Sadia Afroz, Michael Tschantz, and Doug Tygar were sources of inspiring conversation. Johanna Amann provided us with an estimate of the fraction of SNI-bearing TLS handshakes.
This work was supported in part by the National Science Foundation under grant 1223717. The opinions, findings, and conclusions expressed herein are those of the authors and do not necessarily reflect the views of the sponsors.
TLS ClientHello fingerprints (ciphersuites and extensions) of Go 1.4.2’s crypto/tls library, Firefox 31, and Chrome 40, listed below in that order:
Ciphersuites (13): TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_RC4_128_SHA TLS_ECDHE_ECDSA_WITH_RC4_128_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA Extensions (6): server_name status_request elliptic_curves ec_point_formats signature_algorithms renegotiation_info
Ciphersuites (23): TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_ECDHE_ECDSA_WITH_RC4_128_SHA TLS_ECDHE_RSA_WITH_RC4_128_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_DSS_WITH_AES_128_CBC_SHA TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_DSS_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_CAMELLIA_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_CAMELLIA_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 Extensions (8): server_name renegotiation_info elliptic_curves ec_point_formats SessionTicket TLS next_protocol_negotiation status_request signature_algorithms
Ciphersuites (18): TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_RC4_128_SHA TLS_ECDHE_RSA_WITH_RC4_128_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_DSS_WITH_AES_128_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 Extensions (10): server_name renegotiation_info elliptic_curves ec_point_formats SessionTicket TLS next_protocol_negotiation Application Layer Protocol Negotiation Channel ID status_request signature_algorithms | <urn:uuid:679542de-76c1-4136-8843-f3a73736990c> | CC-MAIN-2017-04 | https://www.bamsoftware.com/papers/fronting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00212-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911526 | 12,779 | 2.5625 | 3 |
A Self Reset is an operation that allows a user to set his own passwords to a desired new value, regardless of their current value. A user would generally want to do this in case he has forgotten his current password(s).
To allow a user to do this, a system must first use some other (i.e., not Password-based) method to authenticate him. This can be done by prompting the user for some obscure personal information, or by asking him to use a Smart Card.
This is a useful function in a Password Synchronization system: by
changing multiple (or all) passwords for a user simultaneously, such
software allows administrators to synchronize them. | <urn:uuid:024632fe-335d-4987-828c-0c8690c1a5ce> | CC-MAIN-2017-04 | http://hitachi-id.com/concepts/self_reset.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00514-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947524 | 140 | 2.6875 | 3 |
As the Internet of Things became an accepted reality, and the security community realized that they have to get involved in securing it, days without news about the insecurity of this or that Smart Thing are few and far between.
Hacking a building automation system
One of the latest attempts to shine a light on the problem was a recently published report by the IBM X-Force Ethical Hacking Team. The document detailed the team’s successful attempt to penetrate a building automation system (BAS) that controlled sensors and thermostats in a commercial office, and to ultimately access the central BAS server that controls building automation in this and several other locations.
They found that basic hacking techniques were quite enough to perform this type of attack, and were surprised about the number of security issues they encountered and were able to exploit: software security vulnerabilities, poor password practices, exposed router administration ports, and so on.
“If compromised, smart-building devices could have a profound impact on our physical surroundings and could allow a malicious actor to cause damage without any physical access to the building,” Paul Ionescu, IBM X-Force Ethical Hacking Team Lead, points out.
“For example, cybercriminals could gain control of the devices that regulate data center temperatures, causing fans to shut down and servers to overheat. Not only do these connected devices impact our physical surroundings, but if they share connections with enterprise IT networks, they could also open a backdoor to company data.”
Security issues and solutions
“The vulnerabilities we used to gain access in this test could have been prevented by the software manufacturer employing secure coding practices to sanitize input, prevent remote execution of commands (in both the firewall and building management software), and provide strong password storage (encrypt it in the first place, in the case of the firewall, and use a one-way encryption algorithm with a random string appended),” Chris Poulin, Research Strategist, IBM X-Force Security, told Help Net Security.
He also advises admins to make sure not to expose building management systems directly to the Internet. If they really have to do it, they should employ a VPN and two-factor authentication for added protection and, if possible, apply a whitelist so that only a small set of IP addresses on the Internet can access the building management system.
“Safer password practices would have gone a long way to prevent the hack we were able to perform on the BAS we tested,” he says. In this particular case, the password for the firewall and the building management system was the same.
Patching and upgrading firewalls, building management systems, and any device or system that runs software or firmware is a good security practice. It can be a drag, he admits, as IoT and building management devices don’t push patches or even have a formal notification process that a new patch or version is available, but still, admins should make the effort.
He advises strict controls to be put in place about what should pass between the IT network and the building system network(s).
“As security oversight for many of these systems is currently lacking, keeping the building automation system on a separate network than the company / IT network is one way to limit the risk of hackers breaking into the company network through the building automation network,” he notes.
“However as we move forward to a more secure model, the people operating and securing IT systems should begin to have oversight into the BAS network as well. Note that the building management system we tested did not provide access to the IT systems; however, a malicious hacker could affect IT systems by heating up the data center. While the IoT and IT may not converge at the infrastructure level, everything eventually interacts in the physical world.”
Using endpoint protection on devices used by the building management team to access the BAS should be a must, and these operators should be regularly trained and tested when it comes to phishing.
Finally, hiring security professionals to pen-test the system is also a good idea, but admins should keep in mind that securing a building automation system is not a one time job nor is there one easy way to secure a building.
“It must be an ongoing process, as the businesses requirements, the facility itself and the environment in which it lives change over time, and new security issues arise on a regular basis. It takes extensive work and coordination to not only get vulnerabilities patched, but also to ensure those fixes actually make their way into the affected devices in the building,” Poulin explained.
Smart Buildings security: Who’s in charge?
Gartner estimates that the number of connected devices used in “Smart Commercial Buildings” will reach 518.1 million this year, and over 1 billion in 2018.
One of the questions that many employees in IT departments of companies housed in these buildings are surely asking themselves is: “Should we and will we eventually be in charge of overseeing smart office technology, or will that be left to the building/office management operators?”
“In the past, traditional voice providers (PBX systems) would connect to the IT network to print call logs. Then VoIP systems blurred the lines between voice and data, resulting in many IT departments taking responsibility for it,” notes Poulin.
“It’s easy to foresee large server clusters monitoring energy rates and running CPU and disk intensive operations when more electricity is available on the grid and is not priced at a premium. Servers could also dynamically call for more cooling and ventilation in preparation for heat-generating operations. Physical security systems are already being integrated with anomaly detection systems, for example to ensure that when a user physically logs into a workstation, the security system positively identifies them as having entered the facility (or room, depending on the granularity of the badging system). In short, IT will want to use facilities data and building management will want to avail themselves of IT resources, and it makes sense for IT to inform and govern building management on infrastructure security.”
“Whether IT departments or building management operators want to interact with each other or not on a human level, the technology they both manage will connect – and already is – to solve business problems,” he added. “So while it may not be that IT has complete control of smart office technology, the two groups will have to work together and broaden their scope of knowledge and experience to encompass the other’s technology.”
This will be a big cultural shift and he predicts that many will protest, but ultimately some of the IT staff will have to be trained on physical systems and vice versa.
Securing the IoT
It can be very frustrating to observe the seemingly glacial pace at which IoT manufacturers improve the security of their products (if they make any effort at all).
Historically, increased attacks and incidents pushed the public to ask for better security, and ultimately regulatory mandates and contractual obligations with business partners and big customers have been required.
“However, we have an opportunity to take a different tack with the IoT – instead of regulatory pressures, we can define and provide a secure framework to makers,” Poulin opines.
Such a framework would prescribe a standard set of building blocks with security baked in at all levels: encryption at rest and in motion, strong authentication, reduced attack footprint, stringent permissions, firmware integrity guarantees, over the air updates, and so on.
“Whether commercial or open source, the framework would also have to provide a functional benefit for makers, such as ease, speed, and flexibility. Ultimately, makers have traditionally been sensitive to inherent costs of production; keeping those costs down while facilitating compliance is a win for both makers and operators,” he concluded. | <urn:uuid:1f547d3a-a988-4206-b92e-2117debae3b9> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2016/02/16/smart-buildings-security-whos-in-charge/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950981 | 1,603 | 2.671875 | 3 |
On Dec. 14, 1962, the Mariner 2 spacecraft flew by Venus, giving the United States its first "First" in the space race with the Soviet Union. America was the first country to complete a successful mission to another planet.
NASA's Jet Propulsion Laboratory (JPL) has published a video commemorating the journey of the mission, with interviews of people who were involved with the mission. Find out how a software bug caused the destruction of the first Mariner spacecraft, and hear from one of the few women involved in being a flight controller for the mission. Very cool video for anyone interested in space:
Watch some more cool videos: Watch this trailer for Lego's 'The Yoda Chronicles' Web series BBC gives Doctor Who fans an Amy/Rory postscript Supercut: Lego Lord of the Rings game cutscenes in one video The Year in Review, courtesy of Twitter Juggling Disney robot hopefully won't attack guests | <urn:uuid:a74d1d25-bebc-4d89-ab73-e7be40491dc8> | CC-MAIN-2017-04 | http://www.itworld.com/article/2717021/consumer-tech-science/celebrate-the-50th-anniversary-of-america-s-first-space--first---venus-fly-by.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929127 | 244 | 2.90625 | 3 |
The National Telecommunications & Information Administration (NTIA) has published an online guide to the wireless spectrum being used by federal agencies.
Launched on April 11, Spectrum.gov gives readers a glimpse into how the federal government is using its allotment of wireless frequencies in the 225 MHz to 5 GHz bands. The new resource also provides a map showing what federal systems are using spectrum throughout the U.S.
“Just as commercial broadband providers are facing growing demands for spectrum to fuel the explosion of new wireless devices, federal agencies’ demand for spectrum also is growing,” said Karl Nebbia, NTIA associate administrator, Office of Spectrum Management, in a blog post announcing the new website. “NTIA’s compendium shows agencies need spectrum for crucial tasks ranging from military flight testing to air traffic control to weather forecasting.”
Each spectrum use report is categorized by sections of particular bandwidth. Links to each band lead to a .pdf document that gives an overview of the band, how the frequencies within it are allocated, current federal agency use, and where applicable, planned future use.
The shrinking availability of wireless spectrum has been a hot topic over the past several years as more communications devices have entered the marketplace and need bandwidth to operate. The wireless industry has been pushing for release of more spectrum to accommodate private-sector demand.
Nebbia indicated in his blog post that Spectrum.gov would be updated regularly, calling the site “an important resource” as the federal government looks to repurpose federal spectrum for commercial use. | <urn:uuid:d94f9c99-7bef-4221-86a2-13bafdd48651> | CC-MAIN-2017-04 | http://www.govtech.com/internet/NTIA-Launches-Wireless-Spectrum-Webpage.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00174-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932276 | 316 | 2.625 | 3 |
Cyberattacks take aim at more than just traditional computer networks. The Department of Energy has recently partnered with the Georgia Tech Research Institute (GTRI), awarding a $1.7 million grant to coordinate ways to identify and prevent cyberattacks on the nation’s utility and power grids.
“Utilities and energy delivery systems are unique in several ways,” said GTRI researcher Seth Walters, one of the principal investigators on the project, in a press release. “They provide distribution over a large geographic area and are composed of disparate components which must work together as the system’s operating state evolves. Relevant security technologies need to work within the bandwidth limitations of these systems in order to see broad adoption and they need to account for the varying security profiles of the components within these power systems.”
The protective system will be built based on Georgia Tech research in the control, operation and monitoring of electric power utilities and their infrastructure, and will work to detect malicious content and intrusive agents. | <urn:uuid:27dc88d6-2665-4bbc-beb1-0c59967954a5> | CC-MAIN-2017-04 | http://www.govtech.com/security/Power-Grids-Under-Cyberattack.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938127 | 206 | 2.59375 | 3 |
How many futures do you think Ali Partovi, Hadi Partovi, and Code.org changed with the Hour of Code campaign? I’m guessing quite a few. I watched the light go off for the kids I sat down in front of that video.
Teaching kids to use technology without teaching them to create it is like teaching them to read without teaching them to write. Even if we are adults who never managed to learn to code, we should be teaching it to our kids. In fact, our schools should be teaching it to them. Unfortunately, 90 percent of schools don’t teach computer science. The Hour of Code campaign aimed to change that by pushing this compelling video out into the world until it went viral. Google changed the Doodle for it. The president spoke up. Actors, geek superstars, and politicians all helped get the message – that everyone should accept the challenge to spend one hour learning to write a few lines of code – out to the kids.
One hour can instill significant change in the mind of a kid. They learn fast. They form ideas about what they are capable of (and not capable of) with a few carefully (or poorly) chosen words. And they need to know that this form of literacy will be very important in their future. I’d like to see a study that could tell me how many kids decided this week that they want to learn to write – as well as read – technology. But I don’t think anyone is tracking that.
So we’ll have to do the math. Three days into the campaign, 5 million students in 35,000 schools across 167 countries did their first Hour of Code. As of this moment,15,723,534 students have done the Hour of Code and written 516,813,441 lines of code. So how many futures got changed, do you think? | <urn:uuid:8f4f27c9-4192-4d2c-95ed-9f030a32afa5> | CC-MAIN-2017-04 | http://www.itworld.com/article/2701264/personal-technology/shouldn-t-we-be-teaching-kids-to-code-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00504-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974549 | 390 | 2.828125 | 3 |
Welcome back everyone! In the last article we took a brief look at recon as a whole, but we didn’t really focus on any particular aspect of it. Today, we’re going to dive into the rabbit hole that is port scanning. We’re going to cover two different types of port scans, but this time we’re going to explain the process behind them. Before we do that, we’ll need to talk about the TCP three-way handshake and its role in our scanning. So, without further ado, let’s get started!
The TCP Three-Way Handshake
When making a connection to a port, there are multiple protocols used to handle and manage that connection. A protocol is simply a set of rules that both hosts follow. This ensures that all hosts know how to properly send and receive data to and from each other. One of the two main protocols responsible for transporting data is TCP (the other being UDP). In order to initiate a TCP connection, a three-way handshake is performed.
First, the host attempting the connection sends a packet with the SYN flag. SYN stands for synchronize, and means that one host is requesting to synchronize with another. After the SYN packet is sent, the second host checks whether anything is listening on that port and whether the connection is allowed. If all the requirements are met, the second host sends a packet with the SYN and ACK flags set. The SYN flag again stands for synchronize, while the ACK flag stands for acknowledge. This packet means that the second host acknowledges the original SYN and sends its own SYN to confirm the connection. Finally, the first host sends a single ACK back to the second host, completing the handshake and establishing the connection.
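To make the sequence concrete, here is a minimal sketch of the handshake in Python using scapy. Both the language and the library are assumptions on my part (the article itself only uses nmap and Wireshark), and the target address, port, and initial sequence number are made-up examples. Note that the operating system's own TCP stack knows nothing about a hand-crafted connection like this, so it may answer the SYN-ACK with a reset of its own unless outgoing RSTs are filtered.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the TCP three-way handshake with scapy.

Assumes scapy is installed and the script runs with root privileges.
The target below is an example address, not a real system to test.
"""
from scapy.all import IP, TCP, sr1, send, conf

conf.verb = 0                                   # silence scapy's output
ip = IP(dst="10.0.0.5")                         # example target

# Step 1 - SYN: request to synchronize with the other host.
syn = TCP(sport=40123, dport=80, flags="S", seq=1000)
synack = sr1(ip / syn, timeout=2)

# Step 2 - SYN-ACK: the other host acknowledges our SYN and sends its own.
if synack is not None and synack.haslayer(TCP) and int(synack[TCP].flags) == 0x12:
    # Step 3 - ACK: acknowledge the second host's SYN; the connection is up.
    ack = TCP(sport=40123, dport=80, flags="A",
              seq=synack.ack, ack=synack[TCP].seq + 1)
    send(ip / ack)
    print("Handshake complete")
```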
Now that we know about the TCP three-way handshake, let’s move on to talking about our standard port scan. We’re going to be discussing the same form of basic scan we did in the previous article, so we can have a better understanding of it before moving on to a more complex scan.
Explaining the Basic Port Scan
In the last article, we used nmap to perform a very basic port scan. We didn’t really explain that scan in much depth, so we’re going to cover it today. Remember that three-way handshake we just talked about? Well, in order to understand port scanning, we need to know it very well. We’re not going to perform another basic port scan today, as we’ve already demonstrated it.
When we perform a basic port scan, or any port scan for that matter, we have to run through the TCP three-way handshake for every port we want to scan. The type of scan depends entirely on how we perform the handshake. A basic port scan runs through the entire handshake for every port. While this method does its job perfectly well, it makes quite a bit of noise. This means that every connection attempt will be logged by the victim, and those logs can be traced back to us. The basic port scan isn’t anything special; it’s more of an introduction to the next type of scan we’re going to discuss, the stealth scan.
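As a rough illustration of what a "noisy" full-connect scan boils down to, here is a short Python sketch. The target address and port range are placeholders, and this is only one simple way to do it, not how nmap is implemented.

```python
#!/usr/bin/env python3
"""Minimal full-connect ("basic") port scan sketch.

Illustrative only: the target address and port range are placeholder
values. Each open port found this way means a complete three-way
handshake was performed, which is exactly what lands in the target's logs.
"""
import socket

TARGET = "10.0.0.5"            # example victim address
PORTS = range(1, 101)          # ports 1-100, as in the article's scans

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)      # don't hang on filtered ports
        # connect_ex() runs the full handshake (SYN, SYN-ACK, ACK) and
        # returns 0 when the connection succeeds, i.e. the port is open.
        if s.connect_ex((TARGET, port)) == 0:
            print(f"Port {port} is open")
```

Every open port reported here corresponds to a fully established (and then torn down) connection, which is why this style of scan is so easy to spot in logs.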
Explaining the Stealth Scan
So far, we’ve explained the TCP three-way handshake and its role in a basic port scan. There are many flags that can be used in a handshake, but we’ve only discussed SYN and ACK. In order to understand the stealth scan (otherwise known as the half-open scan), we need to know about a third flag, RST. RST stands for reset. This flag will terminate any handshake immediately. This can give us hackers an edge for evading those pesky logs!
During a SYN scan, the attacker begins the basic TCP three-way handshake as normal. The attacker starts with a SYN packet and waits for the victim’s response. If the victim responds with a SYN-ACK, the port is open, and if it responds with an RST-ACK, the port is closed. After receiving the victim’s response, the attacker sends an RST packet instead of the regular ACK. By terminating the connection before it is complete, it is far less likely that it will be logged as an attempt to connect. This can help us fly under the radar during active recon.
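Here is a rough sketch of that exchange for a single port, written in Python with scapy (an assumption on my part; the article itself performs the scan with nmap, and the follow-up articles build a scanner in Ruby). The target address and port are placeholders, and crafting raw packets like this requires root privileges.

```python
#!/usr/bin/env python3
"""Half-open (SYN) probe of a single port, sketched with scapy.

Assumes scapy is installed and the script runs as root; the target
address and port are placeholders, not systems you have permission
to scan.
"""
from scapy.all import IP, TCP, sr1, send, conf

conf.verb = 0                               # silence scapy's output
target, port = "10.0.0.5", 80               # example values

probe = IP(dst=target) / TCP(sport=40444, dport=port, flags="S")
reply = sr1(probe, timeout=2)

if reply is None:
    print(f"Port {port}: no response (possibly filtered)")
elif reply.haslayer(TCP) and int(reply[TCP].flags) == 0x12:     # SYN-ACK
    print(f"Port {port}: open")
    # Tear down with RST instead of completing the handshake with ACK;
    # this is the "half-open" part that keeps the probe quieter.
    send(IP(dst=target) / TCP(sport=40444, dport=port, flags="R",
                              seq=reply.ack))
elif reply.haslayer(TCP) and int(reply[TCP].flags) == 0x14:     # RST-ACK
    print(f"Port {port}: closed")
```

nmap automates this same exchange for every port in the scan.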
Now that we understand the concept of stealth scanning, let’s perform one using nmap. Nmap supports many, many scan types, including stealth scans. If we want to utilize a stealth scan, we have to give the -sS flag before our target IP address. Let’s go ahead and perform our scan now, we’ll only be scanning ports 1-100:
There we go, our scan worked! But in order to better understand this concept, let’s open up Wireshark and see our packets as they are transported:
We can see here that there are many packets going to and coming from our attacker’s IP (10.0.0.19). In this screenshot we can see that our attacker has sent SYN packets to many different ports on our victim. Now that we’ve seen the SYN packets being sent, let’s take a look at some of the victim’s response packets, as well as some of the attacker’s RST packets:
We can see here that the victim was responding with RST-ACK packets and some SYN-ACK packets. Upon closer inspection, we can see that this SYN-ACK packet comes from port 80 of our victim. This port was also reported as open during our scan! Immediately after the SYN-ACK from port 80, we see that the attacker sent an RST packet to terminate the connection before it was fully established; this lets us slip by without being logged for attempting a connection. There we have it, we successfully performed and dissected a stealth port scan!
We covered quite a bit here, so I hope the concepts got across well. I know it may seem like this is moving a bit slow, but we really need to understand what is happening and learn the mechanics behind it. We’re here to become hackers, not script kiddies. The next two articles will be a Ruby crash course. It will prepare us with all the knowledge we need in order to build our own port scanner. I’ll see you there! | <urn:uuid:730e0f81-91b0-474e-8d66-6b089223cb48> | CC-MAIN-2017-04 | https://www.hackingloops.com/introduction-to-reconnaissance-part-2-a-deeper-look-at-port-scanning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00504-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944375 | 1,355 | 3.09375 | 3 |
As a journalist focused on supercomputing, I’m used to singing the praises of high-tech and the wondrous applications it delivers. The recent advances in fields like genomics, climate simulation, astrophysics and computer-aided manufacturing would be impossible without the latest computer wizardry. But one of the darker sides to IT is its negative impact on employment.
That might seem counter-intuitive. New applications should encourage new industries and demand for workers. But it hasn’t worked out that way. At least not yet. In his 2008 book, The Big Switch: Rewiring the World, from Edison to Google, author Nicholas Carr describes how the Information Technology Revolution is different from the Industrial Revolution that preceded it:
The distinguished Columbia University economist Jagdish Bhagwati argues that computerization is the main cause behind the 2-decades long stagnation of middle class wages. ‘There are assembly lines today, but they are without workers. They are managed by computers in a glass cage above, with highly skilled engineers in charge.’ Normally the introduction of labor-saving technology would erode wages only briefly before the resulting boost in productivity pushed them back up again. Unlike earlier technologies that caused ‘discrete changes’ such as the steam engine, the ongoing advances in computer technology offer workers no respite. The displacement of workers is continuous now and the pressure on wages becomes relentless.
It’s common sense that automation reduces labor demand. And information technology just happens to be the perfect tool for doing this. Software is excellent at doing the same thing over and over again. (That’s why God invented the for-loop.) But it’s also good at making decisions based upon past events. (That’s why God invented the if-statement.) So it’s not just industrial robots pounding rivets into sheet metal in an automobile factory, and making auto workers obsolete. It’s also HPC-style clusters doing business intelligence that was once under the purview of white-collar office workers. History suggests that anything that can be automated eventually will be.
Even techies themselves are at risk. Despite the almost non-stop reports that we are going to need a gazillion new computer scientists to feed the IT workforce over the next several years, computer engineer salaries are stagnant. A recent AP article reports that salaries have even dropped slightly for computer science and engineering majors in the US.
The situation for CS laborers looks even worse in the UK. According to a BBC report, 17 percent of computer science majors who graduated last year are unemployed. Engineers fared only slightly better at 13 percent. The lowest unemployment rates among UK grads were in medicine (0 percent), education (5 percent), and law (6 percent) — not exactly your high-tech fields.
Of course, we’re in the midst of a global recession, and outsourcing has moved a lot of IT jobs to China, India, and other low-cost labor markets. So techies are under assault on a couple of fronts right now. Despite that, the IT sector is outperforming the overall economy. Intel, for example, just reported its best quarter in 42 years, with record sales ($10.77 billion) and profits ($2.89 billion). Rival AMD just reported record revenue ($1.65 billion) as well.
It’s worth noting that neither company needed to ramp up its workforce to accomplish this. In fact, the hub of the US IT industry, Silicon Valley, is not exactly a job factory these days. Unemployment in the Valley is hovering at over 11 percent these days, almost two points above the national average. Yet, many of its companies are forecasting healthy growth over the next 12 months, expecting pent-up consumer and corporate demand to drive revenue.
A tech recovery, though, is unlikely to reignite employment, at least in the US. Most hardware manufacturing, and quite a bit of software development, has now moved overseas. Former Intel CEO and chairman Andy Grove decries the situation, writing in Bloomberg that the US needs to get back in the manufacturing game if it wants to continue to be a center for innovation. Grove puts it this way:
[O]ur pursuit of our individual businesses, which often involves transferring manufacturing and a great deal of engineering out of the country, has hindered our ability to bring innovations to scale at home. Without scaling, we don’t just lose jobs — we lose our hold on new technologies. Losing the ability to scale will ultimately damage our capacity to innovate.
He recommends government investment to develop domestic manufacturing and implement import levies to discourage offshoring production and labor. In essence, craft a job-centric economic policy that revolves around factories that are going to build the mass-produced products of the 21st century — things like consumer electronics, advanced batteries, and solar panels.
This might seem strange coming from a guy who helped build one of the biggest computer tech companies in the world, but it is Intel’s chip manufacturing prowess that drives its huge employment base. The idea that everyone can move up the IT food chain is a losing strategy for jobs. The reality is there are only so many senior computer scientists, VPs, and marketing directors required in the world. Or as Grove says: “…what kind of a society are we going to have if it consists of highly paid people doing high-value-added work — and masses of unemployed?”
Of course, if Carr’s calculation is correct, even factory-based jobs will be swept away by IT. Eventually we’ll have to come up with an economy based on labor that can’t be automated away by machines, software, or communication networks. Or maybe we’ll be forced to come up with an economy based on something other than labor. Hmmm… maybe information technology will save us after all. | <urn:uuid:6d833c89-dc93-40fb-88b7-2683e8054481> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/07/16/information_technology_is_not_the_savior_of_the_unemployed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00504-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952115 | 1,210 | 2.828125 | 3 |
Night-Shining Clouds at Peak Visibility
June 26, 2012
In both the Northern and Southern hemispheres, during their respective late spring and early summer seasons, polar mesospheric clouds are at the peak of their visibility, according to NASA's Earth Observatory.
These clouds are visible from aircraft in flight, the International Space Station (ISS) and from the ground at twilight, and typically appear as delicate, shining threads against the darkness of space — hence their other names of “noctilucent” or “night-shining” clouds.
These clouds are rare, but the chances of seeing them are increasing because they're forming more frequently and becoming brighter, according to Matthew DeLand, an atmospheric scientist with Science Systems and Applications Inc. and NASA's Goddard Space Flight Center who has been studying them for 11 years. | <urn:uuid:b672f14a-fe67-4144-9ef7-94eabd272a94> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Night-Shining-Clouds-at-Peak-Visibility-06262012.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958528 | 175 | 3.5625 | 4 |
Fujitsu Works to Find New Ways to Cool Small Mobile Devices
An innovative, thin heat pipe that's less than 1 mm thick is being developed by Fujitsu to improve internal cooling in tomorrow's smartphones, tablets, laptops and other compact electronic devices.
The low-profile heat pipe, which fits inside a device and wicks heat away from the heat-generating components within it, is being developed by Fujitsu Laboratories, according to an April 14 report on Phys.org. Inside the heat pipe is a liquid that, when passing over the heat sources, turns into a vapor, which then turns back into a liquid as it is cooled, similar to the process used in air conditioning systems.
"Smartphones, tablets, and other similar mobile devices are increasingly multifunctional and fast," the article stated. "These spec improvements, however, have increased heat generated from internal components, and the overheating of localized parts in devices has become problematic."
To battle the worsening heat problems, Fujitsu's thin heat pipe is capable of transferring approximately five times more heat than current thin heat pipes, the story reported, making it possible for CPUs and other heat-generating components to run cooler and to avoid concentrated hot-spots inside devices.
The heat pipe technology was detailed by Fujitsu at the Semiconductor Thermal Measurement, Modeling and Management Symposium 31 (SEMI-THERM 31) in March in San Jose, Calif. The idea of heat pipes is not new, but they continue to find new applications, including this recent research by Fujitsu.
So what's this mean for future smartphone, laptop, tablet and other mobile device owners?
Well, it could mean that the devices we buy in the future could run cooler, a feature that is important for reliability, battery life and longevity. And ultimately it could also mean increased comfort when holding a very warm mobile device in one's hand.
Research like this again shows the amazing nature of innovation among scientists and researchers who are seemingly always finding ways to solve some of the continuing challenges that affect devices we use every day.
You certainly can't put a big cooling fan inside a thin device like a smartphone or tablet, so new fixes have to use creative thought processes. Fans won't fit? So what about a thin tube that circulates fluid which changes from liquid to vapor in a constant cycle, helping to remove heat and keep the device cooler? Very cool.
To me, this idea for thin loop heat pipe innovations is very fitting this week during the 45th anniversary of the Apollo 13 spaceflight, when NASA mission specialists helped bring the crew home safely after a critical oxygen tank exploded on the way to the moon in April 1970. Incredible thinking by NASA during that amazing mission brought astronauts James Lovell, Fred Haise and Jack Swigert back to Earth after a huge mishap.
Similar smart thinking around Fujitsu's new heat pipe could help keep tomorrow's mobile devices a lot cooler. And like the Apollo 13 wizardry that found a way to battle every obstacle during the mission, this is innovation at its finest. | <urn:uuid:8c29e846-de03-41bb-bfb2-ba23f82124b2> | CC-MAIN-2017-04 | http://www.eweek.com/blogs/first-read/fujitsu-works-to-find-new-ways-to-cool-small-mobile-devices.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947047 | 632 | 2.875 | 3 |
General-purpose GPUs (GPGPUs), those silicon over-achievers that can deliver teraflops of computing power, can apparently be used for less noble causes than speeding up medical imaging or optimizing financial portfolios. According to researchers at the Georgia Tech Research Institute (GTRI), high-end GPUs are now able to crack passwords with relative ease. At stake is the whole IT security model, say the researchers.
Of course, the ability to breach password protection has been around for a while, but it was generally restricted to million-dollar supercomputers. Now that anyone can buy a teraflop-capable GPU for a few hundred dollars, you no longer have to be rich and famous to get into the password-cracking “business.”
And it’s not just the GPU hardware that’s making it easier. GPU computing tools, like NVIDIA’s popular CUDA software, make it relatively easy for programmers to tap into the power of the modern graphics processor. And since password-cracking software is easily found on the Internet, ne’er-do-wells have plenty of material to start with.
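A quick back-of-envelope calculation shows why raw GPU throughput matters here. The guess rate below is an assumption for illustration only, since real rates depend heavily on the hash algorithm and hardware, but even a rough figure makes the point about short passwords:

```python
#!/usr/bin/env python3
"""Back-of-envelope brute-force estimate for printable-ASCII passwords.

The guess rate is an assumed round number for illustration (roughly the
order of magnitude a single modern GPU can reach against a fast, unsalted
hash); real-world rates vary enormously with the hash and the hardware.
"""
CHARSET = 95                  # printable ASCII characters
GUESSES_PER_SECOND = 1e9      # assumed: ~1 billion guesses per second

for length in (7, 8, 10, 12):
    keyspace = CHARSET ** length
    seconds = keyspace / GUESSES_PER_SECOND
    if seconds < 86400:
        took = f"{seconds / 3600:.1f} hours"
    elif seconds < 365 * 86400:
        took = f"{seconds / 86400:.1f} days"
    else:
        took = f"{seconds / (365 * 86400):,.0f} years"
    print(f"{length:2d} chars: {keyspace:.2e} combinations -> about {took}")
```

At that assumed rate a seven-character password falls in well under a day, and every added character multiplies the work by 95, which is roughly why the advice below trends toward much longer passwords.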
In a case study on the GTRI website, the researchers warned that the typical password used nowadays is all but worthless. “Right now we can confidently say that a seven-character password is hopelessly inadequate – and as GPU power continues to go up every year, the threat will increase,” said Richard Boyd, a senior research scientist at GTRI. In fact, according to GTRI researcher Joshua Davis, even 12-character passwords could be vulnerable, if not now, then soon. He believes useful passwords will soon have to be entire sentences. | <urn:uuid:f8551584-8868-462f-9a0d-aac2ba1fcd17> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/08/16/passwords_no_match_for_gpgpus/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00128-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933691 | 353 | 2.875 | 3 |
As conservationists work to recover endangered species populations, taking individuals that are maintained and protected under human care and reintroducing them into the wild, it becomes apparent that there is a great deal to learn about the science of species recovery. In a paper published in the recent edition of the Journal of Applied Ecology, a team of wildlife experts from San Diego Zoo Global, the U.S. Geological Survey, the U.S. Fish and Wildlife Service and the University of Nevada analyzed the effect of habitat quality on the survival and dispersal of released desert tortoises. Juvenile tortoises used in this study originated from eggs produced by females housed at the Desert Tortoise Conservation Center in Las Vegas. Ages ranged from 6 months to 4 years. The tortoises were translocated and monitored for one year, using radio tracking systems. "The goals of the study were to help re-establish populations of this threatened and declining species, and to understand better what critical resources on the landscape are associated with the ability of young tortoises to survive and thrive," said Ron Swaisgood, Ph.D., director of Applied Animal Ecology at San Diego Zoo Global. Tortoises released in habitat that included appropriate vegetation, rocks and the presence of animal burrows had lower mortality rates than those released in areas where land features offered fewer options for predator avoidance. "Burrows created by small mammals represent critical components of desert tortoise ecology," said Melia Nafus, Ph.D., a researcher for San Diego Zoo Global and lead author of the study. "Supporting healthy rodent populations through habitat management may improve juvenile desert tortoise survival and recruitment." Another interesting finding of the study was that tortoises released on rocky ground were less likely to disperse away from the release site. "This finding probably relates to the tortoise's dependence on rocky substrate, as camouflage to hide from predators," said tortoise expert and co-author Todd Esque, Ph.D., from the U.S. Geological Survey. "The U.S. Fish and Wildlife Service encourages research such as this because it provides vital knowledge that informs our policy and management decisions," stated study co-author Roy Averill-Murray, who heads the service's Desert Tortoise Recovery Office. "Now, we have better information when deciding which habitats to protect for desert tortoises, and where to attempt re-establishment of desert tortoise populations with future releases." Translocation of individuals back to the wild is one of many important tools that conservation biologists use to recover endangered and threatened species. "We view these translocations as a way to learn more about animals' habitat requirements, while also assisting directly with species recovery goals," said Ron Swaisgood.
In 1992, Ole Karsholt and Jan Pedersen started collecting bugs in light traps on the roof of the Natural History Museum of Denmark in Copenhagen. A quarter-million bugs later, their data on 1543 species of moths and beetles provides astounding evidence that we don't need to wait for 2°C of warming before seeing significant effects of temperature change on the insect community. As might be predicted, the insect "specialists" -- bugs that eat only a single species of plant -- experience temperature changes more dramatically than generalists. "Earlier studies have confirmed that specialist species also respond rapidly to destruction of their habitats, so we are dealing with a very sensitive group of animals," according to postdoc Philip Francis Thomsen from the Center for GeoGenetics, one of the authors of the study published in the Journal of Animal Ecology. The nut weevil, Curculio nucum, a connoisseur of the hazel nut, visited the museum roof in the early years of the study but disappeared in later years. Its place was taken by the acorn weevil, Curculio glandium, suggesting that both species are moving northwards to find cooler domains. The data on other specialist species supported the hypothesis, showing increases in populations of hot-dwelling species and decreases in those that prefer cooler climes. Insects that feed only during the non-mobile larval stage were seen to range quite widely from the habitats of their infancy, at least 10 km distant from the museum roof. The team succeeded in registering seven moth species and two beetles that had not previously been recorded as inhabiting Denmark, including the Asian lady beetle (Harmonia axyridis), which has now spread throughout the country and is considered an invasive species. The conversion of the data gained from the long-term voluntary monitoring project proves how invaluable such records can be. The authors hope their results will help bring back funding for nature monitoring projects so that humanity does not have to depend on a smattering of committed enthusiasts. It seems like citizen scientists could lend a hand in the effort with a little political guidance, benefiting both the people involved and the state of scientific knowledge of our environment.
A new study published in the Journal of Animal Ecology found that migratory seabirds suffered negative repercussions when they had to spend more time rearing chicks, including decreased breeding success when they returned to the colony the following spring. The study artificially altered the length of the chick-rearing period for pairs of Manx shearwaters, giving new insights into the consequences for birds whose reproductive phase doesn't go to plan. All parent pairs involved in the study cared for their foster chicks until they were fully reared - often at their own expense. Lead author Dr Annette Fayet, of the Oxford Navigation Group in the University of Oxford's Department of Zoology, said: 'The results of this study provide evidence for carry-over effects on the subsequent migratory, wintering and breeding behaviour of birds.' Carry-over effects are the processes by which events in one breeding season may affect the outcome of the subsequent season. But the exact nature of these effects, as well as whether they affect other events in birds' annual cycles, such as migration and wintering, has been unclear. Dr Fayet said: 'Birds that had their chick-rearing period extended in our study delayed the start of their autumn migration and spent less time at the wintering grounds, and while they were there they spent less time resting. When they returned to the colony the following spring, they started breeding later, laid smaller eggs, reared lighter chicks - early, heavy chicks survive better - and overall had a lower breeding success. 'This suggests that the birds were in poorer condition after working harder during the experimental breeding season and shows the negative effects on both non-breeding and breeding behaviour in the year following the experiment.' Dr Fayet added: 'Conversely, birds that had a shortened breeding season in the experimental year started migration on time, spent more time resting and less time foraging at the wintering grounds, and had a similar breeding season to control birds the following year. 'Interestingly, this shows that "positive" carry-over effects occur but also that they may be less strong, or are shorter-lived, than "negative" ones.' The team conducted an experiment involving the Manx shearwater (Puffinus puffinus), a migratory bird with an average lifespan of around 30 years. Manx shearwaters nest in burrows on dense colonies along the British coast and embark on a journey of more than 8,000km to the Argentine Sea every autumn. Each year in spring they produce a single chick, which they generally feed for around 60 days. But the resources allocated by parents to feed their chick can vary: for example, factors such as food shortage, poor weather conditions, inexperience or late breeding - perhaps because of delayed migration or poor body condition - are likely to increase the energy expended during reproduction. This raises the possibility of birds being caught in a 'vicious cycle', where the carry-over effects of a difficult breeding season continue throughout the winter, making it harder for the birds to fully regain their body condition and thus have an easier breeding season the following year. In the study, the researchers swapped chicks between nests on the Manx shearwater colony of Skomer Island, Wales, artificially extending or shortening the chick-rearing period of 42 breeding pairs by around 25%. 
They then tracked the movement and behaviour of each adult with miniature geolocators, closely monitoring their breeding performance the following year (including laying date, egg mass, chick growth rate and breeding success). All pairs cared for their foster chick until normal fledging age, which resulted in a delayed start of migration for the pairs which had their chick-rearing period extended. Dr Fayet said: 'Controlled experiments like this one are rare but necessary to disentangle the complex mechanisms of carry-over effects and cost of reproduction in migratory birds. 'The results of this study are important because they reveal how carry-over effects can develop and affect animals throughout their annual cycle, and not just in terms of their breeding performance. They also help us understand how the decisions birds make regarding their life cycles - such as delaying migration to ensure their chicks are properly reared - are influenced by a complex relationship between individual body condition, external constraints, and current and future reproduction. 'Some 28% of seabird species are globally threatened, and numbers have dropped by 70% over the past 60 years. If all the events in a bird's annual cycle are linked, from breeding to migration and wintering, then any conservation measures to combat a species' decline must address these events together.' Dr Fayet added: 'However, we still have a lot to understand. For example, our study did not investigate whether carry-over effects affect the two sexes differently: do females, which have to produce the egg, pay a heavier price for being in a poor condition? Additionally, little is known about the duration of carry-over effects, which is likely to affect how long-lived animals optimise their life decisions.' Manx shearwaters do not recognise their chicks individually, and the breeding pairs used in this study were able to start migration no later than other naturally late breeders. Explore further: Hormones dictate when youngsters fly the nest: research More information: 'Carry-Over Effects on the Annual Cycle of a Migratory Seabird: an Experimental Study' Journal of Animal Ecology, Wednesday 31 August 2016. DOI: 10.1111/1365-2656.12580
With a final breath of air, I descend beneath the surface among swaying kelp and flying sea lions. I’m in search of a creature that has eluded me for many years. As a freediver, I’ve met the humbling gaze of a tiger shark and tossed around seaweed with playful wild spotted dolphins. But I’ve never faced the puckered lips and buggy eyes of the whimsical Mola mola — a fish that can reach the weight of an adult rhinoceros. Biologists have affectionately described Mola, or ocean sunfish, as a “a swimming head.” And while they seem to just float aimlessly at the surface, scientists are finding that these fish — which occupy a crucial evolutionary link in the fish family— are actually warming up after epic daily treks into deep water. A testament to this bizarre nature, a viral video of the sunfish circulated in September of 2015. The expletive-shouting Boston fisherman is unsure if he’s witnessing a baby whale or sea turtle, two seemingly dissimilar animals. He’s not the only one. These fish live off the California coast and around the world in temperate and tropical areas. But many people have never heard of them, let alone seen one. Mola mola are not endangered and not eaten in the United States. In fact, females can produce up to 300 million eggs, more than any other bony fish. But the hapless fish ends up tangled in fishing nets, as bycatch for more valuable target species. They make up the largest bycatch component (29 percent) in the California drift-gillnet swordfish fishery. So why does it matter if Mola mola are caught in mesh nets? Mola are pelagic, which means they live in the open ocean. Like humans, and many other fish, they have a bony internal skeleton. Sharks and rays, however, have a cartilaginous skeleton. According to some scientists, mola could provide a missing link to understand their open ocean neighbors animals, like sharks. “Sunfish are one of the most advanced bony fish, but they have a lot in common with cartilaginous fish. What they have in common may be adaptive to pelagic life and to study it may lead to solve evolution of pelagic species,” says Itsumi Nakamura, a biologist at the University of Tokyo. Mola have lost the calcium carbonate that makes their skeleton hard, so it’s more like a shark skeleton, says Christopher Lowe, a professor of marine biology at Cal State Long Beach State. Also like sharks, they lack a swim bladder that helps most bony fish stay afloat. Being lighter means using less energy, which is important when you are searching for hard to find and low calorie dinner items, common for deep-sea eaters, he says. Nakamura has studied the fish since 2009 and recently revealed mysteries of their strange behavior. Mola are often seen just lounging at the surface, he says. But new research published in the Journal of Animal Ecology from Nakamura and his team have found that mola actually make daily treks to the deep sea more than 2,600 feet beneath the surface — a place reserved for creatures like giant squid and diving sperm whales. Scientists have actually known mola dive deep for years. In fact, Lowe was one of the first scientists to track mola back in 2004. He speculated their diving behavior was related to regulating body temperature, but at the time scientists lacked the tools to test it, he says. Nakamura and his team “did a really nice job of demonstrating that, using appropriate new technology,” says Lowe. 
With the help of local fisherman in Funakoshi Bay, Nakamura’s team caught sunfish and outfitted them with thermometers, accelerometers, and video cameras. Though they can reach 2,600 feet, most trips throughout the day averaged between 350-600 feet. Mola returned to warm surface waters in between dives. This sunbathing behavior regulates their body temperature, and allows them to stay at depth longer, concludes Nakamura. It may also explain their large size. Larger sunfish can hunt longer and lose heat at slower rates than smaller ones, he says. So why journey to the dark and deep? They are eating jellyfish-like creatures, called siphonophores, says Nakamura. And it turns out, after observing the camera from one mola, they might be dining on the most nutritious part of the animals- their sex organs. “Of course I was surprised, because it is very novel that they eat only calorie rich parts of the jellyfish,” he says. Relaxing at the surface has another benefit for the sunfish; it’s a trip to the spa. Mola line up at cleaning stations, while smaller fish peck parasites from their body. For a more thorough cleaning, mola swim to the surface and seagulls jab through their flesh, feasting on parasitic worms. “They are a treasure trove for parasitologists,” says Lowe. Nakamura isn’t alone in his quest to understand the mola mola. Scientists are still trying to fill in the gaps for this under-studied species. This study demonstrates that mola are temperature sensitive, say Lowe. Global climate change and warming oceans will change their behavior and the distribution of their prey The question is, how? he asked. Tierney Thys has studied the mola for over 15 years and is researching everything from their vision and diving behavior, to pollutant levels. She recently published results from the first long-term tagging study of mola mola in the eastern Pacific in the Journal of Experimental Marine Biology and Ecology. They tagged 15 mola mola off the coast of southern California between 2003 and 2010. Of the 15 tagged individuals, four made dives greater than 1600 feet. It seems California mola are also diving deep. And last year, a lucky southern California native met a mola mola in the open sea. Ryan Brennan is the owner of a BMX bike show company, is a spearfisherman, and a freediver. He was headed 18 miles offshore on a foggy morning to go fishing with three friends. Still groggy from the 5:00 a.m. wake up call, he jumped into blue water. Like the Boston fisherman, Brennan had no idea what he was witnessing. “One of my first thoughts was, wow this one of the ugliest fish I have ever seen,” says Brennan. But Brennan spent time in the water, watching and observing the fish. And now, seeing the fish again is what he yearns to do the most, he says. “I saw the eyeball looking at me,” he says. “The more I looked into the animal, I realized it’s really beautiful.” Eventually the fish took off. And Brennan was surprised by how fast it moved. “They are unbelievably athletic, even though they don’t look that way,” says Lowe. The evidence is clear: the mola mola is more than just a lazy sun-bather. And while I have yet to meet one, I’ll continue weaving through kelp forests to find the “swimming head.” Bethany Augliere is a journalist in the science communication program at the University of California, Santa Cruz. She holds a master's degree in marine biology from Florida Atlantic University. For more of her work, visit her website or follow her on Twitter @BethanyAugliere.
Muller J., Bavarian Forest National Park | Stadler J., Helmholtz Center for Environmental Research | Brandl R., Animal Ecology
Remote Sensing of Environment | Year: 2010
Whether diversity and composition of avian communities is determined primarily by responses of species to the floristic composition or to the structural characteristics of habitats has been an ongoing debate, at least since the publication of MacArthur and MacArthur (1961). This debate, however, has been hampered by two problems: 1) it is notoriously time consuming to measure the physiognomy of habitat, particularly in forests, and 2) rigorous statistical methods to predict the composition of bird assemblages from assemblages of plants have not been available. Here we use airborne laser scanning (lidar) to measure the habitat (vegetation) structure of a montane forest across large spatial extents with a very fine grain. Furthermore, we use predictive co-correspondence and canonical correspondence analyses to predict the composition of bird communities from the composition and structure of another community (i.e. plants). By using these new techniques, we show that the physiognomy of the vegetation is a significantly more powerful predictor of the composition of bird assemblages than plant species composition in the field and as well in the shrub/tree layer, both on a level of p < 0.001. Our results demonstrate that ecologists should consider remote sensing as a tool to improve the understanding of the variation of bird assemblages in space and time. Particularly in complex habitats, such as forests, lidar is a valuable and comparatively inexpensive tool to characterize the structure of the canopy even across large and rough terrain. © 2009 Elsevier Inc. All rights reserved. Source | <urn:uuid:57f2d08d-0e22-460b-8778-046b26f103f3> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/animal-ecology-1464092/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955109 | 4,038 | 4.0625 | 4 |
There is some confusion about the definitions of Red, Blue, and Purple teams within Information Security. Here are my definitions and concepts associated with them.
- Red Teams are external entities brought in to test the effectiveness of a security program. This is accomplished by emulating the behaviors and techniques of likely attackers in the most realistic way possible. The practice is also known as Penetration Testing, and involves the pursuit of one or more objectives.
- Blue Teams refer to the internal security team that defends against both real attackers and Red Teams. Blue Teams should be distinguished from standard security teams in most organizations, as most security operations teams do not have a mentality of constant vigilance against attack, which is the mission and perspective of a true Blue Team.
- Purple Teams exist to ensure and maximize the effectiveness of the Red and Blue teams. They do this by integrating the defensive tactics and controls from the Blue Team with the threats and vulnerabilities found by the Red Team into a single narrative that ensures the efforts of each are utilized to their maximum. When done properly, 1 + 1 will equal 3.
Concepts and philosophy
Red and Blue teams ideally work in perfect harmony with each other, as two opposing sides of the same coin.
Like Yin and Yang or Attack and Defense, Red and Blue teams could not be more opposite in their tactics and behaviors, but these differences are precisely what make them part of a healthy and effective whole.
Red Teams attack, and Blue Teams defend, but the primary goal is shared between them: improve the security posture of the organization.
Purple Teams are arguably an artificial addition to this pairing. They exist to ensure that observations and lessons from both teams make it to the other so that continuous improvement can occur. Without this crucial bridge, each team discovers key insights but doesn’t share them with the other.
For example, the Red Team might learn ways they could have been stopped but not share this knowledge with the Blue Team. Or the Blue Team may be aware of gaps in their controls but not share them with the Red Team.
Some of the common problems with Red and Blue team cooperation include:
- The Red Team thinks itself too elite to share information with the Blue Team
- The Red Team is pulled inside the organization and becomes neutered, restricted, and demoralized, ultimately resulting in a catastrophic reduction in their effectiveness
- The Red Team and Blue Team are not designed to interact with each other on a continuous basis, as a matter of course, so lessons learned on each side are effectively lost
- Information Security management does not see the Red and Blue team as part of the same effort, and there is no shared management or metrics shared between them
Organizations that suffer from one or more of these ailments are most likely to need a Purple Team to solve them.
A key point in the understanding of Purple Teams is that it should be thought of as a function, or a concept, more than as a separate entity. This can come in the form of an actual, named team that performs this function, or it could be part of the Red/Blue teams’ management organization that ensures that the feedback loop between them is continuous and healthy.
Having the Purple Team function occur as part of security management may be ideal so that it does not appear as if the Purple Team is a peer with the other two, or that the Purple Team is the only way the Red and Blue teams will communicate with each other. This breakdown can perpetuate the negative adversarial aspects (which include reluctance to share information) between the Red and Blue teams.
- Red Teams emulate attackers in order to find flaws in the defenses of the organizations they’re working for
- Blue Teams defend against attackers and work to constantly improve their organization’s security posture
- A properly functioning Red / Blue Team implementation features continuous knowledge sharing between the Red and Blue teams in order to enable continuous improvement of both
- Purple Teams are often created to facilitate this continuous integration between the two groups
- The Purple Team can be conceptualized as a Purple Team function, and can exist as a separate team or as part of the security management organization
- In an ideal, mature organization, the Red Team’s entire purpose is to improve the Blue Team, so the interaction provided by the Purple Team should be superfluous
- All these terms can apply to any kind of security operation, but these specific definitions are tuned towards information security.
- The ideal organizational placement of a Purple Team is a subject of debate. The most important thing is simply that it occurs somewhere.
- A Tiger Team is similar, but not quite the same as a Red Team. A 1964 paper defined the term as “a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem.” The term is now often used as a synonym for Red Team, but the general definition is an elite group of people assembled to solve a particular technical challenge.
- It is important that Red Teams maintain a certain separation from the organizations they are testing, as this is what gives them the proper scope and perspective to continue emulating attackers. Organizations that bring Red Teams inside, as part of their security team, tend to (with few exceptions) slowly erode the authority, scope, and general freedom of the Red Team to operate like an actual attacker. Over time (often just a number of months) Red Teams that were previously elite and effective become constrained, stale, and ultimately impotent.
- In addition to being a bridge organization for less mature programs, Purple Teams can also help organizations acclimate their management to the concept of attacker emulation, which can be a frightening concept for many organizations.
- Another aspect that leads to the dilution of effectiveness of internal Red Teams is that elite Red Team members seldom transition well to cultures at companies with the means to hire them. In other words, companies that can afford a true Red Team tend to have cultures that are difficult or impossible for elite Red Team members to handle. This often leads to high attrition within internal Red Team members who make the transition to internal.
- It is technically possible for an internal Red Team to be effective; it’s just extremely unlikely that they can remain protected and supported at the highest levels over long periods of time. This tends to lead to erosion, frustration, and attrition.
- One trap that internal Red Teams regularly fall into is being reduced in power and scope to the point of being ineffective, at which point management brings in consultants who have full support and who come back with a bunch of great findings. Management then looks at the internal team and says, “Wow! They’re amazing! Why can’t you do that?” That’s usually a LinkedIn-generating event.
- Thanks to Rob Fuller, Dave Kennedy, and Jason Haddix for reading drafts. | <urn:uuid:9c219a13-9c8e-4737-9a21-ba6f341b3b9b> | CC-MAIN-2017-04 | https://danielmiessler.com/study/red-blue-purple-teams/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959233 | 1,416 | 2.96875 | 3 |
The Problem With Common Two-Factor Authentication Solutions
More websites and online businesses today are beginning to rely on smartphones as a second factor of authentication.
Some online banks have been using SMS-based authentication for transaction verification but recently, major websites and businesses not in regulated industries are recognizing the need for stronger online authentication.
Earlier this year Google made two-factor authentication available to all users, and in the past few days Facebook also rolled out two-factor authentication.
It's great news that more websites are strengthening online authentication. When one considers how much sensitive, personal information people share on the Web, relying on a single layer of password protection simply is not enough.
However, sending a one-time password or authentication code by SMS text message is also not very secure, because they are often sent in clear text.
Mobile phones are easily lost and stolen and if another person has possession of the user's phone, they could read the text message and fraudulently authenticate. SMS text messages can also be intercepted and forwarded to another phone number, allowing a cybercriminal to receive the authentication code.
With more businesses relying on mobile phones for out-of-band authentication, cybercriminals will increasingly target this channel for attack -- meaning that businesses should use a more secure approach than simple SMS text message.
However, the challenge for consumer-facing websites is to balance strong security with usability. Complicated security schemes will not achieve widespread adoption among Internet users.
A more secure and easy to use approach is to display a type of image-based authentication challenge on the user's smartphone to create a one-time password (OTP). Here's one example of how it can be done: During the user's first-time registration or enrollment with the website they choose a few categories of things they can easily remember - such as cars, food and flowers.
When out-of-band authentication is needed, the business can trigger an application on the user's smartphone to display a randomly-generated grid of pictures. The user authenticates by tapping the pictures that fit their secret, pre-chosen categories. The specific pictures that appear on the grid are different each time but the user will always look for their same categories.
In this way, the authentication challenge forms a unique, image-based "password" that is different every time - a true OTP. Yet, the user only needs to remember their three categories (in this case cars, food and flowers).
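A minimal server-side sketch of this idea follows. Everything in it is hypothetical: the category names, grid size, and verification rule are illustrative stand-ins rather than any vendor's actual protocol. It simply shows the moving parts described above: the server builds a random grid, remembers which positions belong to the user's secret categories, and compares them against the positions tapped on the phone.

```python
#!/usr/bin/env python3
"""Sketch of a category-based image challenge (illustrative only).

The category names, grid size, and verification rule are hypothetical
stand-ins, not any vendor's real protocol. The server builds a random
grid of images, remembers which positions show the user's secret
categories, and checks the positions tapped on the phone.
"""
import secrets

IMAGE_LIBRARY = {                          # example categories and images
    "cars":    ["car_01", "car_02", "car_03"],
    "food":    ["food_01", "food_02", "food_03"],
    "flowers": ["flower_01", "flower_02", "flower_03"],
    "tools":   ["tool_01", "tool_02", "tool_03"],
    "animals": ["animal_01", "animal_02", "animal_03"],
}

def build_challenge(user_categories, grid_size=9):
    """Return (grid, correct_positions) for a single authentication attempt."""
    grid, correct = [], set()
    all_categories = list(IMAGE_LIBRARY)
    for position in range(grid_size):
        # Make sure each secret category appears at least once, then fill
        # the remaining cells with randomly chosen categories.
        if position < len(user_categories):
            category = user_categories[position]
        else:
            category = secrets.choice(all_categories)
        grid.append(secrets.choice(IMAGE_LIBRARY[category]))
        if category in user_categories:
            correct.add(position)
    # A real implementation would also shuffle the grid before sending it.
    return grid, correct

def verify(tapped_positions, correct_positions):
    """The user passes by tapping exactly the images from their categories."""
    return set(tapped_positions) == correct_positions

# One round: the server sends `grid` to the phone app and keeps `answer` secret.
grid, answer = build_challenge(["cars", "food", "flowers"])
print(grid)
print(verify(answer, answer))    # a correct response authenticates
```

Because the grid is regenerated for every attempt, the set of correct positions changes each time, which is what makes the tapped response behave like a one-time password rather than a reusable secret.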
Delivering a type of knowledge-based authentication challenge to the user's smartphone rather than an SMS message with the code displayed in clear text is more secure because the interaction takes place entirely out-of-band using the mobile channel.
Because the mobile application communicates directly with the business' server to verify that the user authenticated correctly, it is much more secure than having the user receive a code on their phone but then type it into the web page to authenticate.
Additionally, even if another person has possession of the user's phone, they would not be able to correctly authenticate because they do not know the user's secret categories.
This secure two-factor, two-channel authentication process will help mitigate more sophisticated malicious attacks such as man-in-the-browser (MITB) and man-in-the-middle (MITM).
Perhaps as important as security is ease of use. Most Internet users won't adopt security processes that are too cumbersome, and most online businesses don't want to burden their users.
Image-based authentication is much easier on users because they only need to remember a few categories of their favorite things and tap the appropriate images on the phone's screen, which is much easier than typing long passwords on a tiny phone keyboard or correctly copying an alphanumeric code from one's text message inbox on the phone to the web page on the PC.
In fact, a survey conducted by Javelin Strategy and Research group confirmed that 6 out of 10 consumers prefer easy-to-use authentication methods such as image identification/recognition.
More websites and online businesses should follow the example set by Google and Facebook by deploying two-factor authentication for users.
However, as criminals increasingly target mobile authentication methods and intercept SMS text messages, it will be critical for businesses to use a type of knowledge-based authentication challenge rather than sending an authentication code as a plain SMS text message. | <urn:uuid:463738aa-79c5-45b4-9187-51a81509d1e2> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/13734-The-Problem-with-Two-Factor-Authentication-Solutions.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928719 | 887 | 2.59375 | 3 |
Ethernet continues to be the most widely used network architecture today for its low cost and backward compatibility with the existing Ethernet infrastructure. Driven by increasing networking demands from applications such as Internet search, Web hosting, video on demand, and high performance computing, network speed is rapidly migrating to 10 Gigabits per second and beyond. But as network speed increases, it poses a number of great challenges for computer servers.
Recently, researchers at University of California, Riverside have studied I/O challenges from high-speed networks and invented a new architecture to efficiently tackle those challenges. A paper describing the research, “A New Server I/O Architecture for High Speed Networks“, was co-authored by graduate student Guangdeng Liao and professor Laxmi Bhuyan. The paper will be presented February 15 at the IEEE International Symposium on High-Performance Computer Architecture (HPCA) in San Antonio, Texas. What follows is an encapsulation of this work and a description of the new I/O design.
Traditional architectural designs of processors, cache hierarchies and system interconnects focus on CPU and/or memory-intensive applications, and are decoupled from I/O considerations. As a result, they tend to be inefficient for network processing. Network processing over 10 Gigabit Ethernet (10GbE) easily saturates two cores of an Intel Xeon quad-core processor. Assuming ideal scalability over multiple cores, network processing over 40GbE and 100GbE will saturate 8 and 20 cores, respectively. In addition to the processing inefficiency, the increasing network speed also poses a big challenge to network interface card (NIC) designs. DMA descriptor fetches over a long latency PCI Express bus heavily stress the DMA engine in NICs and necessitate larger NIC buffers to temporarily keep packets.
These requirements significantly increase the device’s design complexity and price. For instance, the price of a 10GbE NIC can be up to $1,400, while a 1GbE NIC costs less than $40. Therefore having highly efficient network processing with the low complexity of NIC becomes a critical question to answer.
In order to understand network processing efficiency, we used the network benchmark Iperf over 10GbE on Intel Xeon quad-core processor-based servers to measure per-packet processing overhead. The study instruments the driver and OS kernel using hardware performance counters provided by the CPU to pinpoint real performance bottlenecks. Unlike existing profiling tools, which attribute CPU costs such as retired cycles or cache misses to functions, this instrumentation works at a fine-grained level and can pinpoint the specific data incurring the cost.
Through detailed overhead analysis we obtained several new observations, which have not yet been reported.
First, the study found that besides data/packet copy from kernel-to-user space, the driver and socket buffer release unexpectedly take 46 percent of processing time for large I/O sizes and even 54 percent for small I/O sizes. Thus, the major network processing bottlenecks lie in the driver (greater than 26 percent), data copy (up to 34 percent depending on I/O sizes) and buffer release (greater than 20 percent), rather than the TCP/IP protocol itself.
Second, in contrast to the generally accepted notion that long-latency NIC register access causes the driver overhead, our analysis showed that the overhead comes from memory stalls on network buffer data structures. Simply integrating the NIC into the CPU, as in Niagara 2 processors with two integrated 10GbE NICs, to reduce register access latency does little to improve network processing performance.
Third, releasing network buffers in OS results in memory stalls to in-kernel page data structures, contributing to the buffer release overhead.
Finally, besides memory stalls on packets, data copy, implemented as a series of load/store instructions, also spends significant time on L1 cache misses and instruction execution. Prevailing platform optimizations for data copy, like Direct Cache Access (DCA), are insufficient for addressing the copy issue.
The studies reveal that besides memory stalls, each packet incurs several cache misses on corresponding data and has considerable data copy overhead. Some intuitive solutions like having larger last-level caches or extending the optimization DCA might help network processing performance to some extent, but have major limitations. Increasing cache size is an ineffective approach and more importantly, is unable to address NIC challenges and the data copy issue. Unlike increasing cache size, extending DCA to deliver both packets and those missed data from NICs into caches is more efficient in avoiding memory stalls. The downside is that it stresses NICs more heavily and degrades PCI Express efficiency of packet transfers. In addition, it does not consider the data copy issue.
To efficiently tackle all challenges from high-speed networks, the paper proposes a new server I/O architecture, where the responsibility for managing DMA descriptors is moved to an on-chip network engine, known as NEngine. The on-chip descriptor management exposes plenty of optimization opportunities like extending descriptors. Information about data incurring memory stalls during network processing is added into descriptors.
It basically works like this: When the NIC receives a packet, it directly pushes the packet into NEngine without waiting for long-latency descriptor fetches. NEngine reads the extended descriptors to obtain the packet's destination location and information about data incurring memory stalls. Then, it moves the packet into the destination memory location and checks whether the data incurring the stalls resides in caches. If not, NEngine sends the data addresses to the hardware prefetching facility for loading the data, thus avoiding memory stalls on them during packet processing. To address the data copy issue, NEngine moves the payload inside the last level cache and invalidates source cache lines after the movement, since the source data becomes useless and dead after the copy.
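To make that sequence concrete, here is a rough, illustrative model of the receive path in Python. Every name and structure here (the descriptor fields, the simple cache and prefetch stand-ins) is an assumption made for illustration; it is not an interface defined in the paper.

```python
# Illustrative model of the NEngine receive path described above.
# All names and structures are assumptions, not the paper's interfaces.

llc = set()      # addresses currently resident in the last-level cache
memory = {}      # destination address -> payload

def prefetch(addr):
    """Stand-in for the hardware prefetcher: pull data into the cache."""
    llc.add(addr)

def nengine_receive(packet, descriptor):
    # 1. The descriptor is already on-chip, so no long-latency PCI Express
    #    fetch is needed before the packet can be placed.
    dest = descriptor["destination"]
    hot_addrs = descriptor["stall_prone_data"]   # e.g. buffer metadata

    # 2. Move the payload to its destination and keep it cache-resident,
    #    rather than parking it in large NIC buffers.
    memory[dest] = packet
    llc.add(dest)

    # 3. Warm up any other data the stack will touch, so packet processing
    #    does not stall on main memory later.
    for addr in hot_addrs:
        if addr not in llc:
            prefetch(addr)

nengine_receive(b"payload", {"destination": 0x1000,
                             "stall_prone_data": [0x2000, 0x2040]})
```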
The new I/O architecture allows the DMA engine to have fast access to descriptors and keeps packets in CPU caches rather than in NIC buffers. These designs substantially reduce the burden on the DMA engine and avoid extensive NIC buffers in high-speed networks. While NICs are decoupled from the DMA engine, they retain other hardware features such as Receive Side Scaling and Interrupt Coalescing.
Unlike previous approaches such as DCA, the new server I/O architecture ameliorates all major performance bottlenecks of network processing and simplifies NIC designs, enabling general-purpose platforms to be well suited for high-speed networks. Performance evaluation shows that it significantly improves the network processing efficiency and Web server throughput while substantially reducing the NIC hardware complexity. The new server I/O architecture inherits the descriptor-based software/hardware interface and only needs some modest support from the device driver and the data copy component. There is no need to modify TCP/IP protocol stack, system calls or user applications.
About the Author
Guangdeng Liao is a fifth year Ph.D. student at University of California, Riverside. His research interest lies in high performance I/O, computer architecture and virtualization. | <urn:uuid:4e5decc8-1f31-490e-a029-8ffbf3d2962a> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/01/31/new_architecture_tackles_high_speed_network_challenges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904294 | 1,432 | 2.75 | 3 |
It is very common to see Portable Executable (PE) file infector viruses. It is a bit more unusual to see file infection via the raw file system — in this case, a Master Boot Record (MBR) file system infector.
Partly this is because PE infectors are less troublesome to create — they can be more robust, and they are easier to develop and control. In contrast, MBR infectors are more complex and their size is limited to 62 sectors (7C00H). Also, there's less room for error — a small mistake or bug in an MBR file system infector leaves the system unbootable.
So an MBR file system infector such as Trojan:W32/Smitnyl.A (98b349c7880eda46c63ae1061d2475181b2c9d7b), which appears to be distributed via some free file-sharing networks, seems worth a quick analysis, even if it only targets one portable executable system file and the infection is straightforward compared to common virus file infectors.
Smitnyl.A first infects the MBR via raw disk access, replacing the original MBR with a malicious one that contains the file infector routine (stored at sector 32).
Image 1 & 2: Overwriting original MBR, Part 1 (top) and Part 2 (bottom)
Why an MBR File System Infector? Probably because it can bypass Windows File Protection (WFP). Because WFP runs while Windows is up, any WFP-protected file that is replaced will be restored immediately; by performing the infection from the boot sequence, before Windows and WFP are loaded, the malware sidesteps that restoration.
The original MBR is stored at sector 5, while the infector payload starts at sector 39 with size A00H. This payload will be overwritten to the Windows critical system file, userinit.exe.
Image 3 & 4: Hex views of infected MBR (left) and original MBR (bottom)
Image 5: Hex View of MBR File System Infector Routine
Image 6: Hex View of Userinit Infector Payload
Why Userinit? Possibly because it is one of the processes launched automatically when the system starts, which allows the malware to execute at every boot.
Smitnyl infects Userinit from the first stage of the boot sequence. When the MBR is loaded to 0x7C00, it determines the active partition from the partition table, along with the starting offset of the boot sector.
It then checks the machine’s file system type:
Image 7: Determine Boot Sector Type
If an NTFS file system is found, it parses the Master File Table (MFT) and reads the attributes of the $ROOT (.) file record to locate the $INDEX_ALLOCATION attribute, in order to find the raw data of userinit.exe on the disk (assuming the MFT is parsed correctly). Smitnyl checks for the Windows path from $ROOT down to the System32 directory, where userinit.exe is located.
Image 8 & 9: Locate Userinit.exe, Part 1
The malware uses the get_userinit_data_content_addr routine to find the userinit.exe file, and then uses the Extended Write function (function number AH = 43H) to overwrite it with the infector payload stored at sector 39. During the userinit.exe infection routine, the malware also checks for the presence of an infection marker at offset 0x28 (more on that later).
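Incidentally, that marker also gives analysts a simple way to spot an already-infected file. A minimal sketch of such a check appears below; the offset and value come from the analysis above, while everything else is illustrative and is not code recovered from the malware.

```python
# Check whether a file carries the 16-bit infection marker 0x55AA at
# offset 0x28, as described in the analysis above.  Illustrative only;
# the little-endian byte order is an assumption.

import struct

MARKER_OFFSET = 0x28
MARKER_VALUE = 0x55AA

def has_infection_marker(path):
    with open(path, "rb") as f:
        f.seek(MARKER_OFFSET)
        data = f.read(2)
    return len(data) == 2 and struct.unpack("<H", data)[0] == MARKER_VALUE

# Example (on a Windows system):
# print(has_infection_marker(r"C:\Windows\System32\userinit.exe"))
```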
Image 10 & 11: Locate Userinit.exe, Part 2
After the machine is successfully booted up with the infected MBR, userinit.exe should be infected and launched automatically. One way to identify the infected userinit.exe is by checking the file properties.
Image 12 & 13: userinit.exe Properties, original & infected
Fortunately, the difference is pretty obvious.
Let's see the infected file in hex view:
Image 14: Infected Userinit
Remember we mentioned that the infector routine checks the infection marker 0x55AA before infecting? So what does the infected file try to do when it is executed? Its major payload is to launch an encoded executable, located at sector 45:
Image 15: Encoded Executable File at Sector 45
It has some preliminaries to do before it starts decoding and launching the final payload:
• Check for the presence of 360safe antivirus. If found, 360safe IE browser protection is disabled.
• Create a fake explorer.exe in a temporary folder — this is the decoded executable.
Image 17: Fake Explorer with Decoded Executable
Image 18: Fake Explorer with Decoded Executable
• After decoding, it launches %temp%\explorer.exe using ShellExecute — this serves as a decoy to hide the infection. At the same time, it will execute the real explorer.exe using Winexec.
Image 19: Execute fake explorer.exe and launch original explorer.exe
Once the preliminaries are done, the payload is launched.
Image 20: Final Downloader Payload
Fortunately, there is nothing special about the final payload — it is merely a downloader. The infected userinit.exe disables 360safe's IE browser protection so that the downloader can retrieve files from the remote server http://[...].perfectexe.com/. | <urn:uuid:00caf8f3-576c-4020-b972-eea4dbcfc927> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00002101.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.868576 | 1,112 | 2.5625 | 3 |
The Atom Unit
Mixtures Notes/Nuts and Bolts Activity: Slides 6-16,
Mixture Separation Lab: Slides 17-33,
Atomic Timeline Foldable: Slides 39-58,
Bohr Model Foldable: Slides 62-93
Isotope Notes: Slides 102-114
Share a memory.
Agenda/Objectives – 9/27/16
Mock Electron Tentative Schedule
Notes on Elements, Compounds, and Mixtures
Nuts and Bolts Activity
Pre-Lab on Separation of Mixtures
Be able to differentiate between elements, compounds, and mixtures
Apply understanding of mixtures to real life
We must first think about how the idea of the Atom surfaced.
How did scientists come up with the idea of the atom?
How Can We Differentiate?
It began with the element – the idea that there were substances that could not be broken down any further.
“How can we get the purest substance?”
Everything is made up of matter.
“What is the difference between atoms and matter”
Elements and Compounds
The purest substances were known as elements.
Elements are substances that cannot be chemically broken down into simpler substances and are the primary constituents of matter.
Ex: Oxygen, Nitrogen, Copper,
Scientists also knew that elements could chemically combine to form compounds.
Compounds are substances made up of elements that are chemically combined.
Ex: Water, Sodium chloride (salt), Ethylene glycol,
When referencing elements and compounds, the term pure is often used.
Purity describes a substance whose matter consists of only one element or compound (remember that compounds are elements that are chemically bound together!).
But we know that most things aren’t pure, they are mixtures.
Mixtures are substances made up of elements and/or compounds that are physically combined.
Ex: Salt Water, Sand,
There are two types of mixtures: homogeneous and heterogeneous
Homogeneous describes a mixture that appears to be evenly mixed in the same concentration.
Ex: Milk, Orange Juice (no pulp),
Heterogeneous describes a mixture that is not evenly mixed in the same concentration.
Ex: Mixed Nuts, Orange Juice (with pulp),
Mixtures vs Compounds vs Elements (Nuts and Bolts)
Label each picture according to what you think it represents (compound or element)
Each group will be given a bag, which may contain nuts or bolts or both.
Using your understanding of mixtures/compounds/elements, associate each bag with what each bag might represent.
Mixtures vs Compounds vs Elements (Nuts and Bolts) Make-Up Activity
If you are absent from class, this is the make-up activity to be completed on your own time.
Label each picture on the following slide as a compound, element, or mixture of compound and/or element.
Answer questions on the slide following the images.
Questions to Ponder:
Can mixtures be a combination of compounds and elements?
Are mixtures and compounds mutually exclusive (can there be mixtures of compounds)?
Venn Diagram Activity
Using your knowledge so far (definitions and examples), create a Venn Diagram showcasing the difference between an Element, Compound, and Mixture.
Cannot be broken down
Oxygen, Nitrogen, etc.
Made up of elements chemically combined
Can be broken down through chemical means
Water (H2O), Carbon dioxide (CO2), etc
Made up of elements/atoms
- Does not have to have a specific ratio of elements
- Made up of combined elements
Can be homogeneous or heterogeneous
Made up of elements physically combined
Salt water, orange juice
Do you think a Venn diagram is an appropriate way to display the differences between Elements, Compounds, and Mixtures?
Mixture Separation Lab
To apply an understanding of physical changes to separation of mixtures (homogeneous and heterogeneous).
Compared to compounds, mixtures can be made up of various amounts of elements and/or compounds. While compounds have to be combined in whole number ratios (ex: two hydrogen atoms for every one oxygen atom [H2O]), mixtures have no perfect “formula” in which they are to be made.
For example, there is no exact recipe for lemonade, as people can adjust the amount of sugar, water, and lemon juice to their liking. Additionally, because mixtures are a physical combination of elements/compounds, the constituents of the mixtures can just as well be physically separated.
In this activity, you will have a mixture of various compounds, which include sand, table salt, sugar, iron fillings, and extra virgin olive oil. Your goal is to separate these compounds into their individual constituents.
400 mL Beaker
Double boiler set-up
Rubbing alcohol (91%)
What is your favorite day of the week?
Agenda/Objectives – 9/29/16
Atomic Classroom Activity
Pre-Lab/Lab on Separation of Mixtures
Understand the quantum model of the atom
Remove the iron filings with a magnet. You are left with sand, salt, and sugar.
Add water to the mixture (what does the water do?).
Filter the mixture using filter paper, a funnel, and a beaker.
You are left with salt, sugar, and water (how do you get rid of the water?).
Evaporate the water using the Bunsen burner. Add alcohol to the mixture (salt and sugar)(what does the alcohol do?).
Filter the mixture again using filter paper, a funnel, and a beaker.
You are left with sugar dissolved in alcohol.
You are left with sugar dissolved in alcohol
Evaporate the alcohol.
Type of Substance Removed
Type of Substance Remaining
Sand, Salt, Sugar, Iron
Sand, Salt, Sugar
Completing the Lab:
Make sure the substances are all separated and collected in individual test tubes.
Bellwork – 10/5/16
What is something you are struggling with?
Bellwork – 10/7/16
What is your favorite genre of music?
Agenda – 10/7/16
Atomic Theory Timeline Foldable
Atomic Theory Timeline Notes
The Atomic Timeline
1. Apply knowledge of the atom’s history to understanding of the atom.
The Atomic Timeline Foldable
Fold the sheet in half and cut the four lines (red lines) to create five flaps.
Label the flaps on the outside with the names of the scientists that contributed to the discovery of the models of the atom (there are only five flaps, so there should only be five scientists)
J. J. Thomson
On the other side of the flaps, draw a diagram of the model of the atom proposed by the scientists.
Opposite of the drawings will be the information regarding the discovery made by the scientists.
On the back of your foldable, compare and contrast the different models of the atom throughout history.
Comparing the Models
Date: 460 BCE
Ideas: All matter is composed of extremely small particles called atoms.
Quote: “We can only break things over and over so many times. When does it end?”
Date: 330 BCE
Ideas: All matter is composed of different elements (fire, earth, air, water).
Quote: “Long ago, the four elements lived together in harmony. Then, everything changed when the atomic theory was proposed.”
Name: John Dalton
Ideas: All elements are composed of atoms. Atoms cannot be subdivided
Quote: “The daft Aristotle reversed the progress of Chemistry by some twenty centuries.”
John Dalton (cont.)
Name: J. J. Thomson
Ideas: Atoms are composed of corpuscles (electrons) distributed in a sea of positive charge.
Quote: “I like Jell-O made with real fruit inside.”
How did Thomson propose the idea of a “positive sea” if his CRT experiment only observed negative particles?
J. J. Thomson (cont.)
Experiment: Interacted with electric discharge of cathode ray tubes to determine the existence of negative particles.
Model: Plum Pudding Model
Name: Ernest Rutherford
Ideas: The protons are located at the center of the atom. The electrons orbit the protons.
Quote: “My teeth gleaming like I’m chewing on gold foil.”
Ernest Rutherford (cont.)
Experiment: Fired positive alpha particles at a thin sheet of gold foil. Discovered that alpha particles would sometimes be deflected.
What would firing positive particles at a sheet of gold foil do?
the plum pudding model vs. the Rutherford model
Name: Niels Bohr
Ideas: The electrons must orbit the protons in specific energy levels. Electrons can jump between orbitals by absorbing energy.
Quote: “For a good clean feeling no matter what.”
Niels Bohr (cont.)
the Rutherford model vs. the Bohr Model
Name: Various Scientists
Date: 1927 (Fifth Solvay Conference)
Ideas: Quantum mechanics suggests that we can never know exactly where electrons are located in the atom. We can only make predictions about which orbitals electrons might be located in.
Quote: “We hold these truths to be self-evident, that all atoms are created equally.”
Various Scientists (cont.)
the Dalton Model vs. the Quantum Model
Bellwork – 10/11/16
What frustrates you?
Agenda – 10/11/16
Analysis of the atom/Bohr Model
Differentiation between neutral atoms, ions, and isotopes
Parent teacher conferences are Thursday 10/13/16
The Bohr Model
1. Apply new information about the atom to prior understanding of subatomic particles
The Bohr Model (Foldable)
Fold your paper into 4 long panels.
Fold the two end panels together so that you have two flaps that can open.
Draw a line in the middle of the front flaps of your book. Cut the line on the front flaps so you have four flaps.
Label the front of the four different panels: the atom, proton, neutron, electron
Label the outer panels/flaps: what I know. Label the inner panels: what I learned.
What I know
What I learned.
Prior Knowledge Exploration:
In the “what I know” sections for the atom, proton, neutron, and electron, write down one or two things that you already know about the concept.
Draw a picture of what you think an atom looks like on the “what I know” panel of the atom.
On the back side of your foldable, label the entire space “The Bohr Model.”
The Bohr Model was introduced by Niels Bohr and Ernest Rutherford to depict the subatomic particles of the atom.
It is one of the most useful models of the atom, providing sufficient and accurate information about the atom in a simplistic illustration.
Different elements on the periodic table have different atomic structures.
This means that the different elements have different number of protons, neutrons, electrons.
What the Bohr Model of an atom looks like
What you might have drawn prior to this class (or possibly Physical Science)
Compare the two models. The left one is a carbon atom, the right one is a lithium atom.
Bohr Models can be depicted in many ways. The most important aspects of the Bohr Model are:
Nucleus is located in the center
Protons and Neutrons are labeled in the nucleus
Electrons orbit around the nucleus in orbitals*
*The electrons must be placed in the orbitals in a specific manner.
The Subatomic Particles:
Proton – the positive subatomic particle of the atom, located in the nucleus.
Number of protons – determines the identity of the atom (what element the atom represents). All atoms have some number of protons. The protons also contribute to the mass of the element.
Reading the Periodic Table:
The Bohr Model (Periodic Table)
Neutron – the neutral subatomic particle of the atom, located in the nucleus.
Number of neutrons – accounts for part of the mass of the atom (along with the number of protons).
Atomic Mass – The atomic mass of an atom/element can be determined by finding the sum of the number of protons and the number of neutrons.
Atomic Number + Number of Neutrons = Atomic Mass
Scientists needed a way to quantify the mass of very small particles, so they developed a standard of mass using elements. They called this the atomic mass unit (amu or u).
They discovered that an atom’s mass is mostly made up of the proton and neutron. Through calculations they found that:
The mass of a proton is equal to 1.007 amu (u)
The mass of a neutron is equal to 1.008 amu (u)
For the sake of simplicity, we will use a value of 1 atomic mass unit for both the mass of protons and neutrons.
Atomic Mass – Atomic Number = Number of Neutrons
If an atom has 5 protons and 7 neutrons, what is its mass?
How many neutrons does the average carbon atom have?
Determining the number of protons, neutrons, electrons, and the atomic mass:
Electron – the negative subatomic particle of the atom that orbits the nucleus in the electron cloud.
Role of electrons – electrons play an important part in chemical bonding and formation of ions.
How can you figure out the number of electrons in a neutral atom if you are given the number of protons? (Think about the charge of the particles and the atom)
Number of electrons – determined by the number of protons (only for neutral atoms).
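For reference, the counting rules above can be written out as a tiny helper (an illustration for checking answers, not part of the original slides):

```python
# Counting rules for a NEUTRAL atom, as described in these slides:
# protons = atomic number, neutrons = mass number - atomic number,
# electrons = protons.

def neutral_atom(atomic_number, mass_number):
    protons = atomic_number
    neutrons = mass_number - atomic_number
    electrons = protons
    return protons, neutrons, electrons

print(neutral_atom(6, 12))    # carbon-12 -> (6, 6, 6)
print(neutral_atom(11, 23))   # sodium-23 -> (11, 12, 11)
```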
The Bohr Model (The Periodic Table)
Complete the top half of the information side on your periodic table.
Identify the element given its Bohr Model:
What is the mass of this element/atom?
Atoms, Ions, and Isotopes Foldable
Fold the sheet of paper into thirds.
Open it back up and cut off one end of the pieces. Staple/tape it to the back or inside center of your foldable.
Tape or Staple here
Divide the long foldable into thirds.
Label the front of the foldable as it looks below.
On the following page, complete the inside of the foldable with the following information.
How to find number of protons, neutrons, electrons
Task 5 (cont.):
Complete the inside of the foldable with the following information.
Draw a Bohr Model of a neutral atom
Give five examples of a neutral atom
Draw a Bohr Model of an isotope
Give five examples of an isotope
Draw a Bohr Model of an ion
Give five examples of an ion
Bellwork – 10/19/16
What is your favorite restaurant?
Agenda – 10/19/16
Isotopes Virtual Lab
Atomic Campaign Preparation
Atoms, Ions, and Isotopes
Can the atom of a specific element gain or lose protons? Why or why not?
What would happen if an atom gains a proton?
We know that atoms cannot gain/lose protons without changing their identity.
Can the atom of a specific element gain or lose neutrons? Why or why not?
It turns out that atoms cannot gain or lose neutrons, aside from radioactive processes.
Gaining or losing neutrons would require a lot of energy!
However, different atoms of an element can exist with various number of neutrons. Atoms of the same element with differing numbers of neutrons are called isotopes.
Many of you may be familiar with the term “isotope,” but were not aware that it is associated with atoms.
Compare these images of atoms.
First, look at how many protons each atom has. (What element is this?)
Then look at how many neutrons each atom has.
What does the number after the elemental name represent? (What does the number 12 after carbon-12 represent?)
To write the name of an isotope:
Write the elemental name
Add a dash and write the mass number (total number of protons and neutrons)
Examples: Carbon-14, Nitrogen-15, Sulfur-33
Naming the isotopes (what element is this?):
The importance of isotopes:
Isotopes are responsible for the atomic mass of elements.
Review: compare the atomic masses of your periodic table with the large one in the room. What differences are there?
The mass (in decimal form) on the Periodic Table represents the AVERAGE of all isotopes.
What is an average?
Math review: The average is the mean number calculated by dividing the sum of a set of numbers by the total number of numbers in the set.
Practice with averaging numbers (cont.):
What is the average test score for the class?
100+100+100+98+100+100+100+100 = 798
798/8 = 99.75
What is the average number of arms that a human would have on earth? Explain how you predicted that number.
Explain why the average atomic mass (on a regular periodic table) is extremely close to a whole number.
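For reference, here is a small sketch of how that average is actually computed: it is a weighted average over the naturally occurring isotopes. Chlorine is used below with approximate abundances, and mass numbers stand in for the exact isotope masses; this example is not part of the original slides.

```python
# Weighted average atomic mass, using approximate natural abundances
# for chlorine and mass numbers in place of exact isotope masses.

isotopes = [
    (35, 0.7577),   # chlorine-35: ~75.77% of natural chlorine
    (37, 0.2423),   # chlorine-37: ~24.23% of natural chlorine
]

average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(round(average_mass, 2))   # about 35.48, close to the 35.45 on the table
```

When one isotope dominates (as with carbon-12), the weighted average lands very close to a whole number.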
All atoms are isotopes, not just atoms that have different numbers of neutrons.
Bellwork – 10/21/16
What is your favorite letter?
Agenda – 10/21/16
Post-Lab Discussion for Isotopes
Things you will need:
For Neutral Atoms (atoms of elements with no charge)
Protons: Atomic Number
Neutrons: Mass Number – Atomic Number
Electrons: Same as Number of Protons
For Isotopes (atoms with varying number of neutrons)
(Do isotopes have a charge?)
All Atoms are Isotopes.
The method for determining the number of subatomic particles for an atom and an isotope is the same.
Remember that isotopes are written with the elemental name followed by the atomic mass of the isotope.
Isotopes of Carbon
Determine the number of protons, neutrons, electrons, and the atomic mass:
Importance of Isotopes
Predict the most common isotope of Phosphorus. Explain your answer.
Predict two common isotopes of Iron and their abundances.
What would happen to an atom if it were to gain an electron? (What would the charge of the atom be?)
Can the atom of a specific element gain or lose electrons? Why or why not?
Atoms that gain or lose electrons are called ions.
Name/Symbol of Ions:
Ions can be written as the elemental symbol with a number following and a plus (+) or a minus (-) sign, depending on whether an atom has gained or lost an electron.
Ions that are positive have lost electrons
Ions that are negative have gained electrons
Certain ions will always gain or lose a specific number of electrons. Copy down these numbers on your periodic table.
For the sake of this class, we will only be looking at main-group elements
Examples of ions:
Na+, Cl-, O2-
How come some ions gain or lose a specific number of electrons?
Main-group elements gain or lose electrons as a result of the octet rule. The octet rule states that main-group elements want to have EIGHT electrons in their outer shell (also known as the valence shell).
Bohr Models of Atoms, Ions, and Isotopes
Compare the number of electrons in different ions. What similarities are there?
You might have noticed that many ions of the main-group elements either have 18 or 10 electrons total
Why is that?
The outer shell of the atom is also called the valence shell, with the electrons being called valence electrons.
The Bohr Model tells us that electrons of atoms are located in specific shells (rings) around the nucleus, and each shell can only hold a specific number of electrons.
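For reference, the shell-filling pattern used in these drawings can be written as a small helper. The 2, 8, 18, 32 capacities follow the 2n² rule and give the expected drawings for the elements asked about below; note that potassium and calcium are exceptions, where the fourth shell starts after 8 electrons in the third. This helper is not part of the original slides.

```python
# Distribute a neutral atom's electrons into shells for a Bohr-model drawing,
# using the 2n**2 capacities 2, 8, 18, 32.  (Potassium and calcium are the
# usual classroom exceptions and are not handled here.)

SHELL_CAPACITIES = [2, 8, 18, 32]

def bohr_shells(electrons):
    shells = []
    for capacity in SHELL_CAPACITIES:
        if electrons <= 0:
            break
        filled = min(capacity, electrons)
        shells.append(filled)
        electrons -= filled
    return shells

print(bohr_shells(7))    # nitrogen -> [2, 5]
print(bohr_shells(13))   # aluminum -> [2, 8, 3]
print(bohr_shells(35))   # bromine  -> [2, 8, 18, 7]
```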
Draw a Bohr Model of a neutral Nitrogen atom with the electrons depicted in their appropriate shells. (What is located in the nucleus?)
What is located in the nucleus? How many of each particle will there be?
Remember the three criteria required for a proper Bohr Model
Draw a Bohr Model of a neutral Magnesium atom with the electrons depicted in their appropriate shells.
Draw a Bohr Model of a neutral Sodium atom with the electrons depicted in their appropriate shells.
Draw a Bohr Model of a neutral Aluminum atom with the electrons depicted in their appropriate shells.
Draw a Bohr Model of a neutral Bromine atom with the electrons depicted in their appropriate shells.
Each table will be given a column of elements. Pick one from the column and draw the Bohr Model for that element.
Ex: Column 17 – Chlorine
Draw a Bohr Model of a neutral Sodium-24 atom with the electrons depicted in their appropriate shells.
Draw a Bohr Model of a neutral Aluminum-36 atom with the electrons depicted in their appropriate shells.
Draw a Bohr Model of a neutral with the electrons depicted in their appropriate shells.
Further Thinking/Additional Practice:
Draw a Bohr Model of Fluorine Ion (F-).
What does the minus symbol (–) on the F signify?
We know that atoms can gain/lose electrons by forming ions.
We know that atoms can exist as isotopes with different numbers of neutrons.
Can an atom be both an isotope and an ion?
Draw a Bohr Model of Aluminum-26 3+.
Complete the Venn Diagram on the following slide:
Things to keep in mind:
Think about how subatomic particles are calculated for each type of atom
Think about the structure of each type of atom
All Atoms Are Isotopes
Equal number of protons and electrons
Overall neutral charge
Mass of neutral atoms/isotopes vary
Subatomic particles are not lost, they exist as is
Atoms that have gained or lost electrons depending on the element
Atom can be positive or negative
Masses of ions of the same element are equal
Subatomic particles are lost and gained
Isotopes can also be Ions
Same number of protons
Bellwork – 10/25/16
What is your favorite type of water?
Agenda – 10/25/16
In previous slides, I spoke about not being able to change the number of protons/neutrons…
Well, it’s not particularly true.
It is possible for atoms to change how many protons/neutrons they have, but it takes a lot of energy. Just remember that changing the number of protons changes the identity of the element.
Since we will be talking about isotopes again, let’s review how specific isotopes of elements are written out as well as learn a new way to indicate elemental isotopes.
Naming an Isotope:
(Elemental name)-(mass number)
Ex. Lithium-12, Chlorine-37, Aluminum-38
Naming an Isotope using Isotope Notation:
Write the isotope notation for:
Carbon-14, Potassium-40, Oxygen-18, Strontium-84
Changes that occur within the nucleus are called nuclear reactions.
These reactions can occur naturally and unnaturally.
Unnatural reactions are artificial/induced, and can be triggered by bombarding atoms of elements with high-energy particles.
Naturally occurring reactions are common and typically occur in the form of nuclear decay.
Nuclear Decay (Radioactive Decay) is when the nucleus breaks down and emits radiation in the form of particles, photons, or both.
Radioactivity describes all forms of nuclear reactions
Types of Nuclear Decay
In Alpha Decay, an atom loses an alpha particle as a result of instability.
An Alpha Particle is also known as a helium nucleus. It is composed of two protons and two neutrons.
Alpha decay is usually restricted to heavier elements (Thorium, Uranium, etc.)
When the atom loses an alpha particle, what happens to the number of protons and neutrons?
The numbers of protons and neutrons are each lowered by two, so the mass number is lowered by a total of four!
In a balanced nuclear equation for alpha decay, the sum of the mass numbers (superscripts) on the right must be equal to the numbers on the left
The same is true for the atomic numbers (subscripts)
Practice Alpha Decay:
Pb → He + Hg
Rn → He + Po
More Practice Alpha Decay:
The element radium was discovered by Marie and Pierre Curie in 1898. One of the isotopes of Radium, Radium-226, decays by alpha emission. What is the resulting element?
Bellwork – 10/27/16
What is the spookiest experience you’ve ever had?
Agenda – 10/27/16
Nuclear Decay + Half-Life Notes
There are three types of Beta Decay:
In Electron Emission, an electron is ejected from the nucleus.
The charge of the nucleus (the atomic number) increases by one (+)
The mass number does not change!
Beta Decay – Electron Emission
But wait, Mr. Chu!
Why does losing an electron cause the atom to change its identity?
Well, you see...
In electron emission, the electron that is emitted is not an electron that is orbiting the atom; it is actually being emitted from the neutron, causing the neutron to turn into a proton!
Again, we see how the neutron of an atom changes when the atom undergoes electron emission.
When representing subatomic particles, the red numbers represent the mass and the blue numbers represent the charge.
Practice Beta Decay – Electron Emission:
Na → e + Mg
Se → e + Br
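For reference, a small helper for checking these practice answers (not part of the original slides): alpha decay lowers the atomic number by 2 and the mass number by 4, while beta (electron) emission raises the atomic number by 1 and leaves the mass number unchanged.

```python
# Check nuclear-decay practice answers.  Z = atomic number, A = mass number.

def alpha_decay(Z, A):
    # lose 2 protons and 2 neutrons (a helium nucleus)
    return Z - 2, A - 4

def beta_decay(Z, A):
    # a neutron becomes a proton, so Z rises by 1 and A is unchanged
    return Z + 1, A

print(alpha_decay(88, 226))   # radium-226 -> (86, 222): radon-222
print(beta_decay(11, 24))     # sodium-24  -> (12, 24):  magnesium-24
```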
Often times, when thinking about Nuclear Decay/Reactions, there is fear in the idea of radioactivity.
Radiation is emitted in the form of tiny subatomic particles and waves that have a high energy level, and it is true that these particles can be damaging.
Of the three types of nuclear decay, which do you think is the most damaging and why? (Think about the size of the particles emitted)
Rate of Nuclear Decay
Now that we understand that nuclear reactions occur (naturally), let’s talk about the rate at which radioactive elements undergo nuclear reactions.
Do all unstable atoms decay at the same rate?
Of course not! This world is not that perfect. Nuclear Reactions are spontaneous.
Scientists calculate the rate of radioactive decay of different isotopes by measuring the time it takes for half of a sample of radioactive atoms to decay.
This is called the Half-Life.
Three important notes about half-life:
Each radioactive isotope has its own half-life
More stable atoms decay slower/have longer half-lives
The amount of time it takes for half the amount of a radioactive isotope to decay is the same regardless of how many atoms you start with.
A = A0 / 2^n
You can use the following equation to calculate how much of an isotope will remain after a given number of half-lives
“A0” is the initial amount. “A” is the final amount.
The variable “n” represents the number of half-lives. It can also be calculated by taking total amount of time divided by the time per half-life.
You have 400 milligrams (mg) of a radioisotope with a half-life of 5 minutes. After 30 minutes, how many half-lives will the radioisotope undergo? How much of the original isotope will be left after 30 minutes?
30/5 = 6 (half-lives); 400/(2^6) = 6.25 milligrams
How much of a 3.5 milligram sample of Nickel-63 will remain after 368 years if the half-life of the radioisotope is 92 years? (How many half-lives are in 368 years?)
368/92 = 4 (half-lives); 3.5/(2^4) = 0.219 milligrams
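For reference, the same calculations written as a small helper using A = A0 / 2^n (not part of the original slides):

```python
# Remaining amount after radioactive decay: A = A0 / 2**n, where
# n = elapsed time / half-life.

def remaining(initial_amount, half_life, elapsed_time):
    n = elapsed_time / half_life
    return initial_amount / (2 ** n)

print(remaining(400, 5, 30))     # 400 mg, 5-minute half-life, 30 min -> 6.25
print(remaining(3.5, 92, 368))   # nickel-63 example -> about 0.22 mg
```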
Half-Life (Reverse Sweep)
A particular isotope has a half-life of 5 days. A particular sample is known to have contained one million atoms when it was put together, but is now observed to have only about 125,000. How long ago was the sample assembled?
# of Half-Lives
# of Atoms
Half-Life (M&M Lab)
Apply understanding of half-life to the spontaneous rate at which M&M’s change (decay).
Become more familiar with half-life calculations.
Keep in mind:
Do not consume M&Ms until all of the M&Ms have been flipped and removed from the containers.
Clean up after yourselves!
M&Ms are good to use if they have visible lettering on them
You don’t need exactly 100; 90-110 is fine.
# of M&Ms unchanged
# of M&Ms Unchanged
# of M&Ms Changed
Half-Life (M&M Lab) Make-Up
Graph the data on your work sheet.
Your Y-Axis should be the # of Half-Lives
Your X-Axis should be the # of M&Ms Unchanged
The information required to answer questions on the Conclusion section can be found in the Background section of your Lab Sheet.
The first and most basic step in troubleshooting problems with your internet service is checking how your modem is set up.
Inspect the cords and cables going to your modem
Start by making sure the modem is plugged in and turned on. The light on the front of the modem labeled POWER should be lit up. You should also check that the other cables or cords are securely inserted at both ends. If you're not sure, then unplug the cord or cable and plug it back in. When you hear it CLICK, you know you've inserted it correctly.
Check all the cords and cables to make sure they are in good and working condition. Look for bends, kinks or cuts. If you find a bad one or if you're not sure, then replace it and see if that solves the problem.
Is everything plugged into the correct port or outlet?
The most basic modem setup looks like this:
- The power cord (or AC Adapter) goes from the wall electrical outlet to the modem outlet labeled POWER.
- A regular phone cord goes from the wall phone jack directly to the modem outlet labeled DSL or LINE (label varies by modem), and in most modems this outlet is color-coded green.
- There is a green phone cord included in the modem kit you received from CenturyLink.
- You don't have to use the green phone cord, any phone cord is sufficient.
- Although it's a phone cord, it doesn't actually go into the modem outlet labeled PHONE.
It's best to connect your computer directly to the modem whenever you're troubleshooting your internet service. This helps rule out wireless connection problems. It also of course adds another connection to your modem set-up.
Common modem setup situations known to cause problems
Make sure you avoid these common setup mistakes that are known to cause problems with an Internet connection.
- Don't use a DSL filter between the wall phone jack and the modem.
The modem already has a built in filter, so it's important that you DON'T use an in-line filter from your telephone wall jack to the modem. This is the only line where you don't want to use a filter. Putting a filter on this line will prevent the data signal from reaching the modem and consequently preventing a connection to the Internet.
- Don't connect the modem's phone cord to phone splitters or phone jacks built into surge protectors.
The phone cord connecting the wall phone jack to the modem DSL or LINE port should be direct without any other type of equipment in between the two.
- Don't connect from the wall phone jack to the modem port labeled PHONE.
The phone cord that runs between the wall jack and the modem should go into the modem port labeled DSL or LINE (label varies by modem). The PHONE port on the modem is intended for connecting to a telephone device.
Internet & Phone: How to properly connect a landline phone through the same wall phone jack as the modem
If you have plain old telephone service through CenturyLink, then you might find you want to also attach a telephone, fax or answering machine to the same phone jack as your modem.
No problem! With the proper setup, you can avoid causing inadvertent problems with your internet connection.
Adding a phone
If you only need to connect one phone device, then the proper setup is to run a phone cord from the modem port labeled PHONE to your telephone, fax machine, or answering machine.
This allows all of the signal to go directly into the modem; the modem then acts as a filter and sends the voice signal through its phone port. It's almost magic!
What about two phone devices?
If you want to connect a phone and another telephone type device, connect a regular phone splitter to the phone port on the modem. Then connect your devices to the splitter. The goal here is to put the splitter at the modem PHONE outlet and not the wall jack. | <urn:uuid:8f1f629a-c82c-4d85-bee1-223e3942535f> | CC-MAIN-2017-04 | http://www.centurylink.com/home/help/repair/modem-and-wifi/troubleshooting-your-modem-starting-with-the-cords.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00479-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925906 | 810 | 2.515625 | 3 |
The OLT (optical line terminal) is the endpoint hardware device/terminal equipment in a passive optical network (PON). It sends Ethernet data to the ONU (optical network unit), initiates and controls the ranging process, and records the ranging information. The OLT allocates bandwidth to the ONU and controls the starting time and the transmission window size of the ONU's upstream data.
The OLT contains a central processing unit (CPU), passive optical network (PON) cards, a gateway router (GWR) and voice gateway (VGW) uplink cards. Each OLT can have a few or many dozens of PON cards. It transmits a downstream data signal to users at 1490 nanometers (nm) and receives the ONT's upstream 1310 nm laser data signal. That signal can serve up to 128 ONTs at a range of up to 12.5 miles by using optical splitters.
- A downstream frame processing means for receiving and churning an asynchronous transfer mode cell to generate a downstream frame, and converting a parallel data of the downstream frame into a serial data thereof.
- A wavelength division multiplexing means for performing an electro/optical conversion of the serial data of the downstream frame and performing a wavelength division multiplexing thereof.
- An upstream frame processing means for extracting data from the wavelength division multiplexing means, searching an overhead field, delineating a slot boundary, and processing a physical layer operations administration and maintenance (PLOAM) cell and a divided slot separately.
- A control signal generation means for performing a media access control (MAC) protocol and generating variables and timing signals used for the downstream frame processing means and the upstream frame processing means.
- A control means for controlling the downstream frame processing means and the upstream frame processing means by using the variables and the timing signals from the control signal generation means. | <urn:uuid:8cf33948-7c6a-4479-9cfd-c8fe93c3d65a> | CC-MAIN-2017-04 | http://www.fs.com/blog/olt-optical-line-terminal.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.864469 | 373 | 2.53125 | 3 |
What is Business Intelligence all about?
- 21st September 2016
- Posted by: Juan van Niekerk
- Category: Technology
Business intelligence is often confused with business analysis, as both involve the analysis of certain components of business in order to make better decisions regarding the running of an organisation. They are, however, two very different processes.
The best way to differentiate between the two is to consider business intelligence as looking into past performance of an organisation, whereas business analysis seeks to determine future trends and needs of a business. Business intelligence is therefore reactive, whereas business analysis is proactive. Both are, however, used in order to plan for the future needs of the organisation.
Business intelligence is the process of analysing raw data (usually in vast quantities) in order to make informed and effective decisions concerning business needs. The technologies used in business intelligence have the ability to handle very large amounts of data (big data) and aids in the identification of new business opportunities.
The technologies utilised in business intelligence offer the following functions:
Analytics – finding patterns in data and interpreting them in order to improve business performance.
Reporting – providing regular reports to Corporate Executives and Business Managers to aid in decision making.
Online analytical processing (OLAP) – performing multidimensional data analysis to support analysis of business trends and data modelling.
Data mining – Extracting information from data and converting it into a comprehensible structure.
Benchmarking – measuring the performance of an organisation to that of competitors.
Process mining – Utilising events logs for analysis of business processes.
Complex event processing – identifying meaningful events that may have an impact in the future of the business.
Predictive analytics – Using current events and events from the past in order to make predictions about possible future events.
Business performance management – measuring the performance of an area of an organisation against a predetermined goal.
Text mining – extracting meaningful data from text documents.
Prescriptive analytics – Using the analysis of data to suggest options to be considered when making business decisions.
Once business intelligence has been successfully applied, the organisation will be able to use the information that has been sourced in multiple ways. If a gap in the market has been identified, a business will have a head start in supplying where there is a demand.
Information can also be used to find areas where costs can be cut. This can have a real impact on revenue, as unnecessary expenditure can be avoided and those funds rerouted to other areas where there may have been a lack of funding or resources.
Another competitive advantage gained from business intelligence is a quick response to market trends. Focussing on the habits of customers and clients will give the organisation a unique insight into adjustments that need to be made in order to cater to their target market, giving a company an edge over their competitors.
Having access to accurate and frequent financial, vendor, operational and customer reports when necessary is another plus point. Where reports need to be manually compiled, or where a unique report needs to be created, the necessary data is easily accessible and ready to be used almost effortlessly.
The most common benefit of business intelligence is the chance to gain insight into how operations can be streamlined. Problem areas are identified and can, therefore, be addressed as is necessary. This will have a positive impact on efficiency and, therefore, overall performance.
Business intelligence is an effective way to gain insight into your organisation’s past, learning from that information and applying it to the factors that need to be altered in order to make a real impact on future business. There is much to be gleaned from where your company has been, in order to steer it towards the path that it needs to follow. | <urn:uuid:8620c400-96a5-4d40-95f4-6873c0de5b6a> | CC-MAIN-2017-04 | https://www.itonlinelearning.com/blog/what-is-business-intelligence-all-about/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00295-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942189 | 758 | 2.59375 | 3 |
Martin Cooper, a former Motorola engineer, inventor and executive, made the first cellular mobile phone call 40 years ago this week, on April 3, 1973.
That was revolutionary enough, but the modern age of the cell phone really took off six years ago when the iPhone was first launched in June 2007. We had smartphones before then, but with the iPhone we suddenly had rich access to the Web and two-way data in apps and elsewhere as never before.
The innovations of the last two years in Augmented Reality (AR) and Location-Based Services (LBS) are a great example of how mobile software is rapidly accelerating on top of a base of wireless networks and hardware.
The speed of innovation is expanding so rapidly with wireless and mobile devices that Tom Soderstrom, chief technology officer for NASA Jet Propulsion Laboratory says a mobile decade is now just three years, not 10.
That means the iPhone is already two decades old and that my own functional age is closer to 20 decades—200 years! We are definitely seeing more innovation in our lifetimes than previous generations.
Soderstrom showed off a cool AR app in the AppStore called Spacecraft 3D at a recent Premier 100 conference. With it, he was able to use his iPhone focused on a special AR target to produce a 3D image of Voyager and 10 other spacecraft.
Last week, Sujai Hajela, vice president and general manager of wireless networking at Cisco talked with me about the rapid expansion of mobile connections globally that should grow by about 13 times by 2017.
Cisco has worked with chips from cellular radio chip maker Qualcomm to bring more intelligence to the wireless network through Cisco’s Network Mobility Service Engine, Hajela said.
Location analytics a few years ago were primarily based on GPS, which uses a radio in a smartphone to pick up satellite signals and fix your phone's location to within about 3 meters to 5 meters. By using triangulation with cell towers and even Wi-Fi access points and other methods, that accuracy can be tightened to about 1 meter with hyper-location technologies, Hajela said. When compared with a store's physical layout in a database, that location information becomes powerful.
Some of the other location methods available are based on software interpretations of things we might never imagine. In one example, the accelerometer in a smartphone can be monitored to determine what angle the smartphone is being held, with that information handed to the network for comparison with other data. The angle of a signal that a cell tower receives can also be used for tracking. Where we were positioned a minute ago can be used to predict our pathway, Hajela said.
All of that information can become important for a person walking through a store, shopping for items on a shelf. A retailer can compare our location data, if we allow it, to our prior buying behaviors. Conceivably, a savvy retailer can send us data to help us make better decisions on a product, or to offer tips for how to operate or use a product, or where to go in the store to buy it or find accessories for it.
“There are multiple systems and multiple things being collated,” Hajela said. The potential for using prior data with real-time location information is still not being realized, although it will also be important for industrial uses, as a worker in a large manufacturing facility or within the bowels of a ship or plane finds and repairs a broken part.
The use of wireless networks to help monitor and track medical device sensors is still untapped. We already know about wireless capsules that can send data to a smartphone as the medicine they contain is dispersed through the body. The Internet of Things is expanding to sensors in cars, in houses, in planes and wind turbines, almost all of it connected with a wireless link at some point in the network.
At the pace of late, Soderstrom might need to collapse his mobile decade to just two years, if not 18 months. Did Marty Cooper and his colleagues have any idea in the 1970s how many doors they would open with the cell phone? | <urn:uuid:a2aa87d7-16b0-4a68-bc55-bf2b1b284af9> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474955/mobile-wireless/it-s-been-40-years-since-the-first-cell-phone-call.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962529 | 838 | 2.859375 | 3 |
We have three major issues that we need to deal with in order to successfully use routers within a WAN provider’s cloud:
- Multiple routing tables in RAM
- Excessive latency
- Address-space collisions
Let’s assume that we have a WAN provider with two customers, “A” and “B”, each with three sites, as shown:
Now, let’s say that an IPv4 data packet enters the provider cloud from customer site A1, headed for site A3. When it leaves CE-A1, the packet is encapsulated within some standard Layer-2 frame type. Among others, this might be:
- Ethernet
- Frame Relay
An example of an Ethernet frame encapsulating an IPv4 packet appears as the “Unlabeled Packet” below:
As the packet is processed by the PE1 router, the frame header and trailer encapsulating the incoming packet will be stripped, and after the IP header manipulations are complete, the packet will be encapsulated into a new frame, and then forwarded towards CE-A3 via either P1 or P2.
Now, imagine that the packet that is sent from PE1 towards the P router is not a normal packet. Instead, the PE “pushes” a label onto the front of the packet, and then encapsulates the modified packet into the outbound frame. The label will then appear within the frame between the Layer-2 and Layer-3 headers (for IP over Ethernet, this is just after the “EtherType” field, and just before the IP header). When the PE pushes the label, it also changes the EtherType from “0x0800” (unlabeled IP) to “0x8847” (labeled IP). An example appears as the “Labeled Packet” in Figure 2. The process of “pushing” the label is also referred to as “inserting” or “imposing” a label.
Note how the label appears between the “Type” field and the IP header. Upon receiving a labeled packet, a P router does a lookup in its label table, which tells it how to forward the packet. A label table looks something like this:
Inbound Label | Outbound Label | Outbound Interface
As the packet traverses the WAN core, the P routers perform label “swaps” (reminiscent of what occurs with Frame Relay DLCIs), and when the packet reaches the far side, the PE “pops” (removes) the label, sets the EtherType back to “0x0800”, and sends the unlabeled packet to the CE.
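To make the swap step concrete, here is a toy illustration in Python. The table contents are invented for illustration, and a real LSR does this lookup in hardware, but the logic is the same: look up the inbound label, rewrite it, and forward out the listed interface.

```python
# Toy model of an MPLS label swap.  The entries mirror the table layout
# above (inbound label -> outbound label, outbound interface); the actual
# values are invented for illustration.

label_table = {
    17: (22, "ge-0/0/1"),
    22: (31, "ge-0/0/2"),
}

def swap(inbound_label):
    outbound_label, interface = label_table[inbound_label]
    return outbound_label, interface

print(swap(17))   # -> (22, 'ge-0/0/1')
```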
By the way, the sequence of routers and labels used for a particular path is referred to as a “LSP” (Label-Switched Path), and in general the LSP going between the sites in the reverse direction does not use the same label values. In fact, unlike a Frame Relay PVC, with MPLS there is no requirement that the same physical path be used in both directions. In other words, an MPLS LSP is unidirectional, whereas a Frame Relay PVC is bidirectional.
You might be wondering how the PE and P routers know which label values to use when doing a “push” or a “swap”. There are three protocols that can be used to advertise LSP labels (TDP, LDP and RSVP), and we’ll discuss them later.
The PE routers only need to know the routes for customers to which they are directly attached, and which labels to push. In the case of PE2, for example, that’s A, but not B. The P routers do not need to know *any* customer routes, for any protocol, because they’re making all of their forwarding decisions based on labels, not Layer-3 addresses.
Congratulations … we’re now doing MPLS, or “Multi-Protocol Label Switching”! It gets its name from the fact that the P routers are doing “Label Switching”, and therefore don’t care about the “Multi-Protocols” used by the customer (and thus it should support any routed protocols).
Now that we have an idea of how MPLS works, we can define some additional terms. We know that a “CE” (Customer Edge) router is located at a customer site, and thus is CPE (Customer Premises Equipment). A CE generally deals with unlabeled packets, sending them to, and receiving them from, a “PE” (Provider Edge) router.
A PE is located at one of the provider’s POPs (Points of Presence). A PE pushes labels onto packets it receives from a CE, and then forwards the packets to a P (Provider) router. PE routers also pop labels from packets received from “P” (Provider) routers before forwarding the packets to a CE.
The P routers are located within the core of the provider’s cloud, and primarily do label swaps. To denote the fact that P routers are doing label swaps, and not routing table lookups, a P router is sometimes referred to as a LSR (Label Switch Router). Likewise, a PE can be called an “Edge LSR”, or “LER” (Label Edge Router).
Here’s a summary of the terminology when it comes to the provider routers involved with MPLS:
- PE = POP = Edge LSR = LER, they “push” and “pop” labels
- P = LSR = Core, they “swap” labels
Next time, we’ll discuss MPLS in more detail, and see how it solves our three problems.
Author: Al Friebe | <urn:uuid:cca1a8f6-cc29-4357-b4db-de2520c195d5> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/05/11/mpls-part-6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92171 | 1,253 | 2.578125 | 3 |
Stretching the Limits
For one particular type of test, which measures the rate of absorption by the human body of radiation from things such as mobile phones, the FCC has just a single engineer capable of doing the work. While both his colleagues and industry praise his skills, the volume of gear entering the market is overwhelming.

All that is for testing something that has no definite risk. Neither the FCC engineers nor other scientific studies have proven any link between the radiation and medical problems. But the political side of the agency made the test a requirement based on a 1995 Environmental Protection Agency report.

Compare that to the mobile van the FCC labs use to test digital TV receivers, including PC cards, by driving throughout the Washington, D.C., and Baltimore area. The equipment in the van was cobbled together with castoffs from other parts of the labs, and installed into a rusty, 1985-vintage cargo van purloined from the agency's enforcement division. Including the one cutting-edge measurement system purchased specifically for the van, the total cost was about $75,000. That's less than 10 percent of what a TV network might spend on the same type of vehicle and equipment.

"Ninety percent of our equipment is 10 years old or older," Nichols said. "The biggest problem is that they don't have the new functions in them."

Inside the main lab building, the most advanced measurement equipment owned by the FCC dates back to the early 1980s. Much of it is top-quality Hewlett Packard gear--at least it was, at that time. Those instruments were built before the invention of Code Division Multiple Access, one of the most widely used digital transmission standards for mobile phones. They were also built before transmission on frequencies above 2 gigahertz or so was anticipated. As a result, the lab has to rely on borrowed gear or outside tests.

In fact, the newest piece of test equipment on the site, an oddly shaped, elongated aluminum box designed to permit highly calibrated testing indoors, is being borrowed from the private sector. While it has the potential to speed up testing, because it isn't subject to the weather delays of the calibrated outdoor test field, it could be taken away from the labs at any time. And the FCC doesn't have the $150,000 to build or buy its own.

"All these new devices [on the market] require new equipment to do measurements," Franca said. "We're seeing equipment going into higher and higher frequencies. We don't have all the capability to measure at all those high frequencies." And the few machines the lab does own that can measure at frequencies above 2 GHz are difficult to calibrate--again, because of a lack of proper equipment.
Still, he is one of the lucky ones. The equipment he uses for so-called Specific Absorption Rate tests is one of the few state-of-the-art setups at the FCC's suburban Maryland laboratories, where most of the best gear is only slightly younger than the 1970s-era building it occupies. The SAR setup, a $200,000 installation with a precision industrial robotic arm, highly sensitive antenna, computers and a hollow "body" cavity to mimic humans, was purchased on a one-time budget grant. | <urn:uuid:749110d2-a6a8-4881-a3bb-4546dacb82a2> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Messaging-and-Collaboration/Take-a-Number/3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00313-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968362 | 661 | 2.75 | 3
by Julie Zellman, Marketing Communications Coordinator, LifeSize
The Virtual Researcher On Call program, also known as VROC, is an educational initiative under Partners in Research, a charity organization based in Canada. The mission of Partners in Research is to help the children of today become the leaders of tomorrow, especially in the fields of math and science. VROC itself contributes to that vision by connecting students with experts in those particular subject areas.
Soon after the program began, the leaders at VROC realized that video conferencing could play a beneficial role in this endeavor by allowing students to meet these experts face to face rather than over an audio call. They quickly began researching MSN, Skype and standards-based H.323 equipment. Though video conferencing equipment was being used in schools across Ontario, it was typically used for administrative purposes, not in the classroom.
“Our challenge in the early days was to convince schools to bring the equipment to the classroom and give educational value to it,” said Kevin Cougler, national program manager, VROC.
Though the program worked well in the beginning, it soon became apparent that dedicated video endpoints would be too expensive to implement in all 150 schools in the district. VROC needed an alternative that could work within the available bandwidth and use the webcams and audio systems that partner schools already had in place on their desktop and laptop computers.
VROC turned to LifeSize® ClearSea™, which provided all of the features it needed. Not only does LifeSize ClearSea work on Macs and PCs, it also connects to Android and iOS tablets, smartphones and any other existing standards-based SIP/H.323 video system. Beyond that, it is extremely flexible and can operate on LAN, Wi-Fi and 3G/4G mobile networks.
After implementing LifeSize ClearSea, VROC was able to produce a new series of podcasts entitled “This Week in Technology and Education,” “This Week in Science and Education” and “This Week in Engineering and Education.” The series allows students to interact with the featured specialists in real time, face to face. The videos are also available on demand for later viewing.
“ClearSea plays an important role in the production of our podcasts, as we invite experts that might be anywhere from their laboratory to their homes,” says Cougler.
VROC has held sessions on hundreds of different topics in schools across the country. On-demand videos have been downloaded or viewed more than 10,000 times.
To learn more about VROC and how they use LifeSize ClearSea to connect students and experts, read Canadian Education Initiative Utilizes ClearSea to Enhance Student Experiences Through Expert Podcasts. | <urn:uuid:30030f92-3d4d-4d02-8a09-a8161895179b> | CC-MAIN-2017-04 | http://www.lifesize.com/video-conferencing-blog/math-and-science-students-learn-from-experts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00249-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963332 | 571 | 2.65625 | 3 |
Spam continues to plague computer users, with Sophos research revealing that 92.3 percent of all email was spam during the first quarter of 2008. Millions of new messages are analyzed automatically by Sophos each day, and these are used to refine and update existing spam rules. Sophos currently detects over 99 percent of all spam.
Sophos finds a new spam-related webpage on average every 3 seconds – 23,300 each day. This calculation includes pages registered on “freeweb” sites, such as Blogspot, Geocities, etc. Sophos predicts this number will keep rising as long as the pages’ authors keep making money from such ruses. By ensuring that spam messages are quarantined and not delivered to the recipient, businesses can not only save time and money, they can also help protect their users from emails linking to infected sites.
In an attempt to defeat sender reputation-based filters, spammers who relied heavily on botnets are trying to abuse free webmail services, such as Hotmail, AOL AIM and Gmail. A recent and notable spam campaign using this technique was “Canadian Farmacy”; some of its campaigns were sent exclusively from webmail accounts. Experts believe the rise in webmail spam may be related to spammers having bypassed CAPTCHA techniques – a challenge-response test used to determine that the user is human.
The Dirty Dozen chart shows that the US has decreased its contribution to the spam problem, relaying only 15 percent of spam, compared to one fifth in 2007.
Sophos experts are also monitoring a large number of Chinese domains that are being promoted by spam campaigns. Interestingly, there is a 2008 promotion inviting people to register .CN domains for a mere 1 Yuan (USD 14 cents). Such a low cost is attractive to spammers, as they can register hundreds of new domains and rotate them every few minutes during a spam run in order to bypass spam filters that use URL blocklists. | <urn:uuid:2bcf6215-6d74-4903-8dde-105b4d292e24> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2008/04/23/latest-spam-statistics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962225 | 403 | 2.609375 | 3
Let’s take a look at big data. Corporations have discovered that there is a lot more data out there than they had ever imagined. There are log tapes, emails and tweets. There are registration records, phone records and TV log records. There are images and medical images. In short, there is an amazing amount of data.
Back in the good old days, there was just plain old transaction data. Bank teller machines. Airline reservation data. Point of sale records. We didn’t know how good we had it in those days. Why, back in the good old days, a designer could create a data model and expect the data to fit reasonably well into the data model. Or the designer could define a record type to the database management system. The system would capture and store huge numbers of records that had the same structure. The only thing that was different was the content of the records.
Ah, the good old days – where there was at least a semblance of order when it came to managing and understanding data.
Take a look at the world now. There just is no structure to some of the big data types. Or if there is an order, it is well hidden. Really messing things up is the fact that much of big data is in the form of text. And text defies structure. Trying to put text into a standard database management system is like trying to put a really square peg into a really round hole.
Enter Hadoop. With a linear structuring of data and an ability to store very large amounts of data, Hadoop is the answer for big data. Or so we are told. With Hadoop we can store text to our hearts’ content. And that solves the problem.
Or does it solve the problem? Certainly one issue with handling text is the sheer physical volume required to store it. Another issue is that text is extraordinarily irregular. Hadoop addresses both of those.
Hadoop works until we look further into what is needed for truly understanding and managing text. It turns out that there are many facets to the ability to store and manage text. What about understanding text? Does Hadoop even come close to addressing the issues of understanding text? Let’s look at some really simple issues that relate to text:
- Date standardization. We have ten documents stored in Hadoop. One document has the value: Dec 6, 2011. Another document has the value: 2011/12-06. Another document has the value: sixth of December in the year 2011. Another document has the value: seis de diciembre de dos mil once (the same date, written out in Spanish). Does Hadoop have any problem understanding and comparing these values?
- Terminology. One document has “fractured tibia” and another document has the value “disarticulated ulna.” Does Hadoop understand that these documents are both talking about bone injuries?
- Shorthand. Does Hadoop understand that “U B-F W H Inmon flt 367 DIA-LAX 2011/06/13” really means that Bill Inmon has been upgraded from business class to first class on flight 367 from Denver to Los Angeles on June 13, 2011?
The answer to these questions is that under the best of circumstances, Hadoop addresses only SOME of the issues of reading and handling text. There is another entirely different level of data management that is needed in order to claim that Hadoop “manages” text.
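As a concrete illustration of that missing layer, here is a minimal, hypothetical sketch (not from any particular product, and not from this article) of the kind of normalization logic a raw store does not provide: it maps a few of the date formats from the example above onto one canonical value so they can be compared. The function and the small month table are illustrative assumptions only.

```python
import re
from datetime import date

# Tiny month-name table; a real normalizer would cover many languages and forms.
MONTHS = {"december": 12, "dec": 12, "diciembre": 12}

def normalize_date(text):
    """Map a few known date formats onto a canonical datetime.date."""
    t = text.lower()
    # Numeric forms such as "2011/12-06" or "2011-12-06"
    m = re.search(r"(\d{4})[/-](\d{1,2})[/-](\d{1,2})", t)
    if m:
        return date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
    # Forms such as "Dec 6, 2011" that contain a recognizable month name
    for name, number in MONTHS.items():
        if name in t:
            year = re.search(r"\b(\d{4})\b", t)
            day = re.search(r"\b(\d{1,2})\b", t.replace(year.group(1), "", 1)) if year else None
            if year and day:
                return date(int(year.group(1)), number, int(day.group(1)))
    return None  # fully spelled-out dates would need a much richer parser

print(normalize_date("Dec 6, 2011"))   # 2011-12-06
print(normalize_date("2011/12-06"))    # 2011-12-06
print(normalize_date("sixth of December in the year 2011"))  # None -- still unresolved
```

Even this toy example makes the point: comparing dates across documents requires logic that lives above the storage layer, and Hadoop by itself supplies none of it.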
So let’s be clear about this. Hadoop is a storage mechanism – an infrastructure – not a solution.
Recent articles by Bill Inmon | <urn:uuid:dbb3ca18-dffb-4923-a061-b40b609952ee> | CC-MAIN-2017-04 | http://www.b-eye-network.com/view/15516 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00001-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952119 | 767 | 2.875 | 3 |
Gigabit Wi-Fi, or 802.11ac, has been slowly appearing in homes, buildings and public hotspots, but according to researchers at OpenSignal, gigabit speeds are not reaching smartphones, and wiring is the culprit. Because of how wireless technology works, and because of the performance of the wired networks that Wi-Fi depends on, what actually reaches smartphones falls short. The researchers found that 802.11ac, the latest and greatest in Wi-Fi capabilities, gives users an average of 32.4Mbps, more than double the speed of anything else on the market. Still, that is a great deal slower than the 400Mbps expected from newer versions that are not yet on the market.
The researchers wanted to know what accounted for the holdup, since there is a large gap between 32.4Mbps and 400Mbps. What they found is that the built-in limitations of wireless are partly to blame, but the speed of the wire plugged into the access point is the bigger cause of slow speeds. As technology moves forward, Wi-Fi has surpassed wired networks in many places. The wired networks still remain, however, and if the data ultimately travels over 25Mbps cable broadband, that is basically as much as any user is going to get.
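As a rough illustration of that bottleneck (the figures below are hypothetical, not OpenSignal measurements), end-to-end throughput is capped by the slowest hop in the chain, so a fast Wi-Fi link cannot outrun the wired connection behind the access point.

```python
# End-to-end throughput is limited by the slowest link in the path.
def effective_throughput_mbps(*link_speeds_mbps):
    return min(link_speeds_mbps)

# Hypothetical path: gigabit-class 802.11ac link fed by 25Mbps cable broadband.
print(effective_throughput_mbps(400, 25))  # -> 25
```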
Ethernet is attempting to keep up as 802.11ac gets faster, with new LAN interfaces that can run at 5Gbps. At the moment the pickings are slim, and getting true 802.11ac performance on a smartphone is not the norm: it takes both a capable handset and a fast wired network behind the access point. Most of the time when a smartphone is using Wi-Fi, it is on the older, slower 802.11n. Users in the United States were on 802.11ac only 7.9 percent of the time.
If you would like to educate yourself in more detail about the information presented in this blog post please visit: Newer Wi-Fi’s faster, but it needs a fast wire behind it | <urn:uuid:dcda2090-4129-4dc2-a9a9-0ced6a4fe42f> | CC-MAIN-2017-04 | http://www.bvainc.com/newer-wi-fis-faster-needs-fast-wire-behind/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967246 | 419 | 2.96875 | 3 |
If you’ve been in information security for a while, you’ve probably heard phrases like the following many times:
These logs are full of incidents that haven’t been reported!
How many events make an incident?
I just got an event for the alert…
< cringe />
There is deep confusion—even among those in the field—about what constitutes an event, an alert, and an incident. Here’s a basic breakdown:
- An event is an observed change to the normal behavior of a system, environment, process, workflow or person. Examples: router ACLs were updated, firewall policy was pushed.
- An alert is a notification that a particular event (or series of events) has occurred, which is sent to responsible parties for the purpose of spawning action. Examples: the events above sent to on-call personnel.
- An incident is a human-caused, malicious event that leads to (or may lead to) a significant disruption of business. Examples: attacker posts company credentials online, attacker steals customer credit card database, worm spreading through network.*
[ NOTE: All incidents are events, but not all events are incidents. ]
If you had to capture it in one sentence, I’d go with this:
Events are captured changes in the environment, alerts are notifications that specific events took place, and incidents are special events that are 1) caused maliciously by a human, and 2) disrupt (or may disrupt) the business in a significant way.
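As an illustration only (a hypothetical sketch, not code from this post), the relationships can be modeled roughly like this; the class and function names are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    """An observed change to the normal behavior of a system or environment."""
    description: str
    observed_at: datetime
    human_caused: bool = False
    malicious: bool = False
    may_disrupt_business: bool = False

@dataclass
class Alert:
    """A notification about one or more events, sent to spawn action."""
    events: list
    notify: str  # e.g. the on-call rotation

def is_incident(event):
    """All incidents are events, but only human-caused, malicious,
    potentially disruptive events qualify as incidents."""
    return event.human_caused and event.malicious and event.may_disrupt_business

acl_change = Event("Router ACLs were updated", datetime.now(), human_caused=True)
credential_leak = Event("Attacker posts company credentials online", datetime.now(),
                        human_caused=True, malicious=True, may_disrupt_business=True)
print(is_incident(acl_change))       # False: an event, but not an incident
print(is_incident(credential_leak))  # True: this one warrants incident response
```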
Hope this helps.
[ CREATED: January 25, 2015 ]
- It is possible to define “incident” in a number of ways depending on the organization, but it will always be a special type of event that requires an organized and timely response.
- Many would-be incidents are either human-caused but non-malicious, or human-caused and malicious but never become an issue; unless an event is both human/malicious and (potentially) disruptive to the business, it usually isn’t handled by the information security department. E.g., an earthquake, an HR update.
- There is some debate on whether to call something an event if it was not captured. I’m in the camp that says you don’t, which is why I defined it as an *observed* change.
- “Disruption of business” doesn’t just mean that the business is unable to function; it could also mean that those running the business have completely lost their sanity and are demanding answers. | <urn:uuid:e93819a7-d090-442c-93e8-3a7cb25563ac> | CC-MAIN-2017-04 | https://danielmiessler.com/study/event-alert-incident/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964266 | 516 | 2.890625 | 3 |