Do you remember when high schools taught woodshop? Your kids won't. Technology education based on current, industry-driven curricula is rapidly replacing traditional industrial arts in public schools, and one big reason for the shift is a growing worldwide deficit of technology workers. The U.S. Department of Labor predicts that the global economy will be short 15 million technical workers by 2020.

Confronting the Shortage

Project Lead The Way (PLTW), a nonprofit organization, offers high schools free, advanced technology and engineering education curricula to combat a forecasted shortage of workers in these industries. The program started in 12 New York state high schools in the 1997/1998 school year, and is currently used in more than 1,300 schools in 45 states and the District of Columbia. Patrick Leaveck, regional Midwest director of PLTW, said industrial arts lost educational priority because they didn't directly impact the global economy or the domestic shortage of technology workers. "Taking [traditional] industrial arts courses will not solve that problem, so we have to have courses that are both rigorous and relevant," Leaveck said, adding that PLTW courses count as college credits.

Gaining an Edge

East Senior High School, a small, 900-student school in Mankato, Minn., launched PLTW this academic year, and school officials appreciate the edge the program has given the Technology Education Department, said Mark Seiler, the department's chairman. "We never really had the curriculum that Project Lead the Way has developed, which is phenomenal," Seiler said. PLTW introduces students to the scope, rigor and discipline of engineering and engineering technology through a four-year sequence of high-school courses prior to entering college. The program offers focus in: engineering design; digital electronics; principles of engineering; computer integrated manufacturing; and civil engineering and architecture. "There is a packaged curriculum that you agree to use if you adopt the Project Lead the Way model in your school," said Barb Embacher, career education coordinator for Mankato Area Public Schools. "You send your teachers away to a really intensive program training in the summer for two weeks. They work from 7 a.m. until midnight every day for two weeks to learn how to teach [a] course." The program updates courses at least once every two years, having them evaluated by technology professors and CEOs who advise the organization on the skills industry will require of future job applicants. "Our American kids are getting behind some of the industrialized upcoming nations like China and India," Embacher said. PLTW aligns all its curricula with math and science standards set by the International Technology Education Association, and offers additional training for teachers throughout the school year. "Teachers have access to our Internet site, where they can download lessons to review, taught by a master teacher in video format," Leaveck said, adding that PLTW requires career counselors to be trained in how to pitch technology opportunities to students, because counselors typically have little technology knowledge. Seiler said the challenge is not just getting students to enter a college technology or engineering program, but directing them toward specializations tailored to their strengths. He said technology and engineering schools have high attrition rates because students often discover they aren't prepared for the intense math involved.
He said the PLTW courses channel students to engineering technology-related fields better suited for them. "We need to channel that student to make sure they make the right decision, and say, 'OK, I don't need to be the engineer -- I want to be the design technician that does all the drawing,'" Seiler said. East Senior High School plans to integrate its PLTW Computer Integrated Manufacturing as well as its engineering, designing and drafting courses with labs at Minnesota State University, Mankato and neighboring Rasmussen Community College.

A Touch of Reality

East Senior High School started this school year with the PLTW's first-level course on the engineering design process -- teaching students how a concept goes from a designer's imagination to reality. "[Students] just got done working on a group problem where they had to take a child's toy and modify it to enhance it or make it better -- reverse engineer it, take it all apart, draw all the parts, enhance it with the new parts, [and then] put it all together," Seiler said, adding that they had to present their ideas as practice for executive presentations, a component still too rare in classrooms. He said students are excited about the program because it takes a hands-on approach to applying their math and science skills to real life. "Our kids today are very visual -- they love the games, they love to play on their PlayStations and their Xboxes," Seiler said. "They need more than memorization and regurgitation. They want to do things, they want to be active." Some schools assign math and science teachers to teach the PLTW curricula, which Seiler discourages. "Math people are trained to teach mathematics in a classroom. They've never had to apply those concepts to a real-world situation," he said. "Tech-ed teachers -- we've been trained to apply the math and the science into the world of work." Seiler predicted that 400 to 450 students would participate in East Senior High School's technology education program every year -- roughly half the school. Sadly, Seiler said, few will be girls -- the carpentry and metal work activities typically associated with industrial arts classrooms deter most females from trying the classes. Attracting females to technology education classes is a struggle for PLTW officials, who seek 25 percent female enrollment in every school's tech-ed program. It is also a goal Seiler has pursued, so far unsuccessfully, in his own teacher hiring. "We haven't been able to successfully recruit a female [tech-ed teacher] yet, but [whenever] we have an opportunity to, we try -- there's not a lot out there," Seiler said. Leaveck said PLTW keeps its curricula gender-neutral, but encourages projects likely to interest females, such as design-related tasks for cosmetics and similar industries of interest. "That's one of the things about math and science, especially among female students -- if they don't see how it benefits the world they live in, they're not that interested in it," Leaveck said. The organization's marketing campaign, with brochures entitled "Smart Careers for Smart Girls," is directed toward school counselors, parents and female students. "We're doing better in tracking females, and the numbers are coming up slowly each year as kids talk to other kids," Leaveck said. Leaveck added that PLTW has a similar initiative to recruit minority students at 20 percent of their population level in every state, and most states are meeting that goal.

Paying for Progress

The PLTW curriculum itself is free.
Implementation, however, is expensive. "You need robotics equipment -- you need really up-to-date, fast computers with high-tech software packages on them, so [students] are using the same kind of software that the engineers are using out in the industry," Embacher said, adding that sending teachers away to be trained also racked up costs. She said Mankato's program would cost roughly $100,000 over a five-year rollout. The Kern Foundation, a Wisconsin-based family organization, donated $66,000, and the remainder will be funded with additional community donations and school district funds, said Embacher. The PLTW gives schools a list of required supplies and another of recommended vendors. "We say, 'Cross off everything you have, [and] cross off everything you can substitute with what you have -- as long as it will make the curriculum go, we don't care,'" Leaveck said. "Borrow stuff from the community college -- share stuff with a neighboring district." East Senior High School will soon implement the PLTW's middle-school outreach program called Gateway to Technology, which will put all middle-school students through an eight-week PLTW course, Embacher said. "That's when we can get them excited, give them some fun activities to do, and show them the future of technology careers they can pursue and why it would be an advantage to take high-school classes in this area."
The 2012 observance of the International Day of Commemoration in memory of the victims of the Holocaust will focus on the theme “Children and the Holocaust”. The United Nations will remember the one-and-a-half million Jewish children who perished in the Holocaust, together with the thousands of Roma and Sinti children, the disabled and others, who suffered and died at the hands of the Nazis and their collaborators. Some children managed to survive in hiding, others fled to safe havens before it was too late, while many others suffered medical experiments or were sent to the gas chambers immediately upon arriving at the death camps. Highlighting the impact of mass violence on children, this theme has important implications for the 21st century. (United Nations Official Site) Visit “The Holocaust and the United Nations Outreach Programme” here: http://www.un.org/en/holocaustremembrance/2012/calendar2012.html. While January 27, 2012 is “International Day of Commemoration in Memory of the Victims of the Holocaust”, there are events running starting today.
ENUM has a critical role to play in telephony services convergence. Although many carriers are adopting ENUM, there are myths swirling around it that confuse newcomers. In data networks, the domain name system (DNS) is responsible for converting the host names in Uniform Resource Locators (URLs) to IP addresses in order to route data traffic. The ENUM protocol performs a similar essential function of linking E.164 telephone numbers to Universal Resource Identifiers (URIs) — enabling communication services to use traditional phone numbers to set up calls over IP networks. Unfortunately, there's a good deal of hype and confusion around ENUM, which might lead carriers to delay ENUM implementations. That delay would be a mistake — ENUM has real value to offer today in VoIP and other environments. So I'll attempt to dispel the myths and misconceptions about ENUM and its role in convergence.

Taking Aim at ENUM Myths

Here are the more commonly-heard concerns about ENUM.

Myth #1: ENUM won't be relevant until public ENUM rolls out in major countries

False. Most ENUM implementations today fall within the private and carrier ENUM categories, used for interconnecting islands of services like VoIP and for reducing costs of operations. It is true that public ENUM efforts are hampered by political and regulatory differences between countries worldwide. But, regardless of the fate of public ENUM globally, private and carrier ENUM has an important place inside converged networks today and in the future.

Myth #2: DNS is too slow, or inconsistent in performance, to support ENUM

ENUM performance is critical to call initiation times. Performance in this respect is measured in query latency — the average time it takes an ENUM server to return a result. Query latency must be in the milliseconds, preferably under a millisecond, to meet the call initiation performance standards set by the PSTN. ENUM critics point to Internet DNS latency — in the tens or hundreds of milliseconds — as proof that ENUM is not fast enough. This issue is easily addressed once one understands the factors contributing to DNS latency. The first factor is the performance of the DNS server delivering local data. Carrier-grade, high-performance ENUM servers, such as those provided by my company Nominum, can provide one-millisecond latencies today. The second factor is the latency required for the local caching DNS server to retrieve authoritative information from the global network. This includes the authoritative servers' latency as well as the latency introduced by the distance on the network. On the open Internet, this latency is indeed impossible to control. However, many carriers today are deploying ENUM within their networks and replicating authoritative data locally. This approach eliminates the latency introduced by the caching/authoritative split and reduces or even eliminates external network effects. This approach is possible using modern compression technology optimized for ENUM data, capable of storing hundreds of millions of records on a single Linux or Solaris box.

Myth #3: DNS can't scale to support ENUM

The data volumes for ENUM data can be quite large because ENUM servers hold routing information for subscribers inside and outside networks, potentially worldwide. Additionally, the information used by ENUM — technically stored in Naming Authority Pointer (NAPTR) records — is much longer than traditional DNS records.
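To make the number-to-DNS mapping concrete, here is a small, hedged sketch (not from the original article): it performs the standard ENUM transformation of an E.164 number -- strip the '+', reverse the digits, separate them with dots, and append e164.arpa -- and shows, in a comment, what a NAPTR answer might look like. The phone number and SIP URI below are invented for illustration, and the snippet does not perform a real DNS query.

```python
def e164_to_enum_domain(number: str) -> str:
    """Convert an E.164 number (e.g. '+15551234567') to its ENUM domain name."""
    digits = [c for c in number if c.isdigit()]   # drop '+', spaces, dashes
    return ".".join(reversed(digits)) + ".e164.arpa"

# Hypothetical example: the number and the URI are made up.
number = "+15551234567"
print(e164_to_enum_domain(number))  # -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa

# A resolver would then look up NAPTR records at that name, e.g. (illustrative only):
#   7.6.5.4.3.2.1.5.5.5.1.e164.arpa. IN NAPTR 100 10 "u" "E2U+sip" "!^.*$!sip:user@example.com!" .
# The regexp field rewrites the dialed number into a SIP URI that the call can be routed to.
```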
It is also true that today's widely-deployed DNS servers cannot handle the data volumes — tens or hundreds of millions of records — required by ENUM. But this is an implementation issue, not a design issue. Using compression algorithms optimized for ENUM data, specialized ENUM servers can gracefully handle hundreds of millions or even billions of records.

Myth #4: ENUM will be vulnerable to Denial of Service and other attacks

This argument extrapolates from a current weakness in DNS implementations: DNS servers are vulnerable to Denial of Service attack… ENUM relies on DNS… Therefore, ENUM will be vulnerable to DoS attacks. The logic is a bit simplistic, but the fear behind it is real. We want reliable communications services that cannot easily be disrupted. Attacks and interruptions on the Internet at large are far too common and widely publicized. One reason that DNS servers today are vulnerable to these attacks is because they are often running "flat out," unable to handle large spikes in traffic. Using more efficient server software with sufficient performance headroom alleviates the risk. Also, because the ENUM data itself will be distributed and replicated in many sites rather than clustered in a few authoritative servers, it will be much more difficult to actually disrupt ENUM service with a DoS attack. In addition, in private and carrier ENUM implementations, the ENUM servers can easily be configured to only respond to queries from approved calling elements.

Myth #5: ENUM/DNS doesn't address local number portability (LNP)

False again. ENUM servers can store a copy of the LNP database and keep it constantly updated with data feeds from NeuStar and other data providers. The question of how you divide phone numbers into zones has ramifications for data management, updates and provisioning. On the surface, it seems ENUM implementers have to choose one of two evils: either confront a massive scalability issue by storing each phone number in a separate and individually managed DNS zone, or take on the burden of building a management application that merges phone numbers into a few zones. Again, there are design approaches to resolve this problem. Nominum's ENUM server uses the concept of "composite zones." These virtual zones, created from multiple sources, act like full-featured DNS zones. This approach allows data from multiple zones to be organized and queried easily and efficiently, and solves the problem of how to deal with Local Number Portability without "breaking" the DNS.

Myth #6: ENUM cannot support fast updates

ENUM data undergoes frequent and consistent changes, as subscribers change service levels, providers, or preferences. Many of today's DNS servers are not able to support a high volume of updates gracefully while handling large query volumes. Again, this concern is relevant to today's frequently-installed DNS implementations, and not to the ENUM protocol itself. An LNP database of 200 million records in the US typically experiences 300,000 changes a day — an average rate of less than 5 updates/second. There are ENUM-based servers capable of handling thousands of updates per second while serving tens of thousands of queries per second.

Myth #7: ENUM requires too much network bandwidth

ENUM is an essential component of the initial call connection process. Some in telephony feel that, compared with SS7 and other routing mechanisms, ENUM is not an efficient way to perform call routing, as it requires lookups over the network and therefore consumes network bandwidth.
Considered in the context of VoIP traffic, however, the ENUM overhead is negligible. Network lookups for ENUM typically use a single network packet in each direction. Once the call is established, the first second of voice traffic will consume significantly more network bandwidth, and the ENUM traffic pales in comparison. If the network is capable of supporting VoIP traffic, ENUM should not be a problem.

Myth #8: ENUM doesn't have all the features of SS7 call setup

This actually isn't a myth. Using ENUM to route traffic on IP networks bypasses the PSTN and its rich Signaling System 7 (SS7) services. Today, it is correct that the ENUM protocol does not have the rich features built into SS7 over the years because ENUM is part of a larger IP subsystem. Many of the SS7 features are actually contained in other network elements. At the same time, ENUM providers are extending their solutions to support many of these features, including Least Cost Routing, SIP forwarding, presence, follow-me roaming, etc. And, by using IP networks to route calls, you retain all of the additional functionality provided by converged voice/data/media services, which is lost when using the PSTN. If you are looking purely at feature capabilities, ENUM is the better long-term solution, as it supports next-generation communication services like video, multimedia, conferencing, and more.

When you separate fact from fiction, it is obvious that ENUM has an important role to play in converging networks. These ENUM myths will disappear as this technology is proven in real-world implementations. For now, waiting for the smoke to clear could be a missed opportunity. If you want to reduce the cost of VoIP service delivery, or deploy new, next-generation network services, then ENUM represents a real opportunity for building a long-term, standards-based solution bridging phone numbers and IP networks.

Background: The Many Faces of ENUM

One potential source of confusion, when talking about ENUM, is the variety of ENUM implementations in place today. Quite often, people speaking of ENUM and what it cannot do are really referring to only one of the following:

Public ENUM: This refers to the original vision of ENUM as a global, public directory, with subscriber opt-in capabilities and delegation at the country-code level in the e164.arpa domain. This is also referred to as User ENUM.

Private ENUM: A carrier may use ENUM within its own networks, in the same way DNS is used internally to networks.

Carrier ENUM: Groups of carriers or communication service providers agree to share subscriber information via ENUM in private peering relationships. The carriers themselves control subscriber information, not the individuals. Carrier ENUM is also referred to as Infrastructure ENUM, and is being adopted today to support VoIP peering.

Originally published in Converge! Network Digest
By Chris Risley, Chairman of the Executive Committee of Nominum
What would the former IBM chief executive Thomas Watson have to say about the current developments taking place with computers? After all it was Watson who claimed in 1943 that the world would need about five computers. Today, that claim looks a little wayward, to say the least, with millions of computers being sold over-the-counter every year. When computer performance is bundled together, we have what is known as a supercomputer.

In the US, plans are already underway to build just such a computing colossus that will overshadow all previous such systems. The “Sequoia” project aims to build the first computer capable of reaching the 20 petaflop mark. In comparison, “Roadrunner,” currently the world’s fastest computer, just about manages a petaflop, meaning that “Sequoia” would make it look more like the world’s fastest calculator. But what does 20 petaflops mean? Primarily, 20 petaflops is a value, a 20 with fifteen zeros, and is in itself nothing tangible. In order to form a better appreciation for what “Sequoia” with its 20 petaflops is capable of, it would take six billion people all equipped with calculators 1,000 years to do the calculations that “Sequoia” can manage in one day. That’s a scenario that in Watson’s day would have caused an uproar and been considered as being beyond even the wildest borders of fiction.

Before “Sequoia” is put into operation, another system will ensure that everything runs exactly to plan. The supercomputer “Dawn,” primarily a delivery system, will be based on Blue Gene/P technology and reach performances of over 500 teraflops. Both computers will work in tandem, although “Dawn” will afford users the opportunity of developing or adapting their applications for Blue Gene technology and to test and improve their scalability. “Dawn” is, as such, a typical porting and developing system. It will be the system on which applications are created and these applications will then execute operations and calculations in the petaflop range on “Sequoia.” Since there are not so many of these applications around, the supposedly smaller computer takes on added significance for users. They can undertake and carry out initial tests and studies and attempt to pave the way toward such petaflop applications.

The National Nuclear Security Administration, which commissioned the project, is a part of the US Department of Energy. It wants to see “Sequoia” in use by 2012. By then no fewer than 96 racks will provide accommodation for the 1.6 million IBM POWER processors. According to official press releases, “Sequoia” will contribute to increased security and reliability of the United States nuclear arsenal. It goes without saying, of course, that other types of security aspects pertaining to the nuclear arsenal will be simulated, especially with regard to keeping a secure eye on aging materials. All over the world, scientists have been searching for solutions to problems raised by the safe disposal and storage of nuclear waste.

“We see the entire project from the point of view of the researcher,” said Klaus Gottschalk, IT Systems architect with IBM. “For him the use of the computer is easy to evaluate. Large sums are being invested to help drive development onwards.” However, this giant machine is not only capable of turning nuclear research into visible, viewable action. The enormous potential offered by a 20 petaflop computer extends far beyond nuclear weapons safety.
According to IBM estimates, the supercomputer will be able to forecast weather up to 40 times more precisely than is possible today, and be invaluable in such areas as astronomy, energy, biotechnology and climate research. “Modeling and simulation is crucial for ensuring the ability of our country to innovate and compete globally,” explained Dr. Cynthia McIntyre, Senior VP at the Council on Competitiveness. At this point, IBM has not said exactly how much power “Sequoia” is going to need. But according to the company, the machine is set to break all records in this area as well. It has been estimated that it will be the world’s first computer to achieve an efficiency of 3,050 calculations per watt.

In terms of supercomputing, the US is no longer the only big player. The IBM-JUGENE system in Juelich, Germany, means Europe is currently ranked 11th in a list of the world’s 500 fastest computers compiled by the universities of Mannheim and Tennessee. Accordingly, the Juelich Research Center has been top of the tree in Europe for the last two years in terms of fastest computer. Plans are already afoot in Juelich to install the first petaflop computer in Europe — incidentally also from IBM — by the middle of this year. In all probability, after an initial introduction, this supercomputer will force its way into the top three of the world’s fastest computers. It will be capable of one quadrillion computational operations per second. The new supercomputer’s roughly 295,000 processors will then be housed in 72 phonebox-sized cabinets in the computing labs of the Juelich Supercomputing Center. Replete with 144 terabytes of RAM, and together with the remaining computers at the research center, Juelich will then be operating at 1.3 petaflops. In addition to its high speed, the supercomputer will also have access to around 6 petabytes of hard disk. That more or less corresponds to sufficient memory to store all the information contained on over one million DVDs.

This will be the first machine built specifically for the Gauss Center, which has centers in Juelich, Stuttgart and Garching in Germany. The Gauss Allianz is a Europe-wide consortium that bundles the performance capacity of all Europe’s supercomputers. According to a spokesperson for the research center at Juelich, “The three centers should speak with one voice and provide a counterpart and intermediary for scientists, particularly on the international stage.”

The Juelich Research Center’s main focus is to be found in fundamental research. The present Blue Gene/P system has around 20 applications that use up the majority of its computing time. Top of this list is the quantum chromodynamics, or QCD, application. This application is closely related to quantum electrodynamics, which describes the interactions of electrically charged particles by means of the exchange of photons; both are theories from high-energy physics. In total, scientists from all manner of disciplines — from materials science through particle physics to medicine and environmental research — will have the opportunity to book themselves some computer time on the Juelich system. An independent committee of experts will then decide on which plans are best suited and allocate computing time accordingly. Researchers will be pleased at the enthusiasm for investment in such projects.
Achim Bachem, chairman of the research center, states confidently, “Computers capable of this kind of performance form a universal key technology in helping find solutions to the most complex and most urgent scientific problems.”

About the Author

Markus Henkel is a geodesist and science writer who lives in Hamburg, Germany. He writes about supercomputing, environmental protection and clinical medicine. For more information, email him at firstname.lastname@example.org or visit the Web site: http://laengsynt.de.
How To Use Web Search Engines
Page 3 -- How To Plan The Best Search Strategy

The Web is potentially a terrific place to get information on almost any topic. Doing research without leaving your desk sounds like a great idea, but all too often you end up wasting precious time chasing down useless URLs. Almost everyone agrees that there's gotta be a better way! But for now we're stuck with making the best use of the search tools that already exist on the Web. It's important to give some thought to your search strategy. Are you just beginning to amass knowledge on a fairly broad subject? Or do you have a specific objective in mind--like finding out everything you can about carpal tunnel syndrome, or the e-mail address of your old college roommate? If you're more interested in broad, general information, the first place to go is to a Web directory. If you're after narrow, specific information, a Web search engine is probably a better choice. Interested in finding information about people (friends, classmates, public figures) on the Web? We have some advice for you on that subject.

Searching by Means of Subject Directories

Think back to the library card catalogue analogy. In the old card files, and even in today's computer terminal library catalogues, you find information by searching on either the author, the title, or the subject. You usually choose the subject option when you want to cover a broad range of information. Example: You'd like to create your own home page on the Web, but you don't know how to write HTML, you've never created a graphic file, and you're not sure how you'd post a page on the Web even if you knew how to write one. In short, you need a lot of information on a rather broad topic--Web publishing. Your best bet is not a search engine, but a Web directory like the Open Directory Project, Google Directory or Yahoo. A directory is a subject-tree style catalogue that organizes the Web into major topics, including Arts, Business and Economy, Computers and Internet, Education, Entertainment, Government, Health, News, Recreation, Reference, Regional, Science, Social Science, and Society and Culture. Under each of these topics is a list of subtopics, and under each of those is another list, and another, and so on, moving from the more general to the more specific. Example: To find out about Web page publishing from Yahoo, select the Computers and Internet topic, under which you find a subtopic on the World Wide Web. Click on that and you find another list of subtopics, several of which are pertinent to your search: Web Page Authoring, CGI Scripting, Java, HTML, Page Design, Tutorials. Selecting any of these subtopics eventually takes you to Web pages that have been posted precisely for the purpose of giving you the information you need. If you are clear about the topic of your query, start with a Web directory rather than a search engine. Directories probably won't give you anywhere near as many references as a search engine will, but they are more likely to be on topic. Web directories usually come equipped with their own keyword search engines that allow you to search through their indices for the information you need. Important note: Search engines and Web directories are being integrated in interesting ways. For example, if you use the Google search engine and one of the results happens to be found in Google's Directory (which is based on the dmoz directory), Google will offer you a link to that section of the directory.
Meanwhile, if you conduct your search in the Google directory, Google will order the results according to PageRank, which is Google's all-important measure of link popularity.

Searching by Means of Search Engines

This is where things start to get complicated. C++ is not a word. It's a letter followed by two characters that might, depending on the index, be regarded merely as punctuation. Many text search engines have trouble handling input of this type. Many don't deal too well with numbers, either. So much for "007," "R2D2," or "Catch-22." Here's another example of a text string search engines hate: To be or not to be. Just about anyone who finished junior high school will be able to tell you where the phrase comes from and (possibly!) what it means. But some search engines choke because all the words in the phrase are stop words--i.e., unimportant words too short and too common to be considered relevant strings on which to search. However, if you enclose the query in quotation marks, forcing the search engine to find the words "to be or not to be" in that precise order, most search engines can recognize the phrase as a famous quotation from Hamlet. Let's take a less obvious example. Suppose you're a fan of murder mysteries and you want to search the Web for the home pages of all your favorite authors in that genre. If you simply enter the words "mystery" and "writer," most search engines will return hyperlinks to all Web documents that contain the word "mystery" or the word "writer." This will probably include hundreds--or even thousands--of URLs, most of which will have no relevance to your search. If you enter the words as a phrase, however, you stand a better chance of getting some good hits. If you understand how search engines organize information and run queries, you can maximize your chances of getting hits on URLs that matter.

Next: Find out How Search Engines Work

The Spider's Apprentice was conceived and written by Linda Barlow, who maintains this site for Monash Information Services. Copyright 1996-2004. All rights reserved. Updated: 05/11/04
For the last few years, researchers from Ben-Gurion University of the Negev have been devising new ways to exfiltrate data from air-gapped computers: via mobile phones, using radio frequencies (“AirHopper”); using heat (“BitWhisper”); and using rogue software (“GSMem”) that modulates and transmits electromagnetic signals at cellular frequencies. The latest version of the data-exfiltration attack against air-gapped computers involves the machine’s fans. Dubbed “Fansmitter,” the attack can come in handy when the computer has no speakers, so attackers can’t use conventional speaker-based acoustic channels to get the information out. The attack starts with the Fansmitter malware being implanted on the air-gapped computer. “Our method utilizes the noise emitted from the CPU and chassis fans which are present in virtually every computer today. We show that a software can regulate the internal fans’ speed in order to control the acoustic waveform emitted from a computer. Binary data can be modulated and transmitted over these audio signals to a remote microphone (e.g., on a nearby mobile phone),” the researchers, led by Mordechai Guri, head of R&D at the University’s CyberSecurity Research Center, explained. “Using our method we successfully transmitted data from air-gapped computer without audio hardware, to a smartphone receiver in the same room. We demonstrated the effective transmission of encryption keys and passwords from a distance of zero to 8 meters, with bit rate of up to 900 bits/hour. We show that our method can also be used to leak data from different types of IT equipment, embedded systems, and IoT devices that have no audio hardware, but contain fans of various types and sizes.”
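The article does not include the researchers' code, but the general idea -- keying bits onto two distinguishable fan speeds, each held for a fixed dwell time -- can be sketched in a few lines. The following is a hypothetical illustration only: the RPM values, dwell time and payload are assumptions rather than Fansmitter's actual parameters, and the script merely computes a transmission schedule instead of driving real hardware.

```python
# Hypothetical sketch of fan-speed keying: bit 0 -> low RPM, bit 1 -> high RPM.
# Values are illustrative assumptions, not the parameters used by Fansmitter.
LOW_RPM, HIGH_RPM = 1000, 1600   # two acoustically distinguishable fan speeds
DWELL_SECONDS = 4.0              # hold time per bit (~900 bits/hour, as in the article)

def bits_from_bytes(data: bytes):
    """Yield the payload one bit at a time, most significant bit first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def fan_schedule(payload: bytes):
    """Return a list of (rpm, seconds) steps that would encode the payload."""
    return [(HIGH_RPM if bit else LOW_RPM, DWELL_SECONDS) for bit in bits_from_bytes(payload)]

schedule = fan_schedule(b"key")
print(len(schedule), "fan-speed steps,", len(schedule) * DWELL_SECONDS, "seconds total")
# A receiver (e.g. a phone microphone) would track the fan's acoustic tone and
# map the two sound levels back to 0s and 1s.
```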
Like their federal counterparts, managers of state-, county- and municipally owned buildings are focused on increasing energy efficiency. Catalysts for this heightened focus include a growing cultural awareness of sustainability, rising electricity costs and tighter budgets. However, it is critically important for all facility managers to distinguish between energy efficiency, a desired end result, and energy management, a strategic plan designed and implemented with the building life cycle in mind. It involves a continuous process to actively monitor, manage, improve and sustain savings. Driving energy management via a carefully constructed action plan will better position state, county and municipal facility managers to achieve energy efficiency goals in the short term.

At the federal level, legislation is a key driver for facility managers to set energy efficiency goals. The Energy Independence and Security Act of 2007 (EISA 2007), for example, requires all federal government facilities to reduce energy consumption by 3 percent per year through 2015 for a total 30 percent reduction. Additionally, recognizing that one can't manage what one can't measure, the Energy Policy Act of 2005 (EPAct 2005) requires facility managers to install advanced electric meters on every building by Oct. 1, 2012. Finally, Executive Order 13514, Federal Leadership in Environmental, Energy and Economic Performance, which was enacted on Oct. 5, 2009, requires facilities to set targets, and measure and report on greenhouse gas emissions. The strategies those federal facility managers deploy to comply with mandates can, and should, be replicated at the state, county and municipal level. It doesn't just make sense for the environment; it makes sense to their bottom line.

According to the U.S. Department of Energy's Energy Information Administration (EIA), the per-kilowatt hour cost of electricity rose from 7.6 cents to 9.8 cents from 2004 to 2008, a 28.8 percent increase. The EIA expects that amount to increase to 10.7 cents by 2010, another 9.2 percent jump. Assuming that occurs, the cost of electricity will have increased nearly 40 percent from 2004 to 2010. Unless conservation measures are implemented, states, counties and municipalities may be forced to make cuts in other areas to pay utility bills. It is even more critical during these tough economic times when these government entities are struggling to meet their fiscal obligations.

A carefully constructed energy management action plan can help state, county and municipal facility managers hone best practices to reduce energy and life cycle costs. There are many ways to instantly improve an existing facility's energy efficiency by varying degrees, but the overall goal should be continuous improvement. Without a well defined, strategic plan, implemented tactics likely won't achieve their full energy and cost savings potential. A strategic energy management action plan that incorporates a keen understanding of many factors, including energy efficiency goals, budgetary parameters and payback threshold, along with the appropriate technology solutions, will foster a mindset of ongoing energy planning and accountability. An effective plan should incorporate four basic steps:

Step 1: Measure. Collect data from energy consumers within a facility and analyze individuals' impact on total consumption. Measuring energy use via a metering system identifies potential savings opportunities and creates a baseline to gauge improvement.

Step 2: Fix the basics.
This consists of efforts like installing low-energy-consumption devices, like LED lighting, and addressing power quality issues. However, while they can translate to substantial savings, such measures are typically a one-time improvement, or a passive approach to energy management.

Step 3: Automate. Measures like schedule-based lighting control and occupancy sensors automatically turn lights on only when they are needed, while HVAC control regulates heating and cooling at optimal levels, which can change day by day. More importantly, these measures facilitate an active approach to energy management, because they can be "actively" adjusted based on fluctuating facility energy demands or supply-side programs, such as demand response, where pre-selected electrical loads are turned off based on a utility request or when electrical rates meet a pre-set threshold.

Step 4: Monitor and improve. Unplanned, unmanaged shutdowns of equipment and processes; substandard automation and regulation; inadequate maintenance, and/or a lack of behavior continuity can eliminate previously gained efficiencies and savings. In fact, typically up to 8 percent savings per year is lost without a monitoring and maintenance program, and up to 12 percent savings per year is lost without regulation and control systems. Power meter installations, energy management systems (EMS), regular maintenance and retro-commissioning can all help achieve a long-term positive return on investment.

Like their federal counterparts, state, county and municipal facility managers that commit to this type of active approach to energy management can realize up to 30 percent savings in a relatively short duration, and the value is realized in several areas.

In addition to creating and implementing a strategic energy management program, a facility manager must also educate stakeholders on how to maximize energy savings generated by active energy efficiency measures. For example, no matter how stringently adjusted a building system is upon initial occupancy, facility usage changes over time -- the lighting control system is overridden, the set points on the HVAC system are changed and equipment begins to age. These factors translate to energy and cost savings erosion. A facility manager can avoid this by emphasizing a lifecycle approach focused on maintaining those savings. This sets the stage for more effective long-term energy management, and a greater opportunity for success.
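As a rough, back-of-the-envelope illustration of that erosion (not from the article), the short calculation below applies the 8-percent-per-year loss figure quoted above to an assumed 20 percent initial saving:

```python
# Rough illustration of savings erosion using the article's 8%-per-year figure.
# The 20% initial saving is an assumed example value, not from the article.
initial_saving = 0.20     # fraction of the energy bill saved right after the project
annual_loss = 0.08        # share of that saving lost each year without monitoring

saving = initial_saving
for year in range(1, 6):
    saving *= (1 - annual_loss)
    print(f"Year {year}: remaining saving about {saving:.1%}")
# 0.92**5 is roughly 0.66, so about a third of the original saving is gone after
# five years -- the case for Step 4's ongoing monitoring and maintenance.
```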
Voyager mission a long-lasting IT success
By Frank Konkel - Oct 15, 2013

After more than 30 years, Voyager 1 has left the solar system, still transmitting useful data thanks to long-ago prescient IT choices.

Voyager 1 recently became the first manmade object to leave the solar system, and after 36 years of spaceflight, it's still producing relevant scientific data as it speeds away from Earth at 38,000 miles per hour. It has literally gone where no manmade craft has gone before, and its longevity is in part due to innovative IT decisions made by NASA engineers in the months leading up to its 1977 launch, according to Lee Holcomb, now Lockheed Martin's vice president of strategic initiatives. Nearly four decades ago, Holcomb worked as an engineer on the hugely successful NASA program that launched the Voyager 1 and 2 spacecraft. Collectively, the Voyager 1 and 2 spacecraft visited and helped map the solar system's outer planets, and Voyager 1 has continued improving scientists' understanding of the solar system through data it still transmits on plasma density, solar wind speed and magnetic fields. In three years of development at the NASA Jet Propulsion Laboratory (JPL), the Voyager spacecraft benefited from every engineering advancement made by previous spacecraft, Holcomb said. It is powered by a radioisotope thermoelectric generator that will keep the craft powered until around 2025, and its integrated propulsion system drastically increased its potential lifecycle.

But Holcomb said one of the keys to Voyager 1's success can be traced back to IT. Once installed and launched, Voyager 1's onboard computer would always and forever have 69 kilobytes of memory, yet NASA's ability to communicate would change over time. The craft's systems were designed with upgrades to communications and compression technologies in mind, even though such algorithms and methods didn't exist yet. "From an IT perspective, we were dealing with antiquated computers compared to today," Holcomb said. "We enabled the communications and computer system to allow us to upload new algorithms for compression and communications. After the spacecraft launched, in the 1980s and 1990s, we were able to upload to the spacecraft more modern algorithms for compression and enabled the communications link to work at these very extreme distances with feeble, low-power communications systems."

Those advances allowed the Voyager program to be as long-lasting and successful as it has been, Holcomb said, and it drives home the importance of integration and innovation in federal IT some 40 years after the project began. The federal government shells out about $80 billion per year on information technology, although the sequester has squeezed recent budgets. But affordability isn't a new concern, Holcomb said. NASA had canceled a similar spacecraft program prior to Voyager because of its expense. "Many agencies are surviving on lower budgets," said Holcomb, who now researches how transformational technologies such as cloud computing, big data, mobility and cybersecurity can change business practices in the public and private sectors. The same principles that applied to Voyager in the days of disco and Led Zeppelin still apply to federal IT. Not every IT project is built to last decades or even several years, but principles of innovation, integration and cost-effectiveness should always be expressly considered for any project, he said.
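Holcomb's point about uploading better compression algorithms is easy to illustrate in miniature. The toy run-length encoder below is emphatically not the scheme Voyager used; it is only a hedged sketch of why a smarter encoding lets a fixed, feeble downlink carry more science data.

```python
# Toy run-length encoding -- purely illustrative, not the algorithm Voyager used.
def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])   # store (count, value) pairs
        i += run
    return bytes(out)

# Long runs of identical readings (common in quiet instrument data) compress well:
sample = bytes([0] * 400 + [7] * 100 + [0] * 500)
encoded = rle_encode(sample)
print(len(sample), "bytes ->", len(encoded), "bytes")   # prints: 1000 bytes -> 10 bytes
# Fewer bits per observation means more observations per day over the same radio link.
```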
"Voyager gets a lot of credit for the wonderful science it has produced, and we put these spacecraft up there to get science, but I think it is time to say there was super engineering done on that spacecraft," Holcomb said. "For it to be resurrected on affordability, that's what we're dealing with in government systems today – to integrate and innovate and drive systems' costs down." Four decades after it was built, the $250 million Voyager 1 is still in communication with Earth, and still doing science, far past its life expectancy of five to 10 years. The best IT projects stand the test of time, Holcomb said. After his time as an engineer, Holcomb would eventually climb the ranks at NASA to become its CIO in the late 1990s before taking on the challenge of overseeing the merger of 22 federal agencies into the Department of Homeland Security as its first CTO. There were major IT challenges inherent in those positions, as well as at Lockheed Martin, where he's helped the Environmental Protection Agency and other federal customers jump to cloud computing. But Holcomb's passion for IT shines brightest when he reminisces about his Voyager days. It is "one of my proudest involvements in the field of engineering," Holcomb said. Frank Konkel is a former staff writer for FCW.
As long as criminals have been plotting to circumvent order and established protocols, there have been crime-fighters pushing the envelope to foil their efforts. As criminals hatch sophisticated schemes to bypass detection, law enforcement agencies and other dedicated groups devise original and inventive ways to thwart their advances. Technology and innovation play key roles in crime detection, responding to ever-changing criminal methods. Throughout history, technological advances have changed the way law enforcement personnel and concerned citizens police their neighborhoods. Automobiles, radios, and mobile telephones, for example, each represent game-changing innovations that enhance crime prevention, detection and control. Just as these revolutionary advances shed new light on crime-fighting practices, today's technological innovations open new avenues of detection for modern crime fighters.

Using data to identify patterns and potentially predict crimes is a growing law enforcement trend that relies on machine learning to identify predictive patterns. Old-fashioned police work requires manual data analysis, especially in a crime series, when individuals or groups commit multiple crimes. But a recent machine learning innovation called Series Finder helps police narrow their searches by growing a crime series projection from a couple of unsolved crimes. The algorithm uses historical crime data to "learn" and construct patterns based on analysis of particular data points like geographical location and modus operandi. Burglaries, for example, are charted according to their occurrence, and Series Finder attempts to return relevant information about future vulnerabilities. While predictive policing is in its early stages, high profile endorsements from the Justice Department and other agencies attest to its important place in the future of crime detection.

Analyzing and interpreting digital information is an increasingly important part of modern criminal proceedings, as data mining and other sophisticated electronic crimes are seen with increasing frequency and severity. Exposing the digital criminal footprint left by fraudsters and other digital criminals requires cutting-edge tactics, which continually strive to keep pace with criminal innovations. Communications and mobile technology, including smartphones and tablets, play a role in most cases, so emails, hard-drives, mobile phones and other devices each furnish digital points of reference for prosecutors, law enforcement officials and criminal defense attorneys.

Whether it is a result of the stakes being higher today than at other points in history, or simply that modern technology accommodates it, surveillance is a part of daily life for most citizens. Cameras and other monitoring equipment have grown smaller and less obtrusive, and new technologies furnish greater visual clarity than older models. As a result, video surveillance has become so pervasive as to become a social issue, pitting personal liberties against safe and secure societies.

Gunshot Detection Systems

Technology innovations look for solutions beyond conventional wisdom, so not every tech-inspired crime detection effort is going to pay big dividends. Gunshot alert systems are deployed in many major cities, with mixed results. The sound-sensing devices are placed in high crime areas, where they relay information to police officers in real time – including notifications of gunshots detected within the sensors' range.
While the systems have helped solve crimes, detractors say the devices are not worth their expense, because residents are quick to report gunshots on their own – making the sensors obsolete. To a certain extent, crime detection technology mirrors trends among citizens. Tablets and other mobile devices, for example, dominate information technology on the streets, so it is only natural that law enforcement personnel use mobile technology to detect and solve crimes. Tablets furnish efficient tools, enabling officers to take tasks mobile, which were once performed at the station or in squad cars. As a result, officers spend more time policing and less time hidden-away completing paperwork. Mobile devices also furnish access to materials like state crime databases and other investigative resources, streamlining crime detection and enforcement. Technology and innovation are at the heart of effective crime detection; especially in the rapidly changing electronic age. Information technology plays a particularly important role in policing, so law enforcement agencies use state-of-the-art surveillance, digital forensics, and predictive policing to stay one step ahead of criminals. Author: Daphne Holmes contributed this guest post. She is a writer from ArrestRecords.com and you can reach her at firstname.lastname@example.org.
Monitoring worker conditions in frigid environments
Friday, Mar 29th 2013

The Alaskan wilderness can be an extremely inhospitable environment. Temperatures can drop to life-threatening levels. With the region's rich reserves of oil, natural gas and other fossil fuels, workers will continue to toil away in the cold, attempting to extract its resources. Ensuring the safety of employees who work in unforgiving, frigid environments should be a priority of any business. Norwegian researchers may have recently developed a way to track worker safety conditions with the help of temperature monitoring equipment.

Monitoring worker conditions

The Alaska Dispatch reported that the research organization SINTEF has been working on a coat with built-in environmental sensors. If exterior conditions become life-threatening, temperature sensors on the clothing will alert crews to the dangerous situation and allow them to respond appropriately. These tools will allow crew members working in freezing cold environments to safeguard themselves against life-threatening changes to the environment. "Workers will be exposed to more extreme weather conditions, and that may lead to fatigue, impaired physical and cognitive performance. The safety of the workers is significantly affected when outside temperature decreases," wrote SINTEF research manager Hilde Faerevik, according to the news outlet. The suits are also equipped with temperature and humidity sensors inside to measure the physical condition of workers operating in cold conditions. With this information, crew supervisors can better determine whether or not conditions have become too dangerous for workers to continue.

Reducing errors on the job

Researchers contend that, in addition to enhancing worker safety, the coats will increase their productivity as well. Current protocols use the wind-chill index to measure environmental safety conditions. However, this is only useful for predicting the risk of workers becoming frostbitten on their skin and not for measuring the temperature of their hands. According to studies conducted by SINTEF, worker performance can be significantly impaired when the temperature of their fingers dips below 68 degrees F. "In such situations, the average worker may be so determined to get the job done that his fingers become cold and lose their dexterity, with the result that screws are not fitted correctly, leading to increased risk level sometime in the future," Oystein Wiggen, research scientist at SINTEF Health Research stated in a press release. The temperature sensors do not need to actually contact the skin to get a reading. The devices' wiring has been created using conductive thread that can then be woven into the jacket's cloth.
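The article describes the alerting behavior only in general terms; a minimal sketch of the kind of threshold logic such a garment might use is shown below. The 20 C (68 F) finger-temperature limit mirrors the SINTEF figure cited above, while the core-temperature limit and the sample readings are invented placeholders, not data from the actual suit.

```python
# Hypothetical alerting logic for a sensor-equipped jacket.
# The 20 C finger-temperature threshold follows the SINTEF dexterity figure;
# the core-temperature threshold and the readings below are assumptions.
FINGER_TEMP_LIMIT_C = 20.0
CORE_TEMP_LIMIT_C = 35.0

def check_worker(readings: dict) -> list:
    """Return a list of warnings for one set of sensor readings."""
    warnings = []
    if readings["finger_temp_c"] < FINGER_TEMP_LIMIT_C:
        warnings.append("finger temperature low: dexterity and work quality at risk")
    if readings["core_temp_c"] < CORE_TEMP_LIMIT_C:
        warnings.append("core temperature low: pull worker out of the cold")
    return warnings

sample = {"finger_temp_c": 17.5, "core_temp_c": 36.4}   # placeholder values
for warning in check_worker(sample):
    print("ALERT:", warning)
```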
I am sure many of you have been told in the past to defrag your hard drives when you have noticed a slow down on your computer. You may have followed the advice and defragged your hard drive, and actually noticed a difference. Have you ever wondered why defragging helps though? This tutorial will discuss what disk fragmentation is and how you can optimize your hard drive's partitions by defragmenting them for better performance.

In order to understand why defragging works, it is important to understand how data is stored on your hard drive. When data, such as a file, is stored on a hard drive the operating system attempts to store that file in one section of contiguous space -- that is, locations that are connected without a break. When you have a new hard drive, storing data in contiguous spaces is not a problem. As you use the hard drive, though, files will be deleted from it and small pockets of space will be created on your hard drive. These small pockets of space on your hard drive are what is called fragmentation. When a hard drive is fragmented, and the operating system wants to store a file on the hard drive, it attempts to store it in a section of contiguous space that will be large enough to accommodate the file. If the hard drive is heavily fragmented, there is the possibility that there will not be enough contiguous space available to store the file, and therefore the file will be broken up and stored in multiple locations on the hard drive. This causes the file to become fragmented. This is especially bad when installing new software on your computer because the program will now be installed over multiple locations on your hard drive. Now when you run this particular application its performance will be degraded because it has to be loaded from multiple locations on the hard drive. Figure 1 below shows an example of a fragmented file. Notice how File1 is stored in two locations which are not contiguous.

Figure 1: Fragmented File

To solve this problem, software developers developed a type of program called a Disk Defragmenter. A defragmenter is an application that reorganizes the data on your hard drive's partitions in such a manner that the files are stored in as much contiguous space as possible. The defragmenter will search your hard drive partition and move data from one location to another location, so that the files stored there are one contiguous piece, instead of being spread throughout multiple locations on the hard drive's partition. This allows the programs and data to run more efficiently and quickly as the operating system does not have to read from multiple locations. Figure 2 below shows an example of a file stored in contiguous space. Notice how the entire file is located in one area and not split between multiple locations.

Figure 2: File that is not fragmented

There are two ways to defragment your hard drive. One way is to use a Disk Defragmenter program and the other is to use an extra empty hard drive. We will discuss both ways below. Please note that this tutorial will cover the defragmenter found in Windows XP only. The other versions of Windows, and other vendors' applications, have similar programs that should not be too hard to figure out when following this tutorial.

Step 1: Shut down all applications

It is a good procedure to shut down all applications before you run a Disk Defragmenter, including your antivirus software. This is to make sure that no programs attempt to write to the drive while it is being defragmented.
Though this will not cause any damage, it may cause you to have to restart the entire process from the beginning.

Step 2: Running the Disk Defragmenter

Windows comes with a program called Disk Defragmenter which is installed with the operating system. This program can be found in your System Tools folder under Accessories on your Programs menu, as shown in Figure 3 below.

Figure 3. Launching Disk Defragmenter

Click once on Disk Defragmenter, as shown above in Figure 3, to launch the program. You will then be presented with a screen similar to Figure 4 below.

Figure 4. Disk Defragmenter Startup Screen

The main screen of Disk Defragmenter shows a listing of your hard drive partitions and gives you the option to Analyze a partition or Defragment it. Before you choose the Analyze or Defragment button, you should select the partition you want to work with by clicking once on it. By default the first partition will be selected.

Step 3: Analyze your partitions

When you click on the Analyze button, designated by the red box in Figure 4, the Defragmenter will scan the partition you have selected and give you a report on how badly it is fragmented. The higher the fragmentation percentage, the worse it is. When it is done analyzing your hard drive, it will display a prompt with its recommendation on whether or not you should defragment, and give you the option to defragment or view the report, as shown in Figure 5 below.

Figure 5. Prompt to view Report or Defragment

I always make it a habit to view the report so I can see how badly my hard drive is fragmented. You do this by clicking on the "View Report" button, designated by the red box in Figure 5. When you click on View Report, you will be presented with a screen similar to Figure 6 below.

Figure 6. Report of Analysis

The report will give you a lot of details about various attributes of your file system. We are only concerned with the Volume Fragmentation figure, as shown in Figure 6 above, designated by the red box.

Step 4: Defragmenting

Note: the defragmenter needs a certain amount of free space on the partition in order to work effectively. As you can see from Figure 6 above, this computer has a total fragmentation of 34%. That is not good, and my computer is not giving me the performance that it should. I am therefore going to defragment my hard drive by clicking on the Defragment button, designated by the blue box in Figure 6 above. After clicking on the Defragment button you will see a window similar to Figure 7 below.

Figure 7: Defragging

Let's explain what you see on this screen. The two bars in the middle of the screen with all the colors on them are the Analysis and Defragmentation displays. The top bar, labeled "Estimated disk usage before defragmentation", is the Analysis display and shows a graphical representation of your partition before the defrag started. The bar under it, labeled "Estimated disk usage after defragmentation", is the Defragmentation display. This shows a real-time graphical representation of your partition while it is going through defragmentation. The colors on the displays, with the key designated by the blue box in Figure 7, represent the state of the different sections of the partition.
The colors mean the following:

Red - Most of the clusters are part of a fragmented file.
Blue - Most of the clusters are part of contiguous (unfragmented) files.
Green - Most of the clusters are part of a file that cannot be moved from its current location for security or physical reasons.
White - Most of the clusters are free space.

You should let the defragmenter finish working on your partition, which can take many hours, so you may want to do something else for a while. When it has completed, it will give you a summary report, and you can then close the program or defrag any other partitions that you may have. When it is finished, your hard drive partition should be defragmented.

In general, disk fragmentation is not as much of a problem for Linux or Macintosh based computers. Most Linux computers use the ext2 or ext3 file system, which is known to be resilient against fragmentation and is not as severely affected when it does occur. It is advised that you do not manually defragment these types of file systems. Macintosh based computers do not require defragmentation as much either. Apple actually recommends that you do not defrag your drives if you are using Mac OS X, as there will be little benefit. They even state that defragging has the chance of causing a loss of performance. If you would still like to use a tool that works on an Apple hard drive, a program that is highly recommended is: Please note that many people have reported that Norton Utilities for Mac can actually cause problems when using its Disk Defrag utility, so it is advised that you do not use that program for these purposes.

Probably the best way to defragment your hard drive, if it is in your budget, is to use a spare hard drive. To defrag this way, you copy all the data off the heavily fragmented drive onto a clean spare drive, delete all the data on the fragmented drive, and then restore the data back from the spare drive. This copies all the data back to the drive in a clean, contiguous manner. This method will work for any operating system.

There really is no way to prevent fragmentation, as it is a natural process that occurs when writing and deleting data on a hard drive. Some file systems, such as NTFS and ext2, are more resilient to disk fragmentation. If you are a Windows user and are not dual-booting operating systems that are not compatible with NTFS, then you should upgrade your file system to NTFS. Not only will you gain the benefit of a more intelligent file system, but you will gain added security benefits as well.

As you can see, disk fragmentation can cause varying degrees of problems depending on the operating system you use. If you are a Windows user, then it is advised that you defragment; if you are a Linux or Mac user, then it is said to not be as important. If you do choose to defragment your hard drives, the list of tools below can be used for the various operating systems:

Recommended Windows Defragmenters:

Recommended Macintosh Programs:

As always, if you have any comments, questions or suggestions about this tutorial, please do not hesitate to tell us in the computer help forums.
<urn:uuid:e18fc012-1f20-480a-b7ff-c47b051bc72a>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/the-importance-of-disk-defragmentation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00030-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935465
2,542
3.515625
4
Soule E.K.,Virginia Commonwealth University | Nasim A.,Virginia Commonwealth University | Rosas S.,Concept Systems Inc. Nicotine and Tobacco Research | Year: 2016 Introduction: Electronic cigarette (ECIG) use has grown rapidly in popularity within a short period of time. As ECIG products continue to evolve and more individuals begin using ECIGs, it is important to understand the potential adverse effects that are associated with ECIG use. The purpose of this study was to examine and describe the acute adverse effects associated with ECIG use. Methods: This study used an integrated, mixed-method participatory approach called concept mapping (CM). Experienced ECIG users (n = 85) provided statements that answered the focus prompt "A specific negative or unpleasant effect (ie, physical or psychological) that I have experienced either during or immediately after using an electronic cigarette device is..." in an online program. Participants sorted these statements into piles of common themes and rated each statement. Using multidimensional scaling and hierarchical cluster analysis, a concept map of the adverse effects statements was created. Results: Participants generated 79 statements that completed the focus prompt and were retained by researchers. Analysis generated a map containing five clusters that characterized perceived adverse effects of ECIG use: Stigma, Worry/Guilt, Addiction Signs, Physical Effects, and Device/Vapor Problems. Conclusions: ECIG use is associated with adverse effects that should be monitored as ECIGs continue to grow in popularity. If ECIGs are to be regulated, policies should be created that minimize the likelihood of user identified adverse effects. Implications: This article provides a list of adverse effects reported by experienced ECIG users. This article organizes these effects into a conceptual model that may be useful for better understanding the adverse outcomes associated with ECIG use. These identified adverse effects may be useful for health professionals and policy makers. Health professionals should be aware of potential negative health effects that may be associated with ECIG use and policy makers could design ECIG regulations that minimize the risk of the adverse effects reported by ECIG users in this study. © The Author 2015. Source Haque N.,Wellesley Institute | Rosas S.,Concept Systems Inc. Family and Community Health | Year: 2010 This inquiry successfully sequenced and integrated 2 participatory research methods: photovoice and concept mapping. In the photovoice phase, immigrant residents shared perceptions and thoughts of their neighborhood through photographs and stories, capturing neighborhood characteristics that influence their health and well-being. In the concept mapping phase, active involvement of immigrant residents was facilitated to systematically organize and build consensus around the wide range of neighborhood factors identified from the photovoice work. The combination of these 2 participatory methods resulted in a conceptual framework of factors influencing immigrants' health and well-being, whereas the photographs with captions facilitated interpretation and action at multiple levels. Copyright © 2010 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source Goldman A.W.,Concept Systems Inc. Journal of Informetrics | Year: 2014 This paper contributes to the longitudinal study and representation of the diffusion of scholarly knowledge through bibliometrics. 
The case of systems biology is used to illustrate a means for considering the structure and different roles of journals in the diffusion of a relatively new field to diverse subject areas. Using a bipartite network analysis of journals and subject categories, a core-intermediary-periphery diffusion structure is detected through comparative analysis of betweenness centrality over time. Systems biology diffuses from a core of foundational, theoretical areas to more specific, applied, practical fields, most of which relate to human health. Next, cluster analysis is applied to subject category co-occurrence networks to longitudinally trace the movement of fields within the core-intermediary-periphery structure. The results of these analyses reveal patterns of systems biology's diffusion across both theoretical and applied fields, and are also used to suggest how the dynamics of a field's interdisciplinary evolution can be realized. The author concludes by presenting a typology for considering how journals may function to support attributes of the core-intermediary-periphery structure and diffusion patterns more broadly. © 2013 Elsevier Ltd. Source Gurney M.,Concept Systems Inc. Control Engineering | Year: 2011 Concept Systems was able to increase its green bean infeed rate by 100% using a 3D robotic vision system while eliminating a safety and material waste problem that was costing it 100,000 lb of lost beans per year. Plant managers decided to upgrade the bean bag handling system with the help of a system integrator having expertise in advance automation systems, including smart robotic workcells guided with machine vision systems. The control system used incorporated an advanced 3D vision system with high-end PC-based software to build a 3D model of the environment in which the infeed robot operated. The system modeled each pallet of bean bags for the coffee roaster application, using distance measurements obtained through laser triangulation. A new computer model was constructed for every tier of bags on the pallet and the model was run through an advanced algorithm that identifies unique features of the bags, and determined the precise position and orientation of each bag in that tier. Source Schmitt C.L.,Rti International | Rosas S.R.,Concept Systems Inc. Preventing Chronic Disease | Year: 2012 Introduction: Collaborations between cancer prevention and tobacco control programs can leverage scarce resources to address noncommunicable diseases globally, but barriers to cooperation and actual collaboration are substantial. To foster collaboration between cancer prevention and tobacco control programs, the Global Health Partnership conducted research to identify similarities and differences in how the 2 programs viewed program success. Methods: Using concept mapping, cancer prevention and tobacco control experts generated statements describing the components of a successful cancer prevention or tobacco control program and 33 participants sorted and rated the final 99 statements. Multidimensional scaling analysis with a 2-dimensional solution was used to identify an 8-cluster conceptual map of program success. We calculated Pearson correlation coefficients for all 99 statements to compare the item-level ratings of both groups and used t tests to compare the mean importance of ratings assigned to each cluster. 
Results: Eight major clusters of success were identified: 1) advocacy and persuasion, 2) building sustainability, 3) partnerships, 4) readiness and support, 5) program management fundamentals, 6) monitoring and evaluation, 7) utilization of evidence, and 8) implementation. We found no significant difference between the maps created by the 2 groups and only 1 mean difference for the importance ratings for 1 of the clusters: cancer prevention experts rated partnerships as more important to program success than did tobacco control experts. Conclusions: Our findings are consistent with those of research documenting the necessary components of successful programs and the similarities between cancer prevention and tobacco control. Both programs value the same strategies to address a common risk factor: tobacco use. Identifying common ground between these 2 research and practice communities can benefit future collaborations at the local, state, tribal, national, and international levels, and inform the broader discussion on resource sharing among other organizations whose mission focuses on noncommunicable diseases. Source
<urn:uuid:5e61b387-0ccc-4435-95b3-a6c7026118f0>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/concept-systems-inc-1382841/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00030-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922284
1,466
2.515625
3
Unlimited Scalability for Fiber-Optic Networks

Dense Wavelength Division Multiplexing (DWDM) is an optical multiplexing technology used to increase bandwidth over existing fiber networks. DWDM works by combining and transmitting multiple signals simultaneously at different wavelengths on the same fiber. The technology creates multiple virtual fibers, thus multiplying the capacity of the physical medium. DWDM provides the ultimate scalability and reach for fiber networks ...

Driving down the Cost per GbE Kilometer

WDM has revolutionized the cost per bit of transport. Thanks to DWDM, fiber networks can carry multiple terabits of data per second over thousands of kilometers – at cost points unimaginable less than a decade ago. State-of-the-art DWDM systems support up to 192 wavelengths on a single fiber pair, with each wavelength transporting up to 100Gbit/s of capacity – with 400Gbit/s and one Terabit/s on the horizon.

Web 2.0 without DWDM? Unthinkable!

DWDM provides ultimate scalability and reach for fiber networks. Without the capacity and reach of DWDM systems, most Web 2.0 and cloud-computing solutions today would not be feasible. From establishing transport connections as short as tens of kilometers to enabling nationwide and transoceanic transport networks, DWDM is the workhorse of all the bit-pipes keeping the data highway alive and expanding.
<urn:uuid:e8385869-c444-4fb9-a53f-d3e9c24a713a>
CC-MAIN-2017-04
http://www.advaoptical.com/en/products/technology/dwdm.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00176-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868959
307
3.03125
3
We’ve all heard statistics about how the information being created and stored is growing at an exponential rate. Many now regularly measure database sizes in petabytes—and the growth is not slowing. In a recent article, Gartner analysts predict that enterprise data will grow by 800 percent over the next five years and that 80 percent or more of that new data will be unstructured. Unstructured data is the most difficult to protect as it can take many forms, such as documents, videos, spreadsheets and other content that workers create. Organizations are centralizing information storage to promote better collaboration, but for many the concept of having all their data “eggs” in one basket raises new security concerns. We also know that much of the information organizations create, gather and use is considered sensitive. Yet despite being bombarded with messages from governments, NGOs, consumers, vendors and the media that sensitive information must be properly governed, many of the individuals who have access to and responsibility for this information usually treat its security as an afterthought. Why is that? With all the statistics and talk about the amount of information we’re generating, the increased security risk of pooling all this information in a central repository and the serious damage a data leak can cause, why is its security not top of mind? On the basis of my experience in the security industry working with many large organizations around the world and with many individuals who own content (or are responsible for content), I have developed a theory. People only feel compelled to secure information when all of the following apply: - They have a personal connection to it - They truly understand the risk that exposure of the information poses - The impact of such an exposure affects them directly Data users rarely secure information because it’s the right thing to do. Although there are exceptions, in general people choose to treat data securely because it protects them, rather than doing it for the good of the organization. This isn’t a pessimistic view; it’s just natural human behavior…at least it is today. Successful security requires that the end user is properly motivated. To that end, we need to examine why organizations are securing data and how to make sure the people in the organization understand why it is to their benefit to ensure that security measures are followed. So what drives people to secure information? Reducing Liability and Protecting Investment In many industries, the exposure of sensitive corporate information can have very negative business impacts. Risks can include - Compliance violations that result in heavy fines - Sanctions and legally imposed restrictions on business - Loss of business reputation (bad PR can lead to lost stock value, lost customers, lost market share and so on) - Loss or theft of intellectual property Since a business owner or a C-level executive looks at the business as a whole and is tasked (and financially rewarded) with ensuring its success, senior managers are better positioned to understand the risks. They are also more invested in the consequences of a breach, as it is they who must face the fines, lawsuits and an angry board of directors. They are also the staff with the most to gain since performance bonuses are often tied to profits. Yet, corporate security measures often fail because the average employee is not as concerned about—or even aware of—these risks to the business. 
Depending on the employee, they very likely don’t even know what information is sensitive or understand how the loss of these items can affect the organization. Unless you both provide a clear way to identify what information is sensitive and effectively motivate employees to secure the data they manage, their ability (and desire) to help protect against data loss will be limited. Exceptions and the Seeds of Change In recent years there has been an overall increase in attention paid to security. In industries where the public good is at risk, we find that average workers are beginning to connect to the data they handle. In government agencies, for example, public safety and mission success can be greatly affected if data is leaked. For national-defense departments, internal security agencies (such as the U.S. Department of Homeland Security) and other government departments, the personnel that deal with the data involved are typically well trained in how to handle this type of sensitive information. Additionally, people often go into these areas of work because they have a desire to serve the public or protect the public’s safety. Another area where you find the average worker making a strong connection to their data is in health care. Unlike organizations that must protect personal identifying or financial information, workers at hospitals, insurance companies and government agencies are finding motivation to protect data beyond just bad press, fines or profit loss. A stolen health identity can have tremendously negative impact on an individual’s safety. Consider that a health identity is used by an imposter who is otherwise without health insurance. Although this situation can be categorized as a kind of financial theft, more seriously the person illegally using the health identity can cause data in the victim’s health record to be modified. For example, if the impostor is blood type A, then that data could be applied to the original health record. If the actual insured individual needs an emergency blood transfusion but is type B-negative, his or her health is at risk. In these cases, typically both the senior administrators and the employees are aware of the very serious consequences a data breach can have. So What Can Be Done to Ensure Information Is Secure? In most organizations, average individuals tend to feel a true need to secure information when they have a personal connection to it, when they truly understand the consequences of exposure or when the impact of such an exposure can affect them directly. The ideal situation in any organization would be to have each and every individual truly care about securely handling sensitive information. The best way to create a security mindset is to overtly involve all employees in the organization’s security strategy. This approach starts with education to convey what information is sensitive and how it should be handled. Employees also must be aware of the very real and very negative impacts of information exposure, both to the business and to them personally. In addition, openly discussing security policy and asking for best-practice feedback from employees will help foster feelings of ownership and responsibility. Once employees have been educated and become involved in the security process, organizations need to overtly foster accountability. 
This accountability can take the form of check-out procedures for highly sensitive documents, or of stamping employees' names on all documents they access so that if a document is handled carelessly, it can easily be tracked back to them.

Despite all the tremendous effort and work that has gone into developing some excellent security technologies, they unfortunately are not 100% effective in keeping data secure. Employees' willingness and ability to enforce secure-information governance procedures is vital to helping organizations defend against both inadvertent and malicious exposure of sensitive information.

About the Author

Antonio Maio, Senior Product Manager with TITUS and Microsoft SharePoint Server MVP, has over 20 years of experience in software development and product management. Antonio's background includes both formal education and industry experience in cryptographic systems, public-key infrastructure and information security. He has previously held senior positions at Corel, Entrust, and several Microsoft partner organizations. His broad knowledge and experience with Microsoft SharePoint extends over 10 years. His work centers particularly on helping government, military and large enterprise customers solve security challenges in SharePoint, allowing them to ensure they are sharing the right information with the right people.
<urn:uuid:a6fbd9d7-ef0c-452c-9d24-0ad02d83f890>
CC-MAIN-2017-04
http://www.datacenterjournal.com/drives-people-protect-information-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952054
1,554
2.5625
3
On August 2, two security experts will deliver the chilling news at Defcon, one of the world's largest hacking conferences, that cars are no less vulnerable to hacking than your tablet, phone, or desktop. There's one difference, though: car hacking poses more dangerous consequences. Anyone who has visited a car show in the past few years knows that the automobile of today and tomorrow is fundamentally a computer system on wheels. Anyone who has read the headlines of technology news sites like this one also knows that where there is technology progress, there are technology risks. In addressing the security audience at the upcoming Defcon event in Las Vegas on Friday, the researchers will reveal how they turned a laptop into a potentially lethal weapon as it took control of a car while someone else, Andy Greenberg of Forbes, was driving. A Toyota Prius and a Ford Escape were used for the experiments. The results of the hack include forcing a Toyota Prius to brake while traveling, making the steering wheel jerk from side to side by hijacking the "park assist" feature, and disabling the brakes of a Ford Escape while it was traveling at a slow speed, according to Charlie Miller, a security researcher with Twitter, and Chris Valasek, director of security intelligence at computer security firm, IOActive. Miller and Valasek are hackers of the "white hat" variety, who uncover software vulnerabilities to get a step ahead of real criminals. These hackers can prevent the flaws from turning into nightmares for public and private organizations and end users. They told the BBC that they would "love for everyone to start having a discussion about this, and for manufacturers to listen and improve the security of cars." The researchers connected the laptop to the vehicles' electronic control units (ECUs). The ECUs are part of a car's network that control actions such as accelerating, braking, and steering, by way of the on-board diagnostics (OBD) port. The sleuths were in the car, connecting to the OBD port ostensibly to overtake the car's functions. However, IT managers could argue that this is not really "hacking" in the sense of someone taking control of a car's system remotely. More Connectivity, More Challenges Still, security experts firmly believe that when it comes to technology, build something new and attackers will figure out how to gain control. While greater use of electronic controls and connectivity means power to enhance transportation safety and efficiency, as with new advancements in driverless cars, the same use brings a new challenge -- staying vigilant about potential vulnerabilities. Wheeling and Dealing Miller and Valasek intend to make their findings known at Defcon -- findings based on months of research funded by the government's research arm, the Defense Advanced Research Projects Agency (DARPA). In the spirit of white hat hacking, which seeks to preserve the integrity of computer information, Miller told the BBC, "The information will be released to everyone. If you're just relying on the fact people aren't talking about the problem to stay safe, you're not really dealing with the problem."
<urn:uuid:9cd82c3a-1849-4fcd-b00b-dc9ed6c2c411>
CC-MAIN-2017-04
http://www.cio-today.com/article/index.php?story_id=13200C6ZGGF0
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00388-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95216
629
2.609375
3
One month after a disputed presidential election sparked widespread unrest in Iran, the country's government has initiated a cyber-crackdown that is challenging hackers across the globe to find new ways to help keep Iranian dissidents connected to the Web. While the government's initial efforts to censor the Internet were blunt and often ineffective, it has started employing more sophisticated tools to thwart dissidents' attempts to communicate with each other and the outside world. Iranian dissidents are not alone in their struggle, however, as several sympathetic hacker groups have been working to keep them online. One such group is NedaNet, whose mission is to "help the Iranian people by setting up networks of proxy severs, anonymizers, and any other appropriate technologies that can enable them to communicate and organize." NedaNet project coordinator Morgan Sennhauser, who has just written a paper detailing the Iranian government's latest efforts to thwart hackers, says that the government's actions have been surprisingly robust and have challenged hackers in ways that the Chinese government's efforts at censorship have not. "China has several gigabytes per second of traffic to deal with and has a lot more international businesses," he says. "They can't be as heavy-handed with their filtration. The Iranians aren't as concerned about that… so they get to use all these fancy toys that, if the Chinese used them, could cripple their economy." With that in mind, this article will look at five of the most commonly-used technologies the Iranian government has been using to stifle dissent, as outlined in Sennhauser's paper. IP Blocking is one of the most basic methods that governments such as Iran use for censorship, as it simply prevents all packets going to or from targeted IP addresses. Sennhauser says that this was how the government banned access to the BBC's Persian news services and how it took down websites critical of the election. But while these sorts of operations are relatively simple to execute, they don't tackle the problem of individual communications between users, especially if the users have set up multi-hop circuits that use multiple servers to create a proxy ring. Traffic Classification (QoS) This is a much more sophisticated method of blocking traffic than IP blocking, as governments can halt any file sent through a certain type of protocol, such as FTP. Because the government knows that FTP transfers are most often sent through TCP port 21, they can simply limit the bandwidth available on that port and throttle transfers. Sennhauser says that this type of traffic shaping practice is the most common one used by governments today, as "it is not too resource intensive and is fairly easy to set up." Shallow Packet Inspection Shallow packet inspection is basically a blunter, broader version of the deep packet inspection (DPI) technique that is used to block packets based on their content. But unlike DPI, which intercepts packets and inspects their fingerprints, headers and payloads, shallow packet inspection makes broad generalities about traffic based solely on checking out the packet header. Although shallow packet inspection can't provide the Iranian government with the same detailed traffic assessments as DPI, Sennhauser says that it is much better at handling volume than DPI. "It's a less refined tool, but it can also deal with a lot more traffic than true DPI can," he explains. "Shallow packet inspection is more judging a book by its cover. 
If a packet says that it's SSL (Secure Sockets Layer) in the header, then a shallow packet inspector takes it at face value." Sennhauser notes, however, that this is a double-edged sword. If a user disguises their SSL packets as FTP packets in the header, the shallow packet inspector won't be able to tell the difference. This is a slightly more refined method of throttling packets than shallow packet inspection, as it looks not only at the packet header but at its length, frequency of transmission and other characteristics to make a rough determination of its content. Sennhauser says the government can use this technique to better classify packets and not throttle traffic sent out by key businesses. "A lot of things don't explicitly say what they are. For example, a lot of VPN traffic is indistinguishable from SSH traffic, which means that it would be throttled if SSH was," he says. "But what if businesses relied on VPN connections? You'd move the system to fingerprinting, where the two are easily distinguishable." Deep Packet Inspection / Packet Content Filtering DPI is the most refined method that the government has for blocking Internet traffic. As mentioned above, deep packet inspectors examine not only a packet's header but also its payload. This gives governments the ability to filter packets at a more surgical level than any of the other techniques discussed so far. "Viewing a packet's contents doesn't tell you much on its own, especially if it's encrypted," he says. "But combining it with the knowledge gained from fingerprinting and shallow packet inspection, it is usually more than enough to figure out what sort of traffic you're looking at." There are downsides to using DPI, of course: it's much more complicated to run and is far more labor-intensive than other traffic-shaping technologies. But on the other hand, Sennhauser says there's no magic bullet for getting around DPI as users can usually only temporarily elude it by "finding flaws in their system." And even this won't help for long, as the government can simply correct their system's flaws once they're discovered. "Once they fix the flaw, you've lost unless you can figure out some real way to circumvent it," Sennhauser notes. Endgame still unclear Sennhauser says that the government has employed these technologies smartly despite being caught flat-footed by the initial furor after the election. Indeed, he thinks the only reason that Iran hasn't yet completely shut down dissidents' communications is that they've had to fight with an army of hackers who tirelessly search for flaws in their system. "It really is an arms race," he says. "They create a problem, we circumvent it, they create another, we get around that one. This continues on until the need to do so is removed. The circumstances which will end the competition aren't clear yet." This story, "Five Technologies Iran is Using to Censor the Web" was originally published by Network World.
<urn:uuid:611fa8bb-865f-4006-985e-22253a4ab2c3>
CC-MAIN-2017-04
http://www.cio.com/article/2426238/security0/five-technologies-iran-is-using-to-censor-the-web.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00508-ip-10-171-10-70.ec2.internal.warc.gz
en
0.970564
1,323
2.6875
3
What is it?

Cascading Style Sheets (CSS) is a style sheet format for HTML documents endorsed by the World Wide Web Consortium (W3C). It is used to define layouts, fonts, colours and other aspects of web document presentation.

When XHTML2 (due in 2007) becomes mainstream, use of stylesheets will effectively become mandatory. This should make life easier for web professionals, particularly those maintaining pages that need updating frequently, since HTML pages themselves will no longer contain presentational tags. CSS enables browser and device independence (provided the browsers are compliant), and the same stylesheets can be used to define presentation in print, audio (specifying speed, pronunciation and emphasis) and Braille. CSS has a simple English-based syntax. It can be used both for XML and HTML documents, and is also used in conjunction with Extensible Stylesheet Language (XSL). CSS and XSL use the same underlying formatting model, and designers can use the same formatting features in both languages.

Where did it originate?

The idea of stylesheets has been around since the 1970s. The W3C began thinking about stylesheet languages in 1994. CSS is based on two proposals: Cascading HTML Stylesheets and Stream-Based Stylesheets. CSS level 1 emerged in 1996, and level 2 (effectively the version used today) arrived in 1997. However, it was not until 2000 that the first browser to provide full CSS1 support became available. No browser has yet fully implemented CSS2, and this compromises the goal of full device independence.

What's it for?

CSS enables presentation to be separated from content. "Cascading" means that priorities are assigned when conflicting definitions of presentation are offered by the original designers, the browser, or users. Users can define presentation to suit their own preferences and needs.

What makes it special?

Separating presentation from content means that all the pages on a site can have their appearance changed consistently just by changing the stylesheet. Documents are smaller and easier to maintain since they do not contain unique presentational instructions.

How difficult is it to master?

CSS may have a simple syntax, but in the real world there are problems with bugs and lack of support - or worse, misrendering of CSS - in different browsers. By one estimate, Internet Explorer 6 does not support about 30% of CSS level 2. This means designers still have to check and test cross-browser compliance, as they do with HTML pages. Some authoring tools help with the complexities of CSS use but, as with browsers, support is patchy.

For Cascading Style Sheets training, the best place to start is the W3C website. Here you can find tutorials, updates on the development of CSS, links to external sources, and details of books and other work by W3C people involved with CSS development, such as Hakon Wium Lie, Bert Bos and Dave Raggett. You should not have to spend much: the W3C recommends you start by downloading a CSS-supporting browser (Opera is the obvious one, but W3C lists many others). If you find the W3C approach too austere, there are plenty of alternative free tutorials online.

Rates of pay

Cascading Style Sheets experience is needed for many web developer/designer jobs, and it can also be required with premium skills such as Adobe/Macromedia ColdFusion. Salaries for web developers start at £25,000.
<urn:uuid:d14b8425-534c-40f6-a529-1c6dd8711d58>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240078330/Using-CSS-to-bring-style-to-web-development-work
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00562-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91562
756
3.75
4
So the other day we were taking our species to task for making us the trailer trash of the universe because of our penchant for killing and befouling our own nests. Yeah, we know, a pleasant way to begin a Friday. Try being the one in six species attempting to enjoy their Friday knowing they may become extinct over the next century thanks to climate change, according to an analysis of data published Thursday in the journal Science. University of Connecticut ecology professor Mark Urban synthesized published data to "estimate a global mean extinction rate and determine which factors contribute the greatest uncertainty to climate change–induced extinction risks." Here's what Urban concluded: Results suggest that extinction risks will accelerate with future global temperatures, threatening up to one in six species under current policies. Extinction risks were highest in South America, Australia, and New Zealand, and risks did not vary by taxonomic group. We urgently need to adopt strategies that limit further climate change if we are to avoid an acceleration of global extinctions. See, you don't have to agree that humans are causing global warming to realize that it can have some disastrous effects. Losing one in six species could wreak havoc with the food chain. Also, hunters, that would mean far fewer animals to shoot. C'mon, NRA, take the lead on this! The Sierra Club could use a heavily armed partner. Get it together before it's too late, humans. This story, "Our planet's slow-motion mass extinction" was originally published by Fritterati.
<urn:uuid:d25ec486-8100-4621-b25d-1d81c1d30618>
CC-MAIN-2017-04
http://www.itnews.com/article/2917619/our-planets-slow-motion-mass-extinction.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947575
315
3.3125
3
Industrial fermentation is the deliberate, ex-situ culturing of microorganisms which yields, after further processing, products useful to humans. The end products of fermentation (after purification and/or downstream processing) find applications in food as well as in many other industries. Commodity chemicals, such as acetic acid, citric acid, and ethanol, are manufactured in bulk by this process. This report considers the production of extracellular metabolites (chemical compounds) and the transformation of products (transformed products of the substrate).

The global market for fermentation products was valued at around $30 billion in 2014 and is expected to grow at 5.4% over the period 2015-2019. Fermentation products find application in the pharmaceutical industry, the alcohol industry, and the food and beverage industry, among many others. Their low cost, better outputs, and natural structure allow fermentation products to be used widely across several industries around the globe. The drive for eco-friendly production systems, the ease and convenience of producing bioactive biomolecules with numerous industrial applications, and rising environmental concerns are some of the major factors driving the fermentation market. Growth in this market will be brought about by robust demand in different product segments and applications. Other segments, including vitamins, xanthan, and antibiotics, are also very attractive in the global fermentation products market. Moreover, an increase in demand for alcohol is one of the major drivers of the market. Fermentation chemicals, especially alcohols, are extensively used for various applications in the F&B industry in the form of spirits, liquors, and cooking wines. Fermented alcohols have become an integral part of various production processes in the chemical industry, leading to their increased demand.

The microorganisms used are bacteria, yeasts and molds. The fermentation environment for these microorganisms changes with the substrate and the kind of molecule to be manufactured; physical factors include dissolved oxygen level, nutrient levels, and temperature. The substrates used for the fermentation process are derivatives of agricultural commodities and are strongly affected by their seasonality. Examples include starch- and carbohydrate-rich agricultural products such as corn, sugarcane, tapioca, sugar beet, and wheat. This dependence is one of the major challenges the industry is facing. High operating costs, high initial investments, research & development, process optimization bottlenecks and the limited availability of raw materials are making it difficult for manufacturers of fermentation products to earn profits.

The report provides a detailed outlook on the fermentation products market by segmenting the overall market by product and by application. By product type, the market is segmented into alcohols, amino acids, organic acids, nutritionals & antibiotics, and others. Alcohols accounted for the largest market share, around 80%. The total revenue generated by the alcohol segment of the fermentation chemicals market was around US$23 billion in 2014. The market is also segmented by application, which includes breweries, wineries and spirits; bakery & confectionery; dairy; pharmaceutical; animal feed; and others. The breweries, wineries and spirits application segment held the largest market share in 2014, at approximately 56%.
The regional segmentation covers current and forecast demand for Asia Pacific, North America, Europe, and the Rest of the World (RoW). Among the various geographies, North America represents the largest and most significant market for the fermentation products industry. In the recent past, growth throughout this region was driven by surging growth in the pharmaceutical market in the U.S. Asia Pacific is the second-largest market for fermentation chemicals, followed by Europe and RoW. Due to the market saturation witnessed in the European and North American markets, the Asia Pacific region is expected to be the key growth market through the end of the forecast period.

Some of the key players are Ajinomoto Company, Archer Daniels Midland Company, Cargill Incorporated, The Dow Chemical Co., AB Enzymes, E. I. du Pont de Nemours and Company, Evonik Industries, Novozymes, Royal DSM NV, Chr. Hansen A/S, Daesang Corporation, Danisco A/S, Jungbunzlauer AG, Vedan International Limited, BASF SE and others.

1.1 Objectives of the Study
1.2 Market Definition and Scope of the Study
1.3 Markets Covered
2 Research Methodology
2.1 Top-Down Approach
2.2 Bottom-Up Approach
3 Executive Summary
4 Market Overview
4.2 Global Market Share Analysis, By Company
4.3 Global Regulatory Environment
4.4 Food & Feed Fermentation Products Market: Drivers and Restraints
5 Global Food & Feed Fermentation Products Market, By Product
5.2 Amino Acids
5.3 Organic Acids
5.4 Nutritionals and Antibiotics
6 Global Food & Feed Fermentation Products Market, By Application
6.2 Breweries and Wineries
6.3 Bakery and Confectionery
6.6 Animal Feed
7 Global Food & Feed Fermentation Products Market, By Geography
7.2 North America
7.5 Latin America
8 Competitive Landscape
8.2 Competitive Trends
8.2.1 Mergers and Acquisitions
8.2.2 Agreements, Partnerships, Collaborations, and Joint Ventures
8.2.3 New Product Launches
8.2.4 Investments, Expansions, and Other Developments
8.2.5 Product Registration/Approval
9 Company Profiles
9.1.2 Product Portfolio
9.1.5 Recent Developments
9.2 AB Enzymes
9.8 Chr. Hansen
9.9 BASF SE
9.10 Vedan International
<urn:uuid:4842461a-33ba-4983-875b-45ad97c73f20>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/food-feed-fermentation-ingredients-reports-3346688128.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905612
1,262
2.625
3
In this post we revisit another old friend that is used quite often in all of our modern networks, the Domain Name System (DNS). DNS is a hierarchical naming system for computers, services, or any other resource connected to the Internet or a private network. The DNS process associates database information with domain names that have been assigned to each of the participating devices. More importantly, DNS translates domain names that are meaningful to people into the numerical binary identifiers associated with networking equipment. This translation is for the purpose of locating and addressing these devices worldwide. An often-used analogy to explain the DNS is that it serves as the “phone book” for the Internet by translating human-friendly language into IP addresses. For example, www.bikeshop.com could translate into an IP address such as 220.127.116.11. A domain name is an identification label that defines a realm of administrative autonomy, authority, or control in the Internet. It is based on the DNS. The DNS makes it possible to assign unique and, in many cases, very descriptive domain names to groups of Internet users in a standard manner. This assignment can be totally independent of each user’s physical location. Because of this, World Wide Web (WWW) hyperlinks and Internet contact information can remain consistent and constant even if the current Internet routing arrangements change, or the participant uses a mobile device. Internet domain names are easier to remember than 18.104.22.168 Internet Protocol Version 4 (IPv4) or 2001:db8:1f70::999:de8:7648:6e8 Internet Protocol Version 6 (IPv6). If a user had to remember the IP addresses of all of the Web sites they visit every day, they would all suffer memory overload. Humans just are not very well adapted at remembering strings of numbers. However, most of us are good at remembering words or names. That is where domain names come in. You probably have hundreds of domain names stored in your head. For example: - www.cisco.com – a typical name - www.yahoo.com – one of the world’s best-known names - www.mit.edu – a popular EDU name - encarta.msn.com – a Web server that does not start with www - www.bbc.co.uk – a name using four parts rather than three - ftp.microsoft.com – an FTP server rather than a Web server The COM, EDU, and UK portions of these domain names are called the top-level domain or first-level domain. There are several hundred top-level domain names, including COM, EDU, GOV, MIL, NET, ORG, and INT, as well as unique two-letter combinations for every country. Characteristically, every organization that maintains a computer network will have at least one server handling DNS queries. That server, which is called a name server, will hold a list of all the IP addresses within its network. In addition, the server will build and hold a cache of IP addresses for recently accessed computers outside the network. Each computer on each network needs to know the location of only one name server. When your computer requests a name to IP address translation, one of three things happens, depending on whether or not the requested IP address is within your local network. - If the requested IP address is registered locally and is located within your organization’s network, your computer will receive a response directly from one of the local name servers listed in your workstation configuration. In this instance, there usually is little or no wait for a response. 
- If the requested IP address is not registered locally and is physically located outside your organization’s network, but someone within your organization has recently requested the same IP address, then the local name server will retrieve the IP address from its cache and return it to your computer. In this case, there should be little or no wait for a response.
- If the requested IP address is not registered locally, and you are the first person to request information about this system in a certain period of time, usually ranging from 12 hours to one week, then the local name server will perform a search on behalf of your workstation. This search may involve querying two or more other name servers at potentially very remote locations. These queries can take anywhere from a second or two up to a minute. The delay will depend on how well connected you are to the remote network and how many intermediate name servers must be contacted. Sometimes, due to the lightweight protocol used for DNS, you may not receive a response. In these cases, your workstation or client software may continue to repeat the query until a response is received. Or, you may receive an error message.

When you use an application such as telnet to connect to another computer, you most likely type in the domain name rather than the IP address of that computer. The telnet application takes the domain name and uses one of the above methods to retrieve its corresponding IP address from the name server. A good analogy is to think of DNS as an electronic telephone book for a computer network. If you know the name of the computer in question, the name server will look up its IP address (the short sketch at the end of this post shows what such a lookup looks like in practice). Within most Internet applications, you will not see the IP address of the computer to which you're connecting. People take advantage of this when they input meaningful URLs and e-mail addresses without having to know how their machine will actually locate them.

In my next post, we will take a more in-depth look at how DNS and its associated applications and protocols actually perform their functions.

Author: David Stahl
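To see the name-to-address translation described above in action, here is a minimal sketch that simply shells out to the operating system's standard nslookup tool from R. R is just one convenient way to run it; a plain command prompt works equally well, and www.cisco.com is only the example name used earlier in this post.

# Ask the local resolver which IP address a human-friendly name maps to.
# system2() runs the OS nslookup utility and captures its output as text.
answer <- system2("nslookup", "www.cisco.com", stdout = TRUE)
cat(answer, sep = "\n")

The output lists the name server that answered the query and the address (or addresses) it returned, which mirrors the lookup steps described above: your workstation asks its configured name server, and the name server either answers from its own records, from its cache, or by querying other name servers on your behalf.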
<urn:uuid:be0a592d-73c9-4438-ac17-025b4fd83cdc>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/04/13/revisiting-the-domain-name-system-dns-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00406-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934181
1,171
3.828125
4
"Hey, Andy, check out this data I have. What's the best chart to show it?" I get asked this question a lot. My answer is always the same: It depends. This is partly because it depends on the audience, the purpose and the type of data you have. What is most important is that your choice of chart is determined by the story within the data just as much as it is by anything else. Just because you have geographical data, it doesn't mean you should make a map. Just because you have a date, it doesn't mean a trend line is the best thing. And just because you want to see the relationship of a part to a whole, it doesn't mean you should use a pie chart. Actually, you should rarely use a pie chart, but that's another, well-documented story! "Wait a minute, Andy. Are you saying I shouldn't use a map when I have geographical data?" No! What I'm saying is that you must explore the data first. You must find the stories or trends in your data, and then find the best articulation of that story: the one that will most resonate with your audience. If you simply choose the chart everyone says you should do for a particular type of data, how can you know you've found the best story, let alone the best way to show it? A famous, if controversial, example is the chart used by Space Shuttle engineers to try and abort the fatal Challenger explosion. What happened? The rocket booster engineers had been aware for some time that the O-rings could fail in cold temperatures. They tried to communicate this to flight planners over time. On the day before the flight itself, one of the engineers’ last attempts to outline the problem, and abort the flight, was made using the chart above. NASA overruled, and the take-off went ahead. Edward Tufte argued that if only they had drawn the chart differently, they would have been able to persuade the flight planners to abort the flight, and thus save the lives of seven astronauts. His argument is a gross oversimplification of the circumstances leading up to the disaster; there was much more to the abort attempt than one single chart. However, the idea that showing data in the best way is vital is key. Let's look at another example: small multiple maps. These are great if you have geographical data over time. Two datasets that apparently fit the bill are U.S. Road Fatalities (data here) and the U.S. Drought Index (data here). Both contain many incredible stories. I've written extensively about my discoveries with the Fatalities data set, and the drought index data became an incredible graphic in The New York Times. Below is a view of the data showing just the period 1999 - 2014. Each map shows the drought index for one month, exposing the trend within a single year period. The small multiple map is amazing for the drought data; I can see national and regional intensity as it changes throughout a year. The U.S. Road Fatalities data contains similar fields (state, date, etc.) so therefore it too should make a great small multiple, right? Wrong. Here it is: Boring isn't it? Why is that? It turns out that there just aren't any interesting monthly variations at the state level. This data does contain incredible geographical and time-related stories, but the small multiple map does not reveal them. Here are three simple steps you can take to get this right: 1. Know the guidelines for working with different data types. I highly recommend books by Stephen Few or Ben Jones for the perfect foundation in this area. 
Once you know the guidelines, you know the starting points for your explorations.

2. Explore your data and iterate quickly. You need to fail fast, and fail often, in order to discover the story in your data.

3. Seek feedback from others. While the story in the chart might resonate with you, there's no guarantee others will get the same message. Get feedback on your charts to be sure your story is clear.
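To make step 2 concrete, here is a minimal sketch of a quick small-multiple exploration in Python with pandas and matplotlib. The file name and column names are hypothetical placeholders rather than anything from the datasets above; the point is only how cheaply you can try a candidate view, judge it, and move on.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per record, with a date and a numeric "value".
df = pd.read_csv("my_data.csv", parse_dates=["date"])
df["month"] = df["date"].dt.month

# One panel per month. Shared axes make the panels comparable at a glance.
fig, axes = plt.subplots(3, 4, figsize=(12, 8), sharex=True, sharey=True)
for month, ax in zip(range(1, 13), axes.flat):
    ax.hist(df.loc[df["month"] == month, "value"], bins=20)
    ax.set_title(f"Month {month}")
fig.tight_layout()
plt.show()

If the twelve panels look nearly identical, as they did for the fatalities map, the small multiple is not where this dataset's story lives -- throw it away and try another view.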
With great power comes not only great responsibility, but often great complexity -- and that sure can be the case with R. The open-source R Project for Statistical Computing offers immense capabilities to investigate, manipulate and analyze data. But because of its sometimes complicated syntax, beginners may find it challenging to improve their skills after learning some basics.

If you're not even at the stage where you feel comfortable doing rudimentary tasks in R, we recommend you head right over to Computerworld's Beginner's Guide to R. But if you've got some basics down and want to take another step in your R skills development -- or just want to see how to do one of these four tasks in R -- please read on.

I've created a sample data set with three years of revenue and profit data from Apple, Google and Microsoft. (The source of the data was the companies themselves; fy means fiscal year.) If you'd like to follow along, you can type (or cut and paste) this into your R terminal window:

fy <- c(2010,2011,2012,2010,2011,2012,2010,2011,2012)
company <- c("Apple","Apple","Apple","Google","Google","Google","Microsoft","Microsoft","Microsoft")
revenue <- c(65225,108249,156508,29321,37905,50175,62484,69943,73723)
profit <- c(14013,25922,41733,8505,9737,10737,18760,23150,16978)
companiesData <- data.frame(fy, company, revenue, profit)

The code above will create a data frame like the one below, stored in a variable named "companiesData" (R adds its own row numbers if you don't include row names):

    fy   company revenue profit
1 2010     Apple   65225  14013
2 2011     Apple  108249  25922
3 2012     Apple  156508  41733
4 2010    Google   29321   8505
5 2011    Google   37905   9737
6 2012    Google   50175  10737
7 2010 Microsoft   62484  18760
8 2011 Microsoft   69943  23150
9 2012 Microsoft   73723  16978

If you run the str() function on the data frame to see its structure, you'll see that the year is being treated as a number and not as a year or factor:

str(companiesData)
'data.frame': 9 obs. of 4 variables:
 $ fy     : num 2010 2011 2012 2010 2011 ...
 $ company: Factor w/ 3 levels "Apple","Google",..: 1 1 1 2 2 2 3 3 3
 $ revenue: num 65225 108249 156508 29321 37905 ...
 $ profit : num 14013 25922 41733 8505 9737 ...

I may want to group my data by year, but I don't think I'm going to be doing specific time-based analysis, so I'll turn the fy column of numbers into a column that contains R categories (called factors) instead of dates with the following command:

companiesData$fy <- factor(companiesData$fy, ordered = TRUE)

Now we're ready to get to work.

One of the easiest tasks to perform in R is adding a new column to a data frame based on one or more other columns. You might want to add up several of your existing columns, find an average or otherwise calculate some "result" from existing data in each row. There are many ways to do this in R. Some will seem overly complicated for this easy task at hand, but for now you'll have to take my word for it that some more complex options can come in handy for advanced users with more robust needs.

Simply create a variable name for the new column and pass in a calculation formula as its value if, for example, you want a new column that's the sum of two existing columns:

dataFrame$newColumn <- dataFrame$oldColumn1 + dataFrame$oldColumn2

As you can probably guess, this creates a new column called "newColumn" with the sum of oldColumn1 + oldColumn2 in each row. For our sample data frame called companiesData, we could add a column for profit margin by dividing profit by revenue and then multiplying by 100:

companiesData$margin <- (companiesData$profit / companiesData$revenue) * 100

Whoa -- that's a lot of decimal places in the new margin column.
We can round that off to just one decimal place with the round() function, which takes the format:

round(number(s) to be rounded, how many decimal places you want)

So, to round the margin column to one decimal place:

companiesData$margin <- round(companiesData$margin, 1)

And you'll get this result:

    fy   company revenue profit margin
1 2010     Apple   65225  14013   21.5
2 2011     Apple  108249  25922   23.9
3 2012     Apple  156508  41733   26.7
4 2010    Google   29321   8505   29.0
5 2011    Google   37905   9737   25.7
6 2012    Google   50175  10737   21.4
7 2010 Microsoft   62484  18760   30.0
8 2011 Microsoft   69943  23150   33.1
9 2012 Microsoft   73723  16978   23.0
The FBI’s Safe Online Surfing (SOS) Internet Challenge, which teaches students how to keep their information safe, avoid online predators, and identify cyberbullying, begins Thursday for the 2016-2017 school year. Since 2012, teachers in grades 3-8 have signed up more than 870,000 students to participate in the national competition.

The website has one island per grade level where students play age-appropriate games to learn about Internet safety. After navigating through the islands, students take a quiz based on the topics they learned. The scores in each school are compiled and shown on a leaderboard for each month. The top schools at the end of each month receive a national FBI-SOS award.

The schools compete in three categories depending on how many students are participating. The categories are Starfish, which includes schools that have 5-50 participants; Stingray, which includes schools that have 51-100 participants; and Shark, which includes schools that have more than 100 participants.

“The information presented in the program has really resonated with our students,” said Bradley Evers, teacher and athletic director at Martin Luther School in Oshkosh, Wis. Evers has used the FBI-SOS challenge with his seventh- and eighth-grade students. The students learned about copyright law and plagiarism along with Internet safety.

“According to my most recent contact with the FBI Public Affairs agent, we remain the only school in Wisconsin that has been recognized as a national winner,” Evers said. “I look forward to using the program at our school in the future.”

Chrissi MacGregor, teacher at North Gwinnet Middle School in Sugar Hill, Ga., has used the program with sixth-, seventh-, and eighth-grade students. “I was looking for an online, fun, informative source to help reinforce Internet safety vocabulary as well as have some real-life scenarios,” MacGregor said. “I think the kids just like something that is interactive.”

MacGregor said she likes the games that help to reinforce the vocabulary that the students learn and the quiz that students take at the end. However, she said the challenge could be updated to make it easier for teachers to see what each student comprehends. “When I need to populate the results of the quiz I get the codes with scores, so then I have to match that up with the student. Not very friendly,” MacGregor said. “I would like a student login. Then when I want to produce a report I can have their data along with their names.”

Anyone can participate in the games on the FBI website, but only students can compete to win awards.

“We couldn’t be more pleased with how teachers and students are responding to the program and how participation is growing in such leaps and bounds,” said Scott McMillion, official for the FBI Criminal Investigative Division’s Violent Crimes Against Children Section. “FBI-SOS is helping to turn our nation’s young people into a more cyber-savvy generation and to protect them from online crime now and in the future.”
The chronology of high performance computing can be divided into "ages" based on the predominant systems architectures for the period. Starting in the late 1970s, vector processors dominated HPC. By the end of the next decade, massively parallel processors were able to make a play for market leader. For the last half of the 1990s, RISC-based SMPs were the leading technology. And finally, clustered x86-based servers captured market priority in the early part of this century.

This architectural path was dictated by the technical and economic effects of Moore's Law. Specifically, the doubling of processor clock speed every 18 to 24 months meant that without doing anything, applications also roughly doubled in speed at the same rate. One effect of this "free ride" was to drive companies attempting to create new HPC architectures from the market. Development cycles for new technology simply could not outpace Moore's Law-driven gains in commodity technology, and product development costs for specialized systems could not compete against products sold to volume markets.

The more general-purpose systems were admittedly not the best architectures for HPC users' problems. However, commodity-component-based computers were inexpensive, could be racked and stacked, and were continually getting faster. In addition, users could attempt to parallelize their applications across multiple compute nodes to get additional speedups. In a recent Intersect360 study, users reported a wide range of scalable applications, with some using over 10,000 cores, but the median number of cores used by a typical HPC application was only 36.

In the mid 2000s, Moore's Law went through a major course correction. While the number of transistors on a chip continued to double on schedule, the ability to increase clock speed hit a practical barrier -- "the power wall." The exponential increase in power required to increase processor cycle times hit practical cost and design limits. The power wall led to clock speeds stabilizing at roughly 3GHz and multiple processor cores being placed on a single chip, with core counts now ranging from 2 to 16. This ended the free ride for HPC users based on ever-faster single-core processors and is forcing them to rewrite applications for parallelism.

In addition to the power wall, the scale-out strategy of adding capacity by simply racking and stacking more compute server nodes caused some users to hit other walls, specifically the computer room wall (or "wall wall"), where facilities issues became a major problem. These include physical space, structural support for high-density configurations, cooling, and getting enough electricity into the building.

The market is currently looking to a combination of four strategies to increase the performance of HPC systems and applications: parallel applications development; adding accelerators to standard commodity compute nodes; developing new purpose-built systems; and waiting for a technology breakthrough.

Parallelism is like the "little girl with the curl": when parallelism is good it is very, very good, and when it is bad it is horrid. Very good parallel applications (aka embarrassingly parallel) fall into such categories as signal processing, Monte Carlo analysis, image rendering, and the TOP500 benchmark. The success of these areas can obscure the difficulty in developing parallel applications in other areas.
Embarrassingly parallel applications have a few characteristics in common:

- The problem can be broken up into a large number of sub-problems.
- These sub-problems are independent of one another; that is, they can be solved in any order and without requiring any data transfer to or from other sub-problems.
- The sub-problems are small enough to be effectively solved on whatever the compute node du jour might be.

(A minimal sketch of such a workload appears after the accelerator challenges listed below.)

When these constraints break down, the programming problem first becomes interesting, then challenging, then maddening, then virtually impossible. The programmer must manage ever more complex data traffic patterns between sub-problems, plus control the order of operations of various tasks, plus attempt to find ways to break larger sub-problems into sub-sub-problems, and so on. If this were easy it would have been done long ago.

Adding accelerators to standard computer architectures is a technique that has been used throughout the history of computer architecture development. Current HPC markets are experimenting with graphics processing units (GPUs) and, to a lesser extent, field programmable gate arrays (FPGAs). GPUs have long been a standard component in desktop computers. GPUs are of interest for several reasons: they are inexpensive commodity components, they have fast independent memories, and they provide significant parallel computational power. FPGAs are standard devices long in use within the electronics industry for quickly developing and fielding specialty chips that are often replaced in products by standard ASICs over time. FPGAs allow HPC users to essentially customize the computer to the requirements of their applications. In addition, they should benefit from Moore's Law advancements over time.

Challenges for accelerator-based systems stem from a single program being run over two different processing devices: one a general-purpose processor with limited speed, and the other an accelerator with high processing speed but limited overall functionality. Challenges fall into three major areas:

- Programming -- Computers can be built to arbitrarily high levels of complexity; however, the average complexity of computer programmers remains a constant. Accelerators add two levels of complexity for applications development: first, writing a single program that is divided between two different processor types, and second, writing a program that can take advantage of the specific characteristics of the accelerator.
- Control and communications -- Performance gains from accelerators can be diminished or lost to compute overhead generated by setting up the problem on the accelerator, moving data between the standard processor and the accelerator, and coordinating the operations of both compute units.
- Data management -- Programming complexity is increased and performance is reduced in cases where the standard processor and accelerator use separate independent memories. Issues for managing data across multiple processors range from determining proper data decomposition, to efficiently moving data in and out of the proper memories, to stalling processes while waiting on data from another memory, to debugging programs where it is unclear which processor has last modified a data item.
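To make the contrast concrete, here is a minimal sketch (mine, not from the article) of an embarrassingly parallel workload in Python: a Monte Carlo estimate of pi. Each worker solves an independent sub-problem, and only the final counts are combined.

import random
from multiprocessing import Pool

def count_hits(n_samples):
    # An independent sub-problem: no communication with other workers.
    rng = random.Random()
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

if __name__ == "__main__":
    n_workers, n_per_worker = 8, 1_000_000
    with Pool(n_workers) as pool:
        hits = pool.map(count_hits, [n_per_worker] * n_workers)
    print("pi is roughly", 4 * sum(hits) / (n_workers * n_per_worker))

The moment the sub-problems stop being independent -- when workers must exchange intermediate results or agree on an ordering -- the simple map-and-sum structure above disappears, and the data-movement and coordination issues just listed take over.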
Many of these issues are associated with parallel computing in general; however, they are still significant for accelerator-based operations, and the close coupling between the processor and the accelerator may require programmers to have a deep understanding of the behavior of the physical hardware components.

Purpose-built systems are systems designed to meet the requirements of HPC workflows. (These systems were initially called supercomputers.) In today's market, new HPC architectures still make use of commodity components such as processor chips, memory chips/DIMMs, accelerators, I/O ports, and so on. However, they introduce novel technologies in such areas as:

- Memory subsystems -- Arguably the most important part of any HPC computer is the memory system. HPC applications tend to stream a few large data sets from storage through memory, into processors, and back again for a normal workflow. In addition, such requirements as sparse matrix calculations lead to requirements for fast access to non-contiguous data elements. The speed at which the data can be moved is the determining factor in the ultimate performance of a large portion, if not the majority, of HPC applications.
- Parallel system interconnects -- Parallel computers essentially address the memory bandwidth problem by creating a logically two-dimensional memory structure. One dimension is within nodes, i.e., between a node's local memory and its local processors; total bandwidth in this case is the sum of all node bandwidths and is very high. The second dimension is the node-to-node interconnect, which is essentially a specialized local area network that is significantly slower in both bandwidth and latency than local node memories. As applications become less embarrassingly parallel, communication over the interconnect increases, and interconnect performance tends to become the limiting factor in overall application performance.
- Packaging -- The speed of computer components, i.e., processors and memories, can be increased by reducing the temperature at which they run. In addition, parallel computing latency issues can be addressed by simply packing nodes closer together, which requires both fitting more wires into a smaller space and removing high amounts of heat from relatively small volumes.

Developing specialized HPC architectures has, up until recently, been limited by the effects of Moore's Law, which has shortened product cycle times for standard products and limited market opportunities for specialized systems. Those HPC architecture efforts that have gone forward have generally received support from government and/or large-corporation R&D funds.

Waiting for a technology breakthrough (or the "then a miracle happens" strategy) is always an alternative; it is also the path of least resistance, and one step short of despair. Today we are looking at such technologies as optical computing, quantum entanglement communications, and quantum computers for potential future breakthroughs. The issue with relying on future technologies is that there is no way to tell, first, whether a technology concept can be turned into a viable product -- there is many a slip between the lab and the loading dock. Second, even if it can be shown that a concept can be productized, it is virtually impossible to predict when the product will actually reach the market. Even products based on well-understood production technologies can badly overrun schedules, sometimes bringing to grief those vendors and users who bet on new products.
The above arguments suggest that the next age of high performance computing could be based on anything from clusters with speed-boost add-ons to a brave new computer based on technologies that may not have been heard of yet. (You can never go wrong with a forecast like that.) That said, I am willing to lay odds on purpose-built computers becoming a major component, if not the defining technology, of the HPC market within the next five years, for two major reasons.

First, there is no "easy" technical solution. Single-thread performance has plateaued; the usefulness of accelerators is dependent on both the parallelism inherent in the application and the connectivity between the accelerator and the rest of the system; and parallelism, while an advantage where it can be found, is not a panacea for computing performance.

Second, the economics of HPC system development have changed. Users cannot simply sit back and wait for a faster CPU, but must make significant investments in either new software, or new architectures, or both. Staying with old economic models will lead to the computational tools defining the science, where work will be restricted to those areas that run well on off-the-shelf computers. The HPC market is at a point where the business climate will support greater levels of innovation at the architectural level, which should lead to new organizing principles for HPC systems. The goal here is to find new approaches that will effectively combine and optimize the various standard components into systems that can continue to grow performance across a broad range of applications.

Of course, we can always wait for a miracle to happen.
Organizations have spent vast sums of money on security systems and, when deployed and operated correctly, they play a key role in safeguarding the organization. However, most systems have one critical dependency: The traffic flowing through them must be readable. If the traffic is encrypted, many systems are almost completely useless, giving the system owner a false sense of security.

Exactly how much of a problem is this? A recent report published by Palo Alto Networks sheds some light. According to the company's Application and Usage Risk Report, 7th Edition, 36% of bandwidth on corporate networks is encrypted. That's a 36-in-100 chance your network-based information security systems will miss the bad stuff. And in reality, the chance is greater than 36%, because the bad guys know where to hide the bad stuff so your tools can't see it. Furthermore, the percentage of traffic that is encrypted is increasing as more applications and websites adopt encrypt-by-default policies.

So what can be done? Clearly, blocking all encrypted traffic at the enterprise edge is not feasible. The answer lies with a technological capability that allows us to peek inside the encrypted traffic: on-the-fly decryption. The remainder of this article is dedicated to explaining how this can be done. I won't be referring to any one vendor's implementation, but rather will attempt to stick to the basics and explain how the technology works. Contrary to what you may be thinking, you do not need a team of mathematicians or NSA-grade supercomputers for the task. On the contrary, it's actually quite simple once you understand the basics.

Encryption 101

When you open the browser on your computer (or smartphone or tablet) and go to a secure website such as your bank, you notice the URL begins with HTTPS (notice the "S"). This indicates that all data being exchanged with the remote Web server is being encrypted by an encryption scheme called PKI (public key infrastructure). It works like this:

- The Web server has a secret encryption key called a private key, which is just a long, seemingly random string of characters stored in a computer file. Only the Web server has access to the private key. It also has a public certificate (which is also just a computer file) that contains another encryption key, called the public key, that is different from the secret key.
- The private key and the public key are mathematically related such that anything encrypted by the public key can only be decrypted by the private key. In other words, the encryption operation cannot be reversed using the public key. (Exactly how the mind-bending math works is beyond both the scope of this article and, frankly, my intelligence.)

When your browser wants to communicate with an encrypted Web server, the following sequence of events occurs (depicted graphically in Figure 1 for those who like pictures):

1. The Web browser asks the Web server for its public certificate (which, remember, contains the public key).
2. The Web server gladly obliges and sends the certificate.
3. The browser then generates a brand new random encryption key (I'll call it the session key because it is unique to this particular browsing session).
4. Using the public encryption key from the server's certificate, the browser encrypts the session key. Remember, only the private key can decrypt something encrypted by the public key.
5. The browser sends the encrypted session key to the Web server.
6. The Web server decrypts the session key using its private key.

The browser and Web server now have a shared secret key that they can use like a decoder ring to establish encrypted communication for the remainder of the browsing session. Make sense? The key to understanding PKI encryption is the relationship between the public and private keys: the public key is used to encrypt, and the private key is used to decrypt. And since the only entity in the whole world that has access to the private key is the server, anything encrypted by the public key can only be decrypted by the Web server.

Now that we have a basic understanding of PKI, let's get back to the subject at hand. To decrypt traffic so your security tools can examine it, we have to get in the middle of the session. How we do this depends on the function or type of traffic you are trying to decrypt. There are two categories:

- Inbound traffic, initiated by a client computer in the outside world, probably on the Internet, and directed toward a server in your organization, perhaps a Web server.
- Outbound traffic, initiated by a client computer within your organization and directed toward a server in the outside world, probably on the Internet.

Let's take a look at each of these and explore how to decrypt each.

Decrypting inbound traffic

Let's say your company has a website hosted on a Web server in your data center. The website uses SSL, so all traffic to and from the website is encrypted. When a client on the Internet accesses the site using a computer, smartphone or tablet, an end-to-end SSL-encrypted connection is established between the client's browser and the Web server, making the connection completely invisible to your organization's network security tools. Decrypting this traffic to make it visible to your security tools requires two steps:

1. Placing a copy of the server's private key on a decryption-capable device.
2. Getting the data, or a copy of the data, to the decryption-capable device.

The first step is easy, but the second step can be accomplished in several ways (depicted in Figure 2):

- Mirror the traffic to the decryption device by using a network tap or some other similar mechanism. The security device uses the server's private key to decrypt the session key sent by the client's browser the same way the actual server does, and then follows the SSL session, decrypting all traffic to and from the server. These devices are reactive in nature; that is to say, they cannot actively block an attack in progress because the security device is only looking at a copy of the traffic.
- Place the decryption device directly inline between the client and the server. In the same way as the scenario above, the decryption device uses the server's private key to decrypt the session key and is then able to follow the SSL session. The benefit of this solution over the one above is the security device's ability to proactively block attacks in progress, due to the fact that the actual connection is being routed through the security device.
- Actually terminate the SSL connection on the security device itself. In this scenario, when a client browser initiates a connection to a website, the connection is actually established with the security device. The security device then establishes its own connection to the real Web server, retrieves the content requested by the client, and sends the content to the client.
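Before moving on, here is a minimal sketch of the private-key operation the first two methods depend on: recovering the session key that the browser wrapped with the server's public key. It uses Python's third-party "cryptography" package, which is my choice for illustration rather than anything named in this article, and real TLS key negotiation is more involved than this.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The server's key pair; the private key never leaves the server
# (or the decryption device holding a copy of it).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # shipped to browsers in the certificate

# The browser generates a random session key and wraps it with the public key.
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = public_key.encrypt(session_key, oaep)

# Whoever holds the private key can unwrap the session key...
assert private_key.decrypt(wrapped, oaep) == session_key
# ...and from then on can read the symmetrically encrypted session.
ciphertext = Fernet(session_key).encrypt(b"the rest of the session")

The essential point survives the simplification: a device that holds a copy of the server's private key can unwrap the session key exactly as the server does, and from there follow the whole conversation.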
In this third scenario, the connection between the security device and the real Web server may itself be encrypted as well (the security device becomes the client from the perspective of the Web server), but it doesn't have to be. If the security device and the real Web server are both located in the same secure facility, for example, there may not be a need to secure the communication between them, thus saving resources on the Web server.

Each of the above methods has its strengths and weaknesses, and which is used in a given architecture depends on many factors. However, they all share one key point: Unencrypted data never leaves the device. As a result, end-to-end data encryption is maintained.

Decrypting outbound traffic

With outbound traffic, the vast majority originates from employees browsing the Internet, checking their email, posting on Facebook, etc. This traffic is potentially damaging to your organization in numerous ways: Users may send proprietary company information over Web-based email, post confidential data on social network sites, etc. Every user in your organization who accesses an encrypted website is a potential point of entry into your network, or point of exit for confidential information.

Decrypting outbound traffic is a little trickier than decrypting inbound traffic. As we just discussed, when decrypting inbound traffic we load the private key for the server onto the decryption device, giving it the ability to decrypt traffic to or from that server. But we can't load the private key for every single Web server on the Internet onto your security device, so another strategy is necessary.

The strategy for decrypting outbound traffic requires a somewhat more detailed understanding of PKI. Look back again at the Encryption 101 section above. I intentionally skipped an important detail right after Step 2 to keep it simple. What really happens after the server sends its certificate is this: before going any further, the browser decides whether or not it trusts the certificate. It makes this decision based on who signed the certificate. An entity that signs certificates is called a certificate authority, and computer browsers come pre-loaded with a list of trusted certificate authorities. Any website certificate signed by a certificate authority that the browser trusts will also be trusted.

We can exploit this behavior to decrypt outbound traffic like this:

1. Make the decryption device a certificate authority, giving it the ability to sign certificates.
2. Configure the users' browsers to trust the new certificate authority.
3. Place the decryption device inline between the users and the Internet.

Do you see where we are going with this? When a user browses to an encrypted website, the decryption device intercepts the request, generates a new certificate on the fly pretending to be the Web server, signs it, and sends it to the user. And because the user's browser is configured to trust certificates signed by the decryption device's certificate authority, it will have no idea that it has had the wool pulled over its eyes and will continue establishing the encrypted connection. The decryption device then establishes its own connection to the actual Web server and transparently proxies all requests between the user and the server.

Not all of the decryption methods described above are appropriate for every scenario. You'll have to analyze your architecture to determine which solution works for your environment.
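To make the on-the-fly certificate generation concrete, here is a hedged sketch, again using Python's "cryptography" package (my choice; the article names no implementation). The CA name, key sizes and one-day validity window are arbitrary illustrative values.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# The decryption device's own CA material, pre-installed in users' browsers.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Corp Decryption CA")])

def forge_certificate(hostname):
    # A fresh key pair the device will use while impersonating the site.
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    cert = (x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_name)              # signed by the internal CA...
            .public_key(leaf_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .sign(ca_key, hashes.SHA256()))    # ...so trusting browsers accept it
    return cert, leaf_key

cert, key = forge_certificate("www.example.com")

A production device needs considerably more than this -- caching forged certificates, copying names from the genuine certificate, guarding the CA key -- but the trust trick is the same.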
Many vendors produce decryption-capable systems and I recommend you take a look at the strengths and weaknesses of several before deciding which to deploy. Be sure you understand the limitations of each and test in a lab or pilot environment before a production deployment. With the right tools in the right place, you can take a peek inside your traffic and see what's lurking inside. Heder, CCIE No. 24788, is a network architect with NES Associates in Alexandria, Va., specializing in large-scale enterprise and data center network design. Heder holds a master's degree with a concentration in network architecture and design, and has a patent filed for an IPv6 technology. He can be reached at email@example.com.
Yes. There are multicast addresses and broadcast addresses. Let's look at multicast first.

Multicast (point-to-multipoint) is a communication pattern in which a source host sends a message to a group of destination hosts. Multicasting can best be compared to a television broadcast. Imagine your local PBS (Public Broadcasting Service) station: it broadcasts one copy of its programming on a certain frequency that can be picked up by any device that is tuned to that channel.

While a network interface must accept traffic destined for its own unicast address and the broadcast address, accepting multicast traffic is an individual decision. It's possible that some networked systems' users may choose to participate in more than one multicast group.

Multicast addresses can be recognized by an odd value in their most significant byte. That occurs when the low-order bit in the first byte of the Ethernet address field is set to 1.

One way to use multicast: an investment firm might run real-time stock quotes on the desktops of all the brokers. If the ticker were run as a unicast application, it would eat up a serious chunk of the available bandwidth. A broadcast version, on the other hand, would create hardware interrupts on every computer on that subnet, not just the machines used by the brokers. By using multicasting, however, the stock quotes are delivered to the right users with a minimum of network overhead. A decrease in network load is one big advantage of multicasting. There are many other applications for this type of data transmission, including audio and video teleconferencing, distance learning, and data transfer to a large number of hosts.

The set of hosts listening on a specific IP multicast address is called a host group. Members can receive traffic to their unicast address and to their group address. Host group membership is dynamic, and hosts can join and leave the group at any time. There are no limitations on the size or location of a host group. RFC#1700 contains a list of multicast Ethernet addresses. An updated version of the same list is also available at http://www.iana.org/assignments/ethernet-numbers.

Broadcast packets go to every device on the local media segment. Broadcast addresses contain all binary 1s, which protocol analyzers display as hexadecimal FF FF FF FF FF FF. Broadcast packets should be used sparingly. By definition, they must be received and processed by every device on the local network segment, which causes each device to stop what it was doing, pass the packet up to the higher-level protocols, examine the contents of the packet to see if a response is needed, then resume its prior processing. While processing a single broadcast may not degrade the performance of an individual device, high volumes of broadcast traffic significantly impact performance. By default, routers block these broadcasts rather than forwarding them, which keeps MAC broadcasts local. In defense of the broadcast, it provides a good way to search a local network for a particular device, even one that has only recently joined the network, or to advertise available resources.

As a reminder, when you look at a frame header in a protocol analyzer, you will see the target or destination MAC address first, followed by the source MAC address. While the destination may be unicast, multicast, or broadcast, the source should only be unicast. If it's anything other than unicast, it has probably been artificially created.
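To see that "individual decision" to accept multicast traffic in action, here is a minimal Python sketch of a host joining an IP multicast group. The group address and port are arbitrary picks from the administratively scoped range, not anything from the text above.

import socket
import struct

GROUP, PORT = "239.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP asks the IP stack (and NIC) to start accepting
# traffic for this group address -- the join decision described above.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)   # blocks until a group datagram arrives
print(sender, data)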
Geographic information systems (GIS) have a wide range of applicability even outside the traditional mapping, spatial-analysis and data-storage uses. This article discusses an application of ArcView, a desktop GIS package, in the research and treatment of melanoma. With the exception of epidemiology, the field of medicine is not one to which GIS has often been applied. However, aspects of the field lend themselves extremely well to spatial analysis and therefore to the use of spatial tools.

Cutaneous lymphoscintigraphy is the nuclear-medicine technique that allows the pattern of lymphatic drainage to be recorded from any part of the skin. This is achieved by injection of the radiopharmaceutical Technetium-99m-antimony sulfide colloid (99mTc-Sb2S3) around the excision biopsy site or primary lesion. Dynamic images of the tracer moving through the lymphatic channels are recorded using a digital gamma camera and are computer enhanced to ensure that even the faintest channels are detected. Once the channels have been defined, their course can be marked on the skin of the patient if the surgeon plans to excise these channels. In most patients, however, these are not removed surgically, and treatment involves wide local excision of the primary excision biopsy site, followed by lymph node dissection if the risk of nodal metastases is high.

In addition to the channels, interval nodes (nodes along the channel but not in the lymph node fields) and sentinel nodes (the nodes to which the lesion directly drains) are also detected and their location marked on the skin. The depth of the sentinel nodes beneath the skin surface is measured on the scan. This technique allows draining node fields to be accurately sampled for the presence of metastases with a minimum of surgery. It also ensures that all relevant material is removed, even if the path taken through the system or the draining node fields themselves are different from those predicted by traditional methods.

The results of this technique, which was performed on over 1400 patients, were recorded in a spreadsheet and then transferred onto schematic maps of the body using ArcView. The images were used to examine some of the commonly held perceptions about the node fields to which lesions on various parts of the body drain. Traditional medical concepts of lymph node drainage paths date back to 1843, when Sappey injected cadavers with mercury to trace the paths taken through the lymphatic system from various points on the body (Sappey 1843, cited in Uren, "Lymphoscintigraphy in High Risk Melanoma of the Trunk: Predicting Draining Node Groups, Defining Lymphatic Channels and Locating the Sentinel Nodes," Journal of Nuclear Medicine, Vol. 34, 1993). Lymphoscintigraphy has shown these concepts to be incorrect in a large proportion of patients. Mapping the primary lesions and their draining node fields allows the researcher to quantify and analyze the divergence of the paths actually taken from those traditionally predicted. Plots of all primary lesions draining to a particular node field can be used to establish the general pattern of distribution. With the addition of color, it can be shown that the lines traditionally used to delineate watershed boundaries in the lymphatic system are incorrect.

Melanoma depends almost exclusively on surgical treatment. After a biopsy has revealed malignant melanoma, a wide local excision of the area surrounding the primary lesion site is performed.
Because the thickness of the melanoma is the most important prognostic factor, the margins for this excision increase with the thickness of the original melanoma. The thicker the melanoma, the worse the prognosis and the more likely the presence of metastases in the draining node fields. In patients with intermediate-thickness melanoma (between 1 mm and 4 mm), about 30 percent will have micrometastases in the draining lymph nodes. Elective lymph-node dissection in this group of patients would thus involve unnecessary surgery in about 70 percent of the patients. By accurately locating the sentinel lymph nodes, lymphoscintigraphy allows each relevant node field to be sampled with minimal surgery. If the sentinel lymph node is normal, then the node field is normal and no further nodal surgery is required. If the sentinel node is positive for metastases, then a radical node dissection is performed in that node field.

Once the location of the sentinel nodes has been determined, it is necessary to communicate this information to the surgeon and record it for use in research. The application described in this article satisfies both of these requirements. An application was developed that allows the physician carrying out the lymphoscintigraphy to record the details of the patient and the results of the investigation. The location of the primary lesion is recorded as a map number and x and y coordinates on that map. The draining node fields are recorded as codes showing the depth and number of sentinel nodes. For example, 1.5la2 indicates that the left axilla field contains two sentinel nodes at a depth of 1.5 cm. The name and sex of the patient, as well as the number of draining channels and the maximum separation between the channels, are also recorded. There is provision for noting details of surgery performed immediately or as follow-up.

The primary storage of the data is currently in a Filemaker Pro V3 database file. This communicates with ArcView via DDE (in Windows) or AppleTalk (on the final target system) and passes a DBF file of the data for display in a report. The script processes the data and prints out a report based on it. This report can be sent to the surgeon and is also stored in the patient's file.

Working With Scripts

Several scripts were developed to produce the reports. The fact that these could be developed on a PC and seamlessly ported for use on a Macintosh highlighted the excellent cross-platform capabilities of ArcView. Much of the inspiration for the code came from Amir H. Razavi (ArcView Developer's Guide, OnWord Press, 1995), although it must be said that neither his book nor the ArcView online help are particularly well organized, and both leave a great deal to be desired in the area of indexing and cross-referencing.

The main script for the application is passed a list containing the data to be displayed -- in particular, the map number, x and y coordinates, node-field codes, comments and patient details. The map number is used to retrieve the appropriate image from a list, and the image is added to a view. The site of the primary lesion is then marked. Next, the node-field codes are parsed into a list and each element passed to the node-field definition script for display. A dictionary containing a count of the number of times each node field is referenced is established for use by the node-field definition script. The layout production script follows, and finally the layout is printed and the view and layout removed from the project.
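As an aside, the node-field code format is compact enough to parse mechanically. Here is a hedged Python sketch of splitting a code such as 1.5la2 into its parts; the application itself was written in Avenue, and this particular helper is my invention, not something from the project.

import re

CODE = re.compile(r"^(?P<depth>\d+(?:\.\d+)?)(?P<field>[a-z]+)(?P<count>\d+)$")

def parse_node_field(code):
    m = CODE.match(code)
    if m is None:
        raise ValueError("unrecognised node-field code: " + code)
    return {"depth_cm": float(m.group("depth")),   # depth below the skin
            "field": m.group("field"),             # e.g. "la" = left axilla
            "count": int(m.group("count"))}        # number of sentinel nodes

print(parse_node_field("1.5la2"))
# {'depth_cm': 1.5, 'field': 'la', 'count': 2}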
If a layout cannot be printed, it is renamed (and not removed) to allow subsequent manual printing. The script returns the name of the layout as an indication to the calling program of the process's success.

The node-field definition script highlights the node-field area on the schematic diagram and labels it with the code, indicating the number of nodes and their depth below the surface. Since nodes exist at different depths and each depth is treated as a separate call to this script, the dictionary defined in the main script is used to ensure that the highlight shape is drawn only once and that text is not overwritten. The location and dimensions for node-field highlights are stored in the node-field locations table.

A layout object is created in the main script and passed to the layout production script for population and formatting. First, the page properties are established. This revealed what appeared to be a bug: the units of measure for the layout could not be reset from the default inches. The title and patient details are placed at the top of the page using the auxiliary DrawText script. The borders are placed on the layout using the auxiliary DrawBox script. Finally, the view is placed within the border and sized so that it is at the largest scale possible.

Auxiliary Scripts, Tables and Images

Two auxiliary scripts were developed. The first, DrawText, is passed a graphics object, a location and the text that is to be placed on the graphics layer. Optionally, it can be passed point size, text angle, font name and font style; if the optional arguments are not passed, defaults are used. The second script, DrawBox, is passed a graphics object and a rectangle object with the location and dimensions of the box.

Wherever possible, tables are used to store information for the application. Avenue's table-handling capabilities are limited, but as the tables are not large, performance is not a major concern. The principal table is the node-field locations table, which stores the location and dimension of the highlight box to be drawn for any given node field on any given map.

There are six images stored as part of the application. These are the schematic diagrams for the posterior and anterior torso, posterior and anterior lower limbs, and left and right lateral head. When the report is produced, the appropriate image is placed on the view as a theme.

Inter-Application Communication (IAC) relies on the ability of ArcView to accept information from another application, either through DDE (on the Windows platform) or AppleTalk. This is an excellent example of the use of complementary applications running under an operating system, and shows how data can be shared between tools, allowing developers to choose the appropriate tools for the job rather than simply the monolithic application that meets most of the criteria. In addition to IAC, the other feature of ArcView, Filemaker Pro and Excel (and several other applications) that makes development easier is the ability to develop on one platform (in this case Windows) and to implement the solution on another (in this case the Macintosh). The ArcView code needs no modification, and the Excel code only requires small changes to make the IAC calls appropriate to the operating system.

Geocoding Melanoma Data

Over 1400 patients have undergone lymphoscintigraphy in this study.
In each case, the draining node fields and the number and location of the sentinel nodes and any interval nodes were recorded in a Filemaker Pro database file. The challenge inherent in using a GIS to map the data was that the locations were descriptive. Only a small sketch of the location had been recorded, and the images produced by the lymphoscintigraphy did not have any common reference points marked to allow normalization and automatic geocoding of locations.

Six schematic diagrams representing the surface of the body were drawn and a grid marked on them. Each case was manually reviewed, and a map number and x and y coordinates were recorded for each primary lesion site. These coordinates were then randomized within the level of precision of the grid to avoid clustering at grid points. The schematic diagrams of the body were scanned as TIFF files. These images were geocoded using the ARC/INFO commands REGISTER and RECTIFY. The maximum RMS error in the registration process was 0.38 mm, less than 10 percent of the grid size, which was considered sufficient for the purposes of this study.

This application has potential for far greater automation and data storage, perhaps using ArcView itself. Adopting the ArcView approach would facilitate the data-entry process in that the physician could geocode the data, not onto a hard-copy sketch with subsequent transfer, but with a single click of the mouse on the appropriate image. The other data-entry components could also be more user friendly than they currently are.

Further research into the spatial distribution of melanoma in the study group is also possible. Examination of the channels taken through the lymphatic system may reveal some correlation between them and one or more of the independent variables recorded. Integration of the images obtained from the digital gamma camera with the application is also a long-term goal. If some reasonable common reference points could be introduced to the lymphoscintigraphic images, a registration and rectification process could produce a normalized representation with far more detail than is currently stored in the system. Techniques such as neural-net pattern recognition could also be used to classify channels and nodes.

For more information, contact Andrew Coates BE, School of Civil Engineering, University of New South Wales, Sydney, Australia; or Dr. Roger F. Uren FRACP DDU, Nuclear Medicine and Diagnostic Ultrasound, Missenden Medical Centre.
As we discussed in a previous post, an Uninterruptible Power Supply (UPS) is an electrical apparatus that provides emergency power to a load when the input power source fails. It does this by means of one or more attached batteries and associated electronic control circuitry. A UPS differs from an auxiliary or emergency power system in that it provides instantaneous (or nearly so) protection from input power interruptions. However, the on-battery runtime of most UPS systems is relatively short, with 5-15 minutes being typical for smaller units. Although this period seems relatively short, it is sufficient to allow time to bring an auxiliary power source online or to properly shut down the protected equipment.

UPS units are divided into categories based on the type, and in some cases the number, of power-related problems they address. The general categories of modern UPS systems are online, line-interactive, and standby.

The online UPS is ideal for environments where electrical isolation is necessary or for equipment that is very sensitive to power fluctuations. Although this technology was once reserved for very large installations of 10 kW or more, advances in technology have permitted it to become a common consumer device, usually supplying 500 watts or less. The online UPS is generally more expensive than other technologies but may be necessary when the power environment is "noisy," such as in industrial settings, or for larger equipment loads like data centers.

An online UPS system takes incoming AC power, converts it to DC through a rectifier and feeds it to both the battery bank and an output DC-to-AC inverter. The UPS conditions the DC and converts it back to AC through the inverter. In an online UPS, the batteries are always connected to the inverter, so no power transfer switches are necessary. When AC input power is lost, the input rectifier simply drops out of the circuit and power is provided by the batteries through the inverter. When power is restored, the rectifier resumes carrying most of the load and begins charging the batteries. This process means the UPS is always online, since there is no delay to switch to battery. Another significant advantage of the online UPS is its ability to provide an electrical firewall between the incoming utility AC power and sensitive electronic equipment. This type of UPS provides a layer of insulation from power quality problems and allows control of output voltage and frequency regardless of input voltage and frequency.

Standby UPS technology is the simplest and least expensive UPS design. In this type of UPS, the primary power source is line power directly from the utility, and the secondary power source is the battery. It is called a standby UPS because the battery and DC-to-AC inverter are normally not supplying power to the equipment. The battery charger converts AC line power to DC through a rectifier to charge the battery; the battery and inverter are waiting "on standby" until they are needed. When the incoming AC voltage fails or falls below a predetermined level, the UPS turns on the DC-to-AC inverter circuitry, which is powered from the internal bank of batteries, and mechanically switches the connected equipment onto its inverter output. When line power is restored, the UPS switches back. The switchover time can be as long as 25 milliseconds, depending on the amount of time it takes the standby UPS to detect the lost utility voltage.
The line-interactive UPS uses a different design from either of the other UPS types. In this type of unit, the separate battery charger, DC-to-AC inverter, and source selection switch have all been replaced by a combination inverter/converter, which both charges the battery and converts the battery's DC voltage to AC for the output. The AC line power is still the primary power source, and the battery is the secondary power source. When line power is operating, the inverter/converter charges the battery; when the input power fails, the unit operates in reverse. (An online UPS, by comparison, uses a double-conversion method of accepting AC input: the UPS rectifies the AC input to DC for passing through to the battery or battery strings, then inverts the DC back to 120V/240V AC for powering the protected equipment.)

The line-interactive type of UPS is able to tolerate both continuous under-voltage brownouts and overvoltage surges without consuming the limited reserve battery power. Instead, it compensates for these conditions by auto-selecting different power taps on an autotransformer. The process of changing the autotransformer tap can cause a very brief output power disruption, as the unit briefly switches to battery before changing the selected power tap.

As you continue your study of UPS systems, you will find new technologies becoming available, such as a fuel cell UPS that has been developed using hydrogen and a fuel cell as a power source. This new technology has the potential to provide long run times in a small space. As a CCNA, you may also be called upon to repair and maintain a rotary UPS, which uses the inertia of a high-mass spinning flywheel to provide short-term ride-through in the event of power loss. The flywheel also acts as a buffer against power spikes and sags, since such short-term power events are not able to appreciably affect the rotational speed of the high-mass flywheel. It is also one of the oldest designs, predating vacuum tubes and integrated circuits.

Author: David Stahl
Data centers must be cooled, but designing a new facility or changing an existing one to maximize cooling efficiency can be a mammoth task. Any number of design-strategy or floor-layout variations can affect the results, changing efficiency, creating hot spots or altering the amount of infrastructure required for the design. Computational fluid dynamics (CFD) offers a method of evaluating new designs or alterations to existing designs before they are implemented. As with any simulation tool, however, maximizing the benefits requires a careful balance of budget, effort and understanding of the process.

What Is CFD?

Air, like water, is a fluid that moves in response to temperature and pressure gradients. Obstructions of various types (walls, floors and ceilings, server racks and so on) change the way the air flows, and as the number and complexity of these obstructions increases, so does the complexity of the analysis of how air flows. Add to that heat sinks (air conditioners, for instance) and heat sources (servers) and you have a very complicated problem that you can't analyze using pencil and paper.

If your data center relies mainly on air to cool IT equipment, you've probably struggled with "hot spots" and other airflow problems as you try to ensure that your servers don't overheat. In an existing data center, identifying regions that are insufficiently cooled is as simple as installing temperature sensors and monitoring the data. But what if you want to know what cooling problems might crop up in a new design or a reconfiguration of an existing data center? That's where CFD comes in. CFD takes a computer model of a data center and, using numerical techniques, calculates a steady-state airflow "map" of the facility, revealing locations that are insufficiently cooled or that are contributing to cooling inefficiency. Thus, CFD provides insights into challenges that might be posed by a new data center design, a rearrangement of a facility's layout or another major change.

So, When Do We Start?

On its face, CFD sounds like an indubitably helpful tool, but as you might expect, there are qualifications. Obtaining a CFD analysis -- whether on your own computers using purchased software or by way of an analysis firm -- costs money, and the potential benefits should not be outweighed by the investment. Here are some essential considerations if you're looking into CFD as a method of refining your data center design.

- Outsourcing or do-it-yourself -- The first thing you must decide is whether you want to tackle the analysis yourself or hire a professional consultancy to do it for you. CFD involves a steep learning curve that may cost you heavily in terms of both time and results. Numerical techniques are fraught with pitfalls that can greatly affect the accuracy of results, and unless you plan to use CFD extensively (i.e., more than just once), you are likely better served by hiring a professional to do the work for you.
- Budget -- Depending on whether you've decided on a consultancy or an in-house analysis, your costs will vary. Obviously, a consultancy will charge you for its expertise as well as to amortize its own costs (software, computers and so on), and you receive only results for a single design. (Some consultancies will work with you to some extent to optimize the design or to otherwise perform several analyses, but ultimately, you are paying to get just one set of results.)
For an in-house analysis, the obvious cost is the software, which can vary depending on the number of licenses and so on. Furthermore, you can either buy the software outright or, in some cases, use a cloud-based “pay-as-you-go” model. Less obvious costs are computer usage and employee time—the learning curve has its own price!
- Return on investment—CFD is not a solution for everybody. If you just want to find hot spots in your existing data center, you don’t need CFD; you need data center monitoring equipment. CFD, like any computational (and, particularly, numerical) technique, can suffer from any number of inaccuracies, and it is best used as a predictive tool when actual measurements are impractical. Be sure that investing in a CFD analysis—whether in-house or through a consultancy—has enough potential returns to justify the costs.

Some Pitfalls of CFD

If you’re convinced CFD is right for your data center project, you must be aware of several pitfalls that can crop up in a CFD analysis. First, you’re dealing with a numerical technique, not an analytical one (i.e., CFD doesn’t find the “equation of your data center”—it breaks your data center into tiny pieces and analyzes each piece in light of the pieces around it). Furthermore, the accuracy of the results is limited by the accuracy of the model you supply: every unfilled cable hole or obstruction in an aisle can affect airflow, so the more details you provide, the better your chances of receiving helpful results.

Second, if you’re using the software on your own, you shouldn’t just trust everything it tells you. One of your most important tasks is to gain some insight into how the algorithm calculates results, as this will help you interpret the analysis correctly. (Avoiding such “extra work” is one of the advantages of hiring a professional CFD firm.) Details of your model that seem perfectly reasonable to you could cause the software to return strange results, simply because of quirks in the algorithm.

Third, CFD attempts to solve an extremely complex problem in a manner that simply cannot take every variable into account, so regardless of whether you perform the analysis in house or hire a consultancy, you should take the results with a grain of salt. This is not to say that CFD analysis is inaccurate, but not every detail of the analysis will correspond with the reality of your newly built facility.

As data centers consume more energy (which is converted to heat that must be removed from the facility) and energy prices continue to rise, the potential return on a CFD analysis increases. Long-term energy savings can greatly outweigh the costs of the analysis, whether through a consultancy or via the in-house route. For instance, if you can identify potential hot spots and take steps to eliminate them in your design, you can reduce the required cooling infrastructure, saving capital cost. Furthermore, your facility can operate at a higher temperature, saving operating costs. And when your data center operates at a higher temperature, you increase your opportunities for free cooling, which bypasses traditional mechanical cooling methods and saves even more cost.

The key to successfully using CFD is understanding the purpose and limitations of the technology as well as your own company’s needs.

Photo courtesy of Rob Bulmahn
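To make the “tiny pieces” idea above concrete, here is a deliberately oversimplified sketch of the kind of cell-by-cell, iterative calculation a numerical solver performs. It is only an illustration, written in Python (with numpy assumed installed); the grid size, boundary temperatures and convergence threshold are made-up values, and a real CFD package solves for airflow, pressure and turbulence in three dimensions, not just heat diffusion on a flat grid.

```python
import numpy as np

# Toy steady-state temperature solver (Jacobi iteration) on a 2-D grid.
# Illustrative only: a real CFD model covers airflow, pressure and turbulence.
cells_x, cells_y = 20, 10
temp = np.full((cells_y, cells_x), 22.0)   # start the whole "room" at 22 C
temp[:, 0] = 18.0                          # assumed cold-aisle supply along one wall
temp[:, -1] = 35.0                         # assumed hot exhaust along the opposite wall

for _ in range(5000):
    new = temp.copy()
    # each interior cell is updated from the average of its four neighbours
    new[1:-1, 1:-1] = 0.25 * (temp[1:-1, :-2] + temp[1:-1, 2:] +
                              temp[:-2, 1:-1] + temp[2:, 1:-1])
    new[:, 0], new[:, -1] = 18.0, 35.0     # re-apply the fixed boundary conditions
    if np.max(np.abs(new - temp)) < 1e-4:  # stop once the field settles (steady state)
        break
    temp = new

interior = temp[1:-1, 1:-1]
iy, ix = np.unravel_index(interior.argmax(), interior.shape)
print(f"Hottest interior cell: ({iy + 1}, {ix + 1}) at about {interior.max():.1f} C")
```

Even this toy model shows why the accuracy of the inputs matters: move one boundary value and the whole temperature map shifts, which is exactly the sensitivity the pitfalls above describe.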
If you are in any way associated with information technology (IT), business, scientific, media and entertainment computing or related areas, you may have heard big data mentioned. Big data has been a popular buzzword bingo topic and term for a couple of years now. Big data is being used to describe new and emerging applications and information-processing tools and techniques, along with existing ones.

I routinely hear from different people or groups trying to define what is or is not big data, and all too often those definitions are based on a particular product, technology, service or application focus. Thus it should be no surprise that those trying to police what is or is not big data will often do so based on their interests, sphere of influence, knowledge or experience, and what their jobs depend on.

Not long ago while out traveling I ran into a person who told me that big data is new data that did not exist just a few years ago. It turns out this person was involved in geology, so I was surprised that somebody in that field was not aware of, or working with, geophysical, mapping, seismic and other legacy or traditional big data. He was simply basing his statements on what he knew, had heard or had been told about: the sphere of influence around a particular technology, tool or approach. FWIW, if you have not figured it out already: as with cloud, virtualization and other enabling technologies, I tend to take a pragmatic approach rather than latching onto a particular bandwagon (for or against), per se.

Not surprisingly, there is confusion and debate about what is or is not big data, including whether it applies only to new data or to existing and older data as well. As with any new technology, technique or buzzword bingo theme, various parties will try to shape the definition to align with their needs, goals and preferences. This is the case with big data, where you can routinely find proponents of Hadoop and MapReduce positioning big data as aligning with the capabilities and usage scenarios of those technologies for business and other forms of analytics.

Not surprisingly, the granddaddy of all business analytics, data science and statistical number crunching is the Statistical Analysis Software (SAS) from the SAS Institute. If these types of technology solutions and their peers define what big data is, then SAS (not to be confused with Serial Attached SCSI, which can be found on the back end of big data storage solutions) can be considered first-generation big data analytics, or Big Data 1.0 (BD1 😉 ). That means Hadoop MapReduce is Big Data 2.0 (BD2 😉 😉 ), if you like, or dislike for that matter. The funny thing about some fans, proponents or surrogates of BD2 is that they may have heard of BD1 tools like SAS while having only a limited understanding of what they are or how they are or can be used.

When I worked in IT as a performance and capacity planning analyst focused on servers, storage, network hardware, software and applications, I used SAS to crunch various streams of event, activity and other data from diverse sources. This involved correlating data and running various analytic algorithms on it to determine response times, availability, usage and other things in support of modeling, forecasting, tuning and troubleshooting. Hmm, does that sound like first-generation big data analytics, or like Data Center Infrastructure Management (DCIM) and IT Service Management (ITSM), to anybody?
Now to be fair, comparing SAS, SPSS or any number of other BD1-generation tools to Hadoop and MapReduce or other BD2 second-generation tools is like comparing apples to oranges, or apples to pears. Let's move on, as there is much more to big data than simply a focus on SAS or Hadoop.

Another type of big data is the information generated, processed, stored and used by applications that results in large files, data sets or objects. Large files, objects or data sets include low-resolution and high-definition photos, videos, audio, security and surveillance footage, geophysical mapping and seismic exploration data, among others. Then there are data warehouses, where transactional data from databases gets moved for analysis in systems such as those from Oracle, Teradata, Vertica or FX, among others. Some of those tools even play (or work) in both the traditional (BD1) and the new or emerging (BD2) worlds.

This is where some interesting discussions, debates or disagreements can occur between those who latch onto, or want to keep, big data associated with being something new, usually focused around their preferred tool or technology. What results from these types of debates or disagreements is a missed opportunity for organizations to realize that they might already be doing or using a form of big data and thus already have a familiarity and comfort zone with it. By building on that familiarity instead of seeing big data as something new, different, hyped or full of FUD (or BS), an organization can become comfortable with the term big data. Often, after taking a step back and looking at big data beyond the hype or FUD, the reaction is along the lines of: oh yeah, now we get it; sure, we are already doing something like that, so let's take a look at some of the new tools and techniques to see how we can extend what we are doing.

Likewise, many organizations are already doing big bandwidth and may not realize it, thinking that is only what media and entertainment, government, technical or scientific computing, high-performance computing or high-productivity computing (HPC) shops do. I'm assuming that some of the big data and big bandwidth pundits will disagree; however, if in your environment you are doing many large backups, archives or content distributions, or copying large amounts of data for different purposes, then you are consuming big bandwidth and need big bandwidth solutions. Yes, I know, that's apples to oranges and perhaps stretching the limits of what is or can be called big bandwidth based on somebody's definition, taxonomy or preference. Hopefully you get the point that there is diversity across various environments as well as across types of data and applications, technologies, tools and techniques.

What about little data then? I often say that if big data is getting all the marketing dollars to generate industry adoption, then little data is generating all the revenue (and profit or margin) dollars through customer deployment. While tools and technologies related to Hadoop (or "Haydoop" if you are from HDS) are getting industry adoption attention (e.g., marketing dollars being spent), revenues from customer deployment are growing. Where big data revenues are strongest for most vendors today is in solutions for hosting, storing, managing and protecting big files and big objects. These include scale-out NAS solutions for large unstructured data, like those from Amplidata, Cray, Dell, Data Direct Networks (DDN), EMC (e.g., Isilon), HP X9000 (IBRIX), IBM SONAS, NetApp, Oracle and Xyratex, among others.
Then there are flexible converged compute-storage platforms optimized for analytics and running different software tools, such as those from EMC (Greenplum), IBM (Netezza), NetApp (via partnerships) or Oracle, among others, that can be used for different purposes in addition to supporting Hadoop and MapReduce.

If little data is databases and things not generally lumped into the big data bucket, and if you think or perceive big data to be only Hadoop MapReduce-based data, then does that mean all the large unstructured non-little data is very big data, or VBD? Of course the virtualization folks might want to corner the V for Virtual Big Data, if they have not already. In that case, instead of Very Big Data, how about very, very Big Data (vvBD)? How about Ultra-Large Big Data (ULBD), or High-Revenue Big Data (HRBD)? Granted, the HR might cause some to think it stands for Health Records or Human Resources, both of which, by the way, leverage different forms of big data regardless of what you see or think big data is.

Does that then mean we should really be calling videos, audio, PACS, seismic, security surveillance video and related data VBD? Would this further confuse the market or the industry, or help elevate it to a grander status in terms of size (data file or object capacity, bandwidth, market size and application usage, market revenue and so forth)? Do we need various industry consortiums, lobbyists or trade groups to go off and create models, taxonomies, standards and dictionaries based on their constituents' needs, and would those align with the needs of customers? After all, there are big dollars flowing around big data industry adoption (marketing).

What does this all mean? Is big data BS? First, let me be clear: big data is not BS. However, there is a lot of marketing BS from some, along with hype and FUD, adding to the confusion and chaos, and perhaps even to missed opportunities. Keep in mind that in chaos and confusion there can be opportunity for some. IMHO, big data is real. There are different variations, use cases and types of products, technologies and services that fall under the big data umbrella. That does not mean everything can or should fall under the big data umbrella, as there is also little data.

What this all means is that there are different types of applications across various industries that have big and little data, virtual and very big data, from videos, photos, images, audio, documents and more. Big data is a big buzzword bingo term these days, with vendor marketing big dollars being applied, so no surprise there is buzz, hype, FUD and more. Ok, nuff said, for now.
What You'll Learn
- Authenticate and authorize users.
- Assign server and database roles.
- Authorize users to access resources.
- Protect data with encryption and auditing.
- Recovery models and backup strategies.
- Back up SQL Server databases.
- Restore SQL Server databases.
- Automate database management.
- Configure security for the SQL Server Agent.
- Manage alerts and notifications.
- Manage SQL Server using PowerShell.
- Trace access to SQL Server.
- Monitor a SQL Server infrastructure.
- Troubleshoot a SQL Server infrastructure.
- Import and export data.

Prerequisites
- Basic knowledge of the Microsoft Windows operating system and its core functionality.
- Working knowledge of Transact-SQL.
- Working knowledge of relational databases.
- Some experience with database design.

Who Needs To Attend
- Individuals who administer and maintain SQL Server databases, performing database administration and maintenance as their primary area of responsibility, or who work in environments where databases play a key role in their primary job.
- Individuals who develop applications that deliver content from SQL Server databases.
In the years since Google set out to "Organize the world's information and make it universally accessible and useful," technology has changed the world by making unlimited amounts of information available in seconds from anywhere. This has transformed almost every industry, from music to news to medicine to retail and everything in between. But somehow education, the one endeavor that is almost entirely about conveying information – from the minds of adults to those of our children – is still debating technology's value.

High-tech companies have been telling me for years that this failure to create a tech- and science-savvy workforce will result in a world where Americans aren't prepared to work in IT. And we are seeing that now, as companies reach out to other countries for the engineers they need.

So the debate that has raged lately over the massive effort – driven by adoption of the Common Core of Standards – by some school districts to get a tablet into the hands of every student makes me, frankly, angry. Every misstep has people crying for a rethink of why students need tablets. Every dollar spent has critics asking why we would give kids expensive toys. I don't mind a bit of healthy debate. But let's debate which tablet to buy, not whether we should be investing in one-to-one device initiatives in our classrooms.

Geeks argue for the democratization of learning, customized instruction, teachers as coaches rather than lecturers, and data collection. (Full disclosure: I am entirely in the geek camp on this.) Luddites argue that technology is expensive, isolating, distracting, and not the point.

The problem with all of this debate, of course, is that we have no choice. Technology is here. And everyone knows it. Only fifteen percent of American adults don't use the internet, and only five percent of teens don't. Today I asked a class of fifth graders how many had a smartphone or tablet at home, and 100 percent of them raised their hands. Expecting students to show up and pay attention to a less powerful means of distributing information – a person with a bit of chalk, perhaps? – is silly.

This is a generation that learned to parse a reliable source while learning to tie their shoes. (In fact, toddlers are probably learning to tie their shoes by asking Google to show them.) Teachers who ignore the unignorable fact that their students know how to get the answer to any question in seconds, from a device that's probably in their pocket, will fail by proving themselves not to be a reliable source.

In a thoroughly researched article in the New York Times last year, for example, the author, a highly educated college teacher, repeatedly revealed his own demonization of technology in an in-depth story about a particular school's tablet program. Instead of discussing the tablets as a tool, a means to access the greatest library of information humans have ever created (the Internet), he cast the question as "Is technology a savior or a demon?" Then he promptly admitted a personal bias toward demon by opening with the statement that he would "strenuously oppose any plan by [my middle school children's] school to add so much screen time to my children's days."

In an era where "screen time" can include EdX.org and the Khan Academy, it seems backward (at least to this geek) for an educator to lump educational tablets (the school in question was using Amplify tablets) in with SpongeBob mashups and LOLcat videos.
Again, while talking about the students using their tablets while eating lunch – keep in mind they are studying, as opposed to, say, having a food fight or picking on the fat kid – he laments, "The raptly tender way they touched, pinched and stroked the screens awoke in me an urge to yank the gadgets and junk food out of their hands and lead them to a library or a good climbing tree." If he were describing any of the other tools in the classroom – pens, paper, books borrowed from the library, the chalkboard – the statement would seem absurd, even to him.

But we can't really mess around over this debate anymore. "I think it is critical that we get technology into schools and that we really do focus on those 21st century skills," agrees John Galvin, general manager of Intel Education. He and I recently chatted about Intel's research into what classrooms need from a tablet. "We are trying to raise a generation that can process information and to use technology to communicate."

So, sure, let's discuss which tablet, which apps, whether we should control or harness social media in the classroom, how to integrate technology into the curriculum, and everything else about the "how" of getting every student a device that can access the Internet while in school. But we need to stop arguing about the "if" of kids having technology in the classroom.
The United States is ill prepared to tackle oil spills in the Arctic, whether from drilling or from cargo and cruise ships traveling through newly passable waterways once clogged with ice, the National Research Council reported Wednesday.

Extreme weather conditions and sparse infrastructure in the Chukchi and Beaufort seas — more than 1,000 miles from the nearest deep-water port — would complicate any broad emergency response. Ice in those remote oceans can trap pockets of oil, locking it beyond the reach of conventional cleanup equipment and preventing it from naturally breaking down over time.

"The lack of infrastructure in the Arctic would be a significant liability in the event of a large oil spill," scientists said in a 198-page National Research Council report requested by the American Petroleum Institute, the Coast Guard, the federal Bureau of Safety and Environmental Enforcement and five other entities. "It is unlikely that responders could quickly react to an oil spill unless there were improved port and air access, stronger supply chains and increased capacity to handle equipment, supplies and personnel."

The report offers more than a dozen recommendations for what regulators, the oil industry and other stakeholders need to do to boost their ability to tackle a crude oil or fuel spill at the top of the globe, as retreating sea ice spurs new energy development and ship traffic there. A chief recommendation: more research across the board, from meteorological studies to investigations of how oil spill cleanup methods would work in the Arctic.

The National Research Council — an arm of the National Academy of Sciences and the National Academy of Engineering — insisted the United States needs "a comprehensive, collaborative, long-term Arctic oil spill research and development program." The council encouraged controlled releases of oil in Arctic waters — a practice generally barred under U.S. environmental laws — to evaluate response strategies. Although the federal government and oil industry are conducting lab studies that attempt to replicate Arctic conditions, the report suggests there is no substitute for the real thing and said the studies could be done without environmental harm.

Most information on responding to oil spills has been developed in temperate conditions, such as in the Gulf of Mexico, so it may not translate to the Arctic, where cold water and sea ice may limit the amount of oil that naturally disperses and evaporates. Because no response methods are completely effective or risk-free, the industry and government need a broad "oil spill response toolbox," the report said.

Pre-tested and pre-positioned equipment — along with plans for using it — would be critical to ensuring a swift response in an oil spill, the group said. When Shell was drilling for oil in the Chukchi and Beaufort seas in 2012, it stashed containment booms and other equipment along Alaska's northern coast and had a fleet of spill response vessels floating nearby. Shell has since suspended operations there following a series of marine mishaps before and after the drilling projects.

Arctic cleanup options include chemical dispersants that can break down oil, either applied at the surface or near a wellhead, but the researchers said more work is needed to understand their effectiveness and long-term effects in the Arctic.
While burning thick patches of floating oil is a viable spill countermeasure in the Arctic — potentially aided by ice that helps corral the crude — that approach fails when ice drifts apart and oil spreads too thin to ignite.

Using booms, vessels and skimmers to concentrate oil slicks also may be difficult in the region, where there are few disposal sites for the contaminated equipment, sparse port facilities for the vessels and limited airlift capabilities. The National Research Council says this kind of mechanical recovery is probably best for small spills in pack ice, but it would likely be inefficient for a large offshore spill in the U.S. Arctic.

The group also suggested the U.S. Coast Guard's relatively small presence in the U.S. Arctic is not sufficient, and that it needs icebreaking capability, more vessels for responding to emergency situations, and eventually aircraft support facilities that can work year-round. The report cited other resources now lacking in the Arctic, including equipment to detect, monitor and model the flow of oil on and under ice, and real-time monitoring of vessel traffic in the U.S. Arctic.

A politically tricky recommendation is for the Coast Guard to expand an existing bilateral pact with Russia to allow joint Arctic spill exercises.

Chris Krenz, a Juneau, Alaska-based senior scientist with the conservation group Oceana, said the report offers "a sobering look at our lack of preparedness" and suggests that the U.S. should reconsider whether to allow offshore drilling in the region. But oil industry representatives said the council rightly calls for more research and resources to combat spills there. The American Petroleum Institute was "encouraged by the report's emphasis on the need for a full toolbox of spill response technologies," spokesman Carlton Carroll said.

The report was the product of a 14-member committee of the National Research Council, organized by the National Academy of Sciences, with representatives drawn from academia, the oil industry and Alaska.

©2014 the Houston Chronicle
Most malware is severely crippled if it can’t contact the C&C servers from which it receives its instructions and updates, so malware authors are constantly coming up with new ways to thwart firewalls, intrusion prevention systems and local gateways blocking such communication.

The latest innovation in this particular “field” has been spotted by Symantec researcher Takashi Katsuki, who recently discovered a Trojan that uses Sender Policy Framework (SPF) to keep the connection between malware and C&C servers alive and well. Ironically, SPF is an email validation system designed to spot email spoofing and, therefore, spam.

“SPF consists of a domain name server (DNS) request and response. If a sender’s DNS server is set up to use SPF, the DNS response contains the SPF in a text (TXT) record,” explains Katsuki. “The point for the malware author is that domains or IP addresses in SPF can be obtained from a DNS request and this DNS request doesn’t need to be requested from a computer directly. Usually the local DNS server is used as a DNS cache server. The DNS cache server can send a request instead of the computer.”

By sending out a DNS request to the attackers’ DNS server with a generated domain that has a .com or .net TLD, the Trojan – dubbed Spachanel – gets back a response with an SPF record that contains malicious domains or IP addresses. The researcher speculates that this is done because the attacker wants to hide the communication in legitimate DNS queries.

“If this malware connects to the attacker’s server by a higher port number using the original protocol, it may be filtered by a gateway or local firewall, or blocked by an intrusion prevention system (IPS). In some cases, specific domains are blocked by a local DNS server, but this malware generates a domain that is rarely filtered,” he explains. “Furthermore, DNS requests are generally speaking not sent directly. Usually there is a DNS cache server in the network or in the ISP network, which makes it difficult for a firewall to filter it. Therefore, this is the attacker’s attempt to maintain a solid connection between the malware and the attacker’s server.”
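To see why this traffic blends in so well, here is a small, hypothetical sketch of an ordinary SPF lookup in Python. It uses the third-party dnspython package (version 2.x, which provides dns.resolver.resolve) and a placeholder domain; it simply fetches and prints a TXT record, which is exactly the kind of innocuous-looking query the Trojan hides behind.

```python
import dns.resolver  # third-party "dnspython" package, 2.x assumed installed

domain = "example.com"  # placeholder; any domain that publishes SPF will do
try:
    # SPF policies are published as TXT records
    for record in dns.resolver.resolve(domain, "TXT"):
        text = b"".join(record.strings).decode("utf-8", errors="replace")
        if text.lower().startswith("v=spf1"):
            print(f"SPF record for {domain}: {text}")
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print(f"No TXT/SPF record found for {domain}")
```

Because a request like this normally goes through the local caching DNS server rather than straight to the attacker’s host, blocking it at the perimeter is as awkward as the article describes.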
Students living in the far reaches of North America are plugging into the Internet and overcoming their rural isolation by joining students thousands of miles away for online class projects. These students are learning about other cultures, communicating with scientists and participating in research projects with other schools through a program coordinated by Canadian federal and provincial governments.

The Ministry of Industry Canada is investing the equivalent of about U.S. $38 million in a series of network access programs, including School Net. School Net is a program designed to "facilitate getting all schools and libraries online by 1998," said Elise Boisjoly, School Net director. The federal role is mainly to act as a catalyst for provincial governments, with private-sector partnerships, to hook public institutions to the Internet and provide some educational content, she said. "We work with teachers at the grass roots to build their own information on the Internet," she explained.

The Canadian educational system is similar to that in the U.S., with most school authority residing at the provincial and local level. But there are some things the federal government can do, like creating frameworks and helping bring the private sector into partnerships with school districts. The ministry works with telecommunications companies to wire schools and secure reduced rates for Internet access. This is especially significant in huge rural areas of the country because telecommunications costs can be prohibitive. Internet service in much of the Northwest Territories is a long-distance call away, making online connections much more expensive than in an urban area.

"Cost is a major issue," said Boisjoly. "There have been a lot of [budget] cuts in the provinces," she said, adding that many school districts don't have the money to connect to and use the Internet.

Another issue is getting teachers comfortable with the technology. Teachers, like much of society, are relative newcomers to the Internet. To teach the teachers, School Net has trained some instructors to go to schools and mentor teachers in using technology with students. These support teachers report to a School Net board -- made up of teachers, administrators, higher education representatives and federal officials -- which monitors and guides the national program.

The federal government also began providing a Web page last September to give teachers ideas and guidance on using the Internet as a classroom tool. "There is a lot of information on the Internet, and some of it is valuable," said Boisjoly. "The key is finding that information rapidly."

School Net continues to compile a database of information resources for teachers to use in class, such as addresses of educational sites. There are also sections on the database for teachers to conference on various subjects. School Net is bilingual, with teachers communicating in their choice of French or English. Canada is working with France on agreements to get more content for the country's Francophone population, as most Internet information is in English. "We try to build a database to be searched and to share ideas with other teachers," Boisjoly said. "It could become a database for teachers on any subject they want."

While an important part of School Net is getting school buildings wired, an equally important part is bringing the Internet into the curriculum, rather than allowing online access to be a diversion from traditional lessons.
Teachers are coming up with some interesting projects to fill modern educational needs. One example is a weekly treasure hunt used by students to practice research skills on the Internet. Students search the Internet for answers to questions, such as who discovered X-rays and when. Staff members at a Quebec high school post new questions each week and acknowledge students who correctly answered the previous question.

Getting familiar and comfortable with online data is an important skill for students to learn for the future, when information will be an important part of the economy, Boisjoly explained. "They need proper skills to be ready for this market," she said.

Most provinces and territories are implementing school Internet programs and investing millions of dollars to get buildings wired, acquire equipment and develop content for classroom use. British Columbia, for example, is investing about U.S. $75 million over five years to help get schools online. The goal is to connect all provincial schools by the turn of the century. The province has a Web site, which includes resources for supporting classroom lessons and Internet projects. A listserv for educators is also available, as well as ideas for Internet projects in the classroom.

Newfoundland's STEM~Net is another example of provincial-based programs for schools. The relatively rural province, located above Quebec and Nova Scotia along the North Atlantic, is aggressively getting classrooms involved in online projects and programs. Most of the schools have a local access provider, but one-third use long-distance tolls to connect to the Internet. "Until we conquer the cost problem, we probably won't grow more than what we are now," said Harvey Weir, executive director of STEM~Net.

But what the province does have going is impressive. All 450 schools have Internet access from at least one computer, and there were over 200 classes with some 6,000 students participating in 60 projects last spring. About 90 percent of the 10,000 educators in the province have Internet accounts, and half of them go online on a regular basis, Weir said.

STEM~Net was started in 1993 as a platform for educators to communicate, and the Web site has gradually expanded to include curriculum. The next step will be to make sections of the site for student use. "We started with teachers because they have to provide the leadership," Weir said.

Newfoundland also has a number of distance learning programs using the Internet. With many of Newfoundland's schools in remote areas, and some one-room schoolhouses, a very limited number of classes can be offered. A teacher uses the Internet to send lessons and assignments to students spread across the province.

The Internet is a great tool for these remote schools because it eliminates the barrier of distance from information on various subjects that, until recently, was available only in library collections affordable to schools closer to urban areas. The one-room schoolhouse collections just aren't large enough to allow students to do in-depth research with current material.

And it's not just students. A problem for a teacher at a remote island school or a location where everything is flown in is "professional isolation," where the latest educational developments rarely arrive. The Internet connection in Newfoundland is helping to alleviate this condition. "Teachers couldn't do research in school," Weir said. "Now they can use the Internet to add to the textbook."
Teachers also use the network for continuing education and professional development, and there are even some working on master's degrees over the network. Training teachers to use the Internet is important because they then pass research and critical thinking skills on to their students. Because there is so much data available on the Internet, a researcher has to be able to narrow it down to the material relevant to the research at hand, and be able to identify reliable and verifiable sources, something that computers have yet to teach. "This is the teacher's craft to be able to do this," Weir said.
One day, someone will build an electronic brain that will power an android, such that it behaves like a person. Before that can happen, we must first understand how the human brain operates. Plenty of attempts have been made to digitally simulate a human brain, but one valiant effort is being made by a team of computational neuroscientists at the University of Waterloo. Their Project Spaun is a digital brain comprising 2.5 million virtual neurons. That's just a fraction of the 100 billion that would be required to represent the real version, but enough to support some interesting visual tasks.

To create artificial intelligence, there are two general schools of thought: top-down and bottom-up. Bottom-up, such as what IBM is building with the Blue Brain Project, starts from scratch and attempts to build up digital neurons and synapses through training and machine learning, hoping to eventually reach the level of humans. Top-down essentially tries to program everything there is to be programmed, including ill-fated attempts at programming every single common-sense rule.

The Spaun team characterizes its approach as top-down, although its approach, the researchers say, is slightly different. They took what they know from the human brain, focusing on the fact that certain regions control certain functions. As the project's architecture diagram illustrates, the mechanical brain sends data through various functional units in order to better understand its visual input. Team member Xuan Choo explains: "We took a different approach than most neural network models out there. Rather than have a gigantic network which is trained, we infer the functionality of the different parts of the model from behavioural data (i.e. we look at a part of the brain, take a guess at what it does, and hook it up to other parts of the brain)."

For example, when Spaun sees a number and is asked to replicate it, the machine processes the visual input and calls on its memory while selecting an action, much like a person relies on memory to make decisions based on sensory input. "The basic run-down," said Choo, "of how it works is: It gets visual input, processes said visual input, and based off the visual input, decides what to do with it. It could put it in memory, or change it in some way, or move the information from one part of the brain to another, and so forth."

It is important to note that the Waterloo team is not trying to build a computerized brain, but rather is using this simulation to better understand how the human version works. The brain processes information through transient electrical pulses along neurons, whose electrical readings spike when they are being used. The mechanical brain does that as well, with one key exception: there is less regularity in the human brain's energy level when the neurons spike.

With that being said, the Waterloo team is learning a significant amount about human cognition. "A big part of what comes out of our work is finding that some algorithms are very easy to implement in neurons, and other algorithms are not," said Terrence Stewart.

In order to expand Spaun's cognitive capacity, the team has been training it to recognize and reproduce handwritten numbers. Like a grade school student, it has moved from recognizing and writing the numbers to pattern recognition — for example, finding the next number in a series such as 1, 3, 5, ... "Only the visual system in Spaun is trained, and that is so that it could categorize the handwritten digits," noted Choo.
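As a rough illustration of the spiking behaviour described above, here is a generic leaky integrate-and-fire neuron, the kind of simplified unit that large-scale spiking models are often assembled from. This is a textbook-style sketch in Python with arbitrary constants, not code from the Spaun project, whose neuron models are considerably more sophisticated.

```python
# A single leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, is pushed up by an input current, and emits a spike when it crosses
# a threshold. All constants here are arbitrary illustration values.
dt, duration = 0.001, 0.5          # 1 ms time step, 0.5 s of simulated time
tau, v_rest, v_threshold, v_reset = 0.02, 0.0, 1.0, 0.0
input_current = 1.2                # constant drive, just strong enough to spike

v = v_rest
spike_times = []
for step in range(int(duration / dt)):
    v += (dt / tau) * (-(v - v_rest) + input_current)
    if v >= v_threshold:           # threshold crossed: record a spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {duration} s; first spike at {spike_times[0]:.3f} s")
```

String millions of such units together and wire them into the functional regions Choo describes, and you get some intuition for both the appeal and the computational cost of whole-brain simulation.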
There are a couple of competing viewpoints on what the next challenge will be for the Waterloo team. Choo thinks the next major advancement will be to get Spaun to operate in real time. Stewart, who is confident that such operation will be a reality in about two years, is looking beyond that. “The next goals are all going to be to add more parts to this brain,” said Stewart. “There are tons of other parts that we haven’t got in there at all yet (especially long-term memory).” Either way, for neuroscience enthusiasts and Singularity-seekers alike, Spaun is worth keeping an eye on.
The registry is one of the most vital components of the Microsoft Windows operating system. Put simply, it is a complex database containing virtually all system, software, hardware and user settings. Almost every piece of software keeps its data in the registry. It is so important that Windows would not even start without it.

Most dangerous parasites, especially browser hijackers, trojans, spyware and adware threats, modify the Windows registry. Parasites add various registry entries, create new keys and change default values in an attempt to register the pest in the system and alter essential settings of the Windows operating system and installed software. Most such changes are made for a malicious purpose. On our site you can find parasite registry entries that need to be manually removed. However, editing the registry is a difficult task that only advanced users and professionals can accomplish safely.

Most anti-spyware programs will remove malicious registry entries for you. However, even the most powerful spyware removers might be unable to get rid of certain threats. The reason is simple: security software vendors cannot examine each recent pest immediately after it goes wild, and new pests appear almost every day. Anti-spyware tools rely on spyware definition databases. A few advanced products can find unknown suspicious files, but unknown harmful registry entries often stay unrecognized. This is why you need to know how to manually edit the Windows registry. But you have to be extremely careful. One inappropriate value, mistyped registry key or other small mistake in the registry may damage installed software and even corrupt the entire system! Do not modify the registry if there is no real need to.

The following guide thoroughly explains how to manually remove malicious registry entries. Back up the Windows registry before editing it, so that you can quickly restore it later if something goes wrong. Please read the article Backing up and restoring the Windows registry to learn more. Remember, this step is very important!

Launch the Registry Editor. Press the Start button and then click Run. Type regedit into the Open: field. Then click on the OK button.
Image 1. Open the Registry Editor

This program consists of two panes. Use the left pane (on Image 2 it is designated by the red box) to navigate to a certain registry key. In the right pane (it is in the blue box) you will see the values that belong to the selected key.
Image 2. The Registry Editor

To edit a value, right-click on it and select the Modify option (on Image 3 it is designated by the red box) from the menu that appears.
Image 3. Select the value

You can also double-click on the value with your left mouse button or use the Edit menu (on Image 3 it is in the blue box). Type the preferred value in the window that appears and click OK. The same action can be performed with any other value or registry key.
Image 4. Edit the value

Perform the same sequence of actions as just described in order to delete a value or a registry key. However, this time you will have to select the Delete option (on Image 5 it is in the red box) instead of Modify.
Image 5. Delete the value

To add a new registry key or a new value, click on the Edit menu, select New and choose a type for the entry.
Image 6. Add the new value

You can export any key or value from the registry to a file. Right-click on the object and select Export (on Image 7 it is in the red box).
Image 7. Export the value

Enter a file name.
Export registry files should have the .reg extension.
Image 8. Export registry entries to a file

You can also import a certain value or key. Click on the File menu and select Import. Then choose the file containing the objects you want to import.
Image 9. Import registry entries

If something goes wrong after modifying the registry, you can restore the registry from a backup. Read the article Backing up and restoring the Windows registry to learn more.

If you do not know how to perform the described actions, are not certain why you have to do some of the steps, or find the above guide too difficult, feel free to try our recommended automatic spyware removers.
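For completeness, the same read and edit operations can also be performed programmatically. The sketch below uses Python's standard winreg module (Windows only); the key and value names are just well-known, harmless examples, and the same warning applies: back up first and change nothing you are not sure about.

```python
import winreg  # standard library, available only on Windows

# Reading a value -- the programmatic equivalent of browsing to a key in the
# Registry Editor and looking at the right-hand pane.
path = r"SOFTWARE\Microsoft\Windows\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
    value, value_type = winreg.QueryValueEx(key, "ProgramFilesDir")
    print("ProgramFilesDir =", value)

# Writing works the same way but is left commented out on purpose; only run
# something like this against a test key you created yourself.
# with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\MyTestKey") as key:
#     winreg.SetValueEx(key, "ExampleValue", 0, winreg.REG_SZ, "some data")
```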
We seldom take the time to consider the limitations brought on by the bit-based computers we use today, where the state of any bit can be either 1 or 0, on or off. After all, so far we've been able to double computing power about every 18 months. That's a nice rate of improvement, but ultimately unsustainable without a paradigm shift. The most promising shift will be to quantum computing.

Quantum computing is based on "qubits," which allow a bit to be BOTH 1 and 0 at once. As this 2000 article from MSNBC.COM attests, "As you string together more and more qubits, the power grows exponentially. If you link two qubits together, you can work with four values at the same time. Three qubits can work with eight values, and so on. If you can get up to 40 qubits, you could work with more than a trillion values simultaneously."

So far, quantum computing exists only in the lab. And, from what leaks out, it sure seems to be developing slowly. However, it's very possible that our children will work completely outside the limitations of on/off bits and detectable processing times for most computing requests.
USB has become a very popular interface over the years. Plenty of devices have been developed that provide a USB interface, such as cameras, phones and music players; the list is endless. While this is useful technology, some of these devices have also brought with them new threats to our computer systems that need to be mitigated.

The obvious threat everyone thinks about when USB security is mentioned is USB storage devices. These devices are small, portable, inconspicuous enough to hide easily, and able to store a lot of data. The obvious threat here comes in the form of a disgruntled employee copying your source code or client list before he leaves the company; however, it is not just that. A USB drive can also introduce viruses, Trojans and even illegal software or media onto your network, and potentially go even further than that.

When U3 developed a system where a small partition on a USB storage drive is automatically treated by Windows as a CD-ROM drive, so that programs on the USB drive can run automatically, it opened the door to a new attack vector. The USB Switchblade required only that the USB drive be inserted into the target computer; it would then automatically and silently steal information about the computer, password hashes and any other data. That was the first generation, and then came the USB Hacksaw.

The problem with Switchblade was that you had limited time for the attack to be successful. It is easy to convince a victim to plug USB storage into their system, for example by asking them to print a file from the USB drive or to look at something stored on it, such as a report or pictures. While this is happening, the USB drive silently copies items, but due to the time constraints the Switchblade attack could only copy files that resided in specific directories. There was no time for the attacking program to search all hard drives. Hacksaw fixed that.

The first time the malicious USB drive containing the Hacksaw attack is plugged in, it installs a small program. This program runs automatically and searches the hard drive for interesting files such as documents and passwords. The attacker can then safely remove the drive within seconds. He then stays patient for an hour or two while the program on the victim's computer gathers the files into its own folder. Once enough time passes, the attacker goes back and inserts the USB drive again. This second time, the previously installed program copies all the data it has found since it was first activated back to the USB drive. This was only the first version; future implementations had software that simply sent the found files remotely by email, and technically the same method can be used to deploy any malware, including rootkits and backdoors.

In order to protect against USB drive copying and Switchblade attacks, the best option would be to disable USB access if it is not required. If USB is required, then software can be used that controls and restricts access to only those devices that are allowed, based on device class or even device serial number.

USB Key Loggers

Key loggers have always been a threat to any business. They can be used to compromise passwords, steal source code, intelligence, credit card numbers and confidential company secrets. With software key loggers, some antivirus solutions and other anti-malware software can be used to detect them. However, it is not so easy with USB key loggers.
These insidious devices connect between the keyboard and the computer's USB port, and they record every key press. They can store more than a year's worth of key presses. Once installed they can be hard to detect, since they're small and people do not generally go looking behind computers to check that nothing was added. However, the risk is great. If a malicious employee wants to steal company information, in most cases it would be trivial for him to install such a device, and once he does, it is very unlikely that he will get caught.

Mitigating this can be quite tricky. The best approach would be to ensure physical security on the machines by, for example, locking offices when people leave. Alternatively, if the data is sensitive enough, it might be possible to protect against such devices by installing a USB monitoring tool that blocks any device, including input devices, and simply whitelisting the keyboard and mouse you want to use. This would be quite labor intensive to do on each machine, but it's probably the only sure way to protect against this device. Even this might not be 100% effective, since future key loggers might simply clone the keyboard serial as well.

USB Wireless Devices

Wireless devices are another obvious threat to the company. Risks here are both incidental and intentional. Incidental threats can come from employees hooking up a wireless access point to the network so that they can use their laptops wirelessly, with the intention of actually increasing productivity. Intentional threats include cases where malicious people hook up an access point with the intention of getting illegal access from outside the building, where it is safer to operate. There are documented cases where this type of attack was carried out.

Back in 2004 a post office in Haifa, Israel, was broken into. After an inventory found nothing missing, the matter was dropped on the belief that the thieves got scared and ran before taking anything. However, a few days later large unauthorized transactions were detected, and another inspection found a rogue access point. The thieves hadn't run away with nothing; they had in fact planted a wireless access point to give them access from outside whenever they wanted.

Cases such as this – the adding of unauthorized devices to the network – clearly indicate the need to keep a hardware inventory. There are solutions that periodically scan the network and alert the administrator when new hardware is added or even removed. This allows an administrator to detect the change quickly and act in a timely manner.

In all cases the hardest part for an attacker is delivery. Does an attacker only carry out inside jobs, or does he need to break into a company to get physical access to his target? Obviously there are a lot of options for someone determined, especially if this is a targeted attack. What if the attacker pays a janitor to hook up a USB drive to the highest-ranking manager's machine and then retrieve it the next day? During that day it would have copied countless credentials, and if it key-logged as well it would also have copied a lot of confidential information. If the attacker is particularly daring, he might also open a backdoor on that machine; however, even if the attacker doesn't go that far, it is a good bet that the whole operation can be completed without anyone ever discovering it. If the attacker feels that bribing people is too risky, there are other options.
Purposely dropping a compromised USB drive using the Hacksaw method in front of the company premises, or during a conference that employees are attending, might see one of them pick it up, and there's a good chance that the first thing they will do is insert it in their computer to see what it contains. At this stage it could gather data and send it by email, or open a backdoor. The possibilities are endless and frightening.

There are various risks to a computer system through an attack targeting USB. A lot of these attacks are ideal for inside jobs, but a clever attacker might find other ways to target a specific company or even a specific person. The threat posed by USB should not be underestimated. Physical security and USB management software can be a great help in protecting an organization from such attacks.
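As a footnote to the advice earlier in the article about disabling USB storage where it is not needed: one commonly cited, low-tech option on Windows is to disable the USB mass-storage driver by setting the USBSTOR service's Start value to 4. The sketch below (Python's winreg module, administrator rights required) is an illustration of that idea only; dedicated device-control software remains the more manageable approach for a whole fleet of machines.

```python
import winreg  # Windows only; must be run from an elevated (administrator) session

# Setting the USBSTOR driver's Start value to 4 stops Windows from loading the
# USB mass-storage driver; 3 restores the default on-demand behaviour.
key_path = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_QUERY_VALUE | winreg.KEY_SET_VALUE) as key:
    current, _ = winreg.QueryValueEx(key, "Start")
    print("Current USBSTOR start type:", current)
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)
    print("USB mass-storage driver disabled; set the value back to 3 to re-enable it.")
```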
A wide range of tools is available with which users can analyze data stored in data warehouses and production databases. These tools range from straightforward reporting tools, through interactive online analytical processing tools, to advanced statistical tools. All these tools help users in some way to improve their business operations and business decisions. They help by presenting data in a textual or graphical way, by summarizing data, by grouping data, or by making predictions.

But there is something most of these tools can't do, and that is analyze data when it is structured as a graph or network and must be analyzed by traversing that graph. For example, imagine a manager of a social networking website wants to know who the central members of the social network are. And let's define the term "central member" as a member who has the shortest paths to most of the other members. This problem can't be solved by simply summarizing data, nor does it have anything to do with predicting. Instead, the data must be organized as a graph, and a tool must be able to traverse that graph; it has to be able to "walk" from node to node. And today, this is not a feature found in most reporting and analytical tools.

Analyzing network or graph-based structures is the domain of graph analytics. This is a special form of analytics that has been around for a very long time. In fact, the history of graph analytics and the underlying graph theory goes back to the early 18th century. Today, powerful tools and database servers are available that support graph analytics. Examples of graph database servers are InfiniteGraph, AllegroGraph RDFStore, Neo4j, and vertexdb. What's special about them is that they allow fast online graph traversal even if the graphs consist of hundreds of millions of nodes.

Graph Analytics and Database Servers

The challenge of graph analytics on massive graphs is how to store and access data in such a way that graph traversal is fast. Currently, classic SQL database servers don't offer the right features for graph analytics. The way in which data is organized as tables and columns doesn't make them ideal for graph traversal. As an example, imagine that flight data is stored in a relational table, with one row per flight. Also imagine that users want to know the cheapest flights from Amsterdam to Phoenix leaving on March 1, 2007, with a maximum of two stops, where each stop should be less than 4 hours. Using SQL, such a query can be written, essentially by joining the flight table to itself once for each possible stop. Although that query returns the right result, it's not an easy query to write. And, more importantly, it will be hard for a SQL database server to process quickly, because the query is hard to optimize and to parallelize. So especially when the graphs become large, these queries might take a long time to process and will probably consume a lot of resources.

Graph database servers have been designed for this form of analytics. First of all, they store this data more like a graph, with the airports as vertices and the flights as the edges connecting them. Secondly, their database languages and APIs are designed for traversing graphs, which leads to straightforward and simple queries.

Application Areas of Graph Analytics

Some think that graph analytics is only relevant for a small number of organizations because they don't see examples of graph-based structures in their own organizations. This is not the case, however.
Graph-based data structures can be found in almost any industry, including the world of social networking sites, bio-engineering, drug development, fraud detection, and traffic optimization. Here are some examples of situations where data can be organized as graphs and where graph analytics might be useful:

- All the flights from and to airports can be organized as a graph. In this case, the airports are the objects and the flights the relationships between the objects. Such a graph can be created for all the flights of one airline, or for a set of airlines (such as Expedia's website).
- The network of all the members of a social networking site, such as LinkedIn and Facebook, plus all their relationships, can be arranged as a graph.
- The accounts of a bank with all the inter-account money transfers form a graph.
- In the context of a parcel service, all the parcel shipments between addresses worldwide can be organized as a graph.
- A visitor's journey on an organization's website can also be seen as a graph, where webpages form the objects and the visitor's clicks become the relationships.
- In the context of a telecommunication company, all the call detail records between callers can be viewed as relationships between objects, and together they form an incredibly large graph.

Different Forms of Graph Analytics

Different forms of graph analytics exist. For example, a graph can be traversed to find an indirect link between two vertices, or the importance of a vertex in a graph can be determined. Here are some popular forms:

With single path analysis the goal is to find a path through a graph starting with a specific vertex. The path is determined in steps. First, all the edges plus corresponding vertices that can be reached by one hop are evaluated. From the vertices found, one is selected and the first hop is made. Next, the edges and vertices reachable from the newly selected vertex are determined, and this process continues. The result of such an exercise is a path consisting of a number of vertices and edges.

When shortest path analysis is used, the shortest path is found between two vertices. Shortest means the smallest number of hops; the shortest possible path between two vertices consists of one hop.

Optimal path analysis can be used to find the "best" path between two vertices. The best path could be the fastest, the safest, or the cheapest. The best is based on the properties of the vertices and the edges.

With path existence analysis, you determine whether paths exist between two vertices. In other words, if we start with two vertices and their edges are followed, will the paths meet somewhere? An example of path existence analysis is the challenge called the Six Degrees of Kevin Bacon. If a graph is created in which all the movie stars are the vertices and the edges represent the movies in which they played together, it is claimed that anyone can be linked to Kevin Bacon within six hops.

The last one we mention here is vertex centrality analysis. Various measures of the centrality of a vertex within a graph have been defined in graph theory. The higher such a measure is, the more "important" the vertex is in the graph. The following measures have been defined:

- Degree centrality: This measure indicates how many edges a vertex has. The more edges, the higher the degree centrality. A vertex with high degree centrality is generally an active vertex or a hub.
- Closeness centrality: This measure is the inverse of the sum of the lengths of all shortest paths to other vertices.
In other words, it indicates, for a vertex, the smallest number of hops needed to reach all other vertices individually. A vertex with high closeness centrality has short paths to many vertices.
- Betweenness centrality: This measure indicates the number of shortest paths a vertex is on. It shows a vertex's position within a graph in terms of its ability to make connections to other groups in the graph.
- Eigenvector centrality: This measure indicates the importance of a vertex in a graph. Scores are assigned to vertices based on the principle that connections to high-scoring vertices contribute more to the score than equal connections to low-scoring vertices.

To summarize, graph analytics is a powerful new form of analytics that clearly enriches the set of reporting and analytical capabilities already available. Many organizations can benefit from this form of analytics. The tools are available, and, more importantly, the database servers that make it possible to analyze massive graphs online are also available. If you work in the world of business intelligence and you haven't studied this topic yet, my recommendation would be to check out what it could mean for your organization.

Note: For more information on graph analytics, see InfiniteGraph: Extending Business, Social and Government Intelligence with Graph Analytics, a technical white paper.
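To make the traversal ideas above concrete, here is a minimal sketch in Java of two of the forms described, breadth-first shortest-path search and degree centrality, over a small in-memory adjacency map. The member names are purely hypothetical, and a real deployment would rely on one of the graph database servers mentioned earlier rather than hand-rolled code; this only illustrates the "walking from node to node" that such servers optimize.

import java.util.*;

public class GraphAnalyticsSketch {
    public static void main(String[] args) {
        // Hypothetical undirected graph: members of a tiny social network.
        Map<String, List<String>> graph = new HashMap<>();
        graph.put("ann",  List.of("bob", "carl"));
        graph.put("bob",  List.of("ann", "dave"));
        graph.put("carl", List.of("ann", "dave"));
        graph.put("dave", List.of("bob", "carl", "eve"));
        graph.put("eve",  List.of("dave"));

        System.out.println("Shortest path ann -> eve: " + shortestPath(graph, "ann", "eve"));

        // Degree centrality: simply the number of edges per vertex.
        graph.forEach((vertex, edges) ->
                System.out.println("Degree centrality of " + vertex + ": " + edges.size()));
    }

    // Breadth-first search: each "hop" expands the frontier by one edge,
    // so the first time the target is reached, a shortest path has been found.
    static List<String> shortestPath(Map<String, List<String>> g, String from, String to) {
        Map<String, String> previous = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>(List.of(from));
        previous.put(from, null);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (current.equals(to)) {
                LinkedList<String> path = new LinkedList<>();
                for (String v = to; v != null; v = previous.get(v)) path.addFirst(v);
                return path;
            }
            for (String next : g.getOrDefault(current, List.of())) {
                if (!previous.containsKey(next)) {
                    previous.put(next, current);
                    queue.add(next);
                }
            }
        }
        return List.of(); // no path exists
    }
}

In the social-networking example from the article, the member with the highest degree or closeness scores would be a candidate "central member."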
IBM claims to have created the world's smallest magnetic computer memory bit, using only 12 atoms, in a research paper published in the peer-reviewed journal Science. The groups of anti-ferromagnetically coupled atoms were arranged using a scanning tunnelling microscope, researchers say. They were able to form a byte made of eight of the 12-atom bits. The experimental low-temperature memory is 100 times denser than current hard-disk drives and 150 times denser than solid-state memory chips, which use around one million atoms to store a single bit of information. The breakthrough could eventually enable the production of smaller, faster and more energy-efficient devices, but scientists will first have to come up with new manufacturing techniques. The research project was aimed at determining the least number of atoms required to store one bit of data, to get a better idea of the physical limit of hard-disk drive and solid-state memory density. Below 12 atoms, the researchers found that the bits randomly lost information owing to quantum effects, according to the BBC. In conventional magnetic data storage, the information is stored in ferromagnetic material, which causes interference when miniaturised. But the IBM research shows that, in principle, data can be stored much more densely using anti-ferromagnetic bits, the paper said. Video: Storage at the atomic scale. IBM researcher Andreas Heinrich explains the need to examine the future of storage at the atomic scale.
A survey conducted by Steganos, a provider of privacy software to more than two million consumers and SMEs worldwide, has found that many users are woefully unaware of the privacy and security software and settings on their computers. The survey was conducted in July 2008 among laptop and PC users in the US and UK. The aim was to find out the levels of awareness regarding privacy and security settings, including the installation of anti-virus, firewall, anti-spam and encryption software, and users' knowledge of the privacy settings in their web browser. The results show that an alarmingly high proportion of users did not know what software was running on their computers to ensure they had adequate protection from hackers, malware, viruses, "dirty" websites, and other online threats. More than one-tenth of respondents (13%) said they did not have any anti-virus software installed on their machines at all, while a further 9% did not know if anti-virus was installed. Almost one-fifth of respondents (19%) did not know if they had firewalls installed. Three-fifths of the respondents (60%) did not have privacy software installed, and a further quarter (25%) did not know whether they did or not. When asked if they knew what the privacy settings were on their browser, over half (52%) admitted they didn't know. Less than half (46%) of all respondents, when asked, "Do you think the privacy of information stored on your computer is adequate?", said they thought they had adequate protection for their online data and security. The survey showed that some consumers are aware of online dangers: as well as having anti-virus, anti-spam, firewall, and privacy software installed, they also had encryption software installed. Encryption software ensures that should anyone have access to a user's computer – whether through dishonest means such as theft, or innocuous means such as sharing a home PC or work laptop – any files or content the user wishes to keep private are encrypted and protected. This can include photographs, important documents such as downloaded bank statements, and music.
Taking the fear out of bringing Government Systems online

There's no question that people are navigating to a more mobile environment, and many individuals are urging government agencies to develop online services. For agencies, the benefits may include self-directed customer service, among others. However, making information more accessible could also put valuable data at risk. While online systems offer many benefits, it's essential to outline the possible risks. Many terrorist organizations take advantage of unlinked government information systems and steal valuable information. This white paper helps uncover more about the basics of government-based online information security, including promoting system availability, confirming data integrity and increasing data privacy.
- What is the DNS?
- How it Works
- The Protocol
- DNS Caches vs. DNS Servers vs. DNS Resolvers
- Recursive vs. Iterative Queries
- Authoritative and Non-authoritative Responses
- Reverse vs. Forward Queries
- Zone Transfers
- Anycast DNS
- Wildcard DNS
- Dynamic DNS
- Record Types
- DNS Security
- DNS Software
- Online Providers
- DNS Tools
- DNS Interview Questions

[ NOTE: For more primers like this, check out my tutorial series. ]

What is the DNS?

The Domain Name System (DNS) makes the Internet usable to humans by providing a naming structure for online resources and mapping those names to the addresses where the resources reside. Without it, websites would be accessible only by entering long strings of numbers, such as "220.127.116.115". Humans aren't good at retaining such things, but remembering "npr.org" is fairly manageable. Enter the DNS.

The DNS was created by Paul Mockapetris at UC Irvine in 1983. Before then, people were mapping names to addresses by sharing a big text file called hosts.txt. This is why most operating systems have a hosts file even today.

Let's look at how the DNS works. As we said in the intro, the DNS is a system that finds resources for you by name. You ask where a name is, and it returns you an IP address. This is done through a distributed database system whereby requests for names are handed off to various tiers of servers, which are delineated by the dot (.) in the name you're looking for. It uses the client-server model. The structure is hierarchical, and moves from right to left like so:

- The root domain (dot)
- The top-level domain (TLD)
- The second-level domain
- The subdomain
- The host/resource name

Clients, like your laptop or desktop, are usually configured with a DNS server that they use to get names resolved. Clients usually make recursive queries, meaning that they just want the final answer, leaving the DNS server to do the work of walking the tree. Name resolution for a typical client follows the steps described below:

- A DNS server is configured with an initial cache (so-called hints) of the known addresses of the root name servers. The hint file is updated periodically in a trusted, authoritative way.
- When a client makes a (recursive) request of that server, it services that request either through its cache (if it already had the answer from a previous lookup) or by performing the following steps on the client's behalf.
- A query is made to one of the root servers to find the server authoritative for the top-level domain being requested.
- An answer is received that points to the nameserver for that resource.
- The server "walks the tree" from right to left, going from nameserver to nameserver, until the final step, which returns the IP address of the host in question.
- The IP address of the resource is then given to the client.

[ NOTE: The name resolution process is different for recursive vs. iterative queries. See that section for more detail. ]

The DNS protocol is relatively light, and thus sits on top of UDP so that queries can happen quickly and without much overhead. Queries over 512 bytes, and certain heavier operations such as Zone Transfers, however, switch to using TCP so that delivery will not become an issue. The protocol has the following fields:

- The Identifier: a 16-bit ID field that matches requests and responses.
- The Query/Response (QR) Flag: a 1-bit field that designates whether the packet is a query or a response.
- Opcode: specifies the type of message being carried. Options include: 0 for a standard query, 1 for an inverse query (obsolete), 2 for server status, 3 is reserved and unused, 4 is a notify message, and 5 is an update (used for Dynamic DNS).
- AA: a 1-bit field indicating an authoritative answer. The bit is set to 1 if it's authoritative, meaning that the server that gave the answer is authoritative for the domain in question. If it's set to 0, it's a non-authoritative answer.
- TC: a 1-bit field for truncation, yes or no. It usually indicates the message was sent via UDP but was longer than 512 bytes.
- RD: a 1-bit field called "Recursion Desired", meaning that the client is asking the server to walk the tree on the client's behalf and just return the answer, as opposed to telling it where to look next.
- RA: a 1-bit field called "Recursion Available", in which a DNS server tells a client whether it supports recursion or not.
- Z: three reserved bits that are always set to zeroes.
- RCode: a 4-bit field that's set to zero in queries (because they're not responses) with the following options: 0 is no error, 1 is a format error, 2 is a server failure, 3 is a name error, 4 is not implemented, 5 is refused, 6 the name exists but it shouldn't, 7 a resource record exists that shouldn't, 8 a resource record that should exist doesn't, 9 the response is not authoritative, 10 the name in the response is not within the zone specified.
- QDCount: how many questions are in the question section.
- ANCount: how many answers are in the answer section.
- NSCount: how many resource records are in the authority section.
- ARCount: how many resource records are in the additional section.

One of the biggest points of confusion regarding DNS comes from the difference between DNS caches, DNS servers, and DNS resolvers. A DNS cache can mean a couple of different things, which is why it's confusing.

- The list of names and IPs that you've resolved recently, which are "cached" such that if you ask the question again you'll get the same answer without generating network traffic.
- A DNS server that doesn't have any authoritative names itself, but just performs recursive queries and caching (saving those answers for future requests within a certain amount of time).

So when someone says they need to clear their DNS cache, they're probably talking about their local cache. If they're talking about setting up a DNS cache, they're probably talking about a DNS server that just makes DNS queries faster for the network.

A DNS server is software that serves DNS requests for clients. It can be a cache (see above) which doesn't have any names of its own and just performs recursive queries (and caching), or it can be a "real" server, meaning that it does hold the authoritative answers for certain resources.

DNS resolvers are just DNS clients. They can make two main types of queries: iterative and recursive. See that section below.

As mentioned above, recursive queries are queries where the client asks the server to do all the work for it. It sends in its query the RECURSION DESIRED flag, and the DNS server will either honor that or not. Iterative queries are the opposite of recursive queries. When they're used, the server doesn't go find the answer for the client (unless it's on the first question and response), but rather tells the client where to look next.
So if the client asks for chat.google.com, it tells the client to check with the .com servers and considers its work done.

Authoritative responses are responses that come directly from a nameserver that has authority over the record in question. Non-authoritative answers come second-hand (or more), i.e., from another server or through a cache.

Reverse queries simply reverse the direction of DNS lookups, i.e., going from IP to name instead of name to IP. Forward queries are another name for normal name-to-IP queries.

Zone Transfers are the means by which slave servers pull records from master servers for backup and redundancy purposes. They take place over TCP because the data being transferred is usually substantial (and most likely over 512 bytes). During the operation, the client sends a query type of AXFR (or IXFR for an incremental transfer). Zone Transfers are sensitive from a security standpoint because when someone knows what and where your resources are, it helps them plan an attack against you. Zone Transfers should only be allowed by approved systems.

Performing a Zone Transfer

When you perform a zone transfer you basically want to define two things:
- The server you're asking
- The domain you're trying to pull

You can perform the actual transfer using a number of tools.

# host -la $DOMAIN

[ NOTE: Keep in mind that there are two pieces to doing a Zone Transfer: defining the server you're asking, and defining the zone you're trying to pull. With host, you need to define the first piece by using your resolv.conf file, i.e. by setting your DNS server to be the target. ]

With the dig command you can do it in one step:

# dig @server $DOMAIN axfr

You can also use nslookup to perform a Zone Transfer, but it's an elaborate process and therefore silly. Use dig instead.

[ NOTE: Performing a Zone Transfer against a domain without permission may be considered "hacking" to some. You should either have permission or be prepared to face consequences. ]

Anycast is a brilliant protocol that allows the same IP to be served from multiple locations. The network then decides intelligently which location to route a given user request to, based on distance, latency, network health conditions, etc. Anycast DNS does this for DNS, making it almost like a CDN for your DNS. If you have a site that is accessed from many parts of the world, and where speed is a consideration, you should consider using a DNS provider that has an Anycast option.

[ NOTE: This site uses Anycast DNS (through DYN) as part of its /stack. ]

Wildcard DNS is a type of DNS record that will respond to non-existent subdomains/hosts within a zone. So if you have a wildcard for "*.danielmiessler.com", then someone going to "aargghhh.danielmiessler.com" will be directed to wherever I point that wildcard record to.

*.danielmiessler.com. 3600 TXT "This is a wildcard."

Dynamic DNS allows clients with changing DHCP addresses to update a DNS server with their latest IP so that they can be found by name at their current location.

There are a number of things to think about from a security perspective when it comes to DNS. First among these is the fact that if someone controls where you are sent when you ask for a given name, they control something quite powerful.

Spoofing a legitimate site

Modifying DNS servers for clients is often a primary objective of an attacker after gaining control of a system or network. This means changing the DNS resolution so that certain sensitive names (like bankofamerica.com, for example), or even all names, are redirected to a server that the attacker controls.
This enables an attacker to present a login form that looks similar to (or even identical to) the real thing. If the user signs into the fake site, the attacker has now stolen their credentials. It's critical, therefore, that the nameserver responding to client requests in your environment is legitimate and is not compromised. By default, DNS is fairly easy to spoof because it's based on UDP. In many cases you can simply send a response to a client and it will assume that you made a previous request and update the record in the cache.

DNSSEC is a set of security-oriented DNS extensions designed to address a number of issues with DNS. It is primarily concerned with helping resolvers (clients) ensure that DNS data in fact came from an authorized origin. DNSSEC works by digitally signing responses using public-key cryptography and uses several new resource records, shown below.

- RRSIG – contains the DNSSEC signature for a record set. DNS resolvers verify the signature with a public key, stored in a DNSKEY-record.
- DNSKEY – contains the public key that a DNS resolver uses to verify DNSSEC signatures in RRSIG-records.
- DS – holds the name of a delegated zone. You place the DS record in the parent zone along with the delegating NS-records. It references a DNSKEY-record in the sub-delegated zone.
- NSEC – contains a link to the next record name in the zone and lists the record types that exist for the record's name. DNS resolvers use NSEC records to verify the non-existence of a record name and type as part of DNSSEC validation.
- NSEC3 – contains links to the next record name in the zone (in hashed name sorting order) and lists the record types that exist for the name covered by the hash value in the first label of the NSEC3-record's own name. These records can be used by resolvers to verify the non-existence of a record name and type as part of DNSSEC validation. NSEC3 records are similar to NSEC records, but NSEC3 uses cryptographically hashed record names to avoid the enumeration of the record names in a zone.
- NSEC3PARAM – authoritative DNS servers use this record to calculate and determine which NSEC3-records to include in responses to DNSSEC requests for non-existing names/types.

When DNSSEC is used, each answer to a DNS lookup contains an RRSIG DNS record, in addition to the record type that was requested. The RRSIG record is a digital signature of the answer DNS resource record set. The digital signature is verified by locating the correct public key found in a DNSKEY record.

There are a number of attack types associated with DNS. We'll cover just a few of them.

- Changing Your DNS Server: an attacker who can change your DNS server can control where you are taken when you request sensitive resources, like your bank, etc.
- Distributed Denial of Service: because DNS uses UDP, you can request the address for a given name from thousands, or millions, of DNS servers, but do so with the spoofed source address of your victim. This results in that victim being melted by all the response traffic. Many tools exist for doing this.
- The Kaminsky Attack: the Kaminsky attack was an issue with predictable DNS IDs that allowed attackers to flood a given system with responses that would then be written to the cache and passed on to clients. For more on this attack, see this excellent resource.

The primary players in DNS software are:

- Bind: ubiquitous, powers most of the Internet.
- DJBDNS: focused on security, less used.

A number of DNS providers exist that provide various benefits.
- DYN: DYN is a DNS provider that allows you to do registration and host your zones. It also provides Anycast and other advanced services.
- DNSMadeEasy: DNSMadeEasy is another big name in DNS that provides a number of related services.
- GoDaddy: GoDaddy is a rather popular DNS service that has a number of services in the space.
- OpenDNS: OpenDNS is an interesting service that provides content and security filtering based on monitoring and blocking DNS requests. You configure your hosts or network to use OpenDNS's DNS servers, and it will block requests for certain categories of site and/or for malware before your browser even starts going there.

This author has always used DYN, but any of the top-tier services are likely good options. The key to remember when choosing a provider is that if your DNS is compromised for a site that is sensitive, you will likely experience similar negative effects to having your site compromised. Choose wisely.

There are a few DNS tricks and tools that you always want to have available to you.

- dig is supremely valuable for performing all manner of DNS tasks. I'll be doing a primer on it soon.
- nslookup is my least favorite DNS tool because I dislike the syntax, but it is on most systems and thus should be learned to some degree.

Here are a couple of things you'll want to be able to do on any computer related to DNS.

Change Your DNS Server

Sometimes you need to be able to change your DNS server.

- Windows: Go to your network configuration, change your DNS server(s), and apply/exit.
- OSX: Open your Network Preferences and click the Advanced button for the connection you're on.
- Linux: Modify your resolv.conf file.

Clear Your DNS Cache

There are times when you have previously resolved a name for a resource, which has now changed, and you need to get the entry updated in your cache. This is how to do it for the three main operating systems.

- Windows: ipconfig /flushdns
- OSX: sudo killall -HUP mDNSResponder
- Linux: sudo /etc/init.d/nscd restart

Q: What port does DNS work over?
A: Port 53.
Q: What protocol does DNS work over?
A: UDP.
Q: Does DNS always work over UDP?
A: No, it switches to TCP if the content is greater than 512 bytes.
Q: What's the primary role of DNSSEC?
A: To assure clients that the answers received came from the authorized server.
Q: What's an example of a security problem related to DNS?
A: Having someone change your DNS server to a malicious one, having someone send you malicious DNS replies that get accepted as legitimate, or using DNS to DDoS someone.

- Resolution actually starts at the far right dot (.), not at the TLD, but this is a minor technical detail.
- The Wikipedia article on DNS.
- Definitely check out my friend Steve's phenomenal description of the Kaminsky DNS Attack: http://unixwiz.net/techtips/iguide-kaminsky-dns-vuln.html.
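As a small closing illustration of forward and reverse queries from application code (separate from the dig and nslookup tools above), here is a minimal Java sketch using only the standard library. The hostname is just an example; the answers depend on whichever DNS resolver the machine is configured to use.

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookupSketch {
    public static void main(String[] args) throws UnknownHostException {
        // Forward query: name -> IP address(es), via the system's configured resolver.
        for (InetAddress addr : InetAddress.getAllByName("example.com")) {
            System.out.println("example.com -> " + addr.getHostAddress());

            // Reverse query: IP address -> name (a PTR lookup under the hood).
            // If no PTR record exists, this simply returns the IP address again.
            System.out.println(addr.getHostAddress() + " -> " + addr.getCanonicalHostName());
        }
    }
}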
You sit at your desk, click around the internet, and type away. Then you take out your smartphone to check messages and social media. You swipe your badge to maneuver around the building. But when was the last time you cleaned all these items? Technology has become a constant companion in our modern world. We're connected almost 24/7 to one gadget or another. We keep our desks tidy, we wipe down counters, and our office attire gets washed regularly. But so many of us never consider cleaning our devices. We here at CBT Nuggets wondered just how dirty our technology and other tools really are. So, we had a team do some bacteria swabbing of typical items used in an IT office. Get ready to head out for cleaning wipes, and read on to find out why you'll want to.

Lurking Germ Comparison

There are certain everyday items that we consider dirty, including toilet seats, money, and things that end up in Fido's mouth. And indeed, those items do host plenty of germs. However, the technology we use constantly is also home to grubby bacteria, including our cellphones. We compared several surfaces commonly used in a tech office and everyday "dirty" items to gauge how much bacteria are really lurking on our tech. We discovered that the most bacteria-laden item of all was the ID badge, which had 243 times more bacteria than a common pet toy! The cleanest item on our tech list: a laptop trackpad.

All The Bacteria

There are several common bacteria that have found their way into our lives and onto our gadgets. These include bacilli, gram-positive cocci, gram-positive rods, and gram-negative rods. Bacilli and gram-positive cocci tend to cause the most sickness. In fact, bacilli are the usual culprits in food poisoning. Gram-negative rods aren't so great to have around either because they can be resistant to drug and antibiotic treatments. The only common bacteria of these that aren't typically harmful to humans are gram-positive rods. We looked at the distribution of bacteria between all the office items that we swabbed: Gram-positive cocci were the most common bacteria found, at just over 42 percent. These bacteria are behind strep and staph infections, so they may not be ones you want to have regular contact with. The next most common bacteria we found were bacilli (around 25 percent), which can survive in extreme environments. Gram-negative rods came in at 21.5 percent, while gram-positive rods only made up about 10.8 percent of the bacteria present. It might be safe to say that illness could be lurking at your fingertips.

Our bacteria swab results showed that ID badges harbored some pretty nasty guests. This is likely because we don't think about them often, nor do we tend to think of them as getting dirty. You may want to get those germ wipes ready. Here's the breakdown of bacteria we found on ID badges. At 61 percent, gram-positive cocci were the most common bacteria found on the badges, possibly carrying your next strep or staph infection. The next most common bacteria found on the badges were bacilli (nearly 26 percent). The final foe discovered was gram-negative rods, at about 13 percent.

It's been well established that keyboards are a prime host for bacteria and germs. It makes sense, considering we use them to work, type up reports, and send out emails. There are also the things we don't realize make their way onto our keyboards, like crumbs from that desk lunch, or residue from those crunchy snacks, and even the occasional coffee or soda spill.
Here’s the bacterial breakdown of the keyboards we studied: The largest group of bacteria present was gram-positive cocci. It comprised almost 38 percent of the bacterial makeup. Gram-negative rods weren’t far behind, with 34 percent. Gram-positive rods made up 17 percent of the total bacteria on keyboards. Bacilli came in at almost 11.3 percent. It seems to be a good mix of nasty germs on that keyboard, ready to send you home with the flu. Phone Friends or Foes Let’s face it, our cellphones are pretty dirty. We take them everywhere, use them after touching all kinds of things, and set them on all sorts of surfaces. Most people are even guilty of bringing them into the bathroom. If we knew the kinds of germs that make their home on our phones, we would probably pay more attention to their cleanliness. Here are the results from our phone swabs: Our swab revealed an almost even mix of bacteria. From what we found, a majority of the bacteria were bacilli (37.5 percent) and gram-positive rods (37.5 percent). A final quarter of the bacteria were gram-positive cocci. It might be a good plan to establish a cellphone cleaning routine and stick with it. Your immune system will thank you later, and your employer will appreciate you not using all your sick days. Bacterial Showdown: Mouse Versus Trackpad Do you prefer using a trackpad or an external mouse to navigate on your computer screen? Everyone has a preference. But perhaps you might be interested in knowing about the bacterial content of both. Just like your germy keyboard, a mouse tends to be covered in bacteria. You don’t think much about touching a public work surface, clicking your mouse to pull up a boss’ email, and then unwrapping your sandwich at lunchtime. But you might want to think twice. We were curious to see if there was much of a bacterial difference between your computer trackpad or an external mouse: As it turns out, the trackpad was only harboring two kinds of bacteria: gram-positive rods (nearly 67 percent) and gram-positive cocci (33.3 percent). The mouse, on the other hand, contained all four kinds of bacteria. Bacilli and gram-negative rods made up 44 percent of the makeup each. Gram-positive cocci came in at 12.4 percent, while there was a small trace (0.01 percent) of gram-positive rods. Both the trackpad and mouse play host to some nasty germs – but the trackpad may just be the winner here. Your office at work is indeed a dirty place. It might look clean on the surface, but hiding on your commonly used devices is a whole host of bacteria. Many of the items that we touch and use on a daily basis aren’t cleaned regularly. Most of us just don’t think about it as we hurry through our busy days. One important way to combat all these germs is to wash your hands regularly and properly. The CDC calls handwashing a “do-it-yourself” vaccine, which tells you how important it is. To effectively keep bacterial counts low, regularly clean and wipe down gadgets and surfaces. If you know you’re getting sick, just stay home until you’re well again. By coming to work, you are spreading all those germs throughout the workplace and putting everyone else at risk. But don’t let these germs get in your way of sharpening your IT skills. You can train from the convenience of your home with online courses from CBT Nuggets. So the next time you go to put on your ID badge or type up an email, ask yourself if you’ve cleaned them recently. You’ll be grateful you did when flu season comes around. 
We swabbed five items within each category to find the average colony-forming units (CFU) per square inch on each surface. All testing was done by EMLab P&K.
October 8th, 2014 - by Alexey Zhebel Java Virtual Machine (JVM) is an execution environment for Java applications. In the general sense, the JVM is an abstract computing machine defined by a specification, which is designed to interpret bytecode that is compiled from Java source code. More commonly, the JVM refers to the concrete implementation of this specification with a strict instruction set and a comprehensive memory model. It can also refer to the runtime instance of the software implementation. The primary reference implementation of the JVM is HotSpot. The JVM specification ensures that any implementation is able to interpret bytecode in exactly the same way. It can be implemented as a process, a standalone Java OS, or a processor chip that executes bytecode directly. Most commonly known JVMs are software implementations that run as processes on popular platforms (Windows, OS X, Linux, Solaris, etc.). The architecture of the JVM enables detailed control over the actions that a Java application performs. It runs in a sandbox environment and ensures that the application does not have access to the local file system, processes, and networking without proper permission. In case of remote execution, code should be signed with a certificate. Besides interpreting Java bytecode, most software implementations of the JVM include a just-in-time (JIT) compiler that generates machine code for frequently used methods. Machine code is the native language of the CPU and can be executed much faster than interpreting bytecode. You do not need to understand how the JVM works to develop or run Java applications. However, you can avoid many performance problems that are in fact straightforward if you do have some understanding. The JVM specification defines the subsystems and their external behavior. The JVM has the following major subsystems: - Class Loader. Responsible for reading Java source code and loading classes into the data areas. - Execution Engine. Responsible for executing instructions from the data areas. The data areas occupy memory that is allocated by the JVM from the underlying OS. The JVM uses different class loaders organized into the following hierarchy: - The bootstrap class loader is the parent for other class loaders. It loads the core Java libraries and is the only one written in native code. - The extension class loader is a child of the bootstrap class loader. It loads the extension libraries. - The system class loader is a child of the extension class loader. It loads the application class files that are found in the classpath. - A user-defined class loader is a child of the system class loader or another user-defined class loader. When a class loader receives a request to load a class, it checks the cache to see if the class has already been loaded, then delegates the request to the parent. If the parent fails to load the class, then the child attempts to load the class itself. A child class loader can check the cache of the parent class loader, but the parent cannot see classes loaded by the child. The design is such because a child class loader should not be allowed to load classes that are already loaded by its parent. The execution engine executes commands from the bytecode loaded into the data areas one by one. To make the bytecode commands readable to the machine, the execution engine uses two methods. - Interpretation. The execution engine changes each command to machine language as it is encountered. - Just-in-time (JIT) compilation. 
If a method is used frequently, the execution engine compiles it to native code and stores it in the cache. After that, all commands associated with this method are executed directly without interpretation. Although JIT compilation takes more time than interpretation, it is done only once for a method that might get called thousands of times. Running such method as native code saves a lot of execution time compared to interpreting each command one by one every time it is encountered. JIT compilation is not a requirement of the JVM specification, and it is not the only technique that is used to improve JVM performance. The specification defines only which bytecode commands relate to which native code; it is up to the implementation to define how the execution engine actually performs this conversion. The Java memory model is built on the concept of automatic memory management. When an object is no longer referenced by an application, a garbage collector discards it and this frees up memory. This is different from many other programming languages, where you have to manually unload the object from memory. The JVM allocates memory from the underlying OS and separates it into the following areas. - Heap Space. This is a shared memory area used to hold the objects that a garbage collector scans. - Method Area. This area was previously known as the permanent generation where loaded classes were stored. It has recently been removed from the JVM, and classes are now loaded as metadata to native memory of the underlying OS. - Native Area. This area holds references and variables of primitive types. Breaking the heap up into generations ensures efficient memory management because the garbage collector does not need to scan the whole heap. Most objects live for a very short time, and those that survive longer will likely not need to be discarded at all until the application terminates. When a Java application creates an object, it is stored in the eden pool of the heap. Once it is full, a minor garbage collection is triggered at the eden pool. First, the garbage collector marks dead objects (those that are not referenced by the application any more) and increments the age of live objects (the age is represented by the number of garbage collections that the object has survived). Then the garbage collector discards dead objects and moves live objects to the survivor pool, leaving the eden pool clear. When a surviving object reaches a certain age, it is moved to the old generation of the heap: the tenured pool. Eventually, the tenured pool fills up and a major garbage collection is triggered to clean it up. When a garbage collection is performed, all application threads are stopped, causing a pause. Minor garbage collections are frequent, but are optimized to quickly remove dead objects, which are the major part of the young generation. Major garbage collections are much slower because they involve mostly live objects. There are different kinds of garbage collectors, some may be faster in certain situations when performing a major garbage collection. The heap size is dynamic. Memory is allocated to the heap only if it is required. When the heap fills up, the JVM reallocates more memory, until the maximum is reached. Memory reallocation also causes the application to stop briefly. The JVM runs in a single process, but it can execute several threads concurrently, each one running its own method. This is an essential part of Java. 
An application such as an instant messenger client runs at least two threads: one that waits for user input and one that checks the server for incoming messages. Another example is a server application that executes requests in different threads; sometimes each request can involve several threads running concurrently. All threads share the memory and other resources available to the JVM process. Each JVM process starts a main thread at the entry point (the main() method). Other threads are started from it and present an independent path of execution. Threads can run in parallel on separate processors, or they can share one processor. The thread scheduler controls how threads take turns executing on a single processor.

The performance of the JVM depends on how well it is configured to match the functionality of the application. Although memory is automatically managed using garbage collection and memory reallocation processes, you have control over their frequency. In general, the more memory you have available for your application, the fewer memory management processes (which pause your application) are required.

If garbage collections are occurring more frequently than you would want, you can start the JVM with a larger maximum heap size. The longer it takes for a generation of the heap to fill up, the fewer garbage collections occur. To configure the maximum heap size, use the -Xmx option when you start the JVM. By default, the maximum heap size is set to either 1/4th of the physical memory available to the OS, or to 1 GB (whichever is smaller).

If the problem is with memory reallocation, you can set the initial heap size to be the same as the maximum. This means that the JVM will never need to allocate more memory to the heap. However, you will also lose the adaptive memory optimization gained from dynamic heap sizing; the heap will be of fixed size from the moment you start your application. To configure the initial heap size, use the -Xms option when you start the JVM. By default, the initial heap size is set to either 1/64th of the physical memory available to the OS, or to some reasonable minimum that is different for different platforms (whichever is larger).

If you know which garbage collections (minor or major) are causing performance degradation, you can set the ratio between the young and old generations without changing the overall heap size. For applications that create a lot of short-lived objects, increase the size of the young generation (this will leave less memory for the old generation). For applications that operate with a lot of longer-surviving objects, increase the size of the old generation (by setting less memory for the young generation). The following ways can be used to control the sizes of the young and old generations.

- Specify the ratio between the young and old generation using the -XX:NewRatio option when you start the JVM. For example, to make the old generation five times larger than the young generation, specify -XX:NewRatio=5. By default, the ratio is set to 2 (the old generation occupies ⅔ of the heap, and the young generation occupies ⅓).
- Specify the initial and maximum size of the young generation using the -Xmn option when you start the JVM. The old generation size will be set to whatever memory remains on the heap.
- Specify the initial and maximum size of the young generation separately, using the -XX:NewSize and -XX:MaxNewSize options when you start the JVM. The old generation size will be set to whatever memory remains on the heap.
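As a quick way to see how the heap options discussed above take effect, here is a minimal Java sketch (standard library only; the class name is arbitrary) that prints the heap limits the running JVM actually adopted. Running it with different -Xms and -Xmx values should show the reported numbers change accordingly.

public class HeapSettingsCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;

        // Maximum heap the JVM will ever try to use (roughly corresponds to -Xmx).
        System.out.println("Max heap:   " + rt.maxMemory() / mb + " MB");

        // Heap currently reserved from the OS (starts near -Xms and may grow).
        System.out.println("Total heap: " + rt.totalMemory() / mb + " MB");

        // Part of the reserved heap that is not yet occupied by objects.
        System.out.println("Free heap:  " + rt.freeMemory() / mb + " MB");
    }
}

For example, starting this class with -Xms256m and -Xmx1g should report a maximum heap of roughly 1024 MB.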
Most applications (especially servers) require concurrent execution, handling a number of tasks. Some of these tasks are more important at a given moment, while others are background tasks that can be executed whenever the CPU is not busy doing anything else. Tasks are executed in different threads. For example, a server may have a low-priority thread that calculates statistics based on some data and starts a higher-priority thread to handle incoming data, and another higher-priority thread to serve a request for some of the data that was calculated. There can be many sources of data, and many clients requesting data from the server. Each request will briefly stop the execution of the background calculation thread to serve the request. So you have to monitor the number of threads that are running and make sure there is enough CPU time for the thread that is making the necessary calculations. Each thread has a stack that holds the method calls, return addresses, and so on. Some memory is allocated for the stack, and if there are too many threads, this can lead to an OutOfMemory error. Even if you have enough heap memory allocated for objects, your application may be unable to start a new thread. In this case, consider limiting the maximum size of the stack in threads. To configure the thread stack size, use the -Xss option when you start the JVM. By default, the thread stack size is set to 320 KB or 1024 KB, depending on the platform. Whether you are developing or running a Java application, it is important to monitor the performance of the JVM. Configuring the JVM is not a one-time affair, especially if you are dealing with a server running on Java. You have to constantly check the allocation and usage of both heap and non-heap memory, the number of threads that the application creates, and the number of classes that are loaded into memory. These are the core parameters. Using the Anturis Console, you can set up monitoring of the JVM for any hardware component (such as a computer running a Tomcat web server) in your infrastructure by adding the JVM monitor to the component. The JVM monitor can measure the following metrics. - Total memory usage (MB) is the amount of memory that the JVM uses. This metric can affect overall performance of the underlying OS if the JVM consumes all available memory. - Heap memory usage (MB) is the amount of memory that the JVM allocates for objects used by the running Java application. Unused objects are regularly removed from the heap by the garbage collector. If this metric grows, it can indicate that your application is not removing references for unused objects, or that you need to configure the garbage collector properly. - Non-Heap memory usage (MB) is the amount of memory allocated for the method area and the code cache. The method area is used to store references to loaded classes. If these references are not removed properly, the permanent generation pool can increase every time the application is redeployed, leading to a non-heap memory leak. It can also indicate a thread-creation leak. - Total pool memory usage (MB) is all the memory used by the various memory pools allocated by the JVM (that is, the total memory without the code cache area). This can give you an idea of how much memory your application consumes without the JVM overhead. - Threads (threads) is the number of active threads in the JVM. 
For example, each request to a Tomcat server is processed in a separate thread, so this metric can give you an idea of the number of requests that are currently being served, and whether it affects the background tasks that are running in threads set to a lower priority. - Classes (classes) is the number of loaded classes. If your application dynamically creates a lot of classes, this can be a source of a severe memory leak.
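If you want to sample the same kinds of metrics ad hoc, from inside the application itself rather than from a monitoring console, the standard java.lang.management API exposes them. A minimal sketch (standard library only; class and variable names are arbitrary):

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmMetricsSketch {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();

        long mb = 1024 * 1024;

        // Heap memory usage: objects created by the application (garbage collected).
        System.out.println("Heap used (MB):     " + memory.getHeapMemoryUsage().getUsed() / mb);

        // Non-heap memory usage: method area / class metadata and the code cache.
        System.out.println("Non-heap used (MB): " + memory.getNonHeapMemoryUsage().getUsed() / mb);

        // Active threads in the JVM.
        System.out.println("Threads:            " + threads.getThreadCount());

        // Classes currently loaded; steady growth can indicate a class loading leak.
        System.out.println("Loaded classes:     " + classes.getLoadedClassCount());
    }
}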
By Pierluigi Paganini, founder of the security blog "Security Affairs," Editor-in-Chief at CyberDefense magazine, and author of the books "The Deep Dark Web" and "Digital Virtual Currency and Bitcoin".

Every day, we read about cyber-attacks and data breaches, incidents that represent in many cases a disaster for private companies and governments. Technology plays a significant role in our lives; every component that surrounds us runs a piece of software that could be affected by flaws and exploited by those with ill intentions. Of course, the impact of these vulnerabilities depends on the nature and scope of the exposed software. Some applications are more commonly used, and their vulnerabilities could expose users to serious risks. Take, for example, the recent vulnerability discovered in Skype, in which a bug allowed an attacker to obtain full access to any Skype account by simply knowing the email address used by a victim during the creation of the account. The possible damage that the exploit of a vulnerability could do depends on different factors, such as the level of diffusion of the compromised application, the previous knowledge of the vulnerabilities, and the context in which the compromised application is used.

In the wide universe of vulnerabilities, zero-day vulnerabilities represent a real nightmare for security experts. Because nothing is known about them in advance, it is impossible to predict how and when they could be exploited. This characteristic makes their use ideal in state-sponsored attacks and in the development of cyber weapons. Interest in the discovery of unknown vulnerabilities in widespread applications has totally changed the role of hackers. In the past, they were figures who kept away from government affairs; today, the industry and even intelligence agencies have launched a massive recruitment campaign for this new type of expertise.

Profiting from these vulnerabilities can be done through different channels: flaws could be sold to the makers of the compromised application; a government interested in exploiting a flaw could acquire it to conduct cyber-attacks against hostile countries; or it could be sold in the underground market. Around this concept of vulnerability grew a market in which the "instantaneity" of transactions is a fundamental factor. Once a new bug is found and exploited, the researcher must be able to quickly identify possible buyers, contact them to negotiate a price, and then complete the sale. Timing is crucial; the value of the sale could decay to zero if any third party preemptively divulges information on the vulnerability.

The famous security expert Charles Miller described this market in the document "The Legitimate Vulnerability Market: The Secretive World of 0-Day Exploit Sales," which discusses some of the main issues:

- The difficulty in finding buyers and sellers
- Checking the buyer's reliability
- The difficulty of demonstrating the efficiency of a zero-day without exposing info on it
- Ensuring exclusivity of rights

The principal problem for a hacker who needs to sell a vulnerability is his ability to do it without exposing too much information on the flaw. The sale is very complicated because the buyers want to be certain of the effectiveness of the exploit and may possibly require a demonstration of its existence. The only way to prove the validity of the information is to either reveal it or demonstrate it in some fashion.
Obviously, revealing the information before the sale is undesirable, as it leaves the researcher exposed to losing the intellectual property of the information without compensation. To respond to this emerging need, and to regulate the transactions between buyers and sellers, a new professional specializing in mediation was born: brokers for sales of zero-day exploits, who provide anonymity to the bargaining parties in return for a commission. Third parties ensure correct payment to the seller and the protection of the knowledge of the vulnerabilities. On the buyer's side, they verify the information the seller claims to have. Trusted third parties play a crucial role in these sales, as the market is extremely volatile and is characterized by fast dynamics. Since selling a discovered vulnerability usually takes a few weeks, the nature of the information covered by the bargaining does not allow longer negotiation.

One of the more famous third parties that do this is Grugq, but even smaller companies like Vupen and Netragard and defense contractor Northrop Grumman also operate as mediators. Netragard's founder Adriel Desautels explained to Forbes Magazine that he's been in the exploit-selling game for a decade, and he has observed the rapid change of the market, which has literally "exploded" in just the last year. He says there are now "more buyers, deeper pockets", that the time for a purchase has accelerated from months to weeks, and that he's being approached by sellers with around 12 to 14 zero-day exploits every month, compared to just four to six a few years ago.

Countermeasures and the importance of a rapid response

The lifecycle of a zero-day vulnerability is composed of the following phases:

- Vulnerability introduced.
- Exploit released in the wild.
- Vulnerability discovered by the vendor.
- Vulnerability disclosed publicly.
- Anti-virus signatures released.
- Patch released.
- Patch deployment completed.

The discovery of a zero-day vulnerability requires an urgent response. The period between the exploit of a vulnerability and the release of the proper patch to fix it is a crucial factor for the management of software flaws. Researchers Leyla Bilge and Tudor Dumitras from Symantec Research Labs presented a study entitled "Before We Knew It … An Empirical Study of Zero-Day Attacks In The Real World," in which they explained how the knowledge of this type of vulnerability gives governments, hackers, and cyber criminals "a free pass" to exploit any target while remaining undetected. The study revealed that typical zero-day attacks have an average duration of 312 days and that, once a vulnerability is publicly disclosed, an increase of five orders of magnitude in the volume of attacks is observed.

The disclosure of a vulnerability triggers a series of cyber-attacks that try to benefit from its knowledge and the delay in the application of the patch. The increase in offensive activity has no specific origin, which makes it hard to prevent. Groups of cyber criminals, hacktivists, and cyber terrorists could try to exploit the vulnerability in various sectors, and the damage they can do depends on the context they operate in. The belief that zero-day vulnerabilities are rare is wrong. They are vulnerabilities exactly like any others with the fundamental difference that they are unknown.
A study illustrated an alarming scenario: 60% of the flaws identified were unknown, and the data suggested that there are many more zero-day vulnerabilities than expected, plus, the average time proposed for the zero-day vulnerability duration may be underestimated. One of the most debated questions is how to respond to the discovery of a zero-day vulnerability. Many experts are convinced that it is necessary to immediately disclose it but it has been observed that this usually is the primary cause for an escalation of cyber-attacks that try to exploit the bug. A second school of thought suggests keeping the discovery of a vulnerability secret, informing only the company that has designed the compromised application. In this way, it is possible to control the explosion of attacks as a consequence of the first approach. However, there is a risk that companies would fail to manage the event properly and only provide a suitable patch to fix the bug several months after it has already happened. For a deeper discussion of zero-day vulnerabilities check out the CEH v8 (Certified Ethical Hacker) course offered by the InfoSec Institute. Not only zero-days Many professionals believe that the real nightmare of information security is represented by zero-day vulnerabilities, flaws that are impossible to predict and expose their infrastructures to attacks that are difficult to detect and can cause serious damage. Despite the fear in zero-day attacks being recognized worldwide, infrastructures are menaced daily by a huge list of well-known vulnerabilities for which the proper countermeasures aren't yet applied. Failure to follow the best practices in the process of patch management is the main cause of problems for private companies and governments. In some cases, patch management processes are extremely slow and the window of exposure to cyber threats is extremely large. In other cases, and for various reasons, the administrators of the infrastructure do not undertake the necessary updates which lead to a lot of homes affected by attacks. The result is shocking: millions of PCs every day are compromised by failure to follow simple rules. Known exploits are inefficient against correctly patched systems, but they still remain a privileged option for attackers who perform large scale attacks. Only a few entities are able to patch their systems in a short time. Patch management has a sizable impact in large organizations with complex architectures so a patch must be analyzed in detail to avoid problems to IT infrastructure, requesting further and more time-consuming analysis. The deployment phase has a variable length. For example, in a company located over multiple locations with a high number of strongly heterogeneous systems to patch, deployment activities are more challenging. A known bug is also called a 1-day vulnerability. It is cheaper compared to a 0-Day, so it is really easy for an attacker to acquire information and tools on internet and in the underground to arrange a large scale attack. Development of a 0-day is really expensive and time-consuming due the intense research that must be conducted to discover and to exploit the vulnerability. For this reason, this kind of exploits is typically used by governments, while cyber criminals appear to be more interested in 1-day exploits. Security firm Eset has demonstrated in many occasions how quickly the Blackhole gang can react to the 1-day opportunity. 
"There's intense interest in vulnerability research, with legitimate research seized upon by malware authors for malicious purposes," David Harley, a senior researcher, declared. "The increase in volume of 1-day exploits suggests that even if 0-day research prices itself out of the mass market for exploits, inadequate update/patch take-up among users is leaving plenty of room for exploits of already-patched vulnerabilities (as with the current spate of Tibet attacks)."
From discovery to the market: a millionaire business
How is it possible to create a tool to exploit a vulnerability once it has been disclosed? The procedure is simpler than researching a zero-day vulnerability. After the release of a software patch, researchers and criminals can identify the fixed vulnerability using binary diffing techniques. The term "diff" derives from the name of the command-line utility used for comparing files; in the same way, the binaries of a system before and after a patch is applied are compared. Binary diffing is very effective on Microsoft's binaries because the company releases patches regularly, and from analysis of the patch code it is quite simple for specialists to identify the binary code the patch changed. Two of the best-known frameworks for binary diffing are DarunGrim2 and PatchDiff2.
Now that 0-day and 1-day vulnerabilities have been introduced, it is useful to examine the economy behind their commercialization. An article published on Forbes' website estimated the cost of zero-day vulnerabilities in the products of the principal IT vendors. The cost of a vulnerability is influenced by many factors:
- Difficulty of identifying the vulnerability, which depends on the security practices of the company that produces the application; the more time third parties need to discover the information, the greater its value.
- Level of diffusion of the application.
- Context in which the application is exploited.
- Whether the application ships by default with the operating system.
- Whether an authentication step is needed to exploit the application.
- Whether typical firewall configurations block access to the application.
- Whether the vulnerability affects a server or a client application.
- Whether user interaction is required to exploit the vulnerability.
- Version of the software affected by the exploit; the more recent, the higher the price.
- Dependence on the technological context: the introduction of a new technology can reduce interest in a vulnerability tied to an old technology being replaced.
Typically, governments and intelligence agencies are most interested in these exploits because they can use them for operations such as cyber espionage campaigns or attacks on target infrastructures. For the reasons explained above, cyber criminals are more interested in 1-day vulnerabilities, typically sold in the underground market, because they are easier to use against a wide range of targets. Trend Micro recently published a very interesting report on the Russian underground market, analyzing the services and products marketed by cyber criminals. The study is based on data obtained from the analysis of online forums and services frequented by Russian hackers, such as antichat.ru, xeka.ru, and carding-cc.com. Trend Micro demonstrated that it is possible to acquire all kinds of tools and services to launch cyber-criminal activities and frauds.
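The binary diffing step mentioned above can be illustrated with a toy sketch. Real frameworks such as DarunGrim2 and PatchDiff2 compare disassembled functions inside a reverse-engineering environment; the sketch below only conveys the underlying idea (hash comparable units of the pre-patch and post-patch binaries and flag what changed), and its fixed-size byte chunks and file names are stand-in assumptions, not how the real tools work.

```python
import hashlib

def chunk_hashes(path, chunk_size=4096):
    """Hash fixed-size chunks of a binary. Real diffing tools compare
    disassembled *functions*, not raw chunks; fixed-size chunks are a
    simplification for illustration."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def diff_binaries(pre_patch, post_patch):
    """Report which chunks differ between the unpatched and patched binary,
    pointing an analyst at the regions the patch touched."""
    before = chunk_hashes(pre_patch)
    after = chunk_hashes(post_patch)
    return [i for i, (b, a) in enumerate(zip(before, after)) if b != a]

# Example usage (hypothetical file names):
# print(diff_binaries("driver_v1.sys", "driver_v1_patched.sys"))
```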
The top ten activities included malware creation and the sale of exploit-writing services. The Russian cybercrime investigation company Group-IB published another interesting study on the Russian cybercrime market last month, estimating its business in 2011 to be worth $2.3 billion. Cybercriminals are selling services to conduct cyber-attacks that exploit well-known vulnerabilities, including SQL injection and cross-site scripting attacks. Exploits are scripts that attack vulnerabilities in other programs or applications. According to Trend Micro, browser exploits are the most prevalent type because they enable the download of malicious files: the exploit introduces code that downloads and launches executable files on a victim's computer. Exploit bundles are usually installed on hosting servers. Smart bundles consist of a set of malicious scripts able to select an exploit that matches the victim's characteristics, such as OS version, browser or application type. Exploits are usually sold as a bundle, but they may also be sold singly or rented for a limited period of time at published underground prices. Clearly, every vulnerability represents a serious threat for a specific application. Moreover, it can also menace the security of an organization or a government when it affects the applications and infrastructure they have adopted. It is not possible to follow a single standard approach to the huge range of vulnerabilities, but a series of actions must be put in place starting at the development phase of a product, and security requirements have to be treated as crucial in the design of every solution. Preventing zero-day vulnerabilities is a utopia, but much more can be done once they are discovered: an efficient response can prevent dramatic consequences from a security perspective. The process of patch management must be improved, especially for large organizations, which are common targets of cyber-attacks and usually have long reaction times. Don't forget that it's a race against time, and the only guaranteed defense against a 1-day attack is to patch our systems before the attackers exploit the flaw.
<urn:uuid:800697eb-60ec-4bad-8e22-f0bb066c9905>
CC-MAIN-2017-04
http://blogs.flexerasoftware.com/vulnerability-management/2013/09/vulnerabilities-everywhere.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951311
3,046
2.875
3
Most of us have a nagging feeling that we should have some kind of security in place for the technology we own. Passwords are often used to prevent access to laptops and maybe even individual files. But when it comes to mobile devices, many of us are guilty of a leap of faith. The fact is mobile devices need the same security precautions. Spyware, corruption and theft can affect your mobile devices as easily as they can your computer – and the information can be just as sensitive. “The security concerns you would have with a mobile device are the same as with a laptop, because the same type of information and the same ability to access information within a network is also on your mobile device,” says BlackBerry security expert Michael Brown. Consider what data you have stored on your mobile devices: everything from address books with your important contacts to financial information to emails. Even an innocent email may contain important details about who you are. “In your email itself, there is a lot of sensitive information, and that’s something you want to protect,” says Brown. Passwords 101: Common Mistakes Password protecting your device is the key to peace of mind, but not all passwords are created equal. Many of us find we are overwhelmed with passwords for everything from bank accounts to our front doors – and memorizing them can be onerous. As a result, we write them down or, worse, post them in plain view, making it very easy for a savvy intruder to get at your most private information. Another common mistake is using memory cues to make passwords easier to remember. Using part of a phone number, family name, social security number or birth date may seem innocuous enough. But the truth is this information is often readily available; anyone looking to gain access to a mobile device is well schooled in how to access these details. Even recycling old passwords to create new ones offers an intruder a helping hand into your mobile data. Beefing Up Your Password There is no such thing as a truly impenetrable password, but a strong password should require a lot of time and effort to crack. The best passwords are often longer. Increasing the length of a password by just one character significantly increases the time and effort it takes to guess the exact combination of letters and numbers. When you create your device password, take into account these elements: - At least eight characters in length - A combination of letters of mixed case and numbers - Known only to the user (i.e., not present in any database) - Not found in an English or foreign language dictionary - Never shared - Never written down Passwords are just one component of maintaining a secure mobile solution. To learn more about securing your mobile device, download the user guide specific to your device at http://www.blackberry.com/support/documentation/handhelds/index.shtml
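The claim that adding a single character significantly raises an attacker's effort can be made concrete with back-of-the-envelope arithmetic; the guess rate below is an assumed figure chosen only for illustration, not a measured one.

```python
# Rough keyspace arithmetic behind the "one more character" advice.
GUESSES_PER_SECOND = 1e9  # assumed attacker speed, purely illustrative

def keyspace(alphabet_size, length):
    """Number of possible passwords of a given length."""
    return alphabet_size ** length

def worst_case_days(alphabet_size, length):
    """Days to exhaust the whole keyspace at the assumed guess rate."""
    return keyspace(alphabet_size, length) / GUESSES_PER_SECOND / 86400

# Mixed-case letters plus digits = 26 + 26 + 10 = 62 symbols.
for length in (8, 9, 10):
    print(f"{length} chars: {worst_case_days(62, length):,.1f} days worst case")
```

Each extra character multiplies the search space by the alphabet size, which is why length matters more than any single clever substitution.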
<urn:uuid:09e5ea08-d217-4338-8a7c-74fdca846450>
CC-MAIN-2017-04
http://www.blackberry.com/newsletters/connection/personal/October2006/how-protected-is-your-device.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941741
597
2.90625
3
With data coming from the LRO in large quantities and "do-overs" difficult and expensive to program into the satellite, data protection is a critical issue for both NASA and the project's principal investigators at Arizona State University. In order to arrange for real-time data backup of all images, ASU researchers decided to look to the cloud and contract with Nirvanix for flexible cloud-based storage. According to Jeff Tudor, founder and senior vice president of business development at Nirvanix, the researchers wanted off-site redundant backup of all data in case of a massive disaster at the university: "The images are shot from orbit around the moon, downloaded to NASA at the main data center, and written to disks there. Then they're transferred to the research center at ASU and mirrored over the Internet to one of our cloud facilities." Because of the resolution of the images and the size of the moon's surface, multiple terabytes of data are generated each day, and Tudor says that they anticipate that the data set will continue to grow for some time into the future. "The LRO is scheduled for a one-year mission, but based on the performance of some recent NASA projects we're anticipating a lifetime of several years," he explains. Tudor says that the cloud architecture of the backup means that researchers could continue working to analyze the data, even in a worst-case scenario. "If there's a fire or some other disaster at ASU you can take the CloudNAS software to, say, the Applied Physics Lab in Greenbelt, MD. A simple 6 megabyte download of software gets them started, they provide the same credentials used in the lab at ASU, and there's instant access to the hundreds of terabytes of data stored in our facility," Tudor says.
<urn:uuid:0326d6cb-5552-4b38-93f6-703090e0bccc>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/moon-cloud/1766226495
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952694
367
2.9375
3
World War II was a watershed moment for cryptologists: The Native American code talkers; the Nazis’ field-deployed cipher hardware; Alan Turing and the efforts at Bletchley Park. These all even have their own movies. Or a History Channel documentary, at the very least. But one unsung aspect of the crypto-tasticness of the war is the extensive use of homing pigeons. These brave feathered creatures were used to carry messages to and from England and friendlies in Nazi-occupied territories like France—at great danger to themselves, of course. As much as I can’t help picturing these guys wearing little bird helmets with little bird chin straps, the reality is, they went it alone, without a shred of protection—with rolled-up, encrypted messages affixed to their little bird legs in little bird-sized canisters. This may have been a forgotten chapter if it weren’t for a man in the South of England who discovered a pigeon skeleton while renovating his chimney—and the skeleton had an encoded message still attached. That touched off a quest to crack the Dead Pigeon code. And now, 22-year-old Dídac Sánchez from Spain claims that he’s done it. According to The Telegraph, Sánchez—a Barcelona entrepreneur, said that he has spent about $1.7 million and three years to solve the puzzle. “I put out advertisements on the internet, asking for certain mathematical and IT skills to get the best people for the job,” he said. “The selection process took four or five months as a lot of people turned up claiming to know a lot and then when it came down to it, they were useless. I thought about throwing in the towel at one point.” So what does it say? We may never know. The UK’s GCHQ has confirmed that Sánchez has contacted British authorities with the message and the code; but he’s not revealing anything publicly. That’s likely because he now plans to market new security software that he says is based on the code. The 4YEO (For Your Eyes Only) encryption will allow any text, document, WhatsApp, Messenger, SMS or Skype conversation to be encrypted, as well as telephone calls. The system is “impossible to crack,” he claims, and is offering €25,000 to anyone who can figure out the code’s structure by the end of the year. Will Sanchez’ code fly, as it were? Some are skeptical and think that this is all simply a marketing ploy--because in all likelihood the Dead Pigeon code method is unsolvable anyway. “If the sender was a field agent in occupied France, he may well have had a one-time pad, a sort of cipher that uses a randomly generated key that is as long as the message,” explained Paul Ducklin, a Sophos Security researcher, in a column. Ducklin lays out in detail more about how this type of encoding works, and why it’s rather airtight, and why it means that the message should be considered unbreakable. He added, “Only two copies of the key ever exist: the agent takes one key, or more usually a code-pad consisting of numerous sheets of daily keys, and the agent's handler keeps the other. By destroying one page of the code-pad each day, whether a message was sent or not, the field agent can ensure that each key is only ever used once, or not at all, and the handler can keep in synchronization.” Only time will tell what the truth is. It looks like it’s up to the British government to decide whether to release the message to the public, fittingly, just as the Allied spy agent released the pigeon that started his whole thing, more than 70 years ago.
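The one-time pad Ducklin describes can be sketched in a few lines. This is a simplified modern illustration (XOR over bytes), not the numeric code-group procedure a wartime field agent would actually have used; the message text is invented.

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: combine the message with a random key of equal length.
    The key must be truly random, as long as the message, and never reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

msg = b"PIGEON AWAY AT 1522"          # hypothetical message
ct, key = otp_encrypt(msg)
assert otp_decrypt(ct, key) == msg
# Without the key sheet, every plausible plaintext of the same length is an
# equally valid "decryption" -- which is why such a message may never be read.
```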
<urn:uuid:cebab4b8-03b9-4c93-a252-0af3f0624d33>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/slackspace/world-war-ii-dead-pigeon-code/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00546-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957731
822
2.5625
3
Darknets – private networks carved out of the Internet to allow peer-to-peer sharing – can be quickly and easily created among Web browsers making it possible for people to participate anonymously and for the darknet itself to vanish with barely a trace when all the participants close their browsers, researchers told Black Hat yesterday. Their creation, called Veiled, could be used by political dissidents and others that want to communicate out of the public eye on a network that supports private Web pages not available to non-members of the darknet, say researchers from HP Security Labs who announced their proof of concept browser-based darknet. Traditional darknets, which include the notorious file-sharing networks that set the music industry on a rampage to tear them down, are more complex to create, requiring configuring of firewalls and network address translation that average Internet users lack the skills to perform. Veiled can be set up, used to share files and chat and then melt away if all members close their browsers, says Matt Wood, a senior researcher at HP's Web Security Group. The only trace left is a scrap of encrypted code buried in the browser’s history, he says. Veiled allows people to participate anonymously and to share files that are fragmented and distributed in pieces among the browser memories of participants. No one browser has access to a complete file on its own; it must go through a participating server called router to retrieve all the pieces, he says. These routers, also called supernodes, are necessary for individuals to participate so the communication is not strictly peer-to-peer. These supernodes also encrypt files, split them and distribute them for storage among the browsers of participants. These file fragments are stored redundantly to ensure the files remain available if a browser fails. Veiled relies on HTML 5 with its support for browser storage, high quality Java script libraries and cross-origin requests that allow cross-domain HTTP requests, the researchers say. The darknet supports versions of Firefox, Internet Explorer, Chrome, Safari and Opera browsers. The result is a private network within the Internet that lets users remain anonymous while they communicate via HTTP with access to a distributed file storage system, they say. Communications are protected via public and private keyed SSL. Wood and his co-researcher Billy Hoffman, manager of HP security Labs within HP Software, did not release code for Veiled. They said getting permission to do so from HP’s intellectual property team would have taken too long and the process wouldn't have been completed before their talk. But they say the outline they gave during their briefing should enable others to create similar darknets with browsers, perhaps improved. They noted their version has drawbacks, such as verifying the integrity of file pieces supplied to the darknet by individual browsers. Possibilities for future versions include using the distributed power of Veiled participants to perform distributed computing, splitting up tasks for individual browsers to work on. This story, "HP Researchers Say Browser-Based 'Veiled' Make Darknets a Snap" was originally published by Network World.
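HP did not release Veiled's code, so the following is only a conceptual sketch of the scheme described above: split a file into fragments and hand out redundant copies so that no single browser holds the whole file and the file survives a participant leaving. The browser/HTML5 mechanics, the encryption layer and all of the names here are assumptions made for illustration.

```python
import random

def split_into_fragments(data: bytes, fragment_size: int = 1024):
    """Split a blob into fixed-size fragments."""
    return [data[i:i + fragment_size] for i in range(0, len(data), fragment_size)]

def assign_fragments(fragments, browser_ids, copies=2):
    """Give each fragment to several randomly chosen browsers so the file
    survives any single browser closing. Returns {browser_id: [(idx, frag)]}."""
    store = {b: [] for b in browser_ids}
    for idx, frag in enumerate(fragments):
        for b in random.sample(browser_ids, copies):
            store[b].append((idx, frag))
    return store

def reassemble(store, total_fragments):
    """A supernode gathers fragments back from whichever browsers are online."""
    found = {}
    for frags in store.values():
        for idx, frag in frags:
            found[idx] = frag
    return b"".join(found[i] for i in range(total_fragments))

data = b"example payload " * 200
frags = split_into_fragments(data)
store = assign_fragments(frags, ["browser_a", "browser_b", "browser_c", "browser_d"])
assert reassemble(store, len(frags)) == data
```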
<urn:uuid:98552d8f-8681-4667-adc9-54dfc1c5ae6d>
CC-MAIN-2017-04
http://www.cio.com/article/2425923/networking/hp-researchers-say-browser-based--veiled--make-darknets-a-snap.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00362-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945372
629
2.578125
3
This October teachers around the country are participating in activities for Connected Educator Month. Digital Citizenship Week (October 21-25, 2013) places an emphasis on how all of us – teachers, students and parents – can have thoughtful discussions about being ethical and responsible online. It’s so important to have these discussions considering the digital world in which we live. I am a self-admitted geek – a Google Certified Teacher who carries multiple devices and instinctively checks for Wi-Fi access and electrical outlets whenever I enter a building – but I have no personal experience in navigating today’s digital world as a teenager. My experiences as a digital citizen are entirely as an adult—the first time I saw the internet was as a 23 year old. I’ve never had to make a decision about posting to a social media gaming site or considering if it will hurt my chances for college or a job after I graduate high school. This is my first year teaching a technology course to 8th graders which includes discussions on digital ethics and the importance of creating a positive digital footprint. I will also be teaching these same lessons to my daughters, ages 6 and 10. How do I know what to teach them about these crucial topics and how can I make it relevant to the lives they are living today? In my technology course this fall, I relied heavily on Cable in the Classroom’s InCtrl lessons on digital citizenship. These lessons fit my needs and are a well-developed approach to teaching digital citizenship issues. The lessons are easy to adapt to my class schedule as well as being ready to use right from the handout. The free, online curriculum includes videos and handouts for teachers as well as students. Teacher videos include background information and tips specifically for classroom use. You can then use the student-focused videos to introduce a topic with your students in an engaging format. The most valuable aspect of the InCtrl curriculum for me was the wide variety of discussion starters provided to initiate conversations with my students. Since I don’t have the personal experience of growing up as a teenager online, I found the most important thing I could do as a teacher was open up conversations and listen. As the adult in the room, I could then add my own perspective of ethical choices and long term consequences. It is important to remember that as the teacher we can provide guidance but we must first allow students to share their own experiences since they are ones living it. Teaching students to be good digital citizens is an extremely important topic. Unfortunately it is too often neglected because adults are not sure how to address and explain the issues. The InCtrl curriculum helps teachers start the discussion about digital citizenship topics which can then be supported by and continued with discussions at home. At the start of our digital footprint lesson, I asked students to create a list of all the social media networks and sites at which they participate. I was blown away by the roughly 30 different social media sites used by the students in just one class. The list included everything from gaming sites to sports team fan sites. Their digital footprint starts early in their lives and will be so much more extensive than mine. It made me think of what I might have posted as a 13 year old and if I would want it to be part of my “permanent record”? This, again, emphasizes why this is such an important topic to teach students. 
I also wanted to demonstrate how much information you can discover about someone from a simple Google search. I asked a couple of students to Google their own names, but found that, as 8th graders, they haven't created a considerable footprint yet. So then I asked my students to Google my name. I'm relatively active on social media – I blog, podcast, tweet and write for various publications – so quite a few items popped up. Since this activity took place at the start of the year, during the first week of class, I asked my students to make a list of what they learned about me from just the Google search. It created a great discussion in class. I shared with my students some of the great experiences and connections that I have built through collaboration online and the benefits of maintaining a positive digital footprint. Please join me and hundreds of other educators and parents during Digital Citizenship Week in talking to students about their digital world. I'll be sharing my experiences in a free webinar about the InCtrl lessons called Empowering Students to be InCtrl in a Digital Age on Wednesday, October 23rd at 4:00 pm ET. You can also search the Connected Educator Month calendar (use keyword Digital Citizenship Week) for more free webinars and events to help you brush up on the issues, or check out the InCtrl lessons for ideas you can use today! Eric Langhorst is an American history and technology teacher at Discovery Middle School in Liberty, Missouri. He writes the Speaking of History blog and tweets as @ELanghorst.
<urn:uuid:dca5ade6-a01f-4e1c-828b-24dfd5f533b2>
CC-MAIN-2017-04
https://www.ncta.com/platform/industry-news/teaching-digital-citizenship-everyday/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00362-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958233
1,054
3.359375
3
3D printing, a technology so cool and futuristic you'd think it was invented by the writers of Star Trek, is getting some terrible publicity these days. As you've probably heard, some cretins are using it to produce plastic guns that could slip past metal detectors. But doctors in Michigan used a 3D printer to fabricate a splint to hold open the airway of a dying baby boy. "Quite a few doctors said he had a good chance of not leaving the hospital alive," said April Gionfriddo, about her now 20-month-old son, Kaiba. "At that point, we were desperate. Anything that would work, we would take it and run with it," Gionfriddo added in a written statement. Kaiba was born with tracheobronchomalacia, a rare disease that causes the airway to collapse, making it difficult to breathe. By the time he was 2 months old, the infant could only breathe with the aid of a ventilator. He would stop breathing on a regular basis and required resuscitation every day. Fortunately, researchers at the University of Michigan had been working on a device that could help the infant. Glenn Green and Scott Hollister, medical science professors at Ann Arbor, obtained emergency clearance from the FDA to create and implant a tracheal splint for Kaiba made from a biopolymer called polycaprolactone. The device was created directly from a CT scan of Kaiba's trachea, integrating an image-based computer model with laser-based 3D printing to produce the splint. The splint was sewn around Kaiba's airway to expand the bronchial passage and give it a skeleton to aid proper growth. Over about three years, the splint will be reabsorbed by his body. "It was amazing. As soon as the splint was put in, the lungs started going up and down for the first time and we knew he was going to be OK," said Green. 3D printing, Green and Hollister say, can also be used to construct other body parts. They have already used the process to build and test experimental ear and nose structures and to rebuild bone structures in the spine and face. Kaiba is doing well, and he and his family, including an older brother and sister, live in Ohio. Plastic guns and amazing medicine aside, 3D printing is not only cool, it has gradually become more mainstream. Autodesk, as I wrote a while back, has developed software that lets you create 3D images which can be printed as solid objects by a number of companies, and the price of 3D printers, while still high, is coming down. I missed the recent Maker Faire in San Mateo, California, but people who were there told me that 3D printing played an important role; you can see more of it at Maker Faires in other parts of the country.
<urn:uuid:986a6fee-55b9-4eb2-a8a1-43aaaa4242f0>
CC-MAIN-2017-04
http://www.cio.com/article/2370464/consumer-technology/how-3d-printing-can-save-a-life.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.982063
625
3.140625
3
Google Captcha Dumps Distorted Text Images
Tired of reading those wavy words? Changes to Google's reCaptcha system -- which doubles as quality control for its book and newspaper scanning projects -- prioritize bot-busting puzzles based on numbers.
Google is making changes to its reCaptcha system: distorted text images are out, while numbers and more-adaptive, puzzle-based authentication checks are in. The change is necessary because text-only Captchas are no longer blocking a sufficient number of automated log-in attempts, according to Google's reCaptcha product manager, Vinay Shet. "Over the last few years advances in artificial intelligence have reduced the gap between human and machine capabilities in deciphering distorted text," he said in a Friday blog post. "Today, a successful Captcha solution needs to go beyond just relying on text distortions to separate man from machine." Based on extensive user testing, Google thinks it can better separate real users from bots by using better risk analysis. This is based in part on watching what a supposed user is doing before, during and after the check, and serving up multiple puzzle-based checks. Although Shet didn't spell out exactly what these puzzles might look like, he did say that unlike humans, bots have a tough time with numbers. "We've recently released an update that creates different classes of Captchas for different kinds of users. This multi-faceted approach allows us to determine whether a potential user is actually a human or not, and serve our legitimate users Captchas that most of them will find easy to solve," he said. "Bots, on the other hand, will see Captchas that are considerably more difficult and designed to stop them from getting through." The Captcha -- an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart -- challenge-response technique was first developed at Carnegie Mellon University in 2000. The approach is designed to create a test that humans can pass, but computers can't. In theory, Captchas can be used for a variety of tasks, including preventing automated spam from appearing in blog comments, blocking automated spam-bot signup attempts for email services -- such as free Gmail accounts -- and safeguarding Web pages that site administrators don't want to be tracked by search bots. In fact, Google purchased reCaptcha in 2009, in a bid to better block spammers who signed up for free accounts. The approach offered by reCaptcha was notable not just for presenting users with a Captcha phrase, but for drawing those images from scans of books. That squares with Google's own Google Books and Google News Archive Search projects, which rely on optical character recognition (OCR) scans of printed source material, which aren't 100% accurate. By designating scanned content for use with the reCaptcha system, however, Google killed two birds with one stone: creating a security check, while also tapping users to manually enter or verify scanned text for free. In short order, Google also rolled out -- and still offers -- reCaptcha as "a free anti-bot service that helps digitize books," available for use by any website. "Answers to reCaptcha challenges are used to digitize textual documents," according to Google's reCaptcha overview.
"It's not easy, but through a sophisticated combination of multiple OCR programs, probabilistic language models, and most importantly the answers from millions of humans on the internet, reCaptcha is able to achieve over 99.5% transcription accuracy at the word level." But no information security challenge-response system -- at least to date -- is perfect. Spam rings also have access to OCR tools, and have duly defeated many Captcha systems. Other criminal groups, echoing Google's crowd-sourced reCaptcha approach, have even tricked users into recording target sites' Captcha phrases -- most sites have a finite pool of possibilities -- with the lure of free porn. By adopting a more adaptive approach to verifying people's identities via reCaptcha, Google has taken a page from Facebook's login verification system, which looks at a variety of factors when someone attempts to log into an account, including their geographic location, and whether they're using a computer that Facebook has seen before. For unusual types of log-ins, Facebook's system can hit would-be users with an escalating series of security challenges. Similarly, RSA's Adaptive Authentication system, which is used by about 70 of the country's 100 biggest banks to verify their customers' identity, assesses a number of risk factors before granting access. Based on different risk factors, furthermore, users can also be made to jump through more hoops before the system believes that they are who they say they are. It's been a busy month for Captcha researchers. Earlier this month, a team of Carnegie Mellon researchers unveiled an inkblot-based Captcha system that's designed to defeat automated attacks. This week, startup firm Vicarious claimed it has created an algorithm that can successfully defeat any text-based Captcha system, as well as defeat reCaptcha -- widely seen as the toughest Captcha system available -- 90% of the time, New Scientist reported. But Luis von Ahn, who was part of the Carnegie Mellon team that created Captchas, remains skeptical, saying he's counted 50 such Captcha-breaking claims since 2003. "It's hard for me to be impressed since I see these every few months," he told Forbes.
<urn:uuid:2f9ccc6c-65e0-445d-825f-d918e6ff06ae>
CC-MAIN-2017-04
http://www.darkreading.com/attacks-and-breaches/google-captcha-dumps-distorted-text-images/d/d-id/1112111?cid=sbx_bigdata_related_news_vulnerabilities_and_threats_big_data&itc=sbx_bigdata_related_news_vulnerabilities_and_threats_big_data
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00574-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951296
1,171
2.71875
3
Back in January 1997, a group of people developed RFC 2065, Domain Name System Security Extensions, a document detailing the introduction of public/private key cryptography into the public DNS system. By adding cryptography to the DNS, users would be able to verify that the DNS responses they receive are genuinely valid and accurate. The design of DNSSEC was updated in March 1999 by RFC 2535 but was never deployed. In March 2005, RFCs 4033, 4034 and 4035 were published, detailing a new version of the protocol named DNSSEC-bis. This version of the protocol is easier to understand and deploy, but received little attention until the summer of 2008. Those of us in the industry knew that DNSSEC was important, but the operational management, increased query size, and technical problems with many implementations of DNS prevented it from being deployed. The "DNS Summer of Fear" occurred in 2008, when security researcher Dan Kaminsky exposed a vulnerability in the DNS protocol through which DNS cache poisoning could be achieved in just a few seconds, allowing an attacker to spoof the DNS identity of a website. A short-term fix, known as DNS source port randomization, was deployed to help fend off attacks while work on a long-term solution began. The long-term fix requires the use of DNSSEC to securely sign and validate the global DNS system and, as with all things DNS, starts with the security of the DNS Root Zone, a.k.a. ".". The DNS Root Zone is produced and maintained through a collaborative effort between ICANN, VeriSign, and the U.S. Department of Commerce. These three organizations have been working extensively to develop a secure and transparent way to manage the signing of the Root Zone since early 2009, and on July 15, 2010, the fruits of their labor will become reality when the signed root is deployed. On June 16, 2010, the first of two Root Key Signing Key (KSK) generation ceremonies was performed at a secure ICANN facility in Culpeper, VA. On July 12, 2010, a second KSK ceremony will occur at a second secure ICANN facility in El Segundo, CA. The purpose of these ceremonies is to generate the specialized cryptographic materials needed to sign the root zone, distribute copies to the two secure facilities, distribute the cryptographic fingerprint data to Trusted Community Representatives (TCRs) for verification, and distribute crypto material to Recovery Key Share Holders in case of failure of these two ICANN facilities. At Dyn Inc., we await the deployment of the signed Root Zone with much excitement. A signed root zone means that key stakeholders are paying attention to the criticality of the DNS and the role it serves in the Internet. To do our part, we have taken a number of steps to DNSSEC-enable our systems and infrastructure. In the coming months, we'll continue to enable DNSSEC communication with other registries, and develop additional ways to manage DNSSEC crypto material to provide our users with an easy and simple path to DNSSEC-signing their DNS zones. In the meantime, we all look forward to the signed root deployment on July 15th. Written by Tom Daly, Chief Technology Officer at Dynamic Network Services, Inc. Dyn is a cloud-based Internet Performance company. Dyn helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Through a world-class network and unrivaled, objective intelligence into Internet conditions, Dyn ensures traffic gets delivered faster, safer, and more reliably than ever.
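The chain of trust that a signed root makes possible can be sketched conceptually: trust in the root KSK is extended, zone by zone, down to the final signed answer. The record structures and the verify() stub below are placeholders, not a real validator; a real resolver checks RRSIG signatures over DNSKEY, DS and answer RRsets with actual public-key cryptography and distinguishes key-signing keys from zone-signing keys.

```python
# Conceptual sketch of DNSSEC's chain of trust -- not a real validator.

def verify(data, signature, key):
    # Stand-in that always "succeeds"; a real implementation performs an
    # RSA or ECDSA signature verification here.
    return True

def validate(chain, root_ksk):
    """Walk from the signed root down to the answer. Each zone's key set
    must be signed by a key we already trust, and each parent vouches for
    its child's key via a DS record, so trust in the root key extends,
    link by link, to the final signed answer."""
    trusted_key = root_ksk
    for zone in chain["zones"]:                      # ".", "com.", "example.com."
        if not verify(zone["dnskey_set"], zone["dnskey_sig"], trusted_key):
            return False                             # broken link: answer is bogus
        if "child_key" in zone:                      # key the zone's DS vouches for
            trusted_key = zone["child_key"]
    answer = chain["answer"]
    return verify(answer["rrset"], answer["rrsig"], trusted_key)

example = {
    "zones": [
        {"dnskey_set": "...", "dnskey_sig": "...", "child_key": "com-key"},
        {"dnskey_set": "...", "dnskey_sig": "...", "child_key": "example-key"},
        {"dnskey_set": "...", "dnskey_sig": "..."},
    ],
    "answer": {"rrset": "www.example.com. A 192.0.2.1", "rrsig": "..."},
}
print(validate(example, root_ksk="root-ksk-2010"))   # True with the stub above
```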
<urn:uuid:c80caa6c-883e-4dc4-939a-c028f98b0936>
CC-MAIN-2017-04
http://www.circleid.com/posts/the_root_dnssec_deployment_and_dyn_inc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91636
791
3.125
3
Android App Development
Learn to develop Android apps using Java and Eclipse. With Android phones being produced by all of the major phone manufacturers and with the addition of new tablet devices, it's no surprise that the Android platform is the fastest growing mobile development platform in the world. In this course, you will learn to develop Android applications using Java and the Eclipse development environment. You will learn basic application development, including using the Android mobile camera, working with geolocation tools, and playing audio and video files.
<urn:uuid:5945468e-dace-47df-9942-f3b61301aa0b>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/116431/android-app-development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00408-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868708
108
2.9375
3
The US joined WTO on January 1st, 1995, when the organization was initiated. Since then it has been an integral part of the organization due to its dominant position in world trade and commerce. The two share a close relationship with the US reaching to the organization to settle its disputes. It has filed disputes against several countries including Indonesia, China and India to settle trade related issues. It has also been a part of 119 cases as a respondent and more than a hundred cases as a third party. The country has always been a big supporter of the organization and believes in bringing peace and harmony in the country, which is not possible until strong trade rules and regulations are created and implemented. The United States is a member of these groups in the negotiations: > Friends of Ambition (NAMA) > Friends of Fish (FoFs) > Joint proposal (in intellectual property) Nonetheless, in the country the WTO is not always looked as a respected organization by various groups. There lies a section of people that is against the WTO’s stipulations and laws and considers it a reason for the gradual downfall of the US economy. Some reasons why many people in the US are against WTO are: - The organization allows some countries to use protection while some countries cannot do so. This law has caused several issues in the US as the country cannot put tariffs against Chinese imports whereas China has the right to do so. This has been a major bone of contention as many people are of the view that laws like these prevent the US economy from flourishing while allowing other countries to succeed. - The World Trade Organization is a legal entity and is not accountable to anyone, not even the member nations. The organization holds a lot of power that goes completely unchecked. Due to this reason a great number of people want the US to cut ties with the organization so that it is not controlled by it. Many have even gone to the extent of calling the US a puppet in the hands of the WTO. - One of the biggest concerns is the power that the WTO gives to global corporations. Under certain conditions, corporations have the right to sue actual nations putting their sovereignty on the line. Many experts say that giant corporations can use the power invested in them to bend countries according to their will and cause troubles. Conversely, there are people who do have faith in the organization, which is why the US continues to be a member of the WTO. It cannot be denied that the country has greatly benefitted from its membership as well. Being a member allows the US to reach to new markets. At the same time the problem related with disputes has been handled as well as the WTO deals with the trade related disputes to find justified solutions.
<urn:uuid:a4cce843-eda3-4817-a39f-990d0a7f477c>
CC-MAIN-2017-04
http://www.best-practice.com/best-practices-regulation/business-regulations-best-practices-regulation/us-and-the-world-trade-organization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00004-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975633
553
2.828125
3
The mention of microlending may call up images of loans made to farmers in India or credit extended to women entrepreneurs in immigrant communities. While the village-based credit delivered by non-governmental organizations does constitute microlending, it is only part of the scope of microlending today. Now, instead of rural villages, microlending has shifted to social media-based online communities and the prevalent non-bank payment companies are facilitating the transactions. To prevent the loss of short-term loans to competitors, banks need to establish a social media presence that will help retain customers and generate income. By definition, a microloan is any extension of credit in which either the borrower, the lender or the amount lent is small, seldom above $25,000 and often as little as $500. Oftentimes, a microloan is made to a person who otherwise would not qualify for a traditional loan or credit card. This is not necessarily because they are not credit worthy, but perhaps lack a sufficient credit history. Community is the Basis of Microlending It wasn't so long ago that banks served a defined county or region and that credit unions served only strictly-defined groups. Although many financial institutions have moved away from a purely community-centered orientation, many credit unions and community banks still maintain an emphasis on community in their branches and lending practices. This concept of providing financial services within a defined geography or for a specific group is simply a formalization of the centuries old practice of lending circles in which people pooled their savings and made loans to members of the circle. Lending circles make loans with confidence because they know the applicant's circumstances and capacity to repay. This keeps default rates low. It isn't peer pressure to repay that makes microlending work, it is the community's knowledge of the applicant before the loan is made. A New Type of Community Thinking of Facebook as a community is the first step financial institutions must take in order to understand how to use it, and why loans are logical services to deliver via social media. Facebook enables people to build an online community made up of people they have met and then add new acquaintances that already share a common connection with an existing friend. LinkedIn groups work the same way. People are associated with colleagues and by extension their colleagues' colleagues. It is this connectedness that makes social media communities akin to villages and lending circles. Microlending on Facebook is already underway. Kiva and Accion are online organizations that solicit for charitable investments to fund microloans to entrepreneurs and small businesses. Among the most successful microlenders is Kabbage. Kabbage, financially backed by UPS, focuses on advancing funds to internet-based merchants. Also of significant note is Microplace, a related PayPal Company. It might be no surprise, if in a few years, Microplace or some other online entity becomes serious competition for banks, just as PayPal has on the payments side.
<urn:uuid:a4110042-ff07-4c20-8162-1a277ffd9d9b>
CC-MAIN-2017-04
http://www.banktech.com/channels/microlending-and-social-media-competition-between-banks-and-non-bank-lenders/a/d-id/1294879?page_number=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00401-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961566
596
2.890625
3
Arp C.D.,U.S. Geological Survey | Jones B.M.,U.S. Geological Survey | Schmutz J.A.,U.S. Geological Survey | Urban F.E.,U.S. Geological Survey | Jorgenson M.T.,ABR Inc. Polar Biology | Year: 2010 Arctic habitats at the interface between land and sea are particularly vulnerable to climate change. The northern Teshekpuk Lake Special Area (N-TLSA), a coastal plain ecosystem along the Beaufort Sea in northern Alaska, provides habitat for migratory waterbirds, caribou, and potentially, denning polar bears. The 60-km coastline of N-TLSA is experiencing increasing rates of coastline erosion and storm surge flooding far inland resulting in lake drainage and conversion of freshwater lakes to estuaries. These physical mechanisms are affecting upland tundra as well. To better understand how these processes are affecting habitat, we analyzed long-term observational records coupled with recent short-term monitoring. Nearly the entire coastline has accelerating rates of erosion ranging from 6 m/year from 1955 to 1979 and most recently peaking at 17 m/year from 2007 to 2009, yet an intensive monitoring site along a higher bluff (3-6 masl) suggested high interannual variability. The frequency and magnitude of storm events appears to be increasing along this coastline and these patterns correspond to a greater number of lake tapping and flooding events since 2000. For the entire N-TLSA, we estimate that 6% of the landscape consists of salt-burned tundra, while 41% is prone to storm surge flooding. This offset may indicate the relative frequency of low-magnitude flood events along the coastal fringe. Monitoring of coastline lakes confirms that moderate westerly storms create extensive flooding, while easterly storms have negligible effects on lakes and low-lying tundra. This study of two interacting physical mechanisms, coastal erosion and storm surge flooding, provides an important example of the complexities and data needs for predicting habitat change and biological responses along Arctic land-ocean interfaces. © 2010 The Author(s). Source Garshelis D.L.,University of Minnesota | Johnson C.B.,ABR Inc. Marine Pollution Bulletin | Year: 2013 Sea otters (Enhydra lutris) suffered major mortality after the Exxon Valdez oil spill in Prince William Sound, Alaska, 1989. We evaluate the contention that their recovery spanned over two decades. A model based on the otter age-at-death distribution suggested a large, spill-related population sink, but this has never been found, and other model predictions failed to match empirical data. Studies focused on a previously-oiled area where otter numbers (~80) stagnated post-spill; nevertheless, post-spill abundance exceeded the most recent pre-spill count, and population trends paralleled an adjacent, unoiled-lightly-oiled area. Some investigators posited that otters suffered chronic effects by digging up buried oil residues while foraging, but an ecological risk assessment indicated that exposure levels via this pathway were well below thresholds for toxicological effects. Significant confounding factors, including killer whale predation, subsistence harvests, human disturbances, and environmental regime shifts made it impossible to judge recovery at such a small scale. © 2013 Elsevier Ltd. Source Harwell M.A.,Harwell Gentile and Associates | Gentile J.H.,Harwell Gentile and Associates | Johnson C.B.,ABR Inc. 
| Garshelis D.L.,Grand Rapids | Parker K.R.,Data Analysis Group Human and Ecological Risk Assessment | Year: 2010 A comprehensive, quantitative risk assessment is presented of the toxicological risks from buried Exxon Valdez subsurface oil residues (SSOR) to a subpopulation of sea otters (Enhydra lutris) at Northern Knight Island (NKI) in Prince William Sound, Alaska, as it has been asserted that this subpopulation of sea otters may be experiencing adverse effects from the SSOR. The central questions in this study are: could the risk to NKI sea otters from exposure to polycyclic aromatic hydrocarbons (PAHs) in SSOR, as characterized in 2001-2003, result in individual health effects, and, if so, could that exposure cause subpopulation-level effects? We follow the U.S. Environmental Protection Agency (USEPA) risk paradigm by: (a) identifying potential routes of exposure to PAHs from SSOR; (b) developing a quantitative simulation model of exposures using the best available scientific information; (c) developing scenarios based on calculated probabilities of sea otter exposures to SSOR; (d) simulating exposures for 500,000 modeled sea otters and extracting the 99.9% quantile most highly exposed individuals; and (e) comparing projected exposures to chronic toxicity reference values. Results indicate that, even under conservative assumptions in the model, maximum-exposed sea otters would not receive a dose of PAHs sufficient to cause any health effects; consequently, no plausible toxicological risk exists from SSOR to the sea otter subpopulation at NKI. © Taylor & Francis Group, LLC.
Dou F.,University of Alaska Fairbanks | Yu X.,University of Texas at Dallas | Ping C.-L.,University of Alaska Fairbanks | Michaelson G.,University of Alaska Fairbanks | And 2 more authors. Geoderma | Year: 2010 Coastal erosion plays an important role in the terrestrial-marine-atmosphere carbon cycle. This study was conducted to explore the spatial variation of soil organic carbon (SOC) and other soil properties along the coastline of northern Alaska. A total of 769 soil samples, from 48 sites along over 1,800 km of coastline in northern Alaska, were collected during the summers of 2005 and 2006. A geographic information system (GIS) and a geostatistical method (ordinary kriging) were coupled to investigate the spatial variation of SOC along the coastline. SOC varied widely, ranging from 0.8 to 187.4 kg C m⁻², with the greatest values observed along the middle of the coastline and the lowest along the northeastern coastline. Compared to the 1-D model or the 1-D model with shortcut distance, the 2-D model was more reasonable for describing SOC along the coastline. The Gaussian correlation structure model had less prediction error than the other geostatistical models examined. All mapping results also indicate that soils of the northwestern coastline stored more SOC than those of the northeastern coastline. The estimated total SOC along the coastline of northern Alaska was 6.86 × 10⁷ kg m⁻¹. The prediction errors indicated that greater errors were observed at both ends of the coastline than elsewhere, although the range was from 0.739 to 0.779. Our study suggests that the isotropic 2-D model without a trend, with the nugget effect and the Gaussian correlation structure, is a useful tool for investigating SOC at large scales. Stable isotope results for the organic matter indicate that the SOC is mainly derived from C3 plants, with values ranging from -30‰ to -22‰. © 2009 Elsevier B.V.
Ping C.-L.,University of Alaska Fairbanks | Michaelson G.J.,University of Alaska Fairbanks | Guo L.,University of Southern Mississippi | Jorgenson M.T.,ABR Inc. | And 4 more authors. Journal of Geophysical Research: Biogeosciences | Year: 2011 Carbon, nitrogen, and material fluxes were quantified at 48 sampling locations along the 1957 km coastline of the Beaufort Sea, Alaska. Landform characteristics, soil stratigraphy, cryogenic features, and ice contents were determined for each site. Erosion rates for the sites were quantified using satellite images and aerial photos, and the rates averaged across the coastline increased from 0.6 m yr⁻¹ during circa 1950-1980 to 1.2 m yr⁻¹ during circa 1980-2000. Soils were highly cryoturbated, and organic carbon (OC) stores ranged from 13 to 162 kg OC m⁻² in banks above sea level and averaged 63 kg OC m⁻² over the entire coastline. Long-term (1950-2000) annual lateral fluxes due to erosion were estimated at -153 Gg OC, -7762 Mg total nitrogen, -2106 Tg solids, and -2762 Tg water. Total land area loss along the Alaska Beaufort Sea coastline was estimated at 203 ha yr⁻¹. We found coastal erosion rates, bank heights, soil properties, and material stores and fluxes to be extremely variable among sampling sites. In comparing two classification systems, one used to classify coastline types from an oceanographic, coastal morphology perspective and one used to classify geomorphic units from a terrestrial, soils perspective, we found both systems were effective at identifying significant differences among classes for most material stores, but the coastline classification did not find significant differences in erosion rates because it lacked differentiation of soil texture. Copyright © 2011 by the American Geophysical Union.
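The ordinary kriging with a Gaussian correlation structure described in the Dou et al. abstract above can be sketched from scratch. The sample coordinates and SOC values below are invented for illustration; only the method (solve the kriging system with an unbiasedness constraint) reflects what the abstract describes.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, corr_range=20.0):
    """Gaussian ("Gaussian correlation structure") covariance model."""
    return sill * np.exp(-(h / corr_range) ** 2)

def ordinary_kriging(coords, values, target, corr_range=20.0):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    # Kriging system: covariances between samples, plus the unbiasedness
    # constraint that the weights sum to one (the Lagrange multiplier row).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gaussian_cov(d, corr_range=corr_range)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gaussian_cov(np.linalg.norm(coords - target, axis=1),
                         corr_range=corr_range)
    weights = np.linalg.solve(A, b)[:n]
    return float(weights @ values)

# Hypothetical sample points along a stretch of coastline (km, km) and
# soil organic carbon stores (kg C per square metre) -- not real data.
coords = np.array([[0.0, 0.0], [10.0, 2.0], [25.0, 1.0], [40.0, 3.0]])
soc = np.array([12.0, 45.0, 80.0, 30.0])
print(ordinary_kriging(coords, soc, target=np.array([18.0, 1.5])))
```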
<urn:uuid:1c88e1d0-d184-42f6-ac19-cc9c57941599>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/abr-inc-277495/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00401-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912255
1,902
2.515625
3
Back in 2008, the Sloan Digital Sky Survey (SDSS) came to an end, leaving behind hundreds of terabytes of publicly available data that has since been used in a range of research projects. Based on this data, researchers have been able to discover distant quasars powered by supermassive black holes in the early universe, uncover collections of sub-stellar objects, and map extended mass distributions around galaxies with weak gravitational lensing. Among the diverse groups of scientists tackling problems that can now be understood using the SDSS data is a team led by Dr. Risa Wechsler from Stanford University's Department of Physics and the SLAC National Accelerator Laboratory. Wechsler is interested in the process of galaxy formation, the development of universal structure, and what these can tell us about the fundamental physics of the universe. Naturally, dark energy and dark matter enter the equation when one is considering galaxy formation, and there are few better keys to probing these concepts than data generated from the SDSS. Just as the Sloan Digital Sky Survey presented several new data storage and computational challenges, so too do the efforts to extract meaningful discoveries. Teasing apart important information for simulations and analysis generates its own string of terabytes on top of the initial SDSS data. This creates a dark matter of its own for computer scientists as they struggle to keep pace with ever-expanding volumes that are outpacing the capability of the systems designed to handle them. Wechsler's team used the project's astronomical data to compare the luminosity of millions of galaxies to that of our own Milky Way. All told, the project took images of nearly one-quarter of the sky, creating its own data challenges. The findings revealed that galaxies with two nearby satellites like the Large and Small Magellanic Clouds are rare: only about four percent of galaxies are similar to the Milky Way in this respect. To arrive at their conclusions, the group downloaded all of the publicly available Sloan data and began looking for satellite galaxies around the Milky Way, combing through about a million galaxies with spectroscopy to select a mere 20,000 with luminosity similar to that of our own galaxy. With these select galaxies identified, they undertook the task of mining those images for evidence of nearby fainter galaxies via a random review method. As Wechsler noted, running on the Pleiades supercomputer at NASA Ames, it took roughly 6.5 million CPU hours to run a simulation of a region of the universe with 8 billion particles, making it one of the largest simulations ever done in terms of particle numbers. She said that when you move to smaller box sizes it takes a lot more CPU time per particle because the universe is more clustered on smaller scales. Wechsler described the two distinct pipelines required for this type of research. First, there's the simulation in which researchers spend time looking for galaxies in a model universe. Wechsler told us that this simulation was done on the Pleiades machine at Ames across 10,000 CPUs. From there, the team performed an analysis of this simulation, which shows the evolution of structure in that piece of the universe across its entire history of almost 14 billion years, a process that involves examining dark matter halo histories across that whole span.
As she noted, the team was "looking for gravitationally bound clumps in that dark matter distribution; you have a distribution of matter at a given time and you want to find the peaks in that density distribution since that is where we expect galaxies to form. We were looking for those types of peaks across the 200 snapshots we took to summarize that entire 14 billion year period."

The team needed to understand the evolutionary processes that occurred between the many billions of years captured in 200 distinct moments. This meant they had to trace the particles from one snapshot to the next in their clumps, which are called dark matter halos. Once the team found the halos, which again are associated with galaxy formation, they did a statistical analysis that sought out anything that looked like our own Milky Way.

Wechsler told us that "the volume of the simulation was comparable to the volume of the data that we were looking at. Out of the 8 million or so total clumps in our simulation we found our set of 20,000 that looked like possibilities to compare to the Milky Way. By looking for fainter things around them — and remember there are a lot more faint things than bright ones — we were looking for many, many possibilities at one time."

The computational challenges are abundant in a project like this, Wechsler said. Of all the bottlenecks, storage has been the most persistent, although she noted that as of now there are no real solutions to these problems. Aside from bottlenecks due to the massive storage requirements, Wechsler said the other computational challenge is that even though this project represented one of the highest-resolution simulations at such a volume, the team still needs more computing power. She said that although they can do larger simulations at lower resolution, getting the full dynamic range of the calculation is critical. This simulation breaks new ground in being able to resolve Magellanic Cloud-sized objects over a large volume, but it is still smaller than the volume that the observations are able to probe. This means that scaling this kind of calculation up to the next level is a major challenge, especially as Wechsler embarks on new projects.

"Our data challenges are the same as those in many other fields that are tackling multiscale problems. We have a wide dynamic range of statistics to deal with, but what enabled us to do this simulation is being able to resolve many small objects in a large volume. For this and other research projects, having a wide dynamic range of scales is crucial, so some of our lessons can certainly be carried over to other fields."

As Alex Szalay from the Johns Hopkins University Department of Physics and Astronomy noted, this is a prime example of the kinds of big data problems that researchers in astrophysics and other fields are facing. They are, as he told us, "forced to make tradeoffs when they enter the extreme scale" and need to find ways to manage both storage and CPU resources so that these tradeoffs have the least possible impact on the overall time to solution. Dr. Szalay addressed some of the specific challenges involved in Wechsler's project in a recent presentation called "Extreme Databases-Centric Scientific Computing." In the presentation he addresses the new scalable architectures required for data-intensive scientific applications, looking at databases as the root point from which to begin exploring new solutions.

For the dark energy survey, the team will take images of about one-eighth of the sky, going back seven billion years.
The Large Synoptic Survey Telescope, which is currently being built, will take images of half the sky every three days and will detect even fainter objects, reaching the brightest sources back to a few billion years after the Big Bang. One goal of this work is to map where everything is in order to figure out what the universe is made of. Galaxy surveys help with this research because simulations can map the physics to large-scale events, helping us understand galactic evolution.
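The halo-finding step Wechsler describes, locating gravitationally bound clumps as peaks in the density field, is often approximated with friends-of-friends grouping: any two particles closer than a chosen linking length belong to the same clump. Below is a minimal, illustrative Python sketch of that idea on toy data; the team's production pipeline is of course far more sophisticated, and the linking length, particle counts, and clump positions here are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(positions, linking_length):
    """Label particles so that any pair closer than linking_length
    ends up in the same group (a toy halo finder)."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=linking_length)  # set of (i, j) index pairs

    parent = list(range(len(positions)))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra  # merge the two groups

    roots = np.array([find(i) for i in range(len(positions))])
    _, labels = np.unique(roots, return_inverse=True)
    return labels

# Toy data: two dense "halos" plus a sparse uniform background.
rng = np.random.default_rng(42)
halo_a = rng.normal([10, 10, 10], 0.5, size=(200, 3))
halo_b = rng.normal([40, 40, 40], 0.5, size=(150, 3))
background = rng.uniform(0, 50, size=(100, 3))
particles = np.vstack([halo_a, halo_b, background])

labels = friends_of_friends(particles, linking_length=1.0)
sizes = np.bincount(labels)
print("clumps with >= 20 particles:", sorted(sizes[sizes >= 20], reverse=True))
```

Linking clumps across the 200 snapshots then amounts to matching group membership from one snapshot to the next, which is where much of the bookkeeping (and storage) cost the article describes comes in.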
<urn:uuid:554732b8-0d6f-42d3-ad23-04aa8fe173a3>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/05/31/a_dark_matter_for_astrophysics_research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950303
1,505
3.890625
4
In 2012, 1.5 billion Wi-Fi enabled devices were shipped to consumers across the globe. Everything from baby monitors to cameras to TVs and tablets were sold and are now voraciously feeding off of the wireless broadband in our homes, cafes, and even in our public parks. It's practically ubiquitous now, but the seemingly magical technology that enables Wi-Fi is barely fifteen years old.

When they were first deployed, wireless Local Area Network (LAN) systems were designed to serve a limited number of business applications in controlled environments. Picture a storage facility floor where employees check inventory with a wireless scanner. Today, wireless LANs fuel everything from the biggest billion-dollar businesses to your iPhone. But people are using Wi-Fi on so many devices and for such data-heavy applications that the spectrum used to transmit Wi-Fi is becoming saturated. It's struggling to keep up with demand. In order to ensure a future with widespread access to both public and private Wi-Fi networks, something needs to be done.

The current standard, 802.11n operating on the 2.4 GHz band, is capable of a theoretical maximum of 300 to 450 Mbps per transmission point. This may seem pretty fast, but looking toward the future, it's not going to be enough. That's where 802.11ac comes in. It's capable of 1 Gbps, and it operates on the 5 GHz band, which means it can handle more users, more devices, and bigger apps, and pull more of the burden off of cellular networks. Even better, many old devices will work on the new transmission standard (though they receive no new benefit) and, similarly, new devices will still work on old transmissions, so there's no need to worry that you'll need all-new stuff.

But even more exciting than new super-speed capabilities is the ability of 802.11ac to support multiple devices in open spaces. The older 802.11n was capable of four spatial streams, while the new 802.11ac is capable of eight. This means that if 10 people are using 30 Mbps each on a public 802.11n Wi-Fi system, 20 could use the same amount on the new 802.11ac system, even if the maximum speed of the new system were the same as the old one. This works because of 802.11ac's multiple-user multiple-input multiple-output capabilities (thankfully abbreviated as Multi-User MIMO). The old system only supported single-user MIMO per access point, which could only offer full benefit to one device at a time. The new, smarter system allows an equal amount of bandwidth to be assigned to multiple users simultaneously. Where Single-User MIMO has to switch its "attention" between multiple devices in order to deliver a connection, Multi-User MIMO can connect to several devices at once.

What this all means is that 802.11ac is perfect for outdoor public Wi-Fi applications where many people are accessing the network on a variety of devices. This is particularly exciting for cable because of the 150,000+ public Wi-Fi hotspots we've already installed across the country. They're available at no extra charge to cable broadband customers, and while they're revolutionizing how broadband is accessed outside the home, they're also testing the limits of what the old 802.11n can do, especially in densely populated areas like New York City and Washington, DC. In order to keep up with demand, we need to adopt the new 802.11ac standard.

Unfortunately, establishing 802.11ac as the new standard isn't as easy as installing new hardware. It's really about more access to the 5 GHz band that allows 802.11ac to work to its full potential.
This is why it’s imperative that the FCC remove existing encumbrances on the 5 GHz band and freely allow businesses to take advantage of next generation Wi-Fi. As a country, we have the opportunity to establish ourselves as a global leader in public Wi-Fi availability, speed, and scale through 802.11ac and the 5 GHz band. The possibilities are practically endless when everyone has access to more unlicensed spectrum. 802.11ac is most certainly coming. We need to make sure we’re ready when it does.
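To make the sharing argument concrete, here is a toy Python model of the "10 users at 30 Mbps" example above. It assumes, purely for illustration, that a single-user MIMO access point time-slices its capacity among clients one at a time, while a multi-user MIMO access point serves groups of clients in parallel, each group member getting the full slot rate on its own spatial stream. Real Wi-Fi scheduling, contention, and PHY rates are far messier than this.

```python
import math

def per_user_rate_su_mimo(ap_capacity_mbps, n_users):
    """Single-user MIMO: one client served per airtime slot, so the
    access point's capacity is time-sliced evenly across all users."""
    return ap_capacity_mbps / n_users

def per_user_rate_mu_mimo(ap_capacity_mbps, n_users, group_size):
    """Multi-user MIMO: up to group_size clients are served at once,
    each on its own spatial stream at the full slot rate, so airtime
    is divided among groups rather than individual users."""
    n_groups = math.ceil(n_users / group_size)
    return ap_capacity_mbps / n_groups

AP_CAPACITY = 300  # Mbps; same nominal capacity assumed for both systems

print(per_user_rate_su_mimo(AP_CAPACITY, n_users=10))                # 30.0
print(per_user_rate_mu_mimo(AP_CAPACITY, n_users=20, group_size=2))  # 30.0
```

Under these assumptions, doubling the group size doubles aggregate throughput without raising the headline link rate, which is exactly the point made above about 802.11ac in crowded public spaces.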
<urn:uuid:81e559c9-b32e-48f5-ba8b-c5e736410950>
CC-MAIN-2017-04
https://www.ncta.com/platform/broadband-internet/what-is-802-11ac-and-why-is-it-better-for-public-wi-fi/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00519-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94249
903
2.875
3
In a recent "The More You Know" public service announcement from NBC, the TODAY show's Matt Lauer stated, "Teach your kids to surf the web and post responsibly." This statement applies to teachers, administrators and staff as well, not only parents. With half (52 percent) of all children in the United States having access to mobile devices, it is imperative that kids understand how to be responsible and safe online. Internet safety and digital citizenship should be taught early and often throughout a student's school career. Not only is it important to teach children to be safe online, it is also important for adults to keep emphasizing it. Here are some tips and reasons for teaching internet safety to children of every age.

At the preschool level, students must understand internet safety at the most basic level. At this tender age, a child needs only to know how to operate the applications that can benefit them. This is an age where unsafe sites and apps should be blocked completely. If the internet and/or devices are used, it should be at an extreme minimum. The American Academy of Pediatrics recommends no more than two hours of screen time per day for children, and children under the age of two should have no screen time at all. Therefore, preschoolers need little to no screen time. Given that it is difficult for many parents to abide by these recommendations, it is the responsibility of the school to limit screen access as much as possible. That said, the most helpful thing schools can do to teach internet safety and digital citizenship to students is teach their parents. Schools can provide parents of small children with multiple resources for internet safety and safe activities for Pre-K students.

The elementary age is the time to start teaching kids to protect themselves online. As stated in Edutopia, there are three key considerations that should be addressed in internet safety at this level. Teaching internet safety and digital responsibility at the elementary age includes beginning to explain why it is important to spend only short periods of time on computers and devices. Primarily using digital learning tools to enhance lessons instills the idea that computers and devices are reference tools, not just little boxes of live entertainment. During the elementary school years, students should begin to learn the proper usage of search engines. Rather than introducing students to Google at this level, schools can use kid-friendly search engines such as KidRex, KidzSearch, Safe Search for Kids and Internet Public Library for Kids. These search engines allow for lessons on searching without worry of little eyes finding inappropriate subject matter.

Oh, the wonderful tween years… By sixth grade most children in the US have become intimately acquainted with all things digital, including social media. This is the time to teach students about online identities and behavior. Because children at this age may already have their own smartphones equipped with their own personal Facebook accounts, games and YouTube, teaching about digital responsibility and stewardship is imperative. Teachers can help children understand what information should be kept private and what should be shared. An important theme of these lessons should be open communication about online activities with parents, teachers and other trusted adults.
According to staysafeonline.org, there are several key concepts for students of this age level, and this post from eSchoolnews.com has some great middle school level lessons for teaching internet safety.

At the high school level, teaching internet safety and digital citizenship becomes more in-depth. From digital etiquette to proper usage of resources, high school kids need a solid understanding of their place in the online world. To have teachable moments with students in these areas, the internet in schools needs to be unblocked, but monitored. When given monitored access to the social media websites and applications with which they are most comfortable, teenagers are less likely to leverage these tools in harmful ways (e.g., for cyberbullying). Impero Software can provide solutions for monitoring while keeping kids safe. As with middle school, high school students need to be made aware of, or reminded of, how to communicate with trusted adults when they feel that unethical situations have taken place online. Ideally a school should have online options for anonymously reporting online bullying or abuse.

In addition to understanding ethics and proper online behavior, high school students need to understand the importance of not stealing or damaging other people's digital work and property. In preparation for college and the workplace, students should be taught about plagiarism, copyright, and illegal downloading.

By high school, students are aware of and utilizing search tools such as Google and Bing on a regular basis. Even with network safety tools in place, students will inevitably access inappropriate subject matter. Therefore, it is a good idea to have a policy in place for how to deal with this situation. With help from teachers and administrators, educators should develop a dialog that explains the proper way to react to, and move on from, inappropriate photos, blog posts, and other media. This may seem silly, but teenagers are keen to announce their findings out loud in class, which can cause a huge disturbance. Rather than shutting down the lines of communication, educators should teach students how to discreetly and politely report that something inappropriate popped up on their screen without involving the entire classroom. This reinforces future proper workplace behavior. Common Sense Education has some great videos with corresponding lesson plans that help jumpstart conversations with students about much of the subject matter above.

College and Higher Education

Do adults really need to be taught about internet safety and online citizenship? You betcha! At college, students learn and communicate online a lot, in preparation for their professional lives. They need to be reminded of the importance of maintaining their digital reputations and those of others so that they practice good digital stewardship in future employment. This is the time to teach students to keep their online profiles up to date with information and images that portray them professionally and intellectually. Remind students that they shouldn't put anything online that they wouldn't want an employer to see. Let them know about common hiring practices, such as searching for potential new hires' profiles on social media. Teach about appropriate email correspondence and how businesses can track communications data on any devices used on their servers and internet connections. Again, as with all other age groups, reiterate the importance of communicating any inappropriate activity online.
Give college students the resources to anonymously report any bullying or harassment. Tell students the course of action to take when plagiarism or copyright infringement has taken place. Reinforce proper usage of references. Explain the importance of taking this knowledge with them into their future workplace. Many an employee has lost a job by writing, reporting, or using images that were not legally obtained. Stay Safe Online has some excellent pointers for teaching online safety to college-level students.

In addition to online safety, college is the time to teach about time management in the workplace. When an employee takes up paid time to check personal social media, check emails or shop online, this is considered a form of stealing, dubbed "time theft" by many employers. According to staffmonitoring.com, 64% of employees say they use the Internet for personal interest during working hours. This costs companies thousands of dollars each year, especially if employees are paid hourly and overtime (time wasted during the day is made up in work time paid beyond 40 hours per week). Teaching college students about the appropriate use of the internet and digital tools during work will help our future workforce be more productive and less distracted.

Internet safety and digital citizenship go hand in hand and should be taught to students at every level of their educational careers. Teaching about resources, ethics, time management, and communication are all part of building a future workforce of digital stewards who utilize the internet in safe and appropriate ways. There are many additional resources available online for teaching internet safety in schools.
<urn:uuid:eafbc98b-7287-44ae-9b8b-3ac36b463a1b>
CC-MAIN-2017-04
https://www.imperosoftware.com/teaching-internet-safety-helps-build-good-digital-citizenship/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00335-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952787
1,603
3.875
4
These handheld devices feature ubiquitous connectivity, constant access to the biggest repository of mankind's knowledge, and more computing power than the NASA control room had for the first moon landing. Too many people assume that mobile devices are secure because they've never experienced malware on them before. The reality is that, until recently, there was little data on them that was worth stealing. Nevertheless, now that they do contain valuable information, from email addresses and passwords to bank account logins, cybercriminals will be paying closer attention. And just because mobile threats may not look like they do on the traditional PC, this doesn't mean there are no security issues.

New Technology, New Threats

Mobiles will experience all the malware that PCs have before them, including viruses, phishing, worms and more. How these threats attack them will be different, however, as the vehicle will vary between the device, the operating system (OS), and the application. For example, attacks against the closed Apple iOS model are going to be significantly different from those affecting Google's Android, which liberally allows applications to be published (including nasty applications).

In addition, new devices and new functionalities will breed fresh opportunities for cybercriminals; features such as augmented reality, facial recognition and integrated social media all add new dimensions that could be targeted. Augmented reality, for example, can connect location information with a user's social media 'friends', enabling them to identify digital contacts nearby. This infringes on privacy and potentially hands out more information than we would usually share with our digital contacts.

NFC (near-field communication) technology is another innovation that introduces new challenges for security. Primarily, the discussion over NFC has focused on its use for mobile payments, which instantly means that mobile devices are likely to become much more of a target for stealing money. In addition, other information associated with NFC, such as personal data, preferences or habits, may also be valuable to a cybercriminal and be targeted as a result.

Mobile networks are currently undergoing significant upgrades, enabling faster and more reliable connectivity. Although this delivers better usability for customers, such ubiquitous connectivity can make mobile devices a more attractive target for both botnets and command-and-control, because the network is strong enough to support an effective attack.

It's not all doom and gloom. Some new technology will, of course, enhance security. Modern mobile platforms tend to include capabilities such as sandboxing technology, which can isolate applications to prevent compromised ones from accessing all of the device's data. Access control and permission systems have also undergone drastic reform from the conventional OS; rather than being based on access to arbitrary items like registry keys, they instead focus on more human access permissions, such as whether an application needs to access location data or SMS messages, making them easier for consumers to understand.

Mobile device architectures are also becoming more tailored to modern working practices. BlackBerry, for example, has introduced a feature that provides two isolated working environments on the same device, allowing a separation between work and personal use.
This provides the benefits of a trustworthy and secure business environment, alongside the flexibility to play games and manage a personal life. These features are not yet widespread and the robustness of the security is unproven, but they do show a positive direction that could better secure the modern remote user in a way that works for both the business and the employee. It will be interesting to see if other vendors follow suit.

These capabilities show great promise for producing a more secure mobile environment. That said, they are as yet far from perfect, and many of these controls do not come with smart, secure defaults. Instead they rely on the user to edit the permissions of an application, a process that requires some knowledge and expertise. Education and awareness are therefore vital to ensure users know what options they have, and how best to secure a mobile device.

IPv6 will also stamp a mark on the mobile security industry, especially because mobile device and telecoms providers are major proponents of IPv6, the next generation of protocols that will drive the internet. IPv6 will provide enhanced performance features, but it also has new functionality designed specifically for mobile and security. For example, IPsec, the industry standard for secure VPN connections, was incorporated into IPv6 and backported to IPv4. Some of the changes enhance security, but others could leave a backdoor into your environment if not configured and managed correctly.

Protecting Yourself, and Your Business

Priority one is to get the basics under control. Despite all the hype, most mobile security breaches occur due to basic failures, such as poor passwords, lack of encryption, poor patching or social engineering. Mobile device management solutions can help ensure these capabilities are enabled. Some will be provided by the device in hardware, such as full-volume encryption; others by the OS, for example, sandboxing. These will be managed and reported on by security vendors.

Software security solutions, including mobile device management (MDM) and anti-malware capabilities, will be increasingly required, although their implementation will vary from their PC counterparts and differ from platform to platform. Data loss prevention (DLP) strategies must also be implemented specifically for mobile and, as data flows between different devices, continuous encryption to protect data wherever it resides will be powerful. Ultimately, the protection stack for mobile will expand over time, much as it did with the PC. It won't be the same at first, but it will need to become progressively more capable.

Essentially, the more data we make available on our mobiles, the more incentive we give cybercriminals to weave creative attacks that compromise our personal lives, businesses and finances. Equally, the more applications and new capabilities we use, the more we increase the attack surface to be exploited. Privacy is also at risk, and as mobiles become a combination of passport, personal record store and social life, we can expect to come under greater surveillance.

Technology is constantly changing, as are the threats. A six-month strategy is therefore far more effective than the conventional three- to five-year plan many IT teams use.

James Lyne, director of technology strategy, is focused on the five-year technology strategy at Sophos in the Office of the CTO.
Working with key business and technology trends and combining a detailed knowledge of threats, Lyne extrapolates from the modern world of threat protection to explore future security and technology requirements. Aside from technology strategy, he frequently engages with customers and industry forums to evangelize the security problem domains.
<urn:uuid:c2d9586f-fb4b-4241-8ed7-a43b71e664b0>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/opinions/comment-mobile-device-security-whats-coming-next/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00235-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945959
1,341
2.953125
3
How can you condense consecutive sets of zeros in an IPv6 address?

A. with the ":::" symbol
B. by eliminating leading zeros
C. by replacing four consecutive zeros with a single zero
D. with the "::" symbol

The correct answer is D. IPv6 addresses can be shortened by replacing a run of consecutive zero groups with the "::" symbol, but only once in an address. It can appear only once because the address is expanded by determining how many bits are missing and putting zeros back in their place; with two "::" markers, there would be no way to tell how many zeros belong to each.
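Python's standard ipaddress module applies exactly this rule when it prints an address, so it makes a handy way to check the answer; the address below is an arbitrary example.

```python
import ipaddress

# Full form with two separate runs of all-zero groups.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0001:0000:0000:0001")

# The compressed form replaces only ONE run of zero groups with "::"
# and also drops leading zeros within each group.
print(addr)           # 2001:db8::1:0:0:1
print(addr.exploded)  # 2001:0db8:0000:0000:0001:0000:0000:0001
```

Note that option A cannot work: with three colons, a parser could not unambiguously reconstruct the group boundaries.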
<urn:uuid:f32836eb-6f7e-45f9-a23e-4af2dac6ee40>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/08/10/ccna-question-of-the-week-6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00171-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953439
125
2.59375
3
Facebook released its second annual study of global Internet access. While the company has met resistance in various countries where it has tried to provide Internet access, the report still shows an improvement in Internet availability. The report addresses four barriers to Internet access: availability, affordability, relevance, and readiness.
- At the end of 2014, there were 2.9 billion internet users globally. By the end of 2015, this figure was predicted to have reached 3.2 billion, 43% of the world's population.
- During 2014, lower prices for data and rising global incomes made mobile data packages of 500MB per month affordable to 500 million more people.
- The highest estimates of 3G and 4G coverage suggest that 1.6 billion people live outside mobile broadband coverage, an improvement compared to 2 billion at the end of 2014.
- Most people connect to the internet using mobile devices, which are the only way to get online in many parts of the world. An estimated 2.7 billion people did not have mobile phone subscriptions in 2015. (Credit: Facebook Newsroom)
<urn:uuid:7a7a9018-786d-4b3e-8a23-723644737687>
CC-MAIN-2017-04
https://www.404techsupport.com/2016/02/facebook-publishes-state-connectivity-2015/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00079-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949739
221
2.734375
3
Lenat D.B., Cycorp Inc. | Durlach P.J., Advanced Distributed Learning Initiative
International Journal of Artificial Intelligence in Education | Year: 2014

We often understand something only after we've had to teach or explain it to someone else. Learning-by-teaching (LBT) systems exploit this phenomenon by playing the role of tutee. BELLA, our sixth-grade mathematics LBT system, departs from other LBT systems in several ways: (1) It was built not from scratch but by very slightly extending the ontology and knowledge base of an existing large AI system, Cyc. (2) The "teachable agent" - Elle - begins not with a tabula rasa but rather with an understanding of the domain content which is close to the human student's. (3) Most importantly, Elle never actually learns anything directly from the human tutor! Instead, there is a super-agent (Cyc) which already knows the domain content extremely well. BELLA builds up a mental model of the human student by observing them interact with Elle. It uses that model Socratically to decide what Elle's current mental model should be (what concepts and skills Elle should already know, and what sorts of mistakes it should make) so as to best help the user overcome their current confusions. All changes to the Elle model are made by BELLA, not by the user - the only learning going on is BELLA learning more about the user - but from the user's point of view it often appears as though Elle were attending to them and learning from them. Our main hypothesis is that this may prove to be a particularly powerful and effective illusion to maintain. © 2014 International Artificial Intelligence in Education Society. Source

Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 99.18K | Year: 2006

Analytical tasks at the all-source level, and above, generally require access to intelligence distributed among a variety of forms: structured databases with differing schemas, electronic maps with various metadata schemes, and textual reports in multiple languages. Knowledge bases that employ highly expressive formal languages, such as extensions of first-order logic, offer a solution to the challenge of combining information from the current daunting variety of data forms. Such knowledge bases can, in principle, represent the content of all structured sources within a single structure. Such a structure can in turn be accessed by interfaces that allow queries to be formed in a way that is natural to analysts - rather than in the various idiosyncratic forms of multiple structured sources. Moreover, the expressive power of such knowledge bases makes it possible for them to integrate existing structured sources as a virtual part of their content, by translating data in those sources. A complete Virtual Knowledge Base (VKB) of data for intelligence analysis would address the need for data to exist in a form that is intelligible to analysts, while circumventing the impracticality of constructing a single knowledge base in which all intelligence data actually resides.

Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 96.67K | Year: 2006

In any complex environment – such as managing a battlespace, launching a satellite, or operating a global enterprise – critical decisions depend on a broad range of information sources, decision-making guidelines, and an array of operational and environmental factors.
These challenges highlight the need for decision support systems whose decisions are based on both structured and unstructured information sources, and that can explain their decisions in a manner that garners trust from those relying on their conclusions. The Cyc knowledge-based environment supports many of the capabilities needed for such a system. Its Semantic Knowledge Source Integration functionality permits smooth integration with structured information sources, while its inference engine and NL generation capabilities provide textual justifications for its actions. Unstructured data (such as text documents, imagery, videos, etc.) can be mapped to the Cyc ontology to model their content as well as to identify key metadata (such as the source, creation date, scope, etc.), enabling material from unstructured sources to be seamlessly included in the decision process. We propose to design a decision support architecture around these existing capabilities that would gracefully incorporate a wide variety of information sources and offer greater transparency into its decision-making process.

News Article | August 9, 2014

Editor's note: Catherine Havasi is CEO and co-founder of Luminoso, an artificial intelligence-based text analytics company in Cambridge. Luminoso was founded on nearly a decade of research at the MIT Media Lab on how NLP and machine learning could be applied to text analytics. Catherine also directs the Open Mind Common Sense project, one of the largest common sense knowledge bases in the world, which she co-founded alongside Marvin Minsky and Push Singh in 1999.

Imagine for a moment that you run into a friend on the street after you return from a vacation in Mexico. "How was your vacation?" your friend asks. "It was wonderful. We're so happy with the trip," you reply. "It wasn't too humid, though the water was a bit cold." No surprises there, right? You and your friend both know that you're referring to the weather in terms of "humidity" and the ocean in terms of "cold." Now imagine you try to have that same conversation with a computer. Your response would be met with something akin to: "Does. Not. Compute."

Part of the problem is that when we humans communicate, we rely on a vast background of unspoken assumptions. Everyone knows that "water is wet" and "people want to be happy," and we assume everyone we meet shares this knowledge. It forms the basis of how we interact and allows us to communicate quickly, efficiently, and with deep meaning. As advanced as technology is today, its main shortcoming as it becomes a large part of daily life is that it does not share these assumptions. We find ourselves talking more and more to our devices, to our mobile phones and even our televisions. But when we talk to Siri, we often find that the rules that underlie her can't comprehend exactly what we want if we stray far from simple commands. For this vision to be fulfilled, we'll need computers to understand us as we talk to each other in a natural environment. For that, we'll need to continue to develop the field of common-sense reasoning; without it, we're never going to be able to have an intelligent conversation with Siri, Google Glass or our Xbox.

Common-sense reasoning is a field of artificial intelligence that aims to help computers understand and interact with people more naturally by finding ways to collect these assumptions and teach them to computers.
Common-sense reasoning has been most successful in the field of natural language processing (NLP), though notable work has been done in other areas. This area of machine learning, with its strange name, is starting to quietly infiltrate different applications ranging from text understanding to processing and comprehending what's in a photo. Without common sense, it will be difficult to build adaptable and unsupervised NLP systems in an increasingly digital and mobile world. When we talk to each other and talk online, we try to be as interesting as possible and take advantage of new ways to express things. It's important to create computers that can keep pace with us.

There's more to it than one would think. If I asked you if a giraffe would fit in your office, you could answer the question quite easily despite the fact that in all probability you had never pictured a giraffe inhabiting your office, quietly munching on your ficus while your favorite Pandora station plays in the background. This is a perfect example of you not just knowing about the world, but knowing how to apply your world knowledge to things you haven't thought about before.

The power of common sense systems is that they are highly adaptive, adjusting to topics as varied as restaurant reviews, hiking boot surveys, and clinical trials, and doing so with speed and accuracy. This is because we understand new words from the context they are used in. We use common sense to make guesses at word meanings and then refine those guesses, and we've built a system that works similarly. Additionally, when we understand complex or abstract concepts, it's possible we do so by making an analogy to a simple concept, a theory described by George Lakoff in his book, "Metaphors We Live By." The simple concepts are common sense.

There are two major schools of thought in common-sense reasoning. One side works with more logic-like or rule-based representations, while the other uses more associative and analogy-based reasoning or "language-based" common sense, the latter of which draws conclusions that are fuzzier but closer to the way that natural language works. Whether you realize it or not, you interact with both of these kinds of systems on a daily basis. You've probably heard of IBM's Watson, which famously won at Jeopardy, but it's a lesser-known fact that Watson's predecessor was a project called Cyc that was developed in 1984 by Doug Lenat. The makers of Cyc, called Cycorp, operate a large repository of logic-based common sense facts. It's still active today and remains one of the largest logic-based common sense projects.

In the school of language-based common sense, the Open Mind Common Sense project was started in 1999 by Marvin Minsky, Push Singh, and myself. OMCS and ConceptNet, its more well-known offshoot, include an information store in plain text, as well as a large knowledge graph. The project became an early success in crowdsourcing, and now ConceptNet contains 17 million facts in many languages. The last few years have seen great steps forward in particular types of machine learning: vector-based machine learning and deep learning. They have been instrumental in advancing language-based common sense, thus bringing computers one step closer to processing language the way humans do. NLP is where common-sense reasoning excels, and the technology is starting to find its way into commercial products.
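To give a flavor of the vector-based, associative style of reasoning described here, the sketch below scores word associations by cosine similarity over made-up three-dimensional vectors. Real systems like ConceptNet derive high-dimensional embeddings from millions of assertions; the vectors and words here are invented purely for illustration.

```python
import numpy as np

# Invented toy "concept vectors" (real embeddings have hundreds of
# dimensions and are learned from data, not hand-written).
vectors = {
    "ocean":    np.array([0.9, 0.1, 0.3]),
    "water":    np.array([0.8, 0.2, 0.2]),
    "vacation": np.array([0.3, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "ocean" should associate more strongly with "water" than "vacation".
print(cosine(vectors["ocean"], vectors["water"]))     # ~0.99, high
print(cosine(vectors["ocean"], vectors["vacation"]))  # ~0.61, lower
```

This is the "fuzzier" style of conclusion mentioned above: nothing is proved, but nearby vectors stand in for nearby meanings.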
Though there is still a long way to go, common-sense reasoning will continue to evolve rapidly in the coming years, and the technology is stable enough to be in business use today. It holds significant advantages over existing ontology and rule-based systems, or systems based simply on machine learning. It won't be long before you have a more common-sense conversation with your computer about your trip to Mexico. And when you tell it that the water was a bit cold, your computer could reply: "I'm sorry to hear the ocean was chilly, it tends to be at this time of year. Though I saw the photos from your trip and it looks like you got to wear that lovely new bathing suit you bought last week."

News Article | April 18, 2014

At last night's StartOut gay entrepreneurs demo event, queer tech founders competed for venture capital attention in a warehouse in San Francisco's South of Market neighborhood. Entrepreneurs from 10 startups pitched to VCs including Dave McClure from 500 Startups and Andy Wheeler from Google Ventures. No one was granted money that night, but organizer Chris Sinton said the exposure to venture capitalists and other founders - nearly 200 showed up to watch - would help get the ball rolling for the companies. StartOut and other minority affinity groups have grown this year, as more tech entrepreneurs, frustrated with the venture capital old boys' networks, are looking to cultivate their own.

Michael Witbrock, who sits on the board of StartOut, watched from the back of the room. The next step in gay activism, he argued, will be through helping the gay community in Silicon Valley become richer and more powerful. "There are things money can do that nothing else can," said Witbrock, the vice president of research at artificial-intelligence company Cycorp. "This is a means for us as a community to empower ourselves financially. It's about building people who have the resources to defend the community, who have the resources to buy those who would discriminate."

So advancing gay rights is about money now? "We don't just need a place at the table," Witbrock said. "Sometimes you need to buy the table."
<urn:uuid:f1fac5cc-c512-40f7-ab50-41f74bc5f403>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/cycorp-inc-443482/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00015-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958583
2,616
2.65625
3
Designing for emergency calls is an important part of implementing a Unified Communications system. In Canada and the United States, a system has been implemented to assist Emergency Telecommunicators ("dispatchers") handling emergency calls. The system is called Enhanced 9-1-1 (e911), and it allows dispatchers to identify where an emergency call is originating from. For the remainder of this blog, I'll be focusing on the mechanics of how the system is deployed in the US (get ready for acronym city!).

Each telephone is connected to the telephone company ("telco") central office (CO), which in this case is referred to as the "End Office." When a 9-1-1 call is placed, the switch at the CO recognizes it and sends the call to the Tandem Office, sometimes utilizing trunks dedicated for this purpose. A call setup message is transmitted from the End Office to the Tandem Office which contains the Calling Party Number (CPN), which becomes the Automatic Number Identification (ANI, pronounced "Annie"). The device at the Tandem Office that these emergency trunks are connected to is called the Selective Router (SR), though I've also heard them referred to as Selective Access Routers (SAR). Once upon a time, a CO switch such as a Nortel DMS-200 would be used for this, but most modern tandems use special-purpose equipment from vendors like EADS.

Upon reaching the SR, the ANI is looked up in the Automatic Location Identification (ALI, pronounced "alley") database. The ALI database is populated by the telco, and shows the service address for each ANI. To ensure the addresses are valid, the telco usually has a procedure which screens the address against the Master Street Address Guide (MSAG) when an order for new service is taken. If the address provided by the customer cannot be found in the MSAG, the order should be rejected. The MSAG is maintained by the local Authorities Having Jurisdiction (AHJ), which is usually the local 9-1-1 coordinator. Sometimes the local planning department, along with address database services from the US Postal Service, is used to maintain the quality of data in the MSAG.

Another important piece of the MSAG is that it identifies which Emergency Services Number (ESN) the address is a part of. "The ESN is a three to five digit number representing a unique combination of emergency service agencies (Law Enforcement, Fire, and Emergency Medical Service) designated to serve a specific range of addresses within a particular geographical area, or Emergency Service Zone (ESZ). The ESN facilitates selective routing and selective transfer, if required, to the appropriate PSAP and the dispatching of the proper service agency(ies)." ~ NENA Master Glossary Of 9-1-1 Terminology

In other words, the ESN tells the SR which Public Safety Answering Point (PSAP) takes calls for your location. So let's summarize what happens up to this point: when you call 9-1-1, your end office forwards the call to the 9-1-1 SR, which looks up your ANI in the ALI to determine which ESN you're associated with, and which PSAP your call should be directed to.

In a future post, I'm going to talk about how you, yes you, can create multiple entries in the ALI database so the dispatcher can determine where in the building/campus the call is originating from. This process is called Private Switch Automatic Location Identification (PS/ALI) and you might be required to do it by law according to the AHJ.
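The lookup chain described above (ANI to service address and ESN, then ESN to PSAP) is easy to picture as a pair of table lookups. Here is a minimal, hypothetical Python sketch of that flow; every number, address, and agency name below is made up for illustration, and a real selective router obviously does far more (default routing, alternate routing, and so on).

```python
# Hypothetical ALI database: ANI -> (service address, ESN)
ALI = {
    "5125550100": ("100 Congress Ave, Austin TX", 411),
    "5125550111": ("500 E 7th St, Austin TX", 412),
}

# Hypothetical selective routing table: ESN -> PSAP
ESN_TO_PSAP = {
    411: "Austin Police Dept PSAP",
    412: "Travis County Sheriff PSAP",
}

def route_911_call(ani):
    """Mimic the selective router: resolve the caller's address,
    ESN, and the PSAP that should answer the call."""
    try:
        address, esn = ALI[ani]
    except KeyError:
        # No ALI record: a real SR would fall back to default routing.
        return None, None, "default PSAP (no ALI record)"
    return address, esn, ESN_TO_PSAP[esn]

print(route_911_call("5125550100"))
# ('100 Congress Ave, Austin TX', 411, 'Austin Police Dept PSAP')
```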
<urn:uuid:87390162-dd6e-4717-a38b-21e1c14aa241>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/11/09/what-annie-and-alleys-have-to-do-with-911/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00437-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931498
786
2.921875
3
This page provides a wrap-up of my series on MPLS traffic engineering. Multi-Protocol Label Switching (MPLS) was created to improve packet performance in the core of networks, and it is widely used for that purpose. It has also been adapted for other use cases, one of the most important being traffic engineering.

Success with MPLS TE

MPLS allows network engineers to optimize network resources by sending traffic across a less congested physical path, rather than the default shortest path designated by the routing protocol. This is achieved by adding a short label to each packet with specific routing instructions that direct packets from router to router, rather than allowing the routers to forward them based on next-hop lookups. The new paths can be created manually or via signaling protocols, and they help to speed traffic.

Part 1: MPLS Tunnel Set-Up

Traffic engineering can greatly improve performance in MPLS networks. If you already have MPLS deployed in your network -- perhaps for a VPN -- MPLS traffic engineering can be very beneficial. I discuss how specific traffic paths are defined and calculated using routing attributes and protocols, the design criteria, and other design-centric questions to consider.

Part 2: MPLS Path Selection For Bandwidth Optimization

MPLS traffic engineering has three major uses: to optimize bandwidth by selecting an alternate path, to support a service-level agreement (SLA), and to enable fast reroute. This article discusses using label-switched paths for bandwidth optimization.

Part 3: Meeting SLAs With MPLS Traffic Routing

MPLS traffic engineering can be used to meet an SLA. Not all traffic is the same, and not all customers can get the same service. Voice and video traffic were traditionally carried over circuit-based TDM links. These applications are very delay and loss sensitive, so we must ensure that they are adequately supported on the packet-switched network.

Part 4: MPLS Fast Reroute

The most common use for MPLS traffic engineering is fast reroute. Here I explain the basics of fast reroute and how to use it to create backup and pre-configured paths in MPLS traffic engineering.

MPLS TE alternatives

MPLS traffic engineering has been around for more than a decade. Perhaps you are debating whether to use it and in which cases you should consider an alternative. If your network is pure IP but not MPLS enabled, bringing many new control plane features into your network might be too complex. Troubleshooting, management, user training, control plane state, and data plane state can all be concerns. In addition, if your equipment doesn't support Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), and Resource Reservation Protocol (RSVP), you may need to change hardware or software.

If you already have an IP-enabled network, you may want to consider the IP fast reroute (FRR) feature. Two flavors of IP FRR are loop-free alternate (LFA) and remote LFA. You can find more information about these concepts in my article, Fast Reroute Mechanisms. Based on the topology, LFA could provide enough fast reroute capability for your primary path, and you wouldn't need MPLS. Some topologies might require remote LFA to cover failures; the difference is that you must tunnel the traffic to a remote node that will not send it back toward you, since that would result in a microloop.

Although you don't need Label Distribution Protocol (LDP) for MPLS traffic engineering, if you are using MPLS for a Layer 2 or Layer 3 VPN, it is better to enable it.
Then, if traffic engineering errors occur, LDP can also be used as a backup for forwarding traffic.

Service providers may not need MPLS traffic engineering where it would only be used to protect primary paths. For example, they may have protected dense wavelength-division multiplexing (DWDM) to enable automatic protection switching in their transport systems. This provides them with sub-50-ms backup paths in the case of failure. MPLS may not be needed under these circumstances.

Another scenario is when Enhanced Interior Gateway Routing Protocol (EIGRP) is deployed, since EIGRP has a built-in mechanism similar to fast reroute, called the EIGRP feasible successor (FS). In this system, all loop-free alternate paths can be kept in the EIGRP topology database, or they can be used for unequal-cost load sharing. (EIGRP is like MPLS traffic engineering in supporting unequal cost multipath.) If unequal cost multipath is not used, but a loop-free alternate path exists in the topology database, when a primary path fails, EIGRP will not send a query. It will only run the diffusing update algorithm (DUAL) to select a successor among the feasible successors and install the best path into the routing and forwarding information databases.

EIGRP FS is different from MPLS traffic engineering and other fast reroute mechanisms: the alternate path still must be installed after the failure, because the precomputed alternates are held in the topology database rather than in the control and data planes. You don't need the IP FRR feature for EIGRP, but EIGRP FS is still slower than FRR.

Lastly, software-defined networking (SDN) and the use of OpenFlow give us the ability to calculate topology information centrally, because the controller has a global view and can download forwarding information directly into the forwarding table. MPLS traffic engineering is hard to enable due to lack of full topology visibility, but with SDN, traffic redirection can be easier, if not necessarily faster.

Tell us about your experiences with MPLS traffic engineering. Have you had success using it? Would you recommend alternatives?
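For readers who want to experiment with the loop-free alternate idea mentioned above, the standard LFA inequality says a neighbor N of source S is a safe alternate toward destination D when dist(N, D) < dist(N, S) + dist(S, D), i.e., N's best path to D does not pass back through S. Below is a small illustrative Python sketch using the networkx library on a made-up topology; the node names and link costs are arbitrary.

```python
import networkx as nx

# Toy IGP topology: edge weights are link costs.
G = nx.Graph()
G.add_weighted_edges_from([
    ("S", "E", 1),  # E is the primary next hop toward D
    ("E", "D", 1),
    ("S", "N", 1),  # candidate alternate neighbor
    ("N", "D", 2),
    ("S", "M", 1),  # M's only path to D runs back through S
])

def dist(a, b):
    return nx.shortest_path_length(G, a, b, weight="weight")

def is_lfa(src, nbr, dst):
    # Loop-free condition: the neighbor's shortest path to the
    # destination must not route back through the source.
    return dist(nbr, dst) < dist(nbr, src) + dist(src, dst)

print(is_lfa("S", "N", "D"))  # True:  2 < 1 + 2
print(is_lfa("S", "M", "D"))  # False: M would hand the packet back to S
```

A neighbor that fails this test, like M above, is exactly the case where remote LFA tunneling becomes necessary.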
<urn:uuid:f9fd6619-b841-49c6-9aca-8686b3a25717>
CC-MAIN-2017-04
http://www.networkcomputing.com/networking/mpls-traffic-engineering-guide-success-alternatives/355899195?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925396
1,183
3
3
The benefits from a business perspective should be obvious: software would be simpler and less convoluted. There would be more basic or generic programs (say, a spreadsheet) that are highly adaptable and simple to comprehend, and less purpose-designed software. The former is less expensive and easier to use. Consequently, it is more easily learned and becomes intuitive over time much more quickly. Conversely, specialized or complicated software is more expensive, both to produce and to maintain, and it also takes longer to learn. This makes ROI dubious at best.

Lesson 7

Leave room for bottom-up inventiveness and initiative in how applications are used. This process is primarily driven by serendipity and evolutionary discovery, rather than a top-down systems engineering approach. In fact, the inventiveness of users sometimes drives the systems managers crazy trying to keep up with unintended applications of a program. This is sometimes equivalent to hammering nails with a bowling ball, but the users don't mind, because they are getting something useful out of the software, even if it wasn't intended by its designers.

The obvious lesson for business is that it should be the end user who determines the best applications of digitization. This is largely an exploratory and serendipitous process. The role of systems management therefore becomes one of supporting the users of the systems, rather than being the masters of one right way of doing things. By extension, corporate trainers and mentors should be capable of adjusting quickly to the new applications and supporting follow-on users in their training.

Lesson 8

Use it or lose it. Skill fade is always a major issue. For instance, the Canadian army devotes a lot of resources to training uniformed systems administrators for operational duty. However, systems management is so centralized that when they get back to their units these individuals are not able to use and maintain their skills. Their ability to use these skill sets when they are needed is severely hampered, and the training is proving to be a waste. The lesson here is that training only goes so far toward ensuring the proper skill sets. The organization must support this training with the proper work processes and organizational structures, and allow people to use and maintain their skills.

Digitization is ultimately about improving productivity. However, it isn't enough just to invest in software and systems. There must also be a concomitant investment in the training and skills needed to use the new tools. If a factory worker is given a better widget-making machine, but isn't given the opportunity to learn how to use it, then there will likely be little or no productivity gains. All of the lessons learned listed here are really about making the right investments to ensure that productivity is gained, and not hampered, by digitization.

Richard Martin is president of Alcera Consulting, a management consulting firm that helps individuals and organizations to thrive in the face of risks, threats and uncertainty.
<urn:uuid:726f6d0f-57db-426d-a68a-eb401d4d890a>
CC-MAIN-2017-04
http://www.cioupdate.com/reports/article.php/11050_3704751_3/8-Great-Training-Tips-from-the-Canadian-Army.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00373-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959083
586
2.84375
3
Kaspersky Security for Mac can scan network traffic that travels over the SSL protocol for viruses. The SSL protocol (Secure Sockets Layer) allows a browser to verify a server's authenticity, and it also provides an encrypted "personal" connection channel between the user's computer and the server. This protocol is widely used both for web servers and mail servers; Gmail is a vivid example of such a mail server.

An SSL connection protects the data exchange channel on the Internet. The SSL protocol allows the communicating sides to identify each other based on electronic certificates, encrypts the transferred data, and ensures data integrity during transfer. These protocol peculiarities are often misused by cyber-criminals to spread malware, as most anti-virus products do not check SSL traffic. Kaspersky Lab's experts recommend checking SSL traffic if you are on a suspicious web resource and an SSL data transfer starts when you navigate to another page; most probably a malicious program is being transferred over the encrypted protocol.

To scan an encrypted connection, Kaspersky Security for Mac substitutes the required security certificate with a self-signed certificate. Sometimes the programs that establish the connection refuse to accept this self-signed certificate and, as a result, do not establish the connection. Kaspersky Lab's experts recommend disabling the check of SSL traffic in the following cases:

- when connecting to a trusted web resource, for example the web page of your bank on which you manage your personal account. In this case it is important to get an authentic confirmation of the bank's certificate.
- if the program which establishes the connection checks the certificate of the required web resource without a dialog with the user. For example, the program MSN Messenger, when establishing a secure connection with the server, checks the authenticity of the Microsoft Corporation digital signature.

How to enable/disable scanning of encrypted connections

If you need Kaspersky Security for Mac to scan, or not scan, encrypted connections on your computer, do the following:

- Open the main program window.
- On the lower part of the main window click the Web Anti-Virus button.
- In the Web Anti-Virus section check/clear the box Scan secure connections (HTTPS).
- Once the changes are made, you are recommended to lock the program to prevent unauthorized access to program settings.
- Close the main window.
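Because the product works by substituting the server's certificate with its own, one observable side effect is that the certificate a client sees is issued by the scanning product's root rather than a public certificate authority. The generic Python sketch below simply fetches and prints the issuer of the certificate a server (or an intercepting scanner) presents; the hostname is an arbitrary example, and this illustrates TLS interception in general, not Kaspersky's specific implementation.

```python
import socket
import ssl

def certificate_issuer(host, port=443):
    """Return the issuer fields of the certificate presented for host.

    Note: if an interception product's self-signed root is not in the
    system trust store, the handshake below fails verification, which
    is exactly the connection-refusal symptom described above.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["issuer"] is a tuple of relative distinguished name tuples.
    return {key: value for rdn in cert["issuer"] for key, value in rdn}

# With no interception this prints a public CA; behind an SSL-scanning
# product you would see the product's own root listed here instead.
print(certificate_issuer("www.example.com"))
```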
<urn:uuid:5e59036a-23c2-4189-ad2a-1e508bce2617>
CC-MAIN-2017-04
http://support.kaspersky.com/8324
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00402-ip-10-171-10-70.ec2.internal.warc.gz
en
0.867276
463
2.578125
3
The USS Ronald Reagan aircraft carrier experienced radiation levels 30 times above normal while its crew conducted disaster relief operations off the coast of Japan in March 2011 after a tsunami damaged reactors at the Fukushima Daiichi Nuclear Power Plant, according to a new analysis published on Monday. That analysis came just days after 79 Reagan crewmen filed an amended lawsuit in federal court in San Diego against the plant’s operator, Tokyo Electric Power Co. The plaintiffs seek $1 billion in damages, claiming the company was negligent in construction and operation of the plant and during the subsequent meltdown of its reactors following the tsunami March 11, 2011. Kyle Cleveland, a sociology professor at Temple University Japan, cited transcripts of a conference call between high level Navy and Energy Department officials in an article in the Asia-Pacific Journal, “Mobilizing Nuclear Bias: The Fukushima Nuclear Crisis and the Politics of Uncertainty.” The transcripts, which Cleveland obtained through the Freedom of Information Act, show concern among U.S. officials discussing the level of radioactivity on the Reagan on March 13. In that transcript, Adm. Kirkland H. Donald, at the time director of naval nuclear propulsion, said the level of radioactivity in the plume emitted from the plant “was probably more significant than what we had originally thought.” Troy Mueller, deputy administrator for naval reactors at Energy, said the radiation is “about 30 times what you would detect just on a normal air sample out at sea . . . So it's much greater than what we had thought. We didn't think we would detect anything at 100 miles.” On the call, Deputy Secretary of Energy Daniel Poneman asked Mueller if the radiation detected on the Reagan “is significantly higher than anything you would have expected?” Mueller answered, “Yes sir.” Mueller also said that after 10 hours, the amount of radiation experienced on the ship could become a “thyroid dose issue.” Overdoses of radiation can destroy the thyroid gland. Navy spokeswoman Lt. Cmdr. Sarah Flaherty said in an email that a tri-service dose assessment and registry working group determined that the highest whole body dose to any Reagan crewmember is much lower than levels of radiation exposure associated with the occurrence of long-term health effects. She said the worst-case radiation exposure for a crewmember on USS Ronald Reagan was less than 25 percent of the annual radiation exposure to a member of the U.S. public from natural sources of background radiation, such as the sun, rocks and soil. During the disaster relief operation, Flaherty said, the Navy took proactive measures to control, reduce, and mitigate the levels of Fukushima-related contamination on U.S. Navy ships and aircraft. “Ship's company used sensitive instruments to identify areas containing radioactivity, took action to control the spread of the radioactivity, and washed and cleaned areas of the ship that contained radioactivity,” Flaherty said. “Potentially contaminated personnel were surveyed with sensitive instruments and, if necessary, decontaminated. The low levels of radioactivity from the Fukushima nuclear power plant identified on U.S. 
Navy ships, their aircraft, and their personnel were easily within the capability of ship's force to remedy,” Flaherty said.
The law firm Bonner & Bonner, based in Sausalito, Calif., charged in its lawsuit against Tokyo Electric Power that crewmen on the Reagan who were exposed to radiation from the Fukushima plant “now endure a lifetime of radiation poisoning and suffering which could have and should have been avoided,” if the company had not been negligent in the construction and operation of the facility. The suit also charged that sailors aboard the Reagan “have been and will be required to undergo further medical testing, evaluation and medical procedures, including but not limited to chelation therapy, bone marrow transplants and/or genetic reprogramming.” Sailors on the Reagan were exposed both to airborne radiation and to radiation from contaminated seawater, the suit said. One plaintiff said the ship was taking in sea water, “but obviously the ship can't filter out the radiation. Water we all showered with, drank, brushed our teeth, and had our food cooked with.” Tokyo Electric Power registered as a California foreign corporation in 2003. As a result, TEPCO is subject to the jurisdiction of the United States Federal District Court, the suit said.
NASA today said its 36-year-old Voyager 1 spacecraft has officially become the first human-made object to enter interstellar space. For the past couple of years scientists have argued over whether the venerable spacecraft had in fact left our solar system. But in a press conference today, Don Gurnett and the plasma wave science team at the University of Iowa said new data obtained around August 25, 2012 indicated Voyager had indeed moved into interstellar space.
"We literally jumped out of our seats when we saw these oscillations in our data -- they showed us the spacecraft was in an entirely new region, comparable to what was expected in interstellar space, and totally different than in the solar bubble," Gurnett said. "The Voyager team needed time to analyze those observations and make sense of them. But we can now answer the question we've all been asking -- 'Are we there yet?' Yes, we are," said Ed Stone, Voyager project scientist based at the California Institute of Technology.
Calling it a truly alien environment, NASA said Voyager is in a region immediately outside the solar bubble, where some effects from our sun are still evident. The area is also known as the heliopause, which is the long-hypothesized boundary between the solar plasma and the interstellar plasma. Scientists said they don't know when Voyager 1 will reach the undisturbed part of interstellar space where there is no influence from the sun. They also are not certain when its sister ship, Voyager 2, is expected to cross into interstellar space, but they believe it is not very far behind. Voyager 1 and 2 were launched 16 days apart in 1977. Both spacecraft flew by Jupiter and Saturn. Voyager 2 also flew by Uranus and Neptune. Voyager 2, launched before Voyager 1, is the longest continuously operated spacecraft. It is about 9.5 billion miles (15 billion kilometers) away from our sun, NASA said.
NASA noted that Voyager 1 does not have a working plasma sensor, so scientists needed a different way to measure the spacecraft's plasma environment to make a definitive determination of its location. "A coronal mass ejection, or a massive burst of solar wind and magnetic fields, that erupted from the sun in March 2012 provided scientists the data they needed. When this unexpected gift from the sun eventually arrived at Voyager 1's location 13 months later, in April 2013, the plasma around the spacecraft began to vibrate like a violin string. On April 9, Voyager 1's plasma wave instrument detected the movement. The pitch of the oscillations helped scientists determine the density of the plasma. The particular oscillations meant the spacecraft was bathed in plasma more than 40 times denser than what they had encountered in the outer layer of the heliosphere. Density of this sort is to be expected in interstellar space," NASA said.
Voyager mission controllers still talk to or receive data from Voyager 1 and Voyager 2 every day, though the emitted signals are currently very dim, at about 23 watts -- the power of a refrigerator light bulb. By the time the signals get to Earth, they are a fraction of a billion-billionth of a watt, NASA said. Data from Voyager 1's instruments are transmitted to Earth typically at 160 bits per second, and captured by 34- and 70-meter NASA Deep Space Network stations. Traveling at the speed of light, a signal from Voyager 1 takes about 17 hours and 22 minutes to travel to Earth.
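As a back-of-the-envelope check on that figure, the one-way delay is simply distance divided by the speed of light. The sketch below is not from the NASA release; the 18.8 billion kilometer distance is an assumption for illustration, chosen because it is consistent with the roughly 17-hour delay quoted above:

    # One-way light-travel time from Voyager 1 to Earth.
    # The 18.8 billion km distance is an illustrative assumption.
    SPEED_OF_LIGHT_KM_S = 299_792.458   # kilometers per second
    distance_km = 18.8e9                # roughly 125 astronomical units

    delay_s = distance_km / SPEED_OF_LIGHT_KM_S
    hours, rem = divmod(delay_s, 3600)
    print(f"One-way delay: {int(hours)} h {rem / 60:.0f} min")
    # -> about 17 h 25 min, close to the figure cited above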
After the data are transmitted to NASA's Jet Propulsion Laboratory and processed by the science teams, the Voyager data are made publicly available. NASA says it anticipates being able to communicate with Voyager for about 10 more years. The cost of the Voyager 1 and Voyager 2 missions -- including launch, mission operations and the spacecraft's nuclear batteries, which were provided by the Department of Energy -- is about $988 million through September, NASA said.
1:1 computing device programs continue to gain popularity in schools worldwide. E-learning computing devices such as iPads, Chromebooks, tablets, and laptops have helped to engage students, improve technology skills and collaboration, encourage blended learning, and deliver cost savings in other areas such as textbooks and paperwork. Seeking to provide its students with the skills they need for the 21st century, Bethel Park School District began its 1:1 technology initiative in the 2014-2015 school year. The District's four-year implementation will provide 1:1 computing devices for all students in grades K-12. Students will benefit from online access to many of their textbooks and a variety of educational programs already in use in the District, such as Compass Odyssey, Study Island, Spell City, and Math Counts. The devices will also facilitate collaborative activities such as group writing and peer editing.
The District selected Chromebooks for its students because of their cost effectiveness and compatibility with the Google Suite of educational applications. It needed affordable, well-built Chromebook carts to support the technology initiative for students in grades K-6, where students are too young to take the computers home. The District needed to store the Chromebooks safely and securely in every classroom and have them readily accessible to students. Bethel Park School District chose Black Box, a Pittsburgh-based global provider of charging and storage solutions for e-learning devices, as its solution for Chromebook storage and security. "Working with a local, high-quality, and affordable vendor operating in the District just made sense," said Director of Technology Services Ron Reyer.
Since the District's plan is to purchase computer items through 2018, a functional cart that could grow with it into the future was a must. The one-size-fits-all shelving of the Standard Charging Carts from Black Box allows the District to be ready for whatever tomorrow's technology may bring. Each cart accommodates up to 30 devices, including Chromebooks, laptops, iPads, and other e-learning devices, and is backed by a lifetime warranty. To protect its investment, the District needed to securely store the Chromebooks in every classroom for students in grades K-6. The Standard Charging Cart's integrated, heavy-duty locks ensure that the devices are kept secure from unwanted access. Plus, the locking wheels keep the carts in place for maximum safety, but allow for mobility when needed.
Working with Black Box resulted in a successful implementation with budget numbers kept in balance. The District had to stick to a specific budget without much room for flexibility, and Black Box delivered. Regarding the District's decision to go with Black Box, Reyer commented, "A site visit to Black Box's assembly plant in Lawrence, PA convinced us that this was the right decision and six months later we were convinced that our decision to choose Black Box was the best we could have made." Over the summer of 2014, K-6 teachers learned about the cart's rapid wiring system. The system manages the cables in a neat and secure way from the back of the cart. With the wiring in place, each student can now easily slide his or her Chromebook into its respective slot from the front of the cart and connect the device to its charging cable. "It's efficient and works well," said Bethani Bombich, a third-grade teacher in the Bethel Park School District. Bombich added, "We love, love, love them [Chromebooks].
Use them every day. The kids love using them and it's amazing how quickly they get up to speed and can do different things. Just brings a new dimension to the classroom."
Tests as Products
Consider car manufacturing: it isn't hard to see that many consumer needs result in many car designs. SUVs are purchased for different reasons than sports cars. Growing families buy minivans in order to haul large numbers of children (their own and others') economically. Students (or their parents) buy compact cars that get great gas mileage, and the money saved helps pay for tuition and books. Luxury cars are purchased as a reward for professional success and to indicate status. Within each class, the varieties seem endless.
A test, like any other manufactured product, also is designed to fit a particular niche. That is, someone decides that the test should look a certain way and do certain things. Not all tests are designed the same, although it may not seem that way. For example, the test for registered nurses is adaptive: how questions are presented depends on how the nursing candidate answers them. The test takes about 75 minutes, but used to take two full days. The test for Certified Life Underwriters is 14 days long. Some tests use only multiple-choice questions, while others add creative performance components. Cisco, for example, has a rigorous lab-based exam that takes many hours to complete. Microsoft just embedded simulation questions into its exams to make sure candidates can perform certain critical tasks. Just this year, the test for CPAs added a simulation component in which candidates have to actually complete accounting tasks. There are thousands of certification and licensure exams, and they differ from one another. For many programs, the tests have been designed by experts, called "psychometricians," who are careful to design the tests to accurately measure knowledge and minimize security problems.
Tests serve many other high-stakes purposes. For example, the state assessment is taken as part of the elementary and secondary school experience. These tests are designed by the states' departments of education (or their contractors) in order to measure the effectiveness of education from the district level down to each classroom and teacher. In general, they aren't designed to tell much about individual students, although newer exit exams, combined with the No Child Left Behind Act, are used to advance students from one grade to the next. Classroom tests designed by teachers are meant to find out what a student has learned in a class, although they don't always do a great job.
Some tests are created to measure how much you know. Others measure how much you are capable of learning. The former are achievement tests that measure what you have "achieved." The latter are aptitude tests that measure your potential. Intelligence or IQ tests fit in this category. For about 100 years, intelligence tests have been used to decide whether school children need to be in special programs. These types of tests also can be used to determine the competence of alleged criminals to stand trial. A special type of aptitude test is an admissions test. For example, the SAT and ACT tests you took as a junior or senior in high school were meant to help colleges decide if you had what it takes to succeed at that level.
From test to test, many characteristics can change, including:
- Types of questions.
- Number of questions.
- How the questions are selected and presented.
- Number of interchangeable test forms.
- How the test is scored.
- The score needed to pass.
- Whether or not the test is computerized.
- The test time limit.
Given that you are adults who have completed quite a bit of schooling, you have likely taken tests that vary on any number of these characteristics—probably all of them. And you will continue to take tests, at least certification tests for professional development. If you ever go back to school, you will likely want to take another admissions test. If you have children, you can better understand what is required of them when they take tests, and you can support them better. Understanding tests as products can help you appreciate the differences between tests, why they cost what they do and why they put you through such complicated preparation and anxiety. Like purchasing any other consumer product, understanding tests in all their varieties can save time, money and frustration. David Foster, Ph.D., is president of Caveon and is a member of the International Test Commission, as well as several measurement industry boards. He can be reached at firstname.lastname@example.org.
Out of Scope: November 2009
By Tim Moran | Posted 2009-11-06
A (Digital) Bug's Life
Cyber-security threats are no picnic, for sure, but if researchers are successful, ants will come to the rescue in protecting computer networks from intruders. "Digital ants," that is. According to a recent story from Help Net Security (HNS) (www.net-security.org/secworld.php?id=8195), security experts are deploying a new defense modeled after one of nature's hardiest creatures -- the ant. HNS explains that "unlike traditional security devices, which are static, these 'digital ants' wander through computer networks looking for threats, such as computer worms. ... When a digital ant detects a threat, it doesn't take long for an army of ants to converge at that location, drawing the attention of human operators who step in to investigate." Wake Forest Professor of Computer Science Errin Fulp, an expert in security and computer networks, believes this concept of "swarm intelligence" can transform the nature of cyber-security because it can quickly adapt to ever-changing threats. Glenn Fink, a research scientist at Pacific Northwest National Laboratory in Richland, Wash., came up with the idea of copying ant behavior. PNNL, one of 10 Department of Energy laboratories, conducts cutting-edge research in cyber-security, and Fink teamed up with Fulp to join a project at PNNL that tested digital ants on a network of 64 computers. In that study, Fulp introduced a worm into the network, and the digital ants successfully found it. According to Fulp, the digital-ants concept will work best in large networks with many similar machines. But can the ants get out of control and cause unwanted damage? No worry, say the researchers. "Software sentinels" located at each machine report to network "sergeants" that are monitored by humans, who supervise the colony and maintain ultimate control. Just don't go getting potato chip crumbs all over the keyboard. You never know.
Do you know your BMI -- body mass index? George Fernandez, a professor of applied statistics and director of the Center for Research Design and Analysis at the University of Nevada, Reno, is pretty sure you don't -- or, at least, that you don't know how to figure it out. That's why, according to an article on the university's Nevada News site (www.unr.edu/nevadanews/templates/details.aspx?articleid=5168&zoneid=8), he decided the world needed an alternative to "weight in pounds is multiplied by 703 and then divided by height in inches squared." Fernandez fired up some SAS software and devised a simpler way of calculating a "maximum weight limit." There's a baseline height and weight: 5 feet, 9 inches and 175 pounds for men; 5 feet and 125 pounds for women. The article explains that, "from that starting point, you calculate how much taller or shorter you are, in inches. If you're a man, you add or subtract 5 pounds for every inch you are taller or shorter than 5 feet, 9 inches." Women add or subtract 4.5 pounds for each inch over or under 5 feet. (A quick sketch of this arithmetic appears at the end of this column.)
Watered Down Data
Can the island nation of Mauritius become an international data center hub? (A question that's surely been on all of our minds.) Economic development officials in this island chain in the western Indian Ocean believe that it can, by connecting Africa, Asia and the Middle East, according to an article on DatacenterKnowledge.com (www.datacenterknowledge.com/archives/2009/09/21/mauritius-pitches-sea-cooled-data-centers/).
“A key part of that pitch is the ocean itself and its potential to help data center operators slash their cooling costs,” writes Datacenter Knowledge. It seems that the country has plans to develop something called “sea water air conditioning” (SWAC) to tap into deep, cold water currents that come within two miles of the island. This cold water will then be piped back to a data center complex, where it will be used as the main cooling system, obviating the need for more “power-hungry chillers.” According to the article, the SWAC concept is not unique to Mauritius—Cornell University, in Ithaca, N.Y., uses water from Lake Cayuga for data center cooling—so it’s a mature technology, says Steve Wallage of the Broad Group, a U.K. consultancy focused on the data center sector. “You tend to have a high up-front cost in the pipe work,” he says, “but the long-term saving is in the 75 percent to 90 percent range on the cooling build.” We wish Mauritius the best, for, as we know, still, cold waters run cheap.
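As promised in the BMI item above, here is a quick sketch of Fernandez's maximum-weight-limit arithmetic alongside the classic BMI formula. It is not from the article; it is a minimal Python rendering of the rules exactly as described:

    def bmi(weight_lb: float, height_in: float) -> float:
        # Classic formula: pounds times 703, divided by inches squared.
        return weight_lb * 703 / height_in ** 2

    def max_weight_limit(height_in: float, male: bool) -> float:
        # Baselines: 5'9" (69 in) and 175 lb for men; 5'0" (60 in) and
        # 125 lb for women. Each inch over or under the baseline adds
        # or subtracts 5 lb (men) or 4.5 lb (women).
        if male:
            return 175 + 5 * (height_in - 69)
        return 125 + 4.5 * (height_in - 60)

    print(round(bmi(175, 69), 1))           # 25.8
    print(max_weight_limit(72, male=True))  # a 6-foot man: 190 lb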
Question 4) Cisco Certified Network Associate
Single Answer Multiple Choice
What does RIP use for its metric?
D. Hop count
E. Administrative distance
Answer: D. Hop count
RIP is a distance vector routing protocol that uses hop count as the routing metric. The maximum allowable hop count for RIP is 15. RIP broadcasts routing updates every 30 seconds by default and can load balance over four equal-cost paths. RIPv1 does not advertise the subnet mask, but RIPv2 does. To properly configure RIP on a router and enable RIP routing for networks 172.16.0.0 and 10.0.0.0, you issue the following commands:

    router rip
    network 172.16.0.0
    network 10.0.0.0

The ‘router rip’ command enables RIP on the router. The ‘network 10.0.0.0’ and ‘network 172.16.0.0’ commands specify that RIP routing should be enabled for these networks. The interfaces that match these addresses will begin sending RIP advertisements.
For more information, see Configuring Routing Information Protocol at http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/fipr_c/ipcprt2/1cfrip.htm#wp1000871
1. ICND Student Guide v2.1 – Volume 1 – Enabling RIP – RIP Features
2. INTRO Student Guide v1.0a – Volume 2 – Routing Protocols – RIPv1 and RIPv2
These questions are derived from the Self Test Software Practice Test for Cisco exam 640-801 – Cisco Certified Network Associate (single-exam option)
http://www.scientificamerican.com/2002/0502issue/0502profile.html
W. WAYT GIBBS
To defeat cyberterrorists, computer systems must be designed to work around sabotage. David A. Fisher's new programming language will help do just that
As one of the primary lines of defense against hackers, cyberterrorists and other online malefactors, the CERT Coordination Center at Carnegie Mellon University is a natural target. So like many high-profile organizations, it beefed up its security measures after September's audacious terrorist attacks. Before I can enter the glass and steel building, I have to state my business to an intercom and smile for the camera at the front door. Then I must sign my name in front of two uniformed guards and wait for an escort who can swipe her scan card through a reader (surveilled by another camera) to admit me to the "classified" area. But these barriers--just like the patting down I endured at the airport and like the series of passwords I must type to boot up my laptop--create more of an illusion of security than actual security. In an open society, after all, perfect security is an impossible dream. That is particularly true of computer systems, which are rapidly growing more complicated, interdependent, indispensable--and easier to hack. The tapestries of machines that control transportation, banking, the power grid and virtually anything connected to the Internet are all unbounded systems, observes CERT researcher David A. Fisher: "No one, not even the owner, has complete and precise knowledge of the topology or state of the system. Central control is nonexistent or ineffective." Those characteristics frustrate computer scientists' attempts to figure out how well critical infrastructures will stand up under attack. "There is no formal understanding yet of unbounded systems," Fisher says, and that seems to bother him.
In his 40-year career, Fisher has championed a rigorous approach to computing. He began studying computer science when it was still called mathematics, and he played a central role in the creation of Ada, an advanced computer language created in the 1970s by the Department of Defense to replace a babel of less disciplined programming dialects. In the 1980s Fisher founded a start-up firm that sold software components, one of the first companies that tried to make "interchangeable parts" that could dramatically speed up the development process. In the early 1990s he led an effort by the National Institute of Standards and Technology (NIST) to push the software industry to work more like the computer hardware market, in which many competing firms make standard parts that can be combined into myriad products.
Fisher's quest to bring order to chaotic systems has often met resistance. The Pentagon instructed all its programmers to use Ada, but defense contractors balked. His start-up foundered for lack of venture capital. A hostile Congress thwarted his advanced technology program at NIST. But by 1995, the year that Fisher joined CERT, security experts were beginning to realize, as CERT director Richard D. Pethia puts it, that "our traditional security techniques just won't hold up much longer." The organization was founded as the Computer Emergency Response Team in 1988, after a Cornell University graduate student released a self-propagating worm that took down a sizable fraction of the Internet.
There are now more than 100 such response teams worldwide; the CERT center at Carnegie Mellon helps to coordinate the global defense against what Pethia calls "high-impact incidents: attacks such as the recent Nimda and Code Red worms that touch hundreds of thousands of sites, attacks against the Internet infrastructure itself, and any other computer attacks that might threaten lives or compromise national defense." But each year the number of incidents roughly doubles, the sophistication of attacks grows and the defenders fall a little further behind. So although CERT still scrambles its team of crack counterhackers in response to large-scale assaults, most of its funding (about half of it from the DOD) now goes to research. For Fisher, the most pressing question is how to design systems that, although they are unbounded and thus inherently insecure, have "survivability." That means that even if they are damaged, they will still manage to fulfill their central function--sometimes sacrificing components, if necessary. Researchers don't yet know how to build such resilient computer systems, but Fisher's group released a new programming language in February that may help considerably. Fisher decided a new language was necessary when he started studying the mathematics of the cascade effects that dominate unbounded systems. A mouse click is passed to a modem that fillips a router that talks to a Web server that instructs a warehouse robot to fetch a book that is shipped out the same day. Or a tree branch takes down a power line, which overloads a transformer, which knocks out a substation, and within hours the lights go out in six states. Engineers generally know what mission a system must perform. The power grid, for example, should keep delivering 110 volts at 60 hertz. "The question is: What simple rules should each node in the power grid follow to ensure that that happens despite equipment failures, natural disasters and deliberate attacks?" Fisher asks. He calls such rules "emergent algorithms" because amazingly sophisticated behavior (such as the construction of an anthill) can emerge from a simple program executed by lots of autonomous actors (such as thousands of ants). Fisher and his colleagues realized that they could never accurately answer their question using conventional computer languages, "because they compel you to give complete and precise descriptions. But we don't have complete information about the power grid--or any unbounded system," Fisher points out. So they created a radically new programming language called Easel. "Easel allows us to simulate unbounded systems even when given incomplete information about their state," Fisher says. "So I can write programs that help control the power grid or help prevent distributed denial of service attacks" such as those that knocked out the CNN and Yahoo! Web sites a few years ago. Because it uses a different kind of logic than previous programming languages, Easel makes it easier to do abstract reasoning. "Computation has traditionally been a commerce in proper nouns: Fido, Spot, Rex," Fisher notes. "Easel is a commerce in common nouns: dog, not Fido." This difference flips programs upside down. In standard languages, a program would include only those attributes of dogs that the programmer judges are important. "The logic of the programming language then adds the assumption that all other properties of dogs are unimportant. 
That allows you to run any virtual experiment about dogs, but it also produces wrong answers," Fisher says. This is why computer models about the real world must always be tested against observations. In Easel, Fisher says, "you enumerate only those properties of dogs about which you are certain. They have four legs, have two eyes, range from six inches high to four feet high. But you don't specify how the computer must represent any particular dog. This guarantees that the simulation will not produce a wrong answer. The trade-off is that sometimes the system will respond, 'I don't have enough information to answer that question.' " Easel makes it easier to predict how a new cyberpathogen or software bug might cripple a system. CERT researcher Timothy J. Shimeall recently wrote a 250-line Easel program that models Internet attacks of the style of the Code Red worm, for example. That model could easily be added to another that simulates a large corporate network, to test strategies for stopping the worm from replicating. Fisher and others have already begun using Easel to look for emergent algorithms that will improve the survivability of various critical infrastructures. "You can think of an adversary as a competing system with its own survival goals," Fisher says. "The way you win that war is not to build walls that interfere with your goals but to prevent the opposition from fulfilling its purpose."
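Easel itself is not widely available, but the core idea of an emergent algorithm, global behavior arising from simple local rules with no central control, is easy to demonstrate. The toy sketch below is an illustration of the concept only, not Easel code and not Fisher's actual algorithms: each node in a ring of grid nodes follows one local rule (shed any load beyond capacity to its neighbors), and the ring as a whole absorbs a failure that no individual node knows about:

    import random

    # Toy emergent algorithm: ten grid nodes in a ring, each following
    # one local rule with no global view of the system.
    N, CAPACITY = 10, 120.0
    loads = [100.0] * N

    failed = random.randrange(N)        # knock out one node
    spill = loads[failed]
    loads[failed] = 0.0
    loads[(failed - 1) % N] += spill / 2
    loads[(failed + 1) % N] += spill / 2

    for _ in range(50):                 # let the local rule run
        for i in range(N):
            if i == failed or loads[i] <= CAPACITY:
                continue
            excess, loads[i] = loads[i] - CAPACITY, CAPACITY
            for j in ((i - 1) % N, (i + 1) % N):
                if j != failed:
                    loads[j] += excess / 2

    print(f"Node {failed} failed; ring still carries {sum(loads):.0f} of 1000 units")

No node ever sees the whole grid, yet the survivable behavior Fisher describes emerges from the one rule each node follows.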
PostgreSQL is a first-rate, enterprise-worthy open source RDBMS (relational database management system) that compares very favorably to high-priced closed-source commercial databases. Databases are complex, tricksy beasts full of pitfalls. In this two-part crash course, we'll get a new PostgreSQL database up and running with elegant ease, and learn important fundamentals. If you're a database novice, give yourself plenty of time to learn your way around. PostgreSQL is a great database for beginners because it's well documented and aims to adhere to standards. Even better, everything is discoverable -- nothing is hidden, not even the source code, so you can develop as complete an understanding of it as you want.
The most important part of administering any database is preparation, in planning and design, and in learning best practices. A good requirements analysis will help you decide what data to store, how to organize it, and what business rules to incorporate. You'll need to figure out where your business logic goes -- in the database, in middleware, or in applications? You may not have the luxury of a clean, fresh new installation, but must instead grapple with a migration from a different database. These are giant topics for another day; fortunately there are plenty of good resources online, starting with the excellent PostgreSQL manuals and Wiki.
We'll use three things in this crash course: PostgreSQL, its built-in interactive command shell psql, and the excellent pgAdmin3 graphical administration and development tool. Linux users will find PostgreSQL and pgAdmin3 in the repositories of their favorite Linux distributions, and there are downloads on PostgreSQL.org for Linux, FreeBSD, Mac OS X, Solaris, and Windows. There are one-click installers for OS X and Windows, and they include pgAdmin3. Any of these operating systems is fine for testing and learning. For production use, I recommend a Linux or Unix server, because they're reliable, efficient, and secure. Linux and FreeBSD split PostgreSQL into multiple packages. You want both the server and the client. For example, on Debian the metapackage postgresql installs all of these packages:

    # apt-get install postgresql
    postgresql postgresql-9.0 postgresql-client-9.0 postgresql-client-common postgresql-common

See the detailed installation guides on the PostgreSQL wiki for more information for all platforms. The downloads page also includes some live CDs which make it dead easy to set up a test server; simply boot the CD and go to work. For this article, I used a Debian Wheezy (Testing) system running PostgreSQL 9.0.4, the current stable release.
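Once the server and client packages are installed, a quick way to confirm the database is alive is to connect and run a trivial query. This sketch is not part of the original article; it assumes Python 3 with the third-party psycopg2 package, and a database and user (the names below are placeholders) that you have already created:

    # Minimal connectivity check for a fresh PostgreSQL install.
    # "testdb", "testuser" and the password are illustrative placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="localhost",
        dbname="testdb",
        user="testuser",
        password="secret",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])  # e.g. "PostgreSQL 9.0.4 on x86_64 ..."
    conn.close()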
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note that it will likely favor the submitter's approach. The IPv6 movement has been years in the making. So many years, in fact, that it has hardly been a movement at all. While a handful of regions, primarily in Asia and pockets of Europe, have embraced IPv6, it has been otherwise largely ignored, something to be considered later while we exhaust IPv4 assets. This thinking has clearly stunted the growth of IPv6, presenting opportunities to early adopters and IPv6 facilitators and indigestion for the procrastinators.
Besides the inexhaustible address space (approximately 340 trillion, trillion, trillion unique IP addresses vs. IPv4's 4.3 billion), IPv6 offers a number of network advantages. In most cases, for example, computers and applications will detect and take advantage of IPv6-enabled networks and services without requiring any action from the user. And IPv6 removes the need for Network Address Translation, a service that allows multiple clients to share a single IP address but does not work well for some applications.
For the Internet to take advantage of IPv6, most hosts on the Internet, as well as the networks connecting them, will need to deploy the protocol. However, IPv6 deployment is proving a bigger challenge than expected, mostly due to a lack of interest from service providers and end users. While IPv6 deployment is accelerating, according to a Google study, areas such as the Americas and Africa are comparatively lagging in deployment. In December 2010, despite marking its 12th anniversary as a Standards Track protocol, IPv6 was only in its infancy in terms of general worldwide deployment. There are indeed interoperability issues between IPv6 and IPv4, which is leading, essentially, to the creation of parallel, independent networks. Exchanging traffic between the two networks requires special translator gateways, but modern computer operating systems implement dual-protocol software for transparent access to both networks, either natively or using a tunneling protocol such as 6to4, 6in4, or Teredo. [Also see: "IPv6 tunnel basics"]
Early adopters positioned to have greatest impact
Even early adopters of IPv6 as a network service understand that demand doesn't exist across the board today, but will soon. Akhil Verma, director of global product management for Inteliquent (formerly Neutral Tandem/Tinet), says, "IPv6 service providers lack the content, infrastructure and applications to make a good business case today and have contributed significantly to the lack of IPv6 adoption." This coming from Inteliquent, one of the world's leading providers and facilitators of IPv6 network-enabled solutions. Once the inevitable IPv6 levee breaks, an overwhelming number of network and content providers will be scrambling (and likely overpaying) to get IPv6 equipped and compliant -- a potential boon for those few veteran IPv6 enablers. "We have been doing IPv6 for over a decade now, and have been enabling it for our customers on demand as a complimentary component of the services we offer," Verma says. "In turn, this means any IP Transit customer can get IPv6 service enabled on existing ports so they can have dual stack access at any time. We continue to enable services for new IP transit customers, but we see a very low demand.
It could be an education issue, a financial justification issue or some other factor. Whatever it is, it is stopping customers from making the move."
According to Dr. Kate Lance, communications manager at IPv6Now, an Australia-based company specializing in IPv6 assistance, training and consulting, in future years we will look back on this IPv6 debate with astonishment and ask: How could anyone conceivably argue against a technology that offers secure communication to people and devices on a breathtaking scale? How could anyone not want a technology that offers an incremental leap in Internet capability and capacity that may be as significant as the development of the Internet itself? But challenges related to IPv6 implementation remain, relating specifically to transition complexities, costs, timelines and the overall business of the migration from IPv4 to IPv6. These challenges include:
* Operational challenges: The operational challenges are actually no different from the normal challenges of running any network: staff training and equipment updates take place as in the normal business cycle. All modern network equipment is IPv6-ready -- it just needs the staff to run it. The actual transition -- like any technical upgrade -- takes planning and resources, but in operation it quickly becomes a standard operational environment. Still, for those without the necessary resources or in-house expertise, IPv6 implementation challenges can be overwhelming ... and expensive.
* Transition and implementation challenges: IPv6 has been hamstrung by bad press, says Lance, especially the view that IPv6 is a purely technical transition without financial justification. Communication has been a disaster between levels of organizations: businesspeople, who comprehend the strategic implications, have not been in effective communication with network people, caught up in the technical details, who fail to understand what IPv6 can offer business at a higher level. Once an organization decides it will move to IPv6, the greatest transition hurdle is overcome! All reports so far are that IPv6 is easier to implement than people fear, and the benefits of efficiency and economy quickly become apparent. It's not an all-or-nothing environment: IPv6 can be implemented in stages as required, Lance says.
* Financial investment justification: IPv6 networks are easier to manage than IPv4 networks, so ultimately less expensive. For instance, merging two networks when businesses combine or expand becomes extremely easy under IPv6, but under IPv4 today can be costly. This is because most internal IPv4 networks use the same RFC1918 private number space, so a merger means an expensive renumbering exercise to avoid clashes and black holes. In IPv6 the vast address range means this would never happen. In fact, any network reorganization or expansion becomes easier due to IPv6's numbering scheme, which over time means cheaper networks. IPv6 also has mobility and security features that, when implemented, have enormous implications for financially beneficial innovation.
* Internal educational challenges: It's important for staff who interact at a technical level with customers to at least understand the existence of IPv6 and some of the issues, as more and more customers will start using it. Managers of call centers, and managers of software and hardware groups, will need some education. But the deepest level of knowledge required is at the system/network level, and staff there need some formal training.
* Availability of knowledgeable workforce: Many technical people are aware of the need for IPv6, but have been held back by other demands and business priorities. The knowledgeable workforce is not large at the moment, but IPv6Now's experience from numerous training courses is that people find IPv6 easier to understand than originally expected, and quickly develop confidence and expertise. This does, however, mean either hiring knowledgeable staff or training up existing staff.
Making the move better
According to multiple sources with firsthand experience of both IPv6 network transition and equipment challenges, a seamless IPv6 transition will require a systematic approach that leverages deep telecom and IP network knowledge across multiple vendors and technologies. This systematic approach is critical to developing an efficient, comprehensive and hassle-free IPv6 implementation plan, design and execution. Moreover, it is key to facilitating a smooth IPv6 migration strategy across diverse architectures. For most, this means partnering and collaborating with an established IPv6 vendor that can bring valuable implementation insights and provide guidance on avoiding the operational challenges that will plague the majority of "go it alone" IPv6 implementers.
There are a number of different paths one can take when making the transition from IPv4 to IPv6, each having its own set of merits and/or implications for service providers. The simplest and most logical option for almost all service providers is to deploy dual stack on their networks, supporting both IPv4 and IPv6 simultaneously. This can be deployed in different ways, depending on the service provider's vision and network capability. The frequently used methods are:
* Option 1: Dual stack throughout the network: The most commonly used option, whereby the network is designed to support both native IPv4 and native IPv6 simultaneously to the end customer. The service provider can offer IP transit service in multiple flavors, namely native IPv4, dual-stack IPv4/IPv6 and native IPv6. Native IPv4 is the business-as-usual practice that has been run for ages. Dual-stack IPv4/IPv6 allows the service provider to offer both IPv4 and IPv6 services over the same backbone, where both kinds of traffic simply traverse the network as IP packets. Finally, for technology-savvy end customers, the service provider can also offer native IPv6 service, which allows end users to run the service separately from their IPv4 service and is preferred by some customers to balance their network hierarchy.
* Option 1-B: Tunneling IPv6 over IPv4 using 6RD (anycast): Not the best way to deploy IPv6, but preferred by service providers that want a low investment and do not have an access network capable of supporting IPv6. Native IPv6 is not supported in this scenario, potentially requiring a second upgrade in the future, and hence another round of financial justification, education, and so on, for the service provider. There are also other ways and means available to service providers to enable their networks to support IPv6, but those are neither easy nor common in terms of approach.
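The client side of dual stack is easy to see in practice. The sketch below is not from the article; it is a minimal Python illustration (the hostname is just an example) that asks the resolver for both IPv6 and IPv4 addresses and lists IPv6 first, roughly what a dual-stack application does before opening a connection:

    import socket

    def resolve_dual_stack(host, port=80):
        # Return (family, address) pairs with IPv6 first; a dual-stack
        # client tries these in order and falls back to IPv4.
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        v6 = [(fam, sa[0]) for fam, *_, sa in infos if fam == socket.AF_INET6]
        v4 = [(fam, sa[0]) for fam, *_, sa in infos if fam == socket.AF_INET]
        return v6 + v4

    for family, addr in resolve_dual_stack("www.example.com"):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(f"{label}: {addr}")

On an IPv4-only network the IPv6 list simply comes back empty, which is why dual stack lets a provider add IPv6 without disturbing existing IPv4 service.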
In closing, it is fair to suggest that IPv6's greatest challenges are not technical or economic, but rather the bizarre myths that have arisen, including, Lance says:
* The belief that IPv6 is just an unprofitable switchover in Internet technical plumbing, when it has the great benefit of supporting vastly larger, safer, more efficient -- and hence cheaper -- network infrastructure.
* The fear that IPv6 is a costly "all or nothing" transition, when in fact there are staging solutions that help run IPv4 and IPv6 networks in tandem.
* And the assumption that IPv6 is just like IPv4, only bigger, when in fact it needs good levels of understanding, through training and workshops, to be implemented securely and with maximum return on the investment.
Once a decision is taken to move forward with training and implementation, it all falls easily into place.
Perrine is a director at Jaymie Scotto & Associates. JSA provides clients with critical industry perspective and visibility. Our innovative tools, expert team and established relationships within the industry ensure the finest public relations, marketing and event planning services available in telecom.
This story, "The IPv6 bandwagon has left the station, but who is onboard?" was originally published by Network World.
IBM, Volunteers Help Locate Anti-Cancer Drugs
Thanks to 1.5 million volunteers who left their computers running when not in use, and a little help from Big Blue, cancer researchers have announced significant progress in the search for potential new cancer drugs. The Help Conquer Cancer Project worked with the IBM-supported World Community Grid to send out protein samples for simulation testing on all of those computers. The program, running in the background on the volunteers' computers and harnessing unused compute cycles, simulated a process called crystallization, in which proteins crystallize into a solid form. In this form, the proteins can be further examined by special X-ray techniques to see how they interact with cancer, and whether or not those proteins may cause the disease. Using the World Community Grid to send out sample after sample to the volunteers, the Help Conquer Cancer Project believes it was able to evaluate six times as many images per protein for further testing, in significantly less time than would be possible under manual human review. By way of example, if a person looked at one image per second without rest -- which is not humanly possible -- it would take 1,333 days to examine all 12,500 proteins in the study. The World Community Grid did that in a fraction of the time, said Dr. Joseph Jasinski, an IBM distinguished engineer and program director of IBM's Health Care and Life Sciences Institute.
"The World Community Grid is really good at running embarrassingly parallel computation, where you do the same task over and over again. So it's set up for trying many possibilities where you want to quickly throw away the ones that are no good," Jasinski told InternetNews.com. Something like testing for cancer drugs works well with a setup like the World Community Grid. The 1.5 million machines work independently and don't communicate with each other, hence Jasinski's description of them as "embarrassingly parallel." They perform the same repetitive task over and over -- in this case, testing a protein to see its potential in cancer treatment.
An increasing number of firms are turning to this solution for their own large-scale computing tasks, but they are keeping the process inside the firewall. Distributed computing lends itself well to a "loosely coupled" task like searching through a vast amount of data for a match, Jasinski said. In a "tightly coupled" scenario, where the program might need the results of one step in order to continue, or processors need to communicate, a company would be better off using IBM's BlueGene servers, where high-speed interconnects enable interprocessor communication, Jasinski said. IBM consulting helps firms determine whether their computation needs are loosely coupled or tightly coupled, and offers the appropriate solution. Some companies are using loosely coupled computing internally, Jasinski said. IBM has its own technology and services, through its Smarter Planet initiative, to help these firms build internal distributed computing systems. Companies like financial, life science and drug research firms put an agent on employee computers and ask employees to leave their computers running at the end of the day, he said. "We have helped companies and institutions set these things up.
It's part of a growing trend around distributed computing, a sort of precursor to cloud computing in a sense, so I think that general trend of trying to harness the horsepower you have and get as much productivity from the infrastructure you have is going to continue," Jasinski said. "We have a good and growing list of problems people are applying this technology toward, typically in energy, the environment, health care and life sciences. We've also tried to get some stuff going in computational aspects of humanities research," he added. The World Community Grid launched in 2004 and is the world's largest public scientific research computer network, with 514,000 members offering 1.5 million devices, meaning many people are running more than just their own personal computer. It runs other projects similar to the Help Conquer Cancer Project, such as FightAIDS@Home, which looks for a cure for HIV, plus programs targeting influenza and muscular dystrophy, human protein folding research, and research efforts in the field of clean energy. In April, the FightAIDS@Home group announced it had found two new potential proteins that could be used in developing protease inhibitors, an effective method of treatment for HIV.
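The "embarrassingly parallel" pattern Jasinski describes is simple to sketch: independent work units fan out to workers that never communicate. The toy example below illustrates the pattern only; it is not World Community Grid code, and the scoring function is invented for the example:

    from multiprocessing import Pool

    def score(candidate):
        # Stand-in for an expensive, independent simulation of one
        # protein; the formula is made up purely for illustration.
        return candidate, (candidate * 2654435761 % 1000) / 1000.0

    if __name__ == "__main__":
        candidates = range(10_000)    # one work unit per candidate
        with Pool() as pool:          # workers never talk to each other
            results = pool.map(score, candidates)
        promising = [c for c, v in results if v > 0.99]
        print(f"{len(promising)} of {len(results)} candidates kept for review")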
DNS-Based Phishing Attacks on the Rise
Phishing fraudsters are using a pair of DNS exploits to lend their scams the illusion of credible domains, the latest ploy to dupe people into handing over their sensitive information. According to research firm Netcraft, phishers have begun to use wildcard DNS records to help trick unsuspecting users into giving up information about their identity. Wildcard DNS helps users arrive at their intended Web destination by redirecting mistyped and/or errant addresses. But wildcard DNS has been used against Barclays Bank in the U.K., with e-mail using an additional sequence of characters that ultimately leads the user to a phisher's site. A similar type of attack vector specific to Microsoft Internet Explorer was reported last month by security researcher Bitlance Winter. In that attack, an identifiable URL also has a string of characters or additional domain information added that directs a user to a different address than the one they see in the visible toolbar.
The technique known as DNS cache poisoning is also being utilized by phishers in an attack now known as "pharming," in which a poisoned DNS server redirects users to the phisher's Web site. The "poison" is essentially false DNS information injected into a vulnerable DNS server. According to Netcraft, an attack this past Saturday exploited a known vulnerability in Symantec's firewall product. The firewall vulnerability had not been patched by Symantec last year. The Saturday attack redirected user requests for eBay, Google and weather.com to a trio of phisher-directed sites. Dave Jevans, chairman of the Anti-Phishing Working Group, told internetnews.com that he has seen an increase in wildcard DNS and DNS pharming attacks, with several new ones this year targeting North American institutions. "The UK has seen an increase since December 2004," Jevans said. "Some of these attempt to implement man-in-the-middle attacks too."
The DNS system itself has been the subject of proposed enhancements, such as DNSsec, to guarantee better security for users. DNSsec is short for DNS Security Extensions, which are supposed to add integrity and authentication checks to DNS data. "DNSsec has been in the works for some time, but not really rolled out except maybe at the Verisign root. Recent events are going to spur something here, I think," Jevans said. DNSsec, however, won't necessarily stop all pharming activity. "Most pharming is using DNS poisoning at the personal PC level (e.g., adding entries to the local hosts file). Fixing DNS servers won't prevent this," Jevans explained. "Mutual authentication (possibly two-factor) would be a big help, however." The APWG recently reported that phishing attacks rose by 42 percent from December 2004 to January 2005.
Article courtesy of internetnews.com
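Jevans's point about hosts-file poisoning can be illustrated in a few lines of code. The sketch below is not from the article; it is a minimal Python check (the watch list is an example, and the hosts file lives at /etc/hosts on Unix-like systems or C:\Windows\System32\drivers\etc\hosts on Windows) that flags hosts-file entries overriding resolution for well-known domains, one crude sign of local pharming:

    # Crude local-pharming check: flag hosts-file entries that override
    # well-known domains. Watch list and path are illustrative.
    from pathlib import Path

    WATCHED = {"ebay.com", "google.com", "weather.com"}
    HOSTS = Path("/etc/hosts")  # adjust for Windows

    for line in HOSTS.read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            if any(name == d or name.endswith("." + d) for d in WATCHED):
                print(f"Suspicious override: {name} -> {ip}")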
Light is fundamental to the quality of an image. As a rule, provided the image is not overexposed, the more light that is available in the scene, the better the image will be. If the amount of light is insufficient, the image will be noisy or dark. The amount of light required to produce a good-quality image depends on the camera and how sensitive to light it is. In other words, the darker the scene, the more sensitive to light the camera has to be. Light sensitivity, or minimum illumination, refers to the smallest amount of light needed for the camera to produce an image of useable quality. Minimum illumination is stated in lux (lx), a measure of illuminance that is often inappropriately referred to as light intensity. Thus, one might argue that the lower the lux rating indicated by the vendor, the more sensitive the camera. However, it is not quite that simple. There is a paradox to the minimum illumination issue: while light sensitivity is often a key deciding factor when choosing between products and vendors, it is a challenging aspect of camera technology and one of the most difficult to depict. This paper aims to bring some nuance to the discussion of light sensitivity, to highlight the traps, and to explain why in-the-field testing is preferred over datasheet comparisons and is necessary to make an informed purchase decision.
NOAA uses Google Earth to take you down to the sea - By William Jackson - Feb 03, 2009 The narrator of Herman Melville’s “Moby Dick” signed onto the Pequod because he wanted to see “the watery part of the world.” Modern-day Ishmaels can just sign onto Google Earth. Google and the National Oceanic and Atmospheric Administration have launched the latest version of the online geospatial exploration program with a new component called Google Ocean. It contains images, videos, and historical and current data supplied by NOAA. “This allows anyone anywhere at any time to explore virtually the ocean from their home computer,” said Richard Spinrad, NOAA’s assistant administrator for research and a member of the advisory board for Ocean in Google Earth. The Ocean layer of Google Earth is a way of making data gathered at taxpayer expense more widely available to the general public. “What it does is allow access to the kind of data, information and imagery we normally collect,” Spinrad said. Previously, the data was embedded in a variety of Web sites and a variety of formats. The new approach allows access from a common, user-friendly platform and central Web location. “Google Earth is a format that just about everyone who is computer literate knows how to use,” Spinrad said. Google Ocean also includes content from the National Geographic Society and Scripps Institution of Oceanography. Keyhole Inc., which Google acquired in 2004, created Google Earth. It integrates and provides access to satellite and aerial imagery, maps, terrain data, and other information. In the Ocean layer of the program, clicking on the watery parts of the virtual world brings up data and imagery from NOAA research expeditions, such as a visit to the wreck of the Titanic. Users can also access information on the 13 U.S. national marine sanctuaries and one marine national monument, including underwater video. There are maps of ocean currents, information on marine debris movement and data from NOAA’s National Data Buoy Center, which gathers information from hundreds of buoys in U.S. coastal waters and the Great Lakes. Such information is useful for everyone from fishers to windsurfers, Spinrad said. “One of the really powerful tools is bottom topography,” he added, which provides maps of the coastal seafloor rendered with realistic elevations and some imagery. “Within a year, we expect to be able to provide real-time access to imagery from our explorations,” Spinrad said. “We’re just getting a sense of what types of applications we can build.” Last year, NOAA commissioned the Okeanos Explorer as the first U.S. ship dedicated to ocean exploration. It has a system for near-real-time audio, video and data transmission via satellite and Internet2 to five onshore Exploration Command Centers, which gives scientists an opportunity to participate in the ship’s mission as they are needed. Eventually, Web surfers will be able to watch as well. Ocean in Google Earth grew out of talks with NOAA’s chief scientist, which led to the formation of an advisory board three years ago. Spinrad said adding the Ocean component to Google Earth was a goal from the beginning. “It was so obviously something that needed to happen,” he said. The project began in May 2007, and much of the work consisted of translating existing data into the Keyhole Markup Language, the geospatial language that underlies Google Earth. 
The Ocean layer is included in beta Version 5 of Google Earth, which is available for free download. William Jackson is a Maryland-based freelance writer.
<urn:uuid:40527e6a-1e67-4bf5-96c8-4dcb15885af4>
CC-MAIN-2017-04
https://gcn.com/articles/2009/02/03/noaa-dives-into-google-ocean.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924708
781
2.6875
3
Why does NOAA launch satellites? In response to a report on the possibility that we could be without weather satellite coverage for more than a year, a reader asked: Why does NOAA have anything to do with the launching of satellites? If NOAA needs a satellite they should just tell NASA what they need, and let the experts build and fly it. We don't need multiple agencies trying to build their own little empires of satellite operations. Frank Konkel responds: NOAA works with NASA on the JPSS (Joint Polar Satellite System), but prior to that partnership, those two agencies worked with DOD on a program called the National Polar-orbiting Operational Environmental Satellite System (NPOESS) that was supposed to replace the existing polar-orbiting satellites. It failed miserably due to mismanagement and budget overruns. The government then decided the current system would be better than the tri-agency partnership, although there is no shortage of criticism. Posted by Frank Konkel on Feb 20, 2013 at 12:10 PM
<urn:uuid:916e765f-7a73-4e86-a79c-5bfa49ce6d40>
CC-MAIN-2017-04
https://fcw.com/blogs/conversation/2013/02/satellite-partnership.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00172-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961506
205
2.78125
3
Researchers: SSD Drives Pose Data Sanitation Risk. Researchers from the University of California, San Diego are warning that traditional methods to clear data from hard drives may not work as well on Solid State Disks. For those who need to make certain that sensitive data is cleared from their drives after systems are decommissioned, or if they're being transferred to other employees or users, SSD drives may pose a serious challenge. According to the study, the researchers evaluated 12 SSDs; only eight of the twelve had built-in ATA and SCSI commands for wiping data, and on half of those eight the wiping routines didn't work. The researchers suggest, instead, that disks be encrypted as soon as the initial system image is created. They found that degaussing the drives (using magnetism to destroy the structure of the data) didn't work properly. And software wiping of individual files could not be relied upon to work properly with native routines. However, wiping the entire drive with software routines worked often, but not always. The team provided tools they believe make file sanitation more effective. "Overall, we conclude that the increased complexity of SSDs relative to hard drives requires that SSDs provide verifiable sanitization operations," the report concluded. This news is troubling for those in industries handling highly sensitive, confidential, or regulated data who must ensure drive data is properly destroyed. Colleague Mathew J. Schwartz covers more detail in his story, SSDs Prove Tough To Erase: How can SSDs be effectively secured or disposed of, short of physically destroying them? The researchers propose encrypting all data from the start, then destroying the encryption keys and overwriting every page of data to securely wipe the SSD and block future key recovery. Implementing such an approach requires planning. "To properly secure data and take advantage of the performance benefits that SSDs offer, you should always encrypt the entire disk and do so as soon as the operating system is installed," said Chester Wisniewski, a senior security advisor for Sophos Canada, in a blog post. Based on the researchers' findings, "securely erasing SSDs after they have been used unencrypted is very difficult, and may be impossible in some cases," he said. For my security and technology observations throughout the day, find me on Twitter as @georgevhulme.
<urn:uuid:2a8ca501-eca8-417d-b285-45ea4c9bd52e>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/researchers-ssd-drives-pose-data-sanitation-risk/d/d-id/1096217?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00080-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956665
521
2.875
3
Officials in Topeka, Kan. want septic tanks gone, so they use GIS maps to locate households capable of switching to the municipal sewer system. The project has been ongoing since the city passed an ordinance in 1997 mandating that homes close to the sewer system remove their septic tanks, said Kyle Tjelmeland, GIS system analyst for Topeka. "It becomes a revenue stream for the city," said Tjelmeland. Citizens pay the city more than $1,000 to hook up to the municipal sewer system. To locate prospects for conversion to the sewer, Topeka's GIS staff crafted a map with layers showing houses near the sewer system that receive water service but no sewer service. "If they're paying a water bill, but they're not paying a sewer bill, there's a good chance they're using a septic system. Either that, or they're connected to the sewer system illegally," said Tjelmeland. Septic Tanks Can Pollute A septic tank receives all of a building's sewage, dissolves the solids, cleans the water and releases it into the ground. Built-up solids are pumped out of a tank roughly every five years. One benefit of a septic tank is that it spares the homeowner the cost of paying for sewer service. The process is harmless to the environment until the tank starts malfunctioning -- sometimes when the tank has been in the ground 60 years or more, explained Tjelmeland. "The bacteria and chemicals in wastewater can leach into the ground water. More often that water comes to the surface, getting you a bad smell and standing water, which can breed mosquitoes," Tjelmeland said. The Environmental Protection Agency gave Topeka a grant years ago to pay for residential hookups, and many residents utilized it. That grant has since ended. Homeowners now must fund new sewer hookups themselves.
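The selection logic Tjelmeland describes amounts to a simple filter over billing records layered onto the map. Here is a minimal sketch of that logic in C; the addresses and flags are invented and stand in for the city's actual GIS and billing data:
#include <stdio.h>

/* Hypothetical billing records; the city's real data lives in its GIS layers. */
struct parcel {
    const char *address;
    int has_water_bill;
    int has_sewer_bill;
};

int main(void)
{
    struct parcel parcels[] = {
        { "101 Elm St",   1, 1 },
        { "205 Oak Ave",  1, 0 },   /* water but no sewer: likely septic */
        { "309 Pine Rd",  1, 0 },
        { "412 Maple Dr", 0, 0 },
    };
    size_t n = sizeof parcels / sizeof parcels[0];

    /* Flag water-but-no-sewer parcels as candidates for sewer conversion. */
    for (size_t i = 0; i < n; ++i)
        if (parcels[i].has_water_bill && !parcels[i].has_sewer_bill)
            printf("Candidate for conversion: %s\n", parcels[i].address);
    return 0;
}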
<urn:uuid:0103c508-05ac-4d52-82a8-d703e3fdb525>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/Septic-Tanks-Targeted-by-GIS-Maps.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00474-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956477
394
2.90625
3
Cisco CCENT Internetworking Welcome to the exciting world of internetworking. This first chapter will really help you understand the basics of internetworking by focusing on how to connect networks together using Cisco routers and switches. To get all that down, you’ll first need to know exactly what an internetwork is, right? You create an internetwork when you take two or more LANs or WANs and connect them via a router, and configure a logical network addressing scheme with a protocol like IP. Cisco CCENT Network Definition A network is a collection of network devices (i.e. switches and routers) along with end systems (i.e. PCs and servers). Networks carry many types of data (i.e. voice, video and data) to/from many locations (i.e. Branch Office, Home Office, etc.). Headquarters: The main site where everyone is ultimately connected. It is where the bulk of the information is located. Main offices typically have higher bandwidth coming into them than smaller locations. Within the headquarters site there may be multiple Local Area Networks (LANs) attached. Types of Remote Locations: Branch Office: Smaller office where a group of people work. Typically a lot of the resources they are accessing are stored at the headquarters (main site). Home Office: Place where an individual works from a somewhat permanent location. Again, most of the resources they are accessing are stored at the headquarters. Mobile User: Mobile users typically connect to the headquarters or branch office while on travel. Cisco CCENT Network Diagrams Network diagrams are vital to be able to efficiently maintain a network. The icons represented on the slide are as follows: Represents a router. Represents a switch. Represents a server. Represents a PC. Represents a network cloud (i.e. the Internet). Cisco CCENT Benefits of a Network Networks allow users to share information and hardware resources (i.e. printers, cameras, etc.). Major resources that are shared are as follows: Network Storage: There are numerous types of network storage. Some of which are direct attached storage (DAS) which connects physical storage directly to a PC or shared server, network attached storage (NAS) which makes storage available through a special network appliance and storage area networks (SANs) which provide a network of storage devices. Data and Applications: Users can share files and software applications when connected via a network. This makes data easily available and promotes more efficient collaboration on work projects. Backup Devices: These devices are things such as tape drives that provide a central way of backing up (saving) files from multiple computers. This is great for archiving and planning for disaster recovery. Resources: Printers and cameras are just a couple of devices that can be shared on a network. Cisco CCENT User Applications Many user applications utilize a network to interact. Imagine life without email or a web browser to surf the Internet. Both of these user applications rely on the underlying network to operate. Cisco CCENT Impacts of Applications on the Network When considering the interaction between the network and applications that run on the network, bandwidth was historically not the main concern. 
Batch applications such as FTP, TFTP, and inventory updates would be initiated by a user, then run to completion by the software with no further direct human interaction. If it took a little longer for an FTP to complete, it was not that big a deal. Users could perform other tasks while waiting for the FTP to complete. Interactive and real-time applications such as VoIP and video applications involve human interaction. Because of the amount of information that is transmitted, bandwidth has become critical. In addition, because these applications are time-critical, latency (delay through the network) has become critical. Even variations in the amount of latency can affect the network. Not only is proper bandwidth mandatory, QoS is becoming more prevalent. VoIP and video applications must be given the highest priority. Cisco CCENT Characteristics of a Network Speed: Speed is a measure of how fast data is transmitted over the network. Another term to describe speed is data rate. Cost: Cost indicates the general cost of components, installation, and maintenance of the network. Security: Security indicates how secure the network is, including the data that is transmitted over the network. Availability: Availability is a measure of the probability that the network will be available for use when it is required. Scalability: Scalability indicates how well the network can accommodate more users and data transmission requirements. If a network is designed and optimized for just the current requirements, it can be very expensive and difficult to meet new needs when the network grows. Reliability: Reliability indicates the dependability of the components (routers, switches, PCs, and so on) that make up the network. This is often measured as a probability of failure, or mean time between failures (MTBF). Topology: In networks, there are two types of topologies: the physical topology, which is the arrangement of the cable, network devices, and end systems (PCs and servers), and the logical topology, which is the path that the data signals take through the physical topology. Cisco CCENT Internetworking Terms By default, switches break up collision domains. This is an Ethernet term used to describe a network scenario wherein one particular device sends a packet on a network segment, forcing every other device on that same segment to pay attention to it. At the same time, a different device tries to transmit, leading to a collision, after which both devices must retransmit, one at a time. Not good—very inefficient! This situation is typically found in a hub environment where each host segment connects to a hub that represents only one collision domain and only one broadcast domain. By contrast, each and every port on a switch represents its own collision domain. Routers, by default, break up a broadcast domain, which is the set of all devices on a network segment that hear all broadcasts sent on that segment. Breaking up a broadcast domain is important because when a host or server sends a network broadcast, every device on the network must read and process that broadcast—unless you’ve got a router. Switches create separate collision domains, but a single broadcast domain. Routers provide a separate broadcast domain. Hence, collision domains are defined at layer 2 and broadcast domains are defined at layer 3. Cisco CCENT What is a LAN A Local Area Network (LAN) is a network comprised of devices that are located relatively close together (i.e. within a building). 
This can range from a home office with two computers to a business with hundreds or even thousands of computers. Cisco CCENT Common LAN Components PC: Computers serve as end points in the network, sending and receiving data. Interconnections: Devices that allow data to travel from one place to another within a network. An example of an Interconnection is a Network Interface Card (NIC). Hub: Network device operating at Layer 1 of the OSI reference model, used for port aggregation. Hubs have been replaced by switches in modern networks. Bridge: Early implementation of a switch with fewer ports. Segments the network into different collision domains. Switch: Like hubs and bridges, a switch is an aggregation point for network devices. Ethernet Switches are essentially multiport bridges with more intelligence. They operate at Layer 2 of the OSI reference model. Router: Provides a means to connect different LAN segments. Routers operate at Layer 3 of the OSI reference model. Cisco CCENT Shared LANs with Hubs A hub is really a multiple-port repeater. A repeater receives a digital signal and re-amplifies or regenerates that signal, and then forwards the digital signal out all active ports without looking at any data. An active hub does the same thing. Any digital signal received from a segment on a hub port is regenerated or re-amplified and transmitted out all ports on the hub. This means all devices plugged into a hub are in the same collision domain as well as in the same broadcast domain. Cisco CCENT Early Solution Utilizing Transparent Bridges Prior to switches, bridges were utilized to segment networks and reduce the number of hosts within a collision domain. Bridges read each frame as it passes through the network. The layer-2 device then puts the source hardware address in a filter table and keeps track of which port the frame was received on. This information (logged in the bridge’s or switch’s filter table) is what helps the machine determine the location of the specific sending device. Cisco CCENT Switch (Multi-Port Bridge) Ethernet switches are used to create a physical star topology. This means that the network is physically connected in the center, as shown in the diagram on the slide. The logical bus means the signal must run from the beginning of a network segment to the end, and everyone on that segment must listen to the signal on the bus. Switches break up these segments into smaller logical buses. Cisco CCENT Router Here are some points about routers: Routers, by default, will not forward any broadcast or multicast packets. Routers use the logical address in a Network layer header to determine the next hop router to forward the packet to. Routers can use access lists, created by an administrator, to control security on the types of packets that are allowed to enter or exit an interface. Routers can provide layer-2 bridging functions if needed and can simultaneously route through the same interface. Layer-3 devices (routers in this case) provide connections between virtual LANs (VLANs). Routers can provide quality of service (QoS) for specific types of network traffic.
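As a rough illustration of the Layer-3 ideas above (this is not part of the original study guide), the sketch below shows how a host applies its subnet mask to decide whether a destination address is inside its own network, and therefore its own broadcast domain, or must be handed to a router. The addresses and mask are made up for the example:
#include <stdio.h>
#include <arpa/inet.h>

/* Two addresses share a subnet when (address & mask) matches for both. */
static int same_subnet(const char *a, const char *b, const char *mask)
{
    struct in_addr ia, ib, im;
    inet_pton(AF_INET, a, &ia);
    inet_pton(AF_INET, b, &ib);
    inet_pton(AF_INET, mask, &im);
    return (ia.s_addr & im.s_addr) == (ib.s_addr & im.s_addr);
}

int main(void)
{
    const char *local = "192.168.1.10";
    const char *dest1 = "192.168.1.200";  /* same LAN               */
    const char *dest2 = "10.0.0.5";       /* reachable via a router */
    const char *mask  = "255.255.255.0";

    printf("%s -> %s: %s\n", local, dest1,
           same_subnet(local, dest1, mask) ? "deliver directly" : "send to gateway");
    printf("%s -> %s: %s\n", local, dest2,
           same_subnet(local, dest2, mask) ? "deliver directly" : "send to gateway");
    return 0;
}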
<urn:uuid:e9a263d7-c86f-48ee-b7ee-b54a932fd007>
CC-MAIN-2017-04
https://www.certificationkits.com/cisco-certification/ccent-640-822-icnd1-exam-study-guide/cisco-ccent-icnd1-640-822-exam-certification-guide/cisco-ccent-icnd1-internetworking-and-security-part-i/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00108-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923103
2,132
3.1875
3
There are a variety of ways in which websites and public-facing computer systems can be attacked by hacktivists, and attacks on websites continue to be a popular form of political demonstration. In December 2010, around 36 Pakistani government websites were hacked by an online hacker group called the Indian Cyber Army. All hosted on the same server, the sites that were hacked included the Pakistan Army, the Ministry of Foreign Affairs, Ministry of Education and the Ministry of Finance. The attacks consisted of graphics and political messages inserted into the web pages, some of which related to the attacks in Mumbai. Also in December 2010 a number of financial payment websites were subject to denial of service attacks by hacktivists disgruntled at these companies no longer processing payments to the WikiLeaks website. For commercial websites that trade across the internet, this can be catastrophic and is the equivalent of having all their real-life stores closed down in one go. Denial of service attacks can range in their level of sophistication from destruction of physical internet connection points through to the flooding of websites with extraneous data that overwhelms web servers, forcing them to close down. This is similar to blocking the switchboard of a business with lots of phone calls that are terminated as soon as they are picked up, but uses the TCP/IP protocol that runs the internet to flood servers with bogus messages. These attacks can be coordinated using hijacked networks of computers, called botnets, which, in turn, are forced to send high levels of spurious data to target websites. There are steps that designers can take to mitigate such attacks but, in reality, a significant attack can be difficult to manage, and often the best course of action is to take down the servers and hope the attackers go away. More sinister is a malware threat that emerged in 2010 called Stuxnet. Researchers had been aware of this malware for many months, but it hit the media headlines when reports emerged of Stuxnet finding its way into Iranian nuclear plants. Excellent investigation by Symantec has enabled us to see inside this malware and understand how it works. The malware was apparently written to target industrial control systems such as those used in manufacturing and processing plants. Its ultimate aim is to reprogram control systems by modifying computer code on programmable logic controllers, or PLCs, in such a way that plant operators would never suspect anything was wrong. In contrast to a denial of service attack that is extremely noisy, Stuxnet is a very clever and covert attack. Bundled with the Stuxnet malware is a whole arsenal of additional components designed to assist in this control system attack, including zero-day exploits, antivirus evasion and a Windows rootkit, an advanced form of malware. So why bother to mess with PLCs? In fact Stuxnet only affects specific PLCs controlling electric motors that run at special high speeds and frequencies. These are only available from two specified companies and the attack will only be initiated if there are at least 33 of these devices present. The majority of Stuxnet infections were found in Iran and these devices are regulated for export by the United States Nuclear Regulatory Commission as they can be used in centrifuges used for uranium enrichment. Yes, the implication is that Stuxnet is a powerful piece of malware created to disrupt the enrichment of uranium by the Iranian government. 
Clearly this advanced malware has not been developed by a back-bedroom hacker, as it needed very specific insight into the workings of complex industrial control systems. This is a high watermark in terms of malware, and evidence is starting to emerge that conventional cybercriminals are adapting Stuxnet for more conventional criminal activities. We have not seen the end of Stuxnet yet. Is your organisation a target? It could be argued that, in the great scheme of things, most businesses and organisations will never appear on a cyberterrorist’s radar, as the type of work they do is not one that attracts attention from such people. On the other hand it could be argued that every person and organisation is a target for cybercriminals, so a reasoned, objective risk assessment should always be undertaken to gauge a likely risk profile. This must include all aspects of a business, including the supply chain, employee travel, executive profiles, nature of the business and, of course, the ever-changing worldwide geopolitical situation. This risk assessment needs to be continuous and fully integrated into the decision-making process of the leadership team. Informing this risk assessment must be intelligence gained and shared with colleagues, industry communities and the authorities, ensuring a two-way flow of up-to-date, actionable and relevant information. Policies and procedures need to be built that encompass this risk assessment, and it is vital that a converged approach is taken, such that information security experts work with physical security experts to develop plans and skills to manage a cyberterrorist attack. These attacks will rarely come from nowhere and the sharing of skills and information is vital. Employees are often in the front line against cyberterrorists, as their day-to-day activities are often subject to reconnaissance and investigation from potential attackers. Phishing emails, social engineering phone calls and strange conversations are just some of the indicators that an organisation is being scoped for attack. These users must be educated about the importance of both physical and information security, supporting a converged approach, in their day-to-day jobs and have a means to raise their concerns in an open way that supports these reports and avoids any embarrassment if a genuine report turns out to be a false alarm. Finally, organisations and businesses need to be doing their job, focusing on delivering value, products and services to their clients and shareholders. In support of this it makes complete sense to work with expert third parties that can take on a lot of the risk management work, freeing up the business to do what it does best. Over these 3 articles we have seen that the internet is awash with threats to organisations and individuals, but it is also an amazing force for good in the world, supporting commerce and the freer flow of information. Inevitably criminals, rogue states and terrorists will see the internet as an ideal tool in their armoury but, by taking some reasonable precautionary steps, many of these threats can be significantly reduced. Symantec. Stuxnet: A Breakthrough. Available at http://www.symantec.com/connect/blogs/stuxnet-breakthrough Last accessed 9th December 2010.
<urn:uuid:7f4f73f3-e9fd-4dd7-a003-b68d3c69f474>
CC-MAIN-2017-04
http://www.bloorresearch.com/analysis/cybercrime-cyberwars-cyberterrorism-and-hacktivism-p3/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00346-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963392
1,310
3
3
It’s a scary world that we live in. Posting to social media is a choice that some people make. Even if they are comfortable with the privacy settings they have configured for their account, there is little stopping someone from resharing the post to a much wider audience. When the content of that post is of something inconsequential, it has little impact. As an individual, you can choose what you post about yourself with concerns that it might reach a wider audience. If somebody else posts something embarrassing about you or just more than you would rather share, it seems unfair. That same consideration does not seem to be granted to children. Things that hit the Internet have a way of sticking around. It is not unreasonable to say that a Facebook post or viral video on YouTube could last until the focus of the post becomes a victim of bullying in grade school. It might take that long for the child to be aware of their unwanted fame but parents might find out much sooner. Kaspersky had an excellent article on tips for parents following viral videos. Close comments to prevent ill-intentioned trolls, set privacy settings to your intended audience only, and think of the consequences. The problem is not limited to videos. German police issued an appeal to parents to stop posting photos of their children. Facebook may implement facial recognition to warn people when they are posting photos of minors publicly. Many people were upset when VTech was compromised, releasing the names, email addresses, passwords, and home addresses of almost 5 million parents and over 200,000 children’s first names, genders, and birthdays. All of this information helps attackers pull off scams. With believable information, more people are likely to be victims. While people were upset that VTech’s Kid Connect service was compromised, many post that same information publicly to social media. All of this oversharing has gone too far. Privacy settings are complicated but necessary to understand if you are going to use social media. Oversharing might allow someone to create a complete profile on a victim. If we thought the popular security question asking your mother’s maiden name was a weakness for our generation, the next generation is going to be completely known and transparent. Review privacy settings and avoid sharing more information than is necessary. A photo post wishing happy birthday could reveal name, birthday, and appearance. That is why I say that a child’s privacy should be treated like their credit. Most minors do not have a credit history. Experian states that if a child has a credit report, one of three things happened: You have applied for credit in their names and the applications were approved. You have added them as authorized users or joint account holders on one or more of your accounts. Or, someone has fraudulently used their information to apply for credit and they are already identity theft victims. Unlike the free annual credit report, there is no free privacy report. If everything goes as planned, a person has their own credit to make or ruin. Likewise, it should be up to them whether they shed or hold onto their privacy. They won’t have anything to blame you for and can decide for themselves what level of exposure they would want. This also might mean standing up to grandparents and aunts/uncles about sharing so much information but it’s your job to protect your kids, right?
<urn:uuid:2285a4a8-2e8f-4714-8384-c88c70ac24b9>
CC-MAIN-2017-04
https://www.404techsupport.com/2015/12/childs-privacy-credit/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00098-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966941
687
2.625
3
GPS is a godsend for people with absolutely no natural sense of direction. But sometimes, these people-tracking satellite systems don't work so well in a dense city, or worse--when you go anywhere indoors. We've seen some indoor positioning systems that require you to wear an additional device or even use the Earth's magnetic field in conjunction with your phone's compass. DARPA, however, is developing an indoor GPS system that uses the same trusted, tested tech in your phone, except on a chip that's smaller than a penny. DARPA calls its prototype chip a Timing and Inertial Measurement Unit (TIMU). Although the chip itself measures just 10 cubic millimeters in size, it's packed with three gyroscopes and three accelerometers, as well as an internal clock. Your smartphone has these same sensors too; the major difference with this chip, though, is that each of these sensors is 50 micrometers thin (almost as thin as a human hair) and they are stacked on top of each other, so it's a way smaller package. The device is designed to start working as soon as you lose your regular GPS signal. From that starting point, the gyroscopes and accelerometers keep track of how far away you are--as well as in which direction you have moved--from that initial point. DARPA is developing the technology to help keep track of troops fighting in urban and indoor situations where satellite tracking may not work. Since the device is so small, we imagine that the TIMU could eventually end up inside our mobile devices because it's pretty easy to get lost indoors, too. I mean, have you ever been to a MegaMall?
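As a hedged illustration of the dead-reckoning idea described above (this is not DARPA's code, and it is greatly simplified to one axis with made-up sensor readings), integrating acceleration twice from the last known GPS fix yields an estimated position offset:
#include <stdio.h>

int main(void)
{
    /* Hypothetical accelerometer samples (m/s^2), one every 0.1 s,
       taken after the GPS signal is lost. */
    const double accel[] = { 0.0, 0.5, 0.5, 0.2, 0.0, -0.3, -0.3, 0.0 };
    const double dt = 0.1;     /* sample interval in seconds           */
    double velocity = 0.0;     /* assume we start at rest              */
    double position = 0.0;     /* metres travelled since the last fix  */

    for (size_t i = 0; i < sizeof accel / sizeof accel[0]; ++i) {
        velocity += accel[i] * dt;   /* first integration: accel -> velocity  */
        position += velocity * dt;   /* second integration: velocity -> pos   */
    }
    printf("Estimated offset from last GPS fix: %.3f m\n", position);
    return 0;
}
A real TIMU fuses three gyroscopes and three accelerometers and must correct for sensor drift, none of which is modeled in this toy sketch.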
<urn:uuid:73a0dc06-815b-473f-ac9b-23b11de2d6da>
CC-MAIN-2017-04
http://www.computerworld.com/article/2496648/emerging-technology/darpa-makes-an-indoor-gps-chip-that-s-smaller-than-a-penny.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963681
346
3.109375
3
5.1.4 What is IPSec? The Internet Engineering Task Force (IETF)'s IP Security Protocol (IPSec) working group is defining a set of specifications for cryptographically-based authentication, integrity, and confidentiality services at the IP datagram layer. IPSec is intended to be the future standard for secure communications on the Internet, but is already the de facto standard. The IPSec group's results comprise a basis for interoperably secured host-to-host pipes, encapsulated tunnels, and Virtual Private Networks (VPNs), thus providing protection for client protocols residing above the IP layer. The protocol formats for IPSec's Authentication Header (AH) and IP Encapsulating Security Payload (ESP) are independent of the cryptographic algorithm, although certain algorithm sets are specified as mandatory for support in the interest of interoperability. Similarly, multiple algorithms are supported for key management purposes (establishing session keys for traffic protection), within IPSec's IKE framework. The home page of the working group is located at http://www.ietf.org/html.charters/wg-dir.html. This site contains links to relevant RFC documents and Internet-Drafts.
<urn:uuid:07f4b3bc-d2a8-4df1-8da4-fec4f9718112>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/ipsec.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.864744
245
3.34375
3
Quality of Life - Background (November 30, 2016) QLI: Background Research. Gangsterism is defined as belonging to a gang that brings fear and violence to various communities. Certain communities in New York live with the fear and danger of encountering a dangerous gang that can harm people not only physically but also mentally. Children and teens who are exposed to the violence of these gangs, in particular, can end up with PTSD (Post Traumatic Stress Disorder) from such events. The effects gangs have on a community include increased levels of crime, violence and even murder. Gang members themselves suffer long-term consequences, such as dropping out of school, unemployment, abuse of drugs and alcohol and even jail time (Friedrichs). The origins of gangs trace back to the 1800s. The main elements that fueled their rise were the poverty and immigration in New York City. There was also disorder in city slums and tenements, where people were unemployed and lacked the skills to find a job (Howell & Moore). Discrimination against immigrants also played a part, because different cultures would end up in conflict with each other, which contributed to the rise of gangs. This problem persists because of the number of people who live in poverty without an education or a job to support them. The communities they live in are often not in the best shape, making them more susceptible to gang involvement. Areas in New York have different rates of crime based on the number of murders, robberies, and assaults that were reported. For example, parts of Manhattan such as Midtown, Midtown South and Hudson Yards are some of the areas with the highest crime rates, while most parts, if not all, of Staten Island have the lowest crime rates (SIPA). Gangs affect the well-being of people in different types of communities largely through their negative impact on the youth. Perhaps some young people don't really have a place to fit in or belong to, but find it when they join a gang and share the same desires as its members. Some gangs also have conflicts and rivalries, which bring violence not only to them but also to bystanders in the community. "Reports of gang-related homicides are concentrated mostly in the largest cities in the United States, where there are long-standing and persistent gang problems and a greater number of documented gang members—most of whom are identified by law enforcement as young adults" (J. Howell). According to this statement, the youth are an easy target for gang involvement and therefore bring more problems to areas where gang activity is high. They can easily be influenced to join a gang, thereby increasing the possibility of dropping out of school, victimization, committing crimes, drug use and even teen parenthood. This affects not only the lives of the youth but also the lives of their peers, parents and the community overall, which becomes a less safe environment due to gang activity (youth.gov). This issue is ultimately important because of the well-being and safety of the people in different communities. 
Kids should not be afraid of going to the park or walking the block in fear of encountering a gang, and parents shouldn't have to stress over how else they can keep their children safe, worrying about them joining a gang or, even worse, becoming a victim of one. "Gangs provide an environment in which deviant and illegal behaviors are learned and improved upon, and techniques for avoiding detection are learned" (Carlie). However, if gang activity were reduced, communities would have a chance to rise, feel safe and work without fear. No kid should ever be traumatized because of this, because of the violence gangs bring and their victimizing of innocent people who have nothing to do with it. With fewer gangs, there would be fewer murders, assaults and robberies, and less fear among the people in the community. Some programs that were created to deal with this issue are Safe Alternatives and Violence Education ("SAVE") and the Adolescent Social Action Program ("ASAP"). SAVE is an education curriculum to raise awareness among teens from ages 10-17 who are carrying weapons. If a student is found doing so, he or she gets a referral to SAVE in order to learn the real dangers of carrying a weapon. This program helps promote a safer school environment by discouraging students from carrying weapons that could lead to violence. ASAP is a program where "peer resistance" and "decision making training" are used to increase self-responsibility and life skills for youth participants. It helps address the high-risk behaviors that could lead them to using drugs or even becoming part of a gang (Juvenile Justice Bulletin). To help with the issue, the New York City government is helping fund an anti-violence initiative where former gang members team up to "fight gangsterism by example". These former gang members are known as Gangstas Making Astronomical Community Change. The government of NYC is trying a different approach to dealing with violence: instead of flooding areas with high gang activity with police, it is using former gang members as the antibodies to the disease of violence and crime (Walshe). Friedrichs, Matt. "Gangs: Problems and Answers." EDGE. Accessed November 29, 2016. Carlie, Michael. "Why Be Concerned About Gangs." Into The Abyss. Accessed November 29, 2016. "Violence in the Community." Juvenile Justice Bulletin. Accessed November 29, 2016. "Gang Involvement Prevention." Youth Topics. Accessed November 29, 2016. Howell, James. "The Impact of Gangs in Communities." NYGC Bulletin. Accessed November 29, 2016. Walshe, Sadhbh. "NYC launches anti-violence initiative to fight gangsterism by example." Accessed November 29, 2016. Moore, John. "History of Street Gangs in the United States." NGCB. Accessed November 29, 2016. State of New Yorkers – "A Well-Being Index." SIPA. Accessed November 29, 2016.
<urn:uuid:81757a4d-7c44-404e-a25c-72fc6ec61772>
CC-MAIN-2017-04
https://docs.com/alexis-vergara/5614/quality-of-life-background
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00219-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955975
1,577
3.140625
3
Photography is all about choice: Where do you focus? How much depth of field do you want? What’s your perspective? When the shutter closed in the past, you were stuck with your choices. But not any more. A new type of camera technology, called light field, makes it possible to refocus a picture, change the perspective and see a part of the picture that had been previously obscured – after the picture is captured. The world’s first light field camera is made by a Silicon Valley company called Lytro, and you can buy it right now at a number of retailers including Best Buy, Target and Amazon. You may have heard about it a few years ago, but the technology has matured, it's now available for iPhones and other iOS devices and it can do more than in the past. The Lytro camera uses an array of tiny lenses to capture many light rays so the image it creates has much more information in it than a picture snapped by a conventional digital camera. After the picture has been taken and processed by software embedded in the camera and in accompanying applications, it can be viewed by anyone, whether or not the person has the special software; the image itself contains all the required digital information. So if I wanted to share a picture I took with the Lytro camera, I could post it to Lytro’s site, email a link to a friend, and then they could view and manipulate it. My friend could also post it to Facebook or Twitter or embed it on a website, as I did with the picture above. You can mouse over my image, click on the faces of the women to see how the focus changes or use the controls to change the perspective. Changing the focus of a picture after the fact is impressive and it makes sense, but the camera’s ability to “see” an object that’s partially obscured by something in front of it seems counterintuitive - until it’s explained. If you remember your physics lessons, you know that light curves, so some of the light from an object travels around whatever is in front of it and reaches the camera. The camera doesn't give you Superman’s x-ray vision, but if someone is standing behind a tree or other object, and they're not completely blocked, you’re able to see it in a Lytro picture. Changing the perspective also lets the photographer give pictures very different points of view. Lytro’s founder Ren Ng developed the technology while earning his PhD in Computer Science at Stanford. Lytro CEO Jason Rosenthal told me to "Think of Lytro as Moore's Law meets photography. We're taking components of cameras like lenses and optics and image sensors and replacing them with software and computation to build better and more powerful cameras than the world has ever seen.” (You can read my interview with Rosenthal here.) A 16GB version of the camera sells for $399, and the retail price for the 32GB version is $499. The software is free and runs on Macs, PCs and iOS devices. Images courtesy of Lytro
<urn:uuid:44181ca5-7be6-4f7c-a5d3-12739f745daf>
CC-MAIN-2017-04
http://www.cio.com/article/2370231/consumer-technology/lytro--the-digital-camera-that-lets-you-shoot-first--focus-later.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966524
638
2.625
3
Keyboard shortcuts are the key to productivity and efficiency. This is true for any machine, any OS and any application. This article lists shortcuts I personally find very helpful and use often. In selecting what to include I focused on shortcuts that might be less well known. Many shortcuts work in a very similar way in applications other than the ones described. Please do try them out in all your most frequently used applications. CTRL+T opens a new tab while CTRL+N opens a new window. CTRL+F4 closes the current tab. But what if you got ahead of yourself in closing that tab and need it back? CTRL+SHIFT+T reopens it for you. This can be a lifesaver! [Chrome, Edge, Firefox, IE] In Chrome you can even bring back an entire browser window with all its tabs by pressing CTRL+SHIFT+T in one of the remaining windows. If you closed the last window, Chrome reopens the tabs by default, of course, when you start it the next time. [Chrome] CTRL+TAB switches through tabs in almost any application (with CTRL+SHIFT+TAB going the other way). With many tabs open getting to the desired tab can take a while, though. Modern browsers provide a welcome shortcut: CTRL+1 activates the first tab, CTRL+2 the second, and so on, with CTRL+9 activating the last tab. [Chrome, Edge, Firefox, IE] CTRL+L gets you to the address bar. This is the quickest way to change the URL or enter a new address. [Chrome, Edge, Firefox, IE] Searching the Current Page When looking for specific information by far the fastest solution is to search the currently displayed page using the browser’s search functionality. Ease of use varies between browsers (I prefer Chrome’s search UI), but in all major browsers CTRL+F brings up the page search dialog. [Chrome, Edge, Firefox, IE] To navigate search results: F3 jumps to the next search result, SHIFT+F3 jumps to the previous search result. [Chrome, Firefox] Zoom In or Out There are many cases where a website’s text is either too large or too small. No problem: CTRL+<plus key> enlarges the content (zooms in), CTRL+<minus key> zooms out and CTRL+0 restores the original zoom factor of 100%. [Chrome, Edge, Firefox, IE] These days browsers hide their settings menus in order to maximize screen real estate for the website content. The quickest way to get to the settings menus is to simply press ALT. In Chrome follow with the down cursor key, in Firefox and IE navigate with the left/right cursor keys. In Chrome you can alternatively press ALT+E to bring up the settings. [Chrome, Firefox, IE] Google Mail and many other Web Apps The following keyboard shortcuts originate from Gmail but made their way into many other web apps, too. Try them out in your app and you might be surprised how much easier it suddenly is to use. To search, simply press the forward slash (/). The question mark (?) brings up a help dialog (a list of the keyboard shortcuts). “gl” navigates to a label of your choice. “gi” brings you to your inbox. “gc” switches to contacts. “c” (create) brings up the new email dialog, but in the same browser tab. SHIFT+C opens the new email dialog in a new window. “1” goes to the first column, “2” to the second, and so on. Task Switching with Cursor Keys You probably know that ALT+TAB switches to the next window while ALT+SHIFT+TAB goes back. Doing that repeatedly to cycle through a large number of open applications becomes tiresome very quickly. The process becomes much faster when you press and hold ALT+TAB and then use the cursor keys to navigate the list of open windows. 
Especially the up/down keys speed things up significantly. To search the current page press CTRL+F. To search all pages press CTRL+E. Linking to URLs I won’t bore you with the well known shortcuts for marking text as bold (CTRL+B) or italic (CTRL+I). However, links are a different matter. You should not put a link’s full URL into your text. Instead, do what is common on web pages and attach the link to a word in your text, hiding the (usually) ugly URL. To do that simply highlight the word(s) you want to convert into a link and press CTRL+K to bring up a dialog to enter the link’s URL. [Gmail, Microsoft Office and many other applications]
<urn:uuid:d9710c28-a5cc-4af0-9fcc-f7af695ef2c2>
CC-MAIN-2017-04
https://helgeklein.com/blog/2016/11/keyboard-shortcuts-youll-never-want-miss/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.85854
1,026
2.578125
3
Fiber optic splitter, also named fiber optic coupler or beam splitter, is a device that can distribute the optical signal (or power) from one fiber among two or more fibers. Fiber optic splitter is different from WDM (Wavelength Division Multiplexing) technology. WDM divides light of different wavelengths into different channels, but a fiber optic splitter divides the light power and sends it to different channels. Optical splitters “split” the input optical signal that they receive between two optical outputs simultaneously, in a pre-specified ratio such as 90:10 or 80:20. The most common type of fiber optic splitter splits the output evenly, with half the signal going to one leg of the output and half going to the other. It is possible to get splitters that use a different split ratio, putting a larger amount of the signal to one side of the splitter than the other. Splitters are identified with a number that represents the signal division, such as 50/50 if the split is even, or 80/20 if 80% of the signal goes to one side and only 20% to the other. Some types of fiber optic splitter are actually able to work in either direction. This means that if the device is installed in one way, it acts as a splitter and divides the incoming signal into two parts, sending out two separate outputs. If it is installed in reverse, it acts as a coupler, taking two incoming signals and combining them into a single output. Not every fiber optic splitter can be used this way, but those that can are labeled as reversible or as coupler/splitters. Fiber optic splitters can be divided into active and passive devices. The difference between active and passive couplers is that a passive coupler redistributes the optical signal without optical-to-electrical conversion. Active couplers are electronic devices that split or combine the signal electrically and use fiber optic detectors and sources for input and output. Passive splitters play an important role in FTTH (Fiber To The Home) networks by permitting a single PON (Passive Optical Network) network interface to be shared among many subscribers. Splitters include no electronics and use no power. They are the network components that put the “passive” in a passive optical network, and they are available in a wide range of split ratios, including 1:8, 1:16, and 1:32. Optical splitters are available in configurations from 1×2 to 1×64, such as 1:8, 1:16, and 1:32. There are two basic technologies for building passive optical network splitters: the Fused Biconical Taper (FBT) splitter and the Planar Lightwave Circuit (PLC) splitter. The FBT coupler is an older technology and generally introduces more loss than the newer PLC splitter, but both are used in PON networks. Here is a brief introduction to them. FBT coupler is a traditional technology with which fiber optic products can be made in a low-cost but high-performance way. As this technology has developed over time, the quality of FBT splitters is good and they can be implemented in a cost-effective manner. Today FBT splitters are widely used in passive networks, especially where the split configuration is relatively small, such as 1×2, 1×4, 2×2, etc. The following is a FBT splitter with ABS box. PLC splitter offers a better solution for applications where larger split configurations are required. It uses an optical splitter chip to divide the incoming signal into multiple outputs. Because PLC splitters are so widely used, there are various types of PLC splitter on the market. 
For example, there are blockless PLC splitters, fanout PLC splitters, bare PLC splitters, tray type PLC splitters, ABS PLC splitters, mini plug-in type PLC splitters, etc. Here is a 1×4 PLC splitter. By enabling a single fiber interface to be shared among many subscribers, fiber optic splitters play an increasingly significant role in many of today’s optical networks. As a professional optical products supplier, Fiberstore offers different types of high-quality splitters for your applications. If you want to know more details, please visit FS.COM.
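For a sense of scale, an ideal 1×N even splitter divides the input power N ways, which corresponds to a theoretical splitting loss of 10*log10(N) dB per leg; uneven splits follow the same formula applied to each leg's fraction of the power. The sketch below computes those theoretical figures only (real FBT and PLC splitters add excess and insertion loss on top; compile with -lm):
#include <stdio.h>
#include <math.h>

/* Loss in dB for a leg that receives `fraction` of the input power. */
static double split_loss_db(double fraction)
{
    return -10.0 * log10(fraction);
}

int main(void)
{
    int even[] = { 2, 4, 8, 16, 32, 64 };
    for (size_t i = 0; i < sizeof even / sizeof even[0]; ++i)
        printf("1x%-2d even split: %5.2f dB per leg\n",
               even[i], split_loss_db(1.0 / even[i]));

    printf("90/10 split: %5.2f dB and %5.2f dB\n",
           split_loss_db(0.90), split_loss_db(0.10));
    printf("80/20 split: %5.2f dB and %5.2f dB\n",
           split_loss_db(0.80), split_loss_db(0.20));
    return 0;
}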
<urn:uuid:6fc93699-178d-4806-9878-e5ebc0ce2528>
CC-MAIN-2017-04
http://www.fs.com/blog/common-passive-fiber-optical-splitters.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00421-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932745
907
3.84375
4
The Search for Earhart is Reignited / March 20, 2012 On July 2, 1937, Amelia Earhart and her navigator Fred Noonan left the Territory of New Guinea (now Papua New Guinea) en route to Howland Island in the South Pacific -- but they never arrived. At an event held this morning in Washington, D.C., U.S. Secretary of State Hillary Rodham Clinton spoke with historians and scientists from The International Group for Historic Aircraft Recovery (TIGHAR) about where the aviator may have gone missing over the South Pacific 75 years ago, the Associated Press (AP) reports. According to AP, enhanced analysis of a photograph taken just months after Earhart’s Lockheed Electra plane vanished shows what experts think may be the landing gear of the aircraft protruding from the waters off the remote island of Nikumaroro, in what is now the Pacific nation of Kiribati. Shown above is Earhart with her Lockheed Electra. Photo courtesy of the Smithsonian Institution
<urn:uuid:cc7eb0a5-c775-4b44-ae71-cc4f162e9166>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Search-for-Earhart-is-Reignited-03202012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899372
208
2.953125
3
Building custom controllers using the now iconic Arduino open source computer platform is no longer the province of uber-geeks given you can now buy an Arduino Uno kit from RadioShack for $35. And it's this kind of ubiquity that has created all sorts of variations on the Arduino platform. The problem that some people have found with many of the Arduino board designs is that they are too big (pretty funny when you consider that the Uno, for example, is only 2.7 inches by 2.1 inches). But the fact is when you're trying to squeeze a computer inside a toy or build a flight control system for a quadcopter, the standard Arduino may be a bit large. To solve this pressing technical challenge TinyCircuits, a firm that designs and markets, as its name suggests, tiny electronics, has just announced TinyDuino, a Kickstarter project to create "an Arduino compatible board in an ultra compact package. Imagine the possibilities of having the full power of an Arduino Uno in a size less than a quarter!" At $19.95 this minute open source computer is a steal and the company also offers a number of similarly tiny "shields," daughterboards that provide additional functionality such as USB and LED displays. As of this writing the project has raised $33,368 on a goal of $10,000 and there's still 14 days to go! That board too big? TinyCircuits also has the TinyLily, which is the size of a dime and tough enough to be washed! It's designed for "e-textile" and wearable applications. Again, this computer is a steal at $9.95! Oh, you'd rather build a more powerful computer? How about your own supercomputer? For cheap? If so, I have just the thing you're looking for: the Parallella, a Kickstarter project from chip-maker Adepteva. If the Kickstarter project raises its target of $750,000 Adapteva will develop and sell the Parallella board ("A Supercomputer for Everyone"), which, at 3.4 inches by 2.1 inches, is slightly larger than the standard Arduino board, with a 16-core Epiphany-III running at 13GHz to produce 26 GFLOPS at a price of ... wait for it ... $99 each! Considering that just 12 years ago $1,000 per GFLOPS was a breakthrough, it's pretty amazing to think that the cost has dropped to 26 cents per GFLOPS! And as to whether the company can deliver, its street cred is good: Adapteva has been in the chip business for more than four years and reckons it has something like $4 million invested in the design of its Epiphany chips. Even better, the company's Epiphany-III 16-core 65nm processor has been in the field for almost a year. The Parallella board will ship with a Dual-core ARM A9 CPU running Ubuntu, 1GB RAM, a MicroSD Card slot, two USB 2.0 ports, two general purpose expansion connectors, 10/100/1000 Ethernet, and an HDMI port as well as the entire tool chain which Adapteva currently sells for several thousand dollars. But wait! There's more! If Adapteva reaches its Kickstarter "stretch goal" of $3 million the company plans to offer a board based on its Epiphany-IV chip with 64 cores that will run at 45GHz and deliver 90 GFLOPS for ... gasp ... $199! What is also really impressive is the Epiphany architecture achieves 72 GFLOPS per watt, exceeding the performance goal set by DARPA of 50 GFLOPS per watt by 2018. So far the Parallella project has raised $141,786 and it has 25 days to go. Just imagine what you might be able to build with this system! I've become a backer and you should too, because this is the sort of technology that will make a lot of tough computing problems much, much easier. 
Plus, I like the idea of having enormous power under my control.
<urn:uuid:18756a4e-17a1-4387-8894-8b9ba4af5192>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223256/software/tinyduino-and-parallella--kickstarter-projects-that-kick-computing-butt.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95758
836
2.53125
3
Learning how to write parallel and vector HPC programs will take a lot longer than five minutes. But with a series of five-minute videos introduced this week, Intel’s director and parallel programming evangelist James Reinders gives prospective parallel and vector programmers an introduction to the tools and techniques they’ll use to write code for the chip giant’s latest processors and coprocessors. The series will cover a different aspect of HPC programming every week for 12 weeks with the new series called The Five Minute Guide to Parallel and Vector Software Programming. New episodes will come out every Wednesday through the middle of August. In the first episode, which is available now, Reinders discusses Intel’s vision for programming Intel’s Xeon processors and Xeon Phi coprocessors, the powerful new many integrated core chips that are making a big splash in the HPC community. New hardware always makes for big headlines, but for Reinders, the real story is Intel’s approach to the architecture as a whole, and the portability of skills and tools between X86 and the new Phi family. “The most important [part of the story] is our vision in building the device,” he says in the first five-minute video, called “Coding the Future: Intel’s Vision.” “Our vision was to be able to span from a few cores to many cores in different systems and use the same programming models, the same programming languages, the same tools, and techniques across these. “So whether you’re working on an Atom-based machine,” Reinders continues, “or a Xeon-based server or workstation, or whether you’re moving all the way up to Intel Xeon Phi coprocessors and computation capability, that you’re able to preserve the learning, the tools, the methods, the language and stay standards-based. And we’ve been quite successful with this.” The second five minute guide, titled “Vectorization using Intel Cilk Plus Array Notation in C++/C,” comes out this Wednesday. David MacKay, Intel Software Development Products, will demonstrate how vectorized software using an array notation coding style that generates SIMD operations can yield big computational boosts on Xeon Phi coprocessors compared to scalar code. Next Wednesday’s five minute guide will be called “Vectorization with Pragmas in Fortran and C++/C.” It will be followed by: - “Data alignment for effective vectorization in Fortran and C++/C” on June 26; - “Faster math performance with Intel Math Kernel Library” on July 3; - “Automatic offload with Intel Math Kernel Library” on July 10; - “Threading with OpenMP” on July 17; - “Simplified threading with Intel Cilk Plus” on July 24; - “Threading with Intel Threading Building Blocks (when Intel Cilk Plus isn’t enough)” on July 31; - “Performance analysis with Intel VTune Amplifier XE” on August 7; - “Distributed Computing with Intel MPI Library” on August 14; - and “Balancing MPI Applications” on August 21. You can view the first five minute guide and read more details about the future five minute guides at http://tci.taborcommunications.com/l/21812/2013-05-03/2bc3.
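As a rough sketch of the two styles the first guides cover (this is not code from the videos themselves), the example below writes the same SAXPY kernel with Cilk Plus array notation and with an OpenMP 4.0 "omp simd" pragma. The HAVE_CILK_PLUS guard is a made-up macro you would define only when building with a Cilk Plus-capable compiler such as Intel's; the pragma form needs a compiler with OpenMP 4.0 SIMD support and is otherwise ignored.
#include <stddef.h>

/* SAXPY written with Cilk Plus whole-array section notation. */
void saxpy_array_notation(size_t n, float a, const float *x, float *y)
{
#ifdef HAVE_CILK_PLUS            /* hypothetical guard; define it yourself */
    y[0:n] = a * x[0:n] + y[0:n];
#else
    for (size_t i = 0; i < n; ++i)   /* scalar fallback */
        y[i] = a * x[i] + y[i];
#endif
}

/* The same kernel vectorized with a pragma instead of array notation. */
void saxpy_pragma(size_t n, float a, const float *restrict x, float *restrict y)
{
#pragma omp simd
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
Either way, the point of the series is the same: the compiler, not hand-written intrinsics, generates the SIMD instructions for Xeon and Xeon Phi.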
Set wide characters in memory

Synopsis:
#include <wchar.h>
wchar_t * wmemset( wchar_t * ws, wchar_t wc, size_t n );

Arguments:
- ws: A pointer to the memory that you want to set.
- wc: The value that you want to store in each wide character.
- n: The number of wide characters to set.

Library:
libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The wmemset() function fills n wide characters starting at ws with the value wc. The wmemset() function is locale-independent and treats all wchar_t values identically, even if they're null or invalid wide characters.

Returns:
A pointer to the destination buffer (i.e., the same pointer as ws).

Last modified: 2014-06-24
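A minimal usage sketch (my own example, not part of the reference page):

    #include <stdio.h>
    #include <wchar.h>

    int main(void) {
        wchar_t buf[16];

        /* Fill the first 15 wide characters with L'x', then terminate. */
        wmemset(buf, L'x', 15);
        buf[15] = L'\0';

        wprintf(L"%ls\n", buf);   /* prints: xxxxxxxxxxxxxxx */
        return 0;
    }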
“Standards allow different layers to evolve independently and therefore faster and better.” - Sir Tim Berners-Lee, Founder and Director of the W3C

The World Wide Web is built upon standards, at every level. Because of Web standards, users can access information from many different browsers on a variety of platforms and operating systems, and organizations can choose from a diverse set of servers and databases to host content. This arrangement has worked well for more than twenty years, encouraging innovation and competition, as long as the information being served is intended to flow freely and without restriction.

However, if content owners want to control access to their information through encryption and authentication, the interoperability of the Web breaks down. Decrypting content for secure viewing has always required a separate, proprietary viewer or plug-in, because there is no Web standard for rights management at the file level.

Opinions differ on the wisdom and feasibility of implementing DRM for consumer content on the Web (although the channeling of rights-managed content into closed platforms such as Amazon Kindle and Apple iBooks has rendered opposition somewhat futile). But the absence of a security standard in the enterprise has created a vacuum in which a single private corporation, namely Microsoft, could come to dominate the creation, distribution and consumption of information end to end.

Imagine a scenario in which the entire document workflow of the Fortune 1000 is beholden to a single vendor for its authoring applications, user identity management, decryption keys, usage data and other private and proprietary information. Aside from wiping out entire categories of cloud storage platforms and security vendors, a private monopoly in rights management technology would prevent its customers from choosing among workflow and server technologies, and put their data at risk of a potentially catastrophic security breach if any vulnerabilities were exploited. The vast majority of encrypted documents would be secured with a single proprietary method.

Further, such a monopoly would threaten the fundamental framework of the Web, which was originally designed to enable servers and clients on different platforms, from different vendors, to speak to each other in a common language. Without a published standard for handling encrypted documents on the Web, it becomes possible, even necessary, to limit usage to a single viewer or editor: the one sold by the same vendor of the productivity suite, identity framework and encryption tools.

The Mirage of Cloud Security

How could we be at risk of a rights-management monopoly, when the cloud era has disrupted incumbent vendors and the dominant players now encourage integration via open APIs? In his 2013 blog post “The Rise of The Cloud Stack,” Aaron Levie of Box proclaimed: “From this diversity of systems, customers will get choice, and with this choice we’ll see better applications and solutions emerge. At a lower cost, and at a higher quality.” The cloud stack was supposed to free us from single-vendor “ecosystems” so we could migrate between increasingly sophisticated platforms. But as we will see, the lack of a standard document security protocol has instead produced a Babel-like city of secure towers that don’t speak the same language, and therefore can’t interoperate.
First, let’s look at how cloud services handle user identity management, because any workable rights management scheme depends on user identity to determine whether or not to decrypt a file.

One key to the evolution of the cloud stack is the availability of what Gartner Group has named Identity and Access Management as a Service (IDaaS). Using an IDaaS service, a company can create a Single Sign-On (SSO) system spanning a variety of different, and often interconnected, cloud applications. With a single login, a user might update a record in Salesforce, triggering an alert in Slack, updating a spreadsheet in Google Docs and a report in Box.

From the perspective of a network administrator, these new workflows must also be auditable and their use controlled. So the “Access Management” part of IDaaS has become an important new element of the information security toolkit. In the example above, it is also critical that the administrator be able to revoke access to all the applications in that workflow at once.

In the current environment, the granularity of IDaaS is normally at the application level; that is, the IDaaS vendor controls access to the cloud provider itself, and the cloud provider manages access internally based on the identity provided. In some cases the cloud vendor may implement access control at the document level, but the identification and permissioning of documents is specific to that environment. There is no mechanism exposed by IDaaS vendors by which a document exported from Google Docs and imported into Box or Dropbox can be tracked or controlled as it moves from one environment to the next. Every cloud provider that offers some form of DRM is an island, and every protected document is trapped on that island.

RMS: Bundling Security and Identity

If there is no standard for document-level security in the cloud, what about behind the firewall, in the enterprise? There are multiple competing approaches to Enterprise Document Rights Management (EDRM), including the one developed by my company, FileOpen Systems. Each is functionally a silo; there is no standard for DRM interoperation.

Among the EDRM solutions, the largest (by installed base or market share) and best-known framework is Microsoft Rights Management Services (RMS). It can certainly be argued that RMS is the de facto standard for controlled distribution of MS Office documents; the question is whether it can or should be the basis of a more formal Web standard.

Microsoft RMS is a framework for DRM implemented natively in the Microsoft Office suite of products, and by third parties for some PDF viewers. The RMS software was introduced in 2003, evolving from and ultimately replacing an eBook DRM system, Microsoft Reader, which was introduced in 2000 and discontinued around 2007. Today RMS is predominantly, perhaps exclusively, focused on enterprise use cases and workflows. The functionality was originally bundled into Windows Server 2003 and remains part of the core Windows Server product. The RMS system is also tightly integrated with Microsoft's Active Directory product; indeed, it is formally named "Active Directory Rights Management Services" (AD RMS). This tight integration with Windows Server has impeded adoption by organizations not primarily using the Microsoft stack: RMS cannot be implemented directly on servers running Linux or other Unix derivatives.
Further, the reliance upon Active Directory has reduced the utility of the system for external document distribution, insofar as the Active Directory environment of the document creator and that of the document consumer are rarely integrated. Microsoft is now addressing this concern via the federation of Active Directory networks using Azure Active Directory (Azure AD) and the cloud version of RMS (Azure RMS). These cloud-based systems for identity management and DRM are intended, among other things, to simplify the process of distributing encrypted content outside the firewall.

Microsoft has also integrated the RMS functionality into a number of clients on desktop and mobile platforms, under the banner “Office everywhere, encryption everywhere.” This initiative addresses both document security in the MS Office suites and message security in the Microsoft mail clients, via S/MIME. RMS-protected documents can now be viewed on iOS in third-party viewer applications or, with Azure RMS, in the Microsoft Office applications for Mac, iOS and Android.

Support for PDF files in the RMS environment is implemented via partners, including Foxit, Nitro and Nuance, and in Adobe Acrobat/Reader via a plug-in from GigaTrust. RMS-encrypted PDF cannot be opened in the Microsoft Reader application, which is the default PDF handler on Windows 8.1 and later, nor by the embedded PDF viewer in Microsoft Edge (or any other browser).

Standard or Stranglehold?

The presence of the RMS client in the primary Microsoft Office applications enables workflows in which effectively any user can distribute encrypted content to any other user, without requiring either to install new software. No other DRM system can make that claim. And while this situation is reminiscent of Microsoft’s bundling of Internet Explorer with Windows, which was ultimately sanctioned by the US courts, the integration of Microsoft’s DRM into Microsoft’s Office applications has not been the subject of any antitrust action. So it is likely that the bundling will continue, and will continue to be an advantage for Microsoft in the enterprise, as long as Office remains the primary vehicle for document creation and collaboration.

Similarly, the bundling of RMS with Windows Server and Active Directory is not optional, as the RMS functionality depends upon proprietary logic in the Microsoft server stack. Use of RMS with any alternative webserver is not supported. We assume that Microsoft tightly integrated the RMS client and server functionality to increase the security and reliability of the system rather than as a competitive strategy, but the result is indistinguishable. There is no set of APIs and no licensing framework that would permit an independent vendor to create an alternative to the Microsoft implementation of RMS in Windows Server, or a cloud service that could replace Azure RMS.

While the original on-premises version of RMS required Active Directory for identity provision, the Azure RMS product offers the option of a federated identity mechanism, meaning that other IDaaS vendors can be integrated into an RMS workflow. However, by design, all DRM interactions involving the Azure RMS client/server exchange require a token from the Azure RMS server. So while an alternative identity provider may be provisioned, there is no mechanism in either RMS or Azure RMS for that identity provider to also manage the permissions on the document. In short, the RMS client and server cannot be decoupled.
As a result of these Microsoft design choices, it is not possible for an entity that chooses not to use Microsoft Server to implement RMS inside the firewall, nor is it possible for an entity using Azure RMS to operate with full independence from Microsoft. An IDaaS vendor can provide primary identity management, but cannot independently provide access control at the document level, as this capability necessarily involves sharing user and client data with Microsoft.

An Alternative: Standards-Based DRM

Microsoft’s approach is fundamentally at odds with the core design of the World Wide Web, which is premised on the loose coupling of client and server functionality. It is, for example, inconceivable today that a browser vendor would attempt to require use of a particular, proprietary server in order to implement SSL. An entity publishing data on the World Wide Web today can exert complete control over the code used at the webserver, to the point of writing that code anew in any language and running it on any hardware, provided that the result properly implements the required protocols, all of which are public.

It follows that any attempt to create a standard system for DRM on the World Wide Web would, at a minimum, need to support this degree of client/server independence. However, DRM systems are rarely designed around open interfaces, at least partly due to a tendency to assume that the most secure system is one in which the developer controls both ends of the client/server interaction. Moreover, DRM differs from standard cryptographic implementations in that one side of the interaction, the client, is not necessarily a willing participant in the secrecy of the conversation, and indeed may be an adversary. So a completely open source DRM system is likely to be impossible.

What can work, as demonstrated by the implementation chosen for the W3C Encrypted Media Extensions (EME), is the combination of a compiled binary client library and an open interface to an arbitrary key management system. EME is now embedded in multiple applications and platforms, and allows content distributors to manage protected video files using one or a combination of key management and delivery systems (Google Widevine, Apple FairPlay, Microsoft PlayReady, Adobe Primetime, etc.). (A minimal sketch of what such a decoupled rights check might look like appears at the end of this section.)

The FileOpen Rights Management Layer

The original design of the FileOpen DRM software, first articulated in 1997, described the same general architecture. FileOpen rights management was explicitly designed to work with any external identity provider, and to permit the entity encrypting documents to specify all relevant metadata (e.g. document identifiers, encryption keys). In the company’s early years, the only way to implement FileOpen software was to develop a custom server to manage customer and document identities and to implement the business logic governing usage. Today, commercial "PermissionServer" implementations are available in .NET and PHP, and multiple cloud services are built around the FileOpen Client and its published protocol.

The architecture of FileOpen rights management is such that it is not tied to any particular document format or viewing application. This differs markedly from Microsoft’s approach, which developed RMS for its own Office applications and has demonstrated a clear bias toward those applications with respect to competing tools (e.g. LibreOffice) and formats (e.g. PDF).
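To make the loose-coupling point concrete, here is a minimal C sketch of a decoupled rights check seen from the client side: the viewer knows nothing about the identity provider or the server implementation, it simply posts a document ID and a user token to whatever permission server the publisher operates and acts on the answer. The endpoint, field names and ALLOW/DENY wire format are entirely hypothetical; this is not FileOpen’s actual protocol or any vendor’s API, just an illustration using the widely available libcurl library:

    /* Hypothetical decoupled rights check. The client library is generic;
     * the publisher runs whatever permission server it likes. */
    #include <curl/curl.h>
    #include <stdio.h>
    #include <string.h>

    /* Collect the (tiny) response body into a fixed 256-byte buffer. */
    static size_t collect(char *data, size_t size, size_t nmemb, void *userp) {
        char *buf = (char *)userp;
        size_t n = size * nmemb;
        size_t used = strlen(buf);
        if (n > 255 - used)
            n = 255 - used;          /* clamp; a real client would grow the buffer */
        memcpy(buf + used, data, n);
        buf[used + n] = '\0';
        return size * nmemb;         /* tell libcurl the data was consumed */
    }

    /* Returns 1 if the publisher's server grants access, 0 otherwise. */
    int check_permission(const char *server_url, const char *doc_id,
                         const char *user_token) {
        char body[256], response[256] = "";
        int allowed = 0;
        CURL *curl = curl_easy_init();
        if (!curl)
            return 0;

        snprintf(body, sizeof body, "doc=%s&token=%s", doc_id, user_token);
        curl_easy_setopt(curl, CURLOPT_URL, server_url);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, response);

        if (curl_easy_perform(curl) == CURLE_OK)
            allowed = (strncmp(response, "ALLOW", 5) == 0);

        curl_easy_cleanup(curl);
        return allowed;   /* key release and decryption would be gated on this */
    }

The design point is that nothing in this client cares whether the server is .NET, PHP or anything else, which is exactly the client/server independence the Web's other protocols already enjoy.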
We believe that decoupling the rights management protocol from the document format and viewer vendor enables organizations to share documents securely with the broadest set of users, inside and outside the firewall. It also protects publishers and document creators from a potentially costly “lock-in” to the format vendor, who would otherwise control the keys to encrypted documents and store private data about their end users.

Moreover, because the FileOpen software does not rely upon any particular identity provider, it can be implemented natively by any IDaaS vendor. All such implementations are independent and private, in the sense that information about users, documents and permissions is fully controlled by and visible only to that vendor. Interoperation between IDaaS implementations, or between IDaaS vendors and cloud application providers, or between different cloud application providers, is likewise possible without the direct involvement of FileOpen Systems, using the published interfaces of the system.

While FileOpen rights management is not yet a standard, it was designed and continues to be developed around the principles that underlie Web standards. In fact, one of the first and largest sectors to standardize on the FileOpen software is the community of Standards Developing Organizations (SDOs; e.g. ANSI, ASME, Afnor, DIN). Ultimately, the FileOpen software was designed not to be an end-to-end secure publishing system, or even a complete system for DRM, but a flexible component that can be integrated to extend the functionality of such systems: one thread in a larger, interconnected Web.

Contact us for a demo of FileOpen Rights Management for Office