Every year, more children of all ages go online to study, have fun and communicate with the world at large. The Internet is becoming an ever more integral part of our children's lives, yet most are ill-equipped to protect themselves online. According to the National Center for Missing & Exploited Children's (NCMEC) June 2000 report, Online Victimization: A Report on the Nation's Youth, in one year approximately one in five young people who use the Internet regularly received an unwanted sexual solicitation or approach, and one in four encountered unwanted pornography. The risks of the Internet pose a clear and immediate danger to families and children.

Utah Attorney General Mark Shurtleff has taken an active position on educating Utah's children. "Prosecuting online predators is only half the battle," said Shurtleff, who also supervises Utah's Internet Crimes Against Children Task Force. "Our best chance of protecting Utah's children is to teach them how to avoid being a victim in the first place." Shurtleff's idea was to incorporate Internet education into the Utah school system. His solution was to come to NetSmartz for help.

Made possible by a public-private partnership with Congress, the U.S. Department of Justice's Office of Juvenile Justice and Delinquency Prevention, the NCMEC, and Boys & Girls Clubs of America, the NetSmartz Workshop provides original, animated characters and age-appropriate, interactive activities using the latest 3-D and Web technologies to entertain while educating. Through partnerships with Computer Associates, HP and Cox Communications, NetSmartz stays current with the latest business and Web trends.

What is NetSmartz?

In the late 1990s, Boys & Girls Clubs of America launched Operation Connect, a multifaceted and comprehensive effort to bridge the digital divide between children who have access to the Internet and those who do not, and bring the latest technologies to Boys & Girls Clubs nationwide. With the rollout of new computers and computer labs in Boys & Girls Clubs all over the country, a large percentage of Club members were being exposed to the Internet for the first time, so Internet safety became a major concern. Club directors felt it was imperative that their kids be empowered to protect themselves online. It was natural, then, that when Boys & Girls Clubs of America sought to develop state-of-the-art educational content about online safety in 1999, they turned to the NCMEC, which has worked to make children safer since it was established in 1984.

The NetSmartz Workshop was created specifically to extend children's safety awareness, prevent victimization and increase self-confidence whenever they go online. NetSmartz goals include: enhancing children's abilities to recognize dangers on the Internet; enhancing children's abilities to understand that people they first meet on the Internet should never be considered their friends; encouraging children to report victimization to a trusted adult; supporting and enhancing community education efforts; and increasing communication between adults and children about online safety. Boys & Girls Clubs leaders and children played vital roles in shaping program content and characters, ensuring that NetSmartz messages were on target and characters appealed to the respective age groups.
The NetSmartz activities, designed for ages 5 to 7, 8 to 12, and 13 and older, combine the newest technologies and the most current information to create high-impact educational activities that are well received by even the most tech-savvy kids of any age group. The NetSmartz Web site is a great resource for kids, teens, parents and educators. Kids can play games and activities while learning Internet safety from NetSmartz characters. Teens can view "real-life" stories to learn from other teens' experiences with online dangers.

"Who's Your Friend on the Internet?" is one activity designed to show children that you don't always know who you are talking to online. In this game show, there are three curtains with a contestant behind each one. Each contestant is asked to describe himself or herself. Once the children hear all three voices, they must decide who sounds the most trustworthy. At the end, the children find out that all three contestants, even the ones that sounded like children, turned out to be "WizzyWigs." This stands for "What you see isn't always what you get," because on the Web, you never really know who you are talking to. "Julie's Journey" is the story of a 13-year-old girl who left home for three weeks with a convicted murderer she developed a relationship with online.

Adults can access materials to help them teach their children or students online safety skills. NetSmartz uses the Web and its streaming technologies to deliver content more efficiently and effectively. The Web has more potential than television for reaching children on a number of levels. Viewers will tune in as they would to a television show, but the show will include the interactivity the Web provides.

Importance of Private Partnerships

What makes the NetSmartz Workshop unique is its use of the latest 3-D software and hardware, content management software, and Web-based solutions. It is also available to the public at no cost. But with a small staff and a budget based on federal appropriations and charity, private partnerships are a must to achieve organizational goals. The strategy at NetSmartz is to present itself not only as a charity, but also as a partner with incentives. Since NetSmartz is a nonprofit organization, it can't produce revenue for its partners, but it does provide good PR.

HP donated $1.5 million in computer equipment, including 3-D workstations and media servers for NetSmartz's artists and Web technicians. Compaq Presario model computers include a desktop icon that takes parents and children directly to the NetSmartz Web site. Cox Communications recently partnered with the NetSmartz Workshop by providing $1 million in airtime to run NetSmartz public-service announcements (PSAs) on Cox cable networks. The NetSmartz PSAs will run on networks such as Disney and the Cartoon Network. The parent PSA, which is aimed at getting parents to take a more active role in their child's online activities, will run on networks viewed primarily by adults, such as ESPN, CNN and CNBC. Computer Associates, which had already been providing the NCMEC with solutions for its mission of finding missing children and preventing child sexual exploitation, donated eTrust Security Suite, CleverPath business portal and Unicenter Web traffic analyzer software to NetSmartz. NetSmartz uses the CleverPath Portal with the iMarkup Solutions iMarkup Server v5 for an end-to-end document and content management solution.
Web traffic on the NetSmartz site is monitored using Unicenter Management for Web Servers integrated into the CleverPath Portal. More importantly, Computer Associates donates hundreds of free engineering and support hours for all of its products. Both the private and public organizations benefit from the partnerships. The private corporations gain great PR and customer referrals, and NetSmartz can more efficiently reach its goals with important business software and Web technology. The most important benefit these private corporations receive may be seeing their products help an important cause.

The Importance of State Partnerships

To fulfill its mission, NetSmartz strives to reach as many kids, parents and teachers as possible. Utah needed a program that was fun and educational, but also cost-effective and proven. Because the NetSmartz Workshop is free and had been tested in Boys & Girls Clubs across the country, it was a perfect fit, and the partnership between Utah and NetSmartz made both of those goals a reality.

After the partnership with Utah, elected officials across the nation began to express interest in NetSmartz. The National Association of Attorneys General requested a presentation at its annual conference in February 2003. Shurtleff challenged other attorneys general to adopt NetSmartz in their states as well. On Feb. 18, 2004, New Hampshire Attorney General Kelly Ayotte and New Hampshire Gov. Craig Benson announced plans to implement NetSmartz in their state. Students will complete NetSmartz activities and receive a computer-generated certificate. Other states continue to follow the example of Utah and New Hampshire. As of January 2004, Arizona, Arkansas, Colorado, Florida, Indiana, Maine, Missouri, New York, Texas and Wyoming had initiated partnerships with NetSmartz.

The Perfect E-Government Model

The NetSmartz Workshop is a unique addition to the world of e-government. The fact that a 3-D animation studio could be such a useful tool in aiding states and law enforcement in the war against online sexual predators is a great indication that current Web and business technologies are dramatically changing the way government and nonprofit organizations manage operations. Using the latest business software technologies, NetSmartz can more efficiently manage and develop its content under strict deadlines and tight budgets. Using the latest Web technologies, NetSmartz can stream content to families and schools across the country from an office in Alexandria, Va.

Most importantly, none of this could be achieved without the help of private partnerships. The technology and exposure provided by these corporations allow NetSmartz to achieve its goals. In return, private corporations see their products, designed specifically for e-business, being used to help state, local and federal governments combat Internet dangers and keep kids safer online. It is truly a win-win relationship for everyone involved.
For some time, we have been aware of our lengthening average lifespan, or life expectancy. In the last 100 years, average life expectancy has almost doubled, from 30-45 years to 67. If it doubles again in the next 100 years, it will reach about 130. However, the rate of increase has not been linear; it is an acceleration curve and continues to grow faster. If that acceleration continues, average lifespan could be approaching 1,000 years a hundred years from now.

The article below suggested (in 2007) that people "40 years or younger can expect to live for centuries" and that the first human to live 1,000 years is already alive today. Assuming that the eventual 1,000-year-old is an infant, that still projects a steep gap in expected lifespan between today's 40-year-old and that infant. If that trend does anything but reverse itself, the 1,000-year-old will also have a chance at immortality. Today's infant is born with a life expectancy of about 80 years, but by the time they are approaching 80, the average life expectancy may have grown to 500 years or more. And by the time they have lived another 80 years, the life expectancy could be "indefinite."

The First Person Who Will Live to Be 1000 Is Alive Right Now! – [blisstree.com]

According to the latest immortality research (oh, it is a field), the possibility of a person making it to their first millennium is not only possible – it's almost guaranteed that such a person is already alive right now. Of course, philosophical debates are raging, but everyone agrees that perhaps something more reasonable – say, five additional years on the old lifespan – would be totally acceptable. But according to Aubrey de Grey, the spokesperson for the anti-aging movement, the moral debates are futile: "Whether they realise it or not, barring accidents and suicide, most people now 40 years or younger can expect to live for centuries."

BBC | Nov 26, 2010 | More about this programme: http://www.bbc.co.uk/programmes/b00wgq0l

Hans Rosling's famous lectures combine enormous quantities of public data with a sports commentator's style to reveal the story of the world's past, present and future development. Now he explores stats in a way he has never done before – using augmented reality animation. In this spectacular section of 'The Joy of Stats' he tells the story of the world in 200 countries over 200 years using 120,000 numbers – in just four minutes. Plotting life expectancy against income for every country since 1810, Hans shows how the world we live in is radically different from the world most of us imagine.
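As a rough illustration of the arithmetic behind the doubling argument above, the toy projection below simply doubles life expectancy each century from the post's starting figures; it is not a demographic model, and the post itself argues that the real rate would accelerate rather than merely double.

```python
# Toy projection of the post's simplest scenario: life expectancy doubling
# every 100 years from roughly today's figure.  Illustrative arithmetic only.
life_expectancy = 80   # years; the post's figure for today's infant
year = 2007            # year the quoted article appeared

for _ in range(4):
    year += 100
    life_expectancy *= 2
    print(year, life_expectancy)

# Prints 2107 160, 2207 320, 2307 640, 2407 1280 -- plain doubling takes
# centuries to pass 1,000, which is why the post leans on an accelerating curve.
```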
Reducing Online Banking Fraud with Stronger Authentication Methods

Account fraud is frequently the result of single-factor (e.g., ID/password) authentication exploitation. As a result, the FFIEC is now urging financial institutions to deploy multi-factor authentication and assess the adequacy of their authentication techniques in light of new or changing risks such as phishing, pharming, malware, and the evolving sophistication of compromise techniques. The guidelines are definitely a step in the right direction. However, guidelines are just guidelines, and a bank's goal should be secure online banking. Consider this: the appendix to the FFIEC guidelines lists one-time password scratch cards as a means of stronger authentication. However, phishers have already successfully attacked a bank that uses that system, forcing a 12-hour shutdown of its online banking service.

Financial institutions should strive to provide their customers with a consistent, secure process of authentication that minimizes potential avenues of attack, especially attack vectors beyond the control of either the user or the bank. Understanding the types of attacks that can occur is a requirement for deciding what authentication mechanisms are needed. Two main attack vectors are discussed here: man-in-the-middle attacks and malware.

The vast majority of attacks are man-in-the-middle (MITM) attacks. Phishing uses email calls to action to lure users to fake MITM websites. DNS-cache poisoning attacks a DNS server somewhere between the user's computer and the server to misdirect users to a fraudulent website.

Malware is malicious software that captures and forwards private information such as IDs, passwords, account numbers, and PINs. Keystroke loggers record keystrokes and send them back to the author for later use. Many activate only when a user types in specific information, such as a bank site URL. Time-bound, one-time passcodes thwart keystroke loggers, as they would be used or expired before the attacker gets them. Session hijackers run inside an SSL session and perform nefarious transactions. Session hijackers are particularly tricky in that they work after session and mutual authentication have been completed. They are why many pundits have suggested that two-factor authentication won't stop online fraud. These pundits miss an important point: the server can ask for a second one-time passcode to validate the transaction. It is key, however, that the transactional authentication method be distinct from the session authentication method, or the attacker will just generate a "Connection Lost" error message, prompt the user to log in again and use that OTP for the fraudulent transaction.

An important question to answer in determining what type of authentication to use is: what exactly do you want to authenticate? Most people think of authentication as validating the identity of the user for a session. We add to that: session authentication is validating the user to the site; mutual authentication adds validation of the site to the user; and transactional authentication is validating that it is the correct user requesting the transaction.

Strong session authentication is a base requirement for securing online banking, and it must include time-bound, one-time-use passcodes. MITM attacks can be automated to a high degree. For example, a fraudulent site could accept a time-bound one-time passcode and immediately use it to log into the bank within the time allowed.
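As a hedged sketch of how such a time-bound, one-time passcode can be generated, the snippet below uses the standard HMAC-based construction; the 60-second window, six digits and the shared secret are illustrative assumptions, not any particular bank's or vendor's scheme.

```python
import hashlib
import hmac
import struct
import time

def time_bound_otp(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Derive a passcode that is only valid for the current time window.

    A keystroke logger that captures the code gains little, because the
    code expires when the window rolls over (the property described above).
    """
    counter = int(time.time()) // interval              # current time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the user's token and the bank's server derive the same code from the
# shared secret, so the server can verify it without the code being reusable.
print(time_bound_otp(b"per-user-shared-secret"))
```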
Only strong mutual authentication can stop MITM attacks. Mutual authentication is really site authentication to the user combined with user authentication to the site. Site authentication is already provided by SSL. Unfortunately, many sites ask users to log in on non-SSL pages, and users rarely check SSL certificates for validity. Fraudulent websites can use self-issued SSL certificates to fool users, or generate a fake SSL padlock and position it over the spot where the browser normally displays it. SSL site authentication is clearly broken.

Some have suggested using unique images as a shared secret to identify a server before the user enters their password. One possible attack against this is that a MITM could replay the initial request, and any additional information from the user's computer, to the server and in turn present the user with the image. Also, if the mutual authentication method uses machine authentication as the primary mechanism and knowledge authentication as a backup, then all the MITM has to do is present the user with the questions asked by the site. Since there is a lack of consistency in the session authentication method, the mutual authentication method becomes suspect.

Another method, developed by WiKID Systems, uses a hash of the server certificate stored on the authentication server. When the user requests an OTP, the hash is also sent to the token client. Before presenting the user with the OTP, the token client fetches the certificate from the web site, hashes it and compares it to the retrieved hash. If the hashes match, the URL is presented as validated and the default browser is launched to that URL. This method leverages the security of, and investment in, SSL certificates and provides a consistent session and mutual authentication method to the user.

Even with both session and mutual authentication strengthened, a session-hijacking trojan could empty a bank account. For this reason, transaction authentication is recommended. Transactional authentication is equivalent to digitally signing a transaction, and it can be accomplished with an OTP. For example, when a user wishes to make a suspicious transaction, such as a one-time, large payment to a new payee, they should enter a second one-time passcode to validate the transaction. It is important, as mentioned previously, that the transactional authentication be cryptographically distinct from the session authentication mechanism, or the attacker will try to get the user to re-authenticate for the session. This requirement highlights a key difference between shared-secret systems and public-key systems. A public-key system can support multiple authentication servers with no reduction in security. One key pair can be used for sessions and another for transactions, or a user could have more than one key pair on separate devices. For example, they might have a session token on their PC and a transaction token on their cell phone.

Account fraud and identity theft are frequently the result of weak authentication. Although the complete mitigation of risk is unrealistic, financial institutions can effectively maintain the integrity of online banking with stronger authentication. The key threats are MITM attacks, keystroke loggers and session hijackers. By employing session, mutual and transactional authentication tools on the front end, web application security can be significantly improved.
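The certificate-hash check described above is only outlined at a high level in this article; the sketch below illustrates the general idea in Python. The function name, the SHA-256 choice and the control flow are assumptions made for illustration, not WiKID's actual implementation.

```python
import hashlib
import socket
import ssl

def site_matches_registered_hash(hostname: str, registered_hash: str,
                                 port: int = 443) -> bool:
    """Fetch the site's certificate and compare its hash to the value the
    authentication server handed down along with the OTP (illustrative only)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der_cert = tls.getpeercert(binary_form=True)   # certificate in DER form
    return hashlib.sha256(der_cert).hexdigest() == registered_hash

# A token client following this pattern would only display the OTP and launch
# the browser if the live certificate still hashes to the registered value.
```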
Using back-end fraud detection to catch potentially fraudulent transactions that slip through the front end can further reduce fraud. There is no reason why online banking cannot be as secure as credit card transactions. Fraud can be reduced to a level where it does not impact the average user and can be covered by insurance.

- Nick Owen is the Founder and Chief Executive Officer of WiKID Systems, Inc. www.wikidsystems.com
In September 2007, in a remote laboratory in Idaho, researchers began to show that that picture had begun to change, dramatically and irreversibly. Dubbed "Aurora," the researchers' project demonstrated the ability of a cyber hacker to destroy physical equipment—in this case a generator used to create electricity for the power grid. The Aurora research brought the question of physical safety and the ability for a nation to defend itself from attack in the cyber world to the forefront. For the next three years, this difficult discussion would largely remain just a discussion, contemplated, if passionately, in corners of Washington and at wonk-ish meetings across the U.S. The first dramatic images of a generator shaking and belching smoke were vivid enough to force the informed to begin to consider the implications of such an attack occurring in the real world. We began to envision scenarios of a broad-scale attack on U.S. infrastructure, with the potential to cause blackouts that could last for months, contaminate our water supply, and cause industrial disasters. Forget Facebook—we began to worry about our ability to keep the lights on. In 2010, along came the Stuxnet Worm, which took the hypothetical scenario extrapolated from the Aurora research and proved not only that it had been done, but also that it was released and traveling through cyberspace undetected. The worm carried with it all of the potential outcomes of Aurora to be triggered by a packaged-up set of autonomous code. Now the risk was real and it became very vivid. [Editor's note: Read the full text of Assante's Congressional testimony on Stuxnet (PDF, registration required).] For the first time in a public forum we could read about a real-world scenario with physical consequences playing out as a result of an attack from a remote computer. In our minds' eyes, the images of toxic vapor rising from a chemical processing plant or a series of explosions at power plants across the country began to crystallize. [Also see 4 things the Roman aqueducts can teach us about securing the power grid by Assante and Mark Weatherford] This new "face" of the cyber threat tears away at our notion of cyber security being confined to the "cyber" world. It elevates certain types of computer attacks to a higher-level of decision-making in a nation state and turns what was traditionally a law enforcement matter into one for the military and intelligence community. Before Aurora and Stuxnet, a leader could afford to ignore or to tolerate the majority of cyber attacks and choose to quietly conduct investigations and deal with longer-term efforts to raise awareness and develop more responsible and capable participants in the computer ecosystem. When we considered the cyber security threat, most of us could easily dismiss the headlines as routine. Viruses, identity theft, WikiLeaks, even large-scale financial scams are part of our every-day vernacular, understood as an unavoidable consequence of our life on the web. We all recognize these risks exist, the costs can be quite large, but, after all, we like our e-mail, we like Facebook, we like the convenience of immediate access to virtually everything. We still get in the car each morning to go to work. Cyber risk, is most often an invisible threat: unseen, often undetected, and absorbed by society as a necessary evil that comes along with the vast improvements made possible by the internet. 
Rarely do these threats occur in a way that scares us—even more rarely do they occur in a form for which we would consider government, let alone military, intervention necessary or appropriate. Even though the losses of information and monetary value are very real and ultimately have physical, "real-world" impacts, they lack the vividness that taps into the human perception of real danger. Certainly a President can recognize the negative circumstances and deplore the many acts that result in the theft of information or financial damages to organizations, but has not felt compelled to respond directly, in a public manner, using instruments of national power. The cyber effort could be left to professionals across the nation and dealt with in a less than real-time manner, not having the necessary gravity to call into question the confidence of a people in their leader.

Stuxnet has changed that precept and made the cyber threat a clear and present danger. Stuxnet has delivered to the President, and other leaders throughout the developed world, the possibility of being confronted with a cyber attack that would require a real-time response using the instruments of national power. In its release of the International Strategy for Cyberspace, the White House has clearly communicated its national security doctrine for cyber attacks that carry with them this recognizable danger to public safety. The document states, "Right of Self-Defense: Consistent with the United Nations Charter, states have an inherent right to self-defense that may be triggered by certain aggressive acts in cyberspace." The nation's defense objectives are clearly stated: "The United States will, along with other nations, encourage responsible behavior and oppose those who would seek to disrupt networks and systems, dissuading and deterring malicious actors, and reserving the right to defend these vital national assets as necessary and appropriate." This right to act in defense would also extend to friendly nations: "When warranted, the United States will respond to hostile acts in cyberspace as we would to any other threat to our country. All states possess an inherent right to self-defense, and we recognize that certain hostile acts conducted through cyberspace could compel actions under the commitments we have with our military treaty partners. We reserve the right to use all necessary means—diplomatic, informational, military, and economic—as appropriate and consistent with applicable international law, in order to defend our Nation, our allies, our partners, and our interests. In so doing, we will exhaust all options before military force whenever we can; will carefully weigh the costs and risks of action against the costs of inaction; and will act in a way that reflects our values and strengthens our legitimacy, seeking broad international support whenever possible."

The White House makes it clear that it is developing a strategy of deterrence and credible response that will rely on treating certain acts as law enforcement matters with real consequences for threat actors, and others as national security matters that may elicit a military response. This notion of military response to a cyber attack, to include the use of violence to defend our nation, is a direct result of the ramifications and dangers made clear and present by potential attacks like Stuxnet.
In addition to a clear policy of deterrence and military retaliation, these types of attacks justify strong, coherent and cohesive domestic policy to ensure these threats are adequately protected against throughout our critical infrastructures. Today, incentives are not appropriately aligned for business owners to make sound investment decisions with respect to these risks, resulting in under-secured systems and assets. Many call for a strong regulatory framework to be put in place for all critical infrastructures to lessen the likelihood of a successful attack and provide the ability to manage a Stuxnet-like attack. Regulation will ultimately be necessary, but I must share my recent experience with electric power system cybersecurity standards. These standards have polarized the industry and have imposed compliance requirements on a highly dynamic and not fully understood area of risk. The result has been a conscious and inevitable retreat to a compliance/checklist-focused approach to the security of the bulk power system. Regulation, although necessary, should be re-evaluated and designed to emphasize learning, enable the development of greater technical capabilities through more qualified staff, and discourage the creation of a predictable and static defense. This will take time and will not be an easy task.

This new reality will also require the clarification of emergency powers and authorities to respond to and defend against such attacks, potentially from within private networks. The mechanisms to enable this kind of action have been highly contested, and many legislative proposals have been tabled and left behind. The issue they grapple with was perhaps best highlighted by the chairman of the House Armed Services Committee, Rep. Howard P. "Buck" McKeon (R-Calif.), in his comments on the Rules of Engagement in cyberspace for the Defense Department: "because of the evolving nature of cyber warfare, there is a lack of historical precedent for what constitutes traditional military activities in cyberspace." [Also read George Hulme's If Stuxnet was an act of cyberwar, is the US ready for a response?]

As a nation, we have many questions to answer if we are to determine how to make the right decisions in response to cyber security threats or attacks, particularly those that might ultimately lead to the use of military force. What would demand such a response? Certainly it is a combination of factors, including our confidence in our understanding of who conducted the attack, why it was conducted, and what their future intentions and capabilities may be. But the decision will ultimately rest on the impacts and implications of the attack itself. The President will be faced with the need to decide whether constraining the nation's ability to produce or supply a given product for some period of time would qualify. This is no small matter when what is at stake is a life-sustaining drug in short supply, the loss of electricity to a major city or the contamination of a water source. We must also wrestle with our inability to prevent all actors operating from U.S. territory, or with links to our interests, from precipitating a justified response by another country that parrots our own policy. These are the scenarios that will need to be considered and developed into the strategy of both deterrence and credible responses.

With the advent of Stuxnet and Aurora, we have truly entered the "bad new world" of cyber security—a "bad new world" that now demands our attention at the highest level.
Unlike in the past, the headlines of today call into question our nation's ability to "provide for the common defense," and threaten the safety of our citizens and our way of life. This new face of cyber security is one that has vivid physical impacts, and it is no longer a movie script that requires the suspension of disbelief. The paradox of the matter is this: the risk we have learned to so easily dismiss may ultimately cost our society more than those attacks we are now driven to protect against. The average cost of cyber attacks to medium and large businesses is a concern, but the longer-term implication of the loss of intellectual capital is stunning. These real costs are a reminder that any national strategy needs to span the full spectrum of attacks, even though the cost incurred by the U.S. as a result of Stuxnet-like attacks to date is zero.

Such, however, is the nature of why and how man feels compelled to act. It is the dramatic events able to penetrate our armor of self-deception that get our attention. We are left to consider the difficult-to-assess consequences of all cyber attacks on our productivity, viability, competitiveness, and national security. We must develop doctrines that are flexible enough to deter a death that comes by a thousand cuts while rationally deterring more vivid attacks that directly impact public safety. We are left with the difficult imperative to shape a prudent defense against both the litany of attacks impacting our country's competitiveness and economic well-being and the now-illuminated specter of attacks that will result in physical damage. We would be wise to invest our efforts in developing highly technical and skilled cyber defenders and in finding ways to enable them. The bad new world should not deter us from deploying and driving the technology of tomorrow, nor should it tie the hands of our defenders with compliance-focused security programs. We must look to the future with eyes wide open, recognizing the vivid and less vivid implications of getting it wrong.

Michael Assante is President and CEO of the National Board of Information Security Examiners and former Chief Security Officer at the North American Electric Reliability Corporation (NERC).
Andrew Tanenbaum and his students just published a paper on the possibility of self-replicating RFID viruses (PDF). The paper is titled "Is Your Cat Infected with a Computer Virus?". MSNBC also has a story on this. RFID tags, as you may know, are small radio chips that can be placed on inanimate objects, animals or even humans. Once in place, a specialized reader can read the tag from tens of meters away. The technology can be used to track luggage at airports or to automate store checkout systems, among many other things. It's already quite common to tag family pets for easy identification (hence the title of the paper). The paper presents an attack where the tags carry a small amount of data (127 characters) that will infect the RFID reader. More precisely, they use an SQL injection attack against an Oracle database backend that interfaces with the reader. The reader will then continue to infect all new tags it sees. Luckily, this is currently only a proof-of-concept attack, even though it's a scary idea. As a side note, did you know that RFID tags are also used to fight the H5N1 avian influenza? I bet the clever people who thought of that never saw this one coming.
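The exploit works because the reader's middleware splices tag data directly into SQL statements. As a hedged illustration of the defensive side only (not the paper's code, and using Python's sqlite3 module as a stand-in for the Oracle back end the paper targets), parameter binding keeps a malicious tag payload from ever reaching the SQL parser:

```python
import sqlite3

def record_tag(conn: sqlite3.Connection, tag_payload: str) -> None:
    """Store data read from an RFID tag without letting it reach the SQL parser."""
    # Vulnerable pattern (roughly what the paper exploits): building the
    # statement by string concatenation, e.g.
    #   conn.execute(f"INSERT INTO tags (data) VALUES ('{tag_payload}')")
    # Parameter binding instead treats the payload as inert data:
    conn.execute("INSERT INTO tags (data) VALUES (?)", (tag_payload,))
    conn.commit()

# Minimal usage with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (data TEXT)")
record_tag(conn, "'); DROP TABLE tags; --")   # stored as plain text, not executed
```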
Upgrading your business's computers and mobile devices is essential for staying productive and keeping your data secure. But when you replace your hardware, what do you do with your castoffs? It's an important decision considering e-waste is a growing global environmental and health issue and current trashing bans for various types of electronics don't seem to be deterring anyone.

Last week Jean-Daniel Saphores, an applied economist at the University of California-Irvine, presented research regarding U.S. recycling rates at the annual meeting of the American Chemical Society in Indianapolis. He surveyed 3,156 U.S. households and asked them how they had disposed of junk cell phones and how they intended to get rid of unwanted TVs. At the time of his 2010 survey only California had legislation on the books regarding the disposal of cell phones and 13 states had laws that covered throwing away TVs. After looking at the disposal behavior of people across states, he found no difference in how people got rid of their gear between states with e-waste legislation and those without it. Essentially, he found that state legislation is worthless. "It opens the door to people driving from one state to another--not ordinary folks but some recyclers--to get money from say, California, where the goods were consumed elsewhere," he says.

The toxic truth

Electronic waste from the U.S. often ends up in developing countries where workers at scrap yards, some of whom are children, are exposed to hazardous chemicals and poisons while looking for valuable metals. Along with elements such as gold and copper, anything with a circuit board contains toxic substances, including lead, nickel, cadmium, mercury, brominated flame retardants (BFRs) or the chlorinated plastic polyvinyl chloride (PVC), all of which harm the environment. Jerry Powell, executive editor of Resource Recycling, says that of 1,352 e-scrap processing plants in the United States only 114 are certified by a non-profit called e-Stewards not to export overseas, dump or burn their waste. E-Stewards says only 11-14 percent of e-waste is sent to recyclers--the rest ends up in landfills or is burned, resulting in soil, water and air pollution. Of the e-waste sent to e-cyclers, 70-80 percent is exported to countries with lax environmental and labor regulations.

How to dispose of your old electronics

Saphores believes the solution is to make people pay a bit more when they buy electronics and give it back to them when they return the item to a store, just like people already do with car batteries. Unfortunately, doing so would take both political will and cooperation from manufacturers and retailers that may not want to complicate the consumer buying experience, he says. In the meantime, there's really no excuse for throwing your old PCs and other hardware away. Plenty of big tech brands offer local drop-off centers for old electronics, free shipping labels to send old tech gear back for recycling, or rewards for recycling such as coupons for discounts on future purchases. Here are several options--just make sure to verify that the manufacturer or retailer partners with e-Stewards-certified recyclers.

Apple offers gift cards for old Apple gear.

Best Buy will take back nearly all consumer electronics gear.

Canon runs several recycling programs online and with retail partners for its printer hardware, toner cartridges, and digital camera gear.
Dell's recycling program has 2,000 physical drop-off recycle centers and runs a mail-back recycling program for print supplies and hardware.

Hewlett-Packard runs several recycling programs for print supplies, PC hardware, cellphones, and batteries.

Samsung Electronics allows you to print a pre-paid postage label to send any old cellphone back to Samsung for recycling.

Otherwise, the Environmental Protection Agency runs an electronics donation and recycling site that offers links to resources. CEA, the consumer electronics trade association, also links to recyclers through its Greener Gadgets website. If your hardware still works, you can always sell it. In addition to Craigslist, scads of websites buy used equipment or offer trade-up programs, including Amazon, Best Buy, BuyMyTronics, eBay, Ecosquid, Gazelle, and Glyde. Just make sure to remove any data from a computer or mobile device before recycling or donating it.

This story, "Trashing Bans Not Reducing Office E-Waste," was originally published by PCWorld.
Research by McAfee and the Anti-Bullying Alliance has found that 14-15-year-old teenagers are most likely to adopt risk-taking behaviours and overshare online, putting themselves in potentially harmful situations and at risk of cyber-bullying. The research reveals that 14-15 year olds spend more time on social media than any other age group, with a fifth spending over four hours logged on every day. These teens risk making themselves more vulnerable to abusive and bullying behaviours by digitally exposing themselves through sharing too much personal information online: 11% had shared revealing videos or photos of themselves, 1 in 10 had seen an inappropriate, revealing or pornographic image of someone they know online and 7% admitted to “liking” an unkind image of someone they know.

With 14-15 year olds such prolific users of the internet, it is perhaps unsurprising that findings show this age group are putting themselves in potentially harmful situations by engaging in inappropriate behaviours online, and are most likely to access dangerous content, be exposed to cruel or mean behaviour and encounter unwelcome adult attention. Nearly a quarter (23%) of 14-15 year olds surveyed had seen a porn image online of someone they didn't know and 19% confessed to visiting a website that their parents would not approve of. In addition, over half of 14-15 year olds surveyed also confessed to hiding their online activity from parents, with nearly a quarter (24%) actively deleting their browsing history.

Findings also showed that children and young people clearly need help to understand what is and isn't appropriate behaviour online, and to recognise the potential consequences of their actions. Around 34% of respondents had witnessed cruel behaviour online, whilst 22% had been subjected to it themselves, half of whom admitted it left them feeling upset or angry. Fifteen percent had been on the receiving end of foul or abusive comments and 7% had been told they were fat or ugly. ‘Peer pressure’ was also most prevalent for this age group, with 19% of respondents admitting they had looked up sexual, violent and other inappropriate content due to pressure from friends or girlfriends/boyfriends. The same age group displayed a need to be guided on online etiquette and to clearly understand the difference between “banter” and bullying; only 23% were able to see that their cruel and abusive comments might be considered mean by the person on the receiving end, with the same number seeing these comments as “just banter”.

When it came to stranger danger, one in ten (11%) 14-15 year olds had been approached by an adult they did not know online. Disturbingly, nearly one third (32%) of those teens approached had then shared inappropriate things, such as pictures of themselves, with that stranger, which they later regretted. More worrying still, a fifth (20%) reported meeting that adult in person before realising the relationship was inappropriate.

McAfee cyber security expert Raj Samani commented: “Protecting your child online is an absolute minefield, with easy access to the net through smartphones, tablets and computers, parents need to strike a balance between social freedom and security for teens.
This report highlights the growing need for parents to have frank conversations with their children around threats online, net etiquette and the nature of cyber-bullying, as well as ensuring that household devices are as effectively secured as possible from questionable content.” Luke Roberts, National Coordinator of the Anti-Bullying Alliance said: “The digital world is one inhabited by most young people on a daily basis, yet they are clearly struggling to understand online etiquettes, what appropriate online behaviour is, or how to keep safe. Our findings highlight the dangers of digital exposure. They suggest that young people, particularly young teenagers, are displaying risk-taking behaviours and freely sharing information with what is essentially a global, and sometimes anonymous, mass audience, without grasping the permanence of these exchanges. “By making private information public property, young people are exposing themselves to comment and attention from others, without necessarily having the skills to deal with potential situations which might arise from these online interactions. As adults it is our responsibility to teach children and young people digital skills and set boundaries so they are able to realise the huge benefits and opportunities that the internet offers in terms of accessing information and making friends, but also ensures that they are safe and free from being bullied both online and offline.” The research was commissioned by McAfee and undertaken by Atomik Research in the UK. 1012 UK children between the ages of 10-17 and 1013 UK adults with at least one child aged between 10-17 were surveyed. The survey was conducted in October and November 2013.
Windows Safe Mode is a way of booting up your Windows operating system in order to run administrative and diagnostic tasks on your installation. When you boot into Safe Mode, the operating system loads only the bare minimum of software required for it to work. This mode of operating is designed to let you troubleshoot and run diagnostics on your computer. Windows Safe Mode loads a basic video driver, so your programs may look different than normal.

If you use a computer, read the newspaper, or watch the news, you will know about computer viruses or other malware. These are malicious programs that, once they infect your machine, will start causing havoc on your computer. What many people do not know is that there are many different types of infections that fall under the general category of malware.

Windows 7 hides certain files so that they cannot be seen when you are exploring the files on your computer. The files it hides are typically Windows 7 system files that, if tampered with, could cause problems with the proper operation of the computer. It is possible, though, for a user or a piece of software to make a file hidden by enabling the hidden attribute in a particular file or folder's properties. Because of this, it can be beneficial at times to be able to see any hidden files that may be on your computer. This tutorial will explain how to show all hidden files in Windows 7.

By default Windows hides certain files from being seen with Windows Explorer or My Computer. This is done to protect these files, which are usually system files, from accidentally being modified or deleted by the user. Unfortunately viruses, spyware, and hijackers often hide their files in this way, making it hard to find them and then delete them.

Windows Vista comes with a rich feature set of diagnostic and repair tools that you can use in the event that your computer is not operating correctly. These tools allow you to diagnose problems and repair them without having to boot into Windows. This provides much greater flexibility when it comes to fixing problems that you are not able to resolve normally. This guide focuses on using the Startup Repair utility to automatically fix problems starting Windows Vista. The tutorial will also provide a brief description of the advanced repair tools, with links to tutorials on how to use them.

HijackThis is a utility that produces a listing of certain settings found on your computer. HijackThis will scan your registry and various other files for entries that are similar to what a spyware or hijacker program would leave behind. Interpreting these results can be tricky, as there are many legitimate programs that are installed in your operating system in a manner similar to how hijackers get installed. Therefore you must use extreme caution when having HijackThis fix any problems. I cannot stress enough how important it is to follow the above warning.

To remove an app directly from your iPad, iPod Touch, or iPhone, press the icon on the device for the particular app you wish to delete until all of the icons on the screen start to wiggle. Once they are wiggling, you will also see a small symbol appear in the upper left-hand corner of each icon.

Before Windows was created, the most common operating system that ran on IBM PC compatibles was DOS. DOS stands for Disk Operating System and was what you would use if you had started your computer much like you do today with Windows.
The difference was that DOS was not a graphical operating system but rather purely textual. That meant that in order to run programs or manipulate the operating system you had to manually type in commands. When Windows was first created, it was actually a graphical user interface designed to make using the DOS operating system easier for a novice user. As time went on and newer versions of Windows were developed, DOS was finally phased out with Windows ME. Though the newer operating systems do not run on DOS, they do have something called the command prompt, which has a similar appearance to DOS. In this tutorial we will cover the basic commands and usage of the command prompt so that you feel comfortable using this resource.

Windows Vista has made it a little harder to find the Folder Options settings than previous versions did. The easiest way is to use the Folder Options control panel to modify how folders, and the files in them, are displayed. You can still show the Folder Options menu item while browsing a folder, but you will need to hold the ALT key for a few seconds and then let go to see this menu.

The iPad is ultimately a device created to let you consume content in an easy and portable manner. As there is no better source of consumable content than the Internet, being able to connect to a Wi-Fi network so you can access the Internet is a necessity. This guide will walk you through all of the steps required to connect to a Wi-Fi network using your iPad. We have also outlined steps that will allow you to access almost all types of Wi-Fi networks, as well as use proxy servers if your particular scenario requires it.
There are some pretty gnarly-looking YouTube videos showing California National Guard officers dropping water on the destructive Rim Fire that burned more than 257,000 acres in California in August 2013. On display in the videos is the kind of effort it took to quell the massive blazes, a combined effort by forces like the National Guard, the California Department of Forestry and Fire Protection (Cal Fire) and the U.S. Forest Service. In all, those entities poured at least 250,000 gallons of water or retardant on the blazes.

The videos show the result of the all-hazards and whole-community mentality that the guard has increasingly adopted since 9/11 and especially since Hurricane Katrina. The guard works alongside the California Emergency Management Agency in a state where threats of wildfires, floods and earthquakes are omnipresent. The guard's Joint Operations Center (JOC) near Sacramento is staffed 24/7, and on the day Emergency Management visited, staff members were tracking a system that turned out to be the devastating Typhoon Haiyan, which killed more than 6,000 people.

The JOC is a modern operations center, and guard personnel can drill down into areas affected by a potential disaster and obtain a great degree of situational awareness. For example, if there's an earthquake in the Bay Area, the guard can locate personnel in the area and within 15 minutes know which soldiers will and will not be recallable. "Google Earth allows us, with our layers and feeds that we leverage from Northcom [U.S. Northern Command in Colorado], existing relationships and mutual aid agreements, and pull up layers such as Caltrans to see what traffic is like," said Maj. Brandon Hill. "We can use these layers and the ability we have with personnel in the JOC to push someone in the area, whether it's [guardsmen] from out of state, local first responders or military."

"During the Rim Fire, you'd have seen this room fill up with our aviation assets, our Cal Fire partners and others," said Maj. Dan Bout. "We had Army aviation and Air National Guard aviation assets, including their liaison officers, right here at these stations providing information to us so the decision-makers can say, 'We need to put more assets on the south side of the fire' or whatever that incident commander from Cal Fire or the U.S. Forest Service needed."

During the Rim Fire last August, Black Hawk helicopters manned with guardsmen dropped 660-gallon buckets of water on the fire, something that the guard trains for regularly. That all-hazards training came after Katrina, when the guard realized it had to improve natural disaster response. Col. Wesley L. McClellan, deputy director of J-3 operations, said the biggest change that came from 9/11 and Katrina was training to support civil authorities. He said the training led to partnerships and helped "bridge federal-state planning efforts, promote mutual understanding and enhance unity of effort."

During an event, the guard will be on alert in the JOC until a mission-tasking request is made. Guardsmen track the event, do predictive analysis, maintain situational awareness and are in constant communication with partners in preparation for a formal request. "During the Rim Fire when we had all the state's fixed-wing and rotary-wing assets already committed, they recognized that gap, turned to the National Guard and said, 'We need X number of rotary-wing aircraft and so on,'" Bout said.
Tracking an event and maintaining situational awareness is key to being ready when the call comes. “They’re busy. We’re not calling them, saying, ‘Do you need us?’ We’re doing that predictive analysis and saying, ‘We think they’re going to run out of resources,’ which means we’re next in line to get a phone call for aviation assets, or soldiers and airmen to help out,” Bout said. “We have a close working relationship with the National Guard,” said Mark Ghilarducci, director of the California Office of Emergency Services. “I have liaisons here 24/7, and we share information on joint priorities. The National Guard provides support for all of the agencies, predominantly public safety, but depending on the situation, the National Guard is a force multiplier. They’re the governor’s army, so they are — through my office — tasked to do a multitude of support, whether it’s aircraft transporting people or getting boots on the ground.” There are more than 20,000 guardsmen in the state, most based in high-population areas like the Bay Area and Southern California. There are also smaller units, called armories, of about 120-150 personnel in some of the state’s less populated areas as part of more than 200 guard installations. The guard is prepositioned, physically and otherwise, to respond to most scenarios. “We have a lot of priority intelligence requirements based on seasons,” Hill said. “We’re entering a flood season, so there are different layers, such as river gauges and weather feeds, that we monitor.” In addition to blazes, the fire season also brings the second and third effects of flooding and mudslides, and the guard must be ready to respond to those. That’s where predictive analysis comes into play. “Instead of having a knee-jerk reaction, we know it’s coming based on what was happening in the JOC,” Bout said. Part of avoiding a knee-jerk reaction is getting “socialized” to any response that might be necessary. That means practice. Once a year, reservists drill to see if they’re up to a major response. They test everything, including their fitness, if radios work, if they’ll have food and water for three days, and if the administrative tasks are taken care of. “It’s not as simple as putting in a call because these are reservists,” Bout said. “That’s one thing California prides itself on. By practicing, coming up with a system and then vetting that system, we have the ability to respond that doesn’t exist in a lot of other National Guards.” In a state as diverse as California, the focus must be on all hazards, and the guard must be ready to respond to many possibilities on short notice. “If you’re on the East Coast, your predominant emergency is going to be a hurricane, where it’s all hands on deck, the disaster’s coming and you have advance notice,” said Hill. “In California the things we’re looking at are no-notice.” Hill said the response is similar to any guard response but more flexible and diverse. “We don’t want to be limited,” he said. “We have plans for every major catastrophe in California you can think of.” Redundancy is important too since the JOC is in a flood plain. “We even have our own internal plans if this building gets flooded. We have an alternate location in Fresno.” The signal to respond comes as a tasking request from the state’s Office of Emergency Services. The guard will have been monitoring the situation at the JOC, prepping to deploy assets. 
The smaller, 150-man units can be deployed within six hours and are generally moving within two or three hours of the request. “That’s six hours,” Hill said. “That doesn’t mean we sit in the JOC at three in the morning and wait. I have the authority at two in the morning to make that call to the company commander and say, ‘Alert, recall your forces to your armory.’ They’ll have six hours to marshal a certain number of personnel, vehicles, etc., and then depart. I can give initial guidance and say, ‘Get down to Oakland and link up with the EOC, here’s your point of contact, go.”’ That call may go out several ways — high-frequency radio, email, alert systems, the Everbridge notification system or by phone. The larger, 500-man units can be deployed in 12 hours, depending on the location in the state. Onsite, personnel are staged in a process called Joint Reception, Staging, Onward Movement and Integration (JRSOI). “Instead of flooding people, that’s what JRSOI prevents,” Bout said. “It’s a processing point and puts them in the theater and keeps track of them.” A tiered response will let a unit get to the site, establish command and control, and provide early eyes and ears on the situation. If guard and military personnel from other states are called in, they’ll all be under the management of a dual-status commander. Dual-status command was another initiative that came out of Katrina, where 70,000 soldiers showed up in force, but there was no chain of command. Each state now has a dual-status commander, who’s in charge of both National Guard and active-duty personnel. “We moved to the dual-status commander concept fast and that’s been a great thing,” Ghilarducci said. “We introduced a concept called the California military coordinating officer, which is equivalent to the federal DCO [defense coordinating officer], but on the state side and that rolls into liaison and support much like the federal DCO does with FEMA. No other place in the country has this yet.” The dual-status command structure was used successfully during Hurricane Sandy, where state and federal personnel worked under the same chain of command and helped local first responders deliver 6 million meals and more than 8 million gallons of unleaded and diesel fuel. The structure also helps with the whole-community approach the guard has embraced since Katrina. “Along with 9/11, Katrina was one of those pivotal domestic incidents that highlighted the critical importance of pre-incident planning, shared situational awareness and interagency coordination,” McClellan said. Most states don’t have a 24/7 JOC. But California is unique and so is the guard’s ability (much like Florida and New York) to do water search and rescues. Every few months the guard is deployed for maritime service. For instance, the guard was recently asked to help with a rescue 1,300 miles off the San Diego coast. It was out of the Coast Guard’s reach, and the Air Force couldn’t support the mission. The guard flew two aircraft from Moffett Airfield in the Bay Area, assisted a Chinese fishing vessel, then handed it over to the Coast Guard. “That’s something we’d quarterback here in the JOC with our supporting units throughout the state,” Hill said. In fact, he said, the guard responds to a search and rescue or other emergency every three days or so. They also provide shelter for the homeless at armories throughout the state at different times of the year. It’s all part of the guard’s new community mentality.
http://www.govtech.com/em/disaster/National-Guard-All-Hazards-Response.html
In a strange melding of technologies, cell phone towers are helping to make wind energy forecasts in Texas more accurate and predictable.

Niwot, Colo.-based wind data provider Onsemble recently announced the completion of a wind data network that tracks real-time wind speeds, direction and temperature from sensors placed on cell phone towers approximately 260 to 320 feet above ground, roughly the same height as many wind turbines. The sensors will track data throughout the Electric Reliability Council of Texas (ERCOT) region, which manages most of the state's electricity grid and covers 95 percent of wind farms in Texas. The Public Utility Commission of Texas oversees ERCOT.

The project is part of a nationwide effort to better predict wind turbines' electricity output, which typically is more unreliable and intermittent than other energy sources. The sensors collect one-minute averages of wind data and send the information to a central hub every 10 minutes in order to predict a wind energy "ramp event," the point at which a large influx of wind energy will be introduced to the electricity grid. The system can predict such an event 12 to 24 hours before it happens.

This improved forecasting could help Texas, which according to private-sector data produces one-third of the country's 45,000 megawatts of wind power, run its wind farms more efficiently and make better financial decisions, partly because prices in the Texas electrical grid update every 15 minutes based on the supply and demand of the prior 15 minutes. Consequently, more accurate and frequent readings should mean more accurate prices. And if operators know when a boost of wind-powered energy is about to arrive, they can reduce their reliance on reserve power sources, such as fossil fuels.

Most nationwide wind data used by the power industry is still collected from about 10 meters above ground. Those readings are taken much farther from turbine height, so they are not as accurate. One of the obstacles to implementing the more accurate systems was an assumption that governments would have to pay for the new sensor networks because there wasn't a business model to back private funding. But according to experts, that's no longer the case.

"With the recent development of wind energy and solar energy, and the need to more accurately predict the intermittency of those renewable energy resources, there is now a business case to be made for obtaining this data at those levels," said Anish Parikh, co-founder and vice president of Onsemble.

In addition to ERCOT, Onsemble is currently operating real-time data networks in the Bonneville Power Administration, a federal agency in the Northwest; the Public Service Company of Colorado; and the Southwest Power Pool utility markets. A nationwide build-out of the network is planned through 2012. Once the data is collected, it's sold to the agencies that make the forecasts. Generally, packages are sold as yearly subscriptions to either real-time or archived data sets. For government customers, prices are driven by the data package's size and how frequently it's updated.

The improved network is just one of many efforts to build out the country's wind power data. Last month, IBM announced a partnership with two private wind-data collection companies to use IBM's existing software to develop an automated wind control development platform.
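To make the data flow concrete, here is a small illustrative sketch, not Onsemble's actual pipeline: the sample layout, window length and ramp threshold are invented for the example. It shows how a hub might reduce raw tower samples to one-minute averages and flag a coming ramp event.

```python
from statistics import mean

# Hypothetical raw samples from one tower sensor: (seconds_since_midnight, wind_speed_m_s).
samples = [(t, 7.0 + 0.004 * t) for t in range(0, 3600, 5)]

def one_minute_averages(samples):
    """Group raw samples into one-minute buckets and average each bucket."""
    buckets = {}
    for t, speed in samples:
        buckets.setdefault(t // 60, []).append(speed)
    return [mean(v) for _, v in sorted(buckets.items())]

def flag_ramp(averages, window=30, rise_m_s=5.0):
    """Flag a possible ramp if the averaged speed rises more than rise_m_s
    over any window-minute span (both thresholds are illustrative only)."""
    for i in range(len(averages) - window):
        if averages[i + window] - averages[i] >= rise_m_s:
            return i  # minute index where the rise began
    return None

avgs = one_minute_averages(samples)
start = flag_ramp(avgs)
print("possible ramp starting at minute", start if start is not None else "none detected")
```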
In 2009, a private company funded Boston-based WindPole Ventures to install wind-measuring sensors on 1,150 communication towers over the next 22 years, although the company confirmed that only 600 were suitable for use.
http://www.govtech.com/technology/Wind-Energy-Forecasts.html
William Jackson | The Web at 20: Rewards and risks of an open platform - By William Jackson - Apr 03, 2009 The World Wide Web is so ingrained in our daily lives that it’s something of a surprise to be reminded that it was born just 20 years ago. There could be a number of birthdays for the Web. The underlying concept of hypertext dates to the 1960s, and the Web debuted as a publicly available service in August 1991. “But the date of record is usually considered March 13, 1989,” said Leslie Daigle, chief Internet technology officer at the Internet Society. That was when Tim Berners-Lee submitted his proposal for the technology to CERN, the European physics consortium. Berners-Lee proposed a plan for linking information systems via hypertext over the Internet as a way to manage and make available the huge amounts of data expected to be generated by the Large Hadron Collider, which is just now going online. The Internet Society recently marked the anniversary of that proposal by hailing it as an example of the wonders that can be achieved on an “open, standardized Internet platform.” There have been other game-changing technologies in the past century, including aviation, radio and television. But unlike them, the Web harnessed the imagination and efforts of thousands of individuals and organizations that made their own contributions. It is not easy to get a new TV station on the air or a new aircraft into the air. But all it takes to get a new tool online is a PC and an idea. That is the great strength — and weakness — of the Web. In 20 years, it has established a completely new sector of the economy. Unfortunately, a large portion of that sector operates underground, where criminals steal and manipulate data and generally do their best to make online life for the rest of us burdensome. “It’s like any other tool,” Daigle said. “It requires a certain amount of education and a lot of socialization to use it properly.” Emphasizing security rather than openness in developing the Web would have been counterproductive, she said, although “I’m sure there is a lot that could have been done differently that wouldn’t have stifled that development.” That unstifled development still is going on. Instant messaging made e-mail seem old hat and has since been made almost obsolete by texting. In the past few years, social-networking sites have emerged and evolved from student playgrounds to business tools. And with each innovation have come new crops of vulnerabilities, exploits and opportunities for social engineering that create new risks. On balance, the benefits of the Web probably far outweigh the drawbacks. I, for one, would not want to go back to doing my job without the online resources of the Web, and there are thousands of people providing those resources who I am sure would hate to lose their jobs. Even the security threats have a silver lining because they employ thousands of people to defend us from malicious code and malicious people. But the balancing act is a precarious one, and without constant vigilance, the scales could easily tip in the other direction. William Jackson is a Maryland-based freelance writer.
https://gcn.com/articles/2009/04/06/cybereye-the-web-at-20.aspx
Boosting Your Reasoning Skills

It may seem hard to believe that the person who was perhaps the greatest thinker of the ancient world was commonly thought of as a gadfly by his own contemporaries, but it's true. Socrates, the intellectual progenitor of Plato and Aristotle, was a much-disliked man in his time. In his day, during Athens' golden age in the 5th century B.C., the academic climate was dominated by the Sophists, a class of instructors whose educational methods were primarily designed to steer learners toward professional success rather than teach them how to think. Socrates, however, held that behind the admittedly polished and skillful rhetoric of the Sophists and their students were muddled meanings, sloppy argumentation and little substantive information.

In contrast, Socrates offered what is now called dialectical reasoning, though he wouldn't have recognized that term. This is more or less a perpetual process of exhaustively questioning an idea or principle: The accuracy of any concepts or facts, regardless of how important they may or may not be, is never taken for granted, and any position taken must be carefully thought through. In Plato's accounts, Socrates regularly shot holes in the arguments of the Sophists in public debates via this dialectical methodology and, not surprisingly, they hated him for it. In fact, Socrates was sentenced to death in 399 B.C. because his unique approach to teaching was eventually ruled to be dangerous to the state.

Although he passed long ago, the dialectical technique Socrates espoused has lived on into our times. Whenever students attempt to go beyond passive learning and venture into active reasoning (questioning and evaluating information instead of just absorbing it), they're wittingly or unwittingly employing this method.

IT professionals at all levels and in all job roles should seek to boost their ability to reason. Although this will help them in their training and certification endeavors, particularly in advanced learning and testing environments such as labs and simulations, critical thinking is more than just a means to prepare for and pass an exam. It's something that will aid you in your progression through your professional and personal life. A few relatively simple ways in which you can boost your reasoning skills include:

Make a Case for a Point of View at Odds with Your Own

As Socrates questioned everything, even his own viewpoints, so should you. Think of one of your most closely held beliefs, then try to refute it with an opposing fact-based argument. The point is not to change your values, but rather to enhance your critical thinking skills by shaking yourself out of restrictive thought processes. By debating yourself, you'll boost your ability to reason through dialectical methods, and your own convictions will likely come out stronger for having been challenged and defended.

Play Reasoning Games

When you work through puzzles such as Sudoku and crosswords, or play strategy games like chess and Risk, you're giving your brain a workout. These help you develop reason and logic by putting you in situations where you must identify patterns and relationships, overcome various obstacles and take the appropriate measures to achieve an objective.

Think Through Paradoxes

Paradoxes, or numerical and verbal expressions that are contradictory in nature, are another interesting way to challenge the mind. Here's a really basic one: There are infinitely many numbers greater than zero.
Yet there are also infinitely many numbers between zero and one (0.1, 0.01, 0.001 and so on). Which infinity is bigger? Can an infinity exist within an infinity? You can build up your reasoning skills by setting your mind loose on these mental games and seeing if you can come up with a satisfactory explanation.
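For readers who want more than a shrug, the standard resolution is that the two collections are exactly the same size. A short sketch of the usual argument, written out in LaTeX, pairs every number between zero and one with a positive number and vice versa:

```latex
% One explicit pairing (a bijection) between the interval (0,1) and all positive reals:
%
%   f is strictly increasing on (0,1), so no two inputs share an output,
%   and every y > 0 is reached, since f(y/(1+y)) = y.
%
% In set-theoretic terms, the two "infinities" have the same cardinality:
%   |(0,1)| = |(0,\infty)|
\[
  f \colon (0,1) \to (0,\infty), \qquad f(x) = \frac{x}{1-x},
  \qquad f^{-1}(y) = \frac{y}{1+y}.
\]
```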
http://certmag.com/boosting-your-reasoning-skills/
Figuring out what exactly is wrong with a crying baby can be a frustrating experience for parents and doctors. Is the cry from pain, hunger or something worse? In a rather interesting development announced this week, an automated software and sensor system developed at Brown University may begin to unlock some clues.

Brown researchers, working with Women & Infants Hospital, say they have developed a tool that analyzes the cries of babies, searching for clues to potential health or developmental problems. Slight variations in cries, mostly imperceptible to the human ear, can be a "window into the brain" that could allow for early intervention, the researchers stated.

From the researchers: "Utilizing known algorithms, we developed a method to extract acoustic parameters describing infant cries from standard digital audio files. The system operates in two phases. During the first phase, the analyzer separates recorded cries into 12.5-millisecond frames. Each frame is analyzed for several parameters, including frequency characteristics, voicing, and acoustic volume. The second phase uses data from the first to give a broader view of the cry and reduces the number of parameters to those that are most useful. The frames are put back together and characterized either as an utterance - a single 'wah' - or silence, the pause between utterances. Longer utterances are separated from shorter ones and the time between utterances is recorded. Pitch, including the contour of pitch over time, and other variables can then be averaged across each utterance. In the end, the system evaluates for 80 different parameters, each of which could hold clues about a baby's health."

"There are lots of conditions that might manifest in differences in cry acoustics," said Stephen Sheinkopf, assistant professor of psychiatry and human behavior at Brown, who helped develop the new tool, in a statement. "For instance, babies with birth trauma or brain injury as a result of complications in pregnancy or birth, or babies who are extremely premature, can have ongoing medical effects. Cry analysis can be a noninvasive way to get a measurement of these disruptions in the neurobiological and neurobehavioral systems in very young babies."

The automated analyzer lets researchers evaluate cries much more quickly and in much greater detail. The Brown team plans to make it available to researchers around the world in the hopes of developing new avenues of cry research.
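The two-phase structure described above maps naturally onto a short signal-processing sketch. The following is not the Brown and Women & Infants tool; only the 12.5 ms framing is taken from the description, and the sample rate, the energy threshold and the synthetic clip are assumptions made for illustration.

```python
import numpy as np

SAMPLE_RATE = 16_000                    # assumed sample rate, samples per second
FRAME_LEN = int(0.0125 * SAMPLE_RATE)   # 12.5 ms frames, as in the description

def frame_energy(audio):
    """Phase 1 (simplified): split audio into 12.5 ms frames and measure acoustic volume."""
    n_frames = len(audio) // FRAME_LEN
    frames = audio[: n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
    return np.sqrt((frames ** 2).mean(axis=1))   # RMS energy per frame

def utterances(energies, threshold=0.05):
    """Phase 2 (simplified): merge consecutive loud frames into utterances ("wah"s)
    and return (start_frame, end_frame) pairs; quiet frames count as silence."""
    spans, start = [], None
    for i, e in enumerate(energies):
        if e >= threshold and start is None:
            start = i
        elif e < threshold and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(energies)))
    return spans

# Synthetic one-second clip: two bursts of "cry" separated by a pause.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
clip = 0.3 * np.sin(2 * np.pi * 450 * t)                      # 450 Hz tone as a stand-in cry
clip[int(0.3 * SAMPLE_RATE): int(0.5 * SAMPLE_RATE)] = 0.0    # silence between utterances

for start, end in utterances(frame_energy(clip)):
    print(f"utterance from {start * 12.5:.1f} ms to {end * 12.5:.1f} ms")
```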
http://www.networkworld.com/article/2224944/applications/high-tech-tool-can-help-interpret-health-clues-from-crying-babies.html
Forwarded from: Jason Burzenski <jason.burzenskiat_private>

I recommend users use a personal cryptography system to ensure quality passwords. The idea is this: the user chooses a cipher to remember that will be applied to passwords, and the passwords before they are ciphered. For example, if you insist that your password should be iluvlinux for your email account and ihatelinux for your network logon, you might apply a simple substitution cipher that changes all vowels to h4ck3r vowels, then pad the password with predetermined special characters such as a ^ prefix and a ) suffix. For added strength, any consonants occurring before the letter N will be capitalized. The user would then use ^1LuvL1nux) to access email and ^1H4t3L1nux) for network logon.

A user only needs to remember the cipher and a common word/phrase in order to maintain a set of strong passwords. This is also helpful in an environment where users insist on writing their passwords on sticky notes and attaching them to the sides of their monitors. Finding a list of common words will not allow an attacker to gain entry without knowing the correct cipher.

If you're truly a genius and you have room in your mind for more than one cipher, you can associate a cipher with a set of associated systems. Have a cipher for work, for personal business, for spam-generating websites, etc. It's not a cure-all, but ^CH4rL13) is still a stronger password than charlie.

Jason Burzenski, CISSP

-----Original Message-----
From: owner-isnat_private [mailto:owner-isnat_private] On Behalf Of InfoSec News
Sent: Friday, May 24, 2002 6:30 AM
To: isnat_private
Subject: [ISN] Hackers can crack most in less than a minute

http://news.com.com/2009-1001-916719.html?tag=fd_lede

By Rob Lemos
Staff Writer, CNET News.com
May 22, 2002, 4:00 a.m. PT

When a regional health care company called in network protection firm Neohapsis to find the vulnerabilities in its systems, the Chicago-based security company knew a sure place to look. Retrieving the password file from one of the health care company's servers, the consulting firm put "John the Ripper," a well-known cracking program, on the case. While well-chosen passwords could take years--if not decades--of computer time to crack, it took the program only an hour to decipher 30 percent of the passwords for the nearly 10,000 accounts listed in the file.

"Just about every company that we have gone into, even large multinationals, has a high percentage of accounts with easily (cracked) passwords," said Greg Shipley, director of consulting for Neohapsis. "We have yet to see a company whose employees don't pick bad passwords."

[...]

- ISN is currently hosted by Attrition.org

To unsubscribe email majordomoat_private with 'unsubscribe isn' in the BODY of the mail.

This archive was generated by hypermail 2b30 : Tue May 28 2002 - 06:04:21 PDT
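The cipher Burzenski describes at the top of the post is concrete enough to write down. The sketch below reads "consonants occurring before the letter N" as alphabetical order (b through m), which is the only reading that reproduces the poster's own examples; the leet vowel substitutions and the ^ / ) padding are taken directly from the message, and "u" is left alone because the examples leave it unchanged.

```python
LEET = {"a": "4", "e": "3", "i": "1", "o": "0"}   # "u" is left as-is in the examples

def personal_cipher(phrase, prefix="^", suffix=")"):
    out = []
    for ch in phrase.lower():
        if ch in LEET:                      # h4ck3r vowels
            out.append(LEET[ch])
        elif ch.isalpha() and ch < "n":     # consonants alphabetically before N get capitalized
            out.append(ch.upper())
        else:
            out.append(ch)
    return prefix + "".join(out) + suffix

# Reproduces the examples from the post:
assert personal_cipher("iluvlinux") == "^1LuvL1nux)"
assert personal_cipher("ihatelinux") == "^1H4t3L1nux)"
assert personal_cipher("charlie") == "^CH4rL13)"
print(personal_cipher("iluvlinux"))
```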
http://lists.jammed.com/ISN/2002/05/0166.html
In the 1800s, the railroad provided extreme economic growth to cities and towns across America. According to one Georgia county, fiber optics appear to be the next determinant of economic growth. According to Henry County Commissioner Brian Preston, the railroads moved goods throughout the country, and now fiber optics move goods around the world. Thanks to technology advancements, the world's economy has become more interconnected.

"We are competing in this worldwide economy to try to bring business to Henry County, and every day that we're delayed or every day that we're behind, they get a little bit bigger and a little bit stronger," Preston said. "Fiber optics is the network connecting communities around the world."

When the railroad network expanded from 1870 to 1890, it redefined agricultural land values. According to a study by MIT's Dave Donaldson and Harvard's Richard Hornbeck, if those railroads had been removed in 1890, the total value of U.S. agricultural land would have decreased by 73% and GNP by 6.3%. Laying fiber cable may bring similar growth to land value. The railroad wealth went to the Rockefellers and Vanderbilts, but if planned properly, the potential wealth connected with this modern expansion can remain with local communities.

For communities, like Henry County, that are interested in designing their own fiber optic networks, GeoTel Communications' metro fiber maps, fiber-lit buildings and long-haul fiber data can help city officials, network providers and urban planners analyze existing telecom networks and make informed decisions for planning and expanding those networks.

If you're interested in telecommunications infrastructure maps, fiber maps or metro fiber maps, GeoTel Communications offers custom-made fiber maps based on client requests, as well as a variety of other telecom data products. Give us a call at 800-277-2172 to learn more!
http://www.geo-tel.com/2013/fiber-could-have-the-same-impact-as-the-railroad/
Go Beyond Studying: Absorbing Knowledge It might not be intuitive for a newsletter titled “Study Guide” to downplay the significance of studying. After all, the “branding” of this particular medium suggests its bread and butter is helping readers study better. And that’s undeniably true — it does primarily exist to assist IT pros prepare for certification exams. In the commonly understood meaning of the word, though, “studying” is a short-term effort. People study a particular subject through reading and research for the sake of some goal, usually to pass a test. After they’ve successfully completed the exam, they can just let the knowledge they’ve built up fall into the cerebral abyss. But they haven’t really learned anything if they do that. For true career success, IT professionals need to build up and sustain their proficiency, both the technical and nontechnical varieties. Ultimately, we want techies to go beyond studying and really absorb the information they gain, so they can apply it in their work as easily as possible. As you study, you should keep the following thoughts in mind to really take in the terms and concepts of a particular body of knowledge. Less is More You want to learn as much as possible, right? Not exactly. Studying voluminous amounts of information actually can interfere with learning, as the brain can handle only so much new data in a given time period. For example, a study of American and German high school students showed that the mathematics textbooks used by the former covered close to twice as many topics. Yet, the German students surpassed their American counterparts on math exams. Because they were able to concentrate more of their mental energy on fewer topics, the German high school students were able to apply the knowledge much more effectively. Hence, when studying for an exam or trying to learn a new skill, focus first on the most essential topics. Then, when you’ve got those down, let your knowledge branch out further into related spheres. A Good Night’s Sleep A recent study from the Harvard Medical School involving 48 people between 18 and 30 — none of whom had any sleep problems or were taking medications — showed sleep greatly improves memory. Participants were divided into four groups: sleep before testing, wake before testing, sleep before testing with interference or wake before testing with interference. The research findings showed people who slept after learning the information — no matter whether their slumber was disturbed — recalled more information than those who tried to remember it after hours of being awake. This demonstrates two things: To retain more, you should study in the evening if you can, then go to bed relatively soon after you finish. Low to No Stress Continual discharge of large amounts of stress-related hormones called cortisol can prevent the brain from creating memories and accessing old ones. Your adrenal glands release adrenaline during brief periods of severe stress, which can be great for dealing with these situations. If these conditions are sustained for very long, however, cortisol is released into the brain, which damages the hippocampus (the section of the organ that creates memories and more or less controls learning). In addition, too much cortisol can shut down the brain’s ability to retrieve long-term memories, which helps explain why some people “go blank” during a big test. If you want to learn and retain information, then steer clear of excess stress.
http://certmag.com/go-beyond-studying-absorbing-knowledge/
The potential of connected devices to create damage, injury and mayhem is an ongoing security concern. But so far, the Internet of Things is not being linked, in a significant way, to security problems, says a new study.

Verizon, in its just-released annual report of cyber incidents, identifies phishing as the major problem. Of the more than 65,200 incidents it gathered data about, about 2,250 resulted in a breach, or confirmed disclosure of data to a third party. (In Verizon's parlance, a security "incident" falls short of a breach.)

A major problem remains phishing, where typically an email with a malicious attachment or link is used to entrap a victim. There were about 9,500 reported phishing incidents, with just over 900 reports of confirmed data disclosure. The main perpetrators of these attacks are organized crime syndicates (89%) and state-affiliated actors (9%), the report said.

Humans remain the weakest security link. In looking at phishing activity, the report wryly points out that "the communication between the criminal and the victim is much more effective than the communication between employees and security staff." It recommended improving email filtering and awareness training, and developing a means to protect the rest of the network from employee mistakes.

The IoT has been identified as a potential security threat on a number of levels. Internet-connected devices can act as spyware, collecting voice, video or just usage data for unauthorized uses. And then there are James Bond-type breaches, where nefarious parties take control of machines, industrial settings, motor vehicles, drones and other connected devices.

But in terms of IoT-connected problems, the Verizon report didn't turn up issues. "We still do not have significant real-world data on these technologies as the vector of attack to organizations," it said of the IoT.

This story, "Report says criminals are better communicators than IT staffers" was originally published by Computerworld.
http://www.itnews.com/article/3061923/security/report-says-criminals-are-better-communicators-than-it-staffers.html
Default Passwords and What You Can Do About Them

This is a rather large security issue that has been (until lately) largely ignored and swept under the carpet. Many vendors have a dirty little secret: they ship software and hardware with default usernames and passwords, some of which they do not tell customers about. Once an attacker knows these default settings, they can typically access the software remotely and gain administrative control. This can be extremely dangerous. Consider an attacker gaining control over your switch and routing infrastructure and forwarding traffic from the R&D department to another server. Alternatively, imagine the attacker taking over your remote access devices, such as ISDN routers, and then sniffing passwords as users access the corporate LAN.

This is a huge problem because companies buy lots and lots of hardware and software that they need to deploy quickly. This often results in minimal configuration effort being made, and the default passwords are usually left in, due to carelessness, or for the simple fact that the people installing it don't know they exist (hardware vendors like 3Com have placed backdoors in hardware so that they can help the customer recover):

"3Com is issuing a security advisory affecting select CoreBuilder LAN switches and SuperStack II Switch products. This is in response to the widespread distribution of special logins intended for service and recovery procedures issued only by 3Com's Customer Service Organization under conditions of extreme emergency, such as in the event of a customer losing passwords."

The advisory then goes on to list several products and their username and password pairs (debug:synnet and tech:tech). These accounts have FULL administrative access, since they can reset the customer's password and so on. While it's very nice of 3Com to be thinking about helping customers, this is not the way to do it. It's like the car dealership putting a lock on the car where all you need to insert is a stiff piece of wire to open the door and start the car. Thanks, but no thanks.

Another classic example is Microsoft's SQL Server: the "sa" account's password is left blank. "sa" stands for system administrator, and it has quite a bit of access. Since most MS SQL databases are attached to networks and listening on port 1433, it is trivial for an attacker to attach to the database and do whatever they want, from running system commands on the server via "xp_cmdshell" to wiping or stealing the contents of your database.

The reason this issue exists is that vendors want to make products easy to deploy, increase ease of use and decrease support costs. When shipping a software or hardware product that has passwords, the cheapest solution is to simply leave them blank or set them with a default password. Ideally, vendors would ship each piece of hardware with a different, hard-to-guess default password such as "2i3h2323ddf" and tell the customer what it is. Some vendors do this, but it is relatively rare. Ideally with hardware, the vendor should log in to the hardware, generate a random password, assign it, then print out the password and ship it with the product. For software vendors this is a bit more difficult, as mass producing CD-ROMs is not feasible if every CD-ROM must be different.
In a perfect world, software products would generate secure random passwords during install and notify the user. Unfortunately, this would also increase support costs and user aggravation, so as with most security issues, ease of use beats out security. So, these are the existing solutions:

- Assign no password and make the user log in and create a password. Maybe configure the product so that it does not function until a password is entered. This would be quite effective and would definitely encourage people to put a password in. It would also cause problems, though, when users plug it in and it doesn't work immediately.

- Assign a default password and make the user log in and change the password. This is no better than the no-password option, since the default password will be widely published at some point. Many vendors opt for this solution, assigning a default password and (usually) telling the user to log in and change it.

- Assign a random password and make the user log in and change the password. Maybe configure the product so that it does not function until a password is entered. This would be quite effective and would definitely encourage people to put a password in. It would also cause a lot of grief to users, though, since they may lose the paper with the serial number. Some vendors that sell servers with the OS pre-loaded do this to the admin accounts, which is a good idea. One variation would be to put the database online, so when you plug in the serial number, out pops the default password that was assigned. This assumes the serial number is stamped onto the product physically and cannot be found via the network. This would be a relatively safe option.

- Use some other mechanism, such as a token. For a product such as a router, design the authentication to support tokens and preload the product and the token with the same "secret". To log in, the user needs the token to create the response to a challenge. This would be expensive, and somewhat difficult for many users, but it would make breaking into the equipment via the network exceedingly difficult.

Ultimately, all these solutions require some degree of user education. You could solve the problem by using a technical solution such as tokens, but this would create other problems (what happens when you lose the token? what about distributed administration?), and is generally not feasible. A company should have a policy for setting passwords and so on for hardware and software devices and packages that are being installed. The person or group responsible for the item should be given the task of assigning the password, keeping it recorded somewhere (preferably on a machine that is not networked) and generally taking care of the item (such as upgrades, configuration and so on). The password should be recorded, since people leave companies, get run over by trucks, forget things, and so on.

Alternatively, if the device supports it, you should use token- or smartcard-based authentication systems. This way you can easily share the PIN number or password needed to access the smartcard or token, but by keeping the smartcard or token physically secure, you do not have to worry as much about someone leaking the PIN number or password.
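The "assign a random password per unit" option above is easy to prototype today. Here is a minimal sketch of what a vendor's provisioning step, or an admin resetting inherited gear, might run, using Python's standard secrets module; the length, character set and serial numbers are arbitrary choices for illustration, not a recommendation from the article.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits   # keeps printed labels free of ambiguous symbols

def factory_password(length=16):
    """Generate one cryptographically random default password per device."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One password per serial number, to be printed and shipped with the unit.
for serial in ("SW-000123", "SW-000124"):
    print(serial, factory_password())
```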
Unfortunately, the vast majority of network-based equipment that can be remotely managed is managed via telnet, which means attackers can easily sniff passwords or hijack sessions. Attackers can be expected to conduct sweeps of your network looking for devices that are remotely manageable, and then use automated tools to try to log into them using the default passwords. There are two primary methods. The first is to use a tool such as nmap to identify which hosts are up, identify the OS they are running (via TCP/IP fingerprinting), and then try the specific defaults for each device. The second is to simply pound on each device that has telnet open and try every default user and password (if they just wanted to target 3Com equipment, this would be about a dozen username and password pairs). Needless to say, they could very quickly go through your network and hijack any 3Com equipment that still has default passwords left in it. You can also be sure that products like CyberCop and Nessus will integrate these checks into new versions of their software (and of course the bad guys have access to these tools).

So what can you, as a network administrator, do? Well, for starters, check that all remotely manageable devices (routers, switches, servers, etc.) have strong passwords set. Also check the list of default passwords to see if you own any equipment or software listed on it, and check with employees to see if they have such devices at home, making sure the issue is made clear (you're not angry at your employees for not setting the password; you want to help them, and playing the blame game is useless). Find out who is responsible for each item and make a list (this is a good idea in the long term for when you have problems with them). Many IDS systems are easily extensible, and you can add checks for default usernames and passwords such as "PASSWORD" and "CHANGE_ON_INSTALL". You should also firewall any access to these devices from the Internet, and if you must manage devices at a remote site, consider using VPN software to create a secure tunnel between your site and the remote one.

Long term, I think the solution is to implement authentication schemes that provide stronger control than simply usernames and passwords. Already companies are deploying smartcards, tokens, biometric-based systems and so on. There are also much better mechanisms than telnet for logging into remote equipment. Ideally, support for SSH, Kerberos and other strong authentication systems should be built into products. When buying equipment, ask the vendor if they plan to support such things, and if not, why not. Cisco, for example, now has SSH on their 7200, 7500 and 12000 series equipment. This is a good start: as a Cisco customer, you can exert some influence, and if enough customers demand SSH support on other Cisco products, chances are that it will happen.

Kurt Seifried (email@example.com) is a security analyst and the author of the Linux Administrators Security Guide, a source of natural fiber and Linux security, part of a complete breakfast.

SecurityPortal is the world's foremost on-line resource and services provider for companies and individuals concerned about protecting their information systems and networks. The Focal Point for Security on the Net (tm)
http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/624481/Default-Passwords-and-What-You-Can-Do-About-Them.htm
A study-guide on how to detect a virus hoax yourself It is difficult to imagine anybody today who does not treat computer viruses as a real threat to a regularly functioning computer system. However, contiguously with the virus spreading has occurred another syndrome, which is not any less dangerous - virus hoaxes. The idea of a virus hoax is simple: an offender fabricates a warning about an extremely dangerous virus that actually does not exist at all. After that, he sends the hoax to as many users as possible, asking them to take appropriate measures and to forward the message to others. Scared users, doing their best, inform all their colleagues and partners. As a result, the computer world is constantly agitated by bursts of virus hysteria, alarming tens of thousands of people all around the world. The "heyday" of the virus hoax was 1997-1998 when nearly every month, anti-virus companies were struck by a huge wave of e-mail from frightened users. As a result, these same anti-virus companies had to release soothing "calm down" articles. How can you recognize a real virus warning from a hoax? And what do you do should your friends believe this bad joke? The main rule: If the message did not come directly from an anti-virus-developer news service, then you should check the hoax sections at specialised Internet resources. We recommend you subscribe to the Kaspersky Lab Virus Encyclopaedia or check Rob Rosenberger's popular Virus Myths & Hoaxes Web site at VMyths.com. In case you don't find the virus alert you have received on these pages, then you should visit the news section on Kaspersky Lab Web site. Our experts are very fast in delivering breaking news about the latest virus outbreaks. Should there be any new outbreaks, you will find a corresponding notification at www.viruslist.com. In the event that you fail to locate any details regarding the virus mentioned in the alert, you should send a request to Kaspersky Lab technical support (firstname.lastname@example.org) for clarification. What should you do if you have received a real virus hoax? Firstly, do not forward it to anyone else. The best way of handling such messages is to delete them immediately. Secondly, as fast as you can, notify the sender that he has fallen victim to a virus hoax. There is still a possibility he hasn't managed to send the "virus alert" to others, so by informing him of his error, you are helping him save his credibility for not crying "wolf," causing friends and colleagues unnecessary nerve-wracking moments. In addition, it also needs to be mentioned that virus hoaxes carry an even more dangerous payload than simply scaring people with hollow alerts. It is possible that at sometime, a malefactor will write a virus, utilizing the nickname of a well-known virus hoax, thus, users-believing it is fine to do so-will open the attached file and get infected. At this time, we would like to remind you of the Golden Rule in regards to computer hygiene: Do not, under any circumstances, open any attached files received from unknown sources. You should be careful even with messages received from the people you know: many viruses send out infected files from affected computers in a way a user simply doesn't realize. Thus, if you consider the message to be unexpected and strange (for instance, a love letter from your boss), then it is better to check whether the sender has really sent the file, and to be sure his computer is not infected. 
"Perhaps, some will consider it strange, but some paranoia is an essential part of computer security, especially when dealing with e-mail," said Den Zenkin, Head of Corporate Communications for Kaspersky Lab.
http://www.kaspersky.com/au/about/news/virus/2000/If_You_ve_Got_Mail_
In this tutorial we will discuss the concept of ports and how they work with IP addresses. If you have not read our article on IP addresses and need a brush up, you can find the article here. If you understand the concepts of IP addresses, then let's move on to TCP and UDP ports and how they work.

The devices and computers connected to the Internet use a protocol called TCP/IP to communicate with each other. When a computer in New York wants to send a piece of data to a computer in England, it must know the destination IP address that it would like to send the information to. That information is sent most often via two methods, UDP and TCP.

The two Internet workhorses: UDP and TCP

UDP? TCP? I know you are getting confused, but I promise I will explain this in very basic terms so that you can understand this concept.

TCP stands for Transmission Control Protocol. Using this method, the computer sending the data connects directly to the computer it is sending the data to, and stays connected for the duration of the transfer. With this method, the two computers can guarantee that the data has arrived safely and correctly, and then they disconnect the connection. This method of transferring data tends to be quicker and more reliable, but puts a higher load on the computer as it has to monitor the connection and the data going across it. A real-life comparison to this method would be to pick up the phone and call a friend. You have a conversation and when it is over, you both hang up, releasing the connection.

UDP stands for User Datagram Protocol. Using this method, the computer sending the data packages the information into a nice little package and releases it into the network with the hopes that it will get to the right place. What this means is that UDP does not connect directly to the receiving computer like TCP does, but rather sends the data out and relies on the devices in between the sending computer and the receiving computer to get the data where it is supposed to go. This method of transmission does not provide any guarantee that the data you send will ever reach its destination. On the other hand, this method has very low overhead and is therefore popular for services where it is not critical that the data arrives on the first try. A comparison you can use for this method is the plain old US Postal Service. You place your mail in the mailbox and hope the Postal Service will get it to the proper location. Most of the time it does, but sometimes it gets lost along the way.

Now that you understand what TCP and UDP are, we can start discussing TCP and UDP ports in detail. Let's move on to the next section, where we can describe the concept of ports better.

TCP and UDP Ports

As you know, every computer or device on the Internet must have a unique number assigned to it called the IP address. This IP address is used to recognize your particular computer out of the millions of other computers connected to the Internet. When information is sent over the Internet to your computer, how does your computer accept that information? It accepts that information by using TCP or UDP ports.
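Before digging into ports themselves, the phone-call versus postal-mail contrast above can be made concrete with a few lines of socket code. This is only a sketch: the loopback address and port numbers are arbitrary demo values, and the short sleep is crude synchronization that is good enough for a local example. The TCP side performs an explicit connect and accept and the data is echoed back; the UDP side simply addresses a datagram and drops it into the network with no guarantee anyone receives it.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # arbitrary loopback address and port for the demo

# TCP: the server binds to a port and listens; the client connects; data flows over a real connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def echo_once():
    conn, _ = server.accept()          # connection established ("pick up the phone")
    with conn:
        conn.sendall(conn.recv(1024))  # echo the data back, then both sides hang up

threading.Thread(target=echo_once, daemon=True).start()
time.sleep(0.1)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello over TCP")
    print(client.recv(1024))
server.close()

# UDP: no connection at all -- the datagram is addressed and sent on its way.
# Delivery is not guaranteed; here nobody is even listening on the destination port.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as u:
    u.sendto(b"hello over UDP", (HOST, PORT + 1))
```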
You have an IP address, and then many ports on that IP address. When I say many, I mean many. You can have a total of 65,535 TCP ports and another 65,535 UDP ports. When a program on your computer sends or receives data over the Internet, it sends that data to an IP address and a specific port on the remote computer, and receives the data on a usually random port on its own computer. If it uses the TCP protocol to send and receive the data, then it will connect and bind itself to a TCP port. If it uses the UDP protocol to send and receive data, it will use a UDP port. Figure 1, below, is a representation of an IP address split into its many TCP and UDP ports. Note that once an application binds itself to a particular port, that port can not be used by any other application. It is first come, first served.

<-------------------- 192.168.1.10 -------------------->

This all probably still feels confusing to you, and there is nothing wrong with that, as this is a complicated concept to grasp. Therefore, I will give you an example of how this works in real life so you can have a better understanding. We will use web servers in our example, as you all know that a web server is a computer running an application that allows other computers to connect to it and retrieve the web pages stored there.

In order for a web server to accept connections from remote computers, such as yourself, it must bind the web server application to a local port. It will then use this port to listen for and accept connections from remote computers. Web servers typically bind to TCP port 80, which is what the HTTP protocol uses by default, and then wait and listen for connections from remote devices. Once a device is connected, the web server will send the requested web pages to the remote device, and when done, disconnect the connection.

On the other hand, if you are the remote user connecting to a web server, it works in reverse. Your web browser picks a random TCP port from a certain range of port numbers and attempts to connect to port 80 on the IP address of the web server. When the connection is established, the web browser sends the request for a particular web page and receives it from the web server. Then both computers disconnect the connection.

Now, what if you wanted to run an FTP server, which is a server that allows you to transfer and receive files from remote computers, on the same machine as the web server? FTP servers use TCP ports 20 and 21 to send and receive information, so you won't have any conflicts with the web server running on TCP port 80. Therefore, the FTP server application, when it starts, will bind itself to TCP ports 20 and 21 and wait for connections in order to send and receive data.

Most major applications have a specific port that they listen on, and they register this information with an organization called IANA. You can see a list of applications and the ports they use at the IANA Registry. With developers registering the ports their applications use with IANA, the chances of two programs attempting to use the same port, and therefore causing a conflict, will be diminished.
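The "first come, first served" rule is easy to see for yourself: once one socket has bound a port, a second bind attempt on the same address fails. A small sketch, using an arbitrary high port so it can run without special privileges:

```python
import socket

ADDRESS = ("127.0.0.1", 50080)   # arbitrary port standing in for a web server's port 80

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(ADDRESS)              # the "web server" claims the port
first.listen(5)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(ADDRESS)         # a second application tries to claim the same port
except OSError as err:
    print("port already taken:", err)   # first come, first served
finally:
    second.close()
    first.close()
```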
https://www.bleepingcomputer.com/tutorials/tcp-and-udp-ports-explained/
An Adware program is one that displays advertisements on your computer or within the program itself. Just because a program is Adware, though, does not make it malicious in nature. In fact, there are many legitimate programs, including games, that now display ads on your computer or within the software itself. These types of programs display ads to generate further revenue for the developers or to promote other software that they may sell.

One advantage of a legitimate Adware program is that you can sometimes download the software for free. Instead of charging for the software, the developers display advertisements within it to cover the costs of development and to generate the revenue they would normally get from selling the product. If you then wish to no longer see the advertisements, but would like to continue using the program, you can typically pay a registration fee to the developer. All of these legitimate types of Adware programs will contain an End User License Agreement that explicitly states if and how advertisements will be shown through the software. When you uninstall these types of Adware, the program will be completely removed and will cease displaying advertisements on your computer.

On the other hand, there are Adware programs that are considered malware or Potentially Unwanted Programs (PUPs). These are programs that display advertisements on your computer without your permission or without telling you which program is generating them. They are also designed to be harder to uninstall so that they can continue earning revenue through their advertisements.

Malware Adware programs are computer infections that are typically installed on your computer through one of two methods. The first method is when these Adware programs pretend to be something innocuous so that you will download and install them, but once installed all they do is display ads. The other method is when they are installed without your permission or knowledge through Windows or software vulnerabilities on your computer. Adware of this type is the most difficult to remove and typically uses protection mechanisms that make it hard to run security programs to assist in removing it.

Adware that is classified as a PUP is typically bundled within other free programs that you download from the Internet. When you install the main program, the Adware programs will be installed as well and will display advertisements on your computer. These programs will also not clearly delineate in the End User License Agreement how or when advertisements will be displayed.
https://www.bleepingcomputer.com/virus-removal/threat/adware/
Disease 'networks' mimic web

Studying the spread of virulent computer viruses may prove useful in understanding the spread of disease and the ability of ecosystems to handle disturbances, researchers say.

"In terms of computer networks, one of the clear points that comes out of the analysis is that if you have this scale-free network, most transmission can be traced to the most highly connected nodes. So this is a clear implication for how to prevent the transfer of viruses: You concentrate on the most highly connected nodes," says Alun Lloyd, a researcher at the Institute for Advanced Study.

Lloyd and Oxford University's Robert May reported their findings in the journal Science on May 18. Computer and biological networks have similar structures that affect how disturbances such as electronic viruses propagate through them. Computer networks are "scale-free" networks, meaning most nodes of the network have relatively few connections to other nodes, while a small number have many connections. For instance, a university, Internet service provider or large company like Microsoft will have thousands or millions of connections to other points in the network, while a home computer may have only one. So a virus that hits an individual's PC is likely to propagate more slowly than one that invades Microsoft, since there are fewer links to exploit.

The computer case mimics what happens in the spread of diseases in the real world. With sexually transmitted diseases like AIDS, "a few individuals such as prostitutes have very high numbers of partners," the researchers wrote.

On the ecological front, so many processes are involved that the conclusions are not so clear. The model might be used to develop plans for protecting endangered species. "A food web may be one of those networks, so there are interactions in species where the nodes are the species and the links are that one species eats this one and competes with another one," Lloyd says. "The stability of the ecosystem might depend on these links."

"With some species," Lloyd says, "you could remove it quite easily and it might not have much of an effect. But a species with a lot of links might have a very large effect."

A few years ago, IBM scientist Jeffrey Kephart anticipated that the study of this interconnectedness (called topology in mathematics) might yield important theoretical conclusions about population biology and epidemiology. "For example, in this heyday of HIV, we are admonished daily by educators about the dangers of promiscuous activity, yet until recently there were no quantitative theoretical studies of how the spread of disease depends upon the detailed network of contacts between individuals," Kephart wrote. "Digital organisms" may be preferable subjects in the study of disease, he wrote, because they can be more easily controlled experimentally.

Lloyd and May's findings differ somewhat from the conclusions drawn in related research reported in Physical Review Letters by Romualdo Pastor-Satorras and Alessandro Vespignani, who concluded in their paper that even at very low levels of infection, a computer virus will spread widely. But Lloyd says their work used a model in which an infected node can be reinfected and continue to spread the virus, the Typhoid Mary of computer viruses. In humans, with most viruses, once a person is infected, he or she gains some immunity. In computers, the most highly connected nodes are usually those that are most sophisticated in dealing with virus infections, he says.
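The practical claim here, that protecting the most highly connected nodes is what matters on a scale-free network, can be checked with a toy simulation. The sketch below is not the researchers' model: it assumes the networkx package is available, uses a simple SIR-style process (infected nodes recover and become immune, as in the human case the article contrasts with reinfection), and all parameters are invented for illustration.

```python
import random
import networkx as nx

random.seed(1)

def outbreak_size(graph, immune, p_infect=0.2, seeds=5):
    """Simple SIR-style process: each infected node tries once to infect each
    susceptible neighbor, then recovers; immune nodes never become infected."""
    susceptible = set(graph) - set(immune)
    infected = set(random.sample(sorted(susceptible), seeds))
    susceptible -= infected
    recovered = set()
    while infected:
        newly = set()
        for node in infected:
            for nbr in graph.neighbors(node):
                if nbr in susceptible and random.random() < p_infect:
                    newly.add(nbr)
        recovered |= infected
        susceptible -= newly
        infected = newly
    return len(recovered)

g = nx.barabasi_albert_graph(2000, 2)          # scale-free: a few hubs, many low-degree nodes

hubs = sorted(g, key=g.degree, reverse=True)[:50]
randoms = random.sample(sorted(g), 50)

print("no immunization:   ", outbreak_size(g, immune=[]))
print("50 random nodes:   ", outbreak_size(g, immune=randoms))
print("50 best-connected: ", outbreak_size(g, immune=hubs))
```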
Laying more fiber optic cables is one way to meet the increasing need for higher bandwidth. In some situations, however, this approach is neither economical nor flexible for service providers and enterprises that are limited by cost and time. Thus, WDM technologies, which carry more bandwidth over a single fiber optic cable, are widely used in today's networks. The CWDM network, as a cost-effective and easy-to-deploy solution, has attracted the attention of many service providers and corporations. Although a CWDM network cannot match a DWDM network in data capacity, it can still satisfy a variety of applications in fiber optic networks while using wider channel spacing between wavelengths on the fiber. In addition, CWDM allows any protocol to be transported over the link, as long as it is carried on the specified wavelength.

The capacity of a CWDM network relies largely on a passive component, the CWDM Mux/Demux. In theory, the more channels a CWDM Mux/Demux provides, the larger the capacity a CWDM network can have. Most CWDM Mux/Demux units on the market provide between 2 and 16 channels, depending on requirements, and commonly use a standard rack design, an LGX box design or a pigtailed ABS module design (LGX CWDM Mux/Demux, rack CWDM Mux/Demux, pigtail CWDM Mux/Demux).

Driven by the increasing need for higher network capacity and for future-proofing network infrastructure, an 18-channel CWDM Mux/Demux has been introduced. An 18-channel CWDM Mux/Demux uses all 18 CWDM wavelengths defined by the standard: it can combine up to 18 different wavelength signals from different optical fibers onto a single optical fiber, or separate up to 18 different wavelength signals coming from a single optical fiber onto separate optical fibers. The 18-CH CWDM Mux/Demux provided by FS.COM is equipped with a monitor port for better CWDM network management and comes in a standard 1U rack enclosure.

Building a 10G CWDM network with a Mux/Demux offers relatively lower cost and higher efficiency compared with other methods, especially for long-distance optical transmission. The steps to build a 10G CWDM network are simple and the components are affordable for most companies. The basic required components are 10G switches, CWDM Mux/Demuxes, 10G CWDM SFP+ transceivers (or 10G CWDM XFPs, if the switch has XFP interfaces) and fiber patch cables. To guarantee a correct and error-free setup, the most important and complex step in 10G CWDM network cabling is connecting the patch cables from the correct-wavelength SFP+ (or XFP) to the correct port on each end of the link.

In a typical long-distance deployment, an 18-CH CWDM Mux/Demux is installed on each end of the existing fiber optic cable to multiplex or separate the 18 wavelengths of signals. Signals on a specific wavelength travel between the Mux/Demux and the switch over a fiber patch cable that is connected to the CWDM Mux/Demux on one end and to an SFP+ transceiver of the same wavelength on the other end. The SFP+ transceiver is first installed in the switch's SFP+ port to convert between optical and electrical signals. Selecting the right products for a 10G CWDM network not only delivers the best network performance but also cuts costs effectively.
FS.COM offers a full series of product solutions for 18-CH CWDM Mux/Demux 10G cabling.

Enjoy High-Performance Networking With the Right Patch Cords

The quality and performance of the fiber patch cables in a 10G CWDM network are essential. In addition to standard fiber patch cables, there are specialized patch cables that suit a variety of cabling applications, and customers can choose what they need accordingly: bend-insensitive patch cords for lower signal loss, uniboot patch cords for higher duplex cabling density, and push-pull patch cords for easier finger access.

Cut Costs Effectively With Reliable and Affordable CWDM SFP+ Transceivers

When selecting 10G CWDM SFP+ transceivers, compatibility should be considered. For example, if you are using a Cisco switch, the CWDM SFP+ should be compatible with that Cisco switch. However, an original-brand CWDM SFP+ fiber optic transceiver is often very expensive for many companies; for instance, an original Cisco CWDM SFP+ transceiver that supports an 80 km transmission distance is priced around 10,000 USD. Luckily, there is another choice: Cisco-compatible CWDM SFP+ transceivers, which are much cheaper than the original branded ones. The price of the Cisco-compatible CWDM SFP+ provided by FS.COM is less than half that of the original, and its performance is ensured by a series of compatibility and quality tests. Generic CWDM SFP+ transceivers for the 18-CH CWDM Mux/Demux are offered for each wavelength in 20 km, 40 km and 60 km reaches.

Kindly contact email@example.com or visit FS.COM for more details about the 18-CH CWDM Mux/Demux and its cabling solutions.
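The 18 channels referred to above are the nominal CWDM center wavelengths of the ITU-T G.694.2 grid, spaced 20 nm apart from 1271 nm to 1611 nm. The short sketch below generates that grid and pairs each channel with a transceiver label; the part-naming scheme is purely illustrative and not an actual catalogue entry.

    # CWDM wavelength grid per ITU-T G.694.2: 18 channels, 20 nm spacing.
    GRID_NM = list(range(1271, 1611 + 1, 20))      # 1271, 1291, ..., 1611
    assert len(GRID_NM) == 18

    def transceiver_label(wavelength_nm, reach_km=40):
        """Build an illustrative part label for a CWDM SFP+ at a given wavelength."""
        if wavelength_nm not in GRID_NM:
            raise ValueError(f"{wavelength_nm} nm is not on the CWDM grid")
        return f"CWDM-SFP10G-{wavelength_nm}nm-{reach_km}km"

    # One Mux/Demux port (and one matching transceiver) per channel.
    for channel, wavelength in enumerate(GRID_NM, start=1):
        print(f"channel {channel:2d}: {wavelength} nm  ->  {transceiver_label(wavelength)}")

The cabling step stressed above is exactly this pairing: a patch cable from a 1331 nm SFP+, for example, must land on the 1331 nm Mux/Demux port at each end of the link.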
On Jan. 12, 2005, Washington Gov. Gary Locke left office after serving two terms, but the virtual face of his administration, stored in the Digital Archives, will remain an accessible part of history. The Digital Archives, touted as the first of its kind in the nation, is maintained by the Washington State Archives Division, part of the Secretary of State's Office. "Salvaging Gov. Locke's Web site is an important step in the right direction," said Secretary of State Sam Reed. "We learn from those who have gone before us, and it is our responsibility to preserve our records for future generations." Locke's Web site has survived in its entirety. The 1,235 Web pages -- containing valuable insight into his administration, including 1,605 press releases, 536 speeches and 162 media events -- are now available through the Digital Archives.

Previously, records-management systems were designed to accommodate paper records, but with the rapid onslaught of electronic records production and no means to effectively archive those records, vital information was being lost. In addition, electronic records that were preserved were often stored on media that modern technology can no longer read, falling prey to the ever-evolving world of technological advancement. Left in the dust of obsolete legacy technology, digital records can become frustratingly unavailable. As the first state in the nation to attempt a statewide digital archiving system, Washington faced the challenge of devising a strategy for retaining and preserving electronic records. After extensive research, Reed and his staff proposed a content management system offering efficient access to digital records through an easy-to-use Web interface and an effective search method for finding those records. On Oct. 4, 2004, Reed's proposal came to fruition in the 48,000-square-foot Digital Archives facility in the Belle Reeves Building on the campus of Eastern Washington University in Cheney.

Filling the Archives

The Digital Archives facility provides a standardized central location for state archives, as well as a uniform means to store and access pertinent state records. Making this possible is a highly redundant storage area network (SAN) with a current storage capacity of 5 terabytes (approximately 20 billion sheets of paper) and the capability to conform to the latest technological advancements. The SAN consists of a high-speed, redundant hardware/software solution from Cisco Systems and EMC. The front end contains a Web content application system utilizing hardware and software from Hewlett-Packard and Microsoft, providing accessibility to indexed and searchable data via the Web. Data stored on the network will not only be preserved on tape; electronic records will also be converted to open file formats, such as XML (Extensible Markup Language), and automatically migrated to media compatible with the most recent digital methods for presentation. Electronic records transferred to the facility's SAN include e-mail folders, directories, databases, documents and Web pages, said Adam Jansen, digital archivist for Washington's State Archives Division. Remote agencies transfer files via file transfer protocol through an automated process using Microsoft BizTalk Server 2004.
Files are sent to a specific location on a server based upon certain parameters such as who's sending them, what office they're sent from, what agency is sending them, and the type of records they are, Jansen said, and additional metadata is added to transferred files when they're received by the BizTalk Server. "If it's a TIFF image of a photograph, we also create a more Web-friendly version, such as PDF or DjVu by LizardTech, so that we can present the information in a more universal format," said Jansen. The benefit is that by converting files created by proprietary software, such as Word and Excel, to PDF format, users attempting to access them can do so without being required to have Word or Excel installed on their computer, he said. "We do not alter any of the original information sent," he said. "We create what we call a Web-readable open standard version of that file so that we can carry it forward years from now." Besides focusing on information from remote agencies, the Digital Archives also captures agencies' Web sites for the database. "We're saving them as blobs -- binary images -- in a single server database," said Jansen. "We're maintaining all of the original scripting, but we're doing it into the database itself so that it can be pulled out and restructured or reconstituted as needed." To archive Web sites, the Digital Archives uses a custom-created Web-spidering utility, which grabs streams of binary Web information to save the information to the database, said Jansen. A Web spider begins with a single Web page then branches out to subsequent pages through the links connecting them, weaving a web of seemingly endless data retention. The facility's Web spiders automatically capture participating state agency Web sites at specified intervals and can be configured with certain parameters, such as how deep a spider capture should delve or how to handle links leading to external Web sites. Before the Digital Archives' efforts, he said, it was impossible to view a site as it appeared in any given moment of the past, before small, incremental changes gradually altered the site and irrevocably replaced what was already there. "There were no snapshots in time," Jansen said. Now, sites are being captured in different stages, providing a historically accurate glimpse of government activity. "Increasingly the Web is becoming the public interface for government," he said. "That's how we're disseminating information to citizens, which is why it is becoming more and more important to capture those Web pages because they are the face of the government that people see." Storing Public Policy The need for a successful, standardized archival program derived from state legislation enforcing an open government policy, which mandates that the public have full access to all documentation and records relating to government. "We needed to come up with a solution to ensure transparent government by preserving electronic records and making them available to the public years from now," Jansen said. As a result, the state hopes to ensure public confidence in state government and reassure the public their interests are being met. State-required archive information includes: land records; court records; maps; vital records (such as birth, death and marriage certificates); retirement documents; census; codes, ordinances and statutes; government correspondence and documentation; and any additional records with legal or historical significance. "We don't store records just to store them. 
We store records which are important -- that need to be kept forever, which really allows us to focus on the records we take in," said Jansen. The Digital Archives' first project migrated historic census information and marriage records for three pilot counties -- Spokane, Chelan and Snohomish -- to its centralized database. Now, with the gradual acquisition of electronic information such as Locke's Web site, the Digital Archives Division is expanding and fine-tuning the system for future stability. "Within two years, we hope to have fully evolved the system and developed the policies and procedures for both accessing and ingesting data to the point where we can really open our doors to the entire state," Jansen said. The Digital Archives plans to double storage capacity annually, growing from 5 terabytes to 30 terabytes within the next four years. By merging record management and technology, the Digital Archives offers users the ability to access information anytime, from anywhere, he said, noting that by merging these two worlds, digital archiving allows the state to "blend the traditional archival science of preservation that successfully provides access to the public with the best practices of IT storing and migrating data." "Our goal is to continue to grow and evolve the system to prove that its fundamental conception is correct and that our execution is right; that the retrieval is smooth and effortless and gives a very robust user experience while still preserving the information for long term," said Jansen. History will advance alongside technology through digital preservation, continually made available to the future as an untarnished link to the past. To access the Digital Archives, visit the Web site
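The Web-spidering utility Jansen describes, with its configurable crawl depth and its handling of links to external sites, follows a familiar pattern. The sketch below is a generic, minimal depth-limited crawler written for illustration, not the archive's actual code; it uses only the Python standard library, and the start URL is a placeholder.

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collect href targets from anchor tags in an HTML page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_depth=2, follow_external=False):
        """Depth-limited spider: returns {url: html} for pages reachable from start_url."""
        site = urlparse(start_url).netloc
        seen, pages, frontier = set(), {}, [(start_url, 0)]
        while frontier:
            url, depth = frontier.pop()
            if url in seen or depth > max_depth:
                continue
            seen.add(url)
            try:
                with urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except OSError:
                continue                      # unreachable page: skip it, keep crawling
            pages[url] = html                 # a real archive would persist this as a blob
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                nxt = urljoin(url, href)
                if follow_external or urlparse(nxt).netloc == site:
                    frontier.append((nxt, depth + 1))
        return pages

    if __name__ == "__main__":
        captured = crawl("https://www.example.com/", max_depth=1)
        print(f"captured {len(captured)} page(s)")

Capturing the same site on a schedule with a crawler like this is what produces the "snapshots in time" Jansen contrasts with a site that is only ever overwritten in place.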
Tips for Taming SE Linux, Part Two

For now we're going to make sure we understand SELinux fundamentals, and take a look at the nice Fedora tools for managing SELinux.

Policies: The SELinux Master Control Center

SELinux uses policies to enforce mandatory access controls (MAC), which, you'll recall from part 1, foil zero-day attacks and privilege escalation, so let's see what goes into making a policy. SELinux calls users, processes, and programs subjects. Objects are files, devices, sockets, ports, and sometimes other processes. Subjects can be thought of as processes, and objects are the targets of a process operation. SELinux uses a kind of role-based access control (RBAC) combined with type enforcement. Type enforcement enforces policy rules based on the types of processes and objects, which it tracks in a giant table. Types and domains are the same thing; you'll see both terms a lot. Type enforcement means every subject on the system (that's right, all of them) has to have a type assigned to it. Types are stored in security contexts in the extended attributes (xattrs) of the files. This means they are stored in the inodes, which means that no matter how many weirdo soft or hard links are attached to your file, the security context is inescapable, and will not be fooled by silly evasions such as renaming the files or creating crafty softlinks.

Types are included in the security context. A security context has three elements: identity, role, and type identifiers. You can see these with the Z option to the ls command:

$ ls -alZ /bin/ping
-rwsr-xr-x root root system_u:object_r:ping_exec_t:s0 /bin/ping

What do these things mean? system_u is a system user. Files on disk do not have roles, so they are always object_r. ping_exec_t is the type for the ping command. You will also see documentation that calls this the domain. The security context is used by your SELinux policy to control who can do what. The identity controls which domains the process is allowed to enter. This is defined somewhere inside the vast directory, /etc/selinux, that contains your SELinux policy. In the targeted SELinux policy, every subject and object runs in the unconfined_t domain, which is just like running under our old familiar Unix DAC (Discretionary Access Control) permissions, except for a select set of daemons that are restricted by SELinux policy and run in their own restricted domains. For example, httpd runs in the httpd_t domain, and is tightly restricted so that a successful intrusion will be confined to the HTTP process, and not gain access to the rest of the system. Nor will users or processes who have no business with httpd be allowed to interfere with its operation, or access data files they have no business looking at. The ps command will show you some examples of this in action:

$ ps aZ
LABEL                                        PID TTY  STAT TIME COMMAND
system_u:system_r:getty_t:s0                2587 tty1 Ss+  0:00 /sbin/mingetty tty1
system_u:system_r:xdm_xserver_t:s0:c0.c1023 2664 tty7 Ss+  7:38 /usr/bin/X

What does the s0 mean? Well now, that opens a whole new can o' terminology. That field belongs to Multilevel Security (MLS); it sets a sensitivity value that ranges from s0-s15. When you use MLS you also need a categories field, which goes from c0-c255, so it would look something like s1:c2. MLS is super-strict and overkill for most of us. So instead Fedora uses Multi-Category Security (MCS). The MLS sensitivity field is required by the kernel and it always says s0, but you can ignore it.
MCS allows you to further refine access controls with user-defined categories. For example, you could have an MCS category called "super-secret!_yes_really!". Then files labeled with this will be accessible only to processes with permissions to enter this category. In the ps output above, you'll see an example of this with the X process. If you want to try your hand at these, read A Brief Introduction to Multi-Category Security (MCS), and Getting Started with Multi-Category Security (MCS).

While most files can be controlled by SELinux without any modifications, a few have had to be patched to become SELinux-aware, such as the Linux coreutils files, login programs like login, sshd, gdm, cron, and the X Window System. You should also find these on systems that do not ship with SELinux, such as Ubuntu. If your system does not have SELinux, they will return empty fields where the SELinux labels should go, like this ps example:

$ ps aZ
LABEL PID  TTY  STAT TIME COMMAND
-     4248 tty4 Ss+  0:00 /sbin/getty 38400 tty4
-     4249 tty5 Ss+  0:00 /sbin/getty 38400 tty5

The nice SELinux devs have kindly made Z the universal "show me the security context" option. SELinux comes with its own set of user-space commands, which are bundled up in the policycoreutils package. You can run a number of SELinux commands without hurting anything, like seeing your own personal security context:

$ id -Z
system_u:system_r:unconfined_t:s0

You can check SELinux status with sestatus:

$ sestatus
SELinux status:          enabled
SELinux mount:           /selinux
Current mode:            permissive
Mode from config file:   permissive
Policy version:          21
Policy from config file: targeted

avcstat displays AVC (Access Vector Cache) statistics. avcstat 5 runs it every five seconds; of course you can make this any interval you want.

Fedora's SELinux Tools

Fedora 7 and 8 have three good graphical SELinux tools: SELinux Management, SELinux Policy Generation Tool, and SELinux Troubleshooter. Start with SELinux Management; this lets you fine-tune the existing SELinux policy, or change to a different policy type entirely. It costs nothing but time and a spare PC to learn your way around this potent security tool. I've seen a lot of comments on forums and mailing lists that say it's too complex to bother with. I don't agree with this; I think a security tool of this nature is overdue for Linux. Any Internet-facing server is a good candidate for SELinux, and especially the notoriously porous category of LAMP servers. What about AppArmor and GRSecurity? We'll soon be looking at these as well.

- Dan Walsh's LiveJournal contains reams of SELinux howtos, which is good because he is the lead Red Hat SELinux developer. Yes, it's all his fault!
- A step-by-step guide to building a new SELinux policy module
- SELinux FAQs
- SELinux Commands
- Targeted Policy Overview
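Because the labels live in each file's extended attributes, you can read them from any language that can query xattrs, not only with the -Z options shown above. Here is a minimal sketch, assuming Python 3.3 or later on a Linux system with SELinux enabled; the xattr name "security.selinux" is the standard one, and the example paths are just common files.

    import os

    def selinux_context(path):
        """Return the SELinux label stored in the file's security.selinux xattr."""
        raw = os.getxattr(path, "security.selinux")   # bytes, usually NUL-terminated
        return raw.rstrip(b"\x00").decode()

    for p in ("/bin/ping", "/etc/passwd"):
        try:
            # e.g. system_u:object_r:ping_exec_t:s0
            print(f"{p:12s} {selinux_context(p)}")
        except OSError as err:
            print(f"{p:12s} no SELinux label ({err})")

The value printed for /bin/ping should match the ls -alZ output earlier in the article; on a machine without SELinux the call simply fails, mirroring the empty label column in the non-SELinux ps example.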
John L. "Jack" Hayes is the National Oceanic and Atmospheric Administration assistant administrator for weather services and National Weather Service (NWS) director. He is responsible for an integrated weather services program; for supporting the delivery of a variety of weather, water and climate services to government, industry and the general public, including the preparation and delivery of weather warnings and predictions; and for the exchange of data products and forecasts with international organizations. He responded to a set of questions posed by Emergency Management magazine.

Weather-related disasters seem to be on the rise. How do you explain this?

In 2011, more than 1,100 people died in weather events and more than 8,000 were injured. The year also included at least 14 individual events — a record — that caused economic damages of $1 billion or more and carried a collective price tag of more than $55 billion. There is both a scientific and a societal explanation for these increased impacts. Scientifically speaking, we saw a range of short-term, cyclical climate factors in play, such as La Niña, which altered storm patterns. Events such as the southern drought contrasting with the floods across the northern U.S. represent the extreme temperature and precipitation swings that climate scientists project will become more common in the future amid a warming climate. Society is also changing. The U.S. population has almost doubled since 1954, which corresponds with higher property and infrastructure values. Trends such as urban sprawl and conversion of rural land to suburban landscapes increase the likelihood that a tornado will impact densely populated areas. The wild weather of 2011 reminds us all of our increasing vulnerability and prompted an initiative to build a Weather-Ready Nation.

The 21st century is the digital age. What types of major improvements are being made today or are on the drawing board to modernize the national weather system?

Weather forecasts have improved dramatically in recent decades through investment in research and technology. As an example of how research is coming to fruition, the NWS is in the process of upgrading its national network of Doppler radars with dual-polarization technology. When this upgrade is completed in 2013, all radars will be more sophisticated, with the ability to distinguish precipitation type [rain, snow, ice] and, in many cases, detect precisely where a tornado is on the ground by detecting debris being tossed by the vortex. This additional information will arm forecasters with the knowledge and confidence to issue more detailed alerts to save lives and protect property. We are also improving our satellite observation system with the [NASA-launched NPOESS Preparatory Project] polar-orbiting satellite. And this year we will begin construction on a National Water Center in Tuscaloosa, Ala., which will provide integrated and expansive water resources information to expand and improve river and flood forecasting, enhance water resource management, and accelerate the application of research to real world uses.

Weather warnings are critical to protecting people and property. What is the average time for severe weather warnings to be distributed once weather systems are detected?

Nationally, the average lead time for tornadoes is 12 to 14 minutes, but during the various outbreaks of severe weather in 2011, tornado warnings were issued with an average lead time of approximately 25 minutes and some exceeded 30 minutes.
Not long ago, the average lead time was half as long. Warnings for flash flooding, another leading cause of weather-related fatalities, have also improved greatly, with a nationwide average lead time of one hour or more. We've made great strides in improving the reliability and lead times of "short-fuse" warnings for events such as tornadoes, flash floods and severe thunderstorms when every minute counts. And there's great potential for further enhancements. An effective warning requires that the threat be detected, that a warning be communicated, and that the people in the impacted area take action to protect themselves.

What is the NWS doing to get people to take action once they have been warned?

Last year, while the NWS issued accurate outlooks days in advance of severe weather events, issued watches hours in advance, and sounded warnings with longer lead times than the national average, there was still a tragic loss of life. The improvements we've made in predicting weather have enabled us to refocus our attention on the public's response to warnings and alerts. Social science is part of the solution. By helping atmospheric scientists and the emergency management community better understand how weather information is received and what triggers people to take action, we can communicate the threat more effectively and save more lives.

What role, if any, is social media playing in how the NWS disseminates weather information and warnings?

Social media is steadily becoming an important tool with which the NWS communicates critical forecast information and provides a direct linkage between our local forecast offices and national centers and all audiences, especially the general public. Facebook has been adopted by our 122 local offices and our national page has more than 86,000 fans. We are currently prototyping Twitter for adoption. As a science and risk communication agency we must be methodical in evaluating new technologies to make sure they are both robust and help us accomplish our mission of saving lives and livelihoods.

What type of partnership would you like to foster with the broader emergency management community? What recommendations do you have for state and local emergency managers as they pertain to their local and regional weather forecast offices?

The emergency management community has always been a critical partner in saving lives and livelihoods. We know a weather-ready nation isn't possible without them, which is why we now have an NWS liaison position at FEMA's national headquarters. We have also defined a new type of weather and water forecaster called an emergency response specialist. We're testing this concept in select offices around the country to determine if it's viable for broader deployment in other communities. Compliant with the National Incident Management System, the emergency response specialist will deploy on short notice to provide in-person, on-scene decision support during high-impact events. Our partnership with the emergency management community is based on an ongoing open dialog between local emergency managers and the local NWS warning coordination meteorologist, our primary contact for local emergency managers. The NWS StormReady program fosters this relationship. StormReady provides jurisdictions a standard of preparedness for hazardous weather and recognizes their hard work. Emergency managers work with their NWS local office to ensure they meet the guidelines necessary to become recognized as a StormReady site.

What worries you most about our federal national weather system?
These are challenging fiscal times for the nation and we cannot afford another full-scale modernization that transformed the National Weather Service in the 1980s and 1990s, nor is one necessary. But we need to continue to sustain and evolve critical infrastructure and staff readiness and do so methodically and responsibly. We’re doing just that through our Weather-Ready Nation initiative that will be cultivating and testing scientific advances in pilot projects that will allow us to build a little, test a little and field a little. Problems can’t be addressed until they are recognized. Call it global warming or climate change, when do you think the scientific community will universally agree that global temperatures are rising and there will be negative impacts as a result? Climate scientists agree on the facts. There are several trends over the past 50 to 100 years indicative of a warming atmosphere: average temperatures are rising, polar ice coverage is shrinking, and sea level is rising — just to name a few. In the U.S., 2011 was yet another warmer-than-average year, and severe weather and associated societal impacts increased. In my 40 years of tracking the weather, I have never seen extreme weather like we had in 2011. With our changing climate, we can no longer think about severe weather as an inconvenience. We have seen the devastation with our own eyes and hope it doesn’t happen again. Let’s make 2012 the year that we all came together to build a Weather-Ready Nation. For more about the Weather-Ready Nation initiative, please visit www.noaa.gov/wrn.
After nearly nine years in space, traveling 4.7 billion miles, NASA's deep space comet hunter has come to the end of its mission. The Deep Impact team at NASA's Jet Propulsion Laboratory called an end to the mission after losing contact with the spacecraft for more than a month. The last communication with the spacecraft, which was launched in January 2005, was on Aug. 8. "Deep Impact has been a fantastic, long-lasting spacecraft that has produced far more data than we had planned," said Mike A'Hearn, NASA's Deep Impact principal investigator, in a statement. "It has revolutionized our understanding of comets and their activity." The spacecraft gained scientific significance for sending back information about the surface and interior composition of comets, as well as data about other planets. Deep Impact deployed a probe that was intentionally run over by a comet dubbed Tempel 1. The impact shot material from the comet's surface into space, where the spacecraft could better analyze it with its telescopes and onboard scientific instruments. "Six months after launch, this spacecraft had already completed its planned mission to study comet Tempel 1," said Tim Larson, Deep Impact's project manager, in a statement. "But the science team kept finding interesting things to do, and through the ingenuity of our mission team and navigators and support of NASA's Discovery Program, this spacecraft kept it up for more than eight years, producing amazing results all along the way." NASA reported earlier this month that its scientists had lost contact with the spacecraft and that they suspected a software glitch may have been forcing its computers to continually reboot. If that's the problem, the computers would be unable to fire the spacecraft's thrusters to change or maintain attitude, and the spacecraft would spin out of control. Scientists no longer know the orientation of Deep Impact's antennas, making it much harder to communicate with the craft. NASA also noted that if the spacecraft can't point its solar array toward the sun, it may be running low on power. Deep Impact, which completed both its primary and extended missions, is history's most traveled deep-space comet hunter, according to the space agency. NASA noted that the spacecraft's extended mission culminated in the successful flyby of the comet Hartley 2 on Nov. 4, 2010. It also observed six different stars to confirm the motion of planets orbiting them and took images and collected data on Earth, the moon and Mars. This collection of data helped scientists confirm the existence of water on the moon, and attempted to confirm the presence of methane in the Martian atmosphere. "Despite this unexpected final curtain call, Deep Impact already achieved much more than ever was envisioned," said Lindley Johnson, the Discovery Program executive at NASA. "Deep Impact has completely overturned what we thought we knew about comets and also provided a treasure trove of additional planetary science that will be the source data of research for years to come." Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. This story, "NASA's comet-hunting spacecraft lost in space," was originally published by Computerworld.
The Basics of Configuring and Using Cisco Network Address Translation

While the Internet uses IP addresses assigned by an Internet authority such as the American Registry for Internet Numbers (ARIN), there are too few of these numbers to uniquely identify the millions of computers and computing devices in the world. Therefore, most enterprises use private addresses, which allow them to identify those computers internally. Of course, these addresses cannot be allowed on the Internet: all private networks use the same ranges, so there would be vast overlapping of addresses, and the addresses are not routable on the public Internet anyway. It is therefore necessary to translate the identity of a private host into a legal public address. This process is called Network Address Translation (NAT) and may be implemented on Cisco firewall products and Cisco routers. A firewall device at the Internet demarcation point is by far the more popular way to implement NAT, but routers are used in small offices or small-to-medium-sized networks in which a separate firewalling solution is not possible or affordable. The focus of this paper is on the router-based NAT solution. The objective is to provide a fundamental explanation of Cisco NAT with the following topics:
1. Defining NAT and Port Address Translation (PAT)
2. Configuring Static NAT
3. Configuring Dynamic NAT
4. Configuring PAT
5. Troubleshooting NAT/PAT
6. Troubleshooting Example
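To make the static NAT and PAT topics concrete, here is a hedged sketch of a minimal Cisco IOS router configuration. The interface names, the 192.168.1.0/24 inside network and the 203.0.113.0/24 public block are placeholders chosen for illustration; they are not values taken from the paper.

    ! Inside (private) interface
    interface GigabitEthernet0/0
     ip address 192.168.1.1 255.255.255.0
     ip nat inside
    !
    ! Outside (public) interface
    interface GigabitEthernet0/1
     ip address 203.0.113.2 255.255.255.0
     ip nat outside
    !
    ! Static NAT: one private server always appears as one fixed public address
    ip nat inside source static 192.168.1.10 203.0.113.10
    !
    ! PAT ("overload"): many private hosts share the outside interface address,
    ! distinguished by translated source port numbers
    access-list 1 permit 192.168.1.0 0.0.0.255
    ip nat inside source list 1 interface GigabitEthernet0/1 overload

Active translations can then be inspected with "show ip nat translations", which is typically the first step in the troubleshooting topics listed above.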
Jellyfish-like robot, developed with Navy funds, refuels itself with hydrogen and oxygen extracted from the sea. The goal: Perpetual ocean surveillance. Scientists at the University of Texas at Dallas and Virginia Tech have built a jellyfish-inspired robot that can refuel itself, offering the possibility of perpetual ocean surveillance. Like Slugbot, a robot designed to be able to hunt garden slugs and devour them for fuel, Robojelly, as the machine is called, is self-sustaining. It extracts hydrogen and oxygen gases from the sea to keep itself running. "We've created an underwater robot that doesn't need batteries or electricity," Yonas Tadesse, assistant professor of mechanical engineering at UT Dallas, told the UT Dallas news service. "The only waste released as it travels is more water." The robot offers one way around a problem that continues to vex researchers developing autonomous machines: operational limitations imposed by the need for frequent refueling. Scientists at Sandia National Laboratories and Northrop Grumman last year concluded that nuclear power would extend the capabilities of aerial drones but couldn't be implemented due to political considerations. The U.S. government presumably would rather avoid the political outrage that would follow from a downed nuclear drone. A self-sustaining surveillance bot that doesn't involve hazardous materials and doesn't pollute would be much more politically palatable, not to mention operationally useful. Robojelly looks as if it could be related to a novelty umbrella hat, except that it has two hemispherical canopies, stacked one on top of another (the video embedded below depicts an earlier single-canopy version). These bell-like structures are made of silicone and are connected to artificial muscles that contract when heated. The contractions, like those in a real jellyfish, propel the device. The muscles are made of a nickel-titanium alloy encased in carbon nanotubes, coated in platinum, and housed in a casing. The chemical reaction arising from contact between the mixture of hydrogen and oxygen and the platinum generates heat, which causes the artificial muscles to contract and move the silicone canopies while expelling water. Tadesse says the next step in the project is to revise the device's legs so it can move in different directions. Right now, Robojelly's fixed supports allow it to move in only one direction. Robojelly was funded by the Office of Naval Research, which has an obvious interest in monitoring the seas. In addition to scanning the waves, Tadesse suggests the device could be used to check the water for pollutants.
Pointe-Claire Area Code 514

When the Pointe-Claire area code was created, it covered the western half of the province of Quebec. The Pointe-Claire area code, 514, was put into service in 1947 and originally covered a very large area; it is one of the first area codes assigned to Canada. In 1957, some regions covered by the 514 area code were split off to create another area code, 819. This reduced the boundary of the Pointe-Claire area code to the regions near Montreal. The boundary was condensed into a much smaller region again in 1998, when area code 450 was created. The 514 area code was then given an area code overlay, 438, in 2006. The overlay was originally meant to serve both area codes 514 and 450, but that plan was later rejected because the supply of numbers in area code 514 was being depleted faster than in 450.

With an area code overlay in service, the Pointe-Claire area and its adjacent regions use two area codes. Owners of existing numbers keep their old digits, while the new area code is assigned to companies and residents who acquire a new phone number in that area. Both area codes are used on the Island of Montreal and in a few nearby communities, and the appropriate one must be dialed when placing a call to the areas they cover. These digits became required when the 10-digit dialing system was put into service, a change that affected the telecommunication systems of Quebec and of other cities and provinces across Canada. Under this system, a caller must dial the Pointe-Claire area code together with the Pointe-Claire local number to connect a call; callers who use the old 7-digit method when calling a location in Pointe-Claire will not get their calls connected.

This change also had a profound effect on businesses based in that area. They had to reset their call forwarding and call transfer options, because their old practice of transferring calls to a Pointe-Claire local number alone no longer reaches the intended recipient without the area code. Companies would have to reorganize their telecommunication systems, which could require a large sum of money.

If you have a business based in Pointe-Claire, you may want to consider another option besides resetting your whole system. You can avoid these hassles and expenses by simply purchasing the products of RingCentral. They provide your company with a sophisticated phone system that is not affected by the complications brought by the 10-digit dialing system. Their call forwarding and call transfer features go beyond what many other service providers can offer and are not restricted by area code boundaries, giving your telecommunication system the ability to relay calls to locations beyond the boundaries of the Pointe-Claire area code.
Many not-for-profit organizations are familiar with the concepts of the original Framework based on their past experiences with financial statement audits. Since the mid-1990s, the auditing profession has used the original Framework in analyzing most organizations' internal controls. Additionally, OMB Circular A-133, which applies to many not-for-profit organizations that receive federal grant awards, has required auditors to use the original Framework in evaluating internal controls. As a result of these and other factors, the original Framework is one of the most widely adopted internal control frameworks used today. However, the world has changed significantly since the early 1990s, from increased globalization to more reliance on technology to changing regulations. The new Framework was created to refresh the original Framework and ensure its continued relevance in the future. Below are five key things to know about the new Framework.
- Core Concepts Remain Unchanged. The definition of internal control stayed essentially the same. It's defined as "a process effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of objectives relating to operations, reporting, and compliance." The focus continues to be on those three categories of objectives: operations, reporting, and compliance. Additionally, the new Framework retains the five components of internal control, which are the control environment, risk assessment, control activities, information and communication, and monitoring activities. The requirement that each of these five components be present and functioning for an effective internal control system remains the same. As a result, the criteria to assess the effectiveness of an organization's internal controls are relatively unchanged.
- Codification of Underlying Principles. The original Framework provided implicit concepts on the core principles of internal control. To help users better understand what constitutes effective internal control, the new Framework codifies 17 principles associated with the five components of internal control. These broad-based principles help support the criteria used in establishing internal controls. In addition, these principles are reinforced by 79 total points of focus that provide guidance in designing, implementing and conducting internal control and in assessing whether relevant principles are present and functioning.
- Increased Role of the Reporting Objective. As noted above, the three categories of objectives for internal control are operations, reporting, and compliance. The original Framework focused on financial reporting. The new Framework, however, expands the focus to both financial and non-financial reporting and to both internal and external reporting. As a result, this change essentially leads to coverage of all reporting aspects within an organization.
- More Relevant Context to Today's Environment. The new Framework, which along with the appendixes is documented in over 170 pages, updates the context to today's environment. Specifically, the update considers changes in expectations for governance oversight, globalization of markets and operations, complexities in business, complexities in various laws, rules, regulations, and standards, expectations for competencies and accountabilities, use of and reliance on technologies, and, finally, expectations relating to preventing and detecting fraud.
Each of these areas has changed significantly over the past 20 years; thus, the new Framework was updated to better address these changes.
- Original Framework Will Transition Out in 2014. COSO believes the underlying concepts and principles of the original Framework are still fundamentally sound today. However, after December 15, 2014, COSO will consider the original Framework superseded. In other words, the new Framework will be the framework referenced from that point forward. Not-for-profit organizations should consider the new Framework in evaluating and updating their internal controls. Consistent with the original Framework, judgment is an important part of this process.
Scanning the Internet used to be a task that took months, but a new tool created by a team of researchers from the University of Michigan can scan all (or most) of the allocated IPv4 addresses in less than 45 minutes using a typical desktop computer with a gigabit Ethernet connection. The tool is called ZMap, and its uses are many. "ZMap can be used to study protocol adoption over time, monitor service availability, and help us better understand large systems distributed across the Internet," the researchers say, and they have used it to see how fast organizations and websites are implementing HTTPS, how Hurricane Sandy disrupted Internet use in the affected areas, how widespread certain security bugs are, and when the best time is to perform scans like these. Among the things they discovered are that in the last year the use of HTTPS increased by nearly 20 percent (nearly 23 percent for the top 1 million websites), and that the Universal Plug and Play vulnerability discovered earlier this year was still found on 16.7 percent of all detected UPnP devices weeks after its disclosure. The scanner can also be used to enumerate vulnerable hosts (and hopefully notify their administrators so that they can remedy the situation), to uncover hidden services, to detect service disruptions and even to study criminal behavior, the researchers pointed out. On the other hand, it can also be used for "evil": attackers can wield it to detect vulnerable hosts in order to compromise them. "While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space and some users may not appreciate your scanning. We encourage ZMap users to respect requests to stop scanning and to exclude these networks from ongoing scanning," the researchers noted, adding that coordinating with local network administrators before initiating such a scan is also a good idea. "It should go without saying that researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions," they stressed.
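A quick back-of-the-envelope calculation shows why the gigabit link is the limiting factor for a 45-minute scan. The sketch below assumes minimum-size Ethernet frames for the probes (an assumption, since real probe sizes vary) and counts the full 2^32 address space; skipping the reserved and unallocated ranges that a real scan would exclude is part of what brings the actual requirement under the line rate.

    # Rough feasibility check for scanning IPv4 in 45 minutes on gigabit Ethernet.
    ADDRESSES = 2 ** 32                  # full IPv4 space, before any exclusions
    SCAN_SECONDS = 45 * 60
    WIRE_BYTES_PER_FRAME = 64 + 20       # minimum frame plus preamble and inter-frame gap
    LINK_BITS_PER_SECOND = 10 ** 9       # gigabit Ethernet

    probes_per_second_needed = ADDRESSES / SCAN_SECONDS
    line_rate_frames_per_second = LINK_BITS_PER_SECOND / (WIRE_BYTES_PER_FRAME * 8)

    print(f"probes/second needed for the full space: {probes_per_second_needed:,.0f}")
    print(f"gigabit line rate, minimum-size frames:  {line_rate_frames_per_second:,.0f}")
    # Roughly 1.6 million probes/second needed versus roughly 1.5 million frames/second
    # achievable, which is why excluding reserved and unrouted blocks matters.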
Originally published August 20, 2009

Note: Some of the techniques and approaches discussed herein are intellectual property protected by pending patents.

One of the most confusing and misunderstood aspects of the integration of raw text into a form that is useful for textual analytics is that of converting specific text into generic text. In order to understand why such a conversion is necessary, consider the issue of terminology. Text has the property that it is used over time and by many different types of people. People with different backgrounds and geographies may talk of the same thing using different terms. Lawyers have their own vocabulary. Doctors have their own vocabulary, and specialists may have vocabularies that differ from those of general practitioners. Therefore, if the challenge created by terminology is to be surmounted, it is necessary to think of some words as specific words and other words as generic words. A specific word is generally a word in a class. The class of words is the generic word. There are many examples of specific and generic words. Some simple examples: "Ford," "Toyota" and "Porsche" are specific words in the generic class "car," while "Phoenix," "El Paso" and "Wichita" are specific words in the generic class "American town." These classes of specific and generic data provide the key to getting through the barrier created by terminology.

In order to create a specific and generic reference to text, one way to proceed is to write the generic reference in the same location as the specific reference. For example, the textual ETL tool reads "Ford" and writes "car" in the same place as the word "Ford." In fact, in every place where "Ford," "Toyota," "Chevrolet" and "Porsche" are encountered, the word "car" is added. Some words may have more than one generalization. The word "Porsche" may have the categories of "car," "sports car" and "luxury item." All of these generic categories may apply to "Porsche." In addition, there may be multiple levels of categorization. For example, sitting above "car" may be "transportation." So generic categorizations may be hierarchical. There may also be more than one type of categorization, and one type of categorization may be favored over another. For example, the word "Ford" may appear in the specific categorization of both "car" and "former President." If the document being addressed is from Detroit and Motor Trends magazine, then the favored category would be "car." But if the document is about discussions regarding the pardoning of President Nixon, then the favored category would be "former presidents."

The different forms of categorization are typically created using taxonomies. A taxonomy is derived for a body of words, and a body of words may contain many different taxonomies. Once derived, a taxonomy can be useful in many other places. For example, suppose a taxonomy is derived for Sarbanes-Oxley. The taxonomy most likely will be useful in many places other than the one in which it is derived. Thus, taxonomies take on a life of their own.

While all of this may be useful for describing how terminology may be handled in integrating textual data, the real, practical value is probably not apparent at all. In order to explore the practical value of generic and specific treatment of text in the integration process, let's consider an example. Suppose that there is information about activities happening in the United States. Suppose the text has words such as "Phoenix," "El Paso," "Wichita," "Walla Walla" and "Roanoke." There is nothing wrong with these words. But they represent a very low level of specificity.
Suppose generic categories were applied to these words: suppose "American town" was added everywhere the words appeared. Now there would be "Phoenix/American town," "El Paso/American town," "Wichita/American town" and so forth. When the query tool goes to make a query, a query can be made either at the specific level or the generic level. A query can be made for "Tucson," and if any references to "Tucson" are found, the query is satisfied. But a query can also be made for "American town." In this case, if there are any references to "Tucson," they will be found, along with the references to other American towns. So data that is both specific and generic can be located.

From the standpoint of data modeling, it is recognized that the practice of abstracting data is exactly the opposite of that learned conventionally. In conventional data modeling, a high level abstraction is created. Typically, this is an ERD. The high level model is "fleshed out" to a lower level of abstraction, where keys and attributes are added. Finally, the model is abstracted down to its lowest level. When operating on textual data, the process happens in reverse. The data modeler starts with the textual data. After the text is examined, the most basic abstractions are recognized. Then higher levels of abstraction are created. The higher levels of abstraction show up as generic categories of data, or taxonomies. The taxonomies are applied against the raw text to create a structuring of data that serves to address the issues of terminology.
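The specific-to-generic enrichment described above is easy to prototype. The sketch below is a minimal illustration: the taxonomy entries and the favored-category rule are invented for the example and are not Inmon's actual tooling. Each specific word is tagged with its generic categories, and categories are expanded up a small hierarchy so that queries can hit any level of abstraction.

    # Invented example taxonomy: specific word -> generic categories.
    TAXONOMY = {
        "ford":    ["car", "former president"],       # ambiguous term
        "porsche": ["car", "sports car", "luxury item"],
        "tucson":  ["american town"],
        "phoenix": ["american town"],
    }
    # One level of hierarchy: a category may itself sit under a broader category.
    HIERARCHY = {"car": "transportation", "sports car": "car"}

    def enrich(tokens, favored=None):
        """Attach generic categories (and their parents) next to each specific word."""
        out = []
        for tok in tokens:
            cats = TAXONOMY.get(tok.lower(), [])
            if favored and favored in cats:
                cats = [favored]                       # document context picks the favored category
            expanded = []
            for c in cats:
                while c and c not in expanded:         # walk up the hierarchy
                    expanded.append(c)
                    c = HIERARCHY.get(c)
            out.append((tok, expanded))
        return out

    text = "The Porsche passed a Ford outside Tucson".split()
    for word, generics in enrich(text, favored="car"):
        print(f"{word:10s} -> {generics}")

A query for "American town" would then match any record whose generic tags include that category, which is exactly the behavior described for "Tucson" above.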
Summary of Protocols for PKI Interoperability

The Enterprise environment is typified by organizations seeking to provide consistent, transparent security across all end-user applications. The organization has the greatest amount of control in this environment, allowing it to leverage investment in interoperable PKI solutions for both infrastructure and end-users.

Certificate Generation – X.509, PKIX Profile
X.509 defines the format of a public key digital certificate as well as a Certificate Revocation List (CRL). RFC 3280, from the IETF PKIX Working Group, provides profiles for each of these two formats.

Certificate Distribution – Lightweight Directory Access Protocol (LDAP)
LDAP defines the protocol used to publish and access digital certificates and CRLs from a repository.

Certificate Management – PKIX Certificate Management Protocol (PKIX-CMP)
RFCs 2510 and 2511 from the IETF PKIX Working Group define the protocol for managing keys and certificates. The protocol extends beyond simple certificate requests to support the PKI lifecycle functions required in the Enterprise.

The Inter-Enterprise environment is typified by organizations seeking to provide trusted and secure means for business-to-business electronic commerce. The organization has control over its own resources, both infrastructure and end-user, that must interoperate with others’ PKIs.

Certificate Generation – X.509, PKIX Profile
These standards also apply to cross certificates and CRLs used in establishing one-to-one or hierarchical trust between enterprises.

Certificate Distribution – LDAP, S/MIME
LDAP provides the access protocol for enterprises wishing to share full or partial certificate repositories. S/MIME (RFCs 2632-2634) defines a protocol that is used for the direct exchange of digital certificates between end users.

Certificate Management – PKIX CMP, PKCS #7/#10
PKIX-CMP provides protocols for the request and management of cross-certificates, as well as keys and certificates as in the Enterprise model. PKCS #7/#10 (RFCs 2315, 2986) provide protocols for requesting and receiving certificates without any management once created and distributed.

The Consumer environment is typified by organizations seeking to enable electronic commerce with consumers over the Internet. While controlling its infrastructure, the organization must interoperate with consumers using a wide variety of applications, typically web browsers and associated email.

Certificate Generation – X.509 v3, PKIX Profile
These standards provide the profile definition of a public key digital certificate. While no standards have been universally adopted for revocation checking in this environment, schemes such as OCSP (RFC 2560) are getting increasing attention.

Certificate Distribution – S/MIME
Distribution of certificates in this environment is currently limited to direct user-to-user communication with S/MIME.

Certificate Management – PKCS #7/#10
PKCS #7/#10 supports certificate request and receipt but does not provide for any key or certificate management. While no standards have been universally adopted for key and certificate management in this environment, schemes such as PKIX-CMC (RFC 2797) are being considered.

Entrust has demonstrated interoperability with all these approved protocols.

Elements of PKI Interoperability

Regardless of the environment in which it operates, a Managed PKI is made up of several components that must interoperate. As shown in the figure below, these include interfaces within a single PKI as well as to external environments.
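As a rough illustration of the LDAP-based certificate distribution described above, the sketch below uses the third-party Python ldap3 package to pull a user certificate from a directory. The host, base DN, filter and attribute name are placeholders of our choosing; real deployments vary in schema, attribute naming and access controls, and nothing here is specific to Entrust products.

```python
# Hypothetical example: fetching a user's certificate from an LDAP repository.
# Requires the third-party "ldap3" package; host, base DN and filter are
# placeholders, not a real directory.
from ldap3 import Server, Connection

server = Server("ldap.example.com")         # placeholder directory host
conn = Connection(server, auto_bind=True)   # anonymous bind, for the demo only

# Certificates are commonly published in the userCertificate;binary attribute.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(cn=alice)",
    attributes=["userCertificate;binary"],
)

for entry in conn.entries:
    der_bytes = entry["userCertificate;binary"].value  # DER-encoded X.509
    print(len(der_bytes), "bytes of certificate data for", entry.entry_dn)

conn.unbind()
```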
A brief summary of the purpose of each component is as follows:
- Certification Authority. The Certification Authority (CA) represents the trusted third party that issues keys and certificates to end users and manages their life cycle including generation, revocation, expiry and update.
- Certificate Repository. The Certificate Repository provides a scalable mechanism to store and distribute certificates, cross-certificates and Certificate Revocation Lists (CRLs) to end users of the PKI.
- Client Application. The Client Application is the end user software that requests, receives and uses public key credentials for conducting secure electronic commerce.
- Additional Services. Additional services are required by a Managed PKI that will interoperate with the other three components listed. These provide particular services that enable many electronic commerce applications. Typical services include Time Stamping, Privilege Management, Automated Registration Authorities, etc.

Because of their central role in a Public Key Infrastructure, regardless of the environment, these components must interact and interoperate. These operations can be summarized as follows:
- Certificate Generation. This includes the generation of public key digital certificates and Certificate Revocation Lists with a defined format and syntax to enable interoperability with other client applications and other PKIs. Also included is the generation of cross-certificates used to interoperate between Certification Authorities.
- Certificate Distribution. In order to conduct public key operations, one user must access another’s certificates as well as associated CRLs. Accordingly, there must be a common protocol to allow access to other users’ certificates and associated revocation information.
- Certificate Management. Managing keys and certificates represents the most common PKI operations. Protocols for requesting, renewing, backing up, restoring and revoking keys and certificates require interoperability between client applications and the Certification Authority.
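To make the certificate-request step concrete, here is a small sketch using the Python cryptography package to build a PKCS#10 certificate signing request of the kind discussed above. The subject names are placeholders, and the library and key parameters are our choices for illustration, not anything mandated by the protocols summarized here.

```python
# Minimal PKCS#10 CSR sketch with the "cryptography" package.
# Subject values are placeholders; a real RA/CA imposes its own naming
# and key-size policy.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# 1. Generate the end user's key pair (kept private; only the CSR is sent).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build and self-sign the request, proving possession of the private key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())
)

# 3. Serialize to PEM for submission to the Certification Authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```

The CSR itself carries no lifecycle management, which is the limitation the article notes for PKCS #7/#10: revocation, update and key backup require a fuller management protocol such as PKIX-CMP.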
<urn:uuid:f24e3a67-7fe8-403f-85ff-17799024837d>
CC-MAIN-2017-04
https://www.entrust.com/about-us/certifications-standards/standards-summary-of-protocols-for-pki-interoperability/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00520-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901399
1,089
2.515625
3
Kaspersky Lab, a leading information security software developer, announces a new case of mass infection, caused by a combination of malware and unsanctioned access to computer systems. Web servers running Microsoft Internet Information Services (IIS) 5 are affected, and individual computers will become victims when the user views an infected site using Internet Explorer. When Internet Explorer is used to view a site on an infected server, the Trojan will take control of the victim machine, and redirect the browser to a site containing a PHP script. This is done using an unknown vulnerability in Internet Explorer. A version of Backdoor.Padodor (.w, .x, .y, or .z) will then be installed on the victim machine. This spy program enables full remote control over victim machines.

Most versions of Padodor contain the line 'Coded by HangUp Team' or 'Coded by HT', leaving no doubt as to the author's identity. The use of Padodor in the current attack makes it likely that the attack was initiated by the HangUp Team, an internationally known group of hackers and virus writers. The group is responsible for a number of malicious programs, including the recent Padobot worm, aka Korgo. This worm attacks victim machines by exploiting a vulnerability in Windows LSASS, and receives remote commands via IRC channels. The HangUp Team was founded by three inhabitants of Archangel, Russia. In 2000, they were arrested and placed on probation for creating and distributing malicious code. However, the HangUp Team is still active, and has members from throughout the former Soviet Union, and possibly from other countries. The group is also notorious for its strong ties with the spamming industry, which uses networks of zombie machines created by the HangUp Team. Such networks are created using Trojans: once a proxy-server is configured, these networks can be used as spamming platforms.

'We may be talking about a zero-day exploit here - a vulnerability which no-one knows about, and which there is no patch for. The hackers may have discovered the vulnerability themselves, or paid for the information, and compromised IIS servers around the world in order to distribute this Trojan spy program. We have been predicting such an incident for several years: it confirms the destructive direction taken by the computer underground, and the trend in using a combination of methods to attack. Unfortunately, such blended threats and attacks are designed to evade the protection currently available,' commented Eugene Kaspersky, head of Anti-Virus Research at Kaspersky Lab.

Updates for Kaspersky Lab anti-virus databases already contain definitions of Trojan.JS.Scob.a and Backdoor.Padodor.x, .y and .z.
<urn:uuid:67e42e88-9348-4c30-8879-cf72e0f6eb8e>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2004/Russian_hackers_investigate_new_vulnerabilities
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00336-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920054
553
2.5625
3
In an effort to explain the complexities of the DNS, we are examining some abnormalities that impact how queries are received. This post will provide an overview of the mechanics of the ANY query in context of areas where it has been implemented incorrectly or abused. Perhaps because the DNS specification is in so many RFCs, it is especially cruel to naïve understandings of the specifications. Worse, applications often can’t even tell whether a given request for the address associated with a name came from the DNS or from some other protocol. Most alternative name resolution protocols are only available on the local network, but for a general-purpose application that does not matter. If you’re trying to stream video in your campus network, you (and your video application) don’t care that “building1” and “building2” are “different local networks”. It’s your campus network. In order to get around this ambiguity, a popular Web browser released a beta version that made a bunch of queries, and then asked for “anything” — the so-called “ANY query”.

“Give Me What Matches”

ANY does not mean what it means in English. If you read this blog, you know that DNS is distributed in both administration and operation. Administrators of various zones can each operate their own DNS, so there isn’t a single entity that controls the DNS: that’s the distributed administration, and we talk about it a lot. But the operational distribution means that every cache operator is free to return whatever is in the cache. And ANY, it turns out, in the DNS means “give me what matches”. For a cache, that means, “What do you have on hand?” not, “What could I possibly know about this DNS name?” Now, you might think, “Aha! I’ll just ask ANY first! That’ll fix everything!” Alas, no. For distributed operation of the DNS requires a way to invalidate cache entries. This is achieved through the Time To Live, or TTL, on a DNS record. Everything in the DNS comes with this TTL and it applies to everything of the same type — in the DNS, the Resource Record Set or RRset. But it doesn’t apply to everything of the same name, and an ANY query asks for everything at the same name. Different data of the same name, but different type, can have different TTLs.

The Problem of ANY

The effect of all this is that the answer to an ANY query will tell you the “true data” about a name only unreliably, unless you ask the authoritative server about that name. You might get data you didn’t want. You might get data you did want. And you might not get data you did want. When you are asking ANY through a normal resolver (that is, behaving as applications almost always do, and not asking the authoritative directly), you have no idea how to interpret the response. ANY is a good way to compare, “What’s in the cache?” and, “What’s in the authoritative server?” Otherwise, nobody should use it. “This all seems like a theoretical argument. How does this appear in reality?” you might be asking. It turns out confusion about the use of and results from ANY requests is a real problem. As I mentioned above, recently a popular Web browser was attempting to increase the application’s awareness of the TTLs. The API the Web browser developers had been using didn’t expose the TTL value returned to the stub resolver. As a result, the development team thought that they could get the data they needed by a different path, so in cases where they needed an A or a AAAA record’s TTL they also issued an ANY query to the DNS.
This turned one request into two requests and for Dyn specifically led to a 10x increase in the number of ANY queries that we received. Lack of understanding of the DNS request types leads to confusion and in some cases misuse. The second query increased network traffic and didn’t meet the original goal of TTL awareness.

Ask a Simple Question, Get More Than You Bargained For

This may sound like a one-off oversight, but these types of DNS issues aren’t limited to Web browsers, and in some cases are calculated. The ANY request is often associated with UDP reflection attacks due to their amplification factor. You can ask a very small / simple question and receive a voluminous response: in network terms I can use a small amount of bandwidth to create a larger bandwidth response. So, an attacker issues queries with QTYPE ANY to thousands of DNS servers (either full-service resolvers or authoritative servers), claiming that the source address is the IP address of the target victim. The servers, like good DNS servers should, immediately reply to the target, who receives a big pile of large DNS responses. As an authoritative operator, we often see this type of activity, but instead of only asking for the contents of the recursive cache, the third party will send ANY queries with the Recursion Desired bit set, presumably so that, if we permitted recursion, they would get additional data. The ANY request isn’t just a source of developer confusion but can also be weaponized due to its ability to produce bandwidth amplification related attacks.

Because of the issues with ANY, some operators have decided that the ANY request is no longer suitable for use in the public DNS, and are deprecating it. Instead of replying to the ANY request with the contents of the cache they will now respond with an RCODE 4 (Not Implemented) response. It is not clear how well this conforms to the DNS protocol specifications; the protocol community is split on the issue. Since the point of IETF protocol specifications is interoperation, and not conformity with some scripture, the dispute may be solved by learning how recursive and stub resolvers will respond to and process the response. This acts as the perfect bridge to our next post in which we will be covering negative caching semantics in the DNS and observations which seem to signal issues with different DNS resolvers.

About the Author
Chris is a Principal Data Analyst at Dyn, a cloud-based Internet Performance company that helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Follow Dyn on Twitter: @Dyn.
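One way to see the per-RRset TTL behavior and the unreliability of ANY for yourself is with the third-party dnspython library (the 2.x API is assumed here). The sketch below is illustrative only: the domain is a placeholder, the system's default resolver is used, and many public resolvers now answer ANY with a minimal or refused response, exactly as the post describes.

```python
# Sketch: compare an A query with an ANY query using dnspython (>= 2.0).
# The domain is a placeholder; resolvers may answer ANY with a partial set,
# a minimal answer, or an error, as discussed above.
import dns.rdatatype
import dns.resolver

name = "example.com"

# A normal, single-type query: one RRset, one TTL.
a_answer = dns.resolver.resolve(name, "A")
print("A records:", [r.address for r in a_answer])
print("A RRset TTL:", a_answer.rrset.ttl)

# An ANY query: whatever the responding server chooses to return.
try:
    any_answer = dns.resolver.resolve(name, "ANY", raise_on_no_answer=False)
    for rrset in any_answer.response.answer:
        # Each RRset carries its own TTL, so "the TTL of a name" is undefined.
        print(rrset.name, dns.rdatatype.to_text(rrset.rdtype), "TTL:", rrset.ttl)
except dns.resolver.NoNameservers:
    print("Resolver refused the ANY query")
```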
<urn:uuid:a92e12bb-4ce2-4395-9d6f-1956a0456752>
CC-MAIN-2017-04
http://hub.dyn.com/dyn-blog/the-attractive-menace-of-any
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9408
1,352
2.65625
3
Supercomputer Serves as Weapon in AIDS Fight
By M.L. Baker | Posted 2006-06-05
Scientists at Stony Brook University have used a supercomputer to probe HIV's weak spots in an effort to provide more effective drugs. Researchers at Stony Brook University's Center for Structural Biology wanted to understand how an essential HIV protein switched between two known conformations. They used computer simulation to model the transition and identified a new conformation that helps explain HIV's vulnerability to a class of drugs known as protease inhibitors. The research at the university, based in Stony Brook, N.Y., could eventually lead to more effective forms of AIDS drugs. Small molecule drugs, typically taken as pills, tend to work by gumming up cellular machinery, usually proteins. This happens because both proteins and drug molecules have specific shapes. Such drugs are identified by finding molecules that fit into crevices or cavities in the protein. Drug researchers often make crystals of proteins to examine the shape of these crevices and design drug molecules that fit the protein more snugly. But in the case of HIV protease, crystal structures were little help. In the crystallized forms, the cavities and crevices in the protein were very small, too small to let drug molecules in. Stony Brook's Carlos Simmerling used a supercomputer to model how HIV protease switched between two shapes, and saw a new conformation where the cavities open wider. The work reveals how drugs like Kaletra and Viracept fit into the protein and stop it from working. An "open" conformation of HIV protease was expected, but had not previously been described in detail. Computer modeling is routinely used in drug design, but certain simulations are often not attempted. Simmerling said that access to Silicon Graphics' Altix through the NCSA (National Center for Supercomputing Applications) led him to go ahead with the project. Read the full story on eWEEK.com: Supercomputer Serves as Weapon in AIDS Fight
<urn:uuid:8f2f16bc-3ddb-4f45-b5e8-7d3ff33639d5>
CC-MAIN-2017-04
http://www.baselinemag.com/it-management/Supercomputer-Serves-as-Weapon-in-AIDS-Fight
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00364-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961121
424
2.953125
3
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the NC State effort to overcome the memory limitations of multicore chips; the sale of the first-ever commercial quantum computing system; Cray’s first GPU-accelerated machine; speedier machine learning algorithms; and the connection between shrinking budgets and increased reliance on modeling and simulation.

Research Technique Addresses Multicore Memory Limitations

A new technique developed by researchers at North Carolina State University promises to boost multicore chip performance by between 10 and 40 percent. The new approach is two-pronged, using a combination of bandwidth allocation and “prefetching” strategies. One of the limitations to multicore performance is the memory problem. Each core needs to access off-chip data, but there is only so much bandwidth available. With the proliferation of multicore designs, the data pathway is all the more congested. The NC State researchers developed a system of bandwidth allocation based on the fact that some cores require more access to off-chip data than others. Implementing an on-chip memory store (cache-based) allows the chip to prefetch data. When prefetching is used on an intelligent, as-needed basis, performance is further enhanced. With both sets of criteria working in tandem, “researchers were able to boost multicore chip performance by 40 percent, compared to multicore chips that do not prefetch data, and by 10 percent over multicore chips that always prefetch data,” the release explained.

First-Ever Commercial Quantum Computing System Sold

Vancouver-based research outfit D-Wave Systems, Inc. began generating buzz in 2007 when the company announced it had built the first commercially-viable quantum computer. The claim was difficult to verify and received a fair amount of skepticism. Now four years later, D-Wave has announced the first sale of a quantum computing system, known as D-Wave One, to Lockheed Martin Corporation. As part of a multi-year contract, “Lockheed Martin and D-Wave will collaborate to realize the benefits of a computing platform based upon a quantum annealing processor, as applied to some of Lockheed Martin’s most challenging computation problems.” D-Wave will also be providing Lockheed with maintenance and related services. The D-Wave One relies on a technique called quantum annealing, which provides the computational framework for a quantum processor. It was also the subject of an article published in the May 12 edition of Nature. The computer’s 128-qubit processor, known as Rainier, relies on quantum mechanics to tackle the most complex computational problems. While Lockheed Martin’s exact interest in the system was not specified, suitable applications include financial risk analysis, object recognition and classification, bioinformatics, cryptology and more. A Physics World article cited expert corroboration regarding the system’s authenticity. MIT’s William Oliver, although not part of the research team, went on record as saying: “This is the first time that the D-Wave system has been shown to exhibit quantum mechanical behaviour.” Oliver characterized the development as “a technical achievement and an important first step.” Further coverage of this historic event, including an interview with D-Wave co-founder and CTO Geordie Rose, is available here.
Cray Debuts GPU-CPU Supercomputer

The newest Cray supercomputing system, called the Cray XK6, relies on processor technology from AMD and NVIDIA to achieve a true hybrid design that offers up to 50 petaflops of compute power. Launched at the 2011 Cray User Group (CUG) meeting in Fairbanks, Alaska, the supercomputer employs a combination of AMD Opteron 6200 Series processors (code-named “Interlagos”) and NVIDIA Tesla 20-Series GPUs, and provides users with the option to run applications with either scalar or accelerator components. The XK6 is the first Cray system to implement the accelerative power of GPU computing, and Barry Bolding, vice president of Cray’s product division, highlights this fact: “Cray has a long history of working with accelerators in our vector technologies. We are leveraging this expertise to create a scalable hybrid supercomputer — and the associated first-generation of a unified x86/GPU programming environment — that will allow the system to more productively meet the scientific challenges of today and tomorrow.” Cray already has its first customer; the Swiss National Supercomputing Centre (CSCS) in Manno, Switzerland, is upgrading its Cray XE6m system, nicknamed “Piz Palu,” to a multi-cabinet Cray XK6 supercomputer. The Cray XK6, which is scheduled for release in the second half of 2011, will be available in both single and multi-cabinet configurations and scales from tens of compute nodes to tens of thousands of compute nodes. Upgrade paths will be possible for the Cray XT4, Cray XT5, Cray XT6 and Cray XE6 systems. For additional insight into this Cray first, check out our feature coverage.

PSC, HP Labs Speed Machine Learning Algorithm with GPUs

Researchers from the Pittsburgh Supercomputing Center (PSC) and HP Labs have figured out how to speed the process of key machine-learning algorithms using the power of GPU computing. Specifically, the team has achieved nearly 10x speed-ups with GPUs versus CPU-only code, and more than 1,000 times versus an implementation in an unspecified high-level language. Machine learning is a branch of artificial intelligence that “enables computers to process and learn from vast amounts of empirical data through algorithms that can recognize complex patterns and make intelligent decisions based on them.” The application the research team is working with is called k-means clustering, popular in data analysis and “one of the most frequently used clustering methods in machine learning,” according to William Cohen, professor of machine learning at Carnegie Mellon University. Ren Wu, principal investigator of the CUDA Research Center at HP Labs, developed the GPU-accelerated clustering algorithms. Wu then teamed up with PSC scientific specialist Joel Welling to test the algorithms on a real-world problem, which used data from Google’s “Books N-gram” dataset. This type of N-gram problem is common in natural-language processing. The researchers clustered the entire dataset, with more than 15 million data points and 1,000 dimensions, in less than nine seconds. This kind of breakthrough will allow future research to explore the use of more complex algorithms in tandem with k-means clustering.

Lean Budget Increases Government Reliance on Modeling and Simulation

The Institute for Defense & Government Advancement (IDGA) put out a brief statement last week, suggesting a link between declining budgets and a growing demand for modeling & simulation (M&S) tools.
Last week, the Army and Department of Defense (DoD) awarded a $2.5 billion contract to Science Applications International Corporation (SAIC) for a combination of planning, modeling, simulation and training solutions. According to the IDGA, “this contract signifies the growing need for simulation training to prepare troops for combat. Despite budget constraints, Modeling and Simulation (M&S) is expanding as technological improvements develop. M&S is the more viable and cost-effective option for tomorrow’s armed forces.” The IDGA also announced that its 2nd Annual Modeling and Simulation Summit will explore the latest technological advancements and look at the lessons to be learned from recent efforts. This event will have a focus on military strategies for M&S, such as Irregular Warfare and Counter-IED training.
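As a rough reminder of what the k-means step in the PSC/HP Labs item above actually does, here is a tiny CPU-only Python/NumPy sketch. It is our own illustration, not the GPU code described in the article, and it clusters random data rather than the Google Books N-gram set.

```python
# Tiny CPU-only k-means sketch (NumPy), illustrating the algorithm the
# PSC/HP Labs team accelerated on GPUs. Random data, fixed iteration count.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 8))      # 1,000 points, 8 dimensions (toy scale)
k = 4
centers = points[rng.choice(len(points), k, replace=False)]

for _ in range(20):
    # Assignment step: nearest center for every point.
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each center moves to the mean of its assigned points.
    centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])

print("cluster sizes:", np.bincount(labels, minlength=k))
```

The assignment step is embarrassingly parallel across points, which is why the algorithm maps so naturally onto GPUs at the scale described in the article.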
<urn:uuid:f9a62277-5db4-4fff-935b-dee656086e5c>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/05/26/the_weekly_top_five/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00364-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918466
1,634
2.515625
3
The US government’s flirtation with default ended on August 2nd when the President signed the debt agreement into law. Officially known as the Budget Control Act of 2011, it purports to shave at least 2.1 trillion dollars off the federal deficit over the next ten years. What this means for federal funding of science, education, and R&D is still unclear, but given the government’s current obsession with downsizing itself, it’s hard to envision that research-centric agencies like the NSF, DOE Office of Science, and DARPA will remain unscathed.

Currently all the deficit reduction is being done on the spending side, with no new revenues in the mix. The initial $917 billion promised in cuts splits the pain between discretionary and non-discretionary spending, with the other $1.2-1.5 trillion to be decided later (and which we’ll get to in a moment). None of the initial cuts are going into effect in the current fiscal year, with just $22 billion or so targeted for 2012 and the remainder spread out across 2013 through 2022. Last Sunday, President Obama tried to reassure the public that investments in education and research would be preserved, at least in the initial discretionary cuts.

The second phase of deficit reduction will be designed by a so-called congressional “Super Committee” of six Democrats and six Republicans. They’re tasked with coming up with an additional $1.5 trillion over the next ten years. If the Super Committee can’t come to an agreement or the Congress votes down the deal, which, given the hostile political climate, is a likely outcome, an automatic $1.2 trillion in cuts is triggered. That would bring the grand total to $2.1 trillion over the next decade.

So where does this leave R&D funding? From the glass-half-full perspective, none of these programs at the NSF, DOE Office of Science, or DARPA are specifically called out in the legislation, and probably won’t be in any subsequent deal the Super Committee comes up with. Better yet, in the short term, the cuts on the discretionary spending side (where all the R&D funding comes from) are not really cuts per se; they are better characterized as caps on future spending increases. According to a Science Insider report, the effect will be to basically freeze discretionary spending for the next two years, while allowing for absolute increases of $20 to $25 billion per year over the remainder of the decade. The article probably pegs it about right as far as the near-term effect on the research community: While that’s hardly good news for researchers lobbying for the double-digit increases proposed by President Obama for some research agencies, it’s a lot better than the Republican drive to roll back spending to 2008 levels.

But another article in The Scientist is more worrisome, noting that health agencies like NIH, CDC and the FDA could be hard hit: [T]he proposed deficit reduction is too steep to avoid real damage, said Mary Woolley, president and CEO of Research!America, an advocacy group that promotes health research. “These are horrifying cuts that could set us back for decades,” she said.

DARPA, the research agency of the US Department of Defense (DoD), may be particularly unlucky. The DoD has been singled out to endure $350 billion in cuts from the initial phase of the debt deal and $500 to $600 billion in the second phase if the Super Committee fails and the trigger is pulled.
DARPA’s total budget, which funds high-profile supercomputing projects like the Ubiquitous High Performance Computing (UHPC) program, is only about $3 billion a year, so it may not be a prime target when large cuts have to be made. But if the Pentagon really has to swallow nearly a trillion dollars in funding reductions over the next decade — and there is some skepticism that this will come to pass — one can assume that the research arm will not be able to escape harm completely.

The larger problem is that budget reductions of this magnitude threaten both parties’ most cherished programs, leaving other discretionary spending, like science, education and R&D, as secondary priorities. Democrats want to protect things like Social Security and Medicare (off the table for the time being), while the Republicans are circling the wagons around national defense and are extremely adamant about not raising taxes. In such a political environment, funding for research agencies, which normally get some measure of bipartisan support, could be sacrificed. Certainly the Republicans’ increasing aversion to scientific research, and the Democrats’ willingness to capitulate to Republican demands, do not bode well for these agencies and their R&D programs.

The best hope for the science and research community is that this debt deal is superseded by more level-headed legislation down the road. That’s certainly going to require a much more reasonable approach to taxes and spending than we have now. The most recent blueprint for balancing the budget can be found in the latter part of the Clinton administration, when actual surpluses were being projected. But we have veered rather far from that revenue-spending model. Without raising taxes, balancing our budget over the long term (which this latest deal will not do) will be impossible unless we’re willing to shrink the government down to its pre-World-War-II level. No respectable economist believes that the spending-cut fairy will magically increase revenues by growing the US economy. The debt deal signed into law this week is actually projected to reduce GDP by about 0.1 percent in 2012, according to Troy Davig, US economist at Barclays Capital.

It would be easy to blame the Congress, particularly the Tea Party wing of the Republicans, for their inability to come up with a rational budget approach. And they surely deserve some of it. Holding the economy hostage by threatening to default on the debt was just plain dangerous and irresponsible. But in a more fundamental way, the politicians are just reflecting the public’s ignorance of how federal budgets work. There are a number of polls that show people believe they can have their entitlements and other programs with little or no revenue increases. There is also widespread ignorance of how the government allocates its money, and of the value of funding scientific research and education. With such a lack of understanding by the public, it’s no big mystery that we elect politicians who promise contradictory policies. Until that changes, it’s hard to imagine how we’ll get the government to behave responsibly with our money.
<urn:uuid:728a3000-b98e-47fd-84f6-e3cbc7f5bfaf>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/08/04/debt_deal_casts_shadow_on_us_research_funding/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00024-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948152
1,364
2.65625
3
As a kind of insulated cable, the aerial fiber optic cable is suspended in the air from poles and/or support structures. It is often supported between poles by being lashed to a wire rope messenger strand with a small gauge wire. And since optical cable is a high capacity transport medium that is sensitive to excessive tensile force, tight bends, and crushing forces, some care must be taken during the installation procedure to respect these limitations. This article gives a brief introduction to aerial fiber optic cable installation via several diagrams.

Before the project begins, the aerial fiber line construction should be worked out in detail. This is intended to determine if any work may be required along the proposed route before cable placement begins. As you can see from the following picture, there are many assemblies attached to the pole, such as the cable storage assembly, the fiber optic splice closure and so on. These devices really play important roles in the aerial cable line; therefore, it is necessary to have an understanding of them.

Generally, there are five steps for installing aerial fiber optic cable: setting the poles, fixing the pole guys, constructing the messenger wire, installing the fiber optic cable and protecting the line (as shown in the following figure):

Since the cable route is usually very long, the connection of messenger wires is an essential part of the aerial cable project. Iron wires with a 3.0 cm diameter are often used to wind around the connection point of the messenger wires. And there are generally two methods to complete the winding (as shown in the following figure). The following picture shows the details of messenger wire connection:

Just like the messenger wires, the fiber optic cables need to be spliced as well, in fiber optic splice closures near the poles. Many FTTH (fiber to the home) projects are achieved through aerial fiber optic cable. And there are two ways that fiber cables are introduced into a house: from the underground pipeline into the house and from the orifice plate outside the house (as shown in the following figure).

Compared with buried cable or fiber in-duct solutions, an aerial optical cable solution is typically faster and less expensive to deploy than digging, particularly for backbone fiber. But installing aerial fiber optic cable is also a risky job. Thus, it is necessary to have a good knowledge of aerial fiber optic cable installation.
<urn:uuid:6de65936-ad78-47c8-bd7d-8254fa8d8160>
CC-MAIN-2017-04
http://www.fs.com/blog/aerial-fiber-optic-cable-installation-guide.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00328-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94408
464
2.578125
3
Known as Ethane, the approach keeps simple-to-define access policies in one place, implements them consistently along a network datapath, and ensures that no user, switch or end-host has more information than it needs. Today corporate networks typically allow open communication by default, which makes implementing effective security and privacy rules an onerous task for network administrators.

A first implementation of Ethane was built and deployed in Fall of 2006. The deployment consisted of one controller and 19 switches, and it managed the traffic from over 300 wired hosts and many more wireless. The switches were built on both wireless and wired platforms and in hardware. Currently, Stanford researchers are working on the second version of Ethane, which they say will have better policy language support and a more feature-rich datapath supporting more diverse techniques such as NAC, MAC hiding and end-to-end L2 isolation. The second pilot network is being deployed and tested this summer. Ultimately, it is Stanford's goal to make high fan-out Ethane switches and the controller available to other institutions, researchers said.

The trick behind the Ethane design is that all complex features, including routing, naming, policy declaration and security checks, are performed by a central controller (rather than in the switches as is done today). Each flow on the network must first get permission from the controller, which verifies that the communication is permissible by the network policy. If the controller allows a flow, it computes a route for the flow to take, and adds an entry for that flow in each of the switches along the path, according to Stanford's Website. With all complex function subsumed by the controller, switches in Ethane are reduced to managed flow tables whose entries can only be populated by the controller (which it does after each successful permission check). This allows a very simple design for Ethane switches using only SRAM (no power-hungry TCAMs) and a little bit of logic, the Website states.

Tal Garfinkel, a Ph.D. student in Stanford University's computer science department, recently talked with Network World and said: "I think our work on redesigning the enterprise network with security in mind (SANE/Ethane) points to some important ideas that hopefully will gain greater traction in the coming years. Such as implementing fine-grain, centrally managed access controls at the level of users and end-hosts, and using strongly authenticated network endpoints for doing access control, instead of the mess of IP and MAC-level ACLs that we have today."

Ethane is funded by the Stanford Clean Slate Project, an ambitious undertaking that proposes to build a new Internet from the ground up. The point of Stanford's efforts is not that the Internet is broken, researchers say, just that it has become ossified in the face of emerging security threats and novel applications. Cisco Systems, Deutsche Telekom and NEC are also taking part in the research. The researchers say their work closely complements two projects under way at the National Science Foundation. The first, called GENI, for Global Environment for Network Innovations, aims to build a nationwide programmable platform for research in network architectures. Stanford researchers will present their update at the school's Hot Chips symposium Aug. 19-21 at Stanford's Memorial Auditorium.
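The division of labor Ethane proposes, with dumb flow tables in the switches and all policy decisions in a central controller, can be sketched in a few lines. The toy Python below is our own illustration rather than Stanford's code: the policy, users and switch model are invented, and real Ethane policies are written in a dedicated policy language rather than a dictionary.

```python
# Toy illustration of an Ethane-style centralized controller (not Stanford's code).
# The controller checks a policy for each new flow; only if it is allowed does it
# install entries in the otherwise-empty flow tables of the switches on the path.

# Invented policy: which user may open flows to which service (default deny).
POLICY = {("alice", "payroll-server"): True, ("bob", "payroll-server"): False}

# Flow tables start empty; switches can only forward flows the controller installed.
flow_tables = {"switch1": set(), "switch2": set()}


def request_flow(user, dst, path):
    """Called when a switch sees the first packet of a new flow."""
    if not POLICY.get((user, dst), False):      # default-deny, unlike today's open networks
        return False
    for switch in path:                          # the controller picks the route and
        flow_tables[switch].add((user, dst))     # populates each switch's flow table
    return True


print(request_flow("alice", "payroll-server", ["switch1", "switch2"]))  # True
print(request_flow("bob", "payroll-server", ["switch1"]))               # False
print(flow_tables)
```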
<urn:uuid:15f310e3-dd20-43c8-af5f-f5921edf602c>
CC-MAIN-2017-04
http://www.networkworld.com/article/2348274/security/researchers-set-to-spark-up-new-more-secure-network--routers--switches.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00474-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946287
678
2.65625
3
Hamburg, the second largest city in Germany, has just unveiled its Green Network Plan, a compilation of strategies designed to eliminate the need for cars in the city in the next two decades. Hamburg is made up of 40 percent green areas, gardens, parks and squares, and the new plan is designed to unite these areas in a way that will be completely accessible by foot or bike. “Other cities, including London, have green rings, but the green network will be unique in covering an area from the outskirts to the city center,” city spokeswoman Angelika Fritsch told The Guardian. “In 15 to 20 years, you’ll be able to explore the city exclusively on bike and foot.” The city also plans to utilize the green areas both to help absorb carbon dioxide and prevent flooding. Hamburg’s average temperature has increased by about 1.2 degrees Celsius (roughly 2 degrees Fahrenheit) in the past 60 years and the sea level has risen about 7 inches. The city will work to unite each of the seven municipalities of the metropolitan region to ensure that all residents receive access to green pathways. Another area that has been taking initiative toward greener transportation is San Francisco with its Connecting the City project, which launched in 2011. The San Francisco Bicycle Coalition spurred the project, which aims to create 100 miles of cross-town bikeways by 2020, with three roadways receiving primary focus: the Bay to Beach, North-South and Bay Trail routes. The coalition aims for these three roadways to be bike friendly by 2015, with additional busy areas soon following suit. The goal is to continue substantially increasing the number of people who choose to bike every day.
<urn:uuid:815b888c-4a54-45f4-ad65-5bd51f25db55>
CC-MAIN-2017-04
http://www.govtech.com/health/Hamburg-Plans-Eliminate-Cars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00502-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95675
340
2.890625
3
Whether it's an earthquake in Indonesia, a cyclone in Mozambique or a tornado in Kansas, the Red Cross is there -- helping the vulnerable, sheltering the displaced and comforting the needy. When thousands of lives are touched by tragedy, millions are moved to help with their time and money. The American Red Cross, as part of the International Federation of Red Cross and Red Crescent Societies, mobilizes the generosity of people in America to help the lives of those that need assistance most. In recognition of its continued efforts, the American Red Cross joins Red Cross and Red Crescent National Societies around the world and more than 750 chapters in the United States in celebrating World Red Cross Red Crescent Day on May 8. World Red Cross Red Crescent Day honors the efforts of Red Cross workers and volunteers worldwide who work tirelessly to alleviate human suffering. May 8 marks the birth of Henry Dunant, the founder of the International Committee of the Red Cross. Moved by the atrocities he witnessed during the Battle of Solferino in 1859, Dunant began advocating for the humane treatment of the sick and wounded during wartime. He was later honored by being one of the first recipients of the Nobel Peace Prize. The International Red Cross and Red Crescent Movement comprises more than 97 million members and volunteers -- the world's largest humanitarian network -- and assists more than 233 million people worldwide each year. Although each national society has its unique qualities, each is united by common fundamental principles and the goal of improving the lives of those in need. "We know from long experience in dealing with crises that no single government or organization alone can tackle the rising challenges posed by catastrophes, conflicts, health emergencies, poverty and migration," said the president of the International Federation, Juan Manuel Suárez del Toro, and the president of the ICRC, Jakob Kellenberger, in a joint statement. "It will take solid coordination and better partnerships at all levels, including governments, donors, humanitarian agencies, the private sector and individuals, in order to reduce the impact of wars, disasters and disease, while making vulnerable communities stronger and safer," they added. Over the past year, volunteers from the American Red Cross responded to hundreds of disasters in local communities and around the world. Whether providing shelter after a house fire, landslide, tornado or hurricane; or immunizing millions against measles and providing nets to prevent malaria, the American Red Cross strives to work with our partner organizations to help prevent, prepare for, respond to and recover from disasters both natural and man-made. World Red Cross and Red Crescent Day serves as an annual reminder of the lasting work and commitment of the Red Cross family. Through the motivation and action of its volunteers and donors, the American Red Cross and its partners continue to help those in need -- whether around the corner or across the world.
<urn:uuid:c613c18e-997f-4447-8d27-73f18ff2f2a2>
CC-MAIN-2017-04
http://www.govtech.com/health/World-Red-Cross-Red-Crescent-Day.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00410-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943159
572
2.71875
3
Over the years, NASA has famously invented a number of technologies that have since entered into many of our everyday lives. For instance, NASA had a hand in the invention of insulin pumps, scratch-resistant lenses and memory foam (though not, despite what you may have heard, in the invention of Tang, Velcro or Teflon; it just helped make them popular). We may all soon benefit again from NASA brainpower thanks to the recent release of lots and lots of software code developed by and for the space agency. Last week, NASA's Technology Transfer Program published its Software Catalog, which documents code for over 1,000 projects which is being made available to the public. The catalog documents what the code does, what (if any) restrictions are placed on it (some code is released to the general public, some for use by U.S. citizens only, some only for use on behalf of the government) and how to get it. In most cases, you can't just download this code; you have to request access to it explaining what you plan to use it for. Of course, you're probably thinking, "Cool, but this doesn't really affect me, since I'm not designing a spacecraft to go into orbit or to the moon." While it's true that lots of this code has to do with pretty NASA-y type of stuff like aeronautics, life support systems and propulsion (e.g. Advanced Ducted Propfan Analysis Code, which "solves tightly coupled internal/external flows through future-concept short-duct turbofan engines") , there's also quite a bit of other code that may be of interest to your business or for personal use. I took a spin through the catalog, which is currently only available in PDF form but will reportedly be made available via a searchable database and online repository, and identified some of the more mundane code that may actually be of use or interest. Use these NASA-developed tools to help with the day-to-day tasks of running your company: Electronic Timecard System - "The Electronic Timecard System can be utilized by any business or organization wishing to streamline its payroll department procedures. The automated system minimizes the consumption of paper and eliminates the need for weekly pick-up and delivery of time sheets. The tool also simplifies the daily recording of time worked by employees, and it allows employees to "sign" their "timecards" electronically at the end of each week. Supervisors can review an employee's electronic timecards daily and sign them electronically." Goal Performance Evaluation System - "The Goal Performance Evaluation System (GPES) is an innovative interactive software application that implements, validates, and evaluates an organization's performance by the achievements of its employees. The tool has been used for strategic planning, employee performance management, and center-wide communication. The system is Web-based and uses a relational database to host information. Can I Buy - "The Can I Buy tool automates processes used to request and approve procurements. The software allows registered users to create, submit, un-submit, and delete purchase requests. Different capabilities are provided depending on a person's ‘role.' Privileged roles include branch head, assistant branch head, secretary, resource analyst, credit card specialist, and tool administrator. Email is the medium of communication in the system." 
Software developers and system administrators may find some useful tools in the catalog such as: Ballast: Balancing Load Across Systems - "Ballast is a tool for balancing user load across Secure Shell Handler (SSH) servers. The system includes a load-balancing client, a lightweight data server, scripts for collecting system load, and scripts for analyzing user behavior. Because Ballast is invoked as part of the SSH login process, it has access to user names. This capability, which is not available in traditional approaches, enables Ballast to perform user-specific load balancing. In addition, Ballast is easy to install, induces near-zero overhead, and has fault-tolerant features in its architectures that will eliminate single points of failure." Multi-threaded Copy Program - "MCP is a high-performance file copy utility that achieves performance gains through parallelization. Multiple files and parts of single files are processed in parallel using multiple threads on multiple processors. The program employs the OpenMP and MPI programming models." NASA World Wind Java (WWJ) Software Development Kit (SDK) and Web Mapping Services - "NASA World Wind is an intuitive software application supporting the interactive exploration of a variety of data presented within a geospatial context. The technology offers a 3D graphics user experience with seamless, integrated access to a variety of online data sources via open-standards protocols." NASA has developed some tools which may not be particularly useful to most of us, but which still sound like they'd be fun to tinker around with, such as: Spacecraft Docking Simulation - "This simulation is a simplified version of the rendezvous and docking scenario performed by Space Shuttle astronauts docking at the International Space Station (ISS)." NASA Forecast Model Web - "NFMW reads weather forecast models outputs; subsets the data to the region of interest; interpolates the data to the specified size; generates a visualization of the data using colors, contour lines, or arrows; and sends the visualization to the client." Station Spacewalk Game App - "This video game features simulations of Extravehicular Activities (EVAs) conducted by NASA astronauts on missions to the International Space Station." While none of the offerings in the catalog may have the impact of, say, cochlear implants, it seems like there are still useful nuggets here. Or maybe you just want to contribute back to NASA by helping them out with their code? Either way, take a look and have fun!
<urn:uuid:a3066d04-f10c-4d92-b57e-cbeca2dbe4f1>
CC-MAIN-2017-04
http://www.itworld.com/article/2697996/cloud-computing/need-an-electronic-timecard-system--nasa-has-the-code-for-you.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00410-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927197
1,251
2.609375
3
The World’s Highest Data Center
This view shows lights glowing on some of the racks of the correlator in the ALMA Array Operations Site Technical Building. This photograph shows one of four quadrants of the correlator. The full system has four identical quadrants. (Photo Credit: ALMA, S. Argandoña) ESO (the European Southern Observatory) provided a key part of the correlator: a set of 550 state-of-the-art digital filter circuit boards was designed and built for ESO by the University of Bordeaux in France. With these Tunable Filter Banks, the light which ALMA sees can be split up into 32 times more wavelength ranges than in the initial design, and each of these ranges can be finely tuned. (Photo Credit: ESO) This photograph shows just some of the many thousands of cables needed to connect the electronics racks of the correlator together. The full system requires 32,768 rack-to-rack digital interfaces and 16,384 cables to transport the signals between the racks. (Photo credit: ESO) The ALMA Array Operations Site (AOS) Technical Building, the highest altitude high-tech building in the world, at an altitude of 5,000 metres on the Chajnantor Plateau in the Chilean Andes. This building houses the ALMA correlator supercomputer, enabling its many antennas, which are separated by up to 16 kilometres on the plateau, to work together as a single, giant telescope. (Photo Credit: ALMA, A. Caproni of ESO) Enrique Garcia, a correlator technician, examines the system while breathing oxygen from a backpack. ALMA’s Array Operations Site is so high that only half as much oxygen is available as at sea level. (Photo Credit: ESO/Max Alexander)
<urn:uuid:3414117f-02fc-419f-bc81-b175129708e7>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2013/04/05/the-worlds-highest-data-center/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00318-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9232
399
2.8125
3
As consumers, most of us don’t give a second thought to the Internet speed we need until the Internet we have no longer works the way we need it to, or we need to change providers. This guide serves to help you determine your needs based on your usage and how to sift through the technological data to find the right plan from the providers in your area.

Determining Broadband Speed: What is it?

Your broadband speed is measured in megabits-per-second, or Mbps. As each piece of a website is built using files composed of data bits, the speed is measured by how fast the data files move across the network to and from your computer. Different technologies allow for different speeds. Dial-up Internet using a telephone line moves much slower than a DSL line, which moves slower than a cable line, which moves slower than fiber-to-the-home. The broadband service you get in your home will vary based on the technology type and service plan you purchase. Each broadband plan actually has two speeds to pay attention to: download and upload. Download speed refers to how fast your computer or other Internet connected device will receive information from the network, and upload speed refers to how fast the device will send information to the network.

How You Use the Internet: What Speed Do You Need?

The Internet speed you need depends on two main factors:
- What you’ll be doing on the Internet: “Light” activities such as simple web browsing and checking email do not require fast Internet speeds. Streaming video on demand, video conferencing, and online gaming will add strain to a standard or low speed plan.
- How many devices will be connected to the Internet: If only one device at a time will be connected to the Internet, then the overall Internet speed won’t matter much. The more devices you have connected to the Internet at a time, the faster the connection will need to be, regardless of what you’re doing online.

With an increasing number of households running more than one computer, a tablet, a smartphone, a gaming console, and Internet-ready TVs and Blu-ray players to access HD video and gaming content, there is a greater need for faster Internet services. The chart below will help you choose the right speed for your usage.
- Checking email, social media, simple web browsing: Basic Plan Speeds, 5 to 10 Mbps
- Light use plus streaming HD video, video conferencing or gaming: Medium Plan Speeds, 10 to 50 Mbps
- Light use plus more than one device simultaneously engaging in moderate use activities: Advanced Plan Speeds, 50 Mbps or faster
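Because plan speeds are quoted in megabits per second while file sizes are usually given in bytes, a quick calculation helps put the tiers above in perspective. The figures in this sketch (the file size and the example speeds) are illustrative only, not quotes from any provider.

```python
# Rough download-time arithmetic: Mbps are megabits per second, files are in bytes.
def download_minutes(size_gb, speed_mbps):
    size_megabits = size_gb * 1000 * 8        # GB -> MB -> megabits (decimal units)
    return size_megabits / speed_mbps / 60    # seconds -> minutes

movie_gb = 4  # e.g., an HD feature film (illustrative size)
for speed in (5, 10, 50):  # Basic, Medium and Advanced tier examples
    print(f"{movie_gb} GB at {speed} Mbps: about {download_minutes(movie_gb, speed):.0f} minutes")
```

At 5 Mbps the example movie takes well over an hour and a half to download, while at 50 Mbps it finishes in roughly ten minutes, which is why the faster tiers matter mostly when several devices or large transfers share the line.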
|Task||Minimum Download Speed (Mbps)| |Navigating basic websites: job searching, government websites, etc.||1| |Navigating interactive and feature rich websites, watching short videos||1| |Listening to live streaming radio||1-3| |Making phone calls with Skype or other VoIP telephone service||2-5| |Streaming standard video (YouTube or similar service)||2-10| |Streaming feature movies (Netflix or similar service)||2-5 constant stream| |Streaming HD feature movies or video lectures||4| |Basic video conferencing||2| |HD video conference and online learning||4| |Connecting console to the Internet to access online content||3-20| |HD Two-way gaming||5-20| Beyond Download Speed: Looking at the Plan as a Whole While download speed is an important part of any broadband plan, there are at least three other factors to consider when shopping for a new plan, whether you are changing to a new service provider or not. To ensure you’re getting the best possible plan, also take a look at: - Upload Speed: Upload speed refers to the speed at which you can send information from your computer (or other device connected to the Internet) to the Internet. It is not particularly important for simple browsing tasks, but when you are involved in online learning, video conferencing, or have a need to send large files to others on a regular basis, the upload speed dramatically impacts your browsing experience. - Latency: Latency refers to the time it takes packets of data to move across the network. It is what creates a lag that cause video playback to be choppy, and online phone conversations to cut in and out. This “lag” is barely noticeable when it comes to light Internet usage, but the heavier your usage across one or more devices, the more importance the latency issue becomes. - Data Limits: Some companies will greatly reduce your Internet speeds, in a technique known as “throttling,” or completely shut off your Internet usage if you use too much data over the course of the month.The data limit will vary, but commonly ranges between 150 to 250 GB each month. They do this to conserve data, and to alleviate network stress to provide service to more customers. To help you see how much data you could “consume” consider this: a standard definition full length movie typically ranges between one and two GB and a HD full length movie is generally anywhere from three to five GB. Most consumers will not every reach the data limit, which is why there is an illusion of unlimited data. Regardless of your usage amount, being aware of data limits from the start can help you choose the most valuable plan. If your provider does not advertise a data limit, ask about one. Find Providers in Your Area Depending on where you live, different technologies and providers may be available. While historically BroadbandMap.gov has been a great tool to find providers in your area, new websites have jumped into make the shopping experience better. To locate high speed Internet providers in your area we recommend checking: - BroadbandNow.com — This is the most authoritative database of internet service providers on the internet. - ProvidersByZip.com — While not as authoritative as BroadbandNow they collect huge amounts of plan data for bundles and deals allowing consumers to compare plans in plain English. If these two resources turn up empty or you live in a very rural area, ask friends, family, and neighbors about the service they use and how happy they are with it. 
Compare Plans for the Best Value

Compare the various plans from each provider to see which offers the speed you need at a price you can afford. Providers will offer basic, lower-speed plans for the least money, while the faster plans will cost more. Consider the speed you need, and see which company provides that option at a price you can afford.

Contact Customer Service Regarding Contracts and Additional Fees

Some companies offer a lower price when customers agree to sign a contract, usually lasting one to two years. If you do not want to sign a contract, contact customer service to find out what this will do to your monthly rate. If you do sign a contract, make sure you are aware of early termination fees for canceling the service, and of any fees that apply if you need to upgrade or downgrade your service. This is also a good time to discuss installation and equipment fees, and to ask whether any promotions will waive installation and activation charges. You may have the option to rent your modem and/or have the provider set up your wireless network for an additional monthly fee. You can save money by purchasing your own modem and router and setting up the network yourself after the service has been connected.

Is Your Connection Slower than Anticipated?

If, after you've established service, you find that it is running slower than you expected, remember that several factors affect your speed. The provider may not be at fault if you are running slower than their advertised "up to" speed because of things such as:
- an old/slower computer
- an old/slower router
- your network configuration
- too many devices on the network at once
- using the Internet during peak hours of 7 p.m. to 11 p.m.
- using feature-rich websites and online applications

Before calling your provider, see if changing your configuration speeds things up. If you still do not see speed increases after making changes, contact your provider's tech support department.
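One simple sanity check before assuming the connection itself is slow: compare how long a large download actually takes against how long it should take at the advertised speed. The sketch below just does the arithmetic; the 500 MB file size, 50 Mbps plan speed, and 160-second measured time are placeholder numbers for illustration.

```python
def expected_download_seconds(file_size_mb, plan_mbps):
    """How long a download should take if the plan delivered its full rate.

    Note the unit trap: plans are sold in megabits per second, while file
    sizes are usually quoted in megabytes (8 bits per byte).
    """
    file_size_megabits = file_size_mb * 8
    return file_size_megabits / plan_mbps

if __name__ == "__main__":
    size_mb, plan_mbps = 500, 50        # placeholder values
    ideal = expected_download_seconds(size_mb, plan_mbps)
    measured = 160                      # seconds, timed by hand (example value)
    effective_mbps = size_mb * 8 / measured
    print(f"Ideal time: {ideal:.0f} s; measured: {measured} s "
          f"(~{effective_mbps:.0f} Mbps effective)")
```

If the effective rate is far below the plan rate even after ruling out the local factors listed above, that measurement gives you something concrete to discuss with tech support.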
Faulty design in a popular web application programming language is opening up websites across the Internet to hacker attacks, a data security firm reported this week. Security problems are arising from the way the language, PHP, handles certain kinds of variables in its code, according to the report prepared by researchers at Imperva.

"The PHP platform is by far the most popular web application development platform, powering over eighty percent of all websites, including top sites such as Facebook, Baidu, and Wikipedia," the report (PDF) explained. "As a result, PHP vulnerabilities deserve special attention." "In fact," the report added, "exploits against PHP applications can affect the general security and health status of the entire web, since compromised hosts can be used as botnet slaves, further attacking other servers."

Imperva was critical of the way the language defines certain "super global" variables by default and allows external sources, such as cookies, to manipulate them. Attacks exploiting super globals are gaining in popularity with hackers, the report noted. "[Hackers] incorporate multiple security problems into an advanced Web threat that can break application logic, compromise servers, and may result in fraudulent transactions and data theft," Imperva's researchers reported.

Super global variables are a relatively recent addition to PHP. They make writing code easier because they remove the need to define some common variables each time an app is created, but the security implications of the practice may not have been thoroughly thought out. "Technically, PHP isn't broken," NSS Research Director Chris Morales said in an interview. "It's performing as designed. It's just not a good design." "I totally agree with Imperva," he said. "Why is PHP written in such a way that they allow an external component to execute a super global variable? From a coding perspective, there's no reason ever to do that. Their implementation is poor."

Since PHP is an open source program, there's always some question as to whether its openness is contributing to its security problems. "I don't think that's the issue here," said Tal Be'ery, Web security research team leader at Imperva. "If PHP had been closed sourced, it wouldn't have been more secure," Be'ery said in an interview. "There are some architectural decisions taken by the PHP implementers that makes it easier to use for the programmer but makes the software less secure."

PHP has been in the sights of hackers for years. At the end of 2006 alone, there were 2,100 PHP flaws listed in the ISS database of vulnerabilities to tempt net baddies. And through the years, web malcontents have used rogue PHP pages to redirect users to work-at-home scams and CGI vulnerabilities in the language to execute code remotely. From Windows to WordPress, large platforms in general attract hacker attention, so it shouldn't be surprising that PHP has done so, too. "PHP's footprint is pretty large, which makes it juicier as a target," Mat Gangwer, an information security analyst with Rook Consulting, said in an interview. What makes large platforms especially attractive is that they can give hackers the most bang for their buck. "When they come up with an exploit or attack on one site it can be traversed across multiple sites so it doesn't have to be a single targeted attack," Gangwer said.
"In a lot of ways, PHP is a victim of its own success," said Daniel Peck, a research scientist with Barracuda Networks. Peck explained hosting sites rapidly adopted the language because it was easy to use, it worked and it was free. That kind of haphazard growth created growing pains for the language -- including security aches. "The documentation and example code has a lot of poor and insecure practices in it so if you search on how to solve your problem in PHP, you'll come up with an insecure solution," Peck said in an interview. Even if a programmer wants to mind his security P's and Q's, they can find it challenging. "It also has some features that make it difficult to program securely," Peck noted. "It can be done, but you need to put a significant amount of effort into it." PHP is also plagued with another affliction of mega Web platforms. "Content systems deployed in an open source fashion are easy to deploy and administer, but often the resources aren't there to keep up with the patch frequencies and the vulnerabilities associated with them," JD Sherry, vice president of Technology and Solutions for Trend Micro told CSOonline. "When you couple the problem with super global variables with unpatched systems, you've got a perfect storm for an attacker," Sherry said. Read more about malware/cybercrime in CSOonline's Malware/Cybercrime section. This story, "Poor Design Fosters Hacker Attacks of Websites Running PHP" was originally published by CSO.
There's always some risk of unexpected signal interference when two adjacent bands of spectrum are used, and that has been a concern for upcoming 800-MHz auctions of spectrum intended for Long Term Evolution networks in the United Kingdom. According to the study by the regulator Ofcom, LTE end-user devices operate at power levels low enough that there will not be a risk of interference with devices using adjacent unlicensed spectrum. The devices using unlicensed spectrum include wireless microphones, personal alarms and amplified headphones for the hard of hearing.

The typical problem is that some amount of signal spreads outward from the intended center frequency, spilling over into adjacent frequencies that can be licensed for other purposes. As LightSquared discovered in the U.S. market, that is doubly a problem when one emitter is at high power (a cell site transmitter) and the adjacent devices operate at very low power (GPS receivers). In the case of U.K. LTE spectrum, low-power LTE devices will operate in spectrum adjacent to low-power microphones and other devices.

The Ofcom tests show the LTE signals will have very good filtering, with the intended signals sharply focused in band and the out-of-band energy at very low levels. As shown in the Ofcom tests, LTE signals in the 852 MHz to 862 MHz region will have an unusual signal attenuation signature, with power levels dropping dramatically at the edges of the band. That response curve means the danger of signal interference with devices in the immediately higher unlicensed frequencies (862 MHz to 872 MHz) is minimal. The upshot is that there is no impediment to the upcoming LTE auctions.
"Sailfish" is a new computational method out of Carnegie Mellon University and the University of Maryland that speeds up RNA sequencing analysis by a factor of 20 or greater. The method – dubbed Sailfish after the super-speedy fish – provides quantification estimates of gene expression much faster than previous methods, such that a job that once took hours can now be completed in a few minutes without loss of accuracy. Details of the research have been published online in the journal Nature Biotechnology.

Gene expression is the process by which genes (stretches of DNA that encode information) interact to produce different traits, such as blue eyes or a predisposition toward cancer. Gene expression occurs in all known life – it's how the genetic code stored in DNA is "interpreted." Along with major advances in genomics, gene expression analysis has grown in importance both for basic researchers and medical practitioners. There now exist large stores of RNA-seq data that scientists are using to re-analyze experiments; however, the analysis is notoriously time-intensive, with an average run taking about 15 hours. Fifteen hours might not seem like a lot, but when you multiply that by 100 experiments, it adds up, says paper co-author Carl Kingsford, an associate professor in CMU's Lane Center for Computational Biology, adding "with Sailfish, we can give researchers everything they got from previous methods, but faster."

An organism's genetic makeup is static, but the activity of individual genes varies greatly over time, explains the writeup from Carnegie Mellon. Gene expression is the key – it's a research area that holds tremendous promise for disease prevention. Although gene activity can't be measured directly, it can be inferred by tracking RNA, large molecules that perform vital roles in the coding, decoding, regulation, and expression of genes. To observe RNA, scientists typically use a method called RNA-seq, which has been useful in the field of genomic medicine in the analysis of certain cancers. The process results in short segments of RNA, called "reads."

In previous methods, reconstructing RNA molecules in order to measure them employed a process called mapping, where reads were mapped back to their original positions in the larger molecules like pieces in a puzzle. The research team was able to eliminate this time-consuming step by allocating parts of the reads to different types of RNA molecules. Essentially, each read provides several up-votes for a given molecule. By leaving out the mapping step, Sailfish is able to perform its RNA analysis 20-30 times faster than previous methods. The numerical approach will be more familiar to computer scientists than biologists, Kingsford notes, but Sailfish is more robust and better able to tolerate errors. Errors that would disrupt a mapping are not a problem for the "+1" approach. The result is increased accuracy.

"By facilitating frequent reanalysis of data and reducing the need to optimize parameters, Sailfish exemplifies the potential of lightweight algorithms for efficiently processing sequencing reads," the authors write in the paper abstract. The Sailfish code is available for download at http://www.cs.cmu.edu/~ckingsf/software/sailfish/.
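Sailfish's actual pipeline is more sophisticated (it counts k-mers and then resolves ambiguous assignments statistically), but the "up-vote" idea described above can be sketched in a few lines. The transcript sequences, the read, and the tiny k-mer size below are made-up toy data, not anything from the paper.

```python
from collections import defaultdict

K = 5  # k-mer length; real tools use much longer k-mers and real transcriptomes

def kmers(seq, k=K):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Toy transcriptome: two invented transcript sequences.
transcripts = {
    "tx1": "ACGTACGTTTGACCA",
    "tx2": "TTGACCAGGCATCGA",
}

# Index: which transcripts contain each k-mer.
index = defaultdict(set)
for name, seq in transcripts.items():
    for km in kmers(seq):
        index[km].add(name)

def vote(read):
    """Allocate a read's k-mers as 'up-votes' to the transcripts containing them."""
    votes = defaultdict(int)
    for km in kmers(read):
        for name in index.get(km, ()):
            votes[name] += 1
    return dict(votes)

if __name__ == "__main__":
    # All five k-mers of this read occur in tx1, two also occur in tx2:
    print(vote("CGTTTGACC"))  # -> {'tx1': 5, 'tx2': 2}
```

Because no read is ever aligned base-by-base to a position, a sequencing error simply costs a few votes rather than breaking an alignment, which is the robustness property Kingsford describes.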
The US Department of Health and Human Services today said it would fund the development of a new generation of what it called novel, unconventional intelligent applications that could help people make complex health decisions. Specifically, the agency said it is looking to develop intelligent computer programs that could combine a person's computer-based health records with knowledge sources in the public domain.

"The personal information may be drawn from a personal electronic health record maintained by the patient or from electronic medical records managed by a caregiver or hospital, or both. The intelligent computer program should be able to explain its reasoning and defend its conclusions to the patient, and state the certainty and reliability of its recommendations."

From the HHS: "The potential impact of the proposed research must be substantial, in terms of both the size of the community affected and the magnitude of its impact on that community. The investigator should anticipate starting and completing the project during the term of the award (applicants may request up to 4 years for the project period), since this funding opportunity is not for support of ongoing research or for pilot projects, and awards are not renewable. The rationale for this grant program is that for informatics advances to have significant impact in health and science, investigators must have opportunities to test unconventional, potentially paradigm-shifting hypotheses, and to use novel, innovative approaches to solve difficult technical and conceptual problems that severely impede progress in a field. The purpose of the NLM Advanced Informatics for Health grant is to foster exceptionally innovative informatics research that, if successful, will have an unusually high impact on a problem in health or biomedical research."

No pressure there.
You can tell that the Court of Appeals decision that struck down the FCC net neutrality rules is important because there have been so many opinions flying around about what it means. Some commentators claim the decision goes far beyond the issue of net neutrality to give the FCC authority to regulate virtually any aspect of the Internet.

Initially, at least, the pro-regulation folks were moaning. They claimed that Comcast and Verizon would become gatekeepers who would stifle innovation and restrict end-user access to content. And initially, the free market folks were pretty happy. Comcast and Verizon would be able to make special deals with Netflix, Amazon and other content providers to give them faster access to end users, if they want to pay for it. Right now, Netflix and Amazon pay $0 to Comcast and Verizon. That could change.

The court decision runs 63 pages, but it isn't until page 45 that it starts to explain why the FCC policy must be struck down. It's due to nearly 35 years of FCC policy to regulate telecommunications services but to not regulate services that incorporate information processing. The FCC's Second Computer Inquiry decision, in 1980, first enunciated that principle. The details have evolved as the technology has evolved, and today the FCC policy is that Internet access is an unregulated information service. And similarly, regulation of telecommunications services has evolved to the point where such regulation is minimal in many cases. Even so, telecom services are still subject to the common carrier regulatory principles of furnishing service to the public upon reasonable request, and furnishing that service on non-discriminatory terms.

The three prongs of the FCC's net neutrality policy were transparency (disclosing accurate information regarding the network management practices, performance, and commercial terms for Internet access service); anti-blocking (prohibiting fixed broadband providers from blocking lawful content, applications, services, or non-harmful devices); and non-discrimination (prohibiting unreasonable discrimination in transmitting lawful network traffic over a consumer's broadband Internet access service).

The court decided that the FCC's anti-blocking and non-discrimination rules were not only consistent with telecommunications common carrier regulatory principles, but actually required Internet service providers to become common carriers. And that, said the court, was illegal because of the longstanding FCC policy that a service with information processing is an unregulated information service. To retain those net neutrality principles, according to the court decision, the FCC should overturn the longstanding distinction between telecommunications services and information services, and should designate Internet access as a telecommunications service. In that case, the anti-blocking and non-discrimination rules would be permissible.

But the first 44 pages of the decision have now caught the attention of the industry. Those pages are devoted to an analysis of Section 706 of the 1996 Telecommunications Act, which directs the FCC to encourage the deployment of broadband telecommunications capability. The FCC claimed that the net neutrality rules spur investment and development by content providers, which leads to increased end-user demand for broadband access, which leads to increased investment in broadband network infrastructure and technologies, which in turn leads to further innovation and development by content providers.
Two of the three judges on this D.C. Circuit Court of Appeals bought this argument. One, Judge Silberman, did not. He asserted that the relevant language in Section 706 was not the “encourage the deployment” language but rather the methods authorized to encourage the deployment, namely, “measures that promote competition in the local telecommunications market or other regulating methods that remove barriers to infrastructure investment.” But the net neutrality rules don’t promote competition in the local telecommunications market. They might promote competition between content providers, but that’s not what the law covers. And regarding the infrastructure investment aspect, the FCC never identified any barriers to infrastructure investment. Indeed, with the widespread deployment of smart cell phones using LTE advanced mobile communications technologies, it’s difficult to argue that there are any barriers to infrastructure investment. What’s next? Parties may appeal the part of the decision that says Section 706 gives the FCC authority to regulate the Internet. They could either ask the full Court of Appeals to review the 2-1 decision, or they could appeal to the Supreme Court. Both are longshots. Other parties could go to the FCC and ask the FCC regulate Internet access providers as common carriers. I think that’s a longshot, too. So for now, by a 2-1 court majority, the FCC can regulate the Internet in virtually any way it wants, so long as it doesn’t run afoul of the longstanding distinction between telecommunications and information services, and so long as the regulations can be characterized as encouraging the deployment of broadband telecommunications capability. At least, that’s what some commentators think.
What You'll Learn
- Assign themes and insert graphics to add visual appeal to documents and web pages created in Microsoft Word 2010, including using clip art, WordArt, SmartArt, charts, and shapes
- Divide documents into separate sections, add headers and footers, and divide pages into multiple text columns
- Use Outline view to organize documents, create tables of contents and indexes, add references to help navigate and display document information, sort lists in regular text and tables, and set up mathematical formulas in Word tables
- Track document changes, insert comments, protect documents from being changed, and compare and merge documents
- Record and edit macros to automate repetitive actions, assign keyboard shortcuts to macros, and customize the Quick Access Toolbar to quickly access macros and commands

Who Needs To Attend
Those who are familiar with Microsoft Word 2010
A House committee on Thursday approved a three-year authorization bill for NASA that includes a plan for issuing warnings about impending space storms that could knock out navigation systems, power and smart phones. Because of technology's increasing reliance on satellites, many of the gadgets and systems Americans use on a daily basis are vulnerable to so-called space weather, according to NASA officials. The phenomenon refers to environmental conditions on the sun that can influence the performance and reliability of Earth-based and extraterrestrial digital systems. The House Science and Technology Committee's legislation, H.R. 5781, which authorizes funding and missions for NASA, includes a long-term strategy for a sustainable space weather program. The White House, through the director of the Office of Science and Technology Policy, would have to define individual agency responsibilities for carrying out the line of attack. According to NASA, the nation faces increasing uncertainty as Earth approaches the next peak of solar activity in 2013. The sun's magnetic field could produce turbulent solar wind, or charged particles streaming at high velocities. Other risks include solar flares, which are sudden eruptions of magnetic energy, as well as coronal mass ejections, emissions of plasma from the sun that disturb magnetic fields on Earth. Just a few of the devices and services that could go down during bad space weather include credit card transactions, air travel networks, the transmission of geothermal and wind power, most mapping applications, and telemedicine systems that send patient images from hospitals to physicians. The federal government already operates the National Space Weather Program. The forecasting initiative is overseen by NASA, the National Oceanic and Atmospheric Administration, the National Science Foundation, and the Defense, Energy, Interior, State and Transportation departments. The agencies monitor solar weather activity by exchanging data from space satellites, sensors and ground-based observational instruments. They then run the information through sophisticated computer models and generate analysis relevant to each of their departments' missions. NOAA heads up the effort to predict and describe space storms. The Federal Emergency Management Agency, a division of the Homeland Security Department, is not part of the initiative but recently has taken a greater interest in preparing for a potential space disaster. DHS is expected to join the program soon, according to officials. In June, FEMA Administrator W. Craig Fugate spoke at the annual space weather conference, which focused on critical infrastructure protection. "This is, to me, no different than any other natural hazard that we face. It's going to occur. It's part of our environment. And to the average person it's a nonevent except for the technologies that we are dependent upon," he said. "We're not ready. So, give me better data, give me longer warning, give me better impacts, so we can go tell the story to the decision-maker of why we have to update and develop our plans around this hazard so that if it does occur, it is an event, not a catastrophic disaster." The House bill also calls for the government to commission a National Academies study on the country's ability to accurately predict space weather and report findings and recommendations to Congress within 18 months after passage. 
NOAA and the other agencies in the program are collaborating to enhance the precision and timeliness of space weather forecasts with more sophisticated algorithms. Populating outer space with thousands of satellites to keep an eye on the sun is not an option, NASA officials said. "We can't afford to be everywhere all the time," Richard Fisher, head of NASA's heliophysics division, said on Thursday. "The scales are just enormous."
Predictive analytics allows feds to track outbreaks in real time
- By Frank Konkel - Jan 25, 2013

Scientists are using social media to track the spread of the flu virus. (CDC image)

The flu spreads fast, but tweets spread faster, so health organizations and federal agencies, including the U.S. Centers for Disease Control and Prevention, are beginning to make use of predictive analytics of social data to monitor emerging situations like this season's deadly influenza epidemic. The CDC is among agencies that now utilize social insights gleaned from Google Flu Trends and MappyHealth – predictive tools that take collective web searches and tweets on flu-related symptoms and correlate the data on regional maps. CDC partners with Google and MappyHealth, which won the Department of Health and Human Services' NowTrending2012 challenge, to use social media surveillance in the service of public health.

The CDC uses a variety of surveillance methods to track the spread of disease, said Richard Quartertone, health communication specialist at CDC's Division of Notifiable Diseases and Healthcare Information. They include longstanding techniques such as monitoring hospital emergency room visits, performing laboratory tests and conducting population surveys. Now, epidemiologists also watch trends in web usage and at social-media sites, he said. "CDC is actively working with partners such as Google and MappyHealth to increase the public health surveillance value of information from social-media sites," Quartertone said.

Google Flu Trends uses aggregate Google search data to provide real-time estimates of flu activity in more than 25 countries. When a user makes a Google search for relevant terms such as "influenza," he or she becomes part of the dataset used by Google Flu Trends to predict flu activity by geographic area. MappyHealth, meanwhile, mines real-time data from Twitter, looking for health trends through the search of 234 unique terms. Mined data is churned into visual graphs to assist end-users in spotting trends, which are then reported – and reported much more quickly than traditional health data compilation methods can manage. Traditional health reports prepared by the CDC take weeks, with local and state health departments compiling information and sending it up the federal chain of command.

This season's particularly extensive flu outbreak actually began in late October, but it wasn't widely reported in the media until Dec. 3, when the CDC released a public warning highlighting the danger. Six weeks before – in mid-October – Baltimore-based Sickweather sent out a tweet warning users that the flu season was already here. Sickweather, another data-mining application, had scanned millions of Facebook posts and tweets on Twitter for 24 flu-related symptoms – like the word "fever" – and ran them through further linguistic analysis to weed out information unrelated to the flu. That data was then used to plot illness-related mentions to a map.

Justin Herman, new media manager at the General Services Administration's Center for Excellence in Digital Government, said predictive analysis of social data is creating new avenues for the public and government to work together. "Social data, as part of open data, is building new ways for agencies and the public to work together," said Herman, who works with the GSA-led Social Performance Metrics Working Group to build collaboration between agencies in analyzing social data.
"When a federal program manager can see the value and power of social data, it can help them identify emerging trends and develop an approach that will help them meet their unique program goals," Herman said. Of course, with epidemics like the flu, which has already killed some 40 children this winter, quicker trend-spotting can translate into faster reactions from government agencies like the CDC. Decisions to ship flu vaccines or deploy additional nurses to hard-hit areas can be made sooner with predictive insight. "Being able to spot general trends as they occur allows federal agencies to be more responsive and can sometimes result in immediate life-saving decisions, while also protecting citizens’ individual privacy," Herman said. "What’s unique about social data is the volume and immediacy of the information, which allows agencies to improve programs faster and more effectively." Frank Konkel is a former staff writer for FCW.
Although it has only been around 8-10 years, DSL is known as the old workhorse of high-speed internet access. It was faster to market than cable and provided many people with their first taste of sweet relief from the annoying slowness and awkwardness of dial-up. Over the years, DSL has developed a reputation as a more primitive technology than cable broadband, but that really isn't accurate and is probably due more to clever marketing by cable companies than anything else. The truth is that DSL is a different technology and, just like cable, has many pros and cons. You can read about the 5 biggest DSL myths here.

Most DSL connections are actually ADSL (Asymmetrical Digital Subscriber Line) connections, but no one really says the "A" part anymore. "Asymmetrical" simply means that the upload speeds (information going from your computer to the internet) are slower than the download speeds (information coming from the internet to your computer), usually by a factor of at least one half (most cable modem broadband is also asymmetrical).

How DSL Works

DSL signals are transmitted between two devices: a device called a DSLAM and a DSL modem. A DSLAM is a large piece of transmission equipment owned either by the phone company or a DSL provider; it takes many internet signals from different homes with DSL service and combines them into one signal that it then broadcasts over a phone company's high-speed data network. That network eventually connects to the ultra-high-bandwidth fiber optic cables that form the backbone of the internet. In most cases, the same regular phone lines that carry voice signals for telephones also carry the DSL internet data signals from the DSLAM to your DSL modem at home. This is possible because voice and data are sent at different, distinct frequencies and are thus easy for both the DSLAM and your DSL modem to decipher and separate. Phone lines are made from twisted copper and are unshielded, which is why they are susceptible to losing signal strength over greater distances, which can make your connection slower. However, this problem is far less significant than it was when DSL was first rolled out. Since 2002, many remote DSLAMs have been deployed by DSL and phone companies, which increases the chances that one is close to your home or office.

The Keys to DSL

1) Know Your Distance – Ask your provider how far away you are from their nearest DSLAM. They should be very familiar with what you are talking about, but if they aren't, ask to speak with someone more technical. If you are located less than a mile from a DSLAM, then you should see no reduction in signal strength and, in theory, speeds that are close to what the DSL company advertises. Some DSL sending/processing methods, like VDSL, ADSL2 and ADSL2+, allow the data signal to travel over even greater distances without loss. That discussion is a little beyond the basics of DSL, so if you're looking for a more in-depth discussion of the various types of higher-speed DSL signals that are coming out now, check out this page.

2) Leave It To The Pros – Make sure to get the phone or DSL company to install your DSL connection for you if you have a land line telephone. Do-it-yourself installations, while convenient, can lead to signal interference because they rely on a method of filtering out the voice telephone signal that is less effective than what the professional installers employ.
Even if you don't have land line telephone service, professional installation still might be worth considering.

3) Make a Deal – DSL connections are generally offered by phone companies like Verizon, which are well known for offering discounts on bundled services, like cell phone, land line phone and DSL service. Make sure you take advantage of these deals by bundling your services.
The FCC is considering opening an investigation into cell phone radiation standards, a controversial issue that has yet to find consensus within the scientific community. Agency Chairman Julius Genachowski circulated a draft inquiry into the issue on Friday, an FCC spokesman confirmed. The document will not be made public until it is voted on by the FCC's four commissioners, but it is expected to include questions about radiation levels in wireless devices used by children. "We are confident that, as set, the emissions guidelines for devices pose no risks to consumers," FCC spokeswoman Tammy Sun said. "Our action today is a routine review of our standards." If the review goes forward, it would be the first major inquiry into cell phone radiation emissions standards since it adopted its current regulations in 1996. The timing of the proposed inquiry coincides with a pending report from the Government Accountability Office that will examine the FCC's "inaction" on radiation standards, according to The Wall Street Journal. Some health groups have long raised alarms about the potential dangers of radiation from cell phones and other wireless devices, though studies on the issue have been both contradictory and inconclusive. The wireless industry has dismissed the concerns, arguing that government limits on the amount of radiation emitted from cell phones, also called the specific absorption rate, are sufficient to protect consumers. The issue was the subject of a lawsuit between CTIA and the city of San Francisco. The wireless trade association sued to block a San Francisco law that required wireless retailers to identify cell phone radiation as a "possible carcinogen." The city was later ordered to tone down its warnings after a judge found them to be "misleading" and “alarmist.” As for the latest development, CTIA public affairs executive John Walls told The Wall Street Journal, "We fully expect that the FCC's review will confirm, as it has in the past, that the scientific evidence establishes no reason for concern about the safety of cell phones."
The investigation of the disappearance of Malaysian Flight 370 is raising issues that are very similar to those considered in cybersecurity cases: the insider threat, deleting potentially key data from a computer, failure to share critical information and even corruption of the supply chain.

Some have raised suspicions about insiders, namely the two pilots: Capt. Zaharie Ahmad Shah, 53, and co-pilot Fariq Abdul Hamid, 24. Malaysian police determined that some data on a computer system used as a flight simulator in Shah's home was erased on Feb. 3, more than a month before the flight. Malaysian authorities have asked the FBI to try to recover the missing data. And the FBI says it appears highly likely it will be able to retrieve the deleted material, according to news reports.

In the case of Flight 370, a transponder that signals to ground controllers the location and speed of the aircraft apparently was turned off or otherwise disabled, suggesting that one of the pilots - an insider - did it. Similarly, experts believe someone - again, perhaps one of the pilots - reprogrammed the flight path in the aircraft's flight management system to veer the Malaysian jetliner away from its original destination of Beijing toward the Indian Ocean. Could implementing a two-person rule, where the pilot and co-pilot each must approve such changes, prevent such acts? The NSA, for instance, is implementing a two-person rule that requires two individuals with security clearances to approve access to classified material to prevent a future Snowden-like leak. But such a requirement 35,000 feet in the sky isn't worth the risk. What if one of the two pilots became disabled?

Failure to Share Critical Information

As with many cybersecurity incidents, it appears that in the case of the missing airliner, there was a failure to share key information that could help mitigate the problem. More than a week after Flight 370 went missing, Thailand's Air Force said it might have detected the missing plane on its military radar minutes after the aircraft's communications went down.

And as I alluded to in my most recent blog, Hacking a Boeing 777, supply chain risks exist that could introduce vulnerabilities into an aircraft's IT systems. Whether at five miles in the sky or at sea level, computer components purchased from vendors could be corrupted to alter systems, creating an undesirable or dangerous environment.
NIST demonstrates method for reducing errors in quantum computing
- By William Jackson - May 01, 2009

A team of researchers working at the National Institute of Standards and Technology in Boulder, Colo., has demonstrated the effectiveness of using microwave pulses to suppress errors in quantum bits, or qubits, the media for carrying and manipulating data in the still experimental field of quantum computing. The dynamical decoupling technique using microwave pulses they tested is not new, said John Bollinger, lead scientist on the project. "It's something we borrowed from the [magnetic resonance imaging] community that was developed in the '50s and '60s," Bollinger said. "Our work is a validation of an idea that has been out there."

But the experiments also advanced the theories, said Michael J. Biercuk, a NIST researcher who took part in the work. By using new pulse sequences, researchers demonstrated that the number of errors introduced into quantum computing through environmental noise could be reduced by an order of magnitude. This means the expected error rate can be brought down to well below the threshold for fault tolerance in quantum computing. The ability to suppress errors before they accumulate is important because qubits are subject to the introduction of errors through stray electromagnetic "noise" in the environment. To date, there is no practical way to correct these qubit errors. The work was described in the April 23 issue of Nature.

Quantum computing uses subatomic particles rather than binary bits to carry and manipulate information. While a traditional bit is either on or off, a 1 or a 0, a qubit can exist in both states simultaneously. Once harnessed, this superposition of states should let quantum computers extract patterns from possible outputs of huge computations without performing all of them, allowing them to crack complex problems not solvable by traditional binary computers.

The researchers used an array of about 1,000 ultracold beryllium ions held in a magnetic field as the qubits. Sequences of microwave pulses were used to reverse changes introduced into the quantum states. The pulses in effect decouple the qubits from electromagnetic noise in the environment. Work on using the technique for suppressing quantum errors began a decade ago, Biercuk said. "Our work validated essentially all of the work" that had been done up to this point. It also introduced new ideas by moving the pulses relative to each other in the patterns, rather than increasing the number of pulses. The results showed an unexpectedly high rate of error suppression.

The novel pulse sequences are tailored to the specific noise environment. The effective sequences can be found quickly through an experimental feedback technique and were shown to significantly outperform other sequences. The researchers tested these sequences under realistic noise conditions for different qubit technologies, making their results broadly applicable.

Announcement of the work comes a little more than a month after other NIST researchers showed that a promising technique for correcting quantum errors would not work. The technique, called transversal encoded quantum gates, seemed simple at first. "But after substantial effort, no one was able to find a quantum code to do that," said information theorist Bryan Eastin.
“We were able to show that a way doesn’t exist.” The transversal operations used by Eastin were a “specific case” of error correction, Biercuk said, and the work does not mean that error correction cannot be done in quantum computers. Effective techniques for suppressing errors would mean that any error correction method would also be more effective, since there would be fewer errors to deal with. But quantum computing still is some years away. Biercuk said that practical quantum computing already has been demonstrated with arrays of several coupled qubits. “That is wonderful from an experimental point of view, but it is not useful,” he said. A quantum computer useful for doing complex simulations would require an array of about 100 qubits, he said. “That’s at least a decade away.” A computer capable of doing cryptographic factoring on a scale that cannot be done effectively by traditional computers still is 20 to 30 years off, he said. William Jackson is a Maryland-based freelance writer.
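To give a flavor of why pulse sequences help, the toy simulation below models the simplest member of the dynamical decoupling family: a single refocusing "echo" pulse at the midpoint of an evolution, which flips the sign of the phase accumulated afterwards so that slowly varying noise largely cancels. This is not a model of the NIST ion experiment; the noise model and all numbers are arbitrary and purely illustrative.

```python
import random

def slow_noise(steps, sigma_offset=1.0, sigma_fast=0.05):
    """Noise dominated by a quasi-static offset plus small fast fluctuations."""
    offset = random.gauss(0.0, sigma_offset)
    return [offset + random.gauss(0.0, sigma_fast) for _ in range(steps)]

def phase_error(noise, echo=False):
    """Accumulated phase error over one evolution, with or without an echo pulse."""
    half = len(noise) // 2
    if not echo:
        return abs(sum(noise))
    # Echo pulse at the midpoint: contributions from the second half flip sign,
    # so any noise component that is constant over the run cancels out.
    return abs(sum(noise[:half]) - sum(noise[half:]))

if __name__ == "__main__":
    random.seed(1)
    trials = [slow_noise(200) for _ in range(500)]
    free = sum(phase_error(n) for n in trials) / len(trials)
    echoed = sum(phase_error(n, echo=True) for n in trials) / len(trials)
    print(f"mean |phase error| without echo: {free:.2f}")
    print(f"mean |phase error| with echo:    {echoed:.2f}")  # orders of magnitude smaller
```

More elaborate sequences of the kind the NIST team tested add further pulses, timed to cancel noise that drifts during the run as well, which is why tailoring the pulse spacing to the measured noise environment pays off.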
Creating a strong password is one of the most important steps for securing your computer, yet I run into weak or insecure passwords almost daily. Here are a few guidelines I follow when creating a new, secure password:
- Length: create a password longer than 12 characters
- Create an alpha-numeric password; use both letters and numbers
- Include non-standard characters. Examples are: ! @ # $ % ^ & *
- Avoid using your name, birthday, company name, or other personal information that can be easily guessed; don't use Sunfire in your password if you drive a Sunfire
- Avoid using common repeated characters such as 'qwerty' or '12345'
- Try substituting letters for numbers, for example, 5 for S, 3 for E. An example would include 'h3l10' instead of 'hello'
- Avoid similarity. Change your password every three months, and create a new password that is different from the old one. Do not just change or add one more character; change the whole password
- Variation: Do not use the same password for everything. Create a completely different password for Windows, your email, your banking website. Your personal information will be more difficult to obtain if it is protected by multiple, complex passwords

It is important to keep in mind that your password is only secure as long as it remains a secret. Writing down your password can help in remembering it; however, SIRKit strongly recommends that you do not keep your password written down, do not keep it on or in your desk, and do not place it on a sticky note on your monitor or under your keyboard. As well, do not share your password with anyone. If you do, while on vacation for example, ensure you change your password immediately.

Windows Domain DFS namespace – access is denied using domain FQDN, access allowed using server UNC paths directly

This was easily one of the most frustrating issues I've had working with DFS. The setup: at several client locations we run file server redundancy by offering two DFSR servers — a shared domain namespace with replicated folders to ensure the shares stay online if a server goes down for planned or unplanned reasons. Within group policy, we map folder redirection to a namespace path:
- "documents" -> "\\domain.com\users\username\documents"
- "desktop" -> "\\domain.com\users\username\desktop"

By referencing the namespace, the client will redirect when server A or B is offline. (This approach should NOT be used in WAN deployments; it works on a LAN because replication is fast.) Initially the DFS issue was identified when drives mapped to the namespace were missing. Within the client event logs, we saw "access denied" errors associated with these drive letters.
What we checked and verified:
- Problematic client stations could not connect to "\\domain.com\dfsroot" (access denied)
- Problematic client stations could not connect to "\\domain\dfsroot" (access denied)
- Problematic client stations could connect to "\\serverA\dfsroot"
- Problematic client stations could connect to "\\serverB\dfsroot"
- Permissions on the shares for the DFS root folder were correctly set to "everyone" with read/write
- Each of these systems was removed from and rejoined to the domain [no success]
- The local profiles were completely removed from the local systems (file system and registry) and the users logged back in [no success]
- Security suites were removed [no success]
- Each user was tested on working machines and had no issues obtaining the right drives
- When we disabled the 'offline files' component and rebooted, "\\domain.com\dfsroot" was immediately accessible

The cause: the offline file cache was corrupt. When offline files are disabled, the system accesses the namespace location directly without issue, which confirms a reference to the namespace is saved within the offline file cache. If the cache is corrupt, you end up with "Access is Denied". Another quick way to determine whether the issue is corrupt cache is to try to access the DFS root UNC paths on each server directly. If you can browse the contents when bypassing the shared namespace path, and the user has no issues on other domain PCs, then it's not permissions.

The fix:

1) Disable Offline Files
Control Panel -> Sync Center -> Manage Offline Files -> Disable Offline Files

2) Clear the offline file cache
This sets a temporary registry entry which is read on start-up and triggers the cache wipe.
Elevated Command Prompt -> "reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Csc\Parameters /v FormatDatabase /t REG_DWORD /d 1 /f"

3) Reboot
You must reboot to successfully wipe the offline file cache.

4) Test the namespace path -> "\\domain.com\dfsroot"
If you can now browse the namespace contents, you can optionally re-enable Offline Files. We understand the Offline Files component is critical to road warriors; you should be safe to re-enable it and reboot.
Control Panel -> Sync Center -> Manage Offline Files -> Enable Offline Files -> Reboot

After you log back in, check that you can still access the namespace path "\\domain.com\dfsroot" after running a forced sync. If there are still issues, we recommend repeating the checks listed under "What we checked and verified" and applying this fix again.
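A quick way to run the same differential check from a client (namespace path failing while the member servers' direct UNC paths work) is a short script. The sketch below is a generic illustration — the paths are the placeholders used in this article, and on a real domain you would substitute your own namespace and server names; it only distinguishes "can list the share" from "cannot," which is enough to point at the Offline Files cache rather than permissions.

```python
import os

# Placeholder paths matching the examples in this article.
NAMESPACE = r"\\domain.com\dfsroot"
DIRECT = [r"\\serverA\dfsroot", r"\\serverB\dfsroot"]

def accessible(path):
    """True if the share can be listed from this client."""
    try:
        os.listdir(path)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    ns_ok = accessible(NAMESPACE)
    direct_ok = [p for p in DIRECT if accessible(p)]
    print(f"namespace ok: {ns_ok}; direct servers ok: {direct_ok}")
    if not ns_ok and direct_ok:
        # Matches the symptom described above: suspect a corrupt
        # Offline Files (CSC) cache rather than share permissions.
        print("Namespace denied but direct paths work -> check the Offline Files cache")
```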
Organizations Using the Internet Cyprus / Turkish Republic of North Cyprus Modified 11 July 2002 History of Cyprus, from http://www.state.gov/r/pa/ei/bgn/5376.htm and http://www.cia.gov/cia/publications/factbook/index.html - 364 -- Cyprus was ruled by Byzantium. - 1100's -- Briefly ruled by Richard the Lion-Hearted. - Late 1100's -- Came under Frankish control - 1489 -- Ceded to the Venetian Republic - 1571 -- Conquered by the Ottoman Turks - 1878 -- Ceded to Great Britain - 1914 -- Formally annexed by the United Kingdom - 1925 -- Became a Crown Colony of the United Kingdom - 1960 -- Independence from the U.K. after a violent anti-British campaign by the Greek Cypriot EOKA (National Organization of Cypriot Fighters), guerilla group using terror and desiring enosis, or political union with Greece. Constitutional guarantees were extended to ethnic Turkish Cypriot minority, although Greek Cypriot majority argued that these were obstacles to efficient government. - 1963 -- Some constitutional guarantees to ethnic Turks were eliminated. End of ethnic Turk participation in government. - 1964 -- UN peacekeepers deployed. - 1974 -- Military junta controlling Greece sponsored a coup led by extremist Greek Cypriots. This was met with military intervention from Turkey to protect ethnic Turks. Turks fled north, Greeks fled south. Turkey soon controlled the northern 40% of the island. - 1983 -- Turkish-held area declared itself the ``Turkish Republic of Northern Cyprus'', recognized only by Turkey. - Current situation -- A UN buffer zone divides the Greek and Turkish sections, and there are two UK sovereign military bases mostly in the southern, Greek, section. - TRNC executive branch -- http://www.trncpresidency.org/ - Representation in the U.S. -- http://www.trncwashdc.org/ - Ministry of Foreign Affairs and Defence -- - Supposedly the government site, with a domain that would imply that TRNC is a part of Turkey -- http://kktc.pubinfo.gov.nc.tr - Views of the government of Turkey -- http://www.mfa.gov.tr/grupa/ad/add/relations.htm
Canada-based Helios Wire is planning to launch 30 satellites into space in a bid to ‘democratize the Internet of Things (IoT) from space’. Helios Wire says the satellites will be used to monitor five billion sensors on Earth in a bid to significantly reduce the cost of IoT. Two satellites will be launched in 2018, with a further 28 launched over the following three years, for less than $100 million according to a report in the Vancouver Sun. Will Helios Wire disrupt IoT? The network will use 30 MHz of priority mobile satellite system (MSS) S-band spectrum to build a two-way global satellite-enabled system. This is the same infrastructure used for enabling pan-European mobile services. Crucially, it allows for very low-cost short bursts of data to low-power devices – which could reduce the cost of IoT. According to Helios’ website, the small transmitters on Earth will collect information such as location, infrastructure reliability, crop health, asset elevation, or almost any other digital information. That information is then relayed up to the satellites. These satellites pick up the signals and data from the ground based transmitters and forward that down to antennas on the ground, where it is then uploaded to a cloud-based analytics platform that should allow for better information and decisions. The technology to ‘democratize the IoT’ In comments made to the Vancouver Sun, Helios CEO Scott Larson said: “S-Band spectrum is really well-suited to short pings of data and it will allow us to connect a huge number of devices. It’s going to allow us to build out a space-enabled IoT network.” Larson believes early adopters will be farmers using precision agriculture systems or utilities using smart meters. He adds that “the system is particularly well-suited to monitoring things that are remote or moving over large distances,” which could prove useful for anything from emergency services personnel to conservation groups in Africa. “Space is hard, but it’s getting easier and we think we have the technology now to really democratize the Internet of Things,” he finished. Helios Wire has secured $1 million in initial funding, but will undertake several further financing rounds over the course of the coming year.
Automobiles and the highways we drive them on are wonders of technology -- old technology. Like the suburbs that sprang up after the interstate was born, little thought was given to how these marvels of road building would hold up in the future. Today's traffic is a worsening problem with no clear solution. As with climate change, the best strategy for soothing traffic woes is likely a combination of solutions. Except for a few pockets of hope, U.S. public transportation ranges from laughable to nonexistent. As much sense as a high-speed rail service would make in populous, spread-out places like California and Texas, the cost and political will it would require put it out of reach. What remains is constructing more roads and an amalgam of technologies known as intelligent transportation systems (ITS).

Unfortunately ITS doesn't herald a new age of '60s-era transportation futurism. There will be no flying cars or downtown monorails. What ITS can do, however, is make traffic more bearable. While anyone with a modicum of foresight knows that gasoline-powered cars plodding along occasionally widened, often-crumbling freeways isn't a sustainable solution, ITS may help buy time to truly solve the transportation problem.

Rubber, Meet Road

Many drivers already use ITS in some fashion whether they know it or not. John Q. Public may be oblivious to ITS because the term covers so many different technology pieces. California Department of Transportation (Caltrans) Chief Deputy Director Randy Iwasaki did his best to sum up what exactly ITS encompasses. "Examples of ITS are 511, Web sites where motorists view real-time traffic speeds, FasTrak, smart parking, bus rapid transit, Wi-Fi access at rest areas, ramp meters, closed circuit television, changeable message signs and the vehicle infrastructure integration program," he said.

On their own merits, 511, changeable message signs and ramp meters aren't too exciting. But taken together, an ad hoc web develops that reaches into almost every part of the transportation experience. And California, like many other states, is working on cutting-edge stuff, such as the Vehicle Infrastructure Integration (VII) program. The U.S. Department of Transportation, the American Association of State Highway Transportation Officials and a number of automobile manufacturers are driving the national VII program. The goal is to create a nationwide network of communication-enabled infrastructure. In other words, VII is an attempt to connect vehicles on the road to the things surrounding them -- intersections, onramps and even other vehicles. If the infrastructure communicates to the vehicles and vice versa, drivers should be able to travel more efficiently and safely. The auto manufacturers onboard with VII are working on data-transmitting technologies that would interface with similar devices embedded in the infrastructure.

One project to accomplish this feat is a test in Berkeley, Calif., that uses GPS-enabled mobile phones to transmit a vehicle's position and speed data to generate real-time traffic information without costly technology installation. The project is operated jointly by Caltrans; the California Center for Innovative Transportation; the University of California (UC), Berkeley; Nissan; NAVTEQ; and Nokia. In February, the consortium conducted an experiment to test the validity of using GPS phones as traffic sensors. The experiment, called the Mobile Century, involved 100 UC Berkeley student volunteers.
Each student was given a Nokia phone and proceeded to drive up and down a prescribed section of Interstate 880. The students drove for 10 hours while the phones relayed speed and location data to a command center. The experiment's goal was to see whether the phone data could accurately predict traffic and help drivers avoid and prevent congestion. Transportation officials and UC engineers were thoroughly pleased with the experiment. The results suggested the system has potential. "Even though the phones are capable of sending their position and speed every three seconds, an efficient traffic-monitoring system should not need to transfer such a large amount of data, which would require enormous bandwidth," Alex Bayen, UC Berkeley assistant professor of systems engineering, told Berkeley News. "Our challenge is to find the optimum subset of this data for effective traffic monitoring. The quantity and quality of data provided by GPS-equipped cell phones present an unprecedented enhancement to mobility tracking technology and traffic flow reconstruction mechanisms." Data from the experiment suggests such a system could warn drivers of impending roadway problems en route and also show a scheduled meeting on a driver's phone and cross-reference that against the data being collected. If a meeting were to commence at 9 a.m. and traffic data showed problems on the road, a driver's phone could alert her, and provide an alternate route before she even gets in her car. Every year, billions of hours and billions of gallons of gasoline are wasted due to traffic congestion. This kind of innovative ITS solution could greatly reduce those numbers while avoiding the significant expense, according to Iwasaki. "California has made significant strides in rebuilding its transportation infrastructure. ITS is a smart investment of taxpayers' dollars. It offers the ability to make our existing transportation infrastructure more efficient," he said. It will be some time before the Mobile Century experiment becomes reality. The project partners still have many tests to conduct. Plans are being drawn up for an experiment involving thousands of cars and volunteers spread across a much larger area. In the meantime, there are ITS solutions ready to be deployed that could have an impact on traffic congestion. In Ohio, the state department of transportation uses Microsoft Virtual Earth to help drivers and transportation officials better manage traffic. Visitors to www.Buckeyetraffic.org find a wealth of traffic information for traveling through the state. Launched last October, Buckeyetraffic.org is built on the Virtual Earth platform, giving users a detailed and easily navigable Ohio map. On the Web site, a driver can examine a route and its potential traffic problems. In addition, the state's 250 traffic cameras are linked directly to the map, giving users a real-time view of what's transpiring. "Let's say I want to check my commute home," said Spencer Wood, deputy director of the Ohio Department of Transportation's Division of Information Technology. "I can go zoom into the Columbus area, it can show me all roadway activity for Columbus, and it's going to pull up all the roadway construction. It's going to show me any roadway closures or restrictions due to debris, disabled vehicles, flooding, roadwork, ice, and even what we call 'other' -- basically other events that we couldn't account for [in] a specific category, whether it be a gas leak, a fire that's closed down a road or something like that. 
So [users] get all that information, but also if you know this is a route you go home on every day, you can also select 'My Cameras,' and look at all the cameras in Columbus." Traffic and weather sensors across the state are linked to the site and layered onto Virtual Earth as an administrator chooses. Weather data is updated every five minutes, and in a place like Ohio, the information can be invaluable during brutal winters and unpredictable summers. For example, this summer the Midwest suffered through significant flooding from the swollen Mississippi River. Along with physical damage the flooding caused, it also wreaked havoc on residents' ability to travel. "We've been seeing millions of [Web site] hits, especially during bad weather times," Wood said. "We can also look at wind speed direction, and it's also a dashboard for us from a management point of view. We can actually look at the entire state and say which roads are clear, which roads have snow and ice, and which roads we would consider dangerous." Since Buckeyetraffic.org was built on the Virtual Earth platform, Ohio Department of Transportation engineers avoided the expense of building the application themselves. Wood estimated that so far the application has cost $60,000 -- far less than it would've cost to build the software internally. What's more, Buckeyetraffic.org takes a step toward crossing the digital divide. Many U.S. citizens can't afford devices such as in-car GPS. Because it's free, comprehensive and easy to use, Buckeyetraffic.org makes real-time, statewide traffic data available to people who may otherwise be unable to access it. And it's much easier for developers to work with than standard GIS systems, according to Kevin Adler, Microsoft geospatial solutions specialist. "One way we can work with transportation is we can allow the people who are responsible for collecting and then disseminating the data to easily publish that data onto Virtual Earth -- meaning you do not need to use the traditional ESRI tools that five years ago were the only game in town for pushing out data on a map," Adler explained. "Virtual Earth is designed for the typical developer to throw data on it and publish it. That eliminates the resources requirements of using that ESRI analyst. Your standard Web developer can do this. You're giving them an easy tool to use to create a platform for dissemination of data." ITS Goes Public Bringing ITS to public transit is another piece in the transportation puzzle. In Portage County, Ohio, the regional transit service helps many citizens get where they need to go. Many of the riders, like the elderly and disabled, couldn't otherwise reach their destinations. The Portage Area Regional Transportation Authority (PARTA) has served less-fortunate citizens for years, providing low-cost transportation within the county and surrounding areas. Like most other transportation authorities, PARTA relies on buses to do the brunt of its people-moving work. Some of PARTA's buses do door-to-door routes to pick up those who can't reach bus stops on their own. In the past, those routes' bus drivers had to fill out paper manifests with all sorts of data -- travel time and mileage, addresses, passenger numbers and pickup times. The data was then relayed via radio to a central office, where it was taken down and entered again for billing purposes. The paperwork and radio traffic was becoming increasingly unmanageable. PARTA Business Development Manager Bryan Smith discovered a surprising solution: rugged laptops. 
"The [Panasonic] Toughbook is really designed for the mobile environment," Smith said. "I use the computer every day at my desk and thought, 'How hard would it be to take the smaller version of this -- any kind of laptop -- and move it into a vehicle?'" But the answer is it's tough to do, he said, because it's often hot humid, cold and dusty inside a diesel bus and standard computer components don't stand up to those conditions. Smith said the Toughbook is perfect for his drivers because it features a touchscreen instead of a keyboard and can transmit all the mundane manifest data wirelessly. It also serves as a GPS device, which lets PARTA continually enhance routes, improve travel times and increase fuel efficiency. In addition, the Toughbooks work like an emergency beacon, which given the temperamental Midwest weather, can suddenly be very important. "We can pinpoint the location and find out what's wrong," Smith said. "One example that happened soon after we installed these was that one of our routes goes into downtown. There was snow on the roads, there was a snowplow coming down the freeway and it flung a chunk of ice through the windshield of a bus. The driver got hit in the face with glass, really couldn't see all that well, but he was able to hit his emergency button and dispatch was able to say, 'He's right there on the highway.'" Smith and PARTA are working on other ITS solutions that go beyond buses. A traffic management coordination center is in the works that will feature interactive voice response, online trip planning and Web-based trip requests. PARTA projects, like Buckeyetraffic.org and Mobile Century, aren't the solution to the current and future traffic issues people face every day. But innovations like them may one day add up to a sum greater than its parts. And maybe, just maybe, ITS will eventually make it easier to share the road.
<urn:uuid:5b395839-dd33-4915-bd38-55be45aaab69>
CC-MAIN-2017-04
http://www.govtech.com/transportation/Intelligent-Transportation-Systems-Target-Highway-Congestion.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954894
2,486
2.625
3
The science (and certification) of writing secure code The world runs on code. From online banking to electronic voting systems and from self-driving cars to medical devices, almost every aspect of modern life relies upon software for safety, efficiency and convenience. Millions of lines of code intersect with our lives on a daily basis. The security of that code is paramount to protecting the confidentiality, integrity and availability of the information, systems and devices upon which we rely. Unfortunately, there is a world full of hackers and other threat actors who wish to deprive us of the secure use of that code and focus relentlessly on undermining software security. For many years, application developers adopted a “get it done” mindset that focused on shipping code as quickly as possible to gain market share, capitalize on business opportunities and improve efficiency. This mindset often sacrificed security as a burdensome afterthought that simply got in the way of progress. Times have changed, however, and after a series of high profile security incidents, developers now recognize the importance of building security into their code from the outset, designing software that can withstand the dangers of the modern cybersecurity threat landscape. Following a Development Lifecycle One of the most important actions that developers can take is adopting a software development lifecycle that guides software projects through the design, implementation and testing phases and incorporates security requirements in a manner appropriate to each stage. Including security from the start yields a “baked-in” approach to security that results in solid code that can withstand the many tests that will be thrown at it. The alternative approach, “bolt-on” security, treats security requirements as an afterthought and results in ineffective and often inadequate security controls. Developers sometimes resist the idea of a development lifecycle because it conjures images of rigorous, stiff and formal practices that bog down development and slow progress on initiatives. Fortunately, this doesn’t need to be the case. Many organizations are now adopting agile approaches to software development that allow rapid iteration and offer the flexibility that both developers and customers crave. This is a perfectly acceptable approach that can result in very secure code development, as long as security requirements are considered in the early stages of design and built-in at each iteration of the software development process. Performing Input Validation While, there is no silver bullet for software security issues, input validation may be as close to a panacea as we’ll ever see. Whenever software developers create a mechanism that allows for any user input, they should treat that input as completely untrusted and test it thoroughly before allowing it to interact with other components. While it may be natural to assume that users are benevolent, that simply isn’t always the case. Developers should clearly document the acceptable types of input that may come from a user interaction and then scrub the input they receive to verify that it matches one of those expected patterns. For example, if a developer writes code that is expecting to see an integer between 0 and 10,000, user input should be scanned to ensure that it matches that pattern and is non-negative, below 10,000 and does not contain any non-numeric characters. 
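As a concrete illustration of the 0-to-10,000 integer check described above, here is a minimal validation sketch in Python. The function name, error messages and regular-expression approach are illustrative choices, not a prescribed implementation.

```python
import re

def validate_quantity(raw_input: str) -> int:
    """Treat user input as untrusted: accept only an integer in [0, 10000]."""
    cleaned = raw_input.strip()
    # Accept only the ASCII digits 0-9; this rejects signs, decimals,
    # letters and injection payloads in a single, explicit pattern check.
    if not re.fullmatch(r"[0-9]{1,5}", cleaned):
        raise ValueError("Input must be a whole number containing only digits.")
    value = int(cleaned)
    # Enforce the documented range before the value reaches other components.
    if value > 10000:
        raise ValueError("Input must be between 0 and 10,000.")
    return value

# validate_quantity("250") returns 250, while inputs such as "-3", "25.7"
# or "250; DROP TABLE orders" are rejected before they touch anything else.
```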
Performing input validation is a critical software development practice and protects code against many different types of attack. SQL injection attacks attempt to insert database instructions into user input in an attempt to pass them to the back-end database. Application Security Testing In addition to developing software securely, organizations should also routinely test the security of their code using a combination of automated assessment tools and manual penetration tests. These techniques ensure that code remains secure, even in the face of newly developed threats and newly discovered vulnerabilities. Automated testing tools scan applications for potential flaws and provide developers with a roadmap for remediating any deficiencies. During penetration tests, trained security professionals use the same tools that an attacker would leverage during an actual attack. This type of testing can be expensive and time-consuming but it also provides the most realistic assessment of how software will behave when it comes under attack. Certifications for Secure Software Development Developers and security professionals seeking to bolster their secure software development skills may choose to pursue professional certifications in the field. The Certified Secure Software Lifecyle Professional (CSSLP) certification program from (ISC)2, provides experienced software security professionals with a means to demonstrate that they have a well-rounded knowledge of application security issues. Earning this credential requires five years of professional experience and passing a four-hour exam consisting of 175 multiple-choice questions. Individuals focused on web application security may wish to pursue a more specialized certification, such as the GIAC Certified Web Application Defender (GWEB) certification available from the SANS Institute’s Global Information Assurance Certification program. GWEB focuses specifically on web application security issues, including SQL injection, authentication, cross-site scripting and input validation. The GWEB credential does not include an experience requirement. Candidates seeking the certification must pass a three-hour exam containing 75 questions with a score of 68% or higher. Software is truly increasing in importance to both the functioning of everyday life and the protection of sensitive information. As organizations continue to rely more and more upon software, the demand for developers skilled in creating secure code will continue to increase. Individuals pursuing a career in software development and/or information security should have a strong working knowledge of software security issues and may wish to demonstrate that knowledge by adding one or more application security certifications to their professional resumes.
<urn:uuid:52aeb820-9aa8-4939-90ff-d53b8b16cd48>
CC-MAIN-2017-04
http://certmag.com/science-certification-writing-secure-code/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00383-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927339
1,119
2.546875
3
In general, today’s security operations center (SOC) monitors security alerts and alarms from security products and threats indicated by a security information and event management system (SIEM). These alerts and threats turn into cases that funnel into a workflow system in use by the security team. After an initial review to determine whether the alert is a false positive, additional data is gathered so that analysis can take place. To put it another way, the security team tries to build a story around the valid alert. Once the story is created, a different team might be assigned to contain the incident, and that same team (or another) would be assigned to restore systems to a pre-infection state. This closely resembles today’s Detection-Analysis-Containment-Restoration security process.

While there has been some refinement of the security tools used at the detection stage, most of the security products available on the market are just a half-step better than old antivirus products. The HIMSS organization surveyed nearly 300 healthcare organizations, and the list of technologies healthcare providers had in place is one most of us could have recited from memory: AV, firewalls, log management, vulnerability management, IDS, access control lists, mobile device management and user access controls. A large majority of these security teams know they can’t stop current attacks (only 22% had confidence they could), and 81% said some new technology was needed. Security people are aware that processes built around those basic technology solutions have remained virtually unchanged for the last two decades.

Zeroing in on the process, it’s not hard to see what’s broken – the detection and analysis portion, or what I call the knowledge-building portion of the process. Today attackers run malware through all the latest detection techniques and anti-virus software prior to deployment to make it as invisible as possible. It may also be coded to evade malware sandbox detections. Once inside the network, it inherits the identity of the system’s user and that person’s access level. The attacker’s activity simply looks like normal IT activity, making all the technologies listed above blind to the attacker. Detection never takes place and the security process never kicks off. I can hear some say, “What about encryption – doesn’t that help?” Having valid credentials gets the attacker around this little problem. If the user is able to do their work, their access level allows the data to be decrypted.

The other part of the knowledge-building process is analysis. If you were lucky enough to have seen some evidence of malware on a system, it gets cleaned up, but there is little to suggest which systems were infected and what credentials were compromised. If the data is valuable enough, the attacker can start over with the same or a different set of valid credentials.

Exabeam moves the security team’s focus away from malware and to the credentials that enable it. To do this, the system learns the normal credential behaviors and access characteristics for a user and the user’s peer groups, so that anomalous activity can be surfaced and scored. Security alerts are automatically attributed to the user credential involved, and these alerts and the anomalous behaviors are placed on a timeline. This creates an attack chain that shows the intersection of credential use, assets touched, and security alerts. Voilà, the entire attack chain is automatically created. Detection and analysis are now a single “knowledge-building” function.
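The credential-centric approach described above can be illustrated with a deliberately simplified sketch: build a per-credential baseline of which assets are normally touched and at what hours, then score new events by how far they deviate. The scoring rule, weights and thresholds below are illustrative assumptions, not Exabeam's actual model.

```python
from collections import Counter

class CredentialBaseline:
    """Toy behavioral baseline for a single user credential."""

    def __init__(self):
        self.asset_counts = Counter()  # how often each asset is touched
        self.hour_counts = Counter()   # at which hours activity normally occurs
        self.total_events = 0

    def learn(self, asset: str, hour: int) -> None:
        """Record one observed event to build the credential's normal profile."""
        self.asset_counts[asset] += 1
        self.hour_counts[hour] += 1
        self.total_events += 1

    def score(self, asset: str, hour: int) -> float:
        """Return an anomaly score between 0 (normal) and 1 (never seen before)."""
        if self.total_events == 0:
            return 1.0  # no history at all: treat as maximally unusual
        asset_freq = self.asset_counts[asset] / self.total_events
        hour_freq = self.hour_counts[hour] / self.total_events
        # Rare assets and rare hours both push the score toward 1.
        return round((1 - asset_freq) * 0.6 + (1 - hour_freq) * 0.4, 2)

baseline = CredentialBaseline()
for _ in range(50):
    baseline.learn("HR-FILESHARE", hour=10)        # routine daytime access

print(baseline.score("HR-FILESHARE", 10))          # 0.0 -> expected behavior
print(baseline.score("DOMAIN-CONTROLLER", 3))      # 1.0 -> new asset at 3 a.m.
```

A real system would, of course, also compare a credential against its peer group and fold in security alerts, but the principle of surfacing and scoring deviations from a learned baseline is the same.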
User behavior analytics makes even the stealthiest attacks visible, and the analysis is created as the attack happens. You really have to see it to believe it. Attackers think it’s magic.
<urn:uuid:fa0c1d8d-8c85-4462-a9bf-c520f25dcd3f>
CC-MAIN-2017-04
https://www.exabeam.com/security/whats-wrong-with-todays-security-technologies-and-processes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952042
752
2.59375
3
Multiple-Choice Questions Are the Answer It doesn’t take much research to determine that multiple-choice questions represent the majority of questions in any high-stakes test, including IT certification tests. I’m not telling you anything new—according to my recent number crunching, this question type accounts for more than 95 percent of the questions on IT exams. There are alternative question styles, but even those share common traits with the multiple-choice question we all know and love. For example, a question where the testtaker simply moves the mouse pointer over a graphic and clicks to select an area is really just a multiple-choice question without the text. The same goes for a question where the task is to drag one or more objects on the screen to the correctly corresponding destinations. It might seem like you’re doing something different, but these are fundamentally multiple-choice questions. A multiple-choice question is described simply as providing all necessary information and asking the test-taker to make a choice, usually with the keyboard or mouse. All of the choices can be read or viewed. The candidate doesn’t need to produce anything, except to choose one or more of the answers. Because everything is there on the screen, the response is usually relatively quick, as the test-taker only has to recognize the correct choice. The ease of presenting information and collecting the answer is the power behind the multiple-choice question. A true drawback is that a multiple-choice question can be guessed correctly. But if an exam is well written, guessing is limited and reduced by the number of choices available, and is taken into account in the scoring and where the pass-fail mark is set. Writing a larger number of choices and incorporating more than one correct answer are popular ways to reduce the chances of guessing a question correctly. Besides its efficiency in allowing easy reading and quick responding, the multiple-choice question lends itself well to computerized testing. The computer can present the questions quickly, one at a time. After the response, the answer can be scored immediately, and after the last item, the overall score can be immediately provided. So-called “performance” questions, such as essay questions, are much more difficult to present, collect the answer and score. The test-taker often has to wait weeks before the final score is reported. Over the years, the multiple-choice question has been criticized for its inability to measure complex human performance. It is generally believed that the multiple-choice question can only measure the simple memorization of facts. For example, the question may ask who the first president of the United States was and provide four or more good choices, including the correct answer, George Washington. But the criticism is incorrect and unfair. Here’s an example of how higher-level skills and performance can be tested with a simple multiple-choice question: Several years ago, while at Novell working in the certification program, we came upon a problem: how to measure network engineers’ ability to effectively use the technical manuals and support encyclopedias stored on CDs. We decided to install CD players in all of the testing centers and distribute the CDs to them. We then authored multiple-choice questions that asked very technical questions that could only be answered by successfully launching and navigating the information on the CDs. 
During the question, a competent test-taker was able to get into the CD, find the information quickly, return to the test and answer the question by selecting the correct answer from a list of choices. What Novell was measuring was the higher-order cognitive skill of planning and conducting an efficient search for information. Those who could do it answered the questions correctly. Others guessed, and their lower scores reflected that. The candidates knew they were being tested on skills they used every day to solve network problems and responded favorably to the new test. While a bit more complex, the question was still multiple-choice. Because of their advantages and few problems, multiple-choice questions will continue to dominate the testing landscape for years to come. As audio, video, simulations and graphics become more common components of multiple-choice questions, they will become even more effective in measuring many IT certification skills. David Foster, Ph.D., is president of Caveon (www.caveon.com) and is a member of the International Test Commission, as well as several measurement industry boards. He can be reached at email@example.com.
<urn:uuid:69bc8e5f-0f56-433a-b590-a0c95a50a271>
CC-MAIN-2017-04
http://certmag.com/multiple-choice-questions-are-the-answer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956561
909
2.90625
3
Music is an art form whose medium is sound and silence. Its common elements are pitch (which governs melody and harmony), rhythm (and its associated concepts tempo, meter, and articulation), dynamics, and the sonic qualities of timbre and texture. Just like math, it is hard to say if music has been invented or just found to exist. Language, on the other hand, may refer either to the specifically human capacity for acquiring and using complex systems of communication, or to a specific instance of such a system of complex communication. Origin of language is somewhat unknown and there are several assumptions. Some theories are based on the idea that language is so complex that one can not imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. These theories can be called continuity based theories. The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans and that it must therefore have appeared fairly suddenly in the transition from pre-hominids to early man. These theories can be defined as discontinuity based. Similarly some theories see language mostly as an innate faculty that is largely genetically encoded, while others see it as a system that is largely cultural, that is learned through social interaction. Currently the only prominent proponent of a discontinuity theory of human language origins is Noam Chomsky. Chomsky proposes that "some random mutation took place, maybe after some strange cosmic ray shower, and it reorganized the brain, implanting a language organ in an otherwise primate brain". While cautioning against taking this story too literally, Chomsky insists that "it may be closer to reality than many other fairy tales that are told about evolutionary processes, including language". Somewhre in November last year Chomsky had an interview with Discover folks - you can read more here. In interview Chomsky states if you look at the archaeological record, a creative explosion shows up in a narrow window, somewhere between 150000 and roughly 75000 years ago. All of a sudden, there’s an explosion of complex artifacts, symbolic representation, measurement of celestial events, complex social structures - a burst of creative activity that almost every expert on prehistory assumes must have been connected with the sudden emergence of language. And it doesn’t seem to be connected with physical changes; the articulatory and acoustic (speech and hearing) systems of contemporary humans are not very different from those of 600000 years ago. There was a rapid cognitive change. And nobody knows why. Continuity based theories are currently held by a majority of scholars, but they vary in how they envision this development. Those who see language as being mostly innate, for example Steven Pinker, hold the precedents to be animal cognition, whereas those who see language as a socially learned tool of communication, such as Michael Tomasello see it as having developed from animal communication, either primate gestural or vocal communication. Other continuity based models see language as having developed from music. As per Mark Changizi, we're fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. 
So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. But what would be on the list? Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we're brilliantly capable at absorbing them, and from a young age. That’s how we know we're meant to be doing them, ie., how we know we evolved brains for engaging in language and music. What if we're not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? Mark's take on this is that both language and music are not part of our core - that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved - culturally evolved over millennia - for us. Our brains aren't shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains. If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be? We have auditory systems which have evolved to be brilliantly capable at processing the sounds from nature, and language and music would need to mimic those sorts of sounds in order to harness our brain. Mark base whole book on this subject. The two most important classes of auditory stimuli for humans are: - events among objects (most commonly solid objects), and - events among humans (for example human behavior). In his research, Mark has shown that the signature sounds in these two auditory domains drive the sounds we humans use in - speech and - music, respectively. For example, the principal source of modulation of pitch in the natural world comes from the Doppler shift, where objects moving toward you have a high pitch and objects moving away have a low pitch; from these pitch modulations a listener can hear an object’s direction of movement relative to his or her position. In the book Mark provides a battery of converging evidence that melody in music has culturally evolved to sound like the (often exaggerations of) Doppler shifts of a person moving in one’s midst. Consider first that a mover’s pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term to refer to this range. In the book Mark runs through a variety of specific predictions. For the full set of arguments for language and music you'll have to read the book, and the preliminary conclusion of the research is that, human speech sounds like solid objects events, and music sounds like human behavior! That’s just what we expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (of not having their origins via natural selection) and the giveaway is, ironically, that their shapes are natural (have the structure of natural auditory events). We also find this for another core capability that we know we're not "meant" to do - reading. 
Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30000 years), and yet reading has the hallmarks of instinct. Mark's research suggests that language and music aren't any more part of our biological identity than reading is. Counterintuitively, then, we aren't "supposed" to be speaking and listening to music. They aren't part of our “core” after all. Or, at least, they aren't part of the core of Homo sapiens as the species originally appeared. But it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

Almost a month ago, Geoffrey Miller and Gary Marcus had a public discussion about whether music is an instinct or a cultural invention, respectively. In recent years, archaeologists have dug up prehistoric instruments, neuroscientists have uncovered brain areas that are involved in improvisation, and geneticists have identified genes that might help in the learning of music. Yet basic questions persist: Is music a deep biological adaptation in its own right, or is it a cultural invention based mostly on our other capacities for language, learning, and emotion?

Marcus goes on to say that the oldest known musical artifacts are some bone flutes that are only 35000 years old, a blink in evolutionary time. And although kids are drawn to music early, they still prefer language when given a choice, and it takes years before children learn something as basic as the fact that minor chords are sad. Of course, music is universal now, but so are mobile phones, and we know that mobile phones aren't evolved adaptations. When we think about music, it's important to remember that an awful lot of features that we take for granted in Western music - like harmony and 12-bar blues structure, to say nothing of pianos or synthesizers - simply didn't exist 1000 years ago. When ethnomusicologists have traded notes to try to figure out what's universal about music, there's been surprisingly little consensus. Some forms of music are all about rhythm, with little pitch, for example. Another thing to consider is that music is not quite universal even within cultures. At least 10% of our population is "tone deaf", unable to reproduce the pitch contours even for familiar songs. Everybody learns to talk, but not everybody learns to sing, let alone play an instrument. Some people, like Sigmund Freud, have no interest in music at all. Music is surely common, but not quite as universal as language.

On the other hand, the bone flutes are at least 35000 years old, but vocal music might be a lot older, given the fossil evidence on human and Neanderthal vocal tracts. Thirty-five thousand years sounds short in evolutionary terms, but it's still more than a thousand human generations, which is plenty of time for selection to shape a hard-to-learn cultural skill into a talent for music in some people, even if music did originate as a purely cultural invention. Maybe that's not enough time to make music into a finely tuned mental ability like language, but nobody knows yet how long these things take. Whether or not Neanderthals sang, music remains relatively recent in evolutionary terms, less than a 10th of a percent of the time that mammals have been on the planet.
Still, we know responsiveness to music starts in the womb and kids show such a keen interest in music. We're born to listen for language, and music sounds sort of like language, so kids might respond because of that. But given the choice, infants prefer speech to instrumental music, and they analyze language more carefully than music. Video games, television shows and iPhones are all cultural artifacts that were shaped to be irresistible to human brains, and that provoke strong emotions like music, but that doesn't mean that human brains were shaped to be attracted to them.

There doesn't seem to be any part of the brain that is fully dedicated to music, and most (if not all) of the areas involved in music seem to have "day jobs" doing other things, like analyzing auditory sounds (temporal cortex), emotion (the amygdala) or linguistic structure (Broca's area). You see much the same diversity of brain regions active when people play video games. Face recognition has a long evolutionary history, and a specific brain region (the fusiform gyrus) attached, but music, like reading, seems to co-opt areas that already had other functions. Maybe, if we evolved music millions of years ago like they did. But since we're the only great apes with any aptitude for rhythm or melody, human music is probably much more recent: not enough time for such specialization of brain structure. And the songbirds never evolved language. If they had, we'd probably see overlapping brain areas for music and speech in their brains, just like ours. Which would have led their scientist-songbirds to argue that birdsong is just a side-effect of birdspeech.

One counterintuitive principle is that for sexually selected mental traits like music to work well as signals of general brain function and intelligence, they need to recruit a lot of different brain areas and mental abilities. Otherwise they wouldn't be very informative about the brain's general health. If musical talent didn't depend on general intelligence, and general mental health, and general learning ability, it wouldn't be worth paying much attention to when you're choosing a mate. Content analyses show that pop song lyrics have usually concerned lust, love, or jealousy - around the world, at least throughout the 20th century. There's an emotional resonance to courtship music that you just don't see with purely cultural inventions.

So why haven't we found any genes that are specifically tied to music? That's not surprising from a sexual selection perspective. For music to work as a "good genes" indicator in mate choice, music needs to recruit a lot of different genes and gene-regulatory systems and biochemical pathways. You shouldn't expect just a few "music genes" that explain most musical talent, but thousands of contributing genes. But that's not why we haven't found any music genes yet. Nobody's really looked. There's very little gene-hunting work on music, and hardly any twin research on the heritability of musical talent. There are two kinds of music genes that could matter: the music-talent genes that explain individual differences in musical talent among humans, and the music-capacity genes that explain why we have musical abilities at all compared to most other mammals. The music-talent genes might number in the tens of thousands. We already know there are more than half a million DNA base pair differences that contribute to general intelligence differences between people, and a similar number might influence musical intelligence.
But those music-talent genes will be much easier to identify using standard molecular genetics methods. The music-capacity genes that distinguish musical humans from non-musical chimps might be far fewer in number, but much harder to identify. If we can identify them though, and if they also exist in the Neanderthal genome (which is being pieced together now from fossil DNA), we'd know that music is probably at least 200000 years old, because we diverged from Neanderthals by then. So it's true that music doesn't fossilize, but we still might learn when music evolved from the genetics. If we could really show decisively that Neanderthals could sing, that sort of genetic evidence would certainly help, but unless we find genes that are specifically tied to music, it might be hard to go in the other direction: to deduce whether Neanderthals could sing based on their genomes. Chimpanzees are much less interested in music than humans are, but we still haven't been able to link that to a particular genetic difference.

Of course, as Mark suggests, music might just be an illusion of instinct caused by cultural evolution. Once humans were sufficiently smart and social that cultural evolution could pick up steam, a new blind watchmaker was let loose on the world, one that could muster designs worthy of natural selection, and in a fraction of the time. Cultural selection could shape our artifacts to co-opt our innate capabilities. If the origins of music come from nature-harnessing, then it will have many or all of the signature signs of instinct. But it won't be an instinct. Instead, it will be a product of cultural evolution, of nature-harnessing. And it won’t be a mere invention that we must learn. In a sense, the brain doesn't have anything to learn - cultural evolution did all the learning instead, figuring out just the right stimulus shapes that would flow right into our emotional centers and get us hooked. For some further discussion on this topic click here.

So, what is it to be human? Unlike Homo sapiens, we're grown in a radically different petri dish. Our habitat is filled with cultural artifacts - the two heavyweights being language and music - designed to harness our brains’ ancient capabilities and transform them into new ones. Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

Credits: Wikipedia, Discover Magazine, Noam Chomsky, Mark Changizi, Geoffrey Miller, Gary Marcus
<urn:uuid:1b73e07b-3f4f-4eec-89ad-87bef3f80006>
CC-MAIN-2017-04
https://community.emc.com/people/ble/blog/2012/04
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00071-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962478
3,314
3.46875
3
While networks are agnostic to the content of the serviced packets, end users have a very good understanding of what traffic they have and what is the expected shape of well behaved traffic. It is increasingly useful to pass and make use of such information upstream in order to better address the increasing risk of abusive or misbehaving traffic. This information needs to cross network boundaries while still being part of established trust domains. The presence of BGP sessions between networks indicates a good trust relationship between them and FlowSpec exploits this to facilitate passing traffic shape information either between routers, routing domains or networks. Examples of expected well behaved traffic from an end user’s perspective might be: - an online shop’s 90+% of customers originate in Europe and US with very few visits from elsewhere. Rate limiting traffic from elsewhere at for example 1000Mbps will cover the needs of those few visitors during normal use and will protect the network in case of abusive spikes from most probably malicious attacks. - a bank’s policy is to prohibit Bittorrent use. This could be enforced by dropping all packets targeting Bittorrent port ranges. - a video conferencing service uses UDP to pass most of the live and time-sensitive content. This is achieved best by redirecting UDP ‘flows’ directly to the forwarding farms while routing the typical session setup or authentication TCP flows to ordinary web farms. Even when this information cannot propagate upstream, being able to enforce policies at the network’s edge brings many benefits including advance planning and preparation. This allows networks to start using the technology even when their neighbors and upstream providers are not yet ready to support it. Of course, the full potential of BGP FlowSpec is reached when this traffic shape information propagates and is enforced as far upstream as possible. This might take time and this depends on wide BGP FlowSpec adoption by networks. The main factors for BGP FlowSpec broader adoption are: A. complexity that network administrators need to balance against benefits the technology brings. B. technological design flaws that limit the use cases that address varying network needs. C. scalability and computational constraints imposed by the technology on networking equipment. D. acceptance by neighbors increases the utility of the technology. Of course, acceptance usually increases as the technology matures. For example adopters of BGP FlowSpec report the following issues during early adoption: - scaling issues where BGP FlowSpec capable routers hit their CPU limits while trying to enforce FlowSpec traffic shaping rules. These days some of our customer testimonials show us that their network has been tested to perform admirably even with tens of thousands of FlowSpec rules. - implementation bugs were another significant reason. Many years have passed and most of the bugs have been addressed. - design limitations made BGP FlowSpec less useful for many networks. In the meantime the initial RFC5575 has been supplemented with additional capabilities to address many other network use cases. Networks are likely to implement BGP FlowSpec in the near future. At least these are the findings of a survey carried out by Juniper. The scale to the right highlights how likely individual networks assess whether they will implement BGP FlowSpec. 
Less than 5% of the respondents are very unlikely to implement it, while more than one third are very likely to introduce the technology even in their current environment. Even among those who currently do not intend to introduce BGP FlowSpec, most do not hold a very strong opinion against it and clearly indicate a willingness to reconsider once the environment becomes more favourable (perhaps once the technology is more widely adopted). Networks are ready to implement BGP FlowSpec as they can do it gradually, while becoming more comfortable with its capabilities. Along the way, more confidence and experience are gained with its deployment. Noction strives to include BGP FlowSpec capabilities in IRP in a manner that keeps complexity under control and specifically allows easy definition and review of BGP FlowSpec policies. Once the complexity issue is addressed, Noction believes its customers will gradually increase their reliance on this technology in order to reap its ever-increasing benefits.

Of course, there have been notable horror stories with BGP FlowSpec implementations. A few words regarding the well-publicized CloudFlare outage of 2013, which put BGP FlowSpec in the spotlight and was reported, for example, in a postmortem here – https://blog.cloudflare.com/todays-outage-post-mortem-82515 Indeed, it was a catastrophic event that affected a well-known Internet player. Still, it was just another case of an implementation bug, as reported by CloudFlare itself: “What happened instead is that the routers encountered the rule and then proceeded to consume all their RAM until they crashed.” The incident helped BGP FlowSpec implementations become more resilient and better designed, just like any other maturing technology. Three years have passed without similar cases, and it is time to start using this technology more broadly.
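To make the use cases from the beginning of this article concrete, the sketch below expresses them as FlowSpec-style match/action rules in plain Python data structures. The field names, prefixes, ports, rate values and the rendering function are illustrative assumptions; a real deployment would express these policies in its BGP implementation's own FlowSpec configuration (per RFC 5575 and its extensions).

```python
# Each rule pairs FlowSpec-style match criteria with a traffic action.
# Prefixes, ports and rate values below are placeholders, not real policy.
RULES = [
    {   # Online shop: rate-limit traffic arriving from outside Europe/US.
        # In practice "elsewhere" becomes a list of source prefixes; one
        # placeholder prefix stands in for that list here.
        "name": "rate-limit-rest-of-world",
        "match": {"destination": "203.0.113.0/24", "source": "192.0.2.0/24"},
        "action": {"rate-limit-bps": 1_000_000_000},  # roughly 1000 Mbps
    },
    {   # Bank: drop packets targeting common BitTorrent port ranges.
        "name": "drop-bittorrent",
        "match": {"protocol": "tcp", "destination-port": "6881-6889"},
        "action": {"discard": True},
    },
    {   # Video conferencing: steer UDP media flows toward the forwarding farm.
        "name": "redirect-udp-media",
        "match": {"protocol": "udp", "destination": "198.51.100.0/24"},
        "action": {"redirect-vrf": "media-farm"},
    },
]

def describe(rule: dict) -> str:
    """Render a rule as a human-readable policy line for review."""
    match = ", ".join(f"{k}={v}" for k, v in rule["match"].items())
    action = ", ".join(f"{k}={v}" for k, v in rule["action"].items())
    return f"{rule['name']}: if ({match}) then ({action})"

for rule in RULES:
    print(describe(rule))
```

Keeping policies in a reviewable, declarative form like this is one way to address the complexity concern raised above: rules can be inspected and audited before they are ever announced to a router.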
<urn:uuid:08595061-760e-44fd-8da3-c30c3e9f5156>
CC-MAIN-2017-04
https://www.noction.com/blog/bgp-flowspec
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00375-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950619
1,012
2.71875
3
If you could fly to the edge of our solar system you'd find giant magnetic bubbles about 100 million miles wide. That's what computer models digesting data from NASA's Voyager spacecraft, which are now close to 10 billion miles away from Earth, are suggesting as they try to make sense of the information being beamed back from that distant boundary. NASA explains: Like Earth, our sun has a magnetic field with a north and south pole. The field lines are stretched outward by the solar wind, a stream of charged particles emanating from the star that interacts with material expelled from others in our corner of the Milky Way galaxy. "The sun's magnetic field extends all the way to the edge of the solar system. Because the sun spins, its magnetic field becomes twisted and wrinkled, a bit like a ballerina's skirt. Far, far away from the sun, where the Voyagers are now, the folds of the skirt bunch up," said astronomer Merav Opher of Boston University. When a magnetic field gets severely folded like this, interesting things can happen. Lines of magnetic force criss-cross, and "reconnect". (Magnetic reconnection is the same energetic process underlying solar flares.) The crowded folds of the skirt reorganize themselves, sometimes explosively, into foamy magnetic bubbles. So far, much of the evidence for the existence of the bubbles originates from an instrument aboard the spacecraft that measures energetic particles. Investigators are studying more information and hoping to find signatures of the bubbles in the Voyager magnetic field data, NASA said. Understanding the structure of the sun's magnetic field will allow scientists to explain how galactic cosmic rays enter our solar system and help define how the star interacts with the rest of the galaxy. NASA says galactic cosmic rays are subatomic particles accelerated to near-light speed by distant black holes and supernova explosions. When these microscopic cannonballs try to enter the solar system, they have to fight through the sun's magnetic field to reach the inner planets. "The magnetic bubbles appear to be our first line of defense against cosmic rays," points out Opher. "We haven't figured out yet if this is a good thing or not." On one hand, the bubbles would seem to be a very porous shield, allowing many cosmic rays through the gaps. On the other hand, cosmic rays could get trapped inside the bubbles, which would make the froth a very good shield, NASA said.
<urn:uuid:b6962f17-f071-42ed-9e93-9afc7e27a313>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229447/security/nasa--the-edge-of-the-universe-is-home-to-1-million-mile-wide-magnetic-bubbles.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00283-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915965
530
3.78125
4
The definition of SDN from the experts for the kids (and adults, too). Nathan Pearce, Cloud and SDN Product expert, F5 Networks "Software defined networking (SDN) is like having the power to make new things, at the touch of a magic button – or even just by thinking about it! Imagine having a big shiny button on your bedroom wall and every time you want to do something faster or in a different way, you just press it and it happens. Just think, if you want to get to school faster, you could hit the button and a speedy slide from your bedroom window to the school gate appears. Or if you want the game you’ve ordered to arrive faster, just hit the button and, quick as a flash, the world’s fastest car drops off the delivery man with your new game!" Jennifer Pigg Clark, VP of Mobility Research, 451 Research "You know how when you drive somewhere with you parents lots of times one parent (maybe your Dad) does the driving and the other parent (maybe you mom) tells him where to turn? That’s just how network traffic works – someone has to drive the information but someone else has to know where to turn so the information gets where it’s going. "SDN is like having GPS. Someone still needs to drive the car, but mom can stay home, because the GPS system will tell Dad where to turn. Not only that – it can tell all the dads, in all the different cars, where to turn. So that’s a lot simpler, it’s faster, there are fewer fights, everyone’s happy and all the moms can get together and go do something else – like go to the pool with you, or build a new civilisation – fun stuff." Andy Chew, Cisco’s UK & Ireland Managing Director of Architectures "By 2020 there will be fifty billion things connected to the Internet – or the information super highway as we used to call it in the 1990s. The more devices we use, the more network traffic we’ll experience – if this continues unchecked the information superhighway is likely to become one very big and congested traffic jam! "Software Defined Networking (SDN) is an approach to networking that will help reduce and alleviate this traffic congestion by being able to programme the ‘highway’. SDN works by separating the network control-plane, think of this as a traffic update on the radio, from the network devices, think of these as the cars. "By being able to differentiate critical applications (suggested new traffic routes) from noncritical ones (sitting in the traffic jam) it allows companies to dynamically allocate network resources to higher-priority applications – thereby increasing traffic flow and making sure the road stays clear." Stu Bailey, founder and CTO, Infoblox "Think about the tablet you use to watch videos and play games, or the phone your Dad uses to check his email or the laptop your Mum uses for work. These are all different kinds of computer. "Imagine each of these computers is a city full of people doing different things. Today, these cities are connected by highways with cars carrying people back and forth, so one computer can talk to another. These highways have traffic lights and traffic jams and car crashes that slow things down. "Now, let’s imagine each computer city has a magic balloon around it. When ten or twelve or even a thousand computer cities want to talk to each other, the people inside make the cities float around and find each other! No more highways, no more cars, no more crashes. As long as two magic balloons are touching, the people inside can talk and visit. 
This magic world is called SDN, and it’s how computers will talk to each other before you’re in high school." Dr Nick Race, senior lecturer at School of Computing and Communications at Lancaster University "Ever been lost or stuck in traffic and wanted advice on exactly the best way to reach your destination? You might ask a passer-by, who helps you get closer to your destination – only for you to run into more traffic. You turn back, looking for another route. This is very much like how today’s computer networks operate. "Now let’s imagine the same scenario using SDN. SDN is the networking equivalent of having a reliable, up-to-date mapping application for your smartphone: retrieving the very latest maps, using GPS to plot your location with a central server constantly calculating the best route for you to take. The power of SDN is the software: running on a central server it has a complete picture of the network and can give you the most up-to-date information to help guide you to your destination and avoid those annoying jams." Ed Ogonek, president and CEO, CENX. "The Internet is like a kindergarten classroom – where the teacher asks you to pass a ball from the front of the room to the back and each of you decide on your own what is the best way to do so. The ball likely moves in a haphazard manner from one child to another, even touching some multiple times. You may pass it, roll it, throw it, or even drop it. "A Software Defined Network is one in which the teacher first lines you all up in a straightforward line, tells you to take the ball from the child on your right and pass the ball to the other child on the left. And you do what you’re told. This saves you a lot of time since the ball gets to the back of the classroom much faster." Mike Fratto, principal analyst of Enterprise Network Systems, Current Analysis "Regular networking is like playing soccer or kick ball. You play a position and your friends play other positions. You know what to do but sometimes your coach or team mates yell out suggestions. You may or may not do what they say, but you’re all trying to make a goal. SDN isn’t like that. SDN is like a school play. You all have your costumes wear and lines to learn. Your teacher organizes you into places. Then you go on stage and you read your lines and if someone goofs, you fill in. In the end, the audience applauds." Stuart Greenslade, sales director of EU networking, Avaya "The benefits of SDN can be likened to removing the constraints of the existing plumbing in your house, when you are refurbishing it. For example when planning a new kitchen you might want to place the sink in the middle of the room because there it would be equidistant between the fridge and the cooker and therefore in the most practical location. "However you may find that this isn’t possible and that the sink has to be in the corner, because that is where the existing water and waste pipes come into the kitchen. In an SDN environment, network managers are no longer constrained by ‘the plumbing’ – i.e. they would be able to locate the sink in the most useful location, and even move it around several times, regardless of where the pipes are. "Software simply becomes a toolset and the network manager can move to focusing on solving business problems, not overlaying a software vision on top of business problems. Vitally, SDN allows network managers to really concentrate on the services that their network or data centre delivers – they can organise their networks by use and make them more flexible." 
Akshay Sharma, research director of Gartner’s Carrier Network Infrastructure Group

"It’s all about bringing the puffy clouds and the stars in space to you and to your toys here on Earth, and this will allow you to have your toys move to the puffy clouds and the stars in space, so you can play with them across other places, and on other devices: TVs, smartphones, tablets…and to enjoy them as you like, and share them with your friends too, and keep everyone happy…"

Clive Hamilton, VP Network Services at NTT Europe

"Think of a network as a football pitch and the ball is the data you want to deliver. Each player has his or her function on the pitch; the striker, the defence and so on. And they all have to work together to deliver the ball to the back of the net.

"But rather than them all having their own opposing strategies on how to achieve this or working individually, which would be chaotic and an impractical use of resource and their individual skills, they need someone to bring them together.

"SDN is the football manager who defines and executes the overarching game plan and strategy. It can also change the game plan in real time to take account of events on the field, such as injuries (downtime and glitches) or a tackle by the opposition that takes a key player down (network conditions that prevent the delivery of the ball, such as network congestion)."

Don McCullough, Director Strategic Communication at Ericsson

"SDN is like when I let you use my pots and pans to play games instead of cooking dinner. You are like a startup company thinking of new ways to have fun, and that is great. But I still have to cook dinner, so you must clean them off and give them back to me at the end of the day. The SDN controller is like me letting you think up new ways to play with my pots and pans. It opens up the network so that many different people and companies can try new ideas that will benefit people all around the world. But it also sets up rules that make sure that the network is protected and maintained properly."
<urn:uuid:b3f47989-f236-444b-9445-bcfc869d2c81>
CC-MAIN-2017-04
http://www.cbronline.com/news/enterprise-it/it-network/20-ways-to-explain-software-defined-networking-to-a-five-year-old-4348216
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952193
2,010
2.75
3
Young people and old people use the Internet differently, and their privacy expectations vary. Young people are very fluent with social media – but not necessarily technically fluent. They’re used to living more of their lives in public – a thing that before was reserved only for celebrities. They also care a lot about having control over their data – they care about their parents or teachers having access to it. But, interestingly enough, they are also more trusting of governments and corporations. They’re likely to mind security cameras or screenings less than older people.

Or, at least, so says Bruce Schneier, BT Counterpane’s CTO and renowned security critic and author. At this year’s Network and Information Security Summer School organized by ENISA in Greece, with the general theme “Privacy and Security in the Future Internet”, he talked about the generational gap that divides users and their attitudes towards privacy. He also talked about how, generally, our privacy expectations are higher than the actual privacy we can get.

He says that, contrary to what people might think, users are not Google’s customers – advertisers are. It is in Google’s interest that users share as much information about themselves as possible, and use their services as often as possible – and that’s why they talk a lot about privacy and privacy settings, but actually do little. They’re trying to change the privacy balance in their favor, and they are concerned by what is possible, what is legal and what is sellable.

Schneier says that in this day and age everything we do produces data, and that this data is collected somewhere. Our communications and actions used to be informal, but now everything is stored and researchable. Basically, we’re leaving massive digital footprints. And “systems never forget,” he says. People have very little control over their privacy. Just as we judge previous generations for not giving enough attention to the issue of pollution, he thinks that this generation will be judged if they don’t get a hold on the privacy issue and resolve it.

A great problem with all of this is that the legal system can’t keep up with the fast pace at which the technology changes. “I would like to see laws that are technologically invariant, but that’s hard to do. I’d like to see more active legislation for protecting privacy,” Schneier says. When asked if educating, teaching young people about privacy could be a good idea, he seemed somewhat skeptical – “What are the right privacy decisions, anyway?” And he also noted that we have been trying to educate the public about choosing a strong password for years – and we all saw how that went. And this is much, much harder.
<urn:uuid:19ff9a26-43df-468c-80e8-049a12c569c9>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2010/09/14/privacy-expectations-and-the-generation-gap/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00246-ip-10-171-10-70.ec2.internal.warc.gz
en
0.971878
574
2.640625
3
Refer to the exhibit. EIGRP has been configured on all routers in the network. What additional configuration statement should be included on router R4 to advertise a default route to its neighbors?

Refer to the exhibit. Router RTA is the hub router for routers RTB and RTC. The Frame Relay network is configured with EIGRP, and the entire network is in autonomous system 1. However, routers RTB and RTC are not receiving each other’s routes. What is the solution?

Refer to the exhibit. EIGRP is configured on all routers in the network. On the basis of the show ip eigrp topology output provided, what conclusion can be derived?

Refer to the exhibit. Which three statements are true? (Choose three.)

Which command will display EIGRP packets sent and received, as well as statistics on hello packets, updates, queries, replies, and acknowledgments?

Which three statements are true about EIGRP operation? (Choose three.) Select 3 response(s).

Which two statements about the EIGRP DUAL process are correct? (Choose two.) Select 2 response(s).

What are three key concepts that apply when configuring the EIGRP stub routing feature in a hub and spoke network? (Choose three.) Select 3 response(s).

Based on the exhibited output, which three statements are true? (Choose three.)

Refer to the exhibit. EIGRP is configured with the default configuration on all routers. Auto-summarization is enabled on routers R2 and R3, but it is disabled on router R1. Which two EIGRP routes will be seen in the routing table of router R3? (Choose two.)
<urn:uuid:9b7d5079-b952-4843-ac2b-57627e63d515>
CC-MAIN-2017-04
http://www.aiotestking.com/cisco/category/exam-300-101-implementing-cisco-ip-routing-route-v2-0-update-july-21th-2016/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00550-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945529
356
2.75
3
David W. Bennett
August 20, 2011

The Challenges Facing Computer Forensics Investigators in Obtaining Information from Mobile Devices for Use in Criminal Investigations

There are a number of electronic personal devices that are labeled "mobile devices" on the market today. Mobile devices include cellphones; smart phones like the Apple iPhone and Blackberry; personal digital assistants (PDAs); and digital audio players such as iPods and other MP3 type devices. Laptop computers, tablets and iPad products are not typically classified as mobile devices because they are not small enough to be considered handheld. Today, the ever popular smartphone comes with a storage capacity that is similar to a laptop while commonly utilized as a portable office, social network and entertainment center all rolled into a solitary, convenient device. A smartphone is a mobile device that provides advanced computing and offers the ability to run mobile applications with more connectivity options than a cellular phone. The technological capability and storage capacity of mobile devices have grown exponentially. Over the last decade, capabilities and features of mobile devices have turned them into data repositories that can store a large amount of both personal and organizational information. Unfortunately, criminals have not missed the mobile device information revolution. Within the past few years, they have increasingly been using mobile phones and other handheld devices in the course of committing criminal acts. For example, a drug dealer may keep a list of customers who owe him money in a file stored on his handheld device, or a child pornographer could keep nude images of underage children engaging in sexual activities on a mobile device for the purposes of trading photos or video files with other pedophiles. Indeed, almost every class of crime can involve some type of digital evidence from a device that is essentially a portable data carrier. This increases the potential for incriminating data to be stored on mobile devices and to be utilized as evidence in criminal cases. Can valuable information be obtained from a mobile device to assist in a criminal investigation? What are the challenges a forensics investigator faces in obtaining information from these devices? Mobile devices can contain electronic records such as electronic mail, word processing files, spreadsheets, text messages, global positioning system (GPS) tracking information and photographic images that can provide law enforcement personnel with essential evidence in a criminal investigation. A mobile phone’s ability to store, view and print electronic documents is easily utilized from a single hand-held device with the processing power and the storage capacity similar to a bulky laptop (Marwan Al-Zarouni, 1).

Need for Mobile Forensics

Mobile device forensics is the process of recovering digital evidence from a mobile device under forensically sound conditions and utilizing acceptable methods. Forensically sound is a term used in the digital forensics community to justify the use of a particular technology or methodology. Many practitioners use the term to describe the capabilities of a piece of software or forensic analysis approach (McKemmish, 3). Mobile devices vary in design and manufacturer. They are continually evolving as existing technologies progress and new technologies are introduced. 
It is important for forensics investigators to develop an understanding of the working components of a mobile device and the appropriate tasks to perform when they deal with them on a forensic basis. Knowledge of the various types of mobile devices and the features they possess is an important aspect of gathering information for a case since usage logs and other important data can potentially be acquired using forensics toolkits. Mobile device forensics has expanded significantly over the past few years. Older model mobile phones could store a limited amount of data that could be easily obtained by the forensics investigator. With the development of the smartphone, a significant amount of information can still be retrieved from the device by a forensics expert; however the techniques to gather this information have become increasingly complicated. The demand for mobile device forensics stems from mobile phones being employed for such functions as to store and transmit both personal and corporate information. The use of mobile phones in online transactions such as stock trading, flight reservations and check-in; mobile banking; and communications regarding illegal activities that are being utilized by criminals has created a need for mobile device forensics. While it took decades to convince legitimate businesses that mobile devices could increase sales, communications, marketing and other improvements to their operation; crime organizations were well aware of the substantial benefits that mobile phones could provide (Mock, 1). Law enforcement and forensics investigators have struggled to effectively manage digital evidence obtained from mobile devices. Some of the reasons include: - Mobile devices require specialized interface, storage media and hardware. - File systems that are contained in mobile devices operate from volatile memory or computer memory that requires power to maintain stored information versus nonvolatile memory devices like a standalone hard disk drive that does not require a maintained power supply. - The diverse variety of operating systems that are embedded in mobile devices. - The short product cycles from the manufacturers to provide new mobile devices and their respective operating systems are making it difficult for law enforcement agencies to remain current with new technologies. A few of the more well received commercial off the shelf (COTS) products and open source applications available to the forensics community are reviewed below; however, no recommendations are made or implied. Of the most emerging commercial products, stands the Cellebrite Universal Forensics Extraction Device (UFED) Forensic System – a standalone mobile forensic device utilized both in the field and in the research lab. The UFED device supports most cellular device interfaces including serial, USB, infrared, and Bluetooth and can provide data extraction of content such as audio, video, phone call history and deleted text messages stored in mobile phones. The Cellebrite product is popular with investigators because it works well with the Apple iPhone and the acquisition methods can recover a significant portion of the data on the iPhone device. The firmware, which is used to run user programs on the device, is updated often enough to support new mobile devices and its functionality for the forensics examiner. 
Paraben Corporation’s Device Seizure product is another COTS forensic acquisition and analysis tool for examining over 2,200 handheld devices including cellular phones, PDAs and GPS devices. In addition, Device Seizure is designed to support the full investigation process and can perform physical acquisition through a data dump in its ability to recover deleted files and other information. The Device Seizure product, according to many experts in the forensics area, is considered shelf-ware and often will not perform as marketed (Mislan). Final Data’s Final Mobile Forensics product is another and is specific to Code Division Multiple Access (CDMA) mobile phones. CDMA phones were first launched commercially in Hong Kong in 1995 and are now currently utilized by major cellular carriers in the United States as an alternative to Global System for Mobile communications (GSM) technology. To help gain perspective, the wireless world is divided into GSM (standard outside of US and used inside US by AT&T and T-Mobile) and CDMA (standard in North America and parts of Asia). While there may never be a single standard technology worldwide, GSM is used in 219 countries and territories serving more than three billion people and providing international travelers the broadest access to mobile services (Moore, 3-5). Another type of product, a flasher box, is available but not recommended as a substitute for one of the above-mentioned automated COTS products because they are not always reliable. Flasher boxes are not designed for forensic work but can help recover data that is not readily available. Flasher boxes should never be used as a first response as they are considered a dangerously intrusive alternative and should only be utilized by trained or highly experienced investigators for their use in controlled environments as they can be technically challenged and complicated to use. Although flasher boxes do not require any software to be installed as with other forensics toolkits, modifications to the data can occur very easily through incorrect use, thus, leaving the evidence tainted and deemed useless to a criminal investigation. Flasher boxes are not usually documented by any best practices or principles, therefore, there are no simple methods to determine if they do preserve evidence in the mobile device’s memory and no guarantee that flashers will work in a dependable manner. Some examples of open source products that are freely available for download but limited in features when compared to commercial products include BitPim, a program that allows the user to view and manipulate data on many CDMA phones; Smelter for use on Siemens brand mobile phones; and ChipIt used to explore GSM Subscriber Identity Module (SIM) cards to view and copy a mobile device phone book. Although open source products like the aforementioned ones are heavily adopted and easily available to the forensics investigator, there are many issues that arise such as timely updates to the software, limited functionality and quality assurance testing of the software has been known to be problematic (Moore, 3-5). Information is stored in the mobile phone’s internal memory. Pertinent data such as call histories are stored in proprietary formats in locations that will alter that data according to phone model. Even the cable used to access the mobile device’s memory will vary according to manufacturer and model. 
Many examiners look at the SIM cards, which store personally identifiable information (PII), cell phone numbers, phone book information, text messages and other data for valuable information because it is typically stored in a standard format; however, the limited storage capacity of a SIM card forces the majority of the data to be stored on the phone itself. Unlike traditional computer forensics on a desktop or laptop computer where the investigator would simply remove the hard drive, attach to a write blocker device thus allowing acquisition of information on a computer hard drive without creating the possibility of accidentally damaging the drive contents and image the hard drive in order to fully analyze the data; the process to extract information from a mobile device is more complicated. There are a number of complex mobile forensics software applications to assist in the removal of data that are available to the forensics community. However, the lack of a leading edge tool and decreasing budgets for acquiring the tools are an ongoing problem (Mislan, 1-3). Since no single tool comes highly recommended by the forensics community, it is often desirable to use a range of software tools to acquire the data, thus increasing the budget needed to acquire the appropriate tools. The software tools available are expensive and law enforcement agencies are operating under restricted budgets and fixed resources. ComScore, a marketing research company that provides digital marketing intelligence for Internet businesses, estimate that roughly 63 million smartphone subscribers are in the United States, of which, Research in Motion (RIM)’s Blackberry device lead the pack with 31.6 percent, Google’s Android in the number two spot with 28.7 percent and Apple iPhone at number three with 25 percent of the market. ComScore data states that 234 million Americans ages thirteen and older used some type of mobile device in December 2010; however, the more interesting data are the mobile content usage in December 2010. The data estimate that 68 percent of US mobile subscribers used text messaging on their mobile device, web browsers were used by 36.4 percent and mobile applications usage at 34.4 percent (ComScore Web). An example of one of the fastest growing smartphone devices is the iPhone from Apple Inc. which debuted in January 2007. There are entire books dedicated to the operating systems for the Apple products as well as the development of applications for them. Like most electronic devices, the iPhone is a collection of modules, computing chips and other electronic components from various manufacturers making it difficult to utilize a “one size fits all” forensics software application as a staple for the forensics process. In fact, this is true for most mobile devices on the market. There does not seem to be a single vendor that is the emerging leader in forensics toolkits and oftentimes, as is the case with the popular iPhone, forensics investigators are relying on the hacker community for assistance in analyzing mobile devices (Mislan). Today’s mobile phone devices have a large storage capacity and a wide range of applications and connectivity options available to the user with each telecommunications provider. Mobile device forensics applications and toolkits are relatively new and developers are having difficulty in keeping up with the emerging technological advances due to the revolving door of products from market demand. 
The forensic tools available are often limited to one or more phone manufacturers with a limited number of devices supported (Marwan, 2-3). Regarding standards, the only evaluation document available for mobile phone forensics toolkits is published by the National Institute of Standards and Technology (NIST) (Ayers NIST Web, 1-2). NIST and various law enforcement staffs help to develop the requirements, assertions and test case documents to evaluate the toolkits and to assist in providing guidance in choosing the correct product to fit their need. The NIST evaluation document contains generic scenarios created to mirror real-life situations that may arise during a forensic examination of a mobile device. The NIST scenarios serve as a baseline for helping the forensics community determine a tool’s capacity to acquire and examine data in order to gain a perspective on the correct tools to invest. The NIST evaluation documents are considered to be an important resource for forensics investigators to maintain quality control and to validate toolkit functionality for mobile device forensics in proper data acquisition and reporting. Another organization discussing mobile device standards is a forum formerly entitled Open Mobile Terminal Platform (OMTP) and now called the Wholesale Applications Community (WAC) that has been created by mobile network operators to discuss and formulate standards with manufacturers of cell phones and other mobile devices. The goal of the WAC is to encourage open standardized technologies and allow developers to deploy applications across multiple devices and operators through the use of the standard technologies. The WAC has published some requirements for the support of advanced SIM cards and mobile device security but has mostly received broad support from European mobile device operators. It is no simple task to try and create standards for such a varying group of device manufacturers who utilize proprietary circuits and do not seem to agree on a communications standards so the forum has had limited success in the United States. Apple has already stated they will not join any standards. The outcome of the WAC will likely be a broad set of guidelines that will be adopted inconsistently by manufacturers. It would be prudent for the government to support open standards in order to lower the cost for law enforcement forensics investigators to recover data for investigations and to choose the appropriate tools to utilize. There are many devices that are cheaply manufactured in China and are very difficult to perform forensics by examiners. The primary reason is that inexpensive Chinese cell phones are unbranded, meaning they have no International Mobile Equipment Identity (IMEI) number and therefore, cannot be traced. The phones are attractive to criminals and terrorists who often utilize the cell phones for activities such as detonating bombs without being detected. A unique IMEI number is required for all GSM phones. This number allows a signal tower to identify individual cellular handheld devices in a service network which in turn helps the military and law enforcement establish the location of the phone (Moore, 5). With an unbranded phone, the absence of the IMEI number makes it impossible to track these mobile devices; thus, making the Chinese-made phones attractive to criminals and terrorist organizations alike. The United States Armed Forces has found an abundance of the Chinese-made cell phones in theater while in the Middle East. 
The India government has banned the Chinese-made cell phones from entering the country; however, these low-cost phones have penetrated into Pakistan and other developing markets. This is proving to be a serious security issue for American troops stationed in the Middle East. There is much exploration to be conducted in the area of these devices as China is one of the world’s largest and fastest growing markets for inexpensive and unbranded mobile devices. The investigative world knows little about the design, make, manufacturers and behavior of these mobile devices. Forensics evidence is only as valuable as the integrity of the method that the evidence was obtained. The methods applied to obtain evidence are best represented if standards are known and readily established by the digital forensics community. The Fourth Amendment limits the ability of government agents to perform search and seizure evidence tactics without a warrant, including computers. The Fourth Amendment states: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized. The Fourth Amendment question that typically comes up in digital evidence cases asks whether an individual has a reasonable expectation of privacy having electronic information stored on electronic devices under that individual’s control. Computer evidence can present a challenge for both prosecutors and defendants alike. A guide to offering mobile device data as evidence is beyond the scope of this research but a few examples of some digital forensics issues in real life situations are described below. A legal issue in presenting evidence is the “best evidence rule” which states that to prove the contents of a document, recording or photograph, the “original” document, recording or photograph is ordinarily required. For example, in United States v. Bennett, 363 F.3d 947, 953 (9th Cir. 2004), a federal agent testified about information that he viewed on the screen of a GPS on the defendant’s boat in order to prove he had imported drugs across international waters. It was decided the agent’s testimony violated the best evidence rule because he had only observed a graphical representation of data from the GPS instead of actually observing the professed path the boat had been following during the encounter. Since the U.S. sought to prove the contents of the GPS, the best evidence rule was invoked and required the government to present the actual GPS data or printout of the data, rather than the testimony from the federal agent (Open Jurist Web, 1-2). In 2010, a Japanese sumo wrestling match-fixing scandal was brought to light after investigators analyzed data left on fifty cell phones seized from wrestlers of the Japan Sumo Association (JSA) while probing a baseball scandal in that country. The Japanese police were able to retrieve and restore electronic mail messages previously deleted from the mobile phones including messages exchanged among wrestlers who were being implicated in the wrestling bout-rigging case. The sumo wrestlers refused to turn over their mobile devices to law enforcement claiming their phones were damaged due to water or the battery had died in the phones. 
The case is still ongoing in Japan but members of the JSA plan to obtain data left on the cell phones utilized by the suspected wrestlers to restore deleted email messages in order to prove the case against the sumo wrestlers. Even if deleted, the cell phone email data remains in binary format on the handheld device’s memory. This is called data remanence or the residual representation of data that remains after attempts have been made to remove or erase the data. Through digital forensics, even mobile devices that have been ruined or immersed in water can still recover data unless the device’s memory chips are destroyed (Daily Yomiuri Online Web). Like digital evidence from a computer, it is necessary to have proper legal authority in order to perform a forensics investigation of cellular telephones and mobile handheld devices. An exception that is supported by case law (U.S. v. Finley C.A.5 Tex., 2007, & U.S. v. Carroll N.D. Ga. , 2008) allows a search “incident to arrest” and is often connected with searches of arrestees and motor vehicles. For example, in the U.S v. Finley case, it was noted that the defendant in the case “had conceded that a cell phone was analogous to a closed container” for the purpose of Fourth Amendment analysis Cyb3rcrim3 Web). Such searches are allowed by the court to be performed for the preservation of evidence that could easily be altered or damaged. This exception for handheld devices is restricted by a limited period of time and according to law, may be searched without a warrant only if the search is “substantially contemporaneous with the arrest (U.S. v. Curry D Me., 2008) (Lewis, 2). The authors of the Fourth Amendment could not have envisioned the powerful technology of today’s electronic age and courts have only begun to answer difficult questions that are being introduced through the use of these devices. Current Fourth Amendment doctrine and precedent cases suggest that the United States Supreme Court would consent to invasive searches of a mobile device found on the person of many individuals and has allowed an exception permitting warrantless searches on the grounds that law enforcement should be allowed to look for weapons or other evidence that could be linked to an alleged crime. The Obama administration and many local prosecutors feel that warrantless searches are perfectly constitutional during arrests (McCullagh,2). Privacy advocates feel that existing legal rules allowing law enforcement to search suspects at the time of an arrest should not apply to mobile devices like the smart phone because the value of information being stored is greater and the threat of an intrusive search is much higher, such as PII. Personally identifiable information (PII) is information connected to an individual including but not limited to education, financial transactions, medical information, and criminal or employment history which can be used to trace that individual’s identity such as name, social security number, or birth date. While technologies have evolved over the years, the search incident principle has remained constant. The Fourth Amendment applies to mobile electronic devices and digital evidence just as it does any other type of criminal evidence. Legally, when handling computers and mobile devices, it is best for the forensics investigator to treat them as they would a closed container, such as a briefcase or a file cabinet. 
Generally, the Fourth Amendment prohibits law enforcement personnel from accessing, viewing, or examining information stored on a computer or mobile device if the law enforcer would be prohibited from opening a closed container and examining its contents in the same situation. The forensics investigator should always be aware that laws vary state by state and unopened electronic mail, unread texts, and incoming phone calls of seized devices may present non-consensual eavesdropping issues. In digital media searches, the media is frequently searched off site and in an enclosed forensics laboratory. Generally, courts have treated the offsite forensics analysis of seized digital media as a continuation of the initial search and thus, the investigator is still bound by the Fourth Amendment. Because this analysis is often treated as part of the initial search, the government bears not only the burden of proving the seizure was reasonable and proper, but also that the search was conducted in a reasonable manner. To ensure that search and seizure forensics analysis meets the burden later at the trial, the forensics investigator should generate a written report with clear documentation of the analysis. Chain of Custody and Preservation of Evidence The goal of a forensic investigator is to obtain evidence utilizing the most acceptable methods, so the evidence will be admitted according to law in the trial. Obtaining a judge’s acceptance of evidence is commonly called admission of evidence. Evidence admissibility will require a lawful search and the strict adherence to chain of custody rules including evidence collection, evidence preservation, analysis, and reporting. According to the International Organization on Computer Evidence, some general principles should be followed in recovering digital evidence for chain of custody: - All of the general forensic and procedural principles should be adhered to when dealing with digital evidence. - Upon seizing digital evidence, any actions taken should not modify the original evidence. - When it is necessary for personnel to access the original digital evidence, the personnel should be appropriately trained for the purpose. - All activities associated to the seizure, access, storage or transfer of digital evidence must be fully and properly documented, preserved and available for review. - An individual is responsible for all actions taken with respect to digital evidence when digital evidence is in that individual’s possession. - Any agency that is responsible for seizing, accessing, storing or transferring digital evidence is responsible for compliance with all six principles (Guidelines for Best Practice in the Forensic Examination of Digital Technology 17-18). There are several publications, including those from the U.S. Department of Justice, that do not list any doctrine or principles like the ones aforementioned from the International Organization on Computer Evidence; however, many of the points addressed in the above principles are covered and provide a comprehensive explanation of the forensic process as well as related legal issues in the United States. As a rule, in criminal court proceedings, the process is often more scrutinized than the actual evidence recovered for a criminal investigation. An important part of the preservation of evidence process is in securing and isolating cell phones and other mobile devices found on-site for transport to the forensics lab for evaluation. 
While a mobile phone is powered on, it will search for the strongest signal, usually from the nearest active cellular tower, or a tower that enables the device to obtain the best signal. As a mobile device is transported, it will continue to search and adjust to maximize the strength of signal with that tower. The designation of the most recently connected cellular tower is then recorded as a database entry in the file system of the cellular phone; thus, when a mobile device moves to a new area, a new entry will be updated in that database. The most important step for a first-responder investigator, when arriving at the scene of a crime and identifying a mobile device for possible evidence submission, is to determine how best to preserve that device and its data. Recording and documenting the scene, including photographs of the mobile device in an undisturbed state should be included. It is recommended to power the mobile device off to preserve the data and battery power. If it is not possible to power the device off in a safe manner, the phone should be protected from cellular phone towers. Aside from locking down the mobile device by either disengaging or maintaining the power supply, the investigator should seize any additional accessories to the device such as SIM and media cards, headsets, charger cables and cases that could potentially contain evidence. When a mobile device has been powered off, text messages and other data may queue for delivery when the phone is powered back on and returned to service. The queued messages and data can overwrite old and deleted messages and/or data once they are delivered to the carrier. Carrier providers may update system files and roaming services when the mobile device is connected to the system. There will also be the potential for corruption of downloaded data as well as the file system of the device during a forensic examination when the system updates are transmitted to the system. The equipment that works the best is Radio Frequency (RF) shielded test enclosure boxes such as the type from a forensics product vendor like Ramsey Electronics. The Ramsey boxes ensure the mobile device is isolated from a cellular carrier’s network, and other RF signals to prevent any incoming or outgoing communications, including GPS tracking. Another option to transport a mobile device from the crime scene to the crime lab is a Faraday bag. Faraday bags are specially designed RF plastic coated shielded bags used to shield a mobile device from external contact. The bags are coupled with a conductive mesh to provide secure transportation to the laboratory. One issue with Faraday bags is that, oftentimes a cell phone will continue to search for a signal even while in the protected bag thus zeroing out the register that holds the location data – and making the device useless as an evidence artifact. Yet another issue is the increased activity while in the Faraday bag while the mobile device is powered on that can cause the battery to fail at a faster pace. With the Apple iPhone in particular, it is imperative for the forensic investigator to properly seize the mobile device due to the option of the Remote Wipe feature on the phone. A user can perform this command if the smart phone is connected to the Internet or phone network. If the device is powered off or placed in a Faraday bag, it cannot be remotely wiped; however, once powered back on, the wiping process, if activated, will automatically be invoked. 
When choosing a shielding artifact like one of the above-mentioned products, it is important to enable the forensics investigator to utilize the necessary tools to complete the examination and within the shielded area of a forensics laboratory if possible. Mobile device forensics is an ever-evolving field filled with challenges and opportunities when analyzing a mobile device for forensic evidence in support of a criminal investigation. The process can be more difficult than traditional computer forensics due to the volatile nature of electronic evidence. The software applications for mobile forensic testing are often not 100% “forensically sound”. A well trained, highly skilled digital forensics investigator plays an essential role in the criminal investigation process when performing forensics analysis of mobile devices that belong to suspects, witnesses, victims or through the analysis of network traffic in response to computer security incidents (Curran, K., Robinson, A., Peacocke, S. and Cassidy, S., 1-4). Although forensics toolkits do exist for the investigator, the majority of the tools are either not fully developed and do not yet provide full functionality for multiple devices. Budget constraints of law enforcement departments prohibit the purchase of quality software packages to use with the varying mobile device manufacturers. The key is for the investigator to use the appropriate toolset that is meant for that particular purpose in performing forensics analysis in an effective manner that will support a criminal case (Mislan). Even such a pertinent piece of forensics equipment, like the Faraday bag for the first-responder, is not free from issue. Once removed from the Faraday bag, a mobile device can start receiving data if powered on and be able to connect to the network. This may be difficult to control for the first responder if he is instructed by a higher official to leave the mobile device powered on upon discovery at the crime scene. Some devices can be controlled by placing the phone in airplane mode, thus disabling the wireless features, but not all mobile devices possess this functionality. For the most part, Faraday bags are reliable but cannot fully guarantee that a signal will not reach the phone. Successfully blocking the signal depends upon the quality of the bag, the distance to the cell tower, and the power of the transmitter in the mobile device. Another challenge that faces the forensics investigator is digital evidence that is obtained for a criminal investigation can be preceded by a suppression hearing. A suppression hearing is an opportunity for a judge to look at the evidence and determine whether it will be admissible or violates the suppression of evidence which determines if an unreasonable search or seizure violated a defendant’s constitutional right. The judge will determine whether the Fourth Amendment has been followed in the search and seizure of evidence. A forensics investigator’s knowledge of preservation of evidence rules, chain of custody principles and the overall legal issues in obtaining digital evidence from a mobile device is vital. It is important for the forensics investigator to stay current on the latest technological tools and laws that deal with admissibility of evidence, in order to avoid the evidence carefully obtained being struck down by a proceeding judge. The investigator should always keep up to date on what the latest efforts that criminals are utilizing to combat the forensics process. 
Forensic computing continues to play an increasingly important role in civil litigation, especially in electronic discovery, intellectual property (IP) disputes, and information security and employment law disputes. Forensics investigators must be aware of certain issues pertaining to data acquisition and the preservation of digital evidence for a criminal investigation. Electronic data is very susceptible to alteration or deletion, whether through an intentional change or as the result of an invoked application in some computing process. As electronic data is created, modified or deleted through the normal operations of a computing system, there lies the possibility of modifications arising from an incorrect or inappropriate digital forensics process. Given that the results of such actions can be treated as critical evidence in a case, it is essential that every measure be taken to ensure the reliability and accuracy of the forensics process. A digital forensics process must be developed and applied with due regard to jurisprudence issues. It is imperative that the digital forensics process is capable of being examined thoroughly, so that its reasonableness and reliability can be established and the resulting evidence is not rendered inadmissible.

References

Al-Zarouni, Marwan. "Mobile Handset Forensic Evidence: A Challenge for Law Enforcement". Australian Digital Forensics Conference. Edith Cowan University. Abstract. December 4, 2006.

Ayers, Richard. "Mobile Device Forensics – Tool Testing". National Institute of Standards and Technology (NIST) Web. www.cftt.nist.gov

Curran, K., Robinson, A., Peacocke, S. and Cassidy, S. "Mobile Phone Forensics Analysis". International Journal of Digital Crime and Forensics, Vol. 2, No. 2, April-May 2010.

Cyb3rcrim3 Web. "Warrant Needed to Search Cell Phone". December 16, 2009. http://cyb3rcrim3.blogspot.com/2009/12/warrant-needed-to-search-cell-phone.html

"Guidelines for Best Practice in the Forensic Examination of Digital Technology". July 2006. pp. 17-18. December 6, 2006. Print.

Lewis, Don L. "Examining Cellular Phones and Handheld Devices". Forensics magazine, August/September 2009.

McKemmish, Rodney. "Advances in Digital Forensics IV". 2008. International Federation for Information Processing.

Mislan, Richard P. Assistant Professor, Department of Computer and Information Technology, Purdue University. Personal Interview. February 11, 2011.

McCullagh, Declan. "Police Push for Warrantless Searches of Cell Phones". CNet Web. June 26, 2010. http://news.cnet.com/8301-13578_3-10455611-38.html

Mislan, Richard P. "Cellphone Crime Solvers". IEEE Organization. Web. July 2010. http://spectrum.ieee.org/computing/software/cellphone-crime-solvers

Mock, David. "Wireless Advances the Criminal Enterprise". The Feature Archives Web. June 28, 2002. http://thefeaturearchives.com/topic/Technology/Wireless_Advances_the_Criminal_Enterprise.html

Moore, Tyler. "The Economics of Digital Forensics". University of Cambridge, June 2006. Print.

Open Jurist Web. April 9, 2004. UNITED STATES of America, Plaintiff-Appellee, v. Vincent Franklin BENNETT, Defendant-Appellant. http://openjurist.org/

The Yomiuri Shimbun Online Web. "Data Retrieval Key to Sumo Scandal". February 9, 2011. http://www.yomiuri.co.jp/dy/sports/T110208005743.htm
<urn:uuid:ee053ca4-3d6b-4c3e-95cf-4d18ff58c564>
CC-MAIN-2017-04
https://articles.forensicfocus.com/2011/08/22/the-challenges-facing-computer-forensics-investigators-in-obtaining-information-from-mobile-devices-for-use-in-criminal-investigations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00550-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935226
7,251
2.5625
3
Researchers from the Massachusetts Institute of Technology (MIT) have created a way to take thermal images of energy-leaking buildings from city streets and turn the data into both power and financial savings. A roadside scan of a structure is combined with location and environmental data and run through a computer program that identifies the types and locations of all the energy leaks. A heat flow model calculates the amount of energy loss, and — using the estimated efficiency of the building’s heating and cooling system — that energy loss is then converted to dollars. The system was developed off of MIT Mechanical Engineering Professor Sanjay Sarma’s concept of “negawatt mining,” which is the idea of recapturing energy loss from buildings. Buildings lose about 25 percent of input energy due to various inefficiencies. Long Phan, a doctoral candidate at MIT who worked on the research and development, said the U.S. spends $400 billion per year on energy inputted into residential buildings and loses about $100 billion of that due to leaks. Theoretically, by correcting the leaks, that money should be recouped. In addition, the recaptured energy — the wasted watts — essentially constitutes a new power supply. “Imagine Google Street View with the ability to drive around a city and capture all the long-wave infrared images in that city and being able to map all that energy loss onto a map, therefore having a well defined carbon or energy leak map of that city,” Phan explained. The prototype system consisted of one long-wave infrared imager containing a GPS locator and various sensors, all mounted on a car. The system allowed MIT students to geo-tag each image they captured. Phan said the idea was to turn handheld infrared scanning from an ad hoc process to a systematic method that captures all the heat loss information from a structure. The thermal images and data are examined using a computer vision algorithm that automatically finds leaks, targets them and — based on the size, shape and texture of the leaks — classifies what type of leak it is, such as a door or window. The program uses a prototype library to compare each result to classify the leak. The model was used successfully in various neighborhoods in Cambridge, Mass., Ft. Drum in New York and a number of other locations, according to Phan. The final system, which Phan said is in the final phase of production, is made up of a camera array of 14 high-resolution imaging units. The units will also be able to be mass produced. Jonathan Jesneck, a research scientist in the MIT Field Intelligence Lab, who along with Phan created the multiphase process of geospatial data integration and scalable prediction analysis that the drive-by thermal imaging system uses, said the program can identify something as trivial as a window that’s leaking and costing $15 in monthly energy costs. With a little caulk, that could be reduced to $8 per month. “[We] build up a database of how expensive each leak is and have an estimate on how expensive it would be to fix each one, so you can do a financial analysis to figure out the return on investment of fixing each leak,” Jesneck said. “You’ll know exactly where to put your money for the biggest bang for the buck.” The system has been so successful that Phan and Jesneck, along with various colleagues, have started a spinoff company from their work at MIT. Called Eye-R Systems, the company is mass producing the scanning technology, which is called Energy Diagnostic for Global Efficiency. 
Phan is president and CEO of Eye-R Systems, while Jesneck is vice president of research and product development. The business also has its first customer — the U.S. Department of Defense (DoD). Eye-R Systems and MIT will be working with the U.S. Army Engineer Research and Development Center to demonstrate its technology as a tool for the DoD to make better decisions regarding building design and retrofit projects. Ultimately, however, Phan said his company’s major goal is to develop a national energy database using the technology. “Imagine every city in the U.S. mapped onto a national energy database that will allow customers to log in and generate energy reports of their home,” Phan said. “That report will come in the form of several criteria [and] have an energy efficiency rating score associated with that house.”
<urn:uuid:9f7b2f92-11e2-43fc-a2ed-7169533f181b>
CC-MAIN-2017-04
http://www.govtech.com/technology/Drive-By-Thermal-Imaging-Quantifies-Energy-Loss.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00366-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955877
926
3.21875
3
After years of guesswork and innumerable attempts to quantify the costly effects of cybercrime on the U.S. and world economies, McAfee engaged the Center for Strategic and International Studies (CSIS) to build an economic model and methodology to accurately estimate these losses, which can be extended worldwide. “Estimating the Cost of Cybercrime and Cyber Espionage” posits a $100 billion annual loss to the U.S. economy and as many as 508,000 U.S. jobs lost as a result of malicious cyber activity. To help measure the real loss from cyber attacks, CSIS enlisted economists, intellectual property experts and security researchers to develop the report. The general accepted range for cybercrime launch was between $100 billion and $500 billion to the global economy. Researchers used real-world analogies like figures for car crashes, piracy, pilferage, and crime and drugs to build out the model. They noted the difficulty of relying on methods such as surveys because companies that reveal their cyber losses often cannot estimate what has been taken, intellectual property losses are difficult to quantify and the self-selection process of surveys can distort the results. For purposes of the research, CSIS classified malicious cyber activity into six areas: - The loss of intellectual property - The loss of sensitive business information, including possible stock market manipulation - Opportunity costs, including service disruptions and reduced trust for online activities - The additional cost of securing networks, insurance and recovery from cyber attacks - Reputational damage to the hacked company. “We believe the CSIS report is the first to use actual economic modeling to build out the figures for the losses attributable to malicious cyber activity,” said Mike Fey, executive vice president and chief technology officer at McAfee. “Other estimates have been bandied about for years, but no one has put any rigor behind the effort. As policymakers, business leaders and others struggle to get their arms around why cyber security matters, they need solid information on which to base their actions.” The cost of malicious cyber activity involves more than the loss of financial assets or intellectual property. There are opportunity costs, damage to brand and reputation, consumer losses from fraud, the opportunity costs of service disruptions “cleaning up” after cyber incidents and the cost of increased spending on cybersecurity. Each of these categories must be approached carefully, but in combination, they help us gauge the cost to societies. “This report is also the first to connect malicious cyber activity with job loss,” said James Lewis, director and senior fellow, Technology and Public Policy Program at CSIS and a co-author of the report. “Using figures from the Commerce Department on the ratio of exports to U.S. jobs, we arrived at a high-end estimate of 508,000 U.S. jobs potentially lost from cyber espionage. As with other estimates in the report, however, the raw numbers might tell just part of the story. If a good portion of these jobs were high-end manufacturing jobs that moved overseas because of intellectual property losses, the effects could be more wide ranging.” This is the first CSIS is undertaking to help better understand the true cost of cybercrime. This first report builds a model to scope the direct losses from cybercrime and cyber espionage. 
A second report, which is underway, will look at the ramifications of cyber security losses on the pace of innovation, the flow of trade and the social costs associated with crime and job loss. Lewis and co-author Stewart Baker of Steptoe & Johnson LLP point out that as thoroughly as they plan to develop their estimates, the dollar amount might not fully reflect all the damaging effects that cyber espionage and cybercrime have on the global economy. Both activities slow the pace of innovation, distort trade and bring the spate of social costs associated with crime and job loss, according to the report. Lewis and Baker say the larger effect may be more important than any actual number, and it will be the focus of the next report.
<urn:uuid:ccf52d3d-5398-457c-b50a-75c29d9115b5>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/07/23/study-connects-cybercrime-to-job-loss/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00274-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948385
826
2.71875
3
U.S. supercomputer tops list of world's fastest machines

The U.S. has the world's fastest supercomputer for the first time since 2009, according to a biannual list of the Top 500 supercomputers. The Sequoia is an IBM machine that is part of a generation of IBM supercomputers known as BlueGene/Q. It is based at the U.S. Department of Energy's Lawrence Livermore National Laboratory in California. The high-performance machine functions at a processing speed of 16.32 petaflops (Pflop/s), or about 1.5 million times faster than the average laptop. One petaflop is equal to one quadrillion, or 10^15, floating-point operations per second (i.e., mathematical computations). The supercomputer is actually a highly interconnected cluster of 1,572,864 processors, or cores, mounted on 98,304 "compute nodes," or boards, that are arranged on a series of 96 standing racks across 318 square metres of floor space. The biannual list of the world's most powerful machines was revealed Monday at the International Supercomputing Conference in Hamburg. The list is compiled each June and November by a group of computer experts, manufacturers and computational scientists and uses what's known as the Linpack Benchmark to measure how fast computers execute a particular program.

No. 1 supercomputer used to test nuclear weapons

Supercomputers are used in a variety of fields, including Earth sciences, geophysics, astronomy, medicine and nuclear science. The newly assembled Sequoia will be used to conduct simulations intended to extend the life of America's aging nuclear weapons arsenal, in lieu of underground nuclear testing. The supercomputer "will provide a more complete understanding of weapons' performance, notably hydrodynamics and properties of materials at extreme pressures and temperatures," said Thomas D'Agostino of the National Nuclear Security Administration in a news release.
<urn:uuid:0f73c459-360f-472c-891b-0966abd932e8>
CC-MAIN-2017-04
http://e-channelnews.com/ec_storydetail.php?ref=429699
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00182-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909711
404
3.171875
3
In this article, learn about these concepts:
- UNIX accounts
- Managing Samba accounts
- Mapping accounts
- Forcing account permissions to files and directories

This article helps you prepare for Objective 313.1 in Topic 313 of the Linux Professional Institute's (LPI) Mixed Environment specialty exam (302). The objective has a weight of 4. To get the most from the articles in this series, you should have an advanced knowledge of Linux and a working Linux system on which you can practice the commands covered in this article. In particular, this article assumes that you have a working knowledge of Linux command-line functions and at least a general understanding of the purpose of Samba, as covered in "Learn Linux, 302 (Mixed environments): Concepts." To perform the actions described in this article, you must have the Samba software installed. In addition, you should have network access to a Windows client.

Understanding UNIX user and group accounts

Your Samba server probably doesn't exist in a silo. Users need to access files and directories, but before they can do so, they need to authenticate. Users can connect from Linux workstations or a Windows desktop. Either way, they need accounts that the Samba server recognizes. Once users are authenticated, they need appropriate permissions to files, directories, and printing services. Groups are a feature of Samba that can help you better manage these permissions. The sam back-end database is your mediator from the local UNIX accounts to the remote user accounts. There are several methods for allowing your users to authenticate to the Samba server, but before delving into Samba accounts, you should have a solid understanding of the basics of UNIX user and group account management.

When you create a local user account on a Linux computer with a tool such as useradd, the account information is written to the /etc/passwd file. This file stores information such as the user's user name, home directory, default shell, and any comments associated with the account. These accounts are commonly referred to as UNIX local accounts. This article uses the terms UNIX account and local account interchangeably.

Listing 1 creates a local account with the user name monty, provides a description of Monty Python in the comment section (-c), specifies a home directory (-m), and gives the user a default shell of /bin/bash (-s).

Listing 1. Creating a local account
[tbost@samba ~]$ sudo useradd -c 'Monty Python' -m -s /bin/bash monty
[tbost@samba ~]$ less /etc/passwd | grep monty
monty:x:504:504:Monty Python:/home/monty:/bin/bash
[tbost@samba ~]$

Each line in /etc/passwd represents a user account record. Each record has seven fields separated with a delimiter: a colon (:).

The user name in the first field, the user ID (UID) in the third field, and the group ID (GID) in the fourth field are of particular concern when you manage Samba accounts.

Group accounts perform a vital role in easing the burden of management for any multi-user computer. If you are managing a Samba server, allowing designated groups access to specific directories, files, and printing services is part of a typical configuration. As with user accounts, if you are working with a local Samba account configuration, you need to create UNIX group accounts on the local Samba server in most Samba configurations. You can locate UNIX group account information in the /etc/group file. Some Linux distributions create a local private group for each new user.
Group accounts perform a vital role in easing the burden of management for any multi-user computer. If you are managing a Samba server, allowing designated groups access to specific directories, files, and printing services is part of a typical configuration. As with user accounts, if you are working with a local Samba account configuration, you need to create UNIX group accounts on the local Samba server in most Samba configurations. You can locate UNIX group account information in the /etc/group file. Some Linux distributions create a local private group for each new user. Such is the case here, with the addition of user monty:

[tbost@samba ~]$ less /etc/group | grep monty
monty:x:504:
[tbost@samba ~]$

This code displays the private group account created for user monty. If you are working in a mixed environment with Windows computers, keep in mind that Windows doesn't allow a user account and a group account to have identical names.

Much like user accounts, group accounts should exist on the local UNIX server before Samba can use them. Create a group by using a utility such as groupadd (see Listing 2), or edit the /etc/group file directly with an editor such as vi.

Listing 2. Creating a group account and adding a user to it
[tbost@samba ~]$ sudo groupadd accounting
[tbost@samba ~]$ sudo usermod -G accounting monty
[tbost@samba ~]$ less /etc/group | grep accounting
accounting:x:506:monty
[tbost@samba ~]$

Listing 2 uses the groupadd and usermod tools to create the group and add a user to it. If you have multiple users to add to a group, you can create a script to perform the task or add the users to the /etc/group file directly. Members of the group should be in the last delimited field and separated by a comma (,). If you create groups manually, keep in mind that each group should have a unique GID.

Managing Samba accounts

For the typical Samba configuration, account information is stored in one of three password databases: smbpasswd, tdbsam, or ldapsam.

Using smbpasswd and tdbsam

The smbpasswd database was the default back-end database used by Samba until version 3.4. In Samba 3.4, smbpasswd is deprecated, and tdbsam is now the default back end as well as the recommended back-end database for an environment with fewer than 250 users. The tdbsam database is considered more scalable than smbpasswd. If you are using a version of Samba that employs smbpasswd by default, you can change the back-end database by specifying the parameter passdb backend = tdbsam in the [global] section of the smb.conf file.

But smbpasswd is not just a database: It's also a tool included with the Samba suite that provides a way to manage Samba accounts in a simple Samba configuration. To create a Samba account, you need root privileges. The account should exist on the local Linux server before you attempt to create the Samba account. Listing 3 shows the code for creating a Samba user account with smbpasswd.

Listing 3. Creating a Samba user account using smbpasswd
[tbost@samba ~]$ sudo smbpasswd -a monty
New SMB password:
Retype new SMB password:
Added user monty.

Users do have access to smbpasswd to change their own passwords, as shown in Listing 4.

Listing 4. Local user changing the password with smbpasswd
[monty@samba ~]$ smbpasswd
Old SMB password:
New SMB password:
Retype new SMB password:
Password changed for user monty
[monty@samba ~]$

Alternatively, you can configure Samba for password synchronization so that when a user changes the local account password, the Samba password is updated as well:

[global]
unix password sync = yes

If a user doesn't need access to the Samba server for an extended period of time, you can temporarily disable the account and then enable it at a later date. If a user no longer needs access, you can delete the account. Listing 5 shows the commands.

Listing 5. Disabling, enabling, and deleting a Samba account with smbpasswd
[tbost@samba ~]$ sudo smbpasswd -d monty
Disabled user monty.
[tbost@samba ~]$ sudo smbpasswd -e monty
Enabled user monty.
[tbost@samba ~]$ sudo smbpasswd -x monty
Deleted user monty.
[tbost@samba ~]$
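Operations like these lend themselves to simple scripting when you have more than a handful of accounts to manage. The following is a purely illustrative sketch, not one of the article's listings: the user names and the placeholder password are examples only, and in practice you would supply real passwords from a secure source rather than hard-coding one. It assumes the accounting group created in Listing 2 already exists.

#!/bin/bash
# Illustrative sketch: create a UNIX account, add it to an existing group,
# and create the matching Samba account for each user in the list.
for u in alice bob carol; do
    sudo useradd -m -s /bin/bash "$u"        # local UNIX account
    sudo usermod -aG accounting "$u"         # append to a supplementary group
    # smbpasswd -s reads the new password (entered twice) from stdin instead of a tty
    printf 'ChangeMe123\nChangeMe123\n' | sudo smbpasswd -s -a "$u"
done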
A feature-rich tool included with the Samba suite is pdbedit. This tool can work with accounts from any of the three back-end databases. In addition to creating, modifying, and removing users, you can use pdbedit to:
- List user accounts
- Specify home directories
- Import user accounts
- Set account policies

You can use smbpasswd and pdbedit interchangeably on the tdbsam database (see Listing 6). Any commands you perform with pdbedit must be run with root privileges.

Listing 6. Interacting with the back-end database using smbpasswd and pdbedit
[tbost@samba ~]$ sudo smbpasswd -a monty
New SMB password:
Retype new SMB password:
Added user monty.
[tbost@samba ~]$ sudo pdbedit -L
monty:504:Monty Python
[tbost@samba ~]$ sudo pdbedit -L --verbose
Unix username:        monty
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-2247757331-3676616310-3820305120-1001
Primary Group SID:    S-1-5-21-2247757331-3676616310-3820305120-513
Full Name:            Monty Python
Home Directory:       \\samba\monty
HomeDir Drive:
Logon Script:
Profile Path:         \\samba\monty\profile
Domain:               SAMBA
Account desc:
Workstations:
Munged dial:
Logon time:           0
Logoff time:          never
Kickoff time:         never
Password last set:    Tue, 24 May 2011 14:19:46 CDT
Password can change:  Tue, 24 May 2011 14:20:16 CDT
Password must change: Tue, 24 May 2011 14:20:16 CDT
Last bad password   : 0
Bad password count  : 0
Logon hours         : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

Listing 6 demonstrates how you can create a user with smbpasswd and then list Samba users with pdbedit. For more extensive account information, add the --verbose switch, as shown in the second pdbedit command.

You can also use pdbedit to set account policies. Account policy names you can manage include:
- min password length
- user must logon to change password
- maximum password age
- minimum password age
- reset count minutes
- bad lockout attempt
- refuse machine password change

Listing 7 changes the minimum password length to eight characters, and then changes the maximum password age to 30 days. The -P switch takes a string argument that should match exactly one of the predefined policy names, while the -C switch takes the value for the policy setting.

Listing 7. Managing accounts with pdbedit
[tbost@samba ~]$ sudo pdbedit -P 'min password length' -C 8
account policy "min password length" description: Minimal password length (default: 5)
account policy "min password length" value was: 5
account policy "min password length" value is now: 8
[tbost@samba ~]$ sudo pdbedit -P 'maximum password age' -C 30
...
account policy "maximum password age" value was: 4294967295
account policy "maximum password age" value is now: 30

Refer to the man pdbedit documentation, or type pdbedit -h, for more details about available commands.

If you are working with an existing directory service such as the Lightweight Directory Access Protocol (LDAP) or working in a larger environment (that is, more than 250 users), you can use the ldapsam back end. Of the three back-end databases, ldapsam is the only one that allows storage of group accounts. By storing all users and groups in the LDAP back end, all your servers can have consistent UIDs and GIDs. Configuring LDAP is beyond the scope of this article, but the idmap backend parameter in smb.conf specifies the location of your LDAP server. The parameter set below directs Samba to use the LDAP directory service at host name directory-services.example.org as its back-end storage. You should first have a working LDAP server that is configured to interact with Samba. (idmap is discussed in more detail in the following section.)
[global]
idmap backend = ldap:ldap://directory-services.example.org:636

If your Samba server is a stand-alone server within one domain, you'll probably just use mapping files. However, if your environment consists of users connecting to the Samba server from another domain, the idmap facility assists with mapping the UIDs and GIDs properly.

User mapping using smbpasswd and TDB files

If the Windows users connecting to the Samba server have user names identical to those created on the Samba server, a mapping file shouldn't be necessary. However, if your Windows users have names that do not map exactly, you can create a mapping file to link the user names. Keep in mind that although Linux is case sensitive, Windows user names are not, so the Windows user name TBost and the UNIX user name tbost refer to different local accounts on Linux. Table 1 shows the mapping from Windows to UNIX account names.

Table 1. Windows and UNIX account names to be used for mapping
|Windows||UNIX|
|Monty||monty|
|bostt||tbost|
|sue.george||sue|

When you create the Samba accounts, use the Windows account name. You can then specify a file location in the smb.conf file that will map the accounts to the appropriate UNIX account. Listing 8 shows simple account mapping.

Listing 8. Simple account mapping in UNIX
[tbost@samba ~]$ sudo vi /etc/samba/smb.conf
[global]
username map = /etc/samba/smbusers
...
...
...
[tbost@samba ~]$ sudo vi /etc/samba/smbusers
# Unix_name = SMB_name1 SMB_name2 ...
root = administrator admin
nobody = guest pcguest smbguest
monty = Monty
tbost = bostt
sue = sue.george

The configuration in Listing 8 sets the username map parameter to use /etc/samba/smbusers as the mapping file. The mapping itself is straightforward: You place the UNIX account name on the left side and the Samba (Windows) account names on the right side, separated by the equal sign (=). When users connect, Samba maps the incoming Windows name to the corresponding UNIX account.

For typical Samba server environments, group mappings are configurable using the net groupmap command from the Samba suite. Suppose Windows user accounts Monty, bostt, and sue.george are members of the Domain Admins, Domain Users, and Domain Guests group accounts. If you want these users to have group account permissions for the similar UNIX groups on the Samba server, add the UNIX account user names to each group:

adm:x:4:root,adm,daemon,monty,tbost,sue
users:x:100:monty,tbost,sue
guests:x:507:monty,tbost,sue

This is only a partial listing of the complete list of groups on a Samba server. Groups adm and users were created when the Linux operating system was installed. You will need to add each user to the appropriate group (see Table 2).

Table 2. Windows and UNIX account groups to be used for mapping
|Windows||UNIX||Windows relative ID (RID)||UNIX GID|
|Domain Admins||adm||512||4|
|Domain Users||users||513||100|
|Domain Guests||guests||514||507|

The net groupmap command can map your domain groups (see Listing 9), and net groupmap list lists the domain group mappings. Starting with Samba 3.x, new group-mapping functionality is available to create associations between a Windows group RID and a UNIX GID.
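Editing /etc/group by hand for every user and group combination is error-prone. The short sketch below is illustrative only (it is not one of the article's listings); it simply loops over the user and group names from Table 2 using the standard gpasswd utility, and you would adjust the names to match your own accounts:

#!/bin/bash
# Illustrative sketch: add each mapped user to the adm, users, and guests groups.
for u in monty tbost sue; do
    for g in adm users guests; do
        sudo gpasswd -a "$u" "$g"    # gpasswd -a USER GROUP appends USER to GROUP
    done
done

With the UNIX group memberships in place, Listing 9 creates the actual Windows-to-UNIX group mappings.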
Listing 9. Mapping groups with the net groupmap command
[tbost@samba ~]$ sudo net groupmap add ntgroup="Domain Admins" unixgroup=adm \
rid=512 type=d
Successfully added group Domain Admins to the mapping db as a domain group
[tbost@samba ~]$ sudo net groupmap add ntgroup="Domain Users" unixgroup=users \
rid=513 type=d
Successfully added group Domain Users to the mapping db as a domain group
[tbost@samba ~]$ sudo net groupmap add ntgroup="Domain Guests" unixgroup=guests \
rid=514 type=d
Successfully added group Domain Guests to the mapping db as a domain group
[tbost@samba ~]$ sudo net groupmap list
Domain Users (S-1-5-21-2247757331-3676616310-3820305120-513) -> users
Domain Guests (S-1-5-21-2247757331-3676616310-3820305120-514) -> guests
Domain Admins (S-1-5-21-2247757331-3676616310-3820305120-512) -> adm

The sequence of steps to map groups in Listing 9 is:
- With root privileges, use the net groupmap add command to specify the Windows group (for example, ntgroup="Domain Admins") to map to the UNIX group (unixgroup=adm). Perform this step for each group mapping.
- The final command in Listing 9, net groupmap list, displays the mappings for the groups.

Using identity mapping

For most environments, the above mappings are sufficient. However, if you manage a more complex environment, such as one with multiple Samba servers or workstations from different domains connecting to your Samba server, you should become familiar with identity mapping (IDMAP) and Winbind. IDMAP helps overcome interoperability concerns between a Windows security ID (SID) and a local UNIX UID or GID. If your Samba server is a member of a Windows domain, you can use Winbind to map an SID to a UID or GID. You can set the range of the idmap UID and GID parameters and specify how long Winbind should cache account information in the smb.conf file:

[global]
idmap uid = 20000-50000
idmap gid = 20000-50000
winbind cache time = 300

The parameters in the code above instruct Winbind to use a local UID range of 20000-50000 and a GID range of 20000-50000. This is a relatively safe range for a Samba server that doesn't expect to have several thousand local user or group accounts. The winbind cache time = 300 parameter instructs Winbind to cache account information for 300 seconds. By default, Winbind stores mappings in the winbind_idmap.tdb file.

Using default accounts to force ownership

Instead of adding every user to a group, you may find it less cumbersome to use the force user and force group parameters. When set, these parameters instruct Samba to treat an authorized user as having the permissions of the specified user and group. This is especially beneficial when configuring a share that will be accessed by many users and needs common permissions:

[global]
username map = /etc/samba/smbusers
force user = guest
force group = +employees

In the code above, the force user parameter treats all connected users as user guest when working with files; a user must still connect with a valid user account. The configuration shown forces user accounts to guest, with the group account employees.

- Learn more about Samba account information databases in Chapter 11 of the Samba 3.x manual.
- Learn more about group mapping in Chapter 12 of the Samba 3.x manual.
- Get a detailed description of the pdbedit tool in the pdbedit man page.
- Learn more about Identity Mapping (IDMAP) for stand-alone and primary domain controller servers in Chapter 14 of the Samba manual.
- At the LPIC Program site, find detailed objectives, task lists, and sample questions for the three levels of the LPI's Linux systems administration certification.
In particular, look at the LPI-302 detailed objectives and the tasks and sample questions.
- Review the entire LPI exam prep series on developerWorks to learn Linux fundamentals and prepare for systems administrator certification based on LPI exam objectives prior to April 2009.
- Exam Preparation Resources for Revised LPIC Exams provides a list of other certification training resources maintained by LPI.
- In the developerWorks Linux zone, find hundreds of how-to articles and tutorials as well as downloads, discussion forums, and a wealth of other resources for Linux developers and administrators.
<urn:uuid:6535d9f6-62c3-43b3-8594-ff30fe2d924e>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/linux/library/l-lpic3-313-1/index.html?ca=drs-
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00182-ip-10-171-10-70.ec2.internal.warc.gz
en
0.801276
4,574
2.859375
3
Although the majority of people (71 percent) are worried about the amount of personal information held online, a significant proportion would still share confidential information with people they didn't know, with almost a third (32 percent) stating they would send a password, bank account number or their mother's maiden name via email or a social networking website, according to the results of a recent Faronics survey exploring UK web users' attitudes to online security.

Respondents were particularly trusting of LinkedIn, with 33 percent of site users admitting they have accepted connection requests from people they do not know. This compares to just 15 percent of Facebook users. Likewise, while 46 percent of Facebook users have customized their privacy settings, just 20 percent of those on LinkedIn have controlled who can view the information on their profiles.

"While the risk of identity theft and other cyber threats is relatively well known, many users still seem to be in complete denial that it could happen to them," said Bimal Parmar, VP marketing at Faronics. "The aim of this survey was to assess just how knowledgeable people are about the specific security threats that their social networking accounts can pose – and the results are eye-opening to say the least."

"Users are clearly concerned about the amount of data held online, yet they are continuing to trust social networking sites with very personal information. A growing concern is that when it comes to websites such as LinkedIn, it appears that this trust is even greater. While issues surrounding Facebook's security – or lack thereof – have been widely covered in the media, LinkedIn is very rarely mentioned, which has led users to fall into the trap of believing that the security risk is lower. Unfortunately, as the threat landscape evolves, and attacks become more targeted and convincing, this is simply not the case."

Many people still do not believe they are a target for cybercriminals, with 51 percent of all respondents claiming they are not at risk of cyber fraud, and 28 percent believing there is no value in the information posted on their social networking pages. That said, 13 percent would be happy to send a password to complete strangers online if the request looked genuine. This, coupled with the fact that only a fifth (21 percent) of those asked have heard of attacks such as spear phishing, indicates a significant lack of awareness when it comes to changing cybercrime tactics.

"Today, any personal information can be harvested and exploited by a determined cybercriminal," continued Parmar. "As more cybercriminals employ social engineering tactics that tap into basic human psychology, even the smallest bits of information – such as birthdays, job roles, supplier information, travel plans or details of hobbies – can be used to form a convincing email that the victim could believe originated from a trusted source. All the target has to do is open the email, click on a link or download an attachment for spyware, keyloggers or other malware to be dropped onto the computer and open the entire corporate network to fraud."

Just over half (51 percent) of those surveyed admitted they had been targeted by a spear phishing campaign, with 12 percent of these attacks reported as successful.
This is perhaps unsurprising, as 60 percent of all respondents stated they would be willing to open an unsolicited email attachment if it looked relevant, interesting or appeared to be in response to an action they had taken (for example, a receipt for a recent purchase). This lack of consideration could be partially down to the fact that just 24 percent of UK organizations admit to having specific policies, training and/or safe computing measures in place to prevent an employee from falling victim to spear phishing and other email scams, and a fifth of survey respondents still believe that a good PC security package on its own will protect them from fraud.

The full findings of the survey can be found here.
<urn:uuid:59b74ddb-f736-4fe5-800e-758ae4d9f241>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2012/04/24/users-worry-about-data-security-but-still-trust-social-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967588
791
2.609375
3
Fiber optic cables should be tested once the cable plant is installed and terminated, and they can also become damaged as time passes. It is therefore important to test fiber optic segments on a regular basis so that anything that degrades data transmission can be reduced or eliminated. This is where fiber optic testing comes in. The main purpose of testing is to evaluate how the cables are performing and to eliminate any faults that are found, improving both quality and the overall functioning of the system. The components that typically need to be tested are connectors, receivers, light sources or LEDs, detectors, and splices. Testing should be done according to TIA TSB-140 and the Acceptance Testing Notes guidelines. These documents provide additional guidelines for field-testing the length, loss and polarity of a completed fiber optic link. Various types of testing equipment are available on the market, such as a fiber visual fault locator (VFL), a fiber power meter, a network cable tester or an optical time-domain reflectometer (OTDR). For troubleshooting, the OTDR is recommended.

Visual Fault Locator
The VFL is a red laser source; the tracer is an LED source. This instrument can be used to locate breakpoints, bad splices, poor connections, and bending or cracking in optical fiber cables. A visual fault locator is an efficient tool for tracing fiber, checking fiber routing and continuity in optical networks, and identifying fibers and connectors in patch panels or outlets. It easily isolates high-loss points and faults in fiber optic cables and is an ideal solution for applications in telecommunications, LAN, WAN, fiber data links and CATV systems.

Fiber Power Meter
An optical power meter is used for absolute light power measurement as well as fiber optic loss-related measurement. For dBm measurement of light transmission power, proper calibration is essential. For measuring loss or relative power level in dB, a fiber power meter is always used with an optical light source (see the worked example at the end of this overview). There are general-purpose power meters, semi-automated ones, and power meters optimized for certain types of networks, such as FTTx or LAN/WAN architectures. It's all a matter of choosing the right gear for the need.

Network Cable Tester
A network cable tester is used to test the strength and connectivity of a particular type of cable or other wired assemblies. A network cable tester can tell whether the cable is capable of carrying an Ethernet signal. If the cable carries the signal, this indicates that all the circuits are closed, meaning that electric current can move unimpeded through the wires, and that there are no short circuits, or unwanted connections, in the wire. There are a number of different types of cable testers, each able to test a specific type of cable or wire (some may be able to test several different types of cables or wires).

Optical Time Domain Reflectometer (OTDR)
The OTDR is a more sophisticated measurement instrument. It injects a series of optical pulses into the fiber under test and analyzes the light scattering and reflection that return. This allows the instrument to measure the intensity of the return pulse as a function of time and fiber length. The OTDR is used to measure the optical power loss and the fiber length, as well as to locate faults resulting from fiber breaks, splices or connectors.
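As a simple illustration of the light source and power meter method mentioned above (the numbers here are hypothetical examples, not values from any standard): you first take a reference reading with the source connected to the meter through reference jumpers, say -10.0 dBm, and then measure the received power through the installed link, say -13.5 dBm. The insertion loss of the link is the difference between the two readings: (-10.0 dBm) - (-13.5 dBm) = 3.5 dB. That measured loss can then be compared against the loss budget calculated for the link's length, connectors, and splices to decide whether the segment passes.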
When doing fiber optic testing, you should always follow the TIA TSB-140 and Acceptance Testing Notes guidelines. They provide additional guidance for field-testing the length, loss and polarity of a completed fiber optic link. For example, clean all connections and adapters at the optical test points prior to taking measurements.
<urn:uuid:1db13ad3-bc39-43f6-9512-a64890703503>
CC-MAIN-2017-04
http://www.fs.com/blog/fiber-optic-testing-equipments-and-guidelines.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00422-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911075
786
2.71875
3
Assessing business intelligence tools means first answering the question, "What is business intelligence?" Business Intelligence, usually referred to as "BI" for short, refers to software designed to analyze data with the goal of discovering useful business insights. For example, a multi-site retailer might use BI tools to reveal a previously unknown pattern of revenue changes correlated to time of day. BI is related to data analytics and business analytics, though the connotation of BI is that it's accessible to a bigger group of end users. As some reviewers on IT Central Station note, everyone, not just IT people or data specialists, should be able to use business intelligence software in their daily jobs.

As a result, ease of use figures prominently in many user reviews on the site. IT Central Station reviewers want to know how little training is required for a BI tool to get non-IT end users going. They want BI to be easy to implement, and they want tools that enable easy report building and administration as well. The desire for BI tools to be easy to use flows from a trend in the technology over the last few years: BI has gone from being a complex discipline reserved for highly skilled people to being something the general knowledge worker can use every day. It's not an either/or scenario. An organization might have some BI workloads that are reserved for data scientists, with others available to everyone. Regardless of where BI is deployed, however, continued support of end users and technical training for the support team are critical for success.

In addition to security, performance, scalability and stability, users emphasize the importance of BI's ability to integrate with other systems. BI is not a standalone technology. It works in concert with database management and business applications. For example, BI must integrate with OLTP databases with minimal footprint. BI also needs to integrate easily with graphical tools and reporting software. A business intelligence toolset ought to integrate with visualization tools, with the ability to produce visually appealing, value-added dashboards, charts, and standard reports. Mobility also counts, with workers wanting to be able to do analytics on mobile form factors such as tablets.

Given that the "B" in BI stands for business, the business use case is considered highly relevant in choosing the right business intelligence toolset. BI should meet business needs. The total cost of ownership (TCO) should be well thought out. And any initiative to undertake BI should have clear executive management approval and a business plan for success. A thorough business needs analysis is essential.

According to IT Central Station members, the best BI tools support multiple file output and publication options. For instance, can the tool produce interactive files (e.g., Xcelsius output) that are shared externally via .pdf, Excel, etc.? A business analytics solution should easily access multiple types of data sources, with data blending capabilities.

Evaluation Matrix for SAP, Oracle, QlikView, Microsoft
Evaluation Matrix for SiSense, Tableau, QlikView
Evaluation Matrix for MicroStrategy, IBM, Spotfire, Pentaho, others
Evaluation Matrix for Microsoft, IBM, Oracle, Birst, Qlik, Tibco
Visualization Tool Evaluation Template - Updated for 2015

20+ year IT Industry ERP and BI.
Management Consulting and Perform Implementation Business Intelligence, Data Warehouses, Data Mart and ETL architect. Projects and Manager PMO conducting of IT in Business Intelligence and Balanced Scorecard (Strategic Management Methodologie), acting on big... more>> An experienced and ambitious BI Lead, with experience in the Financial Services and Technology industries. With advanced analytical skills developed in Banking & Insurance, coupled with leadership of data warehousing programmes. I specialise in transforming information into value adding... more>> EPM/BI certified Consultant, Oracle ACE and TeraCorp Consulting CEO More than 19 years engaging in EIS, BI, EPM projects in Brazil working for the last 5 years in global projects to a fortune 100 client. Highly adaptable in reverencing local cultures and business environment with an easy going consultative style. • First and only Oracle ACE in Business... more>> Oracle BI & DW Consultant DW/Business Intelligence Architect/Developer: Oracle data Integrator(ODI 10g, 11G) Oracle Business Intelligence Enterprise Edition(OBIEE 10G/11G) Data Warehouse best Practices, DW optimization and performance enhancement (Slowly Changing Dimensions, Change-data-capture, Materialized views,... more>> Business Intelligence: Raw Data or Strategic Analytics' There is a big difference between corporate information and Strategic Knowledge. Corporate Information usually consists of internal raw data while Strategic Knowledge is the business wisdom necessary to achieve profitable growth... more>> I love data. I love to categorise it, manipulate it, study it, and share it. Data Warehousing is how I satisfy this love and Business Intelligence is how I show it to others. Any type of data - music, words, pictures, numbers, ... Specialties: Data modeling, data architecture, ETL, Cognos... more>> Senior business Intelligence consultant Freelance MicroStrategy Professional. Author of the book: "Business Intelligence with MicroStrategy" I've been doing Business Intelligence projects with MicroStrategy and SQL Server/Oracle for the last 10 years now. I have experience in several sectors including Healthcare, Agriculture and... more>> Empresario, con la mirada fija en cumplir mis metas siempre ayudando a los demás y compartiendo conocimiento.....con el afán de hacer crecer el Software Libre en el país... Experto en Inteligencia de Negocios, Data Warehousing y Big Data Highly skilled in data exploration, analysis, visualization and presentation. Experienced in descriptive, behavioral and predictive customer analytics using industry standard tools and processes (SQL, R, Rapid Miner, MS SSAS, MS Excel). Highly skilled in guided analytics,... more>> Experienced professional with blended knowledge in visual analytics development, statistics, logistics and operation optimization. Specialized in translating user requirements into tangible visualizations to track performance, detect outliers and perform root cause analysis. Expertise in business intelligence, business process management, as well as IT compliance, governance and security. Founder and Business Intelligence Consultant Working with Business Intelligence since 2009. Always learning new concepts and studying new ideas to offer better services to my customers. Expertise in Open Source B.I. Solutions (Pentaho). Attended the course Dimensional Modelling in Depth with Ralph Kimball (Stockholm - Sweden). 
Hardwork + confidence = ME Specialties: ITILv3, SAP-BOBJ, SAP-BO Mobile, BO Mobile SDK (android & IOS), BO Admin, SAP-ABAP, JAVA, ORACLE, SQL, WEB DESIGNER To know more about me I worked on many BI projects mainly for finance departments (Budgeting & planning, PNL&BS); at every levels: Pre-sales, development/implementation, training of end users, business analysis and project management. From 2000 to 2007 I worked mainly on reporting projects in many sectors:... more>> Senior Pentaho Business Intelligence Developer - Consultant Responsible for leading the strategic design and maintenance of business intelligence applications. Ensures that the use of BI applications enhances decision making capabilities. Specialist in ETL utilizing various BI tools and scripting languages. Experience with data warehouse design and... more>> I am an Independent consultant. My main expertise majoring in: 1. Accounting and Auditing 2. Internal Control and Risk Management 3. Regulatory Compliance 4. Internal Audit 5. Business Process Management 6. Information Technology Planning & Design 7. Enterprise Architecture I'm Business Intelligence Consultant with exposure to MicroStrategy, SAP Business-objects and Microsoft SSRS BI providers. I'm also a guru on Microstrategy official discussion forum. Business Intelligence Consultant Experienced Business Intelligence specialist with diversified knowledge and expertise in maintaining analytical environments, providing technical support and creating solutions in alignment with business targets. Technological experience in diversified business areas: Entertainment,... more>> Data Visualization and BI Consultant An astute data analyzer and visualization expert with sound knowledge on BI solution development and implementation. Rich experience in catering to business verticals in different industries with normal to highly complex data intricacies and reporting environment. Specialties: Business... more>> Thorough knowledge of the Spotfire Server, Client and Web as well as Automation Services. Supporting customers around the world with their questions on the Spotfire platform. Initially I started as a support desk employee, growing onwards to become an analyst... more>> Business intelligence and analytics professional with 12 years of experience working with upstream and downstream oil and gas businesses on a wide variety of BI subject areas including -- Spotfire, R, Python, SQL, lean process improvement, data quality management, business analysis,... more>> BI and Location Analytics Consultant Business Intelligence Consultant Business Intelligence Consultant Senior BI developer and consultant Consultor e desenvolvedor QlikView com experiência de quatro anos no Grupo Tuper S/A, inciei minha carreira como desenvolvedor e administrador QlikView, responsável pela área de Business Intelligence da companhia, durante este período implantamos e criamos soluções para as mais diversas áreas da... more>> • Working as Spotfire Manager, Consulting Group, India since last 3 years. • 3 years of experience in TIBCO Spotfire Professional, TIBCO Spotfire Server Administration, TIBCO Spotfire Web Player, TIBCO Spotfire Automation, Customization and API programming. • 10+ years of experience in Project... more>> VIA or Visual Intelligence and Analysis provides training from basic to expert level using Tableau as well as technical data preparation skills when the data are not Tableau ready. Training and documentation is available in English, Spanish and French. 
Specialties: Subject Matter Expertise... more>> 8 years of experience in several sectors in information systems, mainly in conception of BI solutions and financial software. Capable of addressing an entire project lifecycle from requirements gathering to testing and deployment. I have more than 10 years of experience in Data Warehousing and have worked extensively in Informatica, OBIEE, Business Objects, Talend and Tableau. Motorola has been my client for all these years. I've worked in various data warehouses of Motorola (Mobility).I've extensive experience in working... more>> Business Intelligence Consultant - IBM Cognos Senior Consultant, graduated in Information Systems and currently studying Post-Graduation in Accounting, Controllership and Finance. Around 10 years of IT experience, since 2013, focused on developing and modeling customized solution of Financial projections and / or Strategic Planning... more>> Principal consultant and project manager on Financial Performance Management and Business Intelligence solutions with over 13 years of experience in team leadership, project management, analysis, design, development, training, and support on Planning, Budgeting, Forecasting, Operational and... more>> Have good experience and exposure of development, deployment and maintenance of BI solutions using enterprise as well as open source tools on web and mobile platforms. Have worked on Data integration using Talend and PDI tools. Have experience in handling projects with US, APAC and UK teams for... more>> Business Intelligence Consultant Business Intelligence Developer - Microstrategy I have just over 17 years of Information Technology experience. The past 10 years of my career have been focused on Oracle business intelligence applications. I regularly blog about Oracle EPM/BI topics and I was honored to be recognized by Oracle as an ACE Associate for my contributions. • Tableau Bronze Accredited BI Solution Consultant with 5 years of experience in executing BI projects for a large Networking Major • Deputed at client location (San Jose) for a period of 10 months and was responsible for end-to-end reporting solution development based on Tableau and SAP HANA BI & Digital Transformation Consultant Project Manager - Business Intelligence Senior BI Consultant (Qlikview) FSI IT Consulting Manager Business Analytics Cognos Consultant Master's degree, Computer Engineering for Intelligent Systems I work on design and development of data warehouse and BI solutions I solve data management problems for companies in various sectors of industry: Airline Travels, Energy Bill Management, Manufacturing, Commercial Printing/Publishing, Transportation and Logistics, Health Care, and Software Development. Specializing in building Data Warehouses, designing and... more>> I've performed different positions along Information Systems. I passed from COBOL programmer to Information Systems major department responsible. I'd already being on the commercial side and processes definition - working as ERP pre-sales - and also I'd the chance to work as system and database... more>> 27 years on a IT dept for a manufacturing company and I have started my own company in 2014. Bachiller de la facultad de Ingeniería de Sistemas e Informática de la Universidad Nacional Mayor San Marcos, perteneciente al quinto superior, con experiencia de más de 1 años en el área de inteligencia de negocios, con sentido de responsabilidad, capacidad de rápido aprendizaje, con buen manejo... 
more>> • Business Intelligence and Data warehousing professional with close to 5 years of experience in design, development, administration, testing, release management & Pre Sales of the projects from inception to completion. • Close to 3 years of onsite experience as a Business Intelligence... more>> Andy Rocha is the Consulting Manager for Rittman Mead America, a leading consultancy focused on delivering Oracle Data Warehousing and Business Intelligence technology solutions. Specializing in Oracle Business Intelligence Enterprise Edition (OBIEE) 11g, Andy, an Oracle Certified Implementation... more>> Data Warehouse Consultant Sr BI Application Consulting Manager Senior Managing Consultant Consultant on Health Care Reporting and Analytics Senior Business Intelligence Consultant Cognos & OBIEE Business Intelligence Consultant Technical Lead Consultant Business intelligence consultant Business Intelligence Consultancy,Business Analyst, Systems Architect, Specialist, Cisco, Linux, Win Servers, Application Developer, Database Administrator, and Project Manager in a wide variety of business applications. Particularly interested in client/server and relational database design... more>> BI Team Leader & Technology Consultant Business Intelligence Consultant • More than 6 years of IT industry experience in Mainframe technologies & Data Quality tool Trillium Software Systems. • Have worked on variety of projects of Banking and Financial Services, Insurance Sector and Aviation Sector for clients such as First Data Merchant Services and JP Morgan... more>> Freelancer - DW & BI (OBIEE & Informatica) • Gaur is a Freelancer in Information Technology experienced with the latest trends and techniques of the field, having an inborn quantitative aptitude, determined to carve a successful and satisfying career in the IT industry. • Technically competent Datawarehouse consultant (DW/BI) with... more>> Following are the areas I work in : Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing... more>> Arnab has wide experience in Information Management and Business Intelligence. He has lead multiple projects as a solution architect defining architecture of multi-terabyte Enterprise Data Warehouse and BI applications. He has worked on multiple technical platforms including Informatica... more>> Over 3+ years of experience in Database Development, Database Administration, Business Intelligence Solutions and Web Development. Extensive experience in SQL Development, Reporting Services (SSRS), Integration Services (SSIS) and Analysis Services (SSAS). Expertise in creating reports,... more>> I have more than 7 years of IT experience in DW/BI technology. Specialties: Informatica,Qlikview, Data Warehousing,Teradata. I have had over ten years experience working in various financial organizations and been involved in various business areas including Fixed Income, Finance, Credit, Rates, Equities, Syndicated Loans and worked on both Vanilla and Structured product projects. Generally I have worked autonomously... more>> SQL Server DBA, system administrator, virtualization junkie, blogger, author and speaker. 
Oracle, MySQL - DBA, Developer Business Objects & Crystal Reports Adabas/Natural - DBA, Developer 10 years business experience in life insurance industry, followed by 15 years experience as an analyst developer in a wide range of technologies, including: COBOL, PL1, CICS, Mantis on IBM mainframe C++, Java, VB(6) Currently working as a lead analyst and developer in Business Intelligence,... more>> I am addicted to learn new technologies, sharing knowledge and finding new ideas to implement and share with my team. I always have passion to learn and find new solutions that meet business needs and boost the performance of the business life cycle. I am seeking to know tips and tricks of... more>> Tool agnostic decision support expert with deep experience in healthcare, pharmaceuticals, and finance. Agile, results-driven consultant targeting high quality results in fast iterations with strong customer involvement, often using tools like SQL Server, Vertica, SAP Data Services,... more>> Business Intelligence Consultant Bachelor degree in Computer Science Engineering at Universidade do Algarve - Faculdade de Ciências de Tecnologia, Portugal. At the moment, I'm playing a role as a Business Intelligence Consultant and I've already integrated in various projects in the... more>> Specialties: Universe, Unidata, Ultimate, Pick Basic, jBase, PROC, DataTel Systems,. Open to new opportunities. Working in government Systems, DODefense systems, Housing, Distribution, Health care, Warehouse-Inventory and Cost Accounting Systems. Support systems and links between them, Mike is... more>> Business Intelligence Consultant IBM Cognos Instructor and Consultant Managing Partner, Lead Business Intelligence Consultant MicroStrategy BI Consultant Sr. MicroStrategy Consultant Founder and Technology Consultant Senior Software Developer MicroStrategy Expert with 15 years exp. First in Theoretical Physics with 2 years post-grad study of quantum mechanics and general relativity. Founded Butler Group - Europe's largest indigenous IT analyst firm until acquisition by Datamonitor in 2005. Founder of Butler Analytics - dedicated to analysis of analytics technologies and... more>> Qlikview Business Intelligence Consultant 15 Years Experience in Warehouse Operations,Systems and Management. Jeff has over 40 years experience helping manufacturing and distribution companies improve their information systems. He started his career at IBM in the late 1960’s, specializing in mid-range systems for the manufacturing industry. In 1975 Mr. Carr founded Professional Computer Resources... more>> Business Intelligence consultant & architect with 10+ years of experience with various BI tools – started with BusinessObjects and Cognos, and switched to QlikView in 2009. Familiar with banking, telecommunications and retail trade industries. Runs BI Review (http://bi-review.blogspot.com) --... more>> • Visualization Expert, Data Analyst/BI Architect, with 12 years of experience in Advance Analytics, Management Reporting, Business Intelligence, ETL/DWH Solution Delivery for Fortune 500 environments. Adept at working in a diverse team setting and collaborating across... more>> I started my career as an account, followed up by some years as finance controller and now I have dedicated my work 100 % to Business Intelligence. I found this combination very useful, cause of the combination of IT and business skills. I have unique experience in what the... more>> Company business strategy development and management. 
Business Analytics concepts, architectures, methodology and implementation. Experienced Data Warehousing, ETL, Information Management, Business Intelligence, Planning, Budgeting and Consolidation project and quality manager, solution... more>> Over 20 years of experience delivering IT solutions that strengthen core business competencies. Rich experience with formulating and interpreting business strategy into enterprise data requirements and implementations. Design enterprise business intelligence end-to-end from technical aspects to... more>> Business Analyst with focus on best practices in business information visualization. Design and implementation of executive dashboards using Excel and Cognos. Specialties:information visualization, dashboard design, Excel, data analysis Over 40 years in business. Owner of BIAlytics: a one man business intelligence and data visualization consulting company, based in Turlock CA, with clients in the USA, Canada, and, Europe. The company name means “Better Insights through Analytics”. I do consulting, training, and speaking... more>> In his role, Mr. Sharma is responsible for providing cutting edge BI Solutions to our clients enabling to run better. Mr. Sharma has more than 20+ years of experience in Business Intelligence domain across key BI technologies and several industries. Mr. Sharma is very well versed in leading BI... more>> Independent Management Consultant who delivers initiatives utilising the below key skills: • Finance and Accounting – Qualified accountant with 20+ years of experience with a focus on business profitability, commercial management, governance, internal controls and finance effectiveness. Business Intelligence and Data Warehousing (BI/DW) specialist with over 11 years experience. Proven ability to work with users and developers at all levels of an organization. Great communication and teaching ability with non-technical users. Goal: Seeking BI/DW consulting in Oklahoma... more>> Business Intelligence and Data Warehousing Developer Co-Leader of Charlotte BI Group Co-Organizer of SQL Saturday 237 Co-Organizer of SQL Saturday 174 Professional skills & interests: • Design and development of BI, data warehousing, and OLAP solutions • Needs analysis, requirements... more>> 5+ years of experience building and managing an international Business Intelligence practice. 10+ years of cross-industry experience in architecting, designing and building Business Intelligence solutions. Microsoft Certified Trainer specializing in SQL Server, Business intelligence and... more>>
<urn:uuid:19545775-36b7-44b2-b9f3-f46657bb1919>
CC-MAIN-2017-04
https://www.itcentralstation.com/categories/business-intelligence-tools
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00238-ip-10-171-10-70.ec2.internal.warc.gz
en
0.862941
4,720
2.59375
3
Intel will advance Moore's Law for the foreseeable future, but keeping up with it is becoming more challenging as chip geometries shrink, according to a company executive.

Moore's Law is based on the observation that the number of transistors that can be placed on silicon doubles every two years, which brings more features to chips and provides speed boosts. Using Moore's Law as a baseline, Intel for decades has added more transistors while reducing the size and cost of a chip. The manufacturing advances help make smartphones, tablets and PCs faster and more power efficient. But as chips get smaller, maintaining pace with Moore's Law is perhaps more difficult today than it was in years past, said William Holt, executive vice president and general manager of Intel's Technology Manufacturing Group, during a speech at the Jefferies Global Technology, Media, and Telecom Conference this week.

"Are we closer to an end than we were five years ago? Of course. But are we to the point where we can realistically predict that end? We don't think so. We are confident that we are going to continue to provide the basic building blocks that allow improvements in electronic devices," Holt said.

[Slide: Moore's Law and cost-per-transistor]

The end of the industry's ability to scale chips down in size has "been a topic on everybody's mind for decades," Holt said, but he dismissed arguments by observers and industry executives that Moore's Law was dead. Some predictions about the law were short-sighted, and the paradigm will continue to apply as Intel scales down chip sizes, Holt said.

"I'm not here to tell you that I know what's going to happen 10 years from now. This is much too complicated a space. At least for the next few generations we are confident we don't see the end coming," Holt said, talking about generations of manufacturing processes.

Moore's Law was first established in 1965 by Gordon Moore, who co-founded Intel in 1968 and ultimately became CEO in 1975. The original paper on the law, published in Electronics magazine in 1965, focused on the economics related to cost-per-transistor, which would come down with scaling.

"The fact that now as we look at the future, the economics of Moore's Law ... are under considerable stress is probably appropriate because that is fundamentally what you are delivering. You are delivering a cost benefit each generation," Holt said.

But Holt said that manufacturing smaller chips with more features becomes a challenge as chips could be more sensitive to a "wider class of defects." The sensitivities and minor variations increase, and a lot of attention to detail is required.

"As we make things smaller, the effort that it takes to make them actually work is increasingly difficult," Holt said. "There are just more steps and each one of those steps needs additional effort to optimize."

To compensate for the challenges in scaling, Intel has relied on new tools and innovations. "What has become the solution to this is innovation. Not just simple scaling as it was the first 20 years or so, but each time now you go through a new generation, you have to do something or add something to enable that scaling or that improvement to go on," Holt said.

Intel has the most advanced manufacturing technology in the industry today, and was the first to implement many new factories.
Intel added strained silicon on the 90-nanometer and 65-nanometer processes, which improved transistor performance, and then added gate-oxide material -- also called high-k metal gate -- on the 45-nm and 32-nm processes. Intel changed transistor structure into 3D form on the 22-nm process to continue shrinking chips. The latest 22-nm chips have transistors placed on top of each other, giving it a 3D design, rather than next to each other, which was the case in previous manufacturing technologies. Intel in the past has made chips for itself, but in the last two years has opened up its manufacturing facilities to make chips on a limited basis for companies like Altera, Achronix, Tabula and Netronome. Last week Intel appointed former manufacturing chief Brian Krzanich to CEO, sending a signal that it may try to monetize its factories by taking on larger chip-making contracts. Apple's name has been floated around as one of Intel's possible customers. For Intel, the advances in manufacturing also correlate to the company's market needs. With the PC market weakening, Intel has made the release of power-efficient Atom chips for tablets and smartphones based on the newest manufacturing technologies a priority. Intel is expected to start shipping Atom chips made using the 22-nm process later this year, followed up by chips made using the 14-nm process next year. Intel this week said upcoming 22-nanometer Atom chips based on a new architecture called Silvermont will be up to three times faster and five times more power-efficient than predecessors made using the older 32-nm process. The Atom chips include Bay Trail, which will be used in tablets later this year; Avoton for servers; and Merrifield, due next year, for smartphones. Intel is trying to catch up with ARM, whose processors are used in most smartphones and tablets today. The process of scaling down chip sizes will require lots of ideas, many of which are taking shape in university research being funded by chip makers and semiconductor industry associations, Holt said. Some of the ideas revolve around new transistor structures and also materials to replace traditional silicon. "Strain is one example that we did in the past, but using germanium instead of silicon is certainly a possibility that is being researched. Even more exotically, going to III-V material provide advantages," Holt said. "And then there are new devices that are being evaluated as well as different forms of integration." The family of III-V materials includes gallium arsenide. The U.S. government's National Science Foundation is leading an effort called "Science and Engineering behind Moore's Law" and is funding research on manufacturing, nanotechnology, multicore chips and emerging technologies like quantum computing. Sometimes, not making immediate changes is a good idea, Holt said, pointing to Intel's 1999 transition to the copper interconnect on the 180-nm process. Intel was a late mover to copper, which Holt said was the right decision at the time. "That equipment set wasn't mature enough at that point in time. People that moved [early] struggled mightily," Holt said, adding that Intel also made a late move to immersion lithography, which saved the company millions of U.S. dollars. By the time Intel moved to immersion lithography the transition was smooth, while the early adopters struggled. The next big move for chip manufacturers is to 450-mm wafers, which will allow more chips to be made in factories at less cost. 
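To put the 450-mm transition in rough perspective (this is simple geometry, not a figure from the article): a 450-mm wafer has (450/300)^2 = 2.25 times the area of today's standard 300-mm wafers, so each wafer can yield roughly twice as many dies of a given size, which is where much of the expected cost saving comes from.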
Intel in July last year invested $2.1 billion in ASML, a tools maker, to enable smaller chip circuits and larger wafers. Following Intel's lead, TSMC (Taiwan Semiconductor Manufacturing Co.) and Samsung also invested in ASML. Some of TSMC's customers include Qualcomm and Nvidia, which design chips based on ARM processors. Intel's investment in ASML was also tied to the development of tools for implementation of EUV (extreme ultraviolet) technology, which enables more transistors to be crammed on silicon. EUV shortens the wavelength range required to transfer circuit patterns on silicon using masks. That allows creation of finer images on wafers, and chips can carry more transistors. The technology is seen as critical to the continuance of Moore's Law. Holt could not predict when Intel would move to 450-millimeter wafers, and hoped it would come by the end of the decade. EUV has proved challenging, he said, adding that there are engineering problems to work through before it is implemented. Nevertheless, Holt was confident about Intel's ability to scale down and to remain ahead of rivals like TSMC and GlobalFoundries, which are trying to catch up on manufacturing with the implementation of 3D transistors in their 16-nm and 14-nm processes, respectively, next year. But Intel is advancing to the second generation of 3D transistors and unlike its rivals, also shrinking the transistor, which will give it a manufacturing advantage. Speaking about Intel's rivals, Holt said, "Since they have been fairly honest and open they are going to pause area scaling, they won't be experiencing cost saving. We will continue to have a substantial edge in transistor performance."
<urn:uuid:585b6be2-bbcd-4ceb-b07c-26d3367d62ef>
CC-MAIN-2017-04
http://www.networkworld.com/article/2166095/computers/intel--keeping-up-with-moore--39-s-law-becoming-a-challenge.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00540-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969159
1,766
3.078125
3
Many users feel more secure using smartphones to surf the Internet than PCs, and a majority consider the risk of losing personal data higher on computers than on smartphones, according to Kaspersky Lab. The new findings highlight a dangerous misconception with regard to smartphone protection and demonstrate that security software is less common on mobile devices than it is on PCs.

A total of 1,600 smartphone users were surveyed in Great Britain, France, Italy and Spain. The survey examines the extent to which European smartphone users are aware of current mobile malware threats and whether or not they consider smartphone protection a necessity. There has been a recent increase in the number of attacks on mobile operating systems like Android and iOS, and experts expect to see considerably more of these in the future. Despite this, users in Europe, according to the Kaspersky Lab survey, feel more secure accessing the Internet via a mobile device. 51 percent of those surveyed are afraid of having their computer infected with malware while surfing the Internet, compared with the fact that 27 percent of respondents consider a virus infection on their computer a serious threat.

One interesting detail which emerged from the survey is that users consider the risk of losing personal data lower on a smartphone than on a PC – despite the fact that around a fifth of all smartphone users have already experienced the loss or theft of a mobile device. The majority of users – over 90 percent in most European countries – store personal data, such as photos, emails or contact details, on their smartphones. Around one-third also save login information, such as PIN codes or passwords, for various services on their mobile devices, demonstrating a large gap between secure reality and user perception.
DOE appeal: Breaking exaflop barrier will require more funding
By Frank Konkel - Jul 17, 2013
The Cielo supercomputer at Los Alamos National Laboratory, built by Cray, has a theoretical maximum performance of 1.37 petaflops. (LANL photo)

Department of Energy-funded supercomputers were the first to crack the teraflop (1997) and petaflop (2008) barriers, but the United States is not likely to be the first nation to break the exaflop barrier without significant increases in DOE funding.

That projection is underscored by China's 55-petaflop Milky Way 2, which has achieved speeds double those of DOE's 27-petaflop, Oak Ridge National Laboratory-based Titan, and which took the title of world's fastest supercomputer in June. China is rapidly stockpiling cash for its supercomputing efforts, Japan recently invested $1 billion in building an exascale supercomputer – both countries hope to build one by 2020 – and the European Union, Russia and a handful of large private-sector companies are all in the mix as well.

DOE's stated goal has also been to develop an exascale supercomputing system – one capable of a quintillion, or 1,000,000,000,000,000,000, floating-point operations per second (FLOPS) – by 2020, but developing the technology to make good on that goal would take at least an additional $400 million in funding per year, said Rick Stevens, associate laboratory director at Argonne National Laboratory.

"At that funding level, we think it's feasible, not guaranteed, but feasible, to deploy a system by 2020," Stevens said, testifying before the House Science, Space and Technology subcommittee on Energy on May 22. He also said that current funding levels wouldn't allow the United States to hit the exascale barrier until around 2025, adding: "Of course, we made those estimates a few years ago when we had more runway than we have now."

DOE's Office of Science requested more than $450 million for its Advanced Scientific Computing Research program in its fiscal 2014 budget request, while DOE's National Nuclear Security Administration asked for another $400 million for its Advanced Simulation and Computing Campaign. That's a lot of money at a time when sequestration dominates headlines and the government is pinching pennies.

Subcommittee members weighed in on the matter, stressing the importance of supercomputing advancements but with a realistic budgetary sense. Chairman Cynthia Lummis (R-Wyo.) said the government must ensure DOE "efforts to develop an exascale system can be undertaken in concert with other foundational advanced scientific computing activities."

"As we head down this inevitable path to exascale computing, it is important we take time to plan and budget thoroughly to ensure a balanced approach that ensures broad buy-in from the scientific computing community," Lummis said. "The federal government has limited resources and taxpayer funding must be spent on the most impactful projects."

An exascale supercomputer would be 1,000 times more powerful than the IBM Roadrunner, which was the world's fastest supercomputer in 2008. Developed at Los Alamos National Laboratory with $120 million in DOE funding, Roadrunner was the first petaflop-scale computer, handling a quadrillion floating-point operations per second. Yet in just five years it was rendered obsolete by hundreds of faster supercomputers and powered down – an example of how quickly supercomputing is changing. Supercomputers are getting faster and handling more expansive projects, often in parallel.
Supercomputers through time: some highlights of the history of supercomputing.

The U.S. Postal Service, for instance, uses its mammoth Minnesota-based supercomputer and its 16 terabytes of in-memory computing to compare 6,100 processed pieces of mail per second against a database of 400 billion records in around 50 milliseconds.

Today's supercomputers are exponentially faster than their famous forebears of the 1990s and 2000s. IBM's Deep Blue, which defeated world chess champion Garry Kasparov in a six-game match in 1997, was one of the 300 fastest supercomputers in the world at the time. At 11.38 gigaflops, Deep Blue calculated 200 million chess moves per second, yet it was roughly 1 million times slower than the now-retired Roadrunner, which DOE's National Nuclear Security Administration used to model the decay of America's nuclear arsenal. Of vital importance to national security before it was decommissioned, Roadrunner essentially predicted whether nuclear weapons – some made decades ago – were operational, allowing a better grasp of the country's nuclear capabilities.

Titan, which operates at a theoretical peak speed of 27 petaflops and is thus roughly 27 times faster than Roadrunner, has been used to run complex climate models and simulate nuclear reactions. However, even at its blazing speed, Titan falls well short of completing tasks like simulating whole-Earth climate and weather models with precision. Computer scientists believe, though, that an exascale supercomputer might be able to do it. Such a computer, dissecting enough information, might be able to predict a major weather event like Hurricane Sandy long before it takes full form.

Yet reaching exascale capabilities will not be easy for any country or organization, even those that are well funded, due to a slew of technological challenges that have not yet been solved, including how to power such a system. Using today's CPU technology, powering and cooling an exascale supercomputing system would take 2 gigawatts of power, according to various media reports. That is roughly equivalent to the maximum power output of the Hoover Dam.

Frank Konkel is a former staff writer for FCW.
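The performance figures cited above span several orders of magnitude, which can be hard to picture. A minimal sketch, using only the approximate peak speeds quoted in this article, puts the ratios in one place (the numbers are theoretical peaks, not sustained performance):

    # Approximate theoretical peak speeds quoted in the article, in FLOPS.
    GIGA, PETA, EXA = 1e9, 1e15, 1e18

    systems = {
        "Deep Blue (1997)":   11.38 * GIGA,
        "Roadrunner (2008)":  1.0 * PETA,    # first petaflop-scale system
        "Titan (2012)":       27.0 * PETA,
        "Milky Way 2 (2013)": 55.0 * PETA,
    }

    # How much faster a 1-exaflop system would be than each machine.
    for name, flops in systems.items():
        print(f"{name:>18}: {flops:.3g} FLOPS; exascale would be ~{EXA / flops:,.0f}x faster")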
Even though over 80% of email users are aware of the existence of bots, tens of millions respond to spam in ways that could leave them vulnerable to a malware infection, according to a Messaging Anti-Abuse Working Group (MAAWG) survey.

In the survey, half of users said they had opened spam, clicked on a link in spam, opened a spam attachment, replied to it or forwarded it – activities that leave consumers susceptible to fraud, phishing, identity theft and infection. While most consumers said they were aware of the existence of bots, only one-third believed they were vulnerable to an infection. Fewer than half of the consumers surveyed saw themselves as the entity that should be most responsible for stopping the spread of viruses. Yet only 36% of consumers believe they might get a virus, and 46% of those who opened spam did so intentionally.

This is a problem because spam is one of the most common vehicles for spreading bots and viruses. The malware is often unknowingly installed on users’ computers when they open an attachment in a junk email or click on a link that takes them to a poisoned Web site. Younger consumers tend to consider themselves more security savvy, possibly from having grown up with the Internet, yet they also take more risks.

Among the survey’s key findings:
- Almost half of those who opened spam did so intentionally. Many wanted to unsubscribe or complain to the sender (25%), to see what would happen (18%) or were interested in the product (15%).
- Overall, 11% of consumers have clicked on a link in spam, 8% have opened attachments, 4% have forwarded it and 4% have replied to spam.
- On average, 44% of users consider themselves “somewhat experienced” with email security. In Germany, 33% of users see themselves as “expert” or “very experienced,” followed by around 20% in Spain, the U.K. and the U.S.A., 16% in Canada and just 8% in France.
- Men and email users under 35, the same demographic groups who tend to consider themselves more experienced with email security, are more likely to open spam, click on links in it or forward it. Among email users under 35, 50% report having opened spam compared to 38% of those over 35. Younger users also were more likely to have clicked on a link in spam (13%) compared to less than 10% of older consumers.
- Consumers are most likely to hold their Internet or email service provider most responsible for stopping viruses and malware. Only 48% see themselves as most responsible, though in France this falls to 30% and in Spain to 37%.
- Yet in terms of anti-virus effectiveness, consumers ranked themselves ahead of all others except anti-virus vendors: 56% of consumers rated their own ability to stop malware as very or fairly good, and 67% rated that of anti-virus vendors the same way. Government agencies, consumer advocacy agencies and social networking sites were among those rated most poorly.

The survey was conducted online between January 8 and 21, 2010 among over a thousand email users in the United States and over 500 email users in Canada, France, Germany, Spain and the United Kingdom. Participants were general consumers responsible for managing the security for their personal email address. The full report is available in PDF format.
Traditionally, rootkit research has focused on accomplishing persistence and stealth with software running at the user or kernel level within a computer's operating system. The techniques used to run code undetected have evolved over time, and studying them allows the information security community to understand the evolution of a type of malware that has a severe impact on the privacy and security of IT users.

A potentially much more dangerous scenario is that of malware that can effectively avoid detection and removal because it has stored itself in the computer's BIOS, the firmware that runs during the boot process prior to execution of the operating system itself. Such malware would resist reinstallation of the operating system, wiping and even replacement of the hard disk, and could achieve more stealth than OS-dependent rootkits.

In 2009 Alfredo Ortega and Anibal Sacco discovered a generic technique to modify the BIOS of certain chipsets so that they could insert homebrewed rootkit code. The technique is applicable to any computer that supports installation of BIOS updates that are not digitally signed using cryptographically strong methods. This work is available at the Persistent BIOS Infection page.

During their research, they also discovered that several computer manufacturers ship computers with pre-installed BIOS firmware that already provides rootkit functionality. Closer inspection revealed that the concealed code came from a software vendor's anti-theft technology that is currently embedded in millions of computers. Further research identified and documented multiple security weaknesses that make the discovered software vulnerable to manipulation by potentially malicious parties, who could turn it into a highly effective rootkit.

The researchers also investigated ways to prevent and detect tampering with software embedded in BIOS code. This work is available at the Deactivate the Rootkit page.
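The page above does not spell out the researchers' detection tooling, but one common, generic approach to spotting firmware tampering is to compare a hash of the firmware image read from the machine against the hash of a known-good vendor image. A minimal sketch of that idea follows; the file names are hypothetical, and obtaining a trustworthy dump of the BIOS is itself a separate problem.

    # Generic firmware-integrity check, not the researchers' actual tool.
    # Compares a dumped BIOS image against a known-good vendor image by hash.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    known_good = sha256_of("vendor_bios_v1.02.bin")   # hypothetical reference image
    dumped     = sha256_of("dumped_bios.bin")         # hypothetical dump from the machine

    print("firmware matches reference" if dumped == known_good
          else "firmware differs from reference -- investigate")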
U.S. consumer attitudes and behavior related both to data sharing and to social media oversharing have shifted significantly just within the last two years, according to a new study by McCann Worldgroup. This “pendulum swing” is occurring across the board generationally, but is most pronounced among teens, whose “migration” across social media channels reflects growing concerns about bullying in addition to evolving opinions about what’s cool or not.

Research found the following striking changes over the last two years:
- The #1 privacy fear that increased significantly since 2011 is that the government will use people’s personal data against them in some way.
- The companies considered to be the greatest threat, and the least trusted with data, are the Silicon Valley companies, such as Google and Facebook.
- What did not change, however, is that banks are still the most trusted institution when it comes to using sensitive personal information properly.

In a surprising twist, the study has also uncovered new concerns, a kind of “privacy backlash,” that has much more to do with a new consumer etiquette around what and how to share online. “Selfie,” for example, may be 2013’s dictionary word of the year, but just under half of Americans under 34 say selfies are not cool. Similarly, reflecting that this is not just a young generational trend, 77% of people over the age of 35 consider posting frequent selfies on Instagram to be “uncool.”

“We found evidence of a new trend towards being more selective and exclusive when it comes to sharing, even among the teenage generation,” said Nadia Tuma, Deputy Director, McCann Truth Central. “As one of our young people said, ‘the pendulum is swinging in the direction of more privacy.’ This may explain why young people are moving from Facebook to Snapchat. It is becoming cooler to be a bit mysterious, like not being very searchable on Google.”

“With social networks taking on a more dominant role in our lives, we face a myriad of potential social pitfalls,” said Laura Simpson, Global Director, McCann Truth Central. “Our findings point to new rules for navigating a world where privacy and publicity collide. The challenge lies in maintaining a delicate balance between making yourself seem interesting without looking vain.”

Concerns about privacy, including bullying as a related aspect, are having a marked effect on youth migration patterns with regard to social media. Given the permanence of texts, tweets and status updates, bullying is changing the way people behave online. For example, youth in the survey explained their migration from Facebook to Snapchat as being partly attributable to greater privacy (and therefore less bullying). But bullying is only one of what might be called “The 4 B’s” that are defining currently accepted sharing and privacy practices with regard to social media. In addition to Bullying, these include avoiding Boring, Boasting and Begging behaviors as well.
- Only 34% of people think posting routine activities as status updates on Facebook is COOL.
- On the other hand, 64% of people think the less personal approach of frequently posting silly or funny articles on Facebook is COOL.
- Only 35% of people think frequently “checking in” your location on Foursquare is COOL.
- 63% of people think having a personal style blog that chronicles your daily outfits is UNCOOL.
- 73% of people think adding people you don’t know as LinkedIn connections is UNCOOL.
- 72% of people think adding people you don’t know as Facebook friends is UNCOOL.
- 63% of people think defriending people who are not your “real” friends on Facebook is COOL.
The correct answer is C. The highest risk of collision comes from the shortest hash output length. From this list, MD5 has the shortest, with a 128-bit hash value; SHA-1 produces a 160-bit hash value, and SHA-2 hash lengths start at 224 bits and increase from there. HMAC is not a hashing algorithm; instead, it is an implementation of hashing. HMAC can use any hashing algorithm, such as MD5 or SHA-1, and adds a symmetric key as a source of randomness in order to produce a more complex hash. It does not produce an encrypted hash. Since HMAC can use any hashing algorithm, it is not necessarily using MD5, and with the added randomness, collisions are less common than with MD5 on its own.
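Python's standard library can confirm the digest lengths discussed above and show how HMAC wraps an existing hash rather than being a hash of its own. A minimal sketch; the key and message are placeholders.

    import hashlib, hmac

    msg = b"collision resistance depends on digest length"

    # Digest lengths of the algorithms discussed above.
    for name in ("md5", "sha1", "sha224", "sha256"):
        digest = hashlib.new(name, msg).digest()
        print(f"{name:>6}: {len(digest) * 8}-bit digest")

    # HMAC is a keyed construction over an existing hash (MD5 here), not a new hash.
    tag = hmac.new(b"placeholder-secret-key", msg, hashlib.md5).hexdigest()
    print(f"HMAC-MD5: {tag} ({len(tag) * 4}-bit)")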
ACCESSING THE MAINFRAME
Edward L. Bosworth, Ph.D.
TSYS Department of Computer Science
Some notes for Assembly Language I

The Basic Process
1. Start the Terminal Emulator.
2. Log onto the Mainframe.
3. Copy the program to be run (if needed).
4. Edit the program to add the required features.
5. Submit the program for execution.
6. Examine the results of running the program.
7. Notify the instructor of success, so that the output can be examined and graded.

We start with a few cautions about running the terminal emulator in the standard mode that is most useful for accessing the Mainframe.

What Keys to Use?
On a modern keyboard, avoid the following sets of keys: the Numeric Keypad keys, the Toggle and other keys, the Cursor Control keys (with some exceptions), and the Windows key, Control key, Alt key, and keys at the top.

Be Careful With the Backspace Key
The preferred mode for accessing the Mainframe is called OVERWRITE mode, in which a typed character replaces the one just after the cursor. This differs from INSERT mode, which is the mode more commonly used.
Consider changing the string “AAABBBCCC” to read “AAADDDCCC”.
Insert mode: Place the cursor after the “BBB”, backspace 3 times, and then insert the “DDD”. Get “AAADDDCCC”.
Overwrite mode: Place the cursor after the “BBB”, backspace 3 times, to get the string “AAACCC”. Insert the string “DDD” to get “AAADDD”. What happened? After the backspace, the cursor position was after the “AAA”; view the string as “AAA|CCC”. The “DDD” replaced the next three characters in the string “AAACCC”, which were the “CCC”.
Lesson: In Overwrite mode, be cautious with the backspace key.

Locking and Unlocking the Keyboard
Upon occasion, some mistake will cause the keyboard to “lock up”. You will see the string “XMIT Lock” in the lower pane of the Emulator. When this string is displayed, the emulator does not respond to the keyboard. There are two remedies.
1. First, try hitting the ESC (Escape) key a few times. That usually will unlock the keyboard.
2. If that does not work, hit Ctrl–Q and then ESC. Hit these keys alternately until the string disappears.

Logging Onto the Mainframe
To Log On:
1. Start the terminal emulator.
2. Connect to the Mainframe (File menu or Alt–C). You may also click on the “lightning bolt” below the File menu entry.
3. Optionally, press the CAPS LOCK key on the keyboard, as what you enter is best done in uppercase letters.
4. Enter the string “L TSO” followed by the Enter key.
5. Enter your user ID at the prompt.
6. Enter your password in the position indicated by the cursor.
7. The system will display a few pages of announcements, which may be disregarded. Hit the ENTER key to move to the next announcements.
8. The last announcement page (or partial page) will end with “LAST MESSAGE FROM VENDOR.CLIST”. Hit ENTER again. You are logged onto the Mainframe.

Logging Off the Mainframe
We shall discuss this in detail later. For now, we present a CAUTION. YOU MUST LOG OFF THE SYSTEM IN THE CORRECT MANNER:
1. Return the system to a specific screen (discussed later).
2. Enter the string “LOGOFF”.
3. Then stop the Terminal Emulator.
If you close the emulator without logging off the system, your session with the Mainframe will not be closed properly. It is possible that you might not be able to log back into the system for an hour. While the time might be as small as 10 minutes, this is not certain. The terminal emulator itself is a proper Windows program; the process running on the IBM Mainframe is not.

Your First Login
1. If this is your first login, you will each use the default password assigned for the class.
On some occasions it is the five-character string “CSUPW” (without the quotes), and at others it is your User ID.
2. Enter the password, followed by the Enter key.
3. On your first login, you will be prompted to select and confirm a new password. The best lengths seem to be five or six characters. DO NOT USE MORE THAN SEVEN CHARACTERS FOR YOUR PASSWORD, AS THIS WILL LEAD TO STRANGE HAPPENINGS.
4. You will then see events proceed as described above. If this does not happen, contact the instructor immediately.

The ISPF Primary Menu
After the login sequence has been completed, you should see this menu.

Getting the First Program
There are two ways to generate the first program.
1. You may open the editor and enter the entire program by hand. This is tedious and almost certain to introduce errors into your text.
2. You may copy the standard program from the Public Library and modify that program as needed.
The next few slides describe the preferred method to get the first program, which is titled “LAB01”. Most students will store the file with the same name.
3. In the Primary Options Menu, put a 3 into the Options area and hit Enter.
4. This brings you to the Utilities Menu. Again put a 3 into the Options area and hit Enter. This brings you to the Copy/Move menu.
5. Complete the “From Data Set” menu as shown below.
6. Complete the “To Data Set” menu as shown below.
7. Hit F3 one or more times to return to the ISPF Primary Options Menu.

The “From Data Set” Menu Described
1. Place the single character “C” into the Options part of the menu.
2. Hit the TAB key to move to another field. Do not hit the ENTER key at this time.
3. Verify that your User ID is placed in the Project field under the heading “From ISPF Library”, that the Group field contains “C3121”, and that the Type field contains “ASSY”. Normally, you should not need to change anything in these fields.
4. Use the TAB key to move to the box labeled “Name” in the section labeled “From Other Partitioned or Sequential Data Set”. Enter the name of the source data set, along with the single quotes. Hit the ENTER key after you do this.
The next slide shows a typical appearance of the screen before ENTER is hit.

The “From Data Set” Menu Shown

The “To Data Set” Menu Shown
If the fields in the area “To ISPF Library” are correct, you just hit ENTER. The Project name must be your User ID.

The Copy Menu Described
You should now see the Copy Menu. It has a list of files down the left side. You are looking for “LAB01”.
1. Use the function key F8 to move down the list until you see the name. If you go too far, use the function key F7 to move back up the list.
2. When you see a screen with the file name “LAB01” on it, use the TAB key to move the cursor into the box just to the left of the name. If you go too far, press the SHIFT key and hit TAB to move back.
3. Place an “S” in the box just before the file name.
4. An optional step is to TAB over to the prompt box to the right of the file name and enter a new file name. If you do not place a new file name there, the file will appear as LAB01 in your listing. As I already had a file by that name, I chose another name.
5. Hit the ENTER key and complete the copy.

The Copy Menu Shown
As noted above, I chose the name “LAB01A” only to change the file name.

Edit the File in Your Project
You are now in a position to edit the file to make it your own. You MUST change the User ID from CSU0003 to your own.
Generating the Next Lab
Once you have obtained a copy of the file, you should hit the function key F3 a number of times to return to the ISPF Primary Options Menu. The discussion of how to run a program will be given below. Once the program LAB01 (or any other program) is run, the file should NOT be changed. This is especially true of LAB01, which contains the basic code structure to be used by every other program. We now discuss how to copy the file LAB01A into a new file LAB02.
1. From the ISPF Primary Options Menu, select option 2 for Edit.
2. In the Edit menu, I verify the Project, Group, and Type fields, and then enter the name LAB02 for the new file.

Generating the Next Lab (Part 2)
3. Hit ENTER to obtain a blank edit page. Enter the command COPY followed by your file name, here “LAB01A”.

Generating the Next Lab (Part 3)
4. Hit ENTER to obtain the copy.

Necessary Editing Changes (For All Labs)
1. The first line must begin with your User ID, with a random letter attached. My user ID CSU0003 is expanded to CSU0003A.
2. Change the ‘ED BOZ’ to something appropriate for your program.
3. Change the TITLE in line 500 to something appropriate to you.
4. Change the description in lines 900 – 1300 to include your name, the date the program actually was written, and its purpose.
WARNING: If you do not change the User ID in the first line to your User ID, the program listing will be placed in my project. Should that happen, I shall discard it without grading it; you will get a 0 (zero) for the assignment.

Entering the Editor (Step 1)
From the ISPF Primary Option Menu, select option 2. You will see the Edit menu.

Entering the Editor (Step 2)
You could enter the file name in the Member field, or just hit Enter to see the list of members. Use the TAB (and Shift–TAB) key to move to the box just in front of the file you want to edit, and then place an S in the box. Hit ENTER to open the file for editing.

The Dual Mode Editor
The editor is a classical dual-mode editor, of a type rarely used today. The editor has two modes: Insert and Command. In the Insert Mode, text is entered into the program. In the Command Mode, commands are executed and text is not entered into the program. Common commands move up and down the file, delete lines of text, and enter the Insert Mode.

Changing Editor Modes
To enter the Insert Mode, place an I on the line number of the line after which you wish to insert text and hit ENTER. To leave the Insert Mode, just enter a blank line and hit ENTER. Text in a single line can be changed while in Command Mode. Just use the cursor control keys to place the cursor and type the new text that is to replace the old text.

Executing a Program
The easiest way to execute a program is to open its file with the Editor and then type SUBMIT on the Command Line. Hit ENTER to submit. When I submitted my job, I saw the following at the bottom of the screen: IKJ56250I JOB CSU0003A(JOB02189) SUBMITTED
Hit ENTER once to view the results of the submission. You should see one line containing the text MAXCC = 0. This indicates a success. If you see something like MAXCC = 4 or MAXCC = 8, your program had one or more errors and requires further editing to fix it. If you do not see anything, you probably have an error in one of the first two lines of the program. Be sure your User ID is set correctly. Hit Enter again to return to the Editor. Either correct your program or hit F3 a few times to return to the ISPF Primary Options Menu and view the output.

Setting Up the View Filter
In order to see your program listings, you must first set up the view filter.
This must be done only once, after you run your first program.
1. In the ISPF Primary Options Menu, enter the two-character string “SD”.
2. Tab over to the Filter command box and hit ENTER.
3. Enter a 1 into the area provided.
4. Tab into the first Value box and enter your user ID. The example here shows my ID.

The Output Queue
From the ISPF Primary Option Menu, enter the two-character string “SD” to access the SDSF system, and then enter the single character “O” (not the digit “0”) to display the Output Queue. TAB down to the box in front of the job you want to display. There are two options that are commonly used.
1. If you had a MAXCC = 0, enter a “?” (as shown above), then an “S” in front of the PRINTER ASM entry in the next menu, to see the output.
2. Otherwise, enter an “S” in the above menu to see the program listing.

Purging the Output Queue
If you run many programs, or make many attempts to run a single program, your Output Queue will get rather full. In order to avoid clutter, it is best to purge the output queue occasionally.
1. From the ISPF Primary Options Menu, enter the two-character command “SD”, followed by ENTER, to access the SDSF system.
2. Enter the single alphabetical character “O” for Output Queue.
3. Use the TAB and Shift–TAB keys to move the cursor in front of each JOBNAME you want to delete and place a “P” in the box. Do not hit ENTER unless you want to purge only one job.
4. Hit ENTER after you have selected the last of the jobs to be purged.
5. The best option requires you to verify each job to be purged. After doing this, exit the menu in the standard way.

Exiting and Logging Off
It is VERY IMPORTANT that you log off in an orderly fashion.
1. Return to the ISPF Primary Options Menu.
2. Enter the one-character command “X” and hit ENTER.
3. If you are prompted to select an exit option, select whichever one you fancy.
4. Enter the six-character string “LOGOFF”.
5. Disconnect from the Mainframe by going to the File menu of the terminal emulator and clicking on Disconnect.
6. Shut down the Terminal Emulator.
AGAIN: If you terminate your session without logging off in the correct manner, you may be frozen out for about an hour. It just takes that long for the Mainframe Operating System to clear your session and permit another logon.
Kelman: The impact of pay disparities
By Steve Kelman - Jul 24, 2008

Every once in a while you read something that uses a small set of concepts to explain a lot about the world. John Donahue’s new book, “The Warping of Government Work” (Harvard University Press), is an example. (In the interest of full disclosure, I must say that Donahue is a Harvard colleague and a friend.)

The book is organized around a simple idea. In the past 20 years, a major change has occurred in the distribution of earnings in the private sector. With growing demand for knowledge-intensive services, incomes for highly educated, highly skilled people have taken off. At the same time, globalization has brought many unskilled people from developing countries into the international market, and competition has caused incomes for people at the middle and bottom to stagnate.

However, this change has not occurred in the public sector. Because of the strength of public-sector unions (at the bottom) and public hostility to high salaries for government employees (at the top), the government’s wage structure is now far more egalitarian than its private-sector counterpart. That means blue-collar government jobs pay noticeably more than comparable ones in the private sector. For example, in 1970, the pay for postal employees was 10 percent higher than for high school-educated men in general; by 2000, it was 60 percent higher. That trend also means that professional, highly skilled jobs in government pay noticeably less. In 2003, the average salary for the top 10 percent of information technology employees was 27 percent less in government than in the private sector. For top executives, the gap is much larger.

As a result, for those at the bottom, government jobs are a safe harbor from the turmoil facing unskilled workers in the rest of the economy, and those workers will fight hard to prevent changes in their work conditions. For those at the top, such jobs are a backwater, unattractive to the best and brightest.

Donahue uses that observation to explain many of the ills facing government. To protect their safe harbor, employees create strong unions, which act to inhibit changes that would allow agencies to better serve the people. Because government is a backwater for high-end employees, its effectiveness in handling complex tasks is reduced. It often inappropriately outsources jobs for which contracts are hard to manage or that involve core governmental competencies because pay scales make it impossible to hire the talent government needs.

Donahue realizes the problems the separate government world has created. But changing the government’s egalitarian wage structure is difficult. Maybe we can take some small steps?

Kelman is professor of public management at Harvard University’s Kennedy School of Government and former administrator of the Office of Federal Procurement Policy. Connect with him on Twitter: @kelmansteve
There is a reluctance on the part of academic institutions to engage with the craft of hacking as an integral skill for those in IT. While some colleges are starting to offer programs in this area, Oliver Lavery, vice president of research and IT security specialist for IMMUNIO, said, "It is strange that it's not a broader part of [computer science] curriculum. We are not addressing the problem. We need to teach developers to think like hackers."

Thus the security industry might need to rebrand hacking. If academic institutions are struggling to justify bringing "hacking education" into their programs, here are five reasons, according to Lavery, to consider teaching these skills.

- What does hacking mean? Hacking is the ability to look at the design of a system and use it in ways it wasn't designed for. For security, it is fundamentally important to have people around who understand how a system might be exploited. Hackers are able to identify the poorly designed applications that allow exploitation. Hacking is a skill set and a tool like any other tool.
- Society has stigmatized this area of knowledge. Hacking never started as a term that implied the people doing it were unethical. The idea that hackers are unethical to begin with is not necessarily true; we have absorbed that negative stigma into our collective consciousness. A huge advantage of including cyber security and hacking in the curriculum is being able to teach students about the ethics of the field, and the legal risks of unethical actions.
- Hacking used to be a pretty typical profession. Computer security circa 2000 was just emerging, and hacking was a strange or obscure thing within larger organizations that were focused on security. Hackers are those who possess a fundamental skill set that allows them to take a program and understand how it works without access to its source code.
- If something unexpected happens, it shouldn't fail catastrophically. Fundamentally, the skill set is thinking outside the box. Teaching a combination of good applied programming skills, an understanding of how computers work, and the habit of questioning assumptions will prepare practitioners to understand the failure modes of a system, with an emphasis on how to go from a set of problems to a minimal complete solution. (The short sketch at the end of this article illustrates the mindset.)
- There is a huge problem with the hiring of skilled people. We see more and more demand, with a diminished supply of talented practitioners. Schools don't have a problem teaching criminology to those going into the law enforcement professions, so why do we not teach the fundamental principle of thinking like an attacker?

These skills are things people should learn along with being ethical, but the identity of the hacker has been sullied by the use of the "ethical" qualifier. "We don't call someone an 'ethical' programmer, but a programmer can write software that is nasty and malicious," said Lavery. Nor does society differentiate good actors from irresponsible or malicious ones in any other profession. We don't refer to someone as an ethical steel worker or an ethical teacher. The presumption is that hackers are inherently unethical, and the segment of the population that has taken the skills of hackers and used them for good has been deemed the ethical ones. The space that hackers operate in is always the same.
Hacking is looking at a system differently and asking: if I give it completely different inputs, what will it do, and how can I use that to accomplish some other goal? Academia can help rebrand the field of hacking in a way that erases the stigma and changes the collective consciousness to better appreciate the value of these skills.
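To make the fourth point in the list above concrete, here is a small, hedged sketch of that mindset: throw inputs at a routine that it was never designed for and check that it fails safely rather than catastrophically. The parse_age() function is a made-up example, not something from the article.

    # Illustrative only: probe a routine with unexpected inputs and confirm it
    # degrades gracefully. parse_age() is a hypothetical example function.
    import random, string

    def parse_age(raw: str) -> int:
        """Expects input like '42'; returns -1 for anything it cannot handle."""
        try:
            age = int(raw)
        except (ValueError, TypeError):
            return -1
        return age if 0 <= age <= 150 else -1

    random.seed(1)
    unexpected = ["42", "", "-7", "9" * 1000, "DROP TABLE users", "\x00\xff",
                  "".join(random.choices(string.printable, k=20))]

    for raw in unexpected:
        result = parse_age(raw)
        assert isinstance(result, int)        # never raises, never hangs
    print("all unexpected inputs handled gracefully")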
Matt Skinner is a student at San Diego State University.

In the fourth quarter of 2003, San Diego became the first major metropolitan market in which the majority of Internet users connected through a broadband service rather than a slower provider. This is a good beginning and puts San Diego at an advantage. However, without extensive support for broadband, the county may fall behind other regions that offer a better broadband infrastructure and a more promising economy. In January 2005, the United States ranked only 16th in per capita broadband usage worldwide.

Municipalities around the country, from Philadelphia to San Francisco, are recognizing the importance of solid broadband policies by subsidizing their development through government funds. There are currently more than 100 municipalities in the United States with networks in place for public safety, business opportunity and community building. Even New Orleans has begun offering free Wi-Fi wireless Internet access throughout the city in an effort to promote economic development following the recent hurricane disasters. The decision to fund such a program in a time of economic crisis shows the extent to which city leaders perceive broadband to be an essential component of the current and future global economy.

The most effective way to ensure San Diego's place among America's most prosperous cities is to develop a broadband infrastructure that provides low-cost access to all San Diegans. There are currently only two broadband providers in San Diego County, each charging nearly $40 a month for service that is not available in certain areas of the county.

Municipality-supported broadband networks are often opposed by those who believe the government should not interfere with the competition among private broadband providers. Major cities and rural communities across the country have faced teams of lawyers representing telecommunications companies attempting to prevent municipalities from establishing their own broadband infrastructures. Naturally, none of the service providers want to lose business to a publicly funded broadband infrastructure. However, the threat of municipal broadband networks can benefit overall public broadband access. The push for public broadband in the Tri-Cities area of Illinois, for example, ultimately resulted in expanded access after local telecommunications companies fought to gain market share by improving their services. Philadelphians will be charged less than $20 a month for access to the city's EarthLink-developed Wi-Fi network. These are excellent examples of capitalism and competition working to the benefit of the people.

However, many cities such as San Diego have a system in place where competition between providers is less fierce. A stagnant situation such as this does not promote the progress necessary for a city to achieve a strong economic hold in the impending Information Age. The key to developing an effective broadband infrastructure in San Diego is to maintain a delicate balance between municipal and private ownership. The initial push provided by government policy can spark competition among private service providers and eventually create an ideal situation in which San Diegans have low-cost access to high-speed broadband. In a 2004 study that examined European cities, several roles were identified that governments may use when promoting broadband infrastructure.
These strategies include the city as network owner and Internet service provider, the city as investor and co-owner of the electronic infrastructure, the city as organizer of a community network, and the city as subsidizer of broadband subscribers. Findings concluded that the most successful type of policy is that of the city as investor and co-owner of the electronic infrastructure. This policy allows cities to build an infrastructure so private companies can compete on the basis of their services. Everyone ultimately wins under this philosophy. The city is able to attract more business to its region through an extensive
One of the reasons many security awareness programs fail is that they rely on a "push" mentality, where they force employees to take awareness training and expect or, more likely, hope that employees will seek out additional training because it is the right thing to do. While there are programs that succeed this way, they are relatively rare. Recently, we began experimenting with helping our clients implement gamification techniques, which switches the whole awareness paradigm. Instead of employees being forced to take training or risk potential punishment, employees do the right things by default and seek out additional training because they want to.

Too many people take the term gamification to mean that you create a game to deliver awareness training, and there are many companies developing such games. They can be useful, but much like a poster, newsletter, or phishing campaign, they are just a single component of what should be a well-rounded security awareness program. Gamification is actually a scientific term that roughly means applying game principles to a situation. The simplest definition of those principles is: 1) goal establishment, 2) rules, 3) feedback, and 4) voluntary participation. Every game has to incorporate those principles.

Goal establishment is the desired outcome for people participating in the game. Rules are actually limitations that people adhere to that allow the game to be a challenge. Feedback means that participants are made aware of how they are doing compared to their goal. Voluntary participation means that nobody is forced to play the game.

Using golf as an example, which we will highlight is in no way a computer-based game, the goal is to go 18 holes with the fewest number of strokes. The rules provide limitations as to how the player can get the ball in the hole. After all, the easiest way to get the ball in the hole would be to carry it and place it in the hole, but people seek out the challenge of accomplishing the goal through skill. The running number of strokes is the feedback mechanism. And, short of peer or work pressure, almost everyone plays golf on a voluntary basis. All games generally exhibit the same principles, including all sports, card games, playground games, chess, checkers, etc. Games do not need to involve computers.

As the term is confusing, we began to call our process "Incentivized Awareness Programs". That better represents what we are talking about, as a comprehensive awareness program does not limit itself to a single tool. With incentivized awareness, you create a reward structure that incentivizes people to exercise the desired behaviors, which could include seeking out additional training. The incentives ideally make demonstrating or learning about awareness behaviors fun.

Depending on the program and the job functions, people earn points by finding bugs in software, taking a training course, reporting a phishing message, reading a security-related publication, stopping a tailgater attempting to enter the facilities, and so on. Different activities are worth different points, and people can accumulate points. The points go towards earning rewards. Some organizations recognize people with the equivalent of martial-arts belts, as in Six Sigma training. Some organizations provide recognition and certificates. Others provide cash awards when certain point thresholds are met. Whatever the reward system is, it should be appropriate to the organization's culture (a minimal sketch of such a point structure is shown below).
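Purely as an illustration of the accumulation-and-threshold idea described above, a sketch might look like the following. The activities, point values, and tier names are invented, not taken from any actual program.

    # Illustrative point-and-tier structure; activities, values, and tiers are
    # hypothetical, not from the authors' program.
    POINTS = {
        "completed_training_module": 10,
        "reported_phishing_email":   25,
        "stopped_tailgater":         40,
        "found_software_bug":        75,
    }

    TIERS = [(0, "participation"), (100, "bronze"), (250, "silver"), (500, "gold")]

    def reward_tier(activity_log):
        """Sum an employee's points and return the highest tier reached."""
        total = sum(POINTS.get(activity, 0) for activity in activity_log)
        tier = "none"
        for threshold, name in TIERS:          # TIERS is sorted by threshold
            if total >= threshold:
                tier = name
        return total, tier

    log = ["completed_training_module", "reported_phishing_email",
           "reported_phishing_email", "stopped_tailgater", "found_software_bug"]
    print(reward_tier(log))    # -> (175, 'bronze')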
Depending on the size of the organization, you might want to have different reward structures for different subcultures. Roles, divisions, or geography might define these subcultures. For example, Japanese workers tend to be much more impressed by being personally recognized by a senior manager, and the rewards should reflect this preference.

Clearly, some points would lead to the professional equivalent of the "participation" trophies that many children's sports leagues now give out, which basically reward people for just showing up. There is actually nothing wrong with that. Security departments tend to get a bad reputation for being organizations that punish people for bad behavior. Rewarding people for doing the right behaviors gets them to be more security conscious, while creating a better reputation for the security department as a whole.

There must, of course, be an appropriate balance between points awarded for meeting base expectations and points awarded for going beyond those limited expectations. Give a low-value reward for meeting base expectations. A second level should be created that is within reasonable reach for most employees who demonstrate some additional, relatively simple behaviors. Further levels and rewards should be increasingly difficult to achieve, but the rewards should be on par with the required level of effort.

Some people might say that many of their employees will not participate in this type of reward system, and that is reasonable. However, they might be surprised at the number of people who are interested in some type of reward system. Nevertheless, even if the program is not accepted by the entire employee base, the measure of success is not participation but the metrics that matter to the organization. Our past article discusses this in more detail: fundamentally, any security measure is judged not by participation or perfection, but by the amount of loss mitigated by the measure compared to the cost of implementing it.

Creating an Incentivized Awareness program does take some effort, but the companies that have successfully implemented such a program are reaping the benefits through reduced losses and a better relationship between the security team and the general user base. Gamification has proven itself to be an effective measure for furthering a wide variety of business interests. It is time to start implementing it to further security awareness and educate your employees to the next level.

Ira Winkler, CISSP, and Samantha Manke can be contacted at www.securementem.com. This story, "How to create security awareness with incentives," was originally published by CIO.
User Authentication Beyond the Password

Editor's Note: This article has been updated.

Here's a simple fact: the security of your organization is at risk every time anyone logs on to your network. If it's an authorized user then you're probably safe, but if it's a hacker that's logging on then here's what could be on the menu: malware infections, network unavailability, server downtime, data loss or corruption, leakage of confidential or proprietary information, and much more besides.

Given all this, it's astounding that most businesses require only a user name and password to authenticate users onto their networks, even when logging in remotely. According to research house Gartner, about 94 percent of companies of all sizes require only single-factor authentication of this sort from their users. It's astounding because single-factor authentication using "something you know"—a password, in other words—is notoriously insecure. If a password is to be easily remembered then it's probably easily guessable and rarely changed. If users are forced to use more secure passwords which are long, random and frequently changed, then the chances are they'll write them down on a sticky note "hidden" somewhere obvious.

Factor In Tokens

A sensible way to beef up security is to bump up authentication to a two-factor process, involving "something you have"—some form of security token which users must be in possession of when they authenticate themselves to the network—as well as the "something you know" password. This is the model that ATMs use: a PIN that the user has to know, and an ATM card that has to be inserted to prove that it is in their possession.

The most common form of network authentication credential is the SecurID token from RSA Security, part of storage company EMC. The SecurID token generates a one-time password (OTP) which changes every minute or so, and the user has to type in this OTP to prove that the token is in his or her possession. The OTP is generated by putting a time value into an encryption algorithm using the token's unique "seed record" as the key. Since the only other entity in possession of the key is the authentication server, and since the server's and the token's clocks are kept in synchronization, the server is able to compare the OTP the user enters with the one it is expecting, and authenticate the user if it is correct.

But RSA is far from being the only player in town, with a number of other vendors active in the security token market, including Vasco with its Digipass range, Secure Computing's SafeWord tokens, the ActivIdentity token range and Entrust IdentityGuard tokens. These products use a variety of systems, including event-synchronous authentication. Such tokens generate an OTP each time they are activated (usually by pressing a button), and this OTP is compared with the next OTP that the server generates using the same crypto algorithm and key, and an incremental counter. These are in theory less secure than time-synchronous systems, as a hacker who gained access to one of these tokens temporarily could generate a sequence of OTPs for later use. These OTPs would remain useful until the next time the owner generated an OTP and submitted it for authentication, as at that point all previous OTPs would cease to be valid.

These and other vendors (including memory stick manufacturers) also sell USB dongle tokens and smart cards which have to be physically inserted into a USB port or card reader of some sort during authentication.
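The time-synchronous scheme described above can be sketched in a few lines. RSA's SecurID algorithm is proprietary, so the sketch below follows the open TOTP standard (RFC 6238) instead, which applies the same idea: a shared secret plus the current time window fed through a keyed hash. The secret here is a placeholder.

    # Time-synchronous OTP in the style of the open TOTP standard (RFC 6238),
    # not RSA's proprietary SecurID algorithm. The shared secret is a placeholder.
    import hmac, hashlib, struct, time

    def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
        counter = int(for_time // step)                       # index of the time window
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                               # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    secret = b"shared-seed-record"        # provisioned to both the token and the server
    now = time.time()
    print("token displays :", totp(secret, now))
    print("server expects :", totp(secret, now))   # identical within the same 30-second window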
Cost Slows Adoption

One reason why many organizations have so far been reluctant to introduce two-factor authentication is the cost involved, according to Dr. Ant Allan, a research vice president at Gartner. "For a small enterprise, with a few hundred people working remotely, the cost has been something like $50 per user for a token, plus the same again for the infrastructure required," he says.

But Dr. Allan says the economics are changing rapidly. As well as RSA's time-synchronous tokens and time- or event-synchronous tokens from companies like Vasco and ActivIdentity, which use the ANSI X9.9 standard for identification codes, there's a significant project called OATH: the Initiative for Open Authentication. All tokens that use the OATH standard can be used with OATH-compatible authentication systems, unlike RSA SecurID tokens, for example, which only work with RSA back-end systems. "OATH has enabled the commoditization of security tokens," says Dr. Allan. "It provides the interoperability so you can implement a solution with OATH and buy some tokens from one vendor and others from another vendor."

OATH has been heavily promoted by security services vendor VeriSign, which wants to offer managed authentication services without having to be a token manufacturer or lock its customers in to a single token supplier, Dr. Allan says. Entrust, another security vendor, now supplies OATH-based tokens for $5 each (albeit with a minimum order of 100), so token hardware costs have become almost negligible.

In fact, token hardware cost is rapidly becoming irrelevant for another reason: the increasing power and sophistication of mobile phones means that it is now perfectly practical to give users soft tokens, software which runs on a mobile phone or other handheld device and emulates a hardware token. "We actually see phone-based authentication tokens becoming increasingly popular, and we predict that 50 percent of future two-factor authentication implementations will use phone-based tokens," says Dr. Allan. Once up and running, these offer a similar level of security to hardware token-based systems, he says, although he warns that enrollment issues (essentially getting the software to the right mobile phone) can be a potential security problem.

Vendors that provide authentication systems using cheap hardware tokens or software tokens make their money from the back-end systems (which they either license or provide as a service). Interestingly, authentication systems are available that use precisely the opposite model: open-source authentication server code which is supplied at no cost to work with more costly tokens. For this to work the tokens have to be differentiated in some way to be worth paying more for.

An example of this is the YubiKey, a tiny USB token from a Sweden-based outfit called Yubico. The YubiKey is "seen" by the user's device's operating system as a USB keyboard. Touching the YubiKey's single button automatically generates and enters an OTP into the active field on the user's computer without any other activity required on the part of the user. YubiKeys cost $20 each (in orders over 100), but since the authentication software is open source there are no annual license fees to be paid (although there are obviously costs associated with integration and maintenance). Yubico also offers a free basic managed authentication service -- it previously cost $2 per user per year -- for companies that do not wish to run their own authentication servers. (Ed. Note: See update.)
"There are many companies providing expensive validation services and there is clearly a void in the market today for a no-subscription, 'no strings attached' offering," says Stina Ehrensvärd, Yubico's CEO. "A buyer needs to look at the total cost of ownership, and for large deployments that run for many years the Yubico offering is less expensive than the competition. We do not subsidize the tokens to regain on services." Ehrensvärd expects the price of the YubiKey to drop in the near future, and says that by mid-August the device will support OATH.

Because the cost of token-based authentication has historically been high, a number of other authentication methods have appeared, providing a variety of levels of security. The prevalence of mobile phones has led to a degree of popularity for out-of-band authentication methods using SMS messages, email, or even voice messages. A user attempting to log on has a security code sent to their mobile phone using one of these methods, and this code must be entered as part of the log-on procedure. As long as the communication channel (in this case the mobile phone connection) is not compromised, this method is actually pretty secure. Problems occur if network latency means that the user has to wait too long for the security code to arrive - or if the user is outside a mobile phone coverage area.

Other authentication methods involve identifying the IP address from which a user logs in, or the device the user is operating (using network access control devices, or proprietary systems). These, however, authenticate a location or a device, not a user, so they can't be used when a user is mobile (in the case of IP address authentication) or when a user wants to use a different computer system (in the case of NAC or other systems). It also leaves a network vulnerable to attacks from malware-infected, authorized machines operated remotely.

What's clear is that with the commoditization of tokens thanks to standards like OATH, and with open-source based solutions using low-cost hardware such as the YubiKey, the cost barrier to implementing strong two-factor authentication is falling fast. "There has historically been an authentication chasm because the cost of hardware has been high," says Dr. Allan, "but now that cost is shrinking."

What that means is that there is now less of a reason than ever before to rely on user names and passwords for the security of your network. For a fairly modest cost you could introduce two-factor authentication and increase the level of your network's security significantly.
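The OATH standard discussed above defines the event-synchronous, counter-based variant as HOTP (RFC 4226). As a rough sketch of how that differs from the time-based code shown earlier, each button press advances a counter that the server tracks; the secret is again a placeholder.

    # Event-synchronous OTP in the style of OATH HOTP (RFC 4226): each button
    # press advances a counter shared with the server. The secret is a placeholder.
    import hmac, hashlib, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                               # dynamic truncation
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    secret = b"shared-seed-record"
    for press in range(3):                                    # three successive button presses
        print(f"press {press}: {hotp(secret, press)}")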
Grayware is a general term often used for spyware, adware, remote access tools, dialers and other applications that cannot be strictly defined as malware, but that negatively affect computer performance, increase the attack surface of the computer and, in general, annoy users with pop-up windows and ads. One of the main differences between malware and grayware is the fact that grayware's developers and/or distributors are known and are often (quasi-)legitimate businesses who try to convince – nicely or otherwise – antivirus companies not to detect their grayware as malware.
In this podcast, recorded at Virus Bulletin 2011, malware researcher Robert Lipovsky talks about how security companies view the subject of grayware and the latest developments regarding the distribution and anti-detection techniques used by its propagators. Listen to the podcast here.
Robert Lipovsky is a malware researcher in ESET's Security Research Laboratory in Bratislava. He is responsible for malware intelligence and research, in which, among other areas, he focuses on analyzing rootkit techniques. He has given presentations at several security conferences, including EICAR, CARO, and Virus Bulletin. He holds a Master's Degree in Computer Science from the Slovak University of Technology in Bratislava.
The Napa Valley Unified School District has been using a hybrid diesel-electric school bus for nearly a year and has seen significant benefits. With the diesel-electric bus, the school district has been able to reduce its greenhouse gas emissions and double the gas mileage it gets compared to its diesel-only buses. As a result, the school district saves about $5,000 in fuel costs for the hybrid bus.
While most "diesel-only" powered school buses achieve an average of six to seven miles per gallon, Ralph Knight, transportation director at Napa Valley School District, was surprised to learn just how much fuel the hybrid diesel-electric school bus could save. "Fuel costs are a major concern to me," said Knight. "Cutting annual fuel costs in half for this bus is a major advantage -- both for taxpayers' wallets and for the environment."
The fuel efficiency of the hybrid bus was close to 13 miles per gallon -- nearly double the fuel efficiency of a typical diesel school bus. Based on the 13,000 miles the hybrid bus traveled during the 2007-08 school year, annual fuel costs for a standard school bus would be just under $10,000 at $4.87 per gallon. Conversely, fuel for the hybrid bus costs approximately $5,000 at the same price per gallon. Traveling about 65 miles per day, the hybrid bus typically transports roughly 60 children each morning and 60 each afternoon through a mixed route of highway and city driving.
Even the community has started to recognize the impact the bus could have on the environment and is excited about it. "The children are excited to be riding one of the first hybrid school buses in the nation," said Knight. "The parents have also commented on the positive environmental benefits of the bus."
Drivers also enjoy driving the bus. To the driver, it operates similarly to a standard school bus. However, the diesel engine receives assistance from an electric motor at certain points during acceleration and deceleration. The hybrid drive system on Napa Valley's bus is recharged by plugging it into a standard outlet at night or between morning and afternoon routes.
The word in the industry has gotten out. Knight says he has fielded calls from school districts all over the country asking him about the performance of this new bus. "I've told them the truth," said Knight. "I'm very pleased with the hybrid school bus."
One of the other advantages of the bus hasn't really been "seen." The exhaust of the hybrid school bus is smokeless, with dramatically reduced emissions compared to older buses operating in California. In fact, emissions of particulate matter have been reduced by up to 90 percent.
"There's a host of new technologies incorporated into the hybrid school bus that provide the improvement in fuel economy and reduction in emissions," said David Hillman, marketing director at IC Bus. "With a year of customer experience in Napa, and the additional experience gained from hybrid buses at customers throughout the U.S. and Canada, we have shown that hybrid technology is a viable solution for bus operators in North America. The volume provided by our current customer base has allowed us to reduce our prices by $30,000 to $40,000. We encourage further efforts to provide federal and state funding, such as the California Proposition 1B funds, to help offset purchase prices and provide the opportunity for more school districts and bus operators to implement this environmentally vital technology."
In the case of Napa's hybrid unit, PG&E provided $30,000 to help with the purchase of the plug-in hybrid school bus. An additional $30,000 to fund the bus was provided by the U.S. EPA through the Clean School Bus USA program and the West Coast Collaborative, a public-private partnership to reduce diesel emissions. Schools in California can use funds allocated by Proposition 1B to direct toward the purchase of a hybrid school bus. Funding to districts to support hybrid purchases from Proposition 1B and distributed through the California Air Resources Board can be up to $40,000 per bus.
Bigger bandwidth with just algebra
I'm sure many of us, at one point or another during our mathematics education, thought, "What good can this possibly do me in the real world?" I know I did. I considered taking a class in multivariable calculus one semester, but then decided it was beyond what I'd ever need as a basic computer science major. In fact, probably all of the calculus I'd taken up to that point wasn't necessary for my eventual career path, but I actually had considered it fun at times. Yeah, I'm a nerd, so sue me.
Apparently researchers at universities all over the world, led by the Research Laboratory of Electronics at the Massachusetts Institute of Technology, are trying to prove just how practical an application of mathematics can be. They have been concentrating on relieving a bottleneck that exists in every data network – what to do with lost information packets. And they are using algebra to do it, according to MIT Technology Review.
A typical wireless network can drop about 2 percent to 3 percent of its packets on average. In an environment such as a fast-moving train, the drop rate can go higher. But even a 2 percent rate is a bigger problem than it sounds. That's because when a packet is dropped, the sending and receiving stations have to start a conversation about what was dropped and how to recover it. Usually the receiver ends up asking the sender to re-send the packet. That extra traffic can result in quite a drag on transfer times.
The process prompted MIT scientists to come up with what they termed "coded TCP." Instead of information packets, the sending station sends algebraic equations that describe the information. So if one packet goes missing, the receiver has a good chance of being able to reassemble the data without having to bother the sender for another copy. In lab studies, researchers produced a 1,500 percent increase in bandwidth using this method.
Whether this level of benefit will come out in full-scale development remains to be seen. Several companies have licensed the basic technology, Technology Review reported, though non-disclosure agreements have kept the details private. But with an already tight spectrum that is getting more crowded by the year as public-sector agencies and other organizations add to their wireless, mobile networks, every little bit will help. And this algebraic approach could turn out to add more than a little bit.
Posted by Greg Crowe on Oct 31, 2012 at 9:39 AM
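The MIT scheme sends algebraic combinations of packets (random linear combinations over a finite field) rather than the raw packets themselves. As a toy illustration of why that helps -- not the actual coded TCP algorithm -- the sketch below uses the simplest possible code, a single XOR parity packet, and assumes all packets are the same length:
```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# The sender transmits p1, p2 and one "coded" packet, p1 XOR p2.
p1 = b"payload-one-----"
p2 = b"payload-two-----"
coded = xor_packets(p1, p2)

# If p2 is dropped in transit, the receiver rebuilds it locally from p1 and
# the coded packet instead of asking the sender for a retransmission.
recovered = xor_packets(p1, coded)
assert recovered == p2
```
Coded TCP generalizes this idea by mixing many packets with random coefficients, so a receiver can reconstruct the originals from any sufficiently large subset of what actually arrives.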
A fiber optic light source is a piece of fiber optic test equipment used to measure the optical loss of fiber optic cables. Usually, fiber optic light sources are used together with a fiber optic power meter.
A fiber optic light source is often used to light enclosed areas that do not have any direct line of sight to an external light source. This makes them useful in applications such as medicine. Some buildings incorporate optical fibers as light pipes or light tubes, which channel sunlight collected from the exterior of the building to provide lighting to locations in the interior. Fiber-optic light sources with strands of optical fiber designed to intentionally allow significant amounts of light to leak through their cladding and out of the fiber are also used decoratively. This is common in Christmas decorations and can also be incorporated in things such as store displays, clothing, and decorative lights.
Basically, there are two types of semiconductor light sources available for fiber optic communication: LED sources and laser sources. A basic LED light source is a semiconductor diode with a p region and an n region. When the LED is forward biased, current flows through the LED. As current flows through the LED, the junction where the p and n regions meet emits random photons. This process is referred to as spontaneous emission. Like the LED, the laser is a semiconductor diode with a p and an n region. Unlike the LED, the laser has an optical cavity that contains the emitted photons, with reflecting mirrors on each end of the diode. One of the reflecting mirrors is only partially reflective. This mirror allows some of the photons to escape the optical cavity.
But fiber optic light sources have been identified as a fire ignition mechanism in the operating room. One study attempted to determine whether a forced-air warming blanket (FAWB) could affect the ignition or spread of fire caused by a fiber optic light source. Advances in light source and fiber optic technology may increase the radiation output of visible and infrared wavelengths at the end of the cable and at the distal tip of the endoscope. Higher outputs not only increase the risk of fire, but may introduce the risk of burns during close-range inspection of tissue with the endoscope. Since absorption of high-intensity radiation at visible light wavelengths may also cause tissue heating, additional filtering of infrared wavelengths may not eliminate this hazard. Furthermore, with the increasing use of television systems with video cameras connected to the endoscopes, many physicians operate light sources at their maximum intensities and believe they need even greater light intensities.
Now, Princeton Lightwave of Cranbury, N.J., and OFS Labs have introduced a fiber-optics-based solution. The new fiber-based light source combines all the ideal features necessary for accurate and efficient scanning: uniform, intense illumination over a rectangular region; a directional beam that avoids wasting unused light by only illuminating the rectangle; and a "cool" source that does not heat up the objects to be imaged. Currently employed fiber optic light sources, such as tungsten halogen lamps or arrays of light-emitting diodes, lack at least one of these features.
A laser light source used for high-speed networking is the vertical-cavity surface-emitting laser (VCSEL). This semiconductor diode combines high bandwidth with low cost and is an ideal choice for gigabit networking options.
FiberStore offers a wide selection of light sources, including both handheld fiber optic light sources and laser light sources, covering a variety of wavelength ranges to suit all optical testing needs.
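Since the light source and power meter pairing mentioned above is used to measure optical loss, a quick sketch of the arithmetic behind such a test may help. The readings below are made-up example values, not figures from any particular instrument: the meter is first referenced to the source through a launch cable, then read again through the fiber under test, and the loss is simply the difference of the two dBm readings.
```python
def insertion_loss_db(reference_dbm: float, measured_dbm: float) -> float:
    """Loss of the fiber under test, in dB, from two power-meter readings."""
    return reference_dbm - measured_dbm

# Example: -10.0 dBm through the launch cable alone and -13.4 dBm through the
# fiber under test imply roughly 3.4 dB of insertion loss.
print(insertion_loss_db(-10.0, -13.4))
```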
Definition: The problem of finding occurrence(s) of a pattern string within another string or body of text. There are many different algorithms for efficient searching.
Also known as: exact string matching, string searching, text searching.
Specialization (... is a kind of me.): brute force string search, Knuth-Morris-Pratt algorithm, Boyer-Moore, Zhu-Takaoka, quick search, deterministic finite automata string search, Karp-Rabin, Shift-Or, Aho-Corasick, Smith algorithm.
See also: string matching with errors, optimal mismatch, phonetic coding, string matching on ordered alphabets, suffix tree, inverted index.
Note: For large collections that are searched often, it may be far faster, though more complicated, to start with an inverted index. The name "exact string matching" is in contrast to string matching with errors.
Cite this as: Paul E. Black, "string matching", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds., 2 September 2014. Available from: http://www.nist.gov/dads/HTML/stringMatching.html
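As a concrete illustration of the simplest specialization listed above -- brute force string search -- here is a short sketch; the more sophisticated algorithms named (Knuth-Morris-Pratt, Boyer-Moore, and the rest) exist precisely to avoid re-examining the text characters that this naive loop scans repeatedly.
```python
def brute_force_search(text: str, pattern: str) -> list[int]:
    """Return every index at which `pattern` occurs in `text` (exact matching)."""
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):           # every candidate starting position
        if text[i:i + m] == pattern:     # character-by-character comparison
            hits.append(i)
    return hits

print(brute_force_search("abracadabra", "abra"))  # -> [0, 7]
```
The worst-case running time is proportional to the product of the two lengths, which is why the faster algorithms above matter for large texts.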
Lees D.C.,French National Institute for Agricultural Research | Lees D.C.,Natural History Museum in London | Rougerie R.,Barcoding | Christof Z.-L.,Zeller Lukashort | Kristensen N.P.,Copenhagen University Zoologica Scripta | Year: 2010 Lees, D. C., Rougerie, R., Zeller-Lukashort, C. & Kristensen, N. P. (2010). DNA mini-barcodes in taxonomic assignment: a morphologically unique new homoneurous moth clade from the Indian Himalayas described in Micropterix (Lepidoptera, Micropterigidae). -Zoologica Scripta, 39, 642-661.The first micropterigid moths recorded from the Himalayas, Micropterix cornuella sp. n. and Micropterix longicornuella sp. n. (collected, respectively, in 1935 in the Arunachel Pradesh Province and in 1874 in Darjeeling, both Northeastern India) constitute a new clade, which is unique within the family because of striking specializations of the female postabdomen: tergum VIII ventral plate forming a continuous sclerotized ring, segment IX bearing a pair of strongly sclerotized lateroventral plates, each with a prominent horn-like posterior process. Fore wing vein R unforked, all Rs veins preapical; hind wing devoid of a discrete vein R. The combination of the two first-mentioned vein characters suggests close affinity to the large Palearctic genus Micropterix (to some species of which the members of the new clade bear strong superficial resemblance). Whilst absence of the hind wing R is unknown in that genus, this specialization is not incompatible with the new clade being subordinate within it. A 136-bp fragment of Cytochrome oxidase I successfully amplified from both of the 75-year-old specimens strongly supports this generic assignment. Translated to amino acids, this DNA fragment is highly diagnostic of this genus, being identical to that of most (16 of the 26) Micropterix species studied comparatively here, 1-4 codons different from nine other species (including Micropterix wockei that in phylogenetic analyses we infer to be sister to other examined species), whilst 7-15 codons different to other amphiesmenopteran genera examined here. A dating analysis also suggests that the large clade excluding M. wockei to which M. cornuella belongs appeared <31 million years ago. These findings encourage discovery of a significant radiation of Micropterix in the Himalayan region. Our analysis has more general implications for testing the assignment of DNA mini-barcodes to a taxon, in cases such as museum specimens where the full DNA barcode cannot be recovered. © 2010 The Authors. Zoologica Scripta © 2010 The Norwegian Academy of Science and Letters. Source Barcoding | Date: 2011-12-20 A system for identifying medication in the form of pills, capsules or tablets, and communicating medicine dosage and intake instructions to a user, Utilizing Radio Frequency Identification Devices (RFID) and optical recognition technology. The RFID is performed by labeling a medicine container with a tag containing a unique identifier, associating the unique identifier with an audio file comprising instructions related to medicine usage, and delivering the audio file to an electromagnetic wave-enabled device. A wireless device, such as a mobile telephone or PDA, via a service, plays an audio and/or vibrational file associated with the unique identifier when the RFID tag is read by the device. 
The mobile device has a camera therein and is operable to capture an image of the pill, capsule or tablet and, via execution of optical recognition software, identify the pill, tablet or capsule, and verify the identity thereof. Barcoding | Date: 2010-05-24 An RFID-based data collection, correlation and transmission system and method carried out thereby is provided. The system, which comprises one or more RFID-readers, a radio frequency identification (RFID) recognition and control component. a storage device interface, a portable and/or internal data storage device in communication with the storage device interface, one or more antennas, and a configuration and command component, is operable to collect data of interest from detected RFID tags, and detect and identify system participants and data related thereto. In addition, the system is operable to correlate potential data of interest, such as product advertising information, to the detected system participants, and transmit the data of interest to the system participants via numerous methods of communication. Thus, the system provides a means of highly targeted information distribution, as well as providing user reports valuable in future planning. News Article | August 9, 2011 You’ve heard of the CSI effect, right? It’s this wacky “syndrome” whereby we’ve watched so much CSI Miami and Law and Order that we can’t fully put our weight behind a verdict without some solid DNA evidence. I guess it’s easy to forget that we had an entire legal system sans DNA for quite a while. In any case, we’ve apparently got an itch to be a bunch of white-coated forensic scientists, which is why we’re so lucky that this crazy, and also beautiful, machine exists in the world. It’s called OpenPCR, and it’ll make science-style DIYers drool. PCR stands for polymerase chain reaction, and it’s a crucial tool for just about any type of modern molecular biology. The way it works is by amplifying a specific region of a super teency-weency strain of DNA, and after that I kind of got lost in the biological jargon, but it’s all explained here. With OpenPCR, you can do two different types of tests: DNA Sequencing and DNA Barcoding. Sequencing is where you use the PCR machine to check out some of your own genome, while Barcoding is checking out what kind of species a certain bit of DNA belongs to. If you have yet to be convinced, just check out how these two girls used DNA Barcoding to uncover a New York City scandal (hint: 2 out of 4 Sushi restaurants and 6 out of 10 grocery stores were selling mislabeled fish.) For $599, you’ll get all the parts to the machine, instructions to set it up, and 16 PCR samples — the way by which you target certain regions of the DNA. Features include a heated lid that eliminates condensation, 2-degree per second ramp time (Centigrade), and compatibility with Mac and PC. Richard B.,CNRS Biodiversity Studies Laboratory | Decaens T.,CNRS Biodiversity Studies Laboratory | Rougerie R.,Barcoding | James S.W.,University of Kansas | And 2 more authors. Molecular Ecology Resources | Year: 2010 Species identification of earthworms is usually achieved by careful observation of morphological features, often sexual characters only present in adult specimens. Consequently, juveniles or cocoons are often impossible to identify, creating a possible bias in studies that aim to document species richness and abundance. DNA barcoding, the use of a short standardized DNA fragment for species identification, is a promising approach for species discrimination. 
When a reference library is available, DNA-based identification is possible for all life stages. In this study, we show that DNA barcoding is an unrivaled tool for high volume identification of juvenile earthworms. To illustrate this advance, we generated DNA barcodes for specimens of Lumbricus collected from three temperate grasslands in western France. The analysis of genetic distances between individuals shows that juvenile sequences unequivocally match DNA barcode clusters of previously identified adult specimens, demonstrating the potential of DNA barcoding to provide exhaustive specimen identification for soil ecological research. © 2009 Blackwell Publishing Ltd.
NOAA expands geodetic reference network - By William Jackson - Oct 13, 2008 The National Oceanic and Atmospheric Administration (NOAA) has expanded its international geodetic network with the addition of 43 new Global Positioning System tracking sites. The sites are part of the Continuously Operating Reference Station (CORS) network maintained by NOAA's National Geodetic Survey (NGS), which helps surveyors and other users determine the 3-D positions of sites or objects to within a few centimeters. The additions bring the number of CORS sites to more than 1,200 in the United States, its territories and several foreign countries. The Federal Aviation Administration established 13 of the new sites as part of its Wide Area Augmentation System for aircraft navigation. Four WAAS sites are in Alaska, four are in Canada, and five are in Mexico. The CORS network is part of the National Spatial Reference System, a nationwide array of more than 1 million survey reference points that dates to 1816 and serves as the foundation for all of the mapping, charting and surveying performed in the country. Many of the reference points are passive markers ' brass medallions embedded at known points to establish spatial baselines. They are sometimes embedded in sidewalks or on stone markers; others are on top of broadcast towers, and some are buried a foot or more underground in fields so they won't interfere with farmers' plowing. When the system was established, surveyors could locate the points within an accuracy of half a mile. Improvements in technology in the past 190 years have allowed NGS to refine those locations, and satellite-based GPS technology is helping to correct errors in the system as large as 5 centimeters, or about 2 inches. Modern technology can now place a point to within 1 centimeter on the Earth's surface. NGS recently completed a general realignment of the National Spatial Reference System using data gathered during a 15-year survey to more accurately place about 60,000 existing reference points. NGS has also been expanding its CORS network, which broadcasts GPS data. Surveyors and mapmakers can use these known starting points to chart distances and create maps without having to physically set a transit on top of a marker. Although the million or more passive markers are expected to remain in use for many years, CORS is becoming a more important part of the National Spatial Reference System. Surveyors, users of geographic information systems and others can combine their own GPS data with data from CORS sites to determine 3-D position coordinates to within a few centimeters of accuracy. They can also submit GPS information to NGS' Online Positioning User Service tool to have coordinates computed for them. William Jackson is a Maryland-based freelance writer.
As the debate rages on about the dangers of cell phone radiation, a new measure in San Francisco is causing a lot of static. With its final approval June 22 by the city's Board of Supervisors, the landmark ordinance is on track to make San Francisco the first city in the nation to require retailers to display the specific absorption rate (SAR) of their products. (SAR measures the rate of energy absorbed by the body when exposed to radio frequency fields.)
Mayor Gavin Newsom, who introduced the measure, is expected to sign it into law within 10 days, despite opposition from cell phone retailers and inconclusive scientific evidence. "The science on cell phones is all over the place," said Mark Westlund, communication/education program manager for San Francisco's Department of the Environment. "And it was the mayor's opinion that people have a right to know the levels of exposure. Since there was uncertainty, it is incumbent upon the government to provide information to our citizens."
The legislation passed a preliminary vote by the Board of Supervisors last week, leading up to Tuesday's final vote. Once Newsom signs the proposal, the law will take effect in February 2011, and violators will face fines of up to $500.
The city isn't the first to attempt such a measure, but it would be the first to succeed. Last year, a similar proposal never saw the light of day in the California Legislature after intense lobbying by the mobile phone industry. Earlier this year in Maine, lawmakers shot down a bill that would've required manufacturers to put warning labels on cell phones about the potential link between cell phone radiation and brain cancer.