In 2006, 10 percent of the computer science majors at Harvey Mudd College in Claremont, Calif., were female. This past year, that number had risen to 50 percent. Mudd, which specializes in science, math, and engineering, has a total of 800 students. The college’s president, Maria Klawe, said that 47 percent of this student body is female. She also said that computer science is an increasingly popular major among both male and female students. Mudd graduates about 185 students a year; between 70 and 80 of those students are computer science majors. “It’s not CS light in any way,” Klawe said. “A lot of colleges have seen an increase in the number of CS majors.” In order to attract more female students to computer science, Klawe said that one important step was making the introductory computer science class more welcoming. She said that, while the number of female chemists and biologists has increased, there is still a dearth of women in computer science. To retain the increasing number of female students, Mudd uses a lottery system to take 25 first-year women to the Grace Hopper Celebration of Women in Computing Conference, where students can listen to some of the best female technologists in the world speak. All Mudd students are required to take a computer science class during the first semester of their freshman year. Klawe said the majority of incoming students have had no computer science preparation at all. The institution changed the name of the course from “Introduction to Computer Science” to “Creative Problem Solving” in an effort to make the class sound more attractive to students, especially those who had no prior experience with the subject. “I’ve met very few 18- and 19-year-olds who don’t want to be seen as creative,” Klawe said. “It’s appealing to not just women, but all students.” Mudd’s administration divided the introductory computer science course into black and gold sections in an effort to make more freshmen feel comfortable with the subject. 
The black section is for students who have had prior computer science experience; the gold section is for students with no prior experience. Klawe also said that, to create a sense of teamwork, the course started offering pair programming sessions. The computer science department also recruited gruders, or grading tutors, to help students with assignments. There is a gruder for every eight students, who can offer guidance when students come to the lab to do homework. “One deterrent was when you got stuck, you were on your own in your dorm room,” Klawe said. “It went from an experience that’s lonely to something that’s not lonely at all. In one year, it went from one of the most disliked courses to one of the most liked.”
This write-up is just to demonstrate how one’s browser history can go off track and mislead the examiner. An investigator can identify this by noticing oddities in the history; a sample is given in Figure 2. Let’s first take a closer look at the page below (Figure 1): the URL says cnn.com, while the title of the tab says BBC - Homepage. Imagine what the browser history would look like. Check out the snapshot below. Now let’s see how that happened. Here is the little trick we used to demonstrate the idea: we set up a proxy in the browser, applied breakpoints, and amended the GET packets (see Figure 3). What’s the point? The above is just one technique for doing this, and there may be other ways, but the point is that as forensic investigators we should think in all directions and not rely solely on the output of our tools. Don’t ignore any inconsistency found in the logs; it might be there for a reason. A few possibilities: - The system was compromised. - The user intentionally tried to cover their tracks.
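The packet edit at the heart of this trick can be sketched in a few lines. This is a minimal, hypothetical illustration of rewriting a GET request’s Host header in transit; the function name is our own, and a real demonstration would use an intercepting proxy with breakpoints as described above.

```python
def amend_get(raw_request: bytes, new_host: bytes) -> bytes:
    """Rewrite the Host header of a raw HTTP GET request.

    The browser believes it requested the original host (and logs it in
    history), while the amended request actually fetches another site.
    """
    lines = raw_request.split(b"\r\n")
    for i, line in enumerate(lines):
        if line.lower().startswith(b"host:"):
            lines[i] = b"Host: " + new_host
    return b"\r\n".join(lines)

original = b"GET / HTTP/1.1\r\nHost: www.cnn.com\r\n\r\n"
amend_get(original, b"www.bbc.com")
# b'GET / HTTP/1.1\r\nHost: www.bbc.com\r\n\r\n'
```

The edit leaves the request line untouched, which is exactly why the history entry and the rendered page end up disagreeing.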
0.10.2 Integer Coding Section note: this description and source code are provided courtesy of Hugh Williams. Integer coding is a method by which a set of integer values can be represented more efficiently. Two of the most often used methods of integer coding are called Elias codes and Golomb codes. Both Elias and Golomb codes are frequently used in compressing inverted indexes of English text. In the Elias gamma code, a positive integer x is represented by 1 + floor(log2 x) in unary (that is, floor(log2 x) 0-bits followed by a 1-bit), followed by the binary representation of x without its most significant bit. Thus the number nine is represented by 0001001, since 1 + floor(log2 9) = 4, or 0001 in unary, and 9 is 001 in binary with the most significant bit removed. In this way, 1 is represented by 1, that is, in a single bit. Gamma coding is efficient for small integers but is not suited to large integers, for which parameterized Golomb codes or a second Elias code, the delta code, are more suitable. Elias delta codes are somewhat longer than gamma codes for small integers, but for larger integers, such as ordinal sequence numbers, the situation is reversed. A delta code stores the gamma code representation of the number of bits in an integer x, that is, of 1 + floor(log2 x), followed by the binary representation of x less the most significant bit. However, while Elias codes yield acceptable compression and fast decoding, better performance in both respects is possible with Golomb codes. Golomb codes are a form of parameterized coding in which integers to be coded are stored as values relative to a constant k. Using Golomb coding, a positive integer x is represented in two parts: the first is a unary representation of the quotient q = floor((x - 1) / k); the second is a binary representation of the remainder x - qk - 1, stored in floor(log2 k) or ceil(log2 k) bits. Witten et al. report that for cases where the probability of any particular value occurring is small, an approximate calculation of k can be used.
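Returning to the Elias gamma code just described, here is a minimal sketch (the function name is ours, not from the source):

```python
def gamma_encode(x: int) -> str:
    """Elias gamma code: 1 + floor(log2 x) in unary, then the binary
    representation of x with its most significant bit removed."""
    assert x >= 1
    body = bin(x)[3:]                  # drop the '0b' prefix and the MSB
    return "0" * len(body) + "1" + body

gamma_encode(9)   # '0001001', matching the worked example above
gamma_encode(1)   # '1', a single bit
```

The unary prefix tells the decoder how many body bits follow, which is why the code is self-delimiting.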
Where there is a wide range of values to be coded and each occurs with reasonable frequency, a practical global approximation of the Golomb parameter is k = 0.69 x (N x p) / f, where N is the number of documents, p is the number of distinct terms in the collection, and f is the count of document identifiers stored in inverted lists; that is, f is the sum of the lengths of all inverted lists. This model for the selection of k is often referred to as a global Bernoulli model, since each term is assumed to have an independent probability of occurrence and the occurrences of terms have a geometric distribution. Another approach to selecting k is to use a local model. Local models use the information stored within a list to calculate an appropriate k value for that list; local models result in better compression than global models, but require a parameter for each locality. For example, by using a simple local Bernoulli model for storing sequence identifiers, a possible choice of k for a given list is an approximation of the mean difference between the document identifiers in that list, or, using the scheme above, k = 0.69 x N / l, where l is the length of the given list, that is, the count of entries in the list. Skewed Bernoulli models, where a simple mean difference is not used, typically result in better compression than simple local models. With integers of varying magnitudes, as is the case for document occurrence counts and inverted-file offsets that vary from 1 to the database size, efficient storage is possible using a variable-byte integer scheme. We use a representation in which seven bits in each byte code the integer, with the least significant bit set to 0 if this is the last byte, or to 1 if further bytes follow. In this way, we represent small integers efficiently; for example, we represent 135 in two bytes, since it lies in the range 2^7 <= x < 2^14, as 00000011 00001110; this is read as 00000010000111 by removing the least significant bit from each byte and concatenating the remaining 14 bits.
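The variable-byte scheme can be sketched as follows. This is our own minimal illustration, following the text’s convention of a continuation flag in the least significant bit and the most significant seven-bit group emitted first:

```python
def vbyte_encode(x: int) -> bytes:
    """Seven data bits per byte; LSB = 1 if further bytes follow, 0 on the last."""
    assert x >= 0
    groups = []
    while True:
        groups.append(x & 0x7F)        # take the low seven bits
        x >>= 7
        if x == 0:
            break
    groups.reverse()                   # most significant group goes first
    return bytes((g << 1) | (1 if i < len(groups) - 1 else 0)
                 for i, g in enumerate(groups))

vbyte_encode(135)   # b'\x03\x0e' -- bytes 00000011 00001110, as in the text
```

Decoding reverses the process: strip each byte’s flag bit, concatenate the seven-bit groups, and stop after the byte whose flag is 0.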
In the hills above Berkeley, Cindy Regnier steps into a small office with sweeping views of the bay and urges visitors to watch their feet. "We are actively testing here," she says, her voice hushed. "Try not to knock over any sensors - please!" The office looks like any other corporate nest, with nondescript desks and chairs arranged across a well-lit floor. But the desks are empty of papers or people. Instead, a series of slender poles - some knee-high, others reaching to the ceiling - hold instruments that constantly monitor air temperature and light levels. White metal tubes stationed near the desks give off a small amount of heat - the same amount as a human being. The "office" is really a laboratory, created to help buildings save energy. It is the only facility of its kind in the world. And as California tries to cut the amount of electricity that buildings use, the Flexlab at Lawrence Berkeley National Laboratory could be a game-changer. "This is about understanding the performance of a building before you spend millions of dollars on it," said Regnier, executive manager of the Flexlab project. Researchers can easily swap out the lab's heating, air conditioning and lighting - even its windows. They can see how all of those elements perform together, not just one system at a time. A portion of the lab, resting on a concrete turntable that weighs a half-million pounds, can rotate 270 degrees to test how different angles of sunlight affect energy use. The sensors inside ensure that the space stays pleasant for workers, not too hot or too cold. "We built Flexlab with reconfiguration in mind," Regnier said. "It's like a kit of parts." Slashing the energy use from buildings has become a key goal for both California's government and federal authorities. The electricity and natural gas needed to keep America's buildings heated in winter, cooled in summer and lit year-round accounts for roughly 40 percent of the nation's greenhouse gas emissions, by most estimates. 
California this month enacted a tough new set of building codes in response. And by 2030, the state wants all new commercial buildings to produce as much energy - most likely through solar power or fuel cells - as they consume. "That's a very big challenge," said Andrew McAllister, a member of the California Energy Commission. "The builders and the developers are going to put a lot of money into this stuff, and if they're going to do that, they need some certainty that it's going to work the way it's supposed to and their people are going to be comfortable." Designed and built with $15.7 million from the U.S. Department of Energy, Flexlab isn't just an academic facility. The Webcor development company is building a 250,000-square-foot office for biotech giant Genentech in South San Francisco. So Webcor has installed at Flexlab the same ventilation and lighting systems it plans to use for Genentech, as well as the same windows. Tests will reveal, in detail, how well the systems work together to cut energy use while keeping office workers comfortable. "Sometimes, energy saving would appear to be in conflict with comfort," said Webcor Vice President Phillip Williams. "Occupants, if they're comfortable, are going to be able to save more energy because they won't be overriding the building controls." Genentech will incorporate the lessons learned into its next building project, whenever that might be. And that is considered Flexlab's most important feature. Over the years, it will build up a trove of test data that developers and others - including the companies that make heating and lighting systems - can use to refine their work. "We'll be learning a lot that we can apply to future buildings," said Carla Boragno, Genentech's vice president for site services. "It's not a one-time effort." ©2014 the San Francisco Chronicle
When satellites die or malfunction they typically begin tumbling in space toward a certain death. But no one seems to know why. The European Space Agency has launched a study to figure out this death tumble as part of its effort to clean up orbital debris. ESA says this Clean Space initiative - tasked with reducing the space industry's environmental impact on Earth and in space - plans to transform scientists' understanding of how large, dead objects behave in space, encompassing launcher upper stages as well as satellites. In recent years, satellites beginning uncontrolled reentries have been tracked, such as Russia's Phobos-Grunt and Germany's Rosat, the ESA states. In a few cases, satellites suffering unexpected failures in orbit have also been followed, including ESA's Envisat and Japan's ADEOS-II. The aim of the new study is to combine detailed computer analysis with a range of ground-based observations, some of which have only rarely been tried. For example, optical telescopes and ground radar are today's favored monitoring methods, but the study will also investigate the potential of optical and radar satellites in nearby orbits for space-to-space observations. Highly accurate laser ranging will also be attempted: a global network of ground stations would bounce lasers off a satellite's retroreflectors - like the 'cat's eyes' built into an expressway, the ESA stated. Figuring out satellites' tumbling death dance will also help ESA's Clean Space program when it launches its dedicated satellite salvage mission, called e.DeOrbit. ESA says e.DeOrbit is designed to target debris items in well-trafficked polar orbits, between 800 km and 1,000 km altitude. At around 1,600 kg, e.DeOrbit will be launched on ESA's Vega rocket.
"The first technical challenge the mission will face is to capture a massive, drifting object left in an uncertain state, which may well be tumbling rapidly. Sophisticated imaging sensors and advanced autonomous control will be essential, first to assess its condition and then approach it," ESA stated. In the US, DARPA is developing satellite recovery technology that will enable a recovery spacecraft to mimic tumbling dead satellites in order to grab them. That project, called Phoenix, would use a squadron of "satlets" and a larger tender craft to grab out-of-commission satellites and retrofit or retrieve them for parts or reuse. In 2012, DARPA said it had concluded some of the most critical design tests of the Phoenix program - designing the algorithms that help these satlets approach and tumble in sequence with the system they are trying to catch.
2.1.1 What is public-key cryptography? In traditional cryptography, the sender and receiver of a message know and use the same secret key; the sender uses the secret key to encrypt the message, and the receiver uses the same secret key to decrypt the message. This method is known as secret key or symmetric cryptography (see Question 2.1.2). The main challenge is getting the sender and receiver to agree on the secret key without anyone else finding out. If they are in separate physical locations, they must trust a courier, a phone system, or some other transmission medium to prevent the disclosure of the secret key. Anyone who overhears or intercepts the key in transit can later read, modify, and forge all messages encrypted or authenticated using that key. The generation, transmission and storage of keys is called key management (see Section 4.1); all cryptosystems must deal with key management issues. Because all keys in a secret-key cryptosystem must remain secret, secret-key cryptography often has difficulty providing secure key management, especially in open systems with a large number of users. In order to solve the key management problem, Whitfield Diffie and Martin Hellman [DH76] introduced the concept of public-key cryptography in 1976. Public-key cryptosystems have two primary uses, encryption and digital signatures. In their system, each person gets a pair of keys, one called the public key and the other called the private key. The public key is published, while the private key is kept secret. The need for the sender and receiver to share secret information is eliminated; all communications involve only public keys, and no private key is ever transmitted or shared. In this system, it is no longer necessary to trust the security of some means of communications. The only requirement is that public keys be associated with their users in a trusted (authenticated) manner (for instance, in a trusted directory). 
Anyone can send a confidential message by just using public information, but the message can only be decrypted with a private key, which is in the sole possession of the intended recipient. Furthermore, public-key cryptography can be used not only for privacy (encryption), but also for authentication (digital signatures) and various other techniques. In a public-key cryptosystem, the private key is always linked mathematically to the public key. Therefore, it is always possible to attack a public-key system by deriving the private key from the public key. Typically, the defense against this is to make the problem of deriving the private key from the public key as difficult as possible. For instance, some public-key cryptosystems are designed so that deriving the private key from the public key requires the attacker to factor a large number; in this case it is computationally infeasible to perform the derivation. This is the idea behind the RSA public-key cryptosystem. When Alice wishes to send a secret message to Bob, she looks up Bob's public key in a directory, uses it to encrypt the message and sends it off. Bob then uses his private key to decrypt the message and read it. No one listening in can decrypt the message. Anyone can send an encrypted message to Bob, but only Bob can read it (because only Bob knows Bob's private key). To sign a message, Alice does a computation involving both her private key and the message itself. The output is called a digital signature and is attached to the message. To verify the signature, Bob does a computation involving the message, the purported signature, and Alice's public key. If the result is correct according to a simple, prescribed mathematical relation, the signature is verified to be genuine; otherwise, the signature is fraudulent, or the message may have been altered. A good history of public-key cryptography is given by Diffie [Dif88].
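Both uses - encryption and signatures - can be seen in a toy sketch of the RSA scheme mentioned above. The primes here are deliberately tiny and there is no padding, unlike any real deployment; the variable names are ours:

```python
# Toy RSA (illustration only): real systems use moduli thousands of bits
# long, plus padding schemes, to make factoring n infeasible.
p, q = 61, 53
n = p * q                      # public modulus (part of the public key)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent (published)
d = pow(e, -1, phi)            # private exponent (kept secret; Python 3.8+)

def encrypt(m):                # anyone can do this with the public key (e, n)
    return pow(m, e, n)

def decrypt(c):                # only the private-key holder can do this
    return pow(c, d, n)

def sign(m):                   # signing uses the private key...
    return pow(m, d, n)

def verify(m, sig):            # ...verifying needs only the public key
    return pow(sig, e, n) == m

c = encrypt(42)                # c reveals nothing useful to an eavesdropper
assert decrypt(c) == 42
assert verify(42, sign(42))
```

Note how the roles mirror the Alice-and-Bob description: encryption and verification use only published values, while decryption and signing require the secret d.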
Essential .NET with C# for .NET 4.5 Learn .NET development using C# In this course, you will learn how modern applications are assembled as well as how the various pieces work together to form a cohesive environment. This course begins with a thorough exploration of the managed code model. You'll learn common idioms like code-behind and partial classes, and you'll also learn how to handle memory management issues and the IDisposable design pattern. Additionally, this course will teach you to use C# functional idioms and LINQ to write compact, powerful, expressive, fluent code. You'll learn how to work with designers and tools to manage XAML, code-behind, and partial classes. Using configuration files to tweak application settings after deployment is also covered in this course. Finally, you'll take a look at the major class libraries. Note: You are required to bring your own laptop.
I just recently got married, so pardon me for the analogy, but I have had wedding on the brain for months now. But it got me thinking…the marriage of the Internet with machine-to-machine connected devices brings this concept into a brand new light. Something old – the Internet. Something new – the Internet of Things. “What do you mean by the ‘Internet of Things’?” you ask. Well, according to Wikipedia, the Internet of Things is commonly defined as: “Uniquely identifiable objects (things) and their virtual representations in an Internet-like structure…Radio-frequency identification (RFID) is often seen as a prerequisite for the Internet of Things. If all objects of daily life were equipped with radio tags, [so] they could be identified and inventoried by computers...” McKinsey & Company takes the concept even further in the following: “In what’s called the Internet of Things, sensors and actuators embedded in physical objects—from roadways to pacemakers—are linked through wired and wireless networks, often using the same Internet Protocol (IP) that connects the Internet. These networks churn out huge volumes of data that flow to computers for analysis. When objects can both sense the environment and communicate, they become tools for understanding complexity and responding to it swiftly.” There’s even a new comic book dedicated to explaining the Internet of Things. Translation: Every person, car, appliance – thing– will be married to an IP-address and connected through the Internet to transmit data back and forth, just like computers “talk” to one another today. For example, the day will likely come when you run out of milk and the refrigerator transmits a wireless signal to your smartphone, so the next time you stop at the grocery store, you pick up a half gallon. Or your car will tell, and schedule an appointment with, your mechanic directly when it needs a replacement part. IBM released this thought provoking video about the Internet of Things. 
Our lives are becoming more intertwined with the Internet, and machine-to-machine applications (i.e., the Internet of Things) are becoming more evident by the day. Many have never considered the fact that technology now truly has this capability. We at KORE, however, have long recognized this benefit of machine-to-machine (M2M) connected devices. Rather than proclaiming the future possibilities of these connections, we strive every day to create value within businesses, whether that is providing the wireless connection for remote healthcare monitoring systems so the patient never has to leave the comforts of home, or helping fleets determine the exact locations of their shipments, the speed of their trucks, and the specific temperature of the cargo hold. Regardless of the application, KORE aims to provide cost-effective wireless connections to keep all your devices communicating. So the next time you read the trendy name, Internet of Things, just imagine that bride in white and think to yourself, a good marriage is a great thing. And while you’re at it - wish me luck in this new adventure too. By Felix Chuang, Senior Product Manager Felix Chuang is Senior Product Manager at KORE Telematics, an industry leader in the Machine-to-Machine (M2M) wireless market. He has more than fifteen years of experience in the Internet and wireless industries in a broad range of roles such as product management, business development, and operations. He is currently focused on the KORE Global Connect product line, which provides a single SIM for M2M network service in 180+ countries and 230+ carriers. He can be found on twitter at: @felixc and KORE Telematics can be found at: @koretelematics
Cheating and Technology While most won’t, some people will always try to cheat. Warnings, threats and punishment don’t have long-lasting effects, or even short-term ramifications. The reason for cheating usually has to do with money or something that leads to money—and that’s a pretty strong incentive. Take the very visible cheating that occurs in professional sports, particularly baseball, where some athletes take performance-enhancing drugs. These drugs allow some to perform well enough to break records, reach all-star levels and attract very high salaries. In a USA Today opinion expressed on Dec. 7, 2004, the author cited increased cheating on tests in schools and the growing acceptability of cheating on taxes, and went into detail on the drug problem in major league sports, providing the recent examples of Marion Jones from track and field, and Jason Giambi and Barry Bonds of major league baseball. Each of these athletes was being investigated for steroid use. Giambi even admitted it. The article described some effects of the cheating: “Clean players are put at an unfair disadvantage, tempting them to cheat to keep up.” It continued, “Fans, young and old, are cheated out of seeing a game honestly played…Records become lies.” The article called for a solution to find better ways to catch cheaters. The current methods are not working. Of course, cheating occurs in testing, as well. This includes tests to get into college (ACT or SAT), graduate school (GRE) or medical, business or law school (MCAT, GMAT or LSAT). Certification (or licensure) tests like those you take for IT certification, or those other professionals take to become registered nurses, certified public accountants, automobile mechanics, cosmetologists, hazardous material handlers, commercial truck drivers, airport security screeners and so on are no exception. Tests are given by HR departments for hiring or promotion, such as specific personality or skills tests. 
Tests are given to place kids in special programs in elementary school or to make sure a defendant is competent to stand trial. Tests are used for grading in schools (midterm or final exams) or for advancement (state assessments). Tests are even used to evaluate compliance with the No Child Left Behind Act. All of these tests have important consequences and lead to cheating by some, but it’s not always the people taking the tests. Teachers, for example, have been found to change answer sheets so their students get higher scores, leading to better teacher and school evaluations. High-stakes consequences lead to cheating. It’s that simple—and the problem is getting worse. Like steroid use in major league baseball, where it is estimated that 44 percent of the current players use such drugs, the data strongly indicate that cheating on tests also is on the rise. Getting ready for a recent flight, I decided to buy a paperback copy of “Harry Potter and the Sorcerer’s Stone” by J.K. Rowling. Both my wife and my daughter had read the entire series. I was looking for a good book, and they recommended it enthusiastically. It is, of course, very well written and describes a school, called Hogwarts, where magic is taught to young future wizards and witches. At this school, like at all normal schools, there are final exams at the end of the courses. Each student is required to answer the questions using special parchment that has been bewitched with an anti-cheating spell. Banned from testing in the fifth book in the series were Auto-Answer Quills, Remembralls, Detachable Cribbing Cuffs and Self-Correcting Ink. Do we have enchanted paper in high-stakes testing? Or even an enchanted keyboard? Not yet, but the next best thing is possible. Using sophisticated statistics, it is possible to tell if a person is cheating on a test—while the test is going on. After a few questions, the cheating is detected and verified, and the test can be stopped. 
Not providing a score is an immediate punishment for the cheater who loses his or her testing fee, perhaps jeopardizes the chance to test again and is unable to steal many questions (if stealing was the intention). Biometrics will help as well. Digital photographs taken of each test can be compared with each other and an original to make sure the right person is taking the test. These can be printed with score reports and certificates. Fingerprints work the same way, and are used more than ever before. Cameras and microphones installed at workstations can record what each test-taker is doing during the test and can be used to verify inappropriate behavior. The value of our test scores and their ultimate use in determining who gets certified, who gets into college and who gets good grades must be protected. It’s time to create and use the magic of technology to help. David Foster, Ph.D., is president of Caveon (www.caveon.com) and is a member of the International Test Commission, as well as several measurement industry boards. He can be reached at firstname.lastname@example.org.
Under his recently unveiled fiscal stimulus plan, President Obama seeks to invest up to $20 Billion in federal funds to achieve widespread deployment of Electronic Medical Records (EMRs). A principal reason for his initiative is to improve our nation's health care system by reducing long term costs and increasing effectiveness of our health outlays. So what exactly is an Electronic Medical Record and what does this new direction mean for security and privacy professionals? At its core, an Electronic Medical Record (EMR) is the effective capture, dissemination, and analysis of medical and health related information for a single patient. All participants in the health care delivery system have a stake in efficient information flows. They include health care providers, insurers, government agencies, claims processors, and patients. Thus the term EMR has a slightly different meaning depending on one's perspective. Indeed, Electronic Medical Records managed by individuals are termed Personal Health Records (PHRs). PHRs capture all relevant personal health details, including diagnoses, X-Rays, and similar items into a single repository. Individuals are then empowered to make health decisions for themselves, to easily choose among providers, to selectively disclose medical conditions, and to receive optimum care during emergencies. Both Google and Microsoft offer services for individuals to create, manage, and store their PHRs. We expect that there will be an explosion in demand as the computer-savvy population ages. The focus of this article, however, is on the secure use of EMRs by institutions and health providers in a regulatory arena rife with complexity and with strict privacy and safety requirements. Consider a typical hospital with a relatively well functioning EMR system. Using EMRs, doctors can conduct much of their business totally electronically. This is in sharp contrast to traditional care environments where paper shuffling is the norm. 
Using EMRs, doctors can review patient histories and charts, obtain laboratory results, generate referrals for specialist consultations, prescribe medicines, and review diagnostic images, all without the use of paper. This sounds utopian, and in many ways it is. But the soft underbelly of EMRs is the difficulty of adequately securing such records. Key security and privacy concerns for EMR systems include:

- Hacking incidents on EMR systems that lead to altered patient data or destroyed clinical systems
- Misuse of health information records by authorized users of EMR systems
- Long-term data management concerns surrounding EMR systems
- Government or corporate intrusion into private health care matters

At first glance, these issues do not appear to be very difficult to solve. The reality is that hospitals and other care environments are complex institutions with complex workflows. A great many staff need immediate access to medical records, including emergency technicians, admitting staff, doctors, nurses, and back-office personnel in billing and accounting. A quick fix might be to install role-based access control (RBAC) mechanisms that allow for fine-grained permissions. But in a security and remediation effort we conducted for a large health care provider, we discovered that retrofitting RBAC mechanisms into an existing EMR system was actually quite a complex undertaking. Assigning roles is particularly tricky across various hospital departments and personnel. An inadvertent stripping of viewing rights, for example, could leave a surgeon unable to view critical images in the operating theater. That could easily lead to a catastrophe, so ease-of-access considerations remain paramount. In our view, this has left most EMR system implementations with less-than-desirable security postures.

Take, for example, an unauthorized disclosure to the press of the medical records of an individual with HIV. The effect could be devastating.
Unintended outcomes might include family or community ostracism, job loss, and denial of medical benefits. While there are legal statutes to prevent the harmful effects of such disclosures, in practice these may be of little solace to the individual whose record was released. One can imagine an insurer denying claims by insisting that the condition was pre-existing. These situations can and do occur in real life, and hospitals and care providers must take heed.

The probability of a large security breach (of the network or the EMR application) also leaves many hospital administrators and compliance officers shuddering over the specter of privacy violations. Health Insurance Portability and Accountability Act (HIPAA) violations can have severe consequences, and new state regulations, such as California's, impose considerable penalties for the errant disclosure of medical records.

From our work at a large health care provider, we found that security breaches could be relatively easy to accomplish. Many EMRs are now connected to web applications (or are web applications themselves), making them relatively easy targets. We also found diagnostic systems with direct connections to the hospital networks. Since these systems also have remote diagnostic capabilities for troubleshooting or downloading new software, installing a worm on the network that incapacitates, for example, all networked X-ray machines is not out of the realm of possibility. At one facility, observations that subsequently led us to a focused remediation path included:

- The compliance organization at the facility was hampered by inadequate technology, resources, and processes for monitoring and acting on potential privacy violations.
- Application security vulnerability identification and management by the EMR vendor was inadequate and sorely needed improvement.
- Security monitoring, especially at the application and database level, needed substantial improvement.
Secure data lifecycle management was not a priority during EMR system deployment. As a result, items of specific concern included:

- A haphazard long-term data storage and archiving approach
- Inappropriate data purging
- Murky data ownership responsibilities
- Inadequate procedures and systems for information asset discovery
- Inadequate data classification
- Insecure handling of physical media

While contemplating doomsday scenarios alone is not helpful, we believe that hospitals and large health institutions must tackle security and privacy in a diligent and holistic way, almost akin to what the financial industry did to secure its transaction systems in the mid-2000s. Without a concerted effort at every layer of the information infrastructure (device, network, and application), strict policies and use guidelines, and accurate monitoring capabilities, EMR deployments could grind to a halt. The country needs better answers for securing EMRs. With the imminent outlays proposed by our new President to modernize our health care system, security professionals must step to the fore.

Feisal Nanji, CISSP, is Executive Director at Techumen, a consulting firm that focuses on security, compliance, and privacy issues for health institutions. He can be reached at: email@example.com.
BGP Routing Tutorial Series, Part 1
BGP Basics: Routes, Peers, and Paths

Designed before the dawn of the commercial Internet, the Border Gateway Protocol (BGP) is a policy-based routing protocol that has long been an established part of the Internet infrastructure. In fact, I wrote a series of articles about BGP, Internet connectivity, and multi-homing back in 1996, and two decades later the core concepts remain basically the same. There have been a few changes at the edge (which we'll cover in future posts), but these have been implemented as the designers anticipated, by adding "attributes" to the BGP specification and implementations. In general, BGP's original design still holds true today, including both its strengths (describing and enforcing policy) and weaknesses (lack of authentication or verification of routing claims).

Why is an understanding of BGP helpful in understanding Internet connectivity and interconnectivity? Because effective BGP configuration is part of controlling your own destiny on the Internet. And that can benefit your organization in several key areas:

- Preserve and grow revenue.
- Protect the availability and uptime of your infrastructure and applications.
- Use the economics of the Internet to your advantage.
- Protect against the global security risks that can arise when Internet operators don't agree on how to address security problems.

BGP and Internet connectivity is a big subject, so there's a lot of ground to cover in this series. The following list will give you a sense of the range of topics we'll be looking at:

- The structure and state of the Internet;
- How BGP has evolved and what its future might hold;
- DDoS detection and prevention;
- Down the road, additional topics such as MPLS and global networking, internal routing protocols and applications, and other topics that customers, friends, and readers are interested in seeing covered.
For this first post we'll get our feet wet with some basic concepts related to BGP: Autonomous Systems, routes, peering, and AS_PATH.

Routes and Autonomous Systems

To fully understand BGP we'll first get familiar with a couple of underlying concepts, starting with what it actually means to be connected to the Internet. For a host to be connected there must be a path or "route" over which it is possible for you to send a packet that will ultimately wind up at that host, and for that host to have a path over which to send a packet back to you. That means that the provider of Internet connectivity to that host has to know of a route to you; they must have a way to see routes in the section of the IP space that you are using. For reasons of enforced obfuscation by RFC writers, routes are also called Network Layer Reachability Information (NLRI). As of December 2015, there are over 580,000 IPv4 routes and nearly 26,000 IPv6 routes.

Another foundational concept is the Autonomous System (AS), which is a way of referring to a network. That network could be yours, or belong to any other enterprise, service provider, or nerd with her own network. Each network on the Internet is referred to as an AS, and each AS has at least one Autonomous System Number (ASN). There are tens of thousands of ASNs in use on the Internet. Normally the following elements are associated with each AS:

- An entity (a point of contact, typically called a NOC, or Network Operations Center) that is responsible for the AS.
- An internal routing scheme so that every router in a given AS knows how to get to every other router and destination within the same AS. This would typically be accomplished with an interior gateway protocol (IGP) such as Open Shortest Path First (OSPF) or Intermediate System to Intermediate System (IS-IS).
- One or multiple border routers.
A border router is a router that is configured to peer with a router in a different AS, meaning that it creates a TCP session on port 179 and maintains the connection by sending a keep-alive message every 60 seconds. This peering connection is used by border routers in one AS to "advertise" routes to border routers in a different AS (more on this below).

As explained above, the interconnections that are created to carry traffic from and between Autonomous Systems result in the creation of "routes" (paths from one host to another). Each route is made up of the ASN of every AS in the path to a given destination AS. BGP (more explicitly, BGPv4) is the routing protocol that is used by your border routers to "advertise" these routes to and from your AS to the other systems that need them in order to deliver traffic to your network:

- Peer networks, which are the ASs with which you've established a direct reciprocal connection;
- Upstream or transit networks, which are the providers that connect you to other networks.

Specifically, your border routers advertise routes to the portions of the IPv4 and IPv6 address space that you and your customers are responsible for and know how to get to, either on or through your network. Advertising routes that "cover" (include) your network is what enables other networks to "hear" a route to the hosts within your network. In other words, every IP address that you can get to on the Internet is reachable because someone, somewhere, has advertised a route that covers it. If there is not a generally advertised route to cover an IP address, then at least some hosts on the Internet will not be able to reach it.

The advertising of routes helps a network operator do two very important things. One is to make semi-intelligent routing decisions concerning the best path for a particular route to take outbound from your network.
Otherwise you would simply set a default route from your border routers into your providers, which might cause some of your traffic to take a sub-optimal external route to its destination. Second, and more importantly, you can announce your routes to those providers, for them to announce in turn to others (transit) or just use internally (in the case of peers). In addition to their essential role in getting traffic to its destination, advertised routes are used for several other important purposes:

- To help track the origin and path of network traffic;
- To enable policy enforcement and traffic preferences;
- To avoid creating routing, and thus packet, loops.

Besides being used to advertise routes, BGP is also used to listen to the routes from other networks. The sum of all of the route advertisements from all of the networks on the Internet contributes to the "global routing table" that is the Internet's packet directory system. If you have one or more transit providers, you will usually be able to hear that full list of routes.

One further complication: BGP actually comes in two flavors depending on what it's used for:

- External BGP (eBGP) is the form used when routers that aren't in the same AS advertise routes to one another. From here on out you can assume that, unless otherwise stated, we're talking about eBGP.
- Internal BGP (iBGP) is used between routers within the same AS.

The AS_PATH attribute

BGP supports a number of attributes, the most important of which is AS_PATH. Every time a route is advertised by one BGP router to another over a peering session, the receiving router prepends the remote ASN to this attribute. For example, when Verizon hears a route from NTT America, Verizon "stamps" the incoming route with NTT's ASN, thereby building the route's AS_PATH. (Note that when a route is advertised between routers in the same AS, using iBGP, the ASN for both routers is the same and thus AS_PATH is left unchanged.)
When multiple routes are available, remote routers will generally decide which is the best route by picking the route with the shortest AS_PATH, meaning the route that will traverse the fewest ASes to get traffic to a given destination AS. That may or may not be the fastest route, however, because there's no information about the network represented by a given AS: nothing about that network's bandwidth, the number of internal routers and hop count, or how congested it is. From the standpoint of BGP, every AS is pretty much the same. Additional uses for AS_PATH include:

- Loop detection: When a border router receives a BGP update (path advertisement) from its peers, it scans the AS_PATH attribute for its own ASN; if found, the router will ignore the update and will not advertise it further to its iBGP neighbors. This precaution prevents the creation of routing loops.
- Setting policy: BGP is designed to allow providers to express "policy" decisions such as preferring Verizon over NTTA to get to Comcast.
- Visibility: AS_PATH provides a way to understand where your traffic is going and how it gets there.

Conclusion… and a look ahead

So far we've just scratched the surface of BGP, but we've learned a few core concepts that will serve as a foundation for future exploration:

- Internet connectivity: the ability of a given host to send packets across the Internet to a different host and to receive packets back from that host.
- Autonomous system (AS): a network that is connected to other networks on the Internet and has a unique AS number (ASN).
- Route: the path travelled by traffic between Autonomous Systems.
- Border router: a router that is at the edge of an AS and connects to at least one router from a different AS.
- Peering: a direct connection between the border routers of two different ASs in which each router advertises the routes of its AS.
- eBGP: the protocol used by border routers to advertise routes.
- AS_PATH: the BGP attribute used to specify routes.
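The AS_PATH mechanics covered above (prepending on advertisement, shortest-path selection, and loop detection) can be sketched in a few lines. This is an illustrative model only, not a real BGP implementation; the ASNs and prefix are made-up private-range values chosen for the example.

```python
# Illustrative model of BGP AS_PATH handling. Routes are plain dicts;
# all ASNs and the prefix below are hypothetical example values.

def advertise(route, local_asn):
    """Return the route as a neighbor would receive it: the advertising
    AS's ASN is prepended to AS_PATH (eBGP behavior)."""
    return {"prefix": route["prefix"], "as_path": [local_asn] + route["as_path"]}

def accept(route, local_asn):
    """Loop detection: ignore any update whose AS_PATH already
    contains our own ASN."""
    return local_asn not in route["as_path"]

def best_route(candidates):
    """Prefer the shortest AS_PATH (the selection criterion discussed
    above; real BGP applies several other tiebreakers as well)."""
    return min(candidates, key=lambda r: len(r["as_path"]))

# A route originated by AS 64512, heard via two different neighbors:
origin = {"prefix": "198.51.100.0/24", "as_path": [64512]}
via_64513 = advertise(origin, 64513)                    # AS_PATH [64513, 64512]
via_64514 = advertise(advertise(origin, 64515), 64514)  # AS_PATH [64514, 64515, 64512]

assert accept(via_64513, local_asn=64520)      # no loop: accept
assert not accept(via_64513, local_asn=64513)  # own ASN in path: reject
best = best_route([via_64513, via_64514])
print(best["as_path"])  # [64513, 64512]
```

Note that, exactly as the article says, the shorter path wins even though nothing here knows anything about bandwidth or congestion inside each AS.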
In future posts we’ll get deeper into the uses and implications of the above concepts. We’ll also look at single-homed and multi-homed networks, how using BGP changes the connectivity between a network and the Internet, and who can benefit from using BGP. When we’ve got those topics down we can then look at the ins and outs of BGP configuration. Stay tuned…
The Web browser has been a major infection vector for years, allowing malware to be transported to millions of computers through phishing, man-in-the-middle, SQL injection and countless other attacks. But what if there was a way to stop this madness and secure the browsing channel itself? There are several key things to look for. The first is understanding your existing browser. When you use Chrome, for example, you agree to let Google track your browsing behavior and offer up search suggestions, send them error reports, track your URLs, and lots more. They claim it is to help improve the user experience, but it also leaves you vulnerable to attacks and records your movements through cyberspace. So a replacement browser should offer some additional privacy components. (There are products that can be used to anonymize your browsing history and protect your identity when you surf online, such as TOR or ZipZap.)
Autonomous System Numbers (ASNs) play an important role in the routing architecture of the Internet. An Autonomous System (AS) is, according to RFC 4271, "... a set of routers under a single technical administration, using an interior gateway protocol (IGP) and common metrics to determine how to route packets within the AS, and using an inter-AS routing protocol to determine how to route packets to other ASs." AS numbers are—like IP addresses—a finite resource, and predictions exist for when the AS number pool will be depleted. In our first article, Geoff Huston explains how ASNs work, and introduces us to the 4-byte ASN scheme that will allow for future growth beyond the currently predicted depletion date.

Our second article looks at another aspect of Internet routing and addressing—the IPv4 number space itself. Designers and operators of internets are often required to perform various address calculations in order to properly configure their networks. Russ White takes us through several exercises and introduces some "tricks of the trade" to make such calculations easier.

Our articles on spam in the last issue of IPJ prompted some feedback from our readers, and promises of more articles from other authors. This problem space clearly has more than a single solution. We look forward to bringing you more coverage of this topic in future editions.

The second issue of the IETF Journal, published by the Internet Society, is now available. Some people have asked me if I think of this new journal as a "competitor" to IPJ. I am happy to say that the IETF Journal is very much complementary to IPJ and covers important news from the IETF that we hope our readers will find interesting. You can access the IETF Journal by visiting: http://ietfjournal.isoc.org

The IPJ Reader Survey will soon close. We are grateful to the many readers who took the time to tell us about their reading habits, ideas for future articles, and other suggestions.
Of course, we always welcome your feedback on any aspect of IPJ. Just drop us a line via e-mail to: firstname.lastname@example.org —Ole J. Jacobsen, Editor and Publisher
If you're like I used to be, you always have trouble remembering the difference between how Windows and Linux terminate lines in text files. Does Windows add the extra stuff, or does Linux? What exactly is the extra stuff? How do I get the stuff out? Well, hopefully by the end of this you'll be taken care of once and for all.

First and foremost, let's establish what the characters are and the differences between them. Both characters are control characters, meaning they're invisible and meant to keep track of something within an application rather than be interfaced with by the user directly. The Carriage Return (CR) is represented by ASCII number 13, and came from the movement of a typewriter to the left of a sheet of paper. Think "returning of the carriage" to the left. The Line Feed (LF) is represented by ASCII number 10, and it harkens back to the action of a typewriter rolling a piece of paper up by one line. Interestingly enough, the combination of these two functions is integrated into the ENTER/RETURN key. Also known as CRLF, this handy shortcut both moves you to the left and down a line.

Essentially, the crux of the whole CR / LF / file corruption issue is the fact that Windows, Macs, and *Nix terminate text file lines differently. Below is a list of how they break down:

- *Nix uses the LF character
- Macs use the CR character
- Windows uses both — with the CR coming before the LF

How this ends up playing out is that if you write a file in Windows and transfer it bit for bit to a *Nix machine, it'll have extra CR characters that can cause all sorts of havoc. On the other hand, if you transfer a file from a *Nix machine to a Windows machine in the same way, you'll end up with a bunch of lines joined together by little boxes where there are supposed to be line breaks (because the lines are lacking the CR character).

How To Fix It

The good news is that there are plenty of ways to fix this problem.
To start with, if you have ever used one of the more advanced FTP programs you've probably noticed the Binary and ASCII options. If you use Binary, files are transferred "bit for bit", or exactly as they are, between the source and destination. If a text file is transferred between a *Nix and Windows box (or vice versa) using this mode, the symptoms mentioned above will surface. If you use the ASCII mode, however, and you perform that same transfer, the CR / LF conversions are done for you, i.e. if it's a Windows –> *Nix transfer, the CR characters will be removed, and if it's a *Nix –> Windows transfer they will be added.

In addition, you can always use tr to translate from one to another:

Windows –> NIX:
tr -d '\r' < windowsfile > nixfile    # delete the carriage returns

Mac –> NIX:
tr '\r' '\n' < macfile > nixfile      # translate carriage returns into newlines

NIX –> Mac:
tr '\n' '\r' < nixfile > macfile      # translate newlines into carriage returns

Yet another option is to do this from within vi like so:

:set fileformat=unix

You can simply change the format among the three (unix, mac, and dos) in this fashion. And when you save via :w, it rewrites the file in the correct format.
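If you'd rather script the conversion than remember the tr incantations, the same translations can be done in a few lines of Python. This is a minimal sketch mirroring the commands above; the key trick is normalizing everything to bare LF first so a CRLF is never double-converted.

```python
# Minimal line-ending converter, operating on bytes so no accidental
# text-mode translation gets in the way.

def convert_newlines(data: bytes, to: str) -> bytes:
    """Convert text bytes to 'unix' (LF), 'dos' (CRLF), or 'mac' (CR)."""
    # Normalize everything to bare LF first (order matters: CRLF before CR).
    normalized = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    if to == "unix":
        return normalized
    if to == "dos":
        return normalized.replace(b"\n", b"\r\n")
    if to == "mac":
        return normalized.replace(b"\n", b"\r")
    raise ValueError(f"unknown format: {to}")

windows_text = b"line one\r\nline two\r\n"
print(convert_newlines(windows_text, "unix"))  # b'line one\nline two\n'
print(convert_newlines(windows_text, "mac"))   # b'line one\rline two\r'
```

Unlike the tr one-liners, this handles mixed-format files gracefully, since every style is funneled through the same LF-normalized intermediate.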
Research from Javelin Research & Strategy identifies 18-24 year olds as the consumers most likely to engage in risky electronic behavior. From public Wi-Fi through smartphone usage, this group tends to put itself in harm's way when it comes to identity theft. PrivacyGuard announced a list of tips and precautions for college students to help them prevent identity theft.

Use a strong password on your computer

Friendly fraud, the type of ID theft that results when the victim knows the criminal, has been elevated in young people aged 18-24, according to Javelin Research & Strategy. The use of a strong login password on a computer is a way to prevent this type of fraud from occurring in a dormitory setting. Doing so can prevent roommates and strangers alike from logging on to a potential victim's computer.

Protect your computer with antivirus software

Be certain to install antivirus software and to regularly update it to protect your computer from online threats. This is particularly important as universities move their data to the cloud, and continual connectivity to the Internet becomes more prevalent in day-to-day studies.

Don't transact over public Wi-Fi

While public Wi-Fi offers an exceptional degree of convenience, it's important to realize that thieves are able to intercept information being sent over the technology. Students are discouraged from sending any personal information, from credit/debit card numbers to Social Security numbers, via public Wi-Fi.

"Dormitory living and the college environment can lend themselves to creating easy targets for identity thieves," said Vin Torcasio, Director of Product for PrivacyGuard. "As students prepare for the back-to-school season and arrive at school, there are certain precautions that they should take in order to protect themselves from becoming a victim of the crime."
On June 11, leading authorities on the World Wide Web will gather at Rensselaer Polytechnic Institute for an old-fashioned debate with a social media twist. The questions for discussion will be shaped and selected by the collective wisdom of Web users from around the world. After delivering a keynote address, Tim Berners-Lee, inventor of the Web, will join a panel of experts from academia and industry for a public discussion about the Web's future.

The content of the debate will be collaboratively created by Web users, who can submit questions and promote them through a user-based ranking system, similar to the community-based news site Digg. The most popular questions will drive the discussion at the June 11 debate. The public debate, which will be streamed live via an interactive Webcast, is part of a daylong event to celebrate the launch of the Tetherless World Constellation at Rensselaer - a new academic center devoted to the emerging field of Web Science.

A wide range of issues are up for discussion, from sustaining the usefulness of the current Web to creating a next-generation Semantic Web, as well as the role of politics, education, and sociological factors in the Web's continued evolution. Following introductory remarks by Rensselaer President Shirley Ann Jackson, participants in the panel will be:

Members of the public are invited to submit and vote on questions until the day of the debate. During the discussion, viewers will be able to interact with the panelists by submitting follow-up questions and comments in real time.
A remote procedure call (RPC) is a form of inter-process communication that lets a computer program execute a subroutine or procedure in another address space, commonly on another machine, without the programmer writing explicit code for the remote interaction. The programmer writes essentially the same code whether the subroutine is local or remote. When the software in question uses object-oriented principles, RPC is instead referred to as remote invocation or remote method invocation. Different implementations of the concept, however, often rely on incompatible technologies.

RPC follows a message-passing model: the client sends a request message to a remote server asking it to execute a specified procedure, and the server sends back a response so the process can continue. There are variations and subtleties among implementations, which lead to different and incompatible RPC protocols. While the server is processing a call, the client normally blocks; alternatively, the client can issue an asynchronous request (such as an XHTTP call) to the server. Remote calls can also fail because of unpredictable network problems, and such failures can occur without the caller knowing whether the remote procedure was actually executed. Procedures without side effects (known as idempotent procedures) can safely be retried when this happens.

The sequence of events during an RPC is as follows:

- The client calls the client stub by means of a local procedure call, passing the parameters in the ordinary way.
- The client-side stub performs marshalling, packing the parameters into a message, and makes a system call to send that message.
- The client's message is forwarded to the server, where the kernel passes the incoming packets to the server-side stub, which in turn invokes the server procedure.

Remote Procedure Call Protocol's Structure

The RPC message protocol defines two distinct message types: the call message and the reply message.

RPC Call Message: Each RPC call message contains unsigned integer fields used to identify the remote procedure: the program number, the program version number, and the procedure number.

RPC Reply Message: The form of the reply to a call message varies depending on whether the server accepted or rejected the call. A reply message carries information that distinguishes conditions such as whether the RPC executed the call message successfully and whether the remote program is unavailable on the remote system.
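The call/reply exchange described above can be modeled in a few lines. This is a toy in-process sketch, not a wire protocol: the message fields follow the three identifying numbers named in the text, but the program number and the "add" procedure are hypothetical examples.

```python
# Illustrative model of an RPC call/reply exchange. The (program,
# version, procedure) triple identifies the handler, mirroring the
# call-message fields described above. All numbers are made up.

from dataclasses import dataclass

@dataclass
class CallMessage:
    program: int        # which remote program
    version: int        # which version of that program
    procedure: int      # which procedure within the program
    params: tuple = ()  # marshalled parameters

@dataclass
class ReplyMessage:
    accepted: bool
    result: object = None
    error: str = ""

# Server side: a registry mapping (program, version, procedure) to a handler.
HANDLERS = {
    (100017, 1, 1): lambda a, b: a + b,  # hypothetical "add" procedure
}

def dispatch(call: CallMessage) -> ReplyMessage:
    handler = HANDLERS.get((call.program, call.version, call.procedure))
    if handler is None:
        # The "remote program not accessible" condition from the text.
        return ReplyMessage(accepted=False, error="PROG_UNAVAIL")
    return ReplyMessage(accepted=True, result=handler(*call.params))

reply = dispatch(CallMessage(program=100017, version=1, procedure=1, params=(2, 3)))
print(reply.accepted, reply.result)  # True 5
bad = dispatch(CallMessage(program=999, version=1, procedure=1))
print(bad.accepted, bad.error)       # False PROG_UNAVAIL
```

Real RPC stubs would additionally serialize the CallMessage to bytes and move it across a network, but the accept/reject split in the reply is the same idea.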
Jobs Never Forgave Google's Eric Schmidt for Backing Android

In reality, the iPhone, as nice as it is, is derivative of the products that preceded it in the market. While Apple did a beautiful job on the user interface, and made a device that's attractive enough to garner a gazillion followers and an ecosystem that was just closed enough to control while being open enough to gain a great deal of external support, the iPhone still depended on the work of others. This is true of Apple's products in general. As nice as the original Macintosh may have been, it depended on Xerox for the original design of the interface. As nice as the Apple II may have been, it too was based on predecessors.

But this isn't to suggest that the Macintosh or the Apple II were bad computers or that they shouldn't have been developed using the concepts of others. There really is no alternative. Despite Apple's claims of uniqueness, the company couldn't have been completely unique if it expected to actually sell computers. Apple didn't invent computing, after all. The company simply developed software using a different approach from what was emerging elsewhere at the time. Of course, Apple insisted on using a closed platform. The company refused, except for a brief time, to allow clones of its product. And when clones did appear, Apple put them out of business.
When assessing the network security of an organization it is important to understand the breadth of the attack surface. A single forgotten host or web application in the network will often become the initial foothold for an attacker.

Passively Mapping the Network Attack Surface

Using open source intelligence (OSINT) techniques and tools it is possible to map an organization's Internet-facing networks and services without actually sending any packets (or only a few standard requests) to the target network. Open source intelligence (OSINT) is defined as deriving intelligence from publicly available resources. Consider the following graphic; you will notice that as the analysis progresses, newly discovered items (IP addresses / host names / net blocks) can open up new areas to explore (and attack). Identifying all known hosts for an organization allows us to continue to dig deeper for more systems and hosts to target. By examining all discovered IP address blocks (ASN) we can find other hosts within the net blocks of interest. Identifying related domains will lead to the discovery of more hosts. Think of a single web server: the open services (e.g. rdp) are all points of attack, and discovering all the virtual hosts running on the server is also important, as web applications running on any of the virtual hosts are also an attack vector.

Basic DNS queries

Most domains will have a web site, mail server and DNS servers associated with them. These will be our initial point of reference when discovering the attack surface. We can use DNS lookup tools and whois to find where the web (A records), mail (MX records) and DNS (NS records) services are being hosted.

hackertarget.com. 3600 IN A 220.127.116.11
hackertarget.com. 3600 IN AAAA 2a01:7e00::f03c:91ff:fe70:d437
hackertarget.com. 3600 IN MX 10 aspmx.l.google.com.
hackertarget.com. 3600 IN MX 20 alt1.aspmx.l.google.com.
hackertarget.com. 3600 IN MX 20 alt2.aspmx.l.google.com.
hackertarget.com. 3600 IN MX 30 aspmx2.googlemail.com.
hackertarget.com. 3600 IN NS ns51.domaincontrol.com.
hackertarget.com. 3600 IN NS ns52.domaincontrol.com.

Initial Host discovery (Google, Bing and Netcraft)

A simple search for all host names related to the target domain is also a good starting point. Using search engines such as Google (site:example.com) and Bing (site:example.com) can reveal sub-domains and web hosts that may be of interest. When searching with Google, if there are a large number of results you can progressively exclude known hosts with the -site: operator (e.g. site:example.com -site:www.example.com). Google Hacking is a well documented technique that involves getting Google to reveal technical information that is of interest to an attacker. The Google Hacking Database is the best place to get started if you are not familiar with this technique. Another handy tool is the Netcraft host search. Enter the domain as your search term (be sure to include the . before your domain name to only get sub-domains of your target domain). You can see the Netcraft search provides a quick overview of known web hosts for the domain and the net blocks that they are hosted in. Another interesting piece of information is the historical data; Netcraft have been collecting this data for a long time.

Finding more hosts through data mining DNS records

Passive DNS reconnaissance allows discovery of DNS host records without actively querying the target DNS servers. If DNS monitoring is in place, active DNS recon could be detected by the target. Techniques that can be classed as active DNS recon include brute forcing common sub-domains of the target or attempting DNS zone transfers (query type=AXFR). There are many online resources for passive DNS analysis and searching; rather than sticking to regular DNS lookups we can perform large scale searches using DNS data sets.
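As a quick illustration (a sketch, not a tool from the article), zone-style lookup output like the records shown earlier can be parsed and grouped by record type, giving a simple inventory of web, mail and name servers:

```python
def parse_records(zone_text):
    """Group zone-file style lines (name TTL class type rdata) by record type."""
    records = {}
    for line in zone_text.strip().splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue  # skip anything that is not a complete record
        name, ttl, _klass, rtype = fields[:4]
        rdata = " ".join(fields[4:])
        records.setdefault(rtype, []).append(rdata)
    return records

zone = """\
hackertarget.com. 3600 IN MX 10 aspmx.l.google.com.
hackertarget.com. 3600 IN NS ns51.domaincontrol.com.
hackertarget.com. 3600 IN NS ns52.domaincontrol.com.
"""
by_type = parse_records(zone)
print(sorted(by_type))  # ['MX', 'NS']
```

The same grouping applies to larger passive DNS data sets once the records are in this line-oriented form.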
One such resource is the data provided by scans.io. Scans.io and Project Sonar gather Internet-wide scan data and make it available to researchers and the security community. This data includes port scans and a dump of all the DNS records that they can find. Using the DNS records dump you can search through over 80 GB of DNS data for all entries that match your target domain. If you do not wish to go through the trouble of downloading and extracting such a large chunk of data, you can use our free tools to get started with your network reconnaissance.

Name Servers (type=NS)

The location of the DNS servers may be internal to the organization's network or, as is often the case, they may be a hosted service. This can often be determined by simply looking up the net block owner (ASN) of the IP address of the DNS server. When looking at DNS servers we can not only review the host (A) records that point to the IP address of the DNS server, but also do a reverse search across DNS data for all hosts that use the same DNS server. In a hosted situation this may not be as valuable, but if it's an internal company DNS server we will quickly identify all related domains for the organization (at least those using this DNS infrastructure).

targetdomain.com
targetdomain.co.uk
targetdomain.net
forgotten-footy-tipping-site-with-no-security-controls.com
vpn.targetdomain.com
webmail.targetdomain.com

SPF Records (type=TXT)

Sender Policy Framework (SPF) is configured through a TXT DNS record. If configured, this record will contain all servers (or networks) that are allowed to send email from the domain. It can often reveal IP addresses (and net blocks) of the organization that you may not have been aware of.

hackertarget.com. 3600 IN TXT "v=spf1 include:_spf.google.com ip4:18.104.22.168 ip4:22.214.171.124 ip4:126.96.36.199 ip4:188.8.131.52 ip6:2a01:7e00::f03c:91ff:fe70:d437 ip6:2600:3c03::f03c:91ff:fe6e:d558 include:_spf.google.com ~all"

Reverse DNS across IP blocks of Interest

Once you have a list of all IP addresses and ASNs of interest, you can attempt to find more active hosts within the net blocks the organization owns or has assets within. A good way of finding more hosts is to perform a reverse DNS search across the full net blocks of interest.

178.79.x.22 host4.example.com
178.79.x.23 targetdomain.com
178.79.x.24 forgotten-unpatched-dev-host.targetdomain.com
178.79.x.25 host6.example.com

Finding Web Servers

When it comes to mapping a network, the web servers of an organization open up a wide attack surface. They also contain a wealth of information: not just published content, but insights into the technologies and operating systems in use, and even how well managed the information technology resources of the organization are. To map the attack surface of a web server it is important to consider the available network services, the virtual hosts (websites) and the web applications in use. Identifying all virtual web hosts on the web server is an important part of the information gathering process. Different web sites on the same web server will often be managed using different content management systems and web applications. A vulnerability in any of these web applications could allow code execution on the web server. To identify the virtual web hosts on a particular IP address there are a number of well known web based tools such as Bing and Robtex. An IP address search using the ip:184.108.40.206 search term on the Bing search engine will reveal all web sites that Bing has in its index that point to that same IP address. Experience shows this is a good starting place but, like the Bing search engine in general, it can contain stale entries and limited results.
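The ip4:/ip6: mechanisms can be extracted from an SPF TXT value like the one shown earlier with a few lines of Python. This sketch (and the documentation-range addresses in it) is ours, not from the article:

```python
def spf_networks(spf_value):
    """Extract the ip4:/ip6: mechanisms from an SPF TXT record value.

    These reveal addresses and net blocks the organization sends mail
    from, which may not have been discovered elsewhere."""
    nets = []
    for mech in spf_value.split():
        if mech.startswith(("ip4:", "ip6:")):
            nets.append(mech.split(":", 1)[1])
    return nets

# Hypothetical SPF value using documentation address ranges
spf = "v=spf1 include:_spf.google.com ip4:203.0.113.10 ip6:2001:db8::1 ~all"
print(spf_networks(spf))
```

include: mechanisms point at further SPF records (often a mail provider's), which can be resolved and parsed the same way.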
As previously mentioned, scans.io regularly compiles all the DNS data it can publicly find; we can use this data to identify the web server hosts (A records). By searching for an IP address in all the known DNS records, we can find all the hosts that resolve to that IP. This is the method we use for the Reverse IP Address Search tool we created, and also for parts of the dnsdumpster.com project.

Other Network Services

In any vulnerability assessment it is essential to identify all listening services. A web server will usually have a web service listening (port 80), an FTP server will have an FTP service (port 21) and a mail server will be listening for mail (SMTP on 25, POP3 on 110, IMAP on 143 and more). It is important to discover all the listening services in order to determine if they are vulnerable to exploitation or authentication attacks. Traditionally these services would be identified using port scans with tools such as the Nmap Port Scanner. Of course, using a port scanner is no longer a passive undertaking, as a tool such as Nmap sends packets to the target systems.

shodan.io search engine

To passively find open services and the banners for those services we can use the shodan.io search engine. From the banners of services such as web, mail and FTP servers we are able to identify the version of the server software running the service and often the operating system of the server, all without sending any packets to the target organization.

Becoming the attacker and the Next Steps

Putting yourself in the shoes of an attacker and attempting to map out an organization's Internet-facing systems is a great way to develop an understanding of the attack surface of a network. Start by finding all the public IP addresses of known hosts for a domain, then expand this to include the net blocks of interest that are hosting these services. Now try to find all virtual host names that are hosted on those IP addresses, and from this you can map out the web applications in use. From this initial passive analysis it may be possible to identify vulnerable (or possibly vulnerable) points in the network; this can inform your next steps and where to focus your attack or vulnerability assessment. Moving on from passive analysis, the next steps to consider are active information gathering, such as DNS zone transfers or sub-domain brute forcing, followed by active network scanning, such as Nmap port scans and vulnerability scanning. Ultimately the next steps will be determined by your scope and purpose for performing the analysis.
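The reverse DNS sweep described earlier can be sketched roughly as follows. This is an illustration under assumptions: the resolver is injected as a callable (in practice socket.gethostbyaddr, or a lookup into a passive DNS data set), and the net block and host names below are hypothetical:

```python
import ipaddress

def reverse_sweep(cidr, resolve):
    """Walk a net block of interest and record every IP that has a
    PTR (reverse DNS) entry.

    `resolve` is any callable mapping an IP string to a host name,
    returning None when there is no PTR record."""
    found = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        name = resolve(str(ip))
        if name:
            found[str(ip)] = name
    return found

# Hypothetical resolver standing in for socket.gethostbyaddr
fake_ptr = {"192.0.2.3": "vpn.targetdomain.com",
            "192.0.2.4": "forgotten-dev-host.targetdomain.com"}
hosts = reverse_sweep("192.0.2.0/29", fake_ptr.get)
print(hosts)
```

Injecting the resolver keeps the sweep passive when driven from a DNS data set, and lets the same code run against live reverse lookups if active scanning is in scope.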
Explanation of the difference between creating a backup and disk cloning

The Backup operation of Acronis software creates an image file for backup and disaster recovery purposes, while the Disk Clone tool simply copies/moves the entire contents of one hard disk drive to another. Here's how both tools work and when you should use them.

When you create a backup with Acronis True Image or Acronis Backup, you get a compressed .tib file containing an exact copy of your hard disk, a disk partition or individual files or folders (you make this choice when you create an image archive). If you create a backup of a disk or partition, this backup contains everything that resides on the selected disk/partition, including the operating system, applications and all files and folders. You can save this image to any supported storage device and use it as a backup or for disaster recovery purposes.

When you use the Disk Clone tool, you copy all contents of one hard disk drive onto another hard disk drive: as a result, both the source and the target disk have the same data. This function allows you to transfer all the information (including the operating system and installed programs) from a small hard disk drive to a large one without having to reinstall and reconfigure all of your software.

The Disk Clone operation is not generally used as a backup strategy, as it offers little flexibility. In general, disk cloning is a one-time operation designed to clone one disk to a different one for the purpose of migrating to a larger hard drive or to a new machine.

A backup operation offers greater flexibility as a backup strategy:
- Backups can be scheduled (e.g. regular automatic backups that require no user interaction);
- Backup changes can be appended incrementally or differentially (i.e. after a full backup, subsequent backups will take less time and occupy less space than the first one);
- Backups allow you to keep several versions of the backed up data, and you can restore to one of the previous versions (e.g. you can keep backups from one, two and three weeks ago on the same disk and recover the backup from the moment that you need);
- Backups can be mounted and searched through (e.g. if you want to quickly find, view and copy a file from them).

Either way (backup and recovery of the entire disk, or disk clone) you can transfer the whole operating system and installed programs to a new disk.

Backing up with Acronis True Image 2016:
Cloning with Acronis True Image 2016:
- Acronis True Image 2014: Cloning Disks
- True Image 2013 by Acronis: Cloning Basic Disks
- Acronis Disk Director 12: Cloning Basic Disks
- Cloning Laptop Hard Disk
- Resizing Partitions during Disk Cloning
- Transferring a System from IDE to SATA Hard Disk and Vice Versa
- Creating a Sector-By-Sector Backup with Acronis Products
|<10.1>||Caption||Any text captured within this element will be centered across the top of the table.|
|<10.2>||Table width||The maximum table width (in pixels or as a percentage of the page width). If set as a number of pixels, this will force the table to the specified width regardless of page width.|
|<10.3>||Border width||The table outer border width (in pixels). If set to 1 or more, this will insert border lines between table cells. Border widths greater than one only affect the outer table border. The default border width is 0. See examples below.|
|<10.4>||Cell spacing||The spacing (in pixels) applied between cells within a table. This is set to 1 by default. See examples below.|
|<10.5>||Cell padding||The padding (in pixels) included between the table cell walls and the object(s) contained within the cell. This is set to 1 by default. See examples below.|
|<10.6>||Background color||The default background color to be used for cells within the table. HTML colors are defined either as color names (for the most well known colors like "black", "white", "red") or as a mix of red, green and blue amounts. These amounts are defined in hex (00 - FF), where 00 is the least amount of each color and FF is the most. The color sets are combined into an HTML color reference as follows: #FFFFFF (where the first byte - FF - defines the red amount, the second byte defines green and the last byte defines blue). The "#" is required for these types of color definitions. Complete information on HTML colors can be found within the HTML specification at the World Wide Web Consortium.|
|<10.7>||Compression||If set, whole empty rows and columns will be compressed. This option is only valuable if the table compression will not cause cells to shift into incorrect positions.|
|<10.9>||HTML tag inclusion|

Border, cell spacing and cell padding values are illustrated in these examples:
- Border width=5, Cellpadding=0, Cellspacing=0
- Border width=1, Cellpadding=20, Cellspacing=0
- Border width=1, Cellpadding=0, Cellspacing=20
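For reference, the second example configuration listed above (border width 1, cell padding 20, cell spacing 0) corresponds to HTML table markup along these lines; the caption text, background color and cell contents are placeholders, not values from this manual:

```html
<table border="1" cellpadding="20" cellspacing="0" bgcolor="#FFFFFF">
  <caption>Sample caption</caption>
  <tr><td>Cell 1</td><td>Cell 2</td></tr>
</table>
```

Adjusting the `cellpadding` value changes the space inside each cell wall, while `cellspacing` changes the gap between neighboring cells, matching the options described in rows 10.4 and 10.5.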
The scientific community is abuzz over the recent discovery of a Higgs-like particle based on experiments performed on the Large Hadron Collider (LHC) at CERN. What is less well known is that the LHC researchers and other European scientists will soon have access to a new cloud platform specifically designed for their computational needs. The project, known as Helix Nebula, was introduced in March to provide researchers with cloud-based computing and analytics resources.* A partnership between IT service providers and scientific facilities, Helix Nebula consists of the research institutions CERN, the European Molecular Biology Laboratory (EMBL) and the European Space Agency (ESA). The program is currently undergoing a two-year pilot phase with €1.8 million in funding from the European Commission. In addition to particle discovery research, the cloud has also assisted with studies in earth observation and molecular biology. During this pilot phase, scientists have deployed applications with tens of thousands of jobs across multiple datacenters. Michael Symonds, principal architect for Atos, one of the cloud resource providers, expressed positive feedback regarding the project's status. “Setting up a public style cloud for very demanding research organizations is very different to providing private enterprise cloud services to companies,” he said. “It has taken a lot of effort but we are all pleased with these early results and are confident we can build on this in the future.” In September, representatives from each of the Helix Nebula research facilities will deliver a keynote at the ISC Cloud’12 Conference in Mannheim, Germany. Wolfgang Gentzsch, general chair of the event and contributor to HPCintheCloud, interviewed CERN’s Bob Jones, ESA’s Wolfgang Lengert, and EMBL’s Rupert Lueck to talk about the new science cloud. Jones explained the main difference between the science cloud and CERN’s previous LHC Computing Grid.

“The [LHC Computing Grid] that has been essential to the LHC experiments’ work to observe a particle consistent with the long-sought Higgs boson consists of publicly-managed data centers,” he explained. “Helix Nebula is a public-private partnership.” In this case, research is being processed at commercial datacenters. When asked what benefits the cloud would provide, Lengert mentioned that the infrastructure would simplify access to data, tools, models and a collaboration platform. The European Space Agency’s ERS/Envisat missions have resulted in datasets containing over two decades of information about the land, oceans, atmosphere and cryosphere. Looking to the future, Jones predicts additional collaborators and adopters will hop on board. “Assuming the pilot phase is successful,” he said, “we expect Helix Nebula to grow to include more commercial cloud services providers and public organizations as consumers.”

*The original version of this article erroneously connected Helix Nebula to the Higgs boson discovery. Although the new cloud platform has demonstrated its ability to host applications that supported the LHC research work, the test runs were only used for the purpose of proof-of-concept. We regret the error — Editor
The solid-state drive (SSD) industry has an opportunity to address the issue of data loss and recovery from failed SSD devices relatively early in the market and product development cycle. The elimination of moving parts in SSDs should increase the mean time between failure when compared to hard disk drives (HDDs). However, still-maturing technology and unpredictable operating conditions are already resulting in SSD failure. A certain percentage of these failures will involve the loss of critical data and require data recovery services. The paradigm shift from magnetic to semiconductor-based storage requires the development of a completely new set of data recovery techniques. These techniques produce varying degrees of success and are expensive and time consuming to perform. In addition, certain implementations of SSD technologies can complicate the recovery process and adversely affect the ability to recover data. By choosing to take a proactive approach and assisting data recovery professionals, the SSD industry will help to ease public concern and increase data recovery success rates while also minimizing recovery costs and turn-around times. Cost Breakdown of Data Recovery Many factors impact the cost of data recovery from failed storage devices, including equipment, facilities, and human resource expenditures. However, research and development is the biggest contributor to the relatively high price of data recovery. HDDs and SSDs are incredibly sophisticated devices with multiple potential failure points. Each failure mode requires different techniques in order to recover the data stored on the device. The research and development time required to establish reliable and cost-effective recovery procedures for each specific drive and failure mode is substantial. This work is generally performed by experienced teams of electrical and mechanical engineers and computer scientists. 
Hundreds of new HDD and SSD models are released every year, and drive manufacturers are continuously pushing the envelope in terms of performance and capacity. As a result, successful data recovery organizations must invest enormous resources in research and development, sometimes spending hundreds of hours on the development of a single new technique. Taking the time in the R&D phase to develop efficient data recovery tools and techniques usually results in lower average data recovery costs to the consumer. More specifically, reducing the amount of time spent by an engineer or technician to perform the recovery reduces the cost of the recovery. Faster turn-around times also mean that the value of the data to the consumer is preserved. In most data recovery scenarios, there exists an inverse relationship between the value of the data and the time it takes to recover it. In other words, data is never more valuable than the instant it is lost. As potential sales are missed, payrolls come and go, and projected deadlines pass, the once-critical data becomes less important as it is naturally recreated. Therefore, for data recovery to make economic sense, the recovery process must be both quick and cost-effective. Most data recovery professionals agree that, except in cases where data cannot be recreated, there is a precipitous drop-off in the number of customers willing to pay for their lost data when recovery times exceed three weeks. Figure 1 depicts the delicate balance that exists in the data recovery industry between the value of the lost data to the consumer and the cost and turn-around time to perform the recovery.

Through a commitment to R&D, Gillware Inc. has been able to significantly reduce the turn-around time and total cost for a single HDD data recovery from the industry averages of $1500 and three weeks, respectively. For the fiscal year 2009, the average HDD data recovery at Gillware Inc. cost $694 and took six business days to complete, staying well within the recovery time window shown in Figure 1. Years of experience and well-defined techniques have stabilized the average cost and turn-around time for data recovery from HDDs. SSD recovery, on the other hand, is a discipline that is being developed as SSD technology grows. As a result, the cost and recovery time for SSDs can vary dramatically depending on the manufacturer and specific parameters of the device.

Solid-state storage technology represents an entirely new set of engineering problems for research teams at data recovery organizations. SSD manufacturers are pushing the technology envelope in order to increase drive storage capacities while attempting to improve device reliability. The result is a blistering pace of change, with frequent releases of new designs. Each new design represents new firmware, different wear-leveling algorithms and controllers, and revised PCB layouts. Staying ahead of the SSD recovery curve is a challenge, keeping in mind that the delicate balance between recovery turn-around times and cost must be maintained. Although SSD recovery techniques are progressing, they lag behind the streamlined and efficient procedures used to recover data from HDDs. As a result, the average SSD recovery at Gillware Inc. costs $2850 and takes approximately three weeks to perform. The data recovery cost discrepancy between SSDs and HDDs is shown in Figure 2. For data recovery from SSDs to be a viable option for the growing SSD market, the cost and turn-around time must be brought in line with those of HDDs.
Regardless of the reason for failure, whether manufacturer defect or abuse by the end user, the consumer expects the device manufacturer to provide a certain level of assistance in recovering the stored electronic data. For some cues on how these situations can be handled, one need look no further than the approaches implemented by hard disk drive OEMs and computer manufacturers. Each hard disk drive and computer manufacturer has a different approach to handling data recovery situations resulting from HDD failure. These approaches range from an apology and an offer to replace the device (if it's under warranty) to providing in-house data recovery services paid for by the customer. Although both are viable options, neither is popular with consumers, and both can prove to be public relations missteps when utilized. The most popular approach is to provide the customer with a short list of data recovery providers that have been vetted by the OEMs as capable and professional data recovery organizations. In exchange for being placed on the list, the data recovery providers commonly offer the customer a small discount, and the HDD manufacturers give the data recovery providers a small amount of technical assistance when necessary. By ensuring that customers at the very least have a positive data recovery experience, OEMs are able to lessen the potentially negative impact device failure can have on future sales.

HDD Data Recovery Process

The HDD data recovery process can be broken down into four phases: drive failure analysis, drive restoration, drive imaging, and data extraction. Although all four phases present unique challenges in maintaining process efficiency, drive failure diagnosis and restoration are where the majority of the engineering resources are required. HDDs have three primary failure modes: logical, electrical, and mechanical.
Mechanical failures are largely isolated to the read/write head assembly or the spindle motor, and are usually the result of mechanical fatigue or environmental abuse (i.e. the drive is knocked over while running). Electrical failures can be caused by numerous conditions, but most are the result of power surges or individual circuit component failure. Logical failure usually consists of corruption of the firmware area of the HDD, and can happen naturally or as the result of an upstream failure (such as an intermittent read/write head). Complicating the drive failure diagnosis process is the interdependence of the three failure categories. For example, it is not uncommon for a shorted control board to cause a read/write head failure, or for an intermittent read/write head to corrupt the firmware zone. As a result, the engineers tasked with diagnosing the failure mode of the device require a certain level of experience and intuition. These engineers are highly compensated individuals and account for a significant portion of the total cost of the recovery. After identifying the root cause of the drive failure, recovery technicians can perform the necessary repair work required to restore the drive to a functional state. A proper diagnosis of the failure mode results in lower engineering costs and faster turn-around times. The case study in the following section outlines the steps that were followed in data recovery from a failed 1 TB hard drive.

HDD Data Recovery Case Study

Failure Description: The drive no longer spins up. No unusual sounds are heard from the HDD.

Initial Failure Diagnosis: Starting with an investigation of the control board, our engineers noticed a distinct "burn" smell coming from the PCB. Further inspection identified four failed control board components. Figure 3 is a picture of the particular area of the control board that was electrically damaged.
Following inspection of the HDD control board, we performed an internal test of the read/write head assembly. A non-invasive electrical test of the HDD head assembly identified that at least one of the eight read/write heads was no longer functioning. At this point, we took the drive into Gillware's ISO 5 certified Class-100 cleanroom for further analysis. Figure 4 shows the physical condition of the read/write heads.

Final Failure Analysis: The drive has two failures preventing it from functioning properly. A power surge has taken out one or more components on the HDD control board. The sudden loss of power prevented the heads from parking properly; instead, they were stuck on the platter surface. This resulted in the bent read/write head shown in Figure 4.

Drive Restoration: Restoration of the hard drive begins with repair work on the failed electrical components on the control board. Following the control board repairs, the damaged read/write heads are carefully replaced. Figure 5 shows the open HDD chassis with new read/write heads installed. Read/write head compatibility is a major issue on most modern hard drives. It is quite rare to find a set of replacement heads that immediately works when transplanted from one HDD to another. Therefore, following head stack replacement, Gillware engineers use proprietary logical tools to restore the drive to an operational state. Prior to HDD imaging, we address areas of minor platter damage in order to prevent further damage during the imaging process.

Drive Imaging: Following drive restoration procedures, the HDD is moved to the drive imaging phase of the recovery process. Drive imaging involves making a direct byte-for-byte copy of the HDD for use in the logical processing and data extraction phase of the recovery. The amount of time required to image a drive can vary drastically depending on the specific details of the recovery case.
The 1 TB HDD in this case study, with a repaired control board and replaced read/write heads, took approximately 27 hours to image, completing with a 99.99% read of the available sectors on the HDD. The reduced drive performance is the result of adaptive deviation, a symptom caused by the non-native control board and read/write heads. It is not uncommon for HDDs to take multiple days to image if the damage to the magnetic media is severe. The image copy is provided to the logical engineers, who are tasked with recreating the file system and extracting user data. The HDD in this case study had only electrical and mechanical damage. This fact, coupled with a very good image copy of the drive, means that there is no logical corruption to the file structure. When this is the case, the data extraction process at Gillware is automated through the use of proprietary extraction tools. After the data is extracted, the customer verifies the recovered data via the Gillware File Viewer application, and the data is transferred to a new external transfer drive for delivery to the customer.

HDD Recovery Summary: The recovery of the 1 TB HDD in this case study was a grade-A recovery. All user data was recovered intact and fully functional. The total in-lab time for the recovery was approximately 6 business days. The total engineering time was 5 hours [0.75 hours for evaluation, 3 hours for drive restoration, 1.25 hours for logical processing and extraction]. The total cost for the recovery was $1000 [$875 for the recovery, $125 for a new 1 TB external transfer drive to ship the data back to the customer].

SSD Recovery Process

Solid-state devices share many of the same failure modes exhibited by HDDs. Since SSDs are a direct replacement for HDDs in most applications and are subject to many of the same stresses, some SSD failure modes are similar to those of HDDs. The most significant difference between the two technologies is that SSDs have no moving parts.
As a result, SSDs suffer no mechanical failures. Shared failure modes aside, the techniques and processes for recovering data from the two storage technologies differ greatly, and SSDs afford data recovery professionals opportunities not available with HDDs.

The Holy Grail of HDD data recovery is a device that can read HDD platters independent of the hard drive. Although this has been accomplished in laboratory environments with varying degrees of success, it is not a viable option for recovering the large amounts of data commonly stored on modern drives: the process is simply too slow, and requires too much user input, to make economic sense. Drive restoration is a more efficient and cost-effective approach. SSDs, on the other hand, store data in non-volatile memory chips that can easily be read independent of the device that originally wrote the data. This opens the possibility of an alternate recovery process for SSDs in which no repairs are necessary: image each memory chip individually, assemble the individual chip images into a single drive image, and extract the data.

The reconstruction of a single drive image is the most time-consuming and costly aspect of the SSD recovery process. With HDD data recovery, the end result of the drive imaging process is a single complete image, starting with sector zero and ending with the last sector on the HDD. Compare this to SSDs, where the output of the imaging phase is N individual chip images resulting from reading the N chips on the device (e.g., an SSD with 16 memory chips produces 16 individual chip images). These images must be reconstructed into a single device image before the data extraction process can proceed. The drive reconstruction phase is the most demanding stage of the process, as the method used to spread data across the memory chips varies from model to model.
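To make the reconstruction problem concrete, here is a minimal sketch that reassembles N chip images under an assumed simple page-level round-robin stripe (chip 0 page 0, chip 1 page 0, and so on). This interleave pattern is an illustrative assumption only; real controllers use far more complex, model-specific layouts, which is exactly why this phase is so demanding.

```python
# Sketch: merge per-chip images into one device image, assuming a
# hypothetical round-robin page interleave across the chips.
PAGE = 4096  # assumed flash page size

def reassemble(chip_images):
    pages_per_chip = len(chip_images[0]) // PAGE
    out = bytearray()
    for page in range(pages_per_chip):
        for chip in chip_images:  # one page from each chip, in order
            out += chip[page * PAGE:(page + 1) * PAGE]
    return bytes(out)
```

When the stripe pattern is unknown, as in the case studies below, this trivial merge is unavailable and the pattern itself must be inferred from file system structures.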
With no information about how the data is striped across the memory chips comprising the full array, the only option is to manually find key file structure indicators, then use those indicators to reassemble the data. Some of the drawbacks of this independent memory chip imaging approach are evident in the following three case studies. All three devices have slightly different hardware, software, and end-user implementations, and the impact of those differences on the data recovery process is illustrated below.

SSD Recovery Case Study 1:
Summary Result: Successful recovery
SSD Details: 128 GB SSD with 16 (8 GB) TSOP48 memory chips. No after-market or factory-direct encryption.
Failure Description: Computer does not recognize the SSD.
Initial Failure Diagnosis: The SSD appears pristine; there is no evidence of electrical damage, yet the device shows no response when connected to a host.
Final Failure Analysis: The drive is suffering from a logical failure, likely due to firmware corruption. At this time, no tools for firmware repair are available, so the only option for a timely, successful recovery is to read the contents of each memory chip and reconstruct the drive image.
Memory Chip Read: Each chip is removed from the SSD and its contents copied to a file on a PC. Figure 6 shows the reading of one of the 16 chips on this SSD.
Drive Image Reconstruction: Without knowledge of how the SSD keeps track of the storage spread across each memory chip, file system structures are used to reconstruct the disk image. Regardless of the file system used, a partition table will usually be found at the first logically-addressable sector. The customer reported the system was running Windows XP, so a Master Boot Record (MBR) should be found at sector 0. We search each of the 16 chip images for the signature of an MBR. After locating the MBR, we can proceed with mapping physical data locations to logical sectors.
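The MBR hunt just described can be sketched as a scan for the 0x55AA boot signature at sector boundaries in each chip image. The sector size and function name here are assumptions for illustration; note that the signature alone produces false positives (partition boot sectors end the same way), so a real pass also sanity-checks the partition-table entries before accepting a candidate.

```python
# Sketch: locate candidate MBR sectors in a raw chip image by the
# 0x55AA signature at bytes 510-511 of each 512-byte sector.
BOOT_SIG = b"\x55\xaa"
SECTOR = 512

def find_boot_signatures(chip_image):
    hits = []
    for off in range(0, len(chip_image) - SECTOR + 1, SECTOR):
        if chip_image[off + 510:off + 512] == BOOT_SIG:
            hits.append(off)
    return hits
```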
As suspected, there are two partitions on this SSD: a small FAT16 system-restore partition and a large NTFS partition. The MBR provides the location and size of each partition, and the chip images are searched for the corresponding boot sectors; each boot sector must be located at the logical start of the partition indicated by the MBR. The FAT16 file system places a lengthy list of mostly sequential values, called the FAT table, immediately following the boot sector. This table is particularly useful: because its entries are largely sequential, the first value of the next logical sector can be predicted, and the chip reads can be searched for that value to rebuild the table. From the physical locations of successive logical sectors, a pattern emerges showing how the data is organized across the reads, and that pattern can be used to build the rest of the disk image.

Data Extraction: Once the disk image is built, data extraction proceeds in the same manner as with an HDD.

Detailed Final Result: The recovery of the 128 GB SSD in this case study was a grade-A recovery. All of the user data was recovered intact and fully functional. The total in-lab time was approximately 2.5 weeks. The total engineering/machine time was 22 hours [2 hours for evaluation, 8 hours for de-soldering and reading memory chips, 12 hours for logical processing and data extraction], and the total cost was $3000.

SSD Recovery Case Study 2:
Summary Result: Unsuccessful recovery
SSD Details: 128 GB SSD with 16 (8 GB) TSOP48 memory chips. Aftermarket full-disk encryption had been implemented.
Detailed Final Result: We followed the same standard SSD recovery procedure outlined in case study 1. Images of all 16 individual memory chips were generated by desoldering the chips and reading them on a TSOP48 fixture. Following chip imaging, we began the image reconstruction process.
As in case study 1, we attempted to identify common file structure indicators in order to determine the striping of the data across the 16 images. However, the drive in this case study came from a large enterprise customer that implements full-disk encryption on all company computers. As a result, no file structure indicators could be identified and the data reconstruction procedure failed. No data could be recovered in a timeframe acceptable to the customer.

Additional Comments: Full-disk encryption is becoming increasingly common among Gillware's enterprise client base. Customers often ask for Gillware's assistance when sourcing encryption products in order to avoid data recovery complications in the future. Most encryption vendors provide tools for decrypting a drive image with the proper credentials, which Gillware uses in the recovery process; the merits of these tools often influence Gillware's advice to clients. Unfortunately, these tools only succeed when given a complete, correct image copy. With no way to reassemble the individual memory chip images into a single disk image, Gillware technicians are unable to recover data.

Potential Future Solution: Gillware hopes to partner with SSD manufacturers in order to help enterprise customers recover data in situations where full-disk encryption is utilized. Formal partnerships will allow for the protected sharing of sensitive and proprietary technical information about each SSD. With detailed knowledge of the device's Flash Translation Layer, firmware, controller, and ECC implementation, Gillware technicians will no longer need to rely on file system structures and will be able to successfully recover data from SSDs with full-disk encryption. The end result of these partnerships will be higher SSD recovery success rates and the preservation of relationships with key enterprise customers, for SSD manufacturers and Gillware alike.
SSD Recovery Case Study 3:
Summary Result: Unsuccessful recovery
SSD Details: 128 GB SSD with 16 (8 GB) TSOP48 memory chips. Full-disk hardware-level encryption of the data stored on the memory chips.
Detailed Final Result: Similar to case study 2, no file structure indicators were discernible, as a result of the data being encrypted by the SSD device itself. With no knowledge of the Flash Translation Layer (FTL) or the manner in which the encryption was performed, Gillware technicians were unable to recover any data.

Potential Future Solution: Offering storage devices with hardware-level encryption can be a powerful marketing tool, especially when looking to land lucrative enterprise contracts where encryption is an absolute requirement. Gillware encounters two issues when dealing with storage devices implementing full-disk encryption. First, many end users are unaware of the encryption and are frustrated when they discover the negative impact encryption technology can have on the recovery process. Second, without vendor support, Gillware has no means of decrypting the data. Successful recovery of data from SSDs with hardware-level encryption requires both an understanding of the FTL and the means to decrypt the drive image. Depending on how encryption keys are generated and stored, Gillware suggests maintaining a protected database of keys for use in extreme cases where the key cannot be directly obtained from the failed device. As an alternative, SSD manufacturers may choose to provide emulation and decryption software tools to data recovery partners, similar to those provided by software-based encryption vendors. Critical points for future discussion include the formation of partnerships between SSD manufacturers and data recovery providers that protect intellectual property while still allowing data recovery to be performed successfully.
The Future of SSD Recovery

Gillware currently has two primary organizational objectives focused on the issues surrounding SSD recovery. The first is the development of reliable data recovery tools and techniques that allow our engineers to recover data from failed SSD devices. The second is to eliminate the large discrepancy in cost and turnaround time that exists between HDD and SSD recovery. Both objectives are closely linked and must be accomplished in unison; the ability to recover data from failed devices is inconsequential if the recovery cannot be performed in a cost-effective and efficient manner. The R&D team at Gillware Inc. is working hard to meet both objectives and become the industry leader in SSD recovery. However, as the SSD case studies point out, improvements are required to decrease the engineering time necessary to perform recoveries and to improve success rates. A reduction in engineering time translates directly into lower overall recovery costs, while improved success rates help bolster customer satisfaction.

Solid-state storage devices offer many advantages over HDDs from a data recovery standpoint. The most significant of these is the ability to read data from individual memory chips independent of the host device. This recovery technique will eventually lead to better overall success rates and potentially lower data recovery costs than HDD recovery. However, certain obstacles must first be overcome before these benefits become a reality. The reading of the individual memory chips is an essential but inefficient step in the SSD recovery process. Most SSD memory chips come in a TSOP48 package, which requires a specialized fixture (Figure 6) in order to be read with a commercial device programmer. These fixtures are quite effective when used with new, pristine devices.
However, ensuring proper electrical contact with the pins of a device desoldered from a PCB is a constant struggle. Pins can easily be bent if the IC is not removed carefully, and any residual bits of solder can affect the delicate alignment of the pin-contact fingers. It remains to be seen whether this situation will improve with the transition to ball grid array packages. Gillware engineers are currently working on a solution that will streamline the memory chip imaging process, significantly reducing turnaround times.

It is also possible that SSD manufacturers could reduce the need for removing and individually reading the memory chips by implementing technology already common on HDDs. Every HDD has a vendor-specific mechanism for manipulating device firmware over the ATA interface, and sometimes through other means such as an undocumented RS-232 or JTAG connection. Gillware engineers speculate that SSDs might have a similar mechanism. If so, SSDs with certain firmware corruptions could potentially be repaired and the device restored to a functional state. Even if the firmware could not be repaired, the ability to obtain the raw contents of the memory chips over such an interface would be incredibly valuable. In either situation, the need to desolder and individually read each memory chip is eliminated.

Whether dumping the drive image directly from the SSD or reconstructing it manually, an understanding of how the controller maintains the Flash Translation Layer (FTL) is critical. This logical-sector to physical-address mapping is the heart of any wear-leveling implementation and is currently the biggest hurdle Gillware faces in SSD recovery. In SSD case study 1, Gillware engineers were able to discern enough about the FTL by looking at the physical locations of critical file system structures known to reside at given logical sectors.
While it yielded a successful recovery, this method will not scale to the volume of business Gillware currently does with HDDs, and it fails outright in situations involving full-disk encryption. Without assistance from SSD manufacturers, data recovery providers will struggle to match the success rates established with HDD recoveries. There are seven ways SSD manufacturers can assist data recovery partners, helping to improve SSD recovery success rates and reduce costs:

1. Provide technical details of the FTL
2. Supply documentation of vendor-specific ATA commands for firmware and FTL manipulation
3. Allow access to the appropriate cipher in the presence of hardware-level encryption
4. Provide information about the ECC implementation
5. Grant access to data sheets for SSD controllers and non-volatile memory
6. Supply controller emulation tools
7. Co-develop with Gillware engineers improved systems for obtaining memory chip reads

The cost and turnaround times associated with SSD recovery will improve over the years to come. As SSD technology, standards, and designs stabilize, so will the tools and techniques required to recover data from failed devices. How quickly this happens will depend largely on the level of cooperation the SSD industry provides to key data recovery partners. For these partnerships to succeed, both parties will need to work together to guarantee that sensitive proprietary information is protected.

Current SSD recovery techniques are reactive, developed primarily as failed devices arrive in the lab. This approach is effective, but expensive and time-consuming. Through a collaborative effort between the solid-state and data recovery industries, it is possible to predict and plan for the majority of SSD failures. The benefits of this proactive approach will be lower recovery costs, improved turnaround times, and better overall success rates.
Working closely with data recovery professionals has the added benefit of in-the-trenches failure analysis that can be relayed back to device manufacturers. This information can be used by reliability and design engineering groups to improve device reliability, preventing future failures and moving solid-state technology forward.
Wi-Fi and WiMAX will coexist and become increasingly complementary technologies for their respective applications. Wi-Fi technology was designed and optimized for Local Area Networks (LAN), whereas WiMAX was designed and optimized for Metropolitan Area Networks (MAN). WiMAX typically is not thought of as a replacement for Wi-Fi. Rather, WiMAX complements Wi-Fi by extending its reach and providing a "Wi-Fi like" user experience on a larger geographical scale. In the near future, it is expected that both 802.16 and 802.11 will be widely available in end-user devices from laptops to PDAs, as both will deliver wireless connectivity directly to the end user - at home, in the office and on the move.

The vast majority of viable spectrum in the United States simply goes unused, or else is grossly underutilized. The U.S. typically uses only about five percent of one of its most precious resources. Unlike other natural resources, there is no benefit to allowing this spectrum to lie fallow. The airwaves can provide huge economic and social gains if used more efficiently, as seen today with the relatively tiny slices utilized by mobile phones and Wi-Fi services. The unique qualities of the TV white space (unused spectrum, large amounts of bandwidth, and excellent propagation characteristics: the signals travel long distances and can penetrate thick walls) offer an opportunity to provide ubiquitous wireless broadband access to all Americans. In particular, this spectrum can provide robust infrastructure to serve the needs of underserved rural areas, as well as first responders and others in the public safety community. Moreover, use of this spectrum will enable much-needed competition with the incumbent broadband service providers. This is a large amount of untapped spectrum, and you've got people in Silicon Valley and lots of smart entrepreneurs just itching to find ways to use it.
But broadcasters insist that use of these white spaces for broadband service will result in reduced-quality digital TV viewing. Unlike on traditional analog TV, where interference causes static or fuzziness, digital pictures can freeze or be lost entirely if another signal is broadcast on or near the same channel. The Wireless Innovation Alliance, which includes Google, Microsoft, HP and Dell, thinks it is possible to produce a device that detects and avoids broadcast programming so it will not interfere with existing signals. Such technology is already being used by the U.S. military. FCC officials are intrigued by the possibilities and are testing sample devices to see if they can sense and avoid TV signals, but the results have been mixed. Broadcasters are skeptical, and the makers of wireless microphones for sporting events, concerts and churches, which also use this unlicensed spectrum, say the technology could put their productions at risk. They support auctioning off those fallow airwaves and making them licensed in order to protect against interference. There are many ways to safely and reliably protect digital TVs and wireless microphones, not all of which require spectrum sensing. Last fall, Motorola submitted a proposal that relies on a combination of geo-location (to protect broadcast TV) and beacons (to protect wireless microphones). Google believes both concepts, along with a safe-harbor approach, should be seriously considered for incorporation into the FCC's service requirements for the spectrum.
As any data center operator will tell you, data centers use large amounts of power. In fact, one data center can use enough energy to power 180,000 homes. What with the costs and the eco-issues of fossil fuels, there's a race on to find better ways of powering these cathedrals to digital life. Many heavy data center users are looking to place their centers near sources of renewable power, for example. Facebook has opened one in Sweden that's near a hydro-electric plant. Solar is also pretty good, and wind-power turbines are another alternative power source attracting knee-jerk exuberance, despite disadvantages like uneven supply. All of these options have drawbacks, though. It may be that we need to take a step back in time for fresh answers.

An underused power-generating technology called pumped storage, implemented haphazardly in the 1960s, is attracting interest from energy pundits. It can now be used with solar, and data centers should take note. Solar systems obviously only create power during the daytime, but power use also occurs at night, so a solar system needs some kind of storage mechanism for the daytime-created power. Batteries are commonly used in current systems. However, batteries aren't very efficient, and they're heavy to transport, expensive to make, and not particularly tree-hugger-friendly. They also degrade over time and consume building space, among other problems. Some say we need an alternative to batteries for solar.

An aging power plant in New Jersey might hold the answer. Prior to the days of solar, back in 1965, the innovative Yards Creek Pumped Storage Electric Generating Station opened in Blairstown, New Jersey. The idea behind its revolutionary technology was simple: two lakes are separated by a vertical distance, and when power is cheap during the day, water from the lower lake is pumped uphill to the top reservoir.
At night, when power is expensive and in more demand from the community, the water flows back downhill through electricity-generating turbines to make cheap power. This nighttime "harvesting," as Stephanie Matteson called it in an EnergyCollective article about solar storage issues and pumped storage, lets power plants be more evenly loaded. Even though pumping the water uphill consumes more power than the turbines generate on the way down, time-of-day differences in energy rates create savings. And although it wasn't an important issue at the time, reusing the same water each day helps with eco-concerns.

Fast forward a few generations and the idea conceivably could be perfect for solar, too. Solar systems, just like the western New Jersey grid and other conurbations' power systems, need more even loading. One scenario for the solar storage tech would be to have solar plants inject excess power into a newly built, pumped-hydroelectric-storage-enabled grid during the day for water pumping, then tap the needed electricity at night for an associated data center. Another would be to send solar-created electricity straight to the water pumps, bypassing the grid altogether.

Why solar for data centers

Solar, overall, is a good energy solution for data centers for a few reasons. Cost is stable, whereas fossil fuel costs are variable and unknown; solar is self-contained and self-managed, so it is not susceptible to spikes and brownouts; it provides secured capacity; and as an added benefit, it's a not-inconsiderable public relations tool. Both Apple and Google find it hard to stop plugging their renewable energy sources, for example. Apple says its data centers are 100% powered by renewable energy.

Germany is now using solar to power pumped storage, says Matteson. Low-cost solar is used in the afternoon to prime upper reservoirs. Already, the pumped storage power plants operating in Germany have a combined output of about 7 gigawatts.
For comparison, total nuclear power output in the U.S. is 98 gigawatts. Add the likelihood that, should this take off, you'll be able to picnic, hike, view wildlife, and maybe even swim alongside the source of your future digital energy; if nothing else, just think of the rhetoric Apple, Google, et al. will be able to deploy. This article is published as part of the IDG Contributor Network.
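The physics behind pumped storage is just gravitational potential energy: the energy banked by pumping water uphill is mass times gravity times height, derated by the plant's round-trip efficiency. The sketch below uses an illustrative 75% round-trip efficiency and made-up reservoir numbers, not Yards Creek's actual specifications.

```python
# Back-of-the-envelope pumped-storage energy calculation.
RHO_WATER = 1000.0  # density of water, kg per cubic metre
G = 9.81            # gravitational acceleration, m/s^2

def stored_energy_mwh(volume_m3, head_m, round_trip_eff=0.75):
    joules = RHO_WATER * volume_m3 * G * head_m * round_trip_eff
    return joules / 3.6e9  # joules per megawatt-hour

# e.g. one million cubic metres lifted 200 m banks roughly 409 MWh
print(stored_energy_mwh(1e6, 200))
```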
A new threat called the Heartbleed Bug has just been reported by researchers at Codenomicon and Google. Heartbleed attacks the heartbeat extension (RFC 6520) implemented in OpenSSL. The official reference for the Heartbleed bug is CVE-2014-0160. Heartbleed allows an attacker to read the memory of a system over the Internet and compromise private keys, names, passwords and content. An attack is not logged and would not be detectable, and it can run from client to server or from server to client. Heartbleed is not a flaw in the SSL/TLS protocol specification, nor is it a flaw in the certificate authority (CA) or certificate management system; it is an implementation bug.

The bug impacts OpenSSL versions 1.0.1 through 1.0.1f. The fix is in OpenSSL version 1.0.1g. The 0.9.8 and 1.0.0 version lines are not impacted. OpenSSL 1.0.1 was introduced in March 2012, so the vulnerability is two years old. The impacted systems are widespread. OpenSSL is used in Apache and NGINX, which Netcraft reports account for 66 percent of the web server market. OpenSSL is also used in operating systems such as Debian Wheezy, Ubuntu 12.04.4 LTS, CentOS 6.5, Fedora 18, OpenBSD 5.3 and 5.4, FreeBSD 8.4 and 9.1, NetBSD 5.0.2 and openSUSE 12.2.

If you are using an impacted version of OpenSSL, you need to consider the following:
- Upgrade your system to a software version that uses OpenSSL 1.0.1g or higher. You may have to wait until your software vendor publishes a new release.
- Renew your SSL certificates with a new private key.
- Ask your users to change their passwords.
- As content may have been compromised, consider whether you need to notify users.

Updated April 9, 2014: Qualys SSL Labs has added a Heartbleed test to their SSL Server Test.
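The affected-version rule above can be captured in a small triage helper. This is a sketch with a made-up function name, and it checks the reported version string only; a definitive test probes the TLS heartbeat itself, as the Qualys SSL Labs test does, since vendors sometimes backport fixes without bumping the version.

```python
# Version-string triage per the advisory: OpenSSL 1.0.1 through
# 1.0.1f is vulnerable, 1.0.1g is fixed, and the 0.9.8 / 1.0.0
# lines never shipped the heartbeat bug.
def is_heartbleed_vulnerable(version):
    base = "1.0.1"
    if not version.startswith(base):
        return False  # 0.9.8, 1.0.0, and later lines are unaffected
    suffix = version[len(base):]
    return suffix == "" or ("a" <= suffix <= "f")
```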
The MD5 algorithm has a new vulnerability: Google! Here's a piece of news that will worry anyone interested in security (which should be pretty much everyone who reads Network World): A programmer by the name of Juuso Salonen has created a Ruby script called BozoCrack that cracks MD5-hashed passwords with remarkable success and with very little effort.

Before we go any further, let's have a little background: Computer systems need a mechanism to authenticate users and processes so that the good guys can get in to do work and the bad guys are locked out. The most common authentication method is to use a name and a password, but if you were to store the password in plaintext on the system you would run the risk that passwords could be exposed. A common solution is to not store the password at all but instead keep something called an MD5 hash of the password.

MD5 is the fifth version of the Message-Digest cryptographic hash "function" created by the renowned computer scientist Ron Rivest. An algorithm implementing MD5 takes in strings and outputs 128-bit hash values that have several interesting attributes: any input string, of arbitrary length and character set, can be hashed in a reasonable amount of time, yet it is computationally impossible in practical terms (unless you have years and access to a supercomputer) to generate a string with a specific hash value, to make a change in a string without changing the hash value, or to find two different strings with the same hash value.

So, if you take a string such as "mysecretpassword" and run it through an MD5 implementation you get the hash value "4cab2a2db6a3c31b01d804def28276e6." Change a single character and the hash value will also change, and do so unpredictably. With 128 bits you have 3.4 x 10^38, or around 340 undecillion, possible hash values.
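The example above is easy to reproduce; here it is with Python's standard-library hashlib, along with the avalanche effect of changing a single character:

```python
# Hash the example password from the text and show that a one-character
# change produces a completely unrelated digest.
import hashlib

digest = hashlib.md5(b"mysecretpassword").hexdigest()
print(digest)  # 4cab2a2db6a3c31b01d804def28276e6

altered = hashlib.md5(b"mysecretpassworD").hexdigest()
print(altered != digest)  # True: one changed character, unrelated hash
```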
As the relationship of input string to hash value (and vice versa) isn't predictable, you have what is called a "one-way" function; you can go from string to hash value but not from hash value to string. In practice, when a user logs in to a computer, the password's MD5 hash value is calculated on the fly, the account name is looked up in a database, and the saved and calculated hash values are compared. Only if the values match is the user allowed access. Storing the account name and its password hash value together on a computer system is obviously far more secure than saving the account name with a plaintext password, and this is the basis of user authentication checking for many operating systems and applications.

The problem with the MD5 function is that it has been shown to be "breakable" through several types of sophisticated attacks. But as these attacks are technically very complicated to perform, MD5 hashes are still widely used. Alas, the BozoCrack algorithm adds a whole new dimension of vulnerability to MD5, as Salonen commented: "BozoCrack is a depressingly effective MD5 password hash cracker with almost zero CPU/GPU load."

How does BozoCrack do its voodoo? The author explains: "Instead of rainbow tables, dictionaries, or brute force, BozoCrack simply finds the plaintext password. Specifically, it googles the MD5 hash and hopes the plaintext appears somewhere on the first page of results. / It works way better than it ever should." Why did he create it? "To show just how bad an idea it is to use plain MD5 as a password hashing mechanism. Honestly, if the passwords can be cracked with this software, there are no excuses." Thus, once again, does the power of Google make fools of us all.

Gibbs is secure in Ventura, Calif. Settle your hash at firstname.lastname@example.org.
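Salonen's point is that plain, unsalted MD5 is the real sin here. The standard countermeasure (my addition, not from the column) is a per-user random salt: two users with the same password then store different digests, and a search-engine lookup of the bare hash finds nothing. Real systems go further still and use a deliberately slow key-derivation function such as bcrypt or PBKDF2.

```python
# Sketch of salted password hashing; salted_hash is an illustrative
# helper name, not a real library API.
import hashlib
import os

def salted_hash(password, salt=None):
    salt = os.urandom(16) if salt is None else salt  # fresh random salt per user
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest
```

To verify a login, the stored salt is reapplied to the submitted password and the digests compared, exactly as in the unsalted scheme.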
Next week, I suspect some of the kids on my block will be playing with radio-controlled cars or planes. If you are not familiar with these, here's a [video on BoingBoing] that shows Carl Rankin's flying machines that he made out of household materials. Which brings me to the thought of scalability. For the most part, the physics involved with cars, planes, trains or sailboats apply at the toy-size level as well as the real-world level. One human operator can drive/manage/sail one vehicle. While I have seen a chess master play seven opponents on seven chess boards concurrently, it would be difficult for a single person to fly seven radio-controlled airplanes at the same time.

How can this concept be extended to IT administrators in the data center? They have to deal with hundreds of applications running on thousands of distributed servers. In a whitepaper titled [Single System Image (SSI)], the three authors write:

A single system image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource.

IBM has some offerings that can help towards this goal.

- Server clusters

Even in the case where your vehicle is being pulled by eight horses--(or eight reindeer?)--a single operator can manage it, holding the reins in both hands. In the same manner, IBM has spent a lot of investment and research into supercomputers, where hundreds of individual servers all work together towards a common task. The operator submits a math problem, for example, and the single system image takes care of the rest, dividing the work up into smaller chunks that are executed on each machine. When done with IBM mainframes, it is called a Parallel Sysplex.
The world's largest business workloads are processed by mainframes, and connecting several together to work in concert makes this possible. In this case, the tasks are typically just single transactions, so there is no need to divide them up further, just balance the workload across the various machines, with shared access to a common database and storage infrastructure so they can all do the work equally. Last August, in my post [Fundamental Changes for Green Data Centers], I mentioned that IBM consolidated 3900 Intel-based servers onto 33 mainframes. This not only saves lots of electricity, but makes it much easier for the IT administrators to manage the environment.

- Storage virtualization

Parallel Sysplex configurations often require thousands of disk volumes, which would have been quite a headache to deal with individually. With DFSMS, IBM was able to create "storage groups" where a few groups held all the data. You might have reasons to separate some data from the rest, so you put it in separate groups. An IT administrator can handle a handful of storage groups much more easily than thousands of disk volumes. As a business grows, there is more data in each storage group, but the number of storage groups remains flat, so an IT administrator can manage the growth easily.

IBM System Storage SAN Volume Controller (SVC) is able to accomplish this for other distributed systems. All of the physical disk space assigned to an SVC cluster is placed into a handful of "managed disk groups". As the system grows in capacity, more space is added to each managed disk group, and a few IT administrators can continue to manage this easily.

The new IBM System Storage Virtual File Manager (VFM) is able to aggregate file systems into one global namespace, again simplifying heterogeneous resources into a single system image. End users have a single drive letter or mount point to deal with, rather than many to connect to all the disparate systems.
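The storage-group idea fits in a few lines of code. The sketch below is a toy round-robin assignment, not DFSMS's actual ACS routines, but it shows why a handful of groups is easier to reason about than thousands of volumes:

```python
def assign_to_groups(volumes, group_names):
    """Spread many volumes across a handful of named storage groups,
    so the administrator manages groups rather than individual volumes."""
    groups = {name: [] for name in group_names}
    names = list(group_names)
    for i, volume in enumerate(volumes):
        groups[names[i % len(names)]].append(volume)
    return groups

volumes = ["VOL%04d" % i for i in range(2000)]
groups = assign_to_groups(volumes, ["PROD", "TEST", "ARCHIVE"])
# 2000 volumes, but only 3 objects for the administrator to think about:
print(len(groups), sum(len(v) for v in groups.values()))  # 3 2000
```

As the business grows, volumes are added but the group names stay fixed, which is the "number of storage groups remains flat" point above.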
- Centralized Administration

Lastly we get to the actual management aspect of it all. Wouldn't it be nice if your entire data center could be managed by a hand-held device with two joysticks and a couple of buttons? We're not quite there yet, but last October we announced the [IBM System Storage Productivity Center (SSPC)]. This is a master console that has a variety of software pre-installed to manage your IBM and non-IBM storage hardware, including SAN fabric gear, disk arrays and even tape libraries. It lets the storage admin see the entire data center as a single system image, displaying the topology in a graphical view that can be drilled down into, using semantic zooming, to look at or manage a particular device or component.

Customers are growing their storage capacity on average 60 percent per year. They could do this by having more and more things to deal with, and gripe about the complexity, or they can try to grow their single system image bigger, with interfaces and technologies that allow the existing IT staff to manage the growth.

technorati tags: Winter solstice, Golden Compass, Richard Dawkins, radio-controlled, cars, planes, trains, sailboats, automobiles, IBM, mainframe, system z, parallel sysplex, single system image, DFSMS, SAN Volume Controller, SVC, Virtual File Manager, VFM, System Storage, Productivity Center, SSPC, master console, SAN, fabric, gear, disk, tape, libraries, data center, topology, semantic zooming
https://www.ibm.com/developerworks/community/blogs/InsideSystemStorage/entry/planes_trains_and_automobiles?lang=en
Keyloggers: The Most Dangerous Security Risk in Your Enterprise

How keyloggers work and spread, why anti-virus applications won't stop them, and how you can protect your enterprise.

By George Waller

Keyloggers are on the rise, and even the most security-conscious organizations are no match for them. Just look at some of the names done in by a tiny chunk of code in the last 12 months: RSA, Lockheed Martin, Epsilon, the Oak Ridge nuclear weapons lab, Sony, the Iranian nuclear program and LinkedIn, to name just a few. Keyloggers have been around for a long time, but today they may be the most dangerous threat an enterprise faces.

What is a Keylogger?

A keylogger is a piece of malicious software, usually called "spyware" or "malware," that records every keystroke you make on a keyboard. Keyloggers can be installed without your knowledge or consent when you visit a Web site or read an e-mail, install a program, or perform other activities. Once installed, the keylogger records all your keystrokes, and then e-mails the information and other data to the computer hacker.

How Keyloggers are Constructed

The main idea behind keyloggers is to get in between any two links in the chain of events between when a key is pressed and when information about that keystroke is displayed on the monitor. This can be achieved by:
- video surveillance
- a hardware bug in the keyboard, wiring or the computer itself
- intercepting input/output
- substituting the keyboard driver
- using a filter driver in the keyboard stack
- intercepting kernel functions by any means possible (substituting addresses in system tables, splicing function code, etc.)
- intercepting DLL functions in user mode
- requesting information from the keyboard using standard documented methods

Keyloggers can be divided into two categories: keylogging devices and keylogging software. Keyloggers that fall into the first category are usually small devices that can be fixed to the keyboard or placed within a cable or the computer itself.
The keylogging software category is made up of dedicated programs designed to track and log keystrokes. The most common methods used to construct keylogging software are:
- A system hook that intercepts notification that a key has been pressed (installed using the WinAPI SetWindowsHook for messages sent by the window procedure). This hook is most often written in C.
- A cyclical keyboard information request (using the WinAPI Get(Async)KeyState or GetKeyboardState). This software is most often written in Visual Basic, sometimes in Borland Delphi.
- Using a filter driver. This requires specialized knowledge and is typically written in C.

Recently, keyloggers that disguise their files to keep them from being found manually or by an antivirus program have become more numerous. These stealth techniques are called rootkit technologies. There are two main rootkit technologies used by keyloggers: masking in user mode and masking in kernel mode.

How Keyloggers Spread

Keyloggers spread in much the same way that other malicious programs do. Keyloggers are often hidden inside what appear to be legitimate applications, graphics, music files, or downloaded pictures. Identity thieves and hackers get you to unwittingly download their malicious software through an e-mail or instant message that "makes sense." The world-renowned Australian Computer Emergency Response Team (AusCERT) has published a report showing that 80 percent of all keyloggers are not detectable by anti-virus software, anti-spyware software, or firewalls. Identity thieves have also been known to portray themselves as kids on popular teen sites and share infected files.
Listed below are just some of the creative ways in which identity thieves have been known to distribute their keyloggers:
- MP3 music files
- E-mail attachments
- Clicking on deceptive pop-ups
- P2P networks
- AVI files (i.e., "YouTube" or other videos)
- A legitimate Web site link, picture, or story that was compromised
- Downloaded games or any other PC tools or programs
- Fake malicious Web sites that impersonate popular sites (sites such as Google, eBay, Amazon, Yahoo, banks) or anti-virus programs

Why Your Anti-Virus Program Doesn't Stop Keyloggers

Anti-virus programs are reactive. They can only detect and stop "known," already "catalogued" viruses; they cannot protect you against a brand new virus that has just been written. Most anti-virus software requires a frequently updated database of threats. As new virus programs are released, anti-virus developers discover and evaluate them, making "signatures" or "definitions" that allow their software to detect and remove the virus. This update process can take anywhere from several months up to a full year for your anti-virus manufacturer to build a "fix" for a single virus. It is estimated that millions of new viruses are currently introduced on the Internet every month. It is an impossible task to immediately identify a new virus and protect against it. Many recent lab tests have shown that anti-virus software is only about 25 percent effective in stopping keyloggers.

How to Keep Confidential Information Safe from Keyloggers

There are a few ways that enterprises can protect themselves. One way is to prevent employees from installing downloaded software. Obviously, this isn't always practical. Regardless, some level of employee training is always helpful. Teaching employees about malware and keyloggers may prevent some level of identity theft, espionage, or data breach, but it's hardly foolproof. There's a clicker in every crowd.
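The "reactive" nature of signature scanning is easy to see in miniature. The sketch below (hypothetical sample names, not a real anti-virus engine) matches file hashes against a catalogue; any sample not already catalogued sails straight through:

```python
import hashlib

# A signature database only ever contains samples that were already
# discovered, evaluated, and catalogued (hypothetical examples).
known_signatures = {
    hashlib.sha256(b"catalogued_sample_1").hexdigest(),
    hashlib.sha256(b"catalogued_sample_2").hexdigest(),
}

def is_detected(sample_bytes):
    # Signature matching: only exact, previously seen samples match.
    return hashlib.sha256(sample_bytes).hexdigest() in known_signatures

print(is_detected(b"catalogued_sample_1"))  # True: already in the database
print(is_detected(b"brand_new_variant"))    # False: invisible until catalogued
```

Even a one-byte change to a known sample produces a different hash, which is why freshly written or repacked keyloggers evade this kind of detection until a new signature ships.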
Filtering and detection is pointless because hackers will always find ways to avoid detection, so the focus should be on how to keep your data from getting to the cybercriminals. You can set egress filters to prevent the data from being "sent back" to hackers, but these techniques have also been easily avoided by the bad guys. Encryption has always been considered the most secure way to protect data, and that holds true here. The most successful way to protect your keystrokes is by installing "anti-keylogging keystroke encryption software" in addition to your existing anti-virus software. Keystroke encryption secures everything you type, in real time, at the point of origin (when you type on the keyboard), making your keystrokes invisible to any undetected keyloggers that are hiding on your computer.

George Waller is the EVP and co-founder of StrikeForce Technologies, Inc., the creator and key patent holder for two-factor, out-of-band authentication as well as an anti-keylogging keystroke encryption technology (patent pending). Their software protects over four million individuals and businesses in over 100 countries from identity theft and data breaches. You can contact the author at email@example.com.
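The point-of-origin encryption idea described above can be illustrated with a toy XOR keystream. This is a sketch only: commercial products do this inside the keyboard driver stack, and a real design would use an authenticated cipher rather than this demonstration PRF.

```python
import hashlib
import hmac

def keystream(key, length):
    # Derive a pseudo-random byte stream from a shared key (toy CTR-style PRF).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_keystrokes(key, typed):
    # XOR each keystroke with the keystream at the point of origin, so a
    # keylogger sitting further downstream captures only ciphertext.
    data = typed.encode()
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def decrypt_keystrokes(key, ciphertext):
    # XOR with the same keystream reverses the operation at the trusted end.
    plain = bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
    return plain.decode()

key = b"per-session shared secret"
captured = encrypt_keystrokes(key, "hunter2")
print(captured != b"hunter2")             # True: an eavesdropper sees gibberish
print(decrypt_keystrokes(key, captured))  # hunter2
```

The keylogger still records bytes, but without the session key those bytes reveal nothing about what was typed.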
https://esj.com/articles/2012/11/12/keylogger-security-risk.aspx
They are not the same thing. Unfortunately, too many industry people conflate them. Worse, resolution and compression can silently undermine each other.

Compare the two images below. The resolution (aka pixel count) is exactly the same, but the compression levels (and bandwidth) are much different. Because of the compression difference, even though everything else is the same, the visual quality is much different.

Resolution vs Compression

Resolution, in surveillance, means the number of pixels (1MP, 2MP, 5MP, etc.). See: IPVM's Resolution Tutorial.

Compression, in surveillance, means how much the pixels / video are compressed. There is a scale from minimum to maximum. In H.264, it is called quantization, ranging from 0 (least) to 51 (most). See: IPVM's Video Quality / Compression Tutorial.

All IP cameras compress video, typically in the middle of that scale. IPVM testing shows 28 is average, though manufacturers vary somewhat. Moreover, manufacturers generally do not reveal the actual quantization level, displaying their own scale instead. To learn more, see: IP Camera Manufacturer Compression Comparison.

Making Lower Resolution Look Like Higher Resolution

In this test, we took a series of 720p IP cameras and decreased their compression levels to see how far we had to go to make them 'look' like a 1080p camera with default compression levels. We then compared the bandwidth consumption of each camera.

Making Higher Resolution Look Like Lower Resolution

In this test, we also took a series of 5MP and 1080p cameras and increased their compression levels to see how far we had to go to make them 'look' like lower resolution cameras (1080p / 720p respectively) with default compression levels.

Our goal is to understand and show you:
- How much can compression impact visual quality?
- What benefits are there to reducing compression levels?
- What benefits are there to increasing compression levels?
- Can you get better quality or bandwidth consumption from such changes?
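The quality-versus-bandwidth trade behind the quantization scale can be sketched with a toy scalar quantizer. (This is an illustration of the principle only, not the actual H.264 transform pipeline, which quantizes transform coefficients rather than raw pixels.)

```python
def quantize(values, qp):
    """Scalar quantization: a larger step (qp) collapses more nearby values
    together, which compresses better but throws away more detail."""
    return [round(v / qp) * qp for v in values]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]  # one row of sample luma values
light = quantize(pixels, 2)    # low compression: values barely change
heavy = quantize(pixels, 16)   # high compression: many values merge
# Fewer distinct values means fewer bits are needed to encode the row,
# at the cost of visible banding / blockiness:
print(len(set(light)), len(set(heavy)))
```

The same pixel count goes in either way; only the surviving detail differs, which is why two streams at identical resolution can look so different.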
https://ipvm.com/reports/resolution-vs-compression-tested
According to the latest PandaLabs report, malware creation hit a new milestone. In 2013 alone, cyber-criminals created and distributed 20 percent of all malware that has ever existed, with a total of 30 million new malicious strains in circulation, an average of 82,000 per day. Although Trojans continued to be the most common security threat, the company’s anti-malware laboratory observed a wide variety of attacks, with a notable resurgence of ransomware (CryptoLocker being one of the nastiest examples). The proportion of infected computers around the world was 31.53 percent, very similar to the 2012 figure.

Besides offering an overview of the most significant events in the computer security field, the 2013 Annual Security Report also forecasts future trends for 2014. Much of 2014’s headlines will focus on the Internet of Things (IoT) and Android devices, which will continue to be exploited by attackers to steal users’ data and money. PandaLabs expects to see hundreds of thousands of new strains of Android-targeting malware in circulation. 2013 saw a large number of Android scams that used malicious ads in legitimate apps, and it has been estimated that last year alone cyber-criminals released more than two million new malware threats for Android.

Social media attacks also grabbed headlines. The number of account hijacking attempts rose spectacularly, affecting companies, celebrities and even politicians.

Looking at the types of malware that were created, PandaLabs identified Trojans as the top threat, accounting for 77.11 percent of all new malware. There was significant growth in the number of viruses in circulation, rising from 9.67 percent in 2012 to 13.30 percent in 2013. “This increase is mainly down to two particular virus families: Sality and Xpiro.
The first virus family has been around a long time, whereas the second one is more recent and capable of infecting executable files on 32-bit and 64-bit systems,” said Luis Corrons, technical director of PandaLabs. When it comes to the number of infections caused by each malware category, data gathered by Panda Security’s Collective Intelligence platform indicates that three out of every four malware infections were caused by Trojans (78.97 percent), followed by viruses (6.89 percent) and worms (5.83 percent). “It seems that cyber-criminals managed to infect more computers with Trojans in 2013 than in previous years. In 2011, Trojans accounted for 66 percent of all computer infections, whereas this percentage rose to 76 percent in 2012. This growing trend was confirmed in 2013,” said Corrons. Malware is a global plague, but some countries are affected more than others. The countries leading the list of most infections are China, Turkey and Ecuador, with 54.03, 42.15 and 40.35 percent of infected computers respectively. Nine of the ten least infected countries are in Europe with the only exception being Japan. The ranking is topped by Scandinavian countries: Sweden (20.28 percent of infected PCs), followed by Norway (21.13 percent), and Finland (21.22 percent).
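The report's headline numbers are easy to sanity-check. Note the 150 million total below is implied by the article's own figures, not stated in it:

```python
new_strains_2013 = 30_000_000  # new malicious strains created in 2013
per_day = new_strains_2013 / 365
print(round(per_day))          # 82192, i.e. the "82,000 per day" figure

share_of_all_malware_ever = 0.20  # 2013's stated share of all malware ever created
total_ever = new_strains_2013 / share_of_all_malware_ever
print(int(total_ever))            # 150000000 strains ever created, implied
```

So the "20 percent of all malware ever" claim and the per-day average are mutually consistent with the 30-million figure.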
https://www.helpnetsecurity.com/2014/03/19/20-of-all-malware-ever-created-appeared-in-2013/
Science and Technology Quiz - Questions & Answers

Are you looking for quiz questions and answers about science and technology? You've come to the right place. Check out this Science and Technology quiz and see how well you can answer the following questions on science, technology and electronics. Play this quiz to see how good you are at Science and Technology questions.

Science and Technology Quiz Questions

Here is the list of quiz questions and answers about Science and Technology.

Question: Which is a type of Electrically-Erasable Programmable Read-Only Memory?

Question: What is made from a variety of materials, such as carbon, and inhibits the flow of current?

Question: "FET" is a type of transistor. Its full name is ________ Effect Transistor.

Question: A given signal's second harmonic is twice the given signal's __________ frequency. Fill in the blank.

Question: Voltage is sometimes referred to as EMF, or Electromotive ________?

Question: When measuring the characteristics of a small-signal amplifier, say for a radio receiver, one might be concerned with its "Noise ________"?

Question: The average power (in watts) used by a 20 to 25 inch home color television is ________?
- Over 1000

Question: The most common format for a home video recorder is VHS. VHS stands for ________?
- Video Home System
- Very High Speed
- Video Horizontal Standard
- Voltage House Standard

Question: If the picture is stretched or distorted up and down like a fun house mirror, the circuit to adjust or repair is ________?

Question: The electromagnetic coils on the neck of the picture tube or tubes which pull the electron beam from side to side and up and down are called a ________?

Question: The input used by an antenna or cable to a TV set uses ________ frequencies?

Question: The transformer that develops the high voltage in a home television is commonly called a ________?
- Tesla coil
- Van de Graaf

Question: Most modern TVs draw power even if turned off. The circuit the power is used in does what function?
- Remote Control
- Color Balance
- High Voltage

Question: In a color television set using a picture tube, a high voltage is used to accelerate electron beams to light the screen. What is that voltage?
- 500 Volts
- 5 Thousand Volts
- 25 Thousand Volts
- 100 Thousand Volts

Question: The NTSC (National Television Standards Committee) standard is also used in the country of ________?

Question: In the USA, the television broadcast standard is ________?

Question: Which is NOT an acceptable method of distributing small power outlets throughout an open plan office area?
- Power Poles
- Power Skirting
- Flush Floor Ducting
- Extension Cords

Question: In the UK, what type of installation requires a fireman's switch?
- Neon Lighting
- High Pressure Sodium Lighting
- Water Features
- Hotel Rooms

Question: What will a UPS be used for in a building?
- To provide power to essential equipment
- To monitor building electricity use
- To carry messages between departments
- To control lighting and power systems

Question: Larger buildings may be supplied with a medium voltage electricity supply, and will require a substation or mini-sub. What is the main item of equipment contained in these?

Question: Some lasers are referred to as being CW. What does CW mean?
- Circular wave
- Constant white
- Continuous wave
- Clear white

Question: What is the process responsible for producing photons in a diode laser?
- Fermi level shift
- Majority carrier injection
- Carrier freeze out
- Electron-hole recombination

Question: What are three types of lasers?
- Gas, Metal Vapor, Rock
- Pointer, Diode, CD
- Diode, Inverted, Pointer
- Gas, Solid State, Diode

Question: What was the active medium used in the first working laser ever constructed?
- A Diamond Block
- Helium-Neon Gas
- A Ruby Rod
- Carbon Dioxide Gas

Question: After the first photons of light are produced, which process is responsible for amplification of the light?
- Blackbody radiation
- Stimulated emission
- Planck's radiation
- Einstein oscillation

Question: Once the active medium is excited, the first photons of light are produced by which physical process?
- Blackbody radiation
- Spontaneous emission
- Synchrotron radiation
- Planck's oscillation

Question: The first step to getting output from a laser is to excite an active medium. What is this process called?

Question: What does AM mean?
- Angelo marconi
- Anno median
- Amplitude modulation

Question: What frequency range is the High Frequency band?
- 100 kHz - 1 GHz
- 30 to 300 MHz
- 3 to 30 MHz

Question: What does EPROM stand for?
- Electric Programmable Read Only Memory
- Erasable Programmable Read Only Memory
- Evaluable Philotic Random Optic Memory
- Every Person Requires One Mind

Question: What does the term PLC stand for?
- Programmable Lift Computer
- Program List Control
- Programmable Logic Controller
- Piezo Lamp Connector

Question: Which motor is NOT suitable for use as a DC machine?
- Permanent Magnet Motor
- Series Motor
- Squirrel Cage Motor
- Synchronous Motor

Question: What does VVVF stand for?
- Variant Voltage Vile Frequency
- Variable Velocity Variable Fun
- Very Very Vicious Frequency
- Variable Voltage Variable Frequency

Question: The sampling rate (how many samples per second are stored) for a CD is...?
- 48.4 kHz
- 22,050 Hz
- 44.1 kHz
- 48 kHz

Question: A compact disc (according to the original CD specifications) holds how many minutes of music?
- 74 mins
- 56 mins
- 60 mins
- 90 mins

Question: Sometimes computers and cash registers in a foodmart are connected to a UPS system. What does UPS mean?
- United Parcel Service
- Uniform Product Support
- Under Paneling Storage
- Uninterruptable Power Supply

Question: What does AC and DC stand for in the electrical field?
- Alternating Current and Direct Current
- A Rock Band from Australia
- Average Current and Discharged Capacitor
- Atlantic City and District of Columbia

Question: Which consists of two plates separated by a dielectric and can store a charge?
http://www.knowledgepublisher.com/article-1062.html
5.1.3 What is S/WAN?

S/WAN (Secure Wide Area Network, pronounced "swan") was an initiative to promote the widespread deployment of Internet-based Virtual Private Networks (VPNs). This was accomplished by adopting a standard specification for implementing IPSec, the security architecture for the Internet Protocol (see Question 5.1.4), thereby ensuring interoperability among firewall and TCP/IP products. The use of IPSec allows companies to mix and match the best firewall and TCP/IP stack products to build Internet-based VPNs. At the time, users and administrators were often locked in to single-vendor solutions network-wide, because vendors had been unable to agree upon the details of an IPSec implementation. The S/WAN effort was therefore expected to remove a major obstacle to the widespread deployment of secure VPNs.

S/WAN supported encryption at the IP level, which provides more fundamental and lower-level security than higher-level protocols such as SSL (see Question 5.1.2). It was expected that higher-level security specifications, including SSL, would be routinely layered on top of S/WAN implementations, and that these security specifications would work together.

While S/WAN is no longer an active initiative, there are related ongoing projects such as Linux FreeS/WAN (http://www.freeswan.org/) and the Virtual Private Network Consortium (VPNC; see http://www.vpnc.org/). Linux FreeS/WAN is a free implementation of IPSec and IKE (Internet Key Exchange) for Linux, while VPNC is an international trade association for manufacturers in the VPN market.
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/s-wan.htm
Real programmers don’t do documentation. That’s the way it has been since the first line of code was written. It was true for Assembler. It was true for Cobol, Fortran, C++ and Visual Basic. And it probably will be true for future languages even into the next millennium.

A programmer considers his/her job to be that of a problem solver, and some of the problems solved by programmers are worthy of a Sherlock Holmes novel. At the end of a program, when the testing has been done and when all of the output has been checked out, the demand for the programmer’s attention is always high elsewhere. Documentation just seems to be left out. Or on the rare occasions when programming and documentation have both been done, the maintenance programmer fails to do his/her share. The maintenance programmer takes the old code, fixes whatever needs to be fixed, and is on to the next program. One way or the other, old code arrives at the doorstep and no one understands what it does.

Making matters worse is that much of the old code was written in technologies that are either not supported today or for which it is difficult to find anyone who can or will support them. Try finding an available and enthusiastic programmer for Total, IMS, VSAM, CICS, System 2000 or Model 204. There are many languages and technologies of the past that simply do not have an active support audience today. And that is where much of the old code and system development of the past is.

Why would an organization want to delve into the code of the past? Some of the reasons organizations need to look at old code include:
- There is a need to convert to a new system. In order to convert to the new system, the organization must delve into the logic of the past to see how things either ran or still run.
- There is a need to merge systems. It is difficult to merge systems when you don’t know what a system does.
- There is a legal reason to know what a system either does or once did.
If you have to go to a court of law, you want to know what you are talking about. You don’t want to be responsible for testimony that is simply incorrect.
- There is a pressing need to go back and make changes to a system. It is dangerous to make changes to a system when you don’t know what the system does.

This short list merely scratches the surface. In reality, there are many reasons why undocumented old code is a corporate liability.

How difficult would it be to put together a documentation package? Indeed, when it comes to data, there often is reasonably good documentation. There are old dictionaries. Old repositories. Old copybooks. It is fairly simple to gather the data definitions that once existed. (However, this is not a perfect science.) But data is only one element of older system documentation. There is the need to document processes as well. Stated differently, when you are trying to do old system documentation, focusing only on data is like one hand clapping. You don’t make much noise when only one hand is clapping. In order to create applause, you need two hands clapping. You need to document processes as well as data.

The long-term implications of programmers not doing documentation are now coming home to roost. The corporation is left holding the bag, and it is a pretty ugly bag.

SOURCE: Corporate Liability: Undocumented Code

Recent articles by Bill Inmon
http://www.b-eye-network.com/channels/1134/view/16428
A thin-film photovoltaic (TFPV) cell is a type of solar cell made by depositing one or more thin films (TF) of photovoltaic material on a substrate, such as plastic, glass, or metal. The film thickness varies from a few nanometers (nm) to tens of micrometers (µm), which is much thinner than the conventional crystalline silicon solar cell, which uses silicon wafers of up to 200 µm. This allows thin-film cells to be lighter in weight and flexible, with less drag, though they offer only limited resistance to foot traffic.

Renewable energy sources such as solar photovoltaic technologies are gaining importance due to changing climate conditions and rising energy demand. Thin-film photovoltaics are the second generation of photovoltaics, and are gaining importance because they offer optimum efficiency at low cost. These factors, along with increasing government support and a shorter energy payback time, are driving the thin-film photovoltaic market in Latin America.

Latin America is one of the growing regions in the thin-film photovoltaic (PV) market. The silicon shortage and the growing demand for PV modules are some of the factors fueling the growth in thin-film PV manufacturing, since thin-film PV cells require very little silicon compared to crystalline PV cells. The technical benefits of thin-film PV, such as flexibility and better performance in low-light conditions, are other reasons for the rapid growth of the market.

The market is segmented and forecast based on the type of thin-film PV cells and modules: amorphous silicon (a-Si), cadmium telluride (CdTe), copper indium diselenide/copper indium gallium diselenide (CIS/CIGS), and others. Market share analysis, by revenue of the leading companies, is also included in the report. The market shares of these key players are arrived at based on key facts, annual financial information, and interviews with key opinion leaders, such as CEOs, directors, and marketing executives.
In order to present an in-depth understanding of the competitive landscape, the report on thin film photovoltaic (PV) in Latin America consists of company profiles of the key market players. This report also includes the market share and value chain analyses, along with the market metrics, such as drivers and restraints.
http://www.micromarketmonitor.com/market/latin-america-thin-film-photovoltaics-pv-3919773520.html
A protocol converter is a highly beneficial device used by various industries to convert the proprietary or standard protocol of one device into the protocol suitable for another device, in order to attain interoperability. Protocols are determined by several factors, such as data rate, encryption methods, file and message formats and associated services. A protocol converter is tasked with taking one protocol and changing it to another, allowing devices connected across these networks to communicate directly. Protocol converters, much like a language translator, translate messages or data streams between networks, enabling both networks to easily interpret the data.

Typical types of protocol converters include E1 to Ethernet, V.35 to Ethernet and E1 to V.35. The E1 protocol converter is used to convert an E1 signal to a 10/100Base-T Ethernet signal, and vice versa. It extends the bandwidth to 7.68 Mbps. It can be used in two-LAN connection, remote monitoring or video broadcasting. The E1 to V.35 protocol converter realizes bi-directional data transfer between an E1 port and a V.35 network. This equipment is used in communication networks including WANs and LANs, realizing the transfer from an E1 channel of SDH or PDH equipment to V.35, which may be provided by routers. The V.35 to Ethernet protocol converter accomplishes the conversion between a 10/100M Ethernet port and a V.35 port. It provides a data transmission channel of up to N*64 kbps for Ethernet over V.35 lines. It is suitable for many situations, such as extending the range of a LAN, building a dedicated Ethernet network, and so on.

Protocol converters have the capacity to support Modbus ASCII, Modbus RTU, Modbus TCP, RFC-2217, E1, Ethernet, V.35, RS232, RS422 and beyond. There are protocol converters that even allow solution developers the ability to add their own proprietary applications and protocols. RS422 and RS232 converters are also available.
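To make the Modbus case concrete, here is a minimal sketch of one direction of such a conversion, re-framing a Modbus RTU request (address + PDU + CRC) as a Modbus TCP request (MBAP header + unit id + PDU). This illustrates the framing only; a real converter also handles timeouts, exception responses, and transaction tracking.

```python
import struct

def crc16_modbus(frame: bytes) -> int:
    """Standard Modbus CRC-16 (polynomial 0xA001, initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def rtu_to_tcp(rtu_frame: bytes, transaction_id: int = 1) -> bytes:
    """Convert a Modbus RTU frame into a Modbus TCP frame: verify and strip
    the CRC, then prepend the MBAP header (transaction id, protocol id 0,
    remaining length, unit id)."""
    body, crc = rtu_frame[:-2], struct.unpack("<H", rtu_frame[-2:])[0]
    if crc16_modbus(body) != crc:
        raise ValueError("bad CRC")
    unit_id, pdu = body[0], body[1:]
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 2 holding registers starting at address 0x0000 from slave 0x11:
request = bytes([0x11, 0x03, 0x00, 0x00, 0x00, 0x02])
rtu = request + struct.pack("<H", crc16_modbus(request))
tcp = rtu_to_tcp(rtu)
print(tcp.hex())  # 000100000006110300000002
```

The PDU (function code and data) passes through untouched; only the envelope changes, which is exactly the converter's job.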
The most attractive benefit of a protocol converter is that users can carry out networking and serial communication without worrying about programming at the hardware level. With no additional programming required from the end user, the converter transparently passes data along the channel connecting two communication ports. Another key feature is the programmable driver: most protocol converter units are programmed to understand a handful of different protocols and use an internal database to track them. This database stores the factors associated with each known protocol and helps the device determine what must change to transform one protocol into another. Unlike regular databases, which can be updated manually, this database is typically locked from users.
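Among the protocols such converters commonly support are Modbus RTU and Modbus TCP, and converting between them is a good illustration of the re-framing involved. The sketch below (a simplified illustration, not a full converter: real devices also handle timing, retries and error responses, and the transaction ID here is arbitrary) validates and strips the RTU CRC, then prepends the MBAP header that Modbus TCP expects:

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """Standard Modbus CRC-16 (polynomial 0xA001, initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def rtu_to_tcp(rtu_frame: bytes, transaction_id: int = 1) -> bytes:
    """Re-frame a Modbus RTU ADU as a Modbus TCP ADU (MBAP header + PDU)."""
    body = rtu_frame[:-2]
    crc = struct.unpack("<H", rtu_frame[-2:])[0]   # RTU sends CRC low byte first
    if crc16_modbus(body) != crc:
        raise ValueError("RTU CRC check failed")
    unit_id, pdu = body[0], body[1:]
    # MBAP: transaction id, protocol id (always 0), length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example request: Read Holding Registers (0x03), start 0x0000, count 2, unit 0x11
pdu = bytes([0x03, 0x00, 0x00, 0x00, 0x02])
body = bytes([0x11]) + pdu
rtu = body + struct.pack("<H", crc16_modbus(body))
print(rtu_to_tcp(rtu).hex())  # → 000100000006110300000002
```

Note that the conversion carries the PDU through untouched: the "translation" is entirely in the framing, which is why such converters can remain transparent to the end user.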
You don't have to be a master at RF to be a good WiFi engineer, but you do need to understand the basics. A radio frequency (RF) signal is an electromagnetic wave driven by alternating current (AC). Just as the name implies, "frequency" is something that happens over and over and over again: it is frequent, consistent, and repetitive. There are different kinds of waves: light, sound, and in our case radio frequency (RF). "Frequency is the number of times a specified event occurs within a specified time interval. A standard measure of frequency is hertz (Hz)" - The CWNA definition of frequency v106 The specified event mentioned in the CWNA Study Guide is the cycle. "An oscillation, or cycle, of this alternating current is defined as a single change from up to down to up, or as a change from positive, to negative to positive." - The CWNA definition of cycle v106 Let's look at a few examples of a cycle. Example 1 - (1) Cycle: one cycle (one specified event) measured over 1 second equals 1 Hz. As the CWNA puts it, "alternating current is defined as a single change from up to down to up, or as a change from positive, to negative to positive". Example 2 - (5) Cycles: five cycles measured over 1 second equal 5 Hz. So far this is simple math: 1 and 5 cycles per second. Now imagine for a moment 2,400,000,000 or 5,000,000,000 cycles in 1 second. That's a lot of cycles, eh? Those are the cycle counts that 2.4 GHz and 5 GHz WiFi radios use to transport data over the air from one radio to another. High frequency simply means there are more cycles per second. Example 3 - Low and high frequency example. So remember: frequency is simply something that repeats itself over and over again. It is measured in cycles per second. The more cycles per second, the higher the frequency.
“Wavelength is the distance between similar points on two back-to-back waves.” - The CWNA definition of Wavelength v106 RF waves can be measured at different points. In the example below, reference #1 is the most common point at which wavelength is measured. “Amplitude is the height, force or power of the wave” - The CWNA definition of Amplitude v106 What is important to remember: frequency, cycle and wavelength remain constant, while the height of the waveform changes with the power of the wave. The higher the power, or amplitude, the higher the waveform peaks; the lower the power, the lower the peaks, all while frequency, cycle and wavelength remain the same. Example 5 - Amplitude shown by the height, or peaks, of the waveform. Phase describes two or more waveforms with the same frequency, same cycle and same wavelength that are not exactly aligned with each other. “Phase is not a property of just one RF signal but instead involves the relationship between two or more signals that share the same frequency. The phase involves the relationship between the position of the amplitude crests and troughs of two waveforms. Phase can be measured in distance, time, or degrees. If the peaks of two signals with the same frequency are in exact alignment at the same time, they are said to be in phase. Conversely, if the peaks of two signals with the same frequency are not in exact alignment at the same time, they are said to be out of phase.” - The CWNA definition of Phase v106 Below is an example of two waveforms 90 degrees out of phase. “What is important to understand is the effect that phase has on amplitude when a radio receives multiple signals. Signals that have 0 (zero) degree phase separation actually combine their amplitude, which results in a received signal of much greater signal strength, potentially as much as twice the amplitude.
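Frequency and wavelength are inversely related: wavelength equals the speed of light divided by the frequency (λ = c / f). This relation is standard physics rather than something from the CWNA text, but a quick calculation shows why WiFi waves are only centimetres long:

```python
C = 299_792_458  # speed of light in a vacuum, metres per second

for f_hz, label in [(2.4e9, "2.4 GHz"), (5.0e9, "5 GHz")]:
    wavelength_cm = C / f_hz * 100  # λ = c / f, converted to centimetres
    print(f"{label}: wavelength = {wavelength_cm:.1f} cm")
# → 2.4 GHz: wavelength = 12.5 cm
# → 5 GHz: wavelength = 6.0 cm
```

The higher the frequency, the shorter the wavelength, which is one reason 5 GHz WiFi behaves differently from 2.4 GHz when passing through walls and obstacles.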
If two RF signals are 180 degrees out of phase (the peak of one signal is in exact alignment with the trough of the second signal), they cancel each other out and the effective received signal strength is null. Phase separation has a cumulative effect. Depending on the amount of phase separation of two signals, the received signal strength may be either increased or diminished. The phase difference between two signals is very important to understanding the effects of an RF phenomenon known as multipath, ” - The CWNA definition of Phase v106 Below is an example of two waveforms 180 degrees out of phase. “If two RF signals are 180 degrees out of phase (the peak of one signal is in exact alignment with the trough of the second signal), they cancel each other out and the effective received signal strength is null.” The CWNA definition of Phase v106 It’s always important to revisit the basics. A firm understanding of RF is an important building block to becoming a good WiFi engineer!
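The in-phase and 180-degrees-out-of-phase cases quoted above can be checked numerically. This sketch uses a 1 Hz wave for readability (the arithmetic is identical at WiFi frequencies) and sums two equal-amplitude waves at 0 and 180 degrees of separation:

```python
import math

N = 1000
t = [i / N for i in range(N)]                       # one second of samples
wave = [math.sin(2 * math.pi * x) for x in t]       # a single 1 Hz wave

# A second identical wave at 0° separation (in phase) and at 180° (out of phase)
in_phase  = [a + math.sin(2 * math.pi * x) for a, x in zip(wave, t)]
out_phase = [a + math.sin(2 * math.pi * x + math.pi) for a, x in zip(wave, t)]

print(f"peak, single wave : {max(wave):.2f}")                        # 1.00
print(f"peak, in phase    : {max(in_phase):.2f}")                    # 2.00
print(f"peak, 180 deg out : {max(abs(v) for v in out_phase):.1e}")   # ~0
```

The in-phase sum peaks at exactly twice the single-wave amplitude, while the 180-degree sum is zero to within floating-point rounding — the cancellation that makes multipath such an important phenomenon.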
Zero Day Exploits For several years, most news articles about a computer, network, or Internet-based compromise have mentioned the phrase "zero day exploit" or "zero day attack," but rarely do these articles define the term. A zero day exploit is any attack that was previously unknown to the target or to security experts in general. Many believe the term refers to attacks that were just released into the wild, or developed by hackers within the current calendar day. This is generally not the case. The "zero day" component refers to the lack of prior knowledge about the attack, highlighting the idea that the victim has zero days' notice. The main feature of a zero day attack is that, being unknown, it has no specific defenses or filters, so a wide range of targets is vulnerable to the exploit. Zero day attacks have been discovered recently that are potentially at least seven years old; I'm specifically referencing Flame (also called Skywiper), discovered in early 2012. However, it is much more common for zero day exploits to have existed for months before discovery. Again, whenever you see the phrase "zero day exploit," keep in mind it just means a newly discovered, previously unknown attack for which there is no defense at the time of discovery. Once security researchers become aware of a new zero day exploit, they quickly develop detection and prevention measures in the course of their forensic analysis. These new detection and defense options are distributed and shared with the security community. Once organizations and individuals install updates or make configuration changes, they can be assured that their risk of compromise from that specific attack has been significantly reduced or eliminated. Once detection and defense are possible, an exploit is no longer considered a zero day, as there is now notification of its existence.
A search using the term "zero day" reveals numerous recent compromises and exploitations. This should be obvious, as new attacks are by nature zero day; but since we often label an attack as a zero day exploit at the time of discovery and for a moderate period afterwards, the label is a useful term for tracking down historical attacks. In 2012, there were several fairly significant discoveries of exploits and attacks labeled as zero day. These include: Flame/Skywiper, used for targeted cyber espionage against Middle Eastern countries; an IE exploit that allows hackers to remotely install malware onto Windows systems running IE 7, 8, or 9; and a Java exploit that allows hackers to remotely install malware onto systems running Java 5, 6, or 7. Exploit and Vulnerability Awareness Sites To learn of more examples of zero day exploit discoveries, I recommend visiting a few sites on a regular basis. Exploit Database (exploit-db.com) is a community-driven notification site about newly discovered zero day attacks. What I like about this site, and what is unique to it, is that in addition to disclosing the attack, it also provides access to the exploit itself. Most other vulnerability and exploit research sites do not provide the actual attack code. I think this is an overlooked opportunity: when you have access to the exploit code, you can develop your own filter for the attack. You don't need to wait for a vendor to release a patch or a security vendor to update their tool's database; instead, you can add your own detection filter and stop the attack. The MITRE organization's Common Vulnerabilities and Exposures database (cve.mitre.org) is one of the better-known collections of attack and compromise information and research. Perusing their collection will help you stay aware of recently discovered exploits and the steps you can take to avoid compromise and reduce your vulnerability.
The US-CERT site (us-cert.gov) is a US government-managed site with an emphasis on providing security and exploit information to protect the nation's IT. Their mission is to provide information, promote awareness, and assist in protection preparations against all forms of compromise and abuse of computers and networks. I recommend signing up for their weekly bulletin, which summarizes the previous week's newly discovered exploits and vulnerabilities and provides references back to their main site as well as to MITRE's CVE. If you visit these exploit and vulnerability awareness sites, you will notice that new zero day exploits are uncovered on a fairly consistent basis: daily. Malicious hackers across the globe (as well as security experts we perceive as the good guys) are writing new attack code with new exploits in an attempt to develop the next best computer weapon. They seek to compromise the most systems in the shortest amount of time, while gaining the most control and learning the most information, all while going undetected for as long as possible. It is a race, and a battle of intelligence and creativity. How Do Hackers Uncover New Vulnerabilities and Weaknesses? A common question I hear from students is, "How does a hacking programmer learn about a flaw or vulnerability in the first place?" There are many ways by which new weaknesses or vulnerabilities are uncovered, but three are the most common: source code review, patch dissection, and fuzz testing.
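Of those three discovery methods, fuzz testing is the easiest to illustrate. The sketch below is a toy mutation fuzzer: it randomly corrupts bytes of a valid input and counts how often a deliberately buggy, entirely hypothetical parser chokes. Real fuzzers add coverage feedback and crash triage, but the core loop looks like this:

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes replaced."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parser_under_test(data: bytes) -> None:
    """Stand-in for a real target (file-format or protocol parser) with a seeded bug."""
    if len(data) >= 4 and data[:2] == b"HD" and data[2] > 100:
        raise ValueError("parser crashed on malformed header")

seed = b"HD\x01payload-bytes"   # a valid input the parser accepts
random.seed(1)                  # fixed seed so the run is reproducible
crashes = 0
for _ in range(1000):
    try:
        parser_under_test(mutate(seed))
    except ValueError:
        crashes += 1
print(f"crashing inputs found: {crashes} / 1000")
```

Each crashing input points at a code path the developers never expected to reach; a hostile researcher turns that into an exploit, while a defender turns it into a patch.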
Organizations defend their networks on each of the six levels in the diagram you see. End-user Security Awareness Training resides in the outer layer: ‘Policies, Procedures, and Awareness’. As you see, this is the outer shell and in reality it is where security starts. You don’t open the door for the bad guy to come freely into your building, right? Let’s have a quick and admittedly highly simplified look at defense-in-depth. End-user Security Awareness is an important piece of your security puzzle because many attack types go after the end user (called social engineering) to succeed. Once an organization has published policies, has implemented security procedures, and has trained all employees, the first step of defense-in-depth has been established. The second step is defending the perimeter. In the case of IT that usually means a firewall, and related tools to block intrusions. Part three is protection of the internal network. There are various software tools that scan the network for attackers, traffic that should not be there, and many other ways to detect attacks. Next, protecting each individual computer in the network (called ‘hosts’) is also crucial. Here is where end-point security tools live, which attempt to block attacks on the individual computer level. Then, there are many ways to protect the individual applications that are running on computers in the organization, and last but not least, the data also needs to be protected, and yet again, there are many, many ways to do that, for example encryption. However, end-user security awareness can affect every aspect of an organization’s security profile, as it truly is where security starts! That is why it is so important that small and medium enterprises (including non-profits) give their end-users Internet Security Awareness Training, and enforce compliance.
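The cumulative effect of these layers can be shown with a back-of-the-envelope model. The per-layer numbers below are invented purely for illustration (they come from no study): each value is the assumed fraction of attacks that slips past that layer, and multiplying them shows why even modestly effective layers compound into strong protection.

```python
# Hypothetical fraction of attacks that get PAST each defense layer.
layers = {
    "policies & awareness": 0.30,
    "perimeter (firewall)": 0.20,
    "internal network":     0.25,
    "host (endpoint)":      0.20,
    "application":          0.30,
    "data (encryption)":    0.10,
}

p_breach = 1.0
for name, p in layers.items():
    p_breach *= p  # layers assumed independent: probabilities multiply
    print(f"after {name:<22}: {p_breach:.4%} of attacks still succeeding")
```

Note how the outer layer — the awareness training this article is about — scales every number that follows it: halve the fraction of employees who fall for social engineering and the final breach probability halves too.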
Power over Ethernet (PoE) has found widespread application in markets such as VoIP telephony, wireless LANs, IP video security and access control since its standardisation in 2003, and it is gaining popularity across the spectrum of IP surveillance and access control applications. But what is PoE? This article will analyze it step by step. PoE is defined across a single network link comprising three basic components. The first is the equipment delivering power to the cable (referred to as a PSE, for power sourcing equipment). The second is the device receiving power from the cable (a powered device, or PD). The third is the cable itself. Typical PDs include IP cameras and wireless access points, while the PSE would normally be a PoE switch, or a midspan power injector patched in to add PoE capability to a non-PoE network switch channel. These two configurations are shown in the following picture. Note: this part will be described more clearly later in this article. PoE's primary advantage is time and cost savings: by eliminating dedicated electrical power cabling, network cables do not require a qualified electrician to fit them and can be located anywhere. It also offers great flexibility: without being tethered to an electrical outlet, PDs (IP cameras, wireless access points) can be located wherever they are needed most. Safety is the third advantage: PoE delivery is intelligent, designed to protect network equipment from overload or incorrect installation. Finally, it improves reliability and scalability: PoE power comes from a central, universally compatible source rather than a collection of distributed wall adapters, so it can be backed up by an uninterruptible power supply, or controlled to easily disable or reset devices.
The original PoE application was the VoIP phone, which has a single connection to a wall socket and can be remotely powered down, just as with the older analog systems. PoE is also used in IP cameras; it is ubiquitous on networked surveillance cameras, where it enables fast deployment and easy repositioning. Wi-Fi and Bluetooth access points and RFID (radio frequency identification) readers are commonly PoE-compatible, allowing remote placement away from AC outlets and relocation following site surveys. PoE is designed to operate over standard network cable: Cat 3, Cat 5, Cat 5e or Cat 6 (often collectively referred to as Cat 5), using conventional RJ45 connectors. The principles of carrying electrical power over Cat 5 are no different from those of other power distribution systems, but because the power is transferred over light-duty cable for long distances, the effects of power loss and voltage drop become significant. The arrangement and connection of the cabling used for PoE also differ slightly from conventional power wiring, in order to work around the existing standard for Ethernet data. Cat 5 network cables contain a bundle of eight wires, arranged as four twisted pairs, as shown in the following picture. In the most common type of Ethernet, 100BASE-T or Fast Ethernet, only two of the four pairs are used to carry data, each pair carrying a signal in one direction. These are known as the data pairs; the remaining two are unused and are referred to as the spare pairs. Although each data signal is carried within a single pair, PoE treats each pair of wires as a single conductor (one reason is that using both wires halves the overall resistance). Because electrical current must flow in a loop, two pairs are required to carry power over the cable, and either the data pairs or the spare pairs can be used. The PD must be able to accept power from whichever pairs the PSE delivers it on. Two types of PSE are available for a PoE connection.
A PoE switch is a network switch with power over Ethernet injection built in. Simply connect other network devices to the switch as normal; the PoE switch will detect whether they are PoE-compatible and enable power automatically. PoE switches are available to suit all applications, from low-cost unmanaged switches with a few ports up to complex multi-port rack-mounted units with sophisticated management. A midspan, or PoE injector, is used to add PoE capability to regular non-PoE network links. Midspans can be used to upgrade existing LAN installations to PoE and provide a versatile solution where fewer PoE ports are required. Upgrading a network connection to PoE is as simple as patching it through the midspan, and as with PoE switches, power injection is controlled and automatic. Midspans are available as multi-port rack-mounted units or as low-cost single-port injectors. Like any technology, PoE can be used most effectively when its working basis is known and understood. This article has only briefly explained its concepts, advantages, applications, connections and working principles; there is much more to learn about PoE.
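The power loss and voltage drop noted earlier can be estimated with Ohm's law. The figures below follow the IEEE 802.3af worst-case model as I understand it (44 V minimum PSE voltage, 12.5 Ω DC loop resistance for a 100 m Cat 5 channel, 12.95 W for a class 0 PD); treat this as a sketch rather than a design calculation:

```python
import math

V_PSE = 44.0    # minimum PSE output voltage under IEEE 802.3af (volts)
R_LOOP = 12.5   # worst-case DC loop resistance of a 100 m Cat 5 channel (ohms)
P_PD = 12.95    # power required at the powered device, 802.3af class 0 (watts)

# Power balance: P_PD = (V_PSE - I*R_LOOP) * I  →  R_LOOP*I² - V_PSE*I + P_PD = 0
disc = V_PSE**2 - 4 * R_LOOP * P_PD
current = (V_PSE - math.sqrt(disc)) / (2 * R_LOOP)  # smaller root = operating point

v_drop = current * R_LOOP
p_loss = current**2 * R_LOOP
print(f"loop current : {current * 1000:.0f} mA")
print(f"voltage drop : {v_drop:.1f} V")
print(f"cable loss   : {p_loss:.2f} W  (PSE must source {P_PD + p_loss:.2f} W)")
```

Over a watt is dissipated in the cable itself, which is why the standard specifies a PSE output budget (15.4 W for 802.3af) well above the power guaranteed at the PD, and why 48 V was chosen: a higher voltage means a lower current, and loss scales with the square of the current.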
It's no secret that data centers are power gluttons. With power densities of 100 watts per square foot or more, the electricity usage of a single data center can far exceed that of a comparably sized office building. But InfiniBand may provide data centers with dramatic reductions in energy needs. The new standard does away with the peripheral component interconnect (PCI) bus, which will allow server makers to do away with PCI cards. "A PCI card consumes 30 watts of power," said Philip Brace, director of product marketing at Intel. "There are significant reductions in electricity consumption that will be gained through the elimination of PCI cards." The potential savings are easily calculated: if a data center can eliminate PCI cards in 200 servers, it will reduce its power consumption by about 6,000 watts, enough to power six average homes. In InfiniBand-capable servers, the work formerly done by the PCI card (managing the input and output of information) will be done outside the server by switches on the InfiniBand network. And that network can handle dozens or even hundreds of servers or server-like devices. InfiniBand may also allow further gains. Because data flow to and from the server will be managed by the InfiniBand network, data center operators may be able to do away with many of the network interface cards, Ethernet cards and Fibre Channel cards that are often used inside individual servers. "Eventually under InfiniBand, you'll only have one cable coming out of the server, and it will be used by InfiniBand for clustering, communications and storage," explained Eyal Waldman, CEO of Mellanox Technologies, an InfiniBand semiconductor start-up. Mellanox predicted a standard server would use about one-third more power than an InfiniBand-capable server.
The company estimated that a standard server equipped with all of the needed switching and I/O cards consumes 254 watts of power, and that a comparable InfiniBand server will use 190 watts. The company believes InfiniBand equipment will cost less, take up one-third less space in the data center and perform better than standard equipment.
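Mellanox's per-server estimates are easy to turn into an annual figure. Taking the article's numbers at face value (254 W versus 190 W per server) and reusing the hypothetical 200-server data center from the PCI-card example, the saving works out as follows:

```python
STANDARD_W = 254      # standard server incl. switching and I/O cards (Mellanox estimate)
INFINIBAND_W = 190    # comparable InfiniBand-capable server (Mellanox estimate)
SERVERS = 200         # hypothetical data center, as in the PCI-card example above
HOURS_PER_YEAR = 24 * 365

saving_w = (STANDARD_W - INFINIBAND_W) * SERVERS   # continuous power saving
saving_kwh = saving_w * HOURS_PER_YEAR / 1000      # annual energy saving
print(f"continuous power saved : {saving_w:,} W")      # → 12,800 W
print(f"energy saved per year  : {saving_kwh:,.0f} kWh")  # → 112,128 kWh
```

And that is before counting the knock-on savings in cooling, which typically scale with the heat the servers dissipate.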
A text file describing a new virus called PROTO-T was distributed via electronic bulletin boards and the Internet late in 1992. The text described a virus of a new kind that was supposedly spreading threateningly all over the world; among other things, it was claimed to be impossible to spot and able to hide itself in the RAM of a modem or a hard disk. The text and the things described in it are pure invention: it would be technically impossible to build a virus matching the description. A virus cannot hide its code in the buffers of modems or hard disks, because these memory areas are very small and unprotected; in reality the virus code would be overwritten almost immediately. In any case, part of the viral code would have to be stored in normal DOS memory for the virus to function, since PC computers execute code that is located in their core memory, and that code only. It is possible to hide part of a virus's code in the memory of a VGA card, and some viruses (like Starship and GoldBug) do so, but even in this case the virus can be found by normal means. The text was apparently a practical joke that spread uncommonly far. On the other hand, the joke inspired the development of several new viruses: as rumors of PROTO-T spread, some individuals decided to take advantage of its reputation and wrote viruses that contained the text "PROTO-T". Naturally enough, these viruses contained none of the characteristics mentioned in the original description. The 'real' Proto-T viruses are not known to be in the wild.
Their characteristics differ a lot from each other.
World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, re-affirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist's error, calling for unworkable new privacy mechanisms where none in fact are warranted. The Telegraph reports Berners-Lee as saying "Some people say privacy is dead – get over it. I don't agree with that. The idea that privacy is dead is hopeless and sad." He highlighted that people's participation in potentially beneficial programs like e-health is hampered by a lack of trust, and a sense that spying online is constant. Of course he's right about that. Yet he seems to underestimate the data privacy protections we already have. Instead he envisions "a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me," he said, according to The Telegraph. It's a classic case of being careful what you ask for, in case you get it. What would control over "all data about you" look like? Most of the data about us these days - most of the personal data, aka Personally Identifiable Information (PII) - is collected or created behind our backs, by increasingly sophisticated algorithms. Now, people certainly don't know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don't see that ownership could fix a privacy problem. What could "ownership" of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property.
Look at medical records: in Australia and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional. The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data's creator to the individual to whom it applies? What if data applies to more than one person, as in household electricity records, or, more seriously, DNA? What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, they should be restrained in what they can do with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what's going on with their PII, and so on. Berners-Lee alluded to the privacy threats of Big Data, and he's absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It's not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don't care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly.
An example of this principle is found in the successful European legal action against Facebook for automatic tag suggestions, in which biometric facial recognition algorithms identify people in photos without consent. Technologists often underestimate the powers of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. Which is surely what regular people expect. For many years, American businesses have enjoyed a bit of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement was negotiated by the US Department of Commerce so that companies could self-declare broad compliance with data security rules. Normally organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. The "Safe Harbor" arrangement was a shortcut around full compliance; as such it was widely derided by privacy advocates outside the USA, and for some years had been questioned by the more activist regulators in Europe. And so it seemed inevitable that the arrangement would be eventually annulled, as it was last October. With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield". The Privacy Shield is good news for commerce of course.
But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance, and take the opportunity to rethink privacy, because there is more to it than security and regulatory shortcuts. The Privacy Shield and the earlier Safe Harbor arrangement are really only about satisfying one corner of European data protection laws, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries actually have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling is complying with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU. But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency. Basic data privacy laws the world over require organisations to exercise constraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why.
It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015. Over and above Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make overall privacy better by magic. For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It is widely known (and perfectly acceptable) that mapping companies use the positions of unique WiFi routers for their geolocation databases. Google continuously collects WiFi IDs and coordinates via its StreetView cars. The privacy problem here was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they came across it. In over a dozen countries around the world, Google admitted they had breached local privacy laws by collecting excessive PII, apologised for the overreach, explained it as inadvertent, and deleted all the WiFi records in question. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending what they did. Absent general data privacy protection, the strongest legislation that seems to apply to the StreetView case is wire tap law, but its application to the Internet is complex. And so the legal action has taken years and years, and it's still not resolved. I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and instead of fighting it, concede that the collection of WiFi traffic was unnecessary and wrong. Other proof that European privacy law is deeper and broader than the Privacy Shield is found in social networking mishaps.
Over the years, many of Facebook's business practices, for instance, have been found unlawful in the EU. Recently there was the final ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long-running dispute over biometric photo tagging. When Facebook generates tag suggestions, what they're doing is running facial recognition algorithms over photos in their vast store of albums, without the consent of the people in those photos. Identifying otherwise anonymous people, without consent (and without restraint as to what might be done next with that new PII), seems to be unlawful under the Collection Limitation and Usage Limitation principles. In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since. Whether they are successful or not will have nothing to do with the "Privacy Shield".

The Privacy Shield comes into a troubled trans-Atlantic privacy environment. Whether or not the new EU-US arrangement fares better than the Safe Harbor remains to be seen. But in any case, since the Privacy Shield really aims to free up business access to data, sadly it's unlikely to do much good for true privacy. The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.

A big part of my research agenda in the Digital Safety theme at Constellation is privacy. And what a vexed topic it is! It's hard to even know how to talk about privacy. For many years, folks have covered privacy in more or less academic terms, drawing on sociology, politics and pop psychology, joining privacy to human rights, and crafting various new legal models. Meanwhile the data breaches get worse, and most businesses have just bumped along.
When you think about it, it’s obvious really: there’s no such thing as perfect privacy. The real question is not about ‘fundamental human rights’ versus business, but rather, how can we optimise a swarm of competing interests around the value of information? Privacy is emerging as one of the most critical and strategic of our information assets. If we treat privacy as an asset, instead of a burden, businesses can start to cut through this tough topic.

But here’s an urgent issue. A recent regulatory development means privacy may just stop a lot of business getting done. It's the European Court of Justice decision to shut down the US-EU Safe Harbor arrangement. The privacy Safe Harbor was a work-around negotiated by the Federal Trade Commission, allowing companies to send personal data from Europe into the US. But the Safe Harbor is no more. It's been ruled unlawful. So it’s a big, big problem for European operations, many multinationals, and especially US cloud service providers. At Constellation we've researched cloud geography and previously identified competitive opportunities for service providers to differentiate and compete on privacy. But now this is an urgent issue.

It's time American businesses stopped getting caught out by global privacy rulings. There shouldn't be too many surprises here, if you understand what data protection means internationally. Even the infamous "Right To Be Forgotten" ruling on Google’s search engine – which strikes so many technologists as counterintuitive – was a rational and even predictable outcome of decades-old data privacy law.

The leading edge of privacy is all about Big Data. And we ain't seen nothin' yet! Look at artificial intelligence, Watson Health, intelligent personal assistants, hackable cars, and the Internet of Everything where everything is instrumented, and you see information assets multiplying exponentially. Privacy is actually just one part of this.
It’s another dimension of information, one that can add value, but not in a neat linear way. The interplay of privacy, utility, usability, efficiency, efficacy, security, scalability and so on is incredibly complex. The broader issue is Digital Safety: safety for your customers, and safety for your business.

The identerati sometimes refer to the challenge of “binding carbon to silicon”. That’s a poetic way of describing how the field of Identity and Access Management (IDAM) is concerned with associating carbon-based life forms (as geeks fondly refer to people) with computers (or silicon chips). To securely bind users’ identities or attributes to their computerised activities is indeed a technical challenge. In most conventional IDAM systems, there is only circumstantial evidence of who did what and when, in the form of access logs and audit trails, most of which can be tampered with or counterfeited by a sufficiently determined fraudster. To create a lasting, tamper-resistant impression of what people do online requires some sophisticated technology (in particular, digital signatures created using hardware-based cryptography). On the other hand, working out looser associations between people and computers is the stock-in-trade of social networking operators and Big Data analysts. So many signals are emitted as a side effect of routine information processing today that even the shyest of users may be uncovered by third parties with sufficient analytics know-how and access to data. So privacy is in peril.

For the past two years, big data breaches have only got bigger: witness the losses at Target (110 million records), eBay (145 million), Home Depot (109 million) and JPMorgan Chase (83 million) to name a few. Breaches have got deeper, too. Most notably, in June 2015 the U.S. federal government’s Office of Personnel Management (OPM) revealed it had been hacked, with the loss of detailed background profiles on 15 million past and present employees.
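The “binding carbon to silicon” point above — that plain access logs can be counterfeited, while cryptographically authenticated records cannot — can be illustrated with a minimal sketch. This is not any vendor's implementation: it uses a shared-secret HMAC from the Python standard library as a simplified stand-in for the hardware-backed digital signatures the text refers to, and every name and value in it is invented.

```python
import hmac
import hashlib
import json

# Illustrative only: a real deployment would use asymmetric signatures,
# ideally with the private key sealed in hardware, not a shared secret.
SECRET = b"device-held-key"  # stands in for a hardware-protected key

def sign_event(event: dict) -> str:
    """Produce an authentication tag over a canonical encoding of the event."""
    payload = json.dumps(event, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_event(event: dict, tag: str) -> bool:
    """Check an event against its tag in constant time."""
    return hmac.compare_digest(sign_event(event), tag)

# An untagged log line can be silently edited; a tagged one cannot.
evt = {"user": "alice", "action": "approve-payment", "ts": 1720000000}
tag = sign_event(evt)
assert verify_event(evt, tag)

tampered = dict(evt, action="approve-payment-x10")
assert not verify_event(tampered, tag)  # tampering is detectable
```

The design point is simply that verification fails for any altered record, which is exactly what an unauthenticated audit trail cannot promise.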
I see a terrible systemic weakness in the standard practice of information security. Look at the OPM breach: what was going on that led to application forms for employees dating back 15 years remaining in a database accessible from the Internet? What was the real need for this availability? Instead of relying on firewalls and access policies to protect valuable data from attack, enterprises need to review which data needs to be online at all. We urgently need to reduce the exposed attack surface of our information assets.

But in the information age, the default has become to make data as available as possible. This liberality is driven both by the convenience of having all possible data on hand, just in case it might be handy one day, and by the plummeting cost of mass storage. But it's also the result of a technocratic culture that knows "knowledge is power," and gorges on data.

In communications theory, Metcalfe’s Law states that the value of a network is proportional to the square of the number of devices that are connected. This is an objective mathematical reality, but technocrats have transformed it into a moral imperative. Many think it axiomatic that good things come automatically from inter-connection and information sharing; that is, the more connection the better. Openness is an unexamined rallying call for both technology and society. “Publicness” advocate Jeff Jarvis wrote (admittedly provocatively) that: “The more public society is, the safer it is”. And so a sort of forced promiscuity is shaping up as the norm on the Internet of Things. We can call it "superconnectivity", with a nod to the special state of matter where electrical resistance drops to zero.

In thinking about privacy on the IoT, a key question is this: how much of the data emitted from Internet-enabled devices will actually be personal data? If great care is not taken in the design of these systems, the unfortunate answer will be most of it.
My latest investigation into IoT privacy uses the example of the Internet connected motor car. "Rationing Identity on the Internet of Things" will be released soon by Constellation Research. And don't forget Constellation's annual innovation summit, Connected Enterprise at Half Moon Bay outside San Francisco, November 4th-6th. Early bird registration closes soon. An unpublished letter to New Yorker magazine, August 2015. Kelefa Sanneh ("The Hell You Say", Aug 10 & 17) poses a question close to the heart of society’s analog-to-digital conversion: What is speech? Internet policy makers worldwide are struggling with a recent European Court of Justice decision which grants some rights to individuals to have search engines like Google block results that are inaccurate, irrelevant or out of date. Colloquially known as the "Right To Be Forgotten" (RTBF), the ruling has raised the ire of many Americans in particular, who typically frame it as yet another attack on free speech. Better defined as a right to be de-listed, RTBF makes search providers consider the impact on individuals of search algorithms, alongside their commercial interests. For there should be no doubt – search is very big business. Google and its competitors use search to get to know people, so they can sell better advertising. Search results are categorically not the sort of text which contributes to "democratic deliberation". Free speech may be many things but surely not the mechanical by-products of advertising processes. To protect search results as such mocks the First Amendment. Some of my other RTBF thoughts: - Search is not a passive reproduction; Google makes the public domain public. - Google's deeply divided Advisory Council was strangely silent on the business nature of search. - Search results are a special form of Big Data, and not the sort of thing that counts as speech. 
In the latest course of a 15-month security feast, BlackBerry has announced it is acquiring mobile device management (MDM) provider Good Technology. The deal is said to be definitive, for US$425 million in cash.

As BlackBerry boldly re-positions itself as a managed service play in the Internet of Things, adding an established MDM capability to its portfolio will bolster its claim -- which still surprises many -- to be handset neutral. But the Good buy is much more than that. It has to be seen in the context of John Chen's drive for cross-sector security and privacy infrastructure for the IoT. As I reported from the recent BlackBerry Security Summit in New York, the company has knitted together a comprehensive IoT security fabric. Look at how they paint their security platform: And see how Good will slip neatly into the Platform Services column. It's the latest in what is now a $575 million investment in non-organic security growth (following purchases of Secusmart, WatchDox, Movirtu and AtHoc).

According to BlackBerry,
- Good will bring complementary capabilities and technologies to BlackBerry, including secure applications and containerization that protects end user privacy. With Good, BlackBerry will expand its ability to offer cross-platform EMM solutions that are critical in a world with varying deployment models such as bring-your-own-device (BYOD); corporate owned, personally enabled (COPE); as well as environments with multiple user interfaces and operating systems. Good has expertise in multi-OS management with 64 percent of activations from iOS devices, followed by a broad Android and Windows customer base.(1) This experience combined with BlackBerry’s strength in BlackBerry 10 and Android management – including Samsung KNOX-enabled devices – will provide customers with increased choice for securely deploying any leading operating system in their organization.
The strategic acquisition of Good Technology will also give the Identity-as-a-Service sector a big kick. IDaaS has become a crowded space, with at least ten vendors (CA, Centrify, IBM, Microsoft, Okta, OneLogin, Ping, SailPoint, Salesforce, VMware) competing strongly around a pretty well settled set of features and functions. BlackBerry themselves launched an IDaaS a few months ago. At the Security Summit, I asked their COO Marty Beard what is going to distinguish their offering in such a tight market, and he said, simply, mobility. Presto!

But IDaaS is set to pivot. We all know that mobility is now the locus of security, and we've seen VMware parlay its AirWatch investment into a competitive new cloud identity service. This must be more than a catch-up play with so many entrenched IDaaS vendors. Here's the thing. I foresee identity actually disappearing from the user experience, which more and more will just be about the apps. I discussed this development in a really fun "Identity Innovators" video interview recorded with Ping at the recent Cloud Identity Summit.

For identity to become seamless with the mobile application UX, we need two things. Firstly, federation protocols so that different pieces of software can hand over attributes and authentication signals to one another; these are all in place now. But secondly, we also need fully automated mobile device management as a service, and that's where Good truly fits with the growing BlackBerry platform.

Now stay tuned for new research coming soon via Constellation on the Internet of Things, identity, privacy and software reliability. See also The State of Identity Management in 2015.

On July 23, BlackBerry hosted its second annual Security Summit, once again in New York City. As with last year’s event, this was a relatively intimate gathering of analysts and IT journalists, brought together for the lowdown on BlackBerry’s security and privacy vision.
By his own account, CEO John Chen has met plenty of scepticism over his diverse and, some say, chaotic product and services portfolio. And yet it’s beginning to make sense. There is a strong credible thread running through Chen’s initiatives. It all has to do with the Internet of Things.

Disclosure: I traveled to the BlackBerry Security Summit as a guest of BlackBerry, which covered my transport and accommodation.

The Growth Continues

In 2014, John Chen opened the show with the announcement he was buying the German voice encryption firm Secusmart. That acquisition appears to have gone well for all concerned; they say nobody has left the new organisation in the 12 months since. News of BlackBerry’s latest purchase - of crisis communications platform AtHoc - broke a few days before this year’s Summit, and it was only the most recent addition to the family. In the past 12 months, BlackBerry has been busy spending $150M on inorganic growth. Chen has also overseen an additional $100M expenditure in the same timeframe on organic security expansion (over and above baseline product development).

The Growth Explained - Secure Mobile Communications

Executives from different business units and different technology horizontals all organised their presentations around what is now a comprehensive security product and services matrix. It looks like this (before adding AtHoc): BlackBerry is striving to lead in Secure Mobile Communications. In that context the highlights of the Security Summit for mine were as follows.

The Internet of Things

BlackBerry’s special play is in the Internet of Things. It’s the consistent theme that runs through all their security investments, because as COO Marty Beard says, IoT involves a lot more than machine-to-machine communications. It’s more about how to extract meaningful data from unbelievable numbers of devices, with security and privacy.
That is, IoT for BlackBerry is really a security-as-a-service play. Chief Security Officer David Kleidermacher repeatedly stressed the looming challenge of “how to patch and upgrade devices at scale”. - MyPOV: Functional upgrades for smart devices will of course be part and parcel of IoT, but at the same time, we need to work much harder to significantly reduce the need for reactive security patches. I foresee an angry consumer revolt if things that never were computers start to behave and fail like computers. A radically higher standard of quality and reliability is required. Just look at the Jeep Uconnect debacle, where it appears Chrysler eventually thought better of foisting a patch on car owners and instead opted for a much more expensive vehicle recall. It was BlackBerry’s commitment to ultra high reliability software that really caught my attention at the 2014 Security Summit, and it convinces me they grasp what’s going to be required to make ubiquitous computing properly seamless. Refreshingly, COO Beard preferred to talk about economic value of the IoT, rather than the bazillions of devices we are all getting a little jaded about. He said the IoT would bring about $4 trillion of required technology within a decade, and that the global economic impact could be $11 trillion. BlackBerry’s real time operating system QNX is in 50 million cars today. AtHoc is a secure crisis communications service, with its roots in the first responder environment. It’s used by three million U.S. government workers today, and the company is now pushing into healthcare. Founder and CEO Guy Miasnik explained that emergency communications involves more than just outbound alerts to people dealing with disasters. Critical to crisis management is the secure inbound collection of info from remote users. AtHoc is also not just about data transmission (as important as that is) but it works also at the application layer, enabling sophisticated workflow management. 
This allows, for example, procedures to be defined for certain events, guiding sets of users and devices through expected responses, escalating issues if things don’t get done as expected.

We heard more about BlackBerry’s collaboration with Oxford University on the Centre for High Assurance Computing Excellence, first announced in April at the RSA Conference. CHACE is concerned with a range of fundamental topics, including formal methods for verifying program correctness (an objective that resonates with BlackBerry’s secure operating system division QNX) and new security certification methodologies, with technical approaches based on the Common Criteria of ISO 15408 but with more agile administration to reduce that standard’s overhead and infamous rigidity. CSO Kleidermacher announced that CHACE will work with the Diabetes Technology Society on a new healthcare security standards initiative. The need for improved medical device security was brought home vividly by an enthralling live demonstration of hacking a hospital drug infusion pump. These vulnerabilities have been exposed before at hacker conferences, but BlackBerry’s demo was especially clear and informative, and crafted for a non-technical executive audience.

- MyPOV: The message needs to be broadcast loud and clear: there are life-critical machines in widespread use, built on commercial computing platforms, without any careful thought for security. It’s a shameful and intolerable situation.

I was impressed by BlackBerry’s privacy line. It's broader and more sophisticated than most security companies', going way beyond the obvious matters of encryption and VPNs. In particular, the firm champions identity plurality. For instance, WorkLife by BlackBerry, powered by Movirtu technology, realizes multiple identities on a single phone. BlackBerry is promoting this capability in the health sector especially, where there is rarely a clean separation of work and life for professionals.
Chen said he wants to “separate work and private life”. The health sector in general is one of the company’s two biggest business development priorities (the other being automotive). In addition to sophisticated telephony like virtual SIMs, they plan to extend AtHoc into healthcare messaging, and have tasked the CHACE think-tank with medical device security. These actions complement BlackBerry’s fine words about privacy.

So BlackBerry’s acquisition plan has gelled. It now has perhaps the best secure real time OS for smart devices, a hardened device-independent Mobile Device Management backbone, new data-centric privacy and rights management technology, remote certificate management, and multi-layered emergency communications services that can be diffused into mission-critical rules-based e-health settings and, eventually, automated M2M messaging. It’s a powerful portfolio that makes strong sense in the Internet of Things. BlackBerry says IoT is 'much more than device-to-device'. It’s more important to be able to manage secure data being ejected from ubiquitous devices in enormous volumes, and to service those things – and their users – seamlessly. For BlackBerry, the Internet of Things is really all about the service.

Every year the Constellation SuperNova Awards recognise eight individuals for their leadership in digital business. Nominate yourself or someone you know by August 7, 2015. The SuperNova Awards honour leaders that demonstrate excellence in the application and adoption of new and emerging technologies. In its fifth year, the SuperNova Awards program will recognise eight individuals who demonstrate true leadership in digital business through their application of new and emerging technologies. Constellation Research is searching for leaders and corporate teams who have innovatively applied disruptive technologies to their businesses, to adapt to the rapidly-changing digital business environment.
Special emphasis will be given to projects that seek to redefine how the enterprise uses technology on a large scale. We’re searching for the boldest, most transformative technology projects out there. Apply for a SuperNova Award by filling out the application here: http://www.constellationr.com/node/3137/apply

SuperNova Award Categories
- Consumerization of IT & The New C-Suite - The Enterprise embraces consumer tech, and perfects it.
- Data to Decisions - Using data to make informed business decisions.
- Digital Marketing Transformation - Put away that megaphone. Marketing in the digital age requires a new approach.
- Future of Work - The processes and technologies addressing the rapidly shifting work paradigm.
- Matrix Commerce - Commerce responds to changing realities from the supply chain to the storefront.
- Next Generation Customer Experience - Customers in the digital age demand seamless service throughout all lifecycle stages and across all channels.
- Safety and Privacy - Not 'security'. Safety and Privacy is the art and science of protecting information assets, including your most important assets: your people.
- Technology Optimization & Innovation - Innovative methods to balance innovation and budget requirements.

Five reasons to apply for a SuperNova Award
- Exposure to the SuperNova Award judges, comprised of the top influencers in enterprise technology
- Case study highlighting the achievements of the winners written by Constellation analysts
- Complimentary admission to the SuperNova Award Gala Dinner and Constellation's Connected Enterprise for all finalists November 4-6, 2015 (NB: lodging and travel not included)
- One year unlimited access to Constellation's research library
- Winners featured on Constellation's blog and weekly newsletter.

Ray Wang tells us now that writing a book and launching a company are incredibly fulfilling things to do - but ideally, not at the same time.
He thought it would take a year to write "Disrupting Digital Business", but since it overlapped with building Constellation Research, it took three! But at the same time, his book is all the richer for that experience.

Ray is on a world-wide book tour (tweeting under the hash tag #cxotour). I was thrilled to participate in the Melbourne leg last week. We convened a dinner at Melbourne restaurant The Deck and were joined by a good cross section of Australian private and public sector businesses. There were current and recent executives from Energy Australia, Rio Tinto, the Victorian Government and Australia Post among others, plus the founders of several exciting local start-ups. And we were lucky to have special guests Brian Katz and Ben Robbins - two renowned mobility gurus.

The format for all the launch events has one or two topical short speeches from Constellation analysts and Associates, and a fireside chat by Ray. In Melbourne, we were joined by two of Australia's deep digital economy experts, Gavin Heaton and Joanne Jacobs. Gavin got us going on the night, surveying the importance of innovation, and the double-edged opportunities and threats of digital disruption. Then Ray spoke off-the-cuff about his book, summarising years of technology research and analysis, and a great many cases of business disruption, old and new. Ray has an encyclopedic grasp of tech-driven successes and failures going back decades, yet his presentations are always up-to-the-minute and full of practical can-do calls to action. He's hugely engaging, and having him on a small stage for a change lets him have a real conversation with the audience.
Speaking with no notes and PowerPoint-free, Ray ranged across all sorts of disruptions in all sorts of sectors, including: - Sony's double cassette Walkman (which Ray argues playfully was their "last innovation") - Coca Cola going digital, and the speculative "ten cent sip" - the real lesson of the iPhone: geeks spend time arguing about whether Apple's technology is original or appropriated, when the point is their phone disrupted 20 or more other business models - the contrasting Boeing 787 Dreamliner and Airbus A380 mega jumbo - radically different ways to maximise the one thing that matters to airlines: dollars per passenger-miles, and - Uber, which observers don't always fully comprehend as a rich mix of mobility, cloud and Big Data. And I closed the scheduled part of the evening with a provocation on privacy. I asked the group to think about what it means to call any online business practice "creepy". Have community norms and standards really changed in the move online? What's worse: government surveillance for political ends, or private sector surveillance for profit? If we pay for free online services with our personal information, do regular consumers understand the bargain? And if cynics have been asking "Is Privacy Dead?" for over 100 years, doesn't it mean the question is purely rhetorical? Who amongst us truly wants privacy to be over?! The discussion quickly attained a life of its own - muscular, but civilized. And it provided ample proof that whatever you think about privacy, it is complicated and surprising, and definitely disruptive! (For people who want to dig further into the paradoxes of modern digital privacy, Ray and I recently recorded a nice long chat about it). Here are some of the Digital Disruption tour dates coming up: Acknowledgement: Daniel Barth-Jones kindly engaged with me after this blog was initially published, and pointed out several significant factual errors, for which I am grateful. 
In 2014, the New York Taxi and Limousine Commission (TLC) released a large "anonymised" dataset containing 173 million taxi rides taken in 2013. Soon after, software developer Vijay Pandurangan managed to undo the hashed taxi registration numbers. Subsequently, privacy researcher Anthony Tockar went on to combine public photos of celebrities getting in or out of cabs, to recreate their trips. See Anna Johnston's analysis here.

This re-identification demonstration has been used by some to bolster a general claim that anonymity online is increasingly impossible. On the other hand, medical research advocates like Columbia University epidemiologist Daniel Barth-Jones argue that the practice of de-identification can be robust and should not be dismissed as impractical on the basis of demonstrations such as this. The identifiability of celebrities in these sorts of datasets is a statistical anomaly, reasons Barth-Jones, and should not be used to frighten regular people out of participating in medical research on anonymised data. He wrote in a blog that:

- "However, it would hopefully be clear that examining a miniscule proportion of cases from a population of 173 million rides couldn’t possibly form any meaningful basis of evidence for broad assertions about the risks that taxi-riders might face from such a data release (at least with the taxi medallion/license data removed as will now be the practice for FOIL request data)."

As a health researcher, Barth-Jones is understandably worried that re-identification of small proportions of special cases is being used to exaggerate the risks to ordinary people. He says that the HIPAA de-identification protocols, if properly applied, leave no significant risk of re-id. But even if that's the case, HIPAA processes are not applied to data across the board.
The TLC data was described as "de-identified", and the fact that any people at all (even stand-out celebrities) could be re-identified from the data does create a broad basis for concern: "de-identified" is not what it seems. Barth-Jones stresses that in the TLC case, the de-identification was fatally flawed [technically: it's no use hashing data like registration numbers with limited value ranges, because the hashed values can be reversed by brute force], but my point is this: who among us can tell the difference between poorly de-identified and "properly" de-identified? And how long can "properly de-identified" last?

What does it mean to say casually that only a "minuscule proportion" of data can be re-identified? In this case, the re-identification of celebrities was helped by the fact that lots of photos of them are readily available on social media; but with so many photos in the public domain now, regular people are going to become easier to identify too. My purpose here is not to play what-if games, though, and I know Daniel advocates statistically rigorous measures of identifiability. We agree on that -- in fact, over the years, we have agreed on most things. The point I am trying to make in this blog post is that, just as nobody should exaggerate the risk of re-identification, nor should anyone play it down.

Claims of de-identification are made almost daily for really crucial datasets, like compulsorily retained metadata, public health data, biometric templates, social media activity used for advertising, and web searches. Some of these claims are made with statistical rigor, using formal standards like the HIPAA protocols; but other times the claim is casual, made with no qualification, with the aim of comforting end users. "De-identified" is a helluva promise to make, with far-reaching ramifications.
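The brute-force reversal mentioned above is worth seeing concretely. The sketch below is illustrative, not a reproduction of Pandurangan's actual work: it assumes, for the sake of the example, a medallion format of one digit, one letter and two digits (like "5X55"), and shows why hashing a small value space achieves nothing — every possible input can simply be pre-hashed into a lookup table.

```python
import hashlib
import string
from itertools import product

# Medallion numbers occupy a tiny value space, so an MD5 hash of one can
# be inverted by enumerating every candidate. Format assumed here for
# illustration: digit + letter + two digits, e.g. "5X55".
def build_rainbow_table():
    table = {}
    for d, l, n in product(string.digits, string.ascii_uppercase, range(100)):
        medallion = f"{d}{l}{n:02d}"
        table[hashlib.md5(medallion.encode()).hexdigest()] = medallion
    return table

table = build_rainbow_table()          # only 26,000 entries to precompute

hashed = hashlib.md5(b"5X55").hexdigest()  # a "de-identified" record
print(table[hashed])                   # prints 5X55: identity recovered
```

The same trick works for any pseudonymisation scheme whose inputs are enumerable, which is why "hashed" is not a synonym for "anonymous".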
Daniel says de-identification researchers use the term with caution, knowing there are technical qualifications around the finite probability of individuals remaining identifiable. But my position is that the fine print doesn't translate to the general public, who only hear that a database is "anonymous". So I am afraid the term "de-identified" is meaningless outside academia, and in casual use is misleading.

Barth-Jones objects to the conclusion that "it's virtually impossible to anonymise large data sets", but in an absolute sense, that claim is surely true. If any proportion of people in a dataset may be identified, then that data set is plainly not "anonymous". Moreover, as statistics and mathematical techniques (like facial recognition) improve, and as more ancillary datasets (like social media photos) become accessible, the proportion of individuals who may be re-identified will keep going up. [Readers who wish to pursue these matters further should look at the recent Harvard Law School online symposium on "Re-identification Demonstrations", hosted by Michelle Meyer, in which Daniel Barth-Jones and I participated, among many others.]

Both sides of this vexed debate need more nuance. Privacy advocates have no wish to quell medical research per se, nor do they call for absolute privacy guarantees, but we do seek full disclosure of the risks, so that the cost-benefit equation is understood by all. One of the obvious lessons in all this is that "anonymous" or "de-identified" on their own are poor descriptions. We need tools that meaningfully describe the probability of re-identification. If statisticians and medical researchers take "de-identified" to mean "there is an acceptably small probability, namely X percent, of identification", then let's have that fine print. Absent the detail, lay people can be forgiven for thinking re-identification isn't going to happen. Period. And we need policy and regulatory mechanisms to curb inappropriate re-identification.
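The "fine print" asked for above can at least be estimated. One simple, standard heuristic is to count how many records are unique on their quasi-identifiers (the fields an attacker could match against another dataset). The sketch below is a toy, with entirely invented records; real risk measurement (as in the HIPAA expert-determination route) is far more sophisticated.

```python
from collections import Counter

# Invented records keyed on quasi-identifiers: (postcode, birth_year, sex).
records = [
    ("2000", 1971, "F"), ("2000", 1971, "F"), ("2000", 1980, "M"),
    ("3000", 1955, "F"), ("3000", 1980, "M"), ("3000", 1980, "M"),
    ("4000", 1999, "F"),
]

counts = Counter(records)
# A record unique on these fields is a candidate for linkage attack.
unique = sum(1 for r in records if counts[r] == 1)
risk = unique / len(records)
print(f"{unique} of {len(records)} records are unique on these fields "
      f"({risk:.0%} exposed to linkage)")
```

A number like this ("X percent of records are unique on postcode, birth year and sex") is exactly the kind of disclosure that would make "de-identified" a meaningful claim rather than a comforting label.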
Anonymity is a brittle, essentially temporary, and inadequate privacy tool. I argue that the act of re-identification ought to be treated as an act of Algorithmic Collection of PII, and regulated as just another type of collection, albeit an indirect one. If a statistical process results in a person's name being added to a hitherto anonymous record in a database, it is as if the data custodian went to a third party and asked them, "do you know the name of the person this record is about?". The fact that the data custodian was clever enough to avoid having to ask anyone about the identity of people in the re-identified dataset does not alter the privacy responsibilities that arise. If the effect of an action is to convert anonymous data into personally identifiable information (PII), then that action collects PII. And in most places around the world, any collection of PII automatically falls under privacy regulations. It looks like we will never be able to guarantee anonymity, but the good news is that for privacy, we don't actually need to. Privacy is the protection you need when your affairs are not anonymous, for privacy is a regulated state where organisations that have knowledge about you are restrained in what they do with it. Equally, the ability to de-anonymise should be restricted in accordance with orthodox privacy regulations. If a party chooses to re-identify people in an ostensibly de-identified dataset, without a good reason and without consent, then that party may be in breach of data privacy laws, just as they would be if they collected the same PII by conventional means like questionnaires or surveillance. Surely we can all agree that re-identification demonstrations serve to shine a light on the comforting claims made by governments, for instance, that certain citizen datasets can be anonymised.
In Australia, the government is now implementing telecommunications metadata retention laws in the interests of national security; the metadata, we are told, is de-identified and "secure". In the UK, the National Health Service plans to make de-identified patient data available to researchers. Whatever the merits of data mining in diverse fields like law enforcement and medical research, my point is that any government's claims of anonymisation must be treated critically (if not skeptically), and subjected to strenuous and ongoing privacy impact assessment. Privacy, like security, can never be perfect. Privacy advocates must avoid giving the impression that they seek unrealistic guarantees of anonymity. There must be more to privacy than identity obscuration (to use a more technically correct term than "de-identification"). Medical research should proceed on the basis of reasonable risks being taken in return for beneficial outcomes, with strong sanctions against abuses, including unwarranted re-identification. And then there would be no need for a moral panic over re-identification if and when it does occur, because anonymity, while highly desirable, is not essential for privacy in any case.
Data deduplication has become a hot storage feature in disk-to-disk backup and even some primary disk storage. But not all dedupe works the same way. As competition heats up, so does the rhetoric about what works best. Separate the fear, uncertainty, and doubt (FUD) from the facts when it comes to using hash values in data deduplication.

Deduplication and Hashing

Deduplication – also sometimes called single instancing – eliminates redundant data on a given storage media. As data is stored to disk, duplicate blocks of data are identified. Instead of writing the duplicate block to disk, a much smaller pointer is inserted in its place. An index of the data blocks is maintained so the single instance of data may be retrieved for multiple different file requests.
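As a rough sketch of that pointer-and-index scheme (not any vendor's actual implementation; real products use a cryptographic hash such as SHA-1 or SHA-256 rather than std::hash, and may verify blocks byte-for-byte on a hash match):

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Toy block store: each incoming block is hashed; if the hash has been
// seen before, only a pointer (a slot index) to the existing copy is
// recorded instead of writing the block again.
struct DedupeStore {
    std::vector<std::string> blocks;            // unique blocks only
    std::unordered_map<std::size_t, std::size_t> index;  // hash -> slot

    // Returns the slot the caller records as its "pointer".
    std::size_t write(const std::string& block) {
        std::size_t h = std::hash<std::string>{}(block);
        auto it = index.find(h);
        if (it != index.end())
            return it->second;                  // duplicate: no new write
        blocks.push_back(block);
        index[h] = blocks.size() - 1;
        return blocks.size() - 1;
    }

    const std::string& read(std::size_t slot) const { return blocks[slot]; }
};
```

Two files containing the same block both end up holding the same slot number, which is exactly why the storage consumed shrinks while every file request can still be satisfied from the single stored instance.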
There is a lot to say about MPLS and how it works. Here I have written a simple introduction, but only from one perspective: the customer side. There is nothing here about BGP and all the things that need to be configured for MPLS to function in the ISP cloud. As an introduction to MPLS, this text will take you to the central office and branch side of the MPLS configuration; approached this way, it is simpler to explain and a gentler entry into the world of MPLS networking technology. In MPLS networks, packets are sent with a special MPLS header inserted in front of the IP packet data; this header is sometimes referred to as a label stack. Each label in the stack is a 32-bit entry carrying several fields:
- 20-bit label value
- 3-bit traffic-class field, important for quality of service (QoS)
- 1-bit bottom-of-stack flag
- 8-bit time-to-live (TTL) field
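The field layout above (from RFC 3032) can be packed into and unpacked from a 32-bit word with plain bit operations. This is an illustrative sketch, not code from any real router stack:

```cpp
#include <cstdint>

// One MPLS label stack entry is 32 bits (RFC 3032):
//   bits 31-12: 20-bit label value
//   bits 11-9 : 3-bit traffic class (QoS)
//   bit  8    : bottom-of-stack flag
//   bits 7-0  : 8-bit TTL
struct MplsLabel {
    uint32_t label;   // 0..1048575
    uint8_t  tc;      // 0..7
    bool     bos;
    uint8_t  ttl;
};

uint32_t pack(const MplsLabel& l) {
    return (l.label << 12) | (uint32_t(l.tc) << 9) |
           (uint32_t(l.bos) << 8) | l.ttl;
}

MplsLabel unpack(uint32_t w) {
    return { w >> 12, uint8_t((w >> 9) & 0x7),
             bool((w >> 8) & 0x1), uint8_t(w & 0xFF) };
}
```

A label-switching router reads just this word to make its forwarding decision, which is why MPLS forwarding is cheaper than a full IP lookup.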
Definition: The vertex which an edge of a directed graph enters. See also source, st-digraph. Note: That is, an edge goes from the source to the target. Entry modified 17 December 2004. Cite this as: Paul E. Black, "target", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/target.html
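A minimal illustration of the definition (the names here are mine, not from the dictionary): storing each directed edge as a (source, target) pair, the target is the vertex the edge enters, so counting edges entering a vertex gives its in-degree.

```cpp
#include <vector>

// An edge of a directed graph goes from its source to its target:
// the target is the vertex the edge enters.
struct Edge { int source; int target; };

// Count how many edges enter a given vertex (its in-degree).
int in_degree(const std::vector<Edge>& graph, int v) {
    int n = 0;
    for (const Edge& e : graph)
        if (e.target == v) ++n;
    return n;
}
```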
What Is Cloud Computing? No universally accepted definition exists for cloud computing -- yet. Wikipedia offers this vague description: "Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on demand, like the electricity grid." The results of BS&T's survey confirm the confusion over the definition of cloud computing: 13 percent of respondents said cloud computing is "using specific applications, like CRM or ERP, over the Internet," 8 percent said it is "using a provider's raw CPU time and storage resources to run apps we buy/develop on our own," 6 percent identified it as "using a provider's development system to create apps running on a hosted platform" and 9 percent said it's "a marketing term used haphazardly." Meanwhile, the majority of survey participants (62 percent) said it was all of the above. "It definitely is a marketing term," ING's Boehme concurs. But, he notes, he sees a strong distinction between cloud computing and simple hosted software (generally referred to as "software as a service"). To truly be cloud, in Boehme's view, a software architecture needs to be multitenant -- in other words, an environment in which a single instance of software on a server serves multiple clients (tenants) rather than the one-to-one relationship of a software instance to a client commonly found with SaaS arrangements. "If it's multitenant and running in a virtualized environment -- servers, storage, the network and all -- it's cloud," Boehme says. According to Andrew Greenway, global cloud computing program lead at Accenture, "Cloud is a style of computing based on the Internet that allows customers to pay for exactly the resources and infrastructure they use." Its characteristics, he says, include lack of an up-front capital requirement, shared service delivery over the Internet and pay for use. 
Although the technologies underpinning cloud computing are not brand-new, two recent developments have made it more viable, according to Greenway. One is the introduction of many cloud offerings that truly let users "pay by the drink," whereas in the past, that often has not been the case, he says. The other is the trend of technology giants such as Amazon, Google and Microsoft making huge investments in cloud infrastructure that they extend to other companies. There are three basic flavors of clouds in use at banks -- internal clouds, external clouds and hybrids. Nineteen percent of the participants in the BS&T/InformationWeek Analytics survey said they are developing or using an internal, or private, cloud network. "An internal cloud means you've virtualized your storage, your operating environment, your servers, your network, everything -- and the whole environment is self-provisioning and self-adjusting to people's resource needs," ING's Boehme explains. Internal clouds are also multitenant, with many people, divisions or functions accessing the same resources almost simultaneously. "You really are talking about the ultimate in virtualization," Boehme says. But Laurent Lachal, senior analyst at Ovum, argues that virtualization is not the point of a private cloud, although it is a prerequisite. "It's looking at the way IT delivers its services in a new way, moving from the traditional project management and procurement processes to something more dynamic, whereby users have access to a self-service portal they can use to select resources and then give them back to the pool when they're done," he contends. "Virtualization is relevant because it's part of the effort to allow users to use resources dynamically."
After the release of the Kinect sensor, in the wake of its success, other non-contact motion control devices began to appear. Kinect was the basis for the growth and development of the market for such devices: investors saw the prospects and understood the value of investing in gesture control devices. However, the most significant and successful was the Leap Motion Controller. Like its prototype, it is based on motion capture technology. The device connects to a USB port; in size, it is like two flash drives put together. Technically, the Leap device captures the projection of the user's hands in space by using two optical sensors (cameras) and an infrared light source (the developers do not exclude the possibility that future versions of the device will have a different number of cameras). The device is placed with its worktop up next to the screen, to create the feeling that the objects on the screen are controlled by hand. After the device is connected, a virtual inverted pyramid is generated over it, with the apex in the central unit. The most effective range extends from 25 to 600 mm above the controller, with a 150-degree field of view. Leap Motion "sees" all the movements within the scope of the pyramid and forwards them to the software, which converts the data signals into coordinates and messages. The software is able to recognize both simple gestures (virtual touch and pressing) and complex movements: scaling, dragging, rotation, drawing different geometric shapes. Thus, the device itself does not perform any calculations or transformations, leaving it all at the mercy of the software on the host, which removes noise from the image and builds models of hands and fingers, which are used as pointers. With the origin of coordinates at the center of the device, Leap interprets the coordinate axes as follows: X is negative to the left of the device and, accordingly, positive to the right.
The Y coordinate grows upward and does not have negative values, since Leap only "sees" objects that are located at least 25 mm above it. Positive Z is directed towards the user, while negative Z is directed towards the screen.

Leap Motion SDK

The Leap Motion SDK is developing surprisingly fast, and its new versions are released with enviable regularity: despite being around for a relatively short time, it already has a full second version of the tools, and modifications have also emerged. To be precise, these are still in their beta stage, and we will use the most recent version of the SDK available at the time of writing this article, because each new version provides obvious improvements, which include more opportunities for tracking the skeleton (the "bones" of the hands). As you might expect, the Leap Motion SDK works on all popular platforms: Windows NT, OS X, Linux. Recently I have most often worked on the Mac (whereas I'm editing this article on an EEE PC with Win XP, and it's OK. - Editor's Note), so the description that follows (with some reservations) will refer to this particular operating system. If you do not get on with it, do not despair, because the Leap Motion SDK is cross-platform, and you can easily adapt the information from this article to any of the supported operating systems.

Ready to work hard!

To start working with the Leap Motion controller, you need to register on the website of the device manufacturer, and then download the archive LeapDeveloperKit_2.1.1 + 21671_mac.tar from the Downloads section. Unzip it to find a folder that contains the bundle Leap_Motion_Installer_skeleton-release_public_mac_x64_2.1.1 + 21671_ah1704.dmg (a disk image for OS X), which contains the drivers for the device as well as demo applications. Next to the bundle, you will find the LeapSDK directory that contains all the required libraries and APIs for the development of applications that work with the Leap Motion device. The folder also includes documentation and samples.
Apart from demo applications, the bundle contains Airspace Home, a kind of client for the Leap Motion app store: you can upload your applications and sell them in the same way as on other digital distribution platforms. The main difference between the second version of the SDK and the first one is the new system for tracking the "skeleton" of the upper extremities. It includes processing additional information about the bones of hands and fingers, the ability to predict the location of bones invisible to the device, and the construction of models of hands which are not fully visible. First, install the contents of the bundle (I am sure that under Windows it has the same name, but with the exe extension). The installation program itself, located inside the image, is called Leap Motion.pkg. It launches the installation of all of the above. When the software for Leap Motion has been installed, a driver starts automatically and "settles" as a daemon in the menu bar (top right). Three new applications will appear in the Programs folder: the driver itself, the Leap Motion Orientation demo program (which I recommend starting with) and Airspace. If the controller has not been connected yet, now is the time to do it. The icon (in the menu bar) is highlighted in green. When you click on it, a menu with five items opens. The first item, Launch Airspace, starts the eponymous window client. By default, it contains seven demo applications and two links leading to the Airspace Store and the developer community. Each of the demonstrations shows some functionality of Leap Motion. Clicking the Visualiser item opens a demonstrator in which you can see how the device "sees" your limbs. That is, if you move your hands over the active area of the device, the application will display them in the virtual space. The Pause Tracking button suspends tracking; Quit disables the daemon. When the software for Leap Motion is installed, you can set up the developer tools.
However, I will assume that you have the latest version of the operating system and developer tools (Xcode). As I already said, after the archive is unpacked, the folder with the SDK sits next to the installation bundle. This folder contains documentation, samples, and header and object files for all officially supported languages. The client computer and the controller communicate over a TCP connection that uses ports 6437, 6438 and 6439, so for the correct operation of the device you have to make sure that they are not blocked by the firewall.

Coding for Leap Motion

Today we will focus on native applications for OS X, but as the tools are cross-platform, you can easily adapt our progs for another supported operating system. We will not develop a console application that prints the coordinates passed to it by the controller; that is boring. We'll immediately get immersed in some serious coding and write an application that displays a graphical representation. The Leap Motion SDK provides wonderful means of obtaining data from the controller, but it has nothing for graphics output. Therefore, we'll have to use additional tools. To draw graphics from native applications on OS X you need to use OpenGL. This idea can make us sad: the level is too low, the description won't fit into any article, and it all makes you yawn. So we will use a higher-level wrapper for OpenGL. From the rich variety of such libraries I chose Cinder. Cinder is an open-source set of libraries for image processing, graphics, sound, and computational geometry. As I said above, Cinder is cross-platform, and the same code will work not only on desktop platforms, but also on Apple smartphones and tablets. In the future, the developers are going to expand the range of supported hardware and software platforms.
In addition to generating a draft for a new project, Cinder has a TinderBox utility, which can be used to create a project with support for OpenGL, DirectX or CocoaView (OpenGL), and each of these drafts can include support for the Box2D physics engine, the Cairo rendering library, the FMOD audio library, or the OpenCV computer vision library. For Apple devices, you can generate a draft which uses the geolocation and motion managers and is based on the standard frameworks (Core Location, Core Motion). All this can easily be included in the project during its creation through the GUI interface. In addition, a project can be generated for a specific programming environment and operating environment: Xcode (Mac), Xcode (iOS), VC 12/13 (WinRT). As a result, we have more than an API library; all this reminds us of a cross-platform game engine! At the very beginning, you can also immediately create a local Git repository. In my humble opinion, Cinder will soon become the best cross-platform solution, even compared with Qt. As Cinder uses boost quite a lot, it's a good idea to upgrade boost to the latest version. Let's open our favorite console, and the first thing we do is install Homebrew, a package manager for the packages that Apple believes to be obsolete:

ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Then we'll install boost 1.55 with it: brew install boost. To work with Cinder directly, it is enough to download and unpack it, and to generate a project you can use the TinderBox utility located in the tools subfolder.

Hands, fingers, space management

OK, to warm up we'll create an app that displays what the sensor sees in a special window. If you read my articles about Kinect, you may remember that we started there in the same way, so let's say that's our tradition. A draft from TinderBox for OpenGL will do very well; we'll just have to add Leap Motion support to it.
To do this, drag the two files specified below from the include subdirectory of the LeapSDK folder you unpacked (more details about this are provided above) into the directory tree of the Xcode project: Leap.h and LeapMath.h. When the transfer is complete, a dialog box appears where you have to specify how to insert/bind the files to the project: check the box Destination -> Copy items into destination group's folder (if needed), tick Folders -> Create groups for any added folders, and below tick the project which the files are added to. We also need a dynamic library. Since the C++ compiler (LLVM) included in Xcode follows the C++11 standard, it is necessary to use a library compiled to that standard. There is such a lib: the OS X version is called libLeap.dylib and is located in the libc++ subdirectory of the lib directory of the LeapSDK folder. The lib also has to be placed into the Xcode project tree, and you need to go through the same dialogue after that. Now you have to instruct the Xcode environment to use the lib added to the project. In the tree of files/directories of the project, click the name of the project (the top item), and a menu for project configuration will appear. Go to the Build Phases tab. In the top left corner of the tab, click on the plus sign, and in the pop-up menu that appears select New copy files build phase. A minimized Copy Files panel will appear at the bottom of the tab. Maximize it, select Executables from the Destination drop-down list, drag the dynamic lib from the project tree to the empty file list (below), and remove the flag on Copy only when installing. Now it is connected to the project. The next step is required so that the sensor submits "raw" image data of what it sees: on the General tab of the Leap Motion settings (Settings in the context menu of the device symbol in the menu bar), check the Allow Images box.
The draft generated by TinderBox includes several folders, files and the necessary frameworks. As I called the project RawImagesApp, I added a header file RawImages.h. That is where I put the inclusion of the header files for Cinder and Leap, the inclusion of the Leap namespace, and the declaration of the Leap Motion controller object, which is, in fact, the central subject of this review. In addition, TinderBox generated the source code for this project, which we'll use as a good starting point for the development. The cpp file contains the main class of the application (in my case, RawImagesApp), which matches the name of the project and is inherited from the Cinder base class AppNative. The window is created with a macro, CINDER_APP_NATIVE. In the RawImagesApp class, virtual functions of the base class are declared and implemented. The setup function is called at the launch of the application, and here we put the code for its initialization: to display "raw" graphic data in this method, you have to set a special flag among the sensor's policies; to do this, call the setPolicyFlag method, passing it the controller's POLICY_IMAGES value. The update function is called in each frame for updating; draw is called to redraw the content; mouseDown is called when you press the mouse button. The default draft does not include all possible functions; you can, for example, add prepareSettings, a function that is called before the creation of a window and allows you to pass parameters to it. Let's add this function to enlarge the window we create, and let's also set the refresh rate for it. The declaration in the RawImagesApp class looks like this:

void prepareSettings( ci::app::AppBasic::Settings* settings );

and the implementation is like this:

void RawImagesApp::prepareSettings( Settings* settings )
{
    settings->setWindowSize( 1024, 768 );
    settings->setFrameRate( 60.0f );
}

I am sure that comments are superfluous here.
Let's add an OpenGL texture to the main class of the application: gl::Texture tex;. We'll need it for displaying. In the update function we will get the images from the sensor frame by frame, then process them and render them to the texture (see source). At each frame we get a frame from the controller: Frame frame = controller.frame();. A Frame class object contains all the other objects, and the information about them is generated by the controller; we just need to get them out of it. Incidentally, getting frames this way, i.e., by taking them from the controller (sequentially polling the device), is the simplest and most commonly used method. All the intermediate cases are predetermined: if at the next poll a new frame is not ready yet, the old one is returned; if at the moment of a subsequent poll several frames are ready, they are sent to the history. There is another way to get frames, but we do not need it now, so we will discuss it in the next section. After receiving the frame, we extract the images made by the sensor: ImageList images = frame.images();. There are two of them, as the sensor has two cameras, so there are two images at any given moment. Next, we process both of them in consecutive stages. First, the line const unsigned char* image_buffer = image.data(); obtains the data of the image; the images we obtain from the controller at a given moment differ not only in content but also in size. The next line creates a graphics surface object (Surface), part of the Cinder API. Its constructor is given four parameters: the width and the height of the surface, the use of an alpha channel, and the sequence of the color channels (the constant SurfaceChannelOrder::RGBA corresponds to the order red, green, blue, alpha, but there are other orderings as well; for example, different color sequences are used in GDI or Quartz). Then we use an iterator to go through all the pixels of the surface (which is still empty).
Within this cycle, the color of the pixels is set. I decided to give the displayed image a reddish hue (like in DOOM :)). Therefore, the value for the red channel of each pixel is set according to the value in the image data, and the remaining channels are set to zero. After going through the whole image, we construct a texture object with the gl::Texture constructor on the basis of the surface passed as a parameter. If you now display the texture on the screen, it will be too small. Therefore, we need to pre-scale it: glScalef(2.0, 3.0, 0.0);. Now let's display it: gl::draw(tex);. In the following example we will display our hands in the machine context; that is, we'll draw them at the corresponding coordinates. This task is more difficult than the previous one, and LeapSDK still has a fairly low-level interface, so to simplify our problem we will use some existing work. Stephen Schieberl, an American programmer working under the nickname Ban the Rewind, has developed a couple of classes (Listener, inherited from Leap::Listener, and Device) that perform all the typical work related to processing and recovering the status of the device. In addition, Stephen added some functions to the file that perform calculations on the coordinates and matrices, which allows us to focus on more sophisticated tasks. These calculations are basically related to the fact that, unlike the coordinates of a desktop operating system, where the Y axis increases from top to bottom, the origin of coordinates for Leap Motion (0, 0, 0) is at the bottom left (Y increases from bottom to top), and, therefore, when you use the Y coordinate values, they have to be inverted. Additional calculations, as mentioned above, are carried out on vectors and matrices. So, let us create a new project in the same way as the previous one. Additionally, include the files Cinder-LeapMotion.h and Cinder-LeapMotion.cpp (see the supplementary materials to this article).
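The per-pixel tinting loop just described can be sketched with plain buffers, so it builds without Cinder (this is a reduction of the idea, not the article's actual Surface code): each 8-bit brightness sample from the sensor becomes an RGBA pixel whose red channel carries the value and whose other channels stay zero.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert a single-channel sensor image into RGBA with only the red
// channel populated -- the "DOOM tint" described in the text.
std::vector<uint8_t> tint_red(const uint8_t* image_buffer,
                              int width, int height) {
    std::vector<uint8_t> rgba(static_cast<std::size_t>(width) * height * 4, 0);
    for (int i = 0; i < width * height; ++i)
        rgba[static_cast<std::size_t>(i) * 4] = image_buffer[i];  // R; G/B/A stay 0
    return rgba;
}
```

In the real app the same assignment happens inside the Surface iterator loop, and the resulting surface is then handed to the gl::Texture constructor for display.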
There are new additions to the member variables of the main class of the application, including: mDevice, a link to the device (an object of the recorder class); mFrame, a Frame class object (we have already discussed this class in the previous section); and mCamera, an object of the CameraPersp class from the Cinder lib. An onFrame method has also been added (a callback from the ancestor class), which, upon accepting a Frame class object, makes it current by assigning it to the mFrame member variable. In the setup method, we can enable the modes for drawing and for smoothing lines and polygons, and initialize the camera: setting the visibility scope (within the parameters of the constructor) and setting the point of view (in the lookAt method). After that, a recorder Device class object is created, which includes three essential objects of the following classes: Controller, Device (from the Leap namespace) and Listener; moreover, we cannot do without a mutex. Here we come to the second method of receiving frames from the device: listening. Our device class is inherited from the Listener class, which enables us to implement this option; that is, we get frames from the controller with a regularity corresponding to the frequency of its work. When the controller is ready to transmit a frame, the Listener class calls the onFrame method which we have newly defined and passes it the frame (in the parameter); we mentioned this method above. By the way, why do we need the mutex? The fact is that when listening, the onFrame callback function is invoked in multithreaded mode. That is, each of its calls is made in an independent thread. So we need to take care of thread safety at the time when we receive a frame from the device, and this is where the mutex is used. When listening, it is also possible to ignore the arrival of a new frame (e.g., if the previous frame has not been processed) and add it to the history (for subsequent processing).
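The thread-safety pattern just described can be sketched independently of the Leap SDK. Frame is reduced here to a stand-in type, and FrameLatch is not the article's actual Device class; it only shows the shape of the mutex usage: the SDK's threads write the latest frame under a lock, and the app's update/draw path reads it back under the same lock.

```cpp
#include <mutex>

// Stand-in for Leap::Frame, so the sketch builds without the SDK.
struct Frame { long id; };

class FrameLatch {
public:
    // Called from the device thread(s), like Listener::onFrame.
    void onFrame(const Frame& f) {
        std::lock_guard<std::mutex> lock(mMutex);
        mFrame = f;
    }
    // Called from the app's update/draw path.
    Frame latest() const {
        std::lock_guard<std::mutex> lock(mMutex);
        return mFrame;
    }
private:
    mutable std::mutex mMutex;
    Frame mFrame{};
};
```

Without the lock, a render-thread read could observe a frame that a device thread is halfway through writing; with it, each side always sees a complete frame.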
Let's return to our code, to the place where we created our Device class object. After its creation, we set the callback function for it. But the most interesting thing happens in the redraw method. First, we do some preparation: clearing the screen, setting the current matrix for the camera, enabling alpha blending and the ability to read from and write to the depth buffer, and setting the color for drawing. Only after that does the drawing begin: the device provides us with three-dimensional vectors for the positions of the elbow and wrist, and with the gl::drawLine method we draw a line between these points. Then we get the number of fingers and, using a loop with an iterator, go through the container of fingers. In Leap Motion, each finger consists of four parts (phalanges): peripheral, intermediate, proximal and metacarpal. While the thumb of a human hand does not have that last phalange, here it is present but has a value of zero. The nested loop that goes through all the phalanges provides us with the coordinates of their different parts: start, center, end, direction. The coordinates are represented as vectors (Vec3f). Within this sub-cycle, the phalanges are drawn with the drawLine drawing method, which receives the detected coordinates. Additionally, the first phalanges form the container of joints (knuckles). When the outer loop terminates, lines are drawn that connect the fingers and form hands. And with that we complete the fun mission of redrawing. Compile and launch the program, hold your hands over the sensor, and a window will display their silhouette.

To sum up

As the device includes two cameras, it is often mounted on virtual reality goggles to create the effect of augmented reality, which is achieved by including in the images captured by the camera the measured brightness values of the infrared emitters and the calibration data required for correcting the complex lens.
In today’s article we discussed the issue of creating application solutions that interact with the device through API. This is a very broad topic, and we managed to review only a few aspects, leaving behind such issues as gestures, special movements, touch emulation, and many other things. All that, as well as many other topics, such as the use of the controller in Windows and Web, integration with gaming / graphics engines, can be discussed in the following articles. It is up to you, so write to us and ask for a follow-up :). And now, good luck in everything you do, and I’m looking forward to talking to you in our next issue!
If you have a text-heavy image and want to draw the reader’s eye to a particular location, you can add highlighting just like in real life. In just a few steps:

1. Choose the shape tool and select the rectangle.
2. Change its properties in the toolbar to “Draw Filled Shape.”
3. Choose the color you want to highlight the text with. We’ll use yellow here, so select it in the Color selector in the bottom-left.
4. Draw a rectangle over the text you want to highlight: start at the top-left of the text and drag down to the bottom-right. It will look like the rectangle is just blocking the text, but don’t worry, we’re not done yet.
5. In the Layer Properties window that opens, change the name of the layer to something descriptive, such as “Highlighting.”
6. Drag the Opacity slider to the left until the highlighting is transparent enough that the text is visible but the color still catches the eye.

Now you’ve highlighted the text. Highlight any other text you want in this layer, and create a new layer if you want to make other changes to the image without the decreased opacity.
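The opacity slider in the last step does standard alpha compositing: each color channel of the result is a weighted mix of the highlight color and the pixel underneath. A small sketch of that math (the 40% opacity is our illustrative choice, and Paint.NET’s exact blend internals may differ):

```python
def blend(base, overlay, opacity):
    """Alpha-composite an overlay color onto a base color.

    opacity: 0.0 (fully transparent) to 1.0 (fully opaque).
    Each channel is opacity * overlay + (1 - opacity) * base.
    """
    return tuple(
        round(opacity * o + (1 - opacity) * b)
        for b, o in zip(base, overlay)
    )

white_page = (255, 255, 255)
yellow = (255, 255, 0)

# At 40% opacity the yellow tints the page but leaves text readable.
print(blend(white_page, yellow, 0.4))  # → (255, 255, 153)
```

The blue channel drops from 255 toward 0, which is why the result reads as a pale yellow wash rather than an opaque block.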
If you were to guess, how many people would you say use Microsoft Office? According to Microsoft, the number is over a billion. While the majority of these users tend to stick with the big three – Word, Excel and PowerPoint – there are other programs included in Office bundles that are equally useful. Take OneNote for example, which is great for taking notes and so much more. So what is so great about OneNote? Here are five things you may not know about this program: If you are in a meeting where figures are flying fast and furious, or you are struggling with some math and have OneNote open, you can use this program to help. Simply enter the numbers into any blank line in a note, add an = sign and hit the spacebar: e.g., (4587×2)-(3900×4)=. The answer should pop up right beside the formula. If it doesn’t, try using different symbols like the asterisk instead of x. You can also insert more complex equations by clicking on the Insert tab, selecting Equation followed by Insert New Equation. You can then select, draw or even type the symbols to create your equation. By default, new notes are created using a white background, or ‘paper’ as it is called in OneNote. However, you don’t have to stick with a white background, as there are actually a plethora of templates you can choose from in order to make your notes stand out. To change the template click on the Insert tab and select Page Templates. A menu will open on the right side of the screen with the different templates to apply to the note. For some users, the templates may be too much. Try looking at the Insert tab and you will be able to select the page color and the lines. If you are in an important meeting and don’t want to miss anything, you can actually use OneNote to record proceedings. The first thing you will want to do is to create a new note and enter the details of the meeting. Then press the Insert tab and select Record Audio. 
After the meeting is finished, you can stop the recording and it will be saved in the note. While the audio is recording, you can also jot down short notes; each note gets a play button beside it, and pressing that play button starts playback from the moment you made the note. Many times when you are heading to meetings or appointments, you probably don’t want to take your laptop with you to take notes. Luckily, OneNote is available as an app for Windows Phone, Android, iPhone and iPad devices. If you log in with your Microsoft account, any notes you create will be synced and available on any device. Check out the OneNote site for links to the apps that you can download and install on your device. So many business tasks are now collaborative that you will likely need to share notes from time to time. With some note apps this can be a bit of a chore, but in OneNote it’s actually quite easy: Simply press File followed by Share and then Get a Sharing Link. This will generate a link that you can send to the people you want to collaborate with. When you press Get a Sharing Link you will be able to set whether people are allowed to just view or also edit your note. If you choose to allow others to edit your notes, they can change everything, and changes sync across all versions of the note in their own OneNote app or browser. This makes it a great tool for collaboration. Looking to learn more about using OneNote in your office? Contact us today to see how our solutions can help.
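The inline-calculator tip from the first section is easy to check. This little model imitates the behavior (take the text before the “=”, evaluate it, append the answer); it is our own sketch for verifying the article’s example, not OneNote code, and the tiny safe evaluator is an assumption about how such a feature could work.

```python
import ast
import operator

# Minimal safe evaluator for + - * / and parentheses.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Recursively evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported expression")

def inline_calc(note_line):
    """Given note text ending in '=', append the computed answer."""
    expr = note_line.rstrip("=").replace("×", "*")
    return note_line + str(evaluate(ast.parse(expr, mode="eval")))

print(inline_calc("(4587×2)-(3900×4)="))  # → (4587×2)-(3900×4)=-6426
```

Note that the article’s example expression actually evaluates to a negative number, -6426.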
When, as an adult, you look back at your childhood experiences, they appear to unfold in slow motion probably because the sheer number of them gives you the impression that they must have taken forever to acquire. So when you recall the summer vacation when you first learned to swim or row a boat, it feels endless. But this is merely an illusion, the way adults understand the past when they look through the telescope of lost time. This, though, is not an illusion: almost all of us faced far steeper learning curves when we were young. Most adults do not explore and learn about the world the way they did when they were young; adult life lacks the constant discovery and endless novelty of childhood. Studies have shown that the greater the cognitive demands of a task, the longer its duration is perceived to be. Dr. David Eagleman at Baylor College of Medicine found that repeated stimuli appear briefer in duration than novel stimuli of equal duration. Is it possible that learning new things might slow down our internal sense of time? Yet another reason to simply keep approaching life as if you’re 6, or 16. The approach itself keeps you young.
2.5.2 What is random number generation? Random number generation is used in a wide variety of cryptographic operations, such as key generation and challenge/response protocols. A random number generator is a function that outputs a sequence of 0s and 1s such that at any point, the next bit cannot be predicted based on the previous bits. However, true random number generation is difficult to do on a computer, since computers are deterministic devices: if the same random number generator is run twice from the same starting state, it produces identical results. True random number generators are in use, but they can be difficult to build. They typically take input from something in the physical world, such as the rate of neutron emission from a radioactive substance or a user's idle mouse movements. Because of these difficulties, random number generation on a computer is usually only pseudo-random number generation. A pseudo-random number generator produces a sequence of bits that has a random-looking distribution. With each different seed (a typically random stream of bits used to generate a usually longer pseudo-random stream), the pseudo-random number generator generates a different pseudo-random sequence. With a relatively small random seed, a pseudo-random number generator can produce a long, apparently random string. Pseudo-random number generators are often based on cryptographic functions like block ciphers or stream ciphers. For instance, iterated DES encryption starting with a 56-bit seed produces a pseudo-random sequence.
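The determinism described above, where the same generator run twice gives identical results, is easy to demonstrate with any seeded pseudo-random generator. Python’s Mersenne Twister stands in here for the DES-based construction the text mentions; the seeding behavior is the same in principle, though Mersenne Twister is not cryptographically secure.

```python
import random

def pseudo_random_bits(seed, n):
    """Expand a seed into n pseudo-random bits."""
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(n)]

# Same seed: the exact same "random looking" sequence, every time.
assert pseudo_random_bits(56, 32) == pseudo_random_bits(56, 32)

# A small seed expands into an arbitrarily long pseudo-random stream;
# different seeds produce different sequences (with overwhelming probability).
print(pseudo_random_bits(56, 16))
```

This is exactly why a pseudo-random generator needs a truly random seed: anyone who knows the seed can reproduce the entire stream.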
Fundamentals of the PKI Infrastructure

Securing corporate information can be a challenge, considering the numerous technologies and platforms that need to be protected. One technology that definitely helps achieve secure data is public key infrastructure (PKI), which enhances the security of data by using advanced access methods and making sure the authenticity of the data flow is preserved.

Securing corporate information can be a challenge these days, considering the numerous technologies and platforms that need to be protected; it can be especially challenging for companies that lack a unified security system that can circumvent and mitigate issues. It is well known that organizations have to rely on a defense-in-depth approach to make sure access to information stays secure at all times. In order to remain in a secured state, information needs to: - Be accessible only by authorized entities (confidentiality) - Be unable to be tampered with (integrity) - Remain online without interruption (availability) These three conditions are known as the CIA triad. One technology that definitely helps achieve some of these objectives is public key infrastructure (PKI). Although it does not guarantee data availability, the PKI is a mechanism that greatly enhances the security of our data by using advanced access methods and making sure the authenticity of the data flow is preserved. Given a complex system, how does the PKI help achieve these objectives? How does it integrate into the CIA triad? What major components does it use, and what must system administrators do to maintain this technology? This paper is all about the foundations of PKI: implementing PKI is becoming mainstream in many companies, but I've often seen a lack of the real understanding needed to ensure proper integration of the system. Let's explore and see how PKI works! Symmetric and Asymmetric Cryptography The PKI relies on cryptography to encrypt and decrypt data.
At the very base of the PKI, we find different ciphers, or encryption algorithms, that can disguise the data (the plaintext) and make it unreadable unless certain conditions are met. To help us understand how a cipher works, let's take this basic example: Plaintext (unsecured data): PKI IS FUN! Cipher text (secured data): CXV VF SHA! How did we actually generate that cipher text? If you take a deeper look, you will find we have used a working cipher by the name of ROT13. This cipher replaces each letter with the letter 13 positions after it in the alphabet, wrapping around at the end. Knowing this fact is the key that allows decryption of the message. Of course, much stronger ciphers are now used to make the data unreadable when the user or system does not have the key. Within PKI, we need two kinds of encryption systems: 1. Symmetric cryptography: This type of encryption has very low overhead and is very fast and efficient. We use symmetric encryption as a first layer of security over a plaintext. With this kind of encryption, the key that encrypts the message is the same key that decrypts it. Some encryption algorithms in this category include AES, Blowfish, CAST, IDEA, and RC4. The main reason we have to use another kind of encryption on top of it is a fundamental problem this technology has: the use of the same key. Why is it an issue? Because there is no way to guarantee safe delivery of that key to the other end; it cannot be kept secure while information is sent. 2. Asymmetric cryptography: This type of encryption adds much more overhead to the encryption process. It uses a pair of keys, so encryption and decryption are performed with different keys. One key is used to encrypt the data (the public key), and the other is used to decrypt it (the private key). These two keys are mathematically related and unlock each other.
But in strong ciphers, one key cannot be derived from the other member of the pair. In the most basic sense, the asymmetric system was developed to allow the secure transport of the symmetric key used to initialize symmetric cryptography. Some examples of asymmetric algorithms include Diffie-Hellman, El Gamal, RSA, and elliptic curve. When dealing with PKI, we mostly deal with asymmetric cryptography, as we do not have much control over the symmetric encryption process. But with our own pair of keys, we can decide to encrypt data and select who else is able to read it through decryption.
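The ROT13 cipher from the worked example above is a few lines to implement, and it is its own inverse: rotating by 13 twice brings you back to the plaintext, which is also why it offers no real secrecy.

```python
def rot13(text):
    """Replace each letter with the letter 13 positions after it (wrapping)."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr(base + (ord(ch) - base + 13) % 26))
        else:
            result.append(ch)  # digits, spaces, punctuation pass through
    return "".join(result)

print(rot13("PKI IS FUN!"))         # → CXV VF SHA!
print(rot13(rot13("PKI IS FUN!")))  # → PKI IS FUN!  (ROT13 is its own inverse)
```

Here the “key” is simply the knowledge that the shift is 13; real symmetric ciphers like AES keep the algorithm public and put all the secrecy in the key.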
The biggest problem with password (in)security is that most of us are using passwords that are all too predictable: They include words that can be found in a dictionary, popular phrases, or common patterns. (And we're not even talking about the most commonly used passwords, "123456" and "password".) Microsoft's Telepathwords is a clever way to test whether your passwords are predictable. Password strength meters--you know, the kind that tell you how "weak" or "strong" your password is--aren't very reliable. They use archaic rules such as "at least 6 characters" and "must include a number" to decide how secure your password is. Telepathwords is different. As you type, it looks for common passwords, phrases, and key sequences to guess your password. If it can easily predict it, your password can be easily hacked. Typing in "p" at the start, for example, immediately calls up "password," "princess," and "porn." You're smart enough not to use those as your password, but you might be surprised--if you don't use a random password generator--at how good this Telepathwords tool is at guessing what you're going to type next. While it can't account for personal information in your password that could make it weaker (a pet's name, for example, if the hacker finds it), Telepathwords is one step closer to true password security knowledge. [h/t Alan Henry]
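The core idea, predicting the next character from a list of common passwords, can be sketched in a few lines. The word list below contains only the three completions the article mentions; the real Telepathwords model is far larger and also considers phrases and keyboard patterns.

```python
COMMON_PASSWORDS = ["password", "princess", "porn"]  # tiny illustrative list

def predicted_next_chars(prefix, corpus=COMMON_PASSWORDS):
    """Return the set of characters that commonly follow this prefix.

    If the prediction set is small and your next keystroke is in it,
    the password typed so far is predictable.
    """
    return {
        word[len(prefix)]
        for word in corpus
        if word.startswith(prefix) and len(word) > len(prefix)
    }

print(predicted_next_chars("p"))   # the article's 'a', 'r', 'o' guesses
print(predicted_next_chars("pa"))  # narrows to 's' (from "password")
```

A randomly generated password defeats this kind of predictor: no prefix of it matches anything in the corpus, so the prediction set stays empty.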
Without adequate Internet safety awareness, children unknowingly become vulnerable to online dangers. Effective July 1, 2006, a new law in Virginia will require state educators to "include a component on Internet safety for students that is integrated in a division's instructional program." The bill -- sponsored by Delegate William H. Fralin, Jr. -- was passed by the General Assembly and signed by Gov. Timothy Kaine in March 2006. As students increasingly rely on the Internet for research and education, instructors and parents become more aware of the risks involved. For instance, when children use online networking Web sites to socialize, they are often unaware that they are disclosing vital information -- age, gender, location, etc. -- used by sexual predators and online criminals when searching for victims. Their information becomes fair game in the dark underworld of online criminal activity, and innocent fun becomes serious danger. According to the National Center for Missing & Exploited Children's (NCMEC) Online Victimization Report, one in every five children between the ages of 10 and 17 is sexually solicited online, and only 25 percent of those children tell a parent or guardian. In addition, one in four children is exposed to unwanted pornographic images. "As technology evolves, so does the creativity of the online predator," said Ernie Allen, president and CEO of the NCMEC, at a hearing in front of the U.S. House of Representatives Energy and Commerce Committee's Subcommittee on Oversight and Investigations. "It is essential to give parents, guardians and children the tools to help protect their families from this possible risk." The law in Virginia is designed to help schools spark awareness of Internet safety in students statewide. Integrated Safety Instruction As technology usage grows in education, many educational institutions are providing some type of Internet safety education. T.C.
Williams High School in Alexandria distributes laptops to students in grades 9 through 12 and has already taken measures to promote responsible computer usage; however, according to Principal John Porter, the new Internet safety law will help unify Internet safety instruction statewide. "The law ensures that each student in the public schools in Virginia will have had some sort of similar instruction relative to the dangers and the concerns of using the Internet," he said. The bill's language doesn't set forth specific requirements for teaching online safety, though after the law goes into effect, the superintendent of public instruction has 45 days to institute guidelines for Internet safety instruction for individual school districts. The superintendent and educators involved in creating the curriculum are encouraged to utilize sources from law enforcement and industry experts to provide the best instruction possible. Another important benefit of the law is repetition, Porter said. Students don't always focus on what adults tell them, but providing ongoing, consistent instruction yearly from class to class will hopefully help to reinforce the importance of Internet safety, he added. Despite the law's focus on education, parental involvement is a necessary component, Porter said. "We can't monitor what happens at home and parents need to take that responsibility," he said. "We all must work together to make sure that the message gets across and appropriate monitoring takes place at both school and at home." The problem is that many parents don't recognize the dangers or take the steps necessary to keep their children safe online. 
According to the Parent's Internet Monitoring Study commissioned by Cox Communications and the NCMEC in 2005, 42 percent of parents surveyed do not review their children's online activities in chat rooms or through instant messaging; 51 percent do not have or do not know if they have software that can monitor what Web sites their children visit or with whom they communicate; 30 percent of parents said their teenagers use computers in private areas of the house, where their activities are less likely to be monitored. "I think the idea is that we need to expand the overall understanding of what is appropriate Internet usage for kids -- not only what's appropriate, but where the danger lies out there," Porter said. Educating students is the first step to educating everyone involved. Unfortunately the tendency is for individuals to deny the risks to themselves, Porter said, adding that many children tell themselves, "Oh, that's not going to happen to me," claiming that teachers and parents are being too overprotective. It's important to know that it can happen to anybody. At T.C. Williams High School, computers are a valuable resource, but Porter recognizes that along with today's technological advances come inevitable risks. "Problems can develop, and we try to cover as many bases as we can to protect people," he said. Providing Internet safety awareness to students in classrooms today should lead to more technologically savvy parents of the future, who will better understand the dangers children face online and take the necessary precautionary steps to help protect them.
Tech jobs don't just help tech employees; they help the entire economy, according to a recent report (pdf). Released by Engine Advocacy and the Bay Area Council Economic Institute, "Technology Works: High-Tech Employment and Wages in the United States" offers several key findings: - Since 2004, technology sector employment growth has outperformed growth in the private sector by a ratio of three to one. High-tech jobs have also proven to be more resistant to fluctuations in the economy. - Between 2002 and 2011, employment growth in Science, Technology, Engineering and Mathematics (STEM) fields has outpaced job growth in all other occupations by a ratio of 27 to one. - This high demand is expected to continue through 2020 and potentially beyond. Employment growth in high-tech industries is projected to continue to surpass growth in other sectors. - Workers in high-tech and STEM jobs are paid between 17 and 27 percent more than employees in other fields. - High-tech job growth is a key factor in regional economic development. “The creation of one job in the high-tech sector of a region is associated with the creation of 4.3 additional jobs in the local goods and services economy of the same region in the long run,” the study states. Evidence of the report's findings can be found around the country. The state of Michigan, Evansville, Ind., and Augusta, Ga., demonstrated the correlation by posting high-tech employment growth and related economic gains.
Washington, DC – Cable in the Classroom (CIC) today launched its fourth online learning game, Coaster Crafter: Build. Ride. Scream!, which illustrates broadband’s learning potential and teaches STEM (science, technology, engineering and mathematics) concepts in fun and immersive ways. The game is now available for play at www.ciconline.org/CoasterCrafter. In Coaster Crafter, important science, math, and engineering concepts are embedded in the activities of designing, building, testing, and then taking a virtual ride in your roller coaster. Coaster Crafter is aimed at middle school and older students, particularly girls. Elements of the game include: - Build. Design challenges that explore how Newton’s laws of motion and a variety of principles like inertia, velocity, acceleration, and potential and kinetic energy affect roller coasters in an immersive, game-like environment. - Ride. Coaster challenges apply what students learn to coaster design. Success earns players extra track segments and tools to improve their own designs. - Scream. A free play area for students to use what they have learned, build the wildest roller coaster ride imaginable and see it brought to life as a virtual ride. - Teacher’s guide. An extensive teacher’s guide with connections to state science and math standards and expert videos with scientists and engineers from the Johns Hopkins University and NASA. - Inspiration. Game play, learning and enthusiasm are linked, and rewards for both success and failure motivate students to experiment more boldly, think more deeply and learn more completely. “By engaging users in designing and testing roller coasters in a game-like environment, Coaster Crafter provides an engaging and relevant context for learning important math and science concepts,” said Frank Gallagher, Executive Director of Cable in the Classroom. 
“The cable industry relies in large part on STEM professionals to help design the great new products and services we offer, keep our technology reliable and running smoothly, and keep our customers satisfied. We recognize the importance of STEM skills not only for our own industry, but also for the economy.” Cable in the Classroom (CIC) is the national education foundation of the U.S. cable industry. Working in partnership with, and on behalf of, the National Cable & Telecommunications Association (NCTA) and our cable industry partners, CIC advocates digital citizenship and the visionary, sensible and effective use of media in homes, schools, and communities. Since 1989, through CIC, local cable companies have been providing complimentary connections to schools, and cable programmers have offered quality educational programming. CIC enables educators to access the best of cable’s video and web content.
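For a frictionless coaster, the potential and kinetic energy concepts the game teaches reduce to mgh = ½mv², so the speed at the bottom of a drop depends only on its height: v = √(2gh). A quick check with an illustrative 20 m first drop (the height is our example, not a figure from the game):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_after_drop(height_m):
    """Speed at the bottom of a frictionless drop, from energy conservation.

    Potential energy m*g*h converts entirely to kinetic energy (1/2)*m*v^2,
    so v = sqrt(2*g*h); the mass cancels out.
    """
    return math.sqrt(2 * G * height_m)

print(round(speed_after_drop(20), 1))  # → 19.8 (m/s, roughly 71 km/h)
```

Real coasters lose some energy to friction and air resistance, which is exactly the kind of refinement the game’s design challenges can explore.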
What is it? Instead of reloading whole pages every time the user makes a change or request, the application exchanges the minimum possible data with the server, while the application remains available to the user. Ajax web applications can have the kind of rich user interfaces usually only available with desktop applications. In February, Microsoft threw in its lot with the OpenAjax Alliance, having changed the name of its Atlas initiative to ASP.net Ajax. It is working with more than 70 organisations, including Google, Mozilla, Sun Microsystems and the Eclipse Foundation, to ensure that Ajax technologies remain open and interoperable. Where did it originate? The term Ajax was not coined until 2005, but the key technologies date back a decade or more. What's it for? As Jesse James Garrett, who coined the name Ajax, points out, HTML was developed to deliver hypertext, not interactive applications. Ajax acts as an intermediary between the user and the server, smoothing out the stops and starts that make using a web application so different to the desktop experience. Ajax is being used to develop collaborative applications, and composite applications, or "mashups", which assemble content from multiple sources and applications. What makes it special? The OpenAjax Alliance says, "Ajax enables rich user experiences while preserving existing back-end infrastructure. Users benefit from next-generation applications that have the feel of desktop applications and provide new capabilities, while IT preserves existing benefits from web-based application deployment and continuity with existing HTML-based back-end infrastructure."
However, there are drawbacks: ● The back button will not necessarily return the user to the unmodified page ● If an author copies a URL to include it as a hypertext anchor in one of their own pages, that anchor will not lead readers to the desired view but to the initial state of the page. How difficult is it to master? What systems does it run on? Alongside Microsoft and Sun, other members of the OpenAjax Alliance include Adobe, BEA, IBM, Novell, Oracle, SAP and Zend. What's coming up? There are many free Ajax tutorials on the web. A good starting point is the Mozilla developer site. Rates of pay Salaries for Ajax web developers start at £25,000. Specialist graphical user interface developers can expect £35,000-plus.
Vulnerability, Virus, No Patch 21 May 2003 Trojan program infects computers by exploiting an Internet Explorer vulnerability Kaspersky Lab, an international data security software developer, reports the appearance of the Trojan program, 'StartPage' - the first malware to infect computers via the "Exploit.SelfExecHtml" vulnerability in the Internet Explorer security system. Making infection particularly dangerous is the fact that Microsoft has yet to release the required patch, essentially leaving users defenseless in the face of this and other, potentially more dangerous threats choosing to exploit the very same vulnerability. StartPage is a classic Trojan - it is sent to victim addresses directly from the author and does not have an automatic send function. The first mass mailing to several hundred thousand addresses was registered in Russia on May 20. The text accompanying the Trojan program is written in Russian and clearly indicates the program's birthplace as either Russia or the former USSR. The StartPage program is a Zip-archive that contains two files - one HTML file and one EXE file. Upon opening the HTML file the StartPage code is launched and proceeds to exploit the Internet Explorer security system vulnerability known as "Exploit.SelfExecHtml". It then proceeds to clandestinely launch the EXE file carrying the Trojan program. "It is hard to call this program dangerous, its collateral effects include only the altering of an old Internet Explorer page. Still, StartPage has set a precedent with its usage of a vulnerability for which there is not yet a patch", commented Eugene Kaspersky, Head of Anti-virus Research at Kaspersky Lab. 
According to Kaspersky Lab statistics, over 85% of virus incidents in 2002 were caused by malicious programs such as 'Klez' and 'Lentin' that exploit the IFRAME Internet Explorer vulnerability, which was discovered over two years ago, and thus users have had plenty of time to install the patch and protect themselves against any similar virus appearing in the future. "With StartPage we are dealing with an open vulnerability. Users can protect themselves with anti-virus software, but not all of them have strong heuristic technology to protect against future viruses", continued Eugene Kaspersky. "A new vulnerability has been exposed that may incite the creation of a multitude of new malware that could lead to new epidemics of a global scale." The following programs are vulnerable to the "Exploit.SelfExecHtml" breach: - Microsoft Internet Explorer 5.0 for Windows 2000 - Microsoft Internet Explorer 5.0 for Windows 95 - Microsoft Internet Explorer 5.0 for Windows 98 - Microsoft Internet Explorer 5.0 for Windows NT 4.0 Kaspersky Lab appeals to Microsoft to make a strong effort to release the necessary patch, as soon other malicious programs will appear that exploit the very same technology. If a solution is not provided soon we can expect a long-lasting, large-scale epidemic that could surpass even the Klez epidemic.
partition

Definition: (1) A division of a set into nonempty disjoint sets that completely cover the set. (2) To rearrange the elements of an array into two (or more) groups, typically such that elements in the first group are less than a value and elements in the second group are greater.

Formal Definition: (1) A partition P of a set S is a set of subsets with the following properties:
- every subset in P is nonempty,
- the subsets in P are pairwise disjoint, and
- the union of the subsets in P is S.

Thanks to Julio A. Cartaya <email@example.com>.

Generalization (I am a kind of ...)

Specialization (... is a kind of me.)
select and partition.

Aggregate parent (I am a part of or used in ...)
quicksort, Dutch national flag, American flag sort.

See also set packing, subset, connected components.

If you have suggestions, corrections, or comments, please get in touch with Paul Black.

Entry modified 2 March 2015.
HTML page formatted Mon Mar 2 16:13:48 2015.

Cite this as:
Paul E. Black, "partition", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 March 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/partition.html
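Sense (2) of the definition maps directly onto the partition step used inside quicksort. The sketch below is an illustrative implementation (not part of the original dictionary entry) that rearranges an array around a given value:

```python
def partition(arr, pivot):
    """Rearrange arr in place so that elements less than pivot come first,
    followed by elements greater than or equal to pivot.
    Returns the index where the second group begins."""
    boundary = 0  # everything before this index is < pivot
    for i in range(len(arr)):
        if arr[i] < pivot:
            arr[i], arr[boundary] = arr[boundary], arr[i]
            boundary += 1
    return boundary

data = [9, 3, 7, 1, 8, 2]
split = partition(data, 5)
# Elements before `split` are < 5; elements from `split` onward are >= 5.
```

This is the Lomuto-style scan: a single pass with a moving boundary, which is why the operation runs in linear time.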
When your hard drive fails, it can start to smoke, trapping your data (and possibly setting off your smoke detector). If you've lost data due to a smoking hard drive, the professional hard drive repair experts at Gillware can recover your data and help you get back on your feet.

People don't often think about the heat produced by their hard drives. When a hard drive is running, after all, it's usually doing so from inside your computer (or on your desk, connected via USB), not cradled in your arms like a newborn infant. But rest assured, if you were to pop open your PC's side panel while your computer is running and lay your hand on the hard drive inside, you would find it to be quite hot.

There are several factors creating this heat. The speed of the spinning platters and the spindle motor inside your hard drive creates friction with the air, which makes things hotter inside the drive. But most of the heat you would feel comes from the circuit board on the back of the hard drive. When you power on your computer, electricity flows through the circuit board and into the spindle motor, setting the hard drive's internal components in motion. When you put your hard drive to work, this component can become the hottest part of the drive.

Too much heat - and too much electricity - can burn out parts of a hard drive's control board, which is what causes a smoking hard drive's inflammatory behavior. Often, the culprit behind a smoking hard drive is a power surge. These surges happen most frequently in the summer, when thunderstorms are more common. Forcing more electricity through a hard drive's control board than it was designed to handle is a bit like trying to fill a water balloon with a firehose. Even if a power surge lasts only a few nanoseconds, in that short time frame it can cram enough power through your hard drive to scorch the circuit board.
External hard drives, many of which receive their power straight from a wall outlet, can be especially vulnerable to a power surge burning their circuit boards. In fact, many external hard drives have two circuit boards, and one is not always as robustly designed as the other. Attached to the drive is a SATA-USB bridging dongle with a SATA plug on one end and a USB port on the other. It is actually far easier for this dongle to burn out than the control board on the hard drive itself. This renders the hard drive inaccessible not just because the drive is now trapped inside its casing, but because the dongle can contain encryption metadata. For example, even if a Western Digital My Book external drive isn't password-protected by the user, it still has hardware-level SmartWare encryption, and the USB dongle handles data encryption and decryption. Without the dongle, the hard drive will show up as blank, even if the drive itself is perfectly healthy. Fortunately, under most circumstances our engineers can circumvent these issues.

Where There's Smoke, There's Fire… and Data Loss

Smoking hard drives are dangerous things. Not only are they a fire hazard, but they can also cause other electronic devices to short out and fail. Plugging a hard drive with a smoked PCB into a power supply unit, for example, can fry the unit and render it inoperable. At the risk of sounding like an anti-smoking PSA, when your hard drive starts to smoke, everything around it feels the consequences.

And, of course, the most pressing problem associated with a smoking hard drive is that your data is trapped on it. All of that data appears to be lost. But with the help of professional data recovery experts in a world-class data recovery lab, what once was lost can still be found.

There was a time, in the days of yore, when this wasn't so. When a hard drive's PCB died, you could just go out and find the same model of drive, remove its control board, and attach it to the failed drive, with a reasonably good chance of recovering your data. What happened? Hard drives grew more complex. As the areal density of the platters inside hard drives grew and manufacturers found new ways to pack ever-increasing amounts of data into the same space, margins for error grew razor-thin. Every hard drive today must be individually calibrated in the factory, and the unique calibration settings for each drive are stored in a ROM chip on the control board.

Nowadays, if you simply replace the control board of a hard drive, the drive can't access its unique ROM chip. Without the proper calibration data to guide it, your hard drive won't work. It may even suffer further damage to its internal components if you try to run it! And so, to properly replace a burned and smoking circuit board, a professional hard drive engineer must carefully transplant the ROM chip as well. This delicate operation should only be attempted by a professional data recovery expert. In some cases, a smoking hard drive may have suffered damage beyond its control board, and other parts may need to be replaced. These types of hard drive surgeries can only be successfully performed in a cleanroom data recovery lab.

Reasons to Choose Gillware for Smoking Hard Drive Repair

When your hard drive starts smoking, Gillware Data Recovery is the data recovery company you want by your side. Our data recovery experts are seasoned hard drive repair veterans with years of experience and thousands of successful data recovery cases under their belts. With world-class expertise and state-of-the-art data recovery tools, Gillware can successfully recover the data from your smoking hard drive. Gillware's services are recommended by Western Digital and Dell, as well as by computer repair and IT professionals across the United States.
Gillware's data recovery lab uses ISO-5 Class 100 rated cleanroom workstations to make sure failed hard drives are repaired in clean, contaminant-free environments. With our SOC 2 Type II audited facilities, your data is as secure as it can be. Our data recovery evaluations are free, and we only charge you for our data recovery efforts when we've successfully recovered your data and met your goals. We can even cover the cost of inbound shipping for you. With prices lower than the industry-standard rates charged by other data recovery labs, Gillware's data recovery services are both affordable and financially risk-free.

Ready to Have Gillware Help with Your Smoking Hard Drive?

- Best-in-class engineering and software development staff: Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.

- Strategic partnerships with leading technology companies: Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships give us unique insight into recovering data from these devices.

- RAID array / NAS / SAN data recovery: Using advanced engineering techniques, we can recover data from large-capacity, enterprise-grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.

- Virtual machine data recovery: Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.

- SOC 2 Type II audited: Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined to be secure.

- Facility and staff: Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.

- GSA contract holder: We meet the criteria to be approved for use by government agencies. GSA Contract No.: GS-35F-0547W. Our entire data recovery process can be handled to meet HIPAA requirements for the encryption, transfer and protection of e-PHI.

- No obligation, no up-front fees, free inbound shipping and no-cost evaluations: Gillware's data recovery process is 100% financially risk-free. We only charge if the data you want is successfully recovered.

- Competitive pricing: Our pricing is 40-50% less than our competition. By using cutting-edge engineering techniques, we are able to control costs and keep data recovery prices low.

- Instant online estimates: By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.

- We only charge for successful data recovery efforts: We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.

- Trusted, reviewed and certified: Gillware has the seal of approval from a number of independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they're getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
Sustainability consultants develop and implement energy-efficient strategies to help reduce a company's overall carbon footprint. But designing environmentally sustainable solutions isn't their only purpose: the policies they implement can improve a company's performance while drastically reducing spending on utilities and other resources.

With our increasing dependency on technology, the data center industry - the backbone of the Digital Age - has seen substantial growth. As a result, data centers have become one of the fastest-growing consumers of electricity in the United States, creating a need for intelligent, sustainable architecture within the industry.

The Natural Resources Defense Council (NRDC) projects that by 2020, the energy consumed by data centers will cost American businesses $13 billion in electric bills and will emit 150 million metric tons of carbon pollution on an annual basis. These numbers seem staggering, especially when you take into account that the data centers run by large cloud providers aren't the culprits behind this massive energy consumption. In fact, they account for only about 5% of the energy consumed by the industry. The other 95% of energy usage comes from corporate and multi-tenant data centers that lack a sustainable design.

Another alarming statistic presented by the NRDC is that the US data center industry consumes enough electricity to power all the households in New York City for two years. This massive energy and pollution output is equal to that of 34 coal-fired power plants, and by 2020 it is projected to be equivalent to 50 power plants.

These statistics highlight that in the data center industry alone, sustainability consultants are a necessity in helping reduce energy consumption by designing green data centers.

In the video below, Randy Ortiz, VP of Data Center Design and Engineering at Internap, and Dan Prows, Sustainability Consultant at Morrison Hershfield, discuss how Internap's design team works with sustainability consultants to construct a highly energy-efficient and sustainable data center.

As the expected need for data center growth continues, it is economically and fiscally responsible for data center providers to design their facilities to be as energy-efficient and sustainable as possible. By working with sustainability consultants, data center engineers can ensure that their facility is designed to drive performance while reducing operational costs and environmental impact.

Learn more about Internap's energy-efficient data centers.
Climate change spurs adoption of temperature monitoring, new farming practices

Wednesday, Jun 5th 2013

While climate change has been a subject of much political debate, there are many undeniable facts that demonstrate markedly different environmental conditions. Climate change affects a wide variety of industries and world development, perhaps most directly the agricultural sector. Greater fluctuations in climate create a growing need for environmental monitoring, especially as farming practices are adapted to meet changing conditions.

In a recent article for the Daily Monitor, contributor Lominda Afedraru reported that the effects of climate change on the agriculture sector are particularly noticeable in Uganda.

"In Uganda, climate change and increased weather variability has been observed and is manifested in the increase in frequency and intensity of weather extremes, including high temperatures leading to prolonged drought and erratic rainfall patterns," Afedraru wrote. She later added, "These changing weather patterns have come with challenges such as tropical storms, wildfire, siltation, soil erosion, pests and diseases which are causing devastating loss to farmer's yields."

Afedraru explained that the fluctuations in weather patterns are making it increasingly difficult for farmers to plan adequately using traditional knowledge of the region's two planting seasons. In order to predict seasonal fluctuations accurately, farmers can use environmental monitoring tools to gain a better sense of temperature and humidity changes. With a fuller picture of heat and water density, farmers can adjust their farming strategies and maintain their livelihoods. For example, if droughts are imminent, farmers can grow quickly maturing crops like vegetables, which will afford them greater success in prolonged dry spells.

Monitoring temperature in animals

One side effect of climate change is that animals are exposed to higher temperatures than they are traditionally accustomed to, which can result in heat stress and the whole slew of complications that come with it. Heat stress can be particularly damaging to cattle, and temperature monitoring can go a long way toward reducing its devastating effects.

A recent Minnesota Farm Guide article explained that the thermal heat index was developed as a way to evaluate the potential for cattle to fall victim to heat stress. It is calculated from air temperature and humidity. The source provided a detailed example, noting that a day with a temperature of 95 degrees Fahrenheit and a relative humidity of 70 percent is considered fatal. These conditions are obviously extreme, but the example highlights how humidity monitoring can help farmers take a proactive approach to ensuring the health of their herd. The higher the humidity and temperature, the higher cattle water intake will be.

A recent press release from CAS DataLoggers detailed that many different types of organizations need to monitor animal temperature to support conservation and research efforts. "A common application involves continually monitoring animals by recording their skin or internal temperature data," the release stated. "Surface or rectal temperature probes (usually thermocouples) connected to portable data loggers are an effective means for measuring and viewing all of the minute temperature changes occurring over both short and long time periods. After looking at the data, staff can then take preventative measures to help prevent a fever, serious infection or disease."

If environmental monitors indicate that an animal is at high risk of heat stress or other diseases, there are many steps that can be taken to prevent conditions from escalating.
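The thermal heat index combines air temperature and relative humidity into a single number. As an illustration, here is one widely cited cattle temperature-humidity index (THI) formulation; this is an assumption on my part, since the Minnesota Farm Guide article does not state the exact formula it used:

```python
def cattle_thi(temp_f, rel_humidity_pct):
    """Temperature-humidity index for cattle, one commonly cited formulation:
    THI = 0.8*T + (RH/100)*(T - 14.4) + 46.4, with T in degrees Celsius.
    Danger thresholds vary by source."""
    temp_c = (temp_f - 32) * 5 / 9
    return 0.8 * temp_c + (rel_humidity_pct / 100) * (temp_c - 14.4) + 46.4

# The article's example: 95 degrees F at 70% relative humidity
thi = cattle_thi(95, 70)  # about 88.8, well into the danger range
```

A monitoring system could log temperature and humidity continuously and alert handlers whenever the computed index crosses a chosen threshold.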
For example, Minnesota Farm Guide explained how cattle farmers can make adjustments in the areas of water supply, sprinklers, shade, feed, bedding, cattle handling and air movement. In a scenario where a humidity monitor indicates troublesome conditions, cattle handlers can improve the immediate environment and make animals more comfortable by adjusting the water supply and pressure, as well as by installing sprinklers to cool the animals. However, animal caretakers should implement a sprinkler system that administers large water droplets rather than misting, as misting can actually increase humidity and be counterproductive to cooling initiatives.
Dr. Bill Highleyman will be giving an in-depth presentation about DDoS attacks at The Continuity Insights Regional Business Continuity Conference on Wednesday, Oct. 8.

The concept of a DDoS attack is simple: generate enough malicious traffic to a web site, and it will be unable to respond to legitimate requests. The data rate generated by recent DDoS attacks has been measured in the hundreds of gigabits per second. Not many corporate web services can withstand even a fraction of that amount of malicious data. The damage DDoS attacks can do to a company's public-facing Internet services, such as web sites, or to the Internet in general is massive.

DDoS attacks are launched from botnets comprising thousands of compromised PCs and servers controlled by a bot master. They are easy to create using rented botnets and publicly available software.

DDoS attacks take many forms. Some attack the Internet Layer (Layer 3) and Transport Layer (Layer 4) of the Internet Protocol suite. Others attack the Application Layer (Layer 7). A particularly vicious form is the DNS reflection attack, in which a short request to a DNS server results in a large message sent to the victim machine. Gigabits per second can be directed at a victim's system by an attacker who only has to generate megabits per second of malicious data.

DDoS attackers are very sophisticated. They monitor the success of their attack, and if the victim throws up defenses to mitigate it, they change their method of attack.

One defense against DDoS attacks is to subscribe to a DDoS mitigation service provider. These are companies with a large number of massive data centers. They can spread the attack over multiple data centers, scrub the attack data, and return only valid data to the victim. They also monitor the nature of the attack and adjust the victim's defenses to meet the current attack strategy.

DDoS attacks are increasing in frequency and in size every year. Companies must prepare for the likelihood of losing their public-facing web services and must make plans for how they will continue to operate if these services are taken down. This should be a major topic in their business continuity plans.

This presentation describes the technology behind DDoS attacks. The creation of the many types of botnets used to drive DDoS attacks is discussed, and some recent massive DDoS attacks are described, including several attacks on major U.S. banks that took down their online services for days. The various types of DDoS attacks are explored, including attacks at the network level, the infrastructure level, and the application level. Mitigation services that are available to thwart a DDoS attack are presented.

For more information, or to register for the conference, click here.
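As an aside on the DNS reflection attacks described above, the arithmetic behind amplification is straightforward: the attacker's outbound bandwidth is multiplied by the ratio of response size to request size. A minimal sketch with illustrative packet sizes (the specific numbers are assumptions, not figures from the presentation):

```python
def amplified_rate(attacker_mbps, request_bytes, response_bytes):
    """Traffic rate (Mbps) arriving at the victim, given the attacker's
    outbound rate and the reflector's request/response size ratio."""
    amplification = response_bytes / request_bytes
    return attacker_mbps * amplification

# Illustrative sizes: a ~64-byte DNS query that triggers a ~3,200-byte
# response gives a 50x amplification factor.
victim_mbps = amplified_rate(100, 64, 3200)  # 100 Mbps out becomes 5,000 Mbps (5 Gbps) in
```

This is why reflection attacks let a modest botnet produce the multi-hundred-gigabit floods mentioned earlier.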
Long considered a waste product of the pulping industry, lignin, a major component of biomass, is currently used for low- and medium-value applications (e.g. binding and dispersing agents), representing a market of USD 730 million. However, strong signals indicate that lignin could be set to address high-value opportunities as early as 2017, such as substituting for phenol or serving as a component in polyurethane formulations. The development of public and private R&D projects in this field could make lignin-based phenolic monomers or carbon fibre a reality by 2020.

The industry is just beginning to scratch the surface of lignin's potential: it could become the main renewable aromatic resource for the chemical industry in the future. Catching this wave early could allow large industry players to hedge against the volatility of raw material prices while decreasing their environmental impact. With their support, the lignin industry could unlock opportunities in a shorter time-frame. Being a first mover in this market can assure technology leadership, strategic partnerships and a competitive edge.

The purpose of this Market Insight is to provide information on the potential of lignin. During our analysis, we have identified key barriers and analysed different ways to address them. In particular, we explored four promising lignin applications.

What is lignin?

As the biochemical industry emerges [1], it is bringing out new products and replacing existing oil-based ones. Bio-based chemicals are expected to grow significantly and increase their share to an estimated 9 per cent of all chemical production by 2020 [2].

The existing biofuel and bio-based chemical industries prefer to use feedstock with high sugar or starch content, such as corn, sugar cane and sugar beet. This represents the first-generation bio-based industry. So-called second-generation feedstocks include wood, bagasse from sugarcane or sweet sorghum, corn stover and grasses. In contrast to first-generation feedstocks, these have a slightly lower sugar content but, more critically, the sugars are more difficult to access as they are bound in cellulose and hemicellulose macromolecules. Therefore, they remain underused as feedstocks for biofuels and bio-based chemicals.

[1] 10.3 billion tonnes of organic chemicals produced yearly by the chemical industry; Haveren et al., 2008
[2] The Future of Industrial Biorefineries, World Economic Forum, 2010
May 8, 2016 was a day that garnered much discussion. On that day, Germany's solar and wind power peaked at 11 am local time, enabling renewables to supply 54.8 GW at a time when demand—according to provisional data from Agora Energiewende, a research institute in Berlin—was running at 57.8 GW. In plain terms, Germany, albeit briefly, got almost all of its power from renewable sources—and roughly 40 months earlier than industry experts expected.

Meanwhile, just as striking but less widely noticed, Portugal recently ran 107 consecutive hours on 100 percent renewable generation. Alongside these landmarks, we cannot neglect to mention the world record for wind penetration held by Denmark, which managed to fulfill 42 percent of its energy consumption with wind in 2015.

Nevertheless, while some transmission system operators (TSOs), distribution system operators (DSOs) and power producers are doing an impressive job of inverting the fossil-versus-renewable energy mix, there is still room for improvement. Indeed, it is questionable whether utilities across Europe are ready to reach the EU guidelines for 20 percent of energy to come from renewable sources by 2020.

Wind and sunshine—depending on the region in Europe—are potentially abundant. However, the big challenge is managing their intermittency. Given this, one of the keys to increasing renewables penetration will be improving the accuracy of forecasting and controlling volatility. A more accurate forecast contributes to day-to-day operational effectiveness through advances such as more effective day-ahead planning (including calculation of required reserves, congestion management and so on) and improved asset management. Greater precision around wind and solar forecasts also underpins business success, as it can help renewables operators make better decisions about energy production and how much they can trade. Going back to the German example, due to market-wide oversupply that day in May 2016, power prices turned negative during several 15-minute periods, dropping as low as minus €130.07 per megawatt-hour (according to data from Agora Energiewende).

In the area of wind power forecasting, there are multiple tools to choose from: some are proprietary to the utility, some are developed by academic organizations, and some are recognized products from technology vendors. These tools are all largely powered by algorithms carefully cultivated on the basis of historical data. And since each one seemingly has its specific strengths, most utilities will have more than one. The question executives are asking themselves is this: which forecasting tool could help them most significantly improve their estimates, and thus further optimize wind and solar power generation?

As utilities look to answer this question, the good news is that they may not need to rely on a single tool to master the weather. Digital technology techniques such as data analytics and machine learning can be applied to operational data, potentially enabling utilities to develop a more intelligent forecasting approach. For example, by combining the results from multiple tools, data scientists can assess each forecast's accuracy over short- or long-term horizons, and/or according to different scenarios, such as high-wind conditions. As such, analytics approaches can enable utilities to develop a smarter combination. And over time, by continuing to apply analytics techniques, the outcome can be tweaked to deliver increasing accuracy.

Nevertheless, if it were as easy as applying an analytics application, it seems likely that some utilities would have already tested this option. The smart combination is not just about combining the forecasts—it's also about bringing together the appropriate people.
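One simple way to picture such a "smart combination" of tools is to weight each forecast by the inverse of its historical error. This is an illustrative sketch of the general idea, not OMNETRIC Group's actual method; the numbers are invented:

```python
def combine_forecasts(forecasts, historical_mae):
    """Blend several wind-power forecasts (MW) into one, weighting each
    tool by the inverse of its historical mean absolute error, so that
    tools which have been more accurate in the past count for more."""
    weights = [1.0 / mae for mae in historical_mae]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Three tools predict 120, 135 and 128 MW; their past MAEs are 10, 5 and 8 MW.
blended = combine_forecasts([120.0, 135.0, 128.0], [10.0, 5.0, 8.0])  # about 129.4 MW
```

In practice the weights would be re-estimated continuously, possibly per scenario (e.g. high-wind conditions), which is where the analytics and machine-learning work comes in.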
More effective wind and solar power forecasts require bringing experts to the table from operations, IT, and data science backgrounds. The results can be powerful, but it takes time for this collaboration to become effective. Through the journey of analyzing, interpreting and comparing the data, the team needs to pool its collective expertise, methods and insights, and bridge differences in language and experience. By integrating information and operational technologies and data, as well as having practitioners aligned to those domains, utilities can generate a more accurate forecasting approach.

Like the code on a safe, achieving the smart combination requires lots of numbers and a sharp focus on unlocking the guarded prize. For those that succeed, the results will be worth the effort.

Guest author: Stéphanie Lakkis, MSc

An engineer in the Grid Operations team, Stéphanie leads Wind Power Forecast Optimization at OMNETRIC Group. Working largely with European transmission system operators and distribution system operators, she collaborates with OMNETRIC Group's data scientist team to identify and develop analytics use cases for improved forecasting.
While firms are on their toes in their respective bids to take their companies to the cloud—touted to be the future of computing—a visiting MIT (Massachusetts Institute of Technology) professor recently suggested a complementary technology that will harness new silos of computing power, by way of the crowd.

Called "crowd computing," this relatively new approach to computing is described as billions of human beings connected to the Internet who analyse, synthesise, inform, perceive, and provide opinions on data using only the power of the human brain. If it is not patently obvious, such a mechanism is already in place in the prime examples of social media and wikis, both of which have grown in popularity in recent years.

Explained Srini Devadas, professor of electrical engineering and computer science at the MIT Computer Science and Artificial Intelligence Laboratory: "Crowd computing will complement the cloud as one of two burgeoning infrastructures that will enable the world to become more 'collectively intelligent'."

Such a feat is essential in mitigating various human concerns, such as predicting and mitigating the effects of natural disasters, among many others. "It helps if we can have a competent, orchestrated response to these disasters," he emphasised.

In the case of earthquakes, for example, a concerned individual can tap various data available through the cloud in order to predict the impacts of ground movement, and draft strategies for evacuation and relief operations using the wisdom of the crowd. Such an initiative was employed by citizens during the onslaught of typhoon Ondoy (international name: Ketsana) to effectively direct disaster management efforts to the places they were needed most.

But the system is far from perfect. In the case of crowd computing, Devadas said that there needs to be significant improvement in the way current technology systems moderate opinion, resolve conflicts, and check facts. With cloud computing, on the other hand, the MIT professor urged providers to maintain the security and privacy of their offerings. "The challenge also lies in determining how to write parallel applications in order to access billions of processors to give quick answers to queries," he pointed out.

While Devadas stressed that these challenges need to be addressed in the next ten years to make the technology viable, he predicted that solutions for these issues could become available by 2020.

With this concerted push towards collective intelligence, Devadas noted that humans are not going to be replaced by machines—as many would-be doomsayers would prefer to believe—but would instead create "a symbiotic relationship between software and humans."

Devadas was in the UK to deliver a lecture before employees of global processing firm Accenture (ACN), IT professionals, and students, as part of the BPO company's Accenture Solutions Delivery Academy (ASDA) program, a training module for its employees crafted in partnership with the premier technology education institution. The training and certification program, launched in 2006, is offered to new recruits, preferably during their first five years on the job, in order to encapsulate and prove the Accenture experience to employees and to ensure that professionals are competent enough to handle tasks in their respective fields.
The Dark Side of Packet Capture

Network Troubleshooting Tools by Joseph D. Sloan

1.7. Dark Side of Packet Capture

What you can do, others can do. Pretty much anything you can discover through packet capture can be discovered by anyone else using packet capture in a similar manner. Moreover, some technologies that were once thought to be immune to packet capture, such as switches, are not as safe as once believed.

1.7.1. Switch Security

Switches are often cited as a way to protect traffic from sniffing. And they really do provide some degree of protection from casual sniffing. Unfortunately, there are several ways to defeat the protection that switches provide.

First, many switches will operate as hubs, forwarding traffic out on every port, whenever their address tables are full. When first initialized, this is the default behavior until the address table is built. Unfortunately, tools like macof, part of the dsniff suite of tools, will flood switches with MAC addresses, overflowing the switch's address table. If your switch is susceptible, all an attacker needs to do to circumvent this security is run the program.

Second, if two machines have the same MAC address, some switches will forward traffic to both machines. So if you want copies of traffic sent to a particular machine on your switch, you can change the MAC address on your interface to match the target device's MAC address. This is easily done on many Unix computers with the ifconfig command.

A third approach, sometimes called ARP poisoning, is to send a forged ARP packet to the source device. This can be done with a tool like arpredirect, also part of dsniff. The idea is to substitute the packet capture device's MAC address for the destination's MAC address. Traffic will be sent to the packet capture device, which can then forward the traffic on to its destination. Of course, the forged ARP packets can be sent to any number of devices on the switch.

The result, with any of these three techniques, is that traffic will be copied to a device that can capture it. Not all switches are susceptible to all of these attacks. Some switches provide various types of port security, including static ARP assignments. You can also use tools like arpwatch to watch for suspicious activities on your network. (arpwatch is described elsewhere in this book.) If sniffing is a concern, you may want to investigate what options you have with your switches.

While these techniques could be used to routinely capture traffic as part of normal management, the techniques previously suggested are preferable. Flooding the address table can significantly degrade network performance. Duplicating a MAC address will allow you to watch traffic only to a single host. ARP poisoning is a lot of work when monitoring more than one host and can introduce traffic delays. Consequently, these aren't really techniques that you'll want to use if you have a choice.

1.7.2. Protecting Yourself

Because of the potential for abuse, you should be very circumspect about who has access to packet capture tools. If you are operating in a Unix-only environment, you may have some success in restricting access to capture programs. Packet capture programs should always be configured as privileged commands. If you want to allow access to a group of users, the recommended approach is to create an administrative group, restrict execution of packet capture programs to that group, and give group membership only to a small number of trusted individuals. This amounts to setting the SUID bit for the program but limiting execution to the owner and any group members. With some versions of Unix, you might even consider recompiling the kernel so the packet capture software can't be run on machines where it isn't needed. For example, with FreeBSD, it is very straightforward to disable the Berkeley packet filter in the kernel. (With older versions of FreeBSD, you needed to explicitly enable it.)
Another possibility is to use interfaces that don't support promiscuous mode. Unfortunately, these can be hard to find. There is also software that can be used to check whether your interface is in promiscuous mode. You can do this manually with the ifconfig command: look for PROMISC in the flags for the interface. For example, here is the output for one interface in promiscuous mode:

bsd2# ifconfig ep0
ep0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
        inet 172.16.2.236 netmask 0xffffff00 broadcast 172.16.2.255
        inet6 fe80::260:97ff:fe06:2222%ep0 prefixlen 64 scopeid 0x2
        ether 00:60:97:06:22:22
        media: 10baseT/UTP
        supported media: 10baseT/UTP

Of course, you'll want to check every interface. Alternately, you could use a program like cpm (check promiscuous mode) from CERT/CC. lsof, described elsewhere in this book, can be used to look for large open files that might be packet sniffer output. But if you have Microsoft Windows computers on your network or allow user-controlled computers on your network, this approach isn't enough.

While it may appear that packet capture is a purely passive activity that is undetectable, this is often not the case. There are several techniques and tools that can be used to indicate packet capture or to test remote interfaces to see if they are in promiscuous mode. One of the simplest techniques is to turn your packet capture software on, ping an unused IP address, and watch for DNS queries trying to resolve that IP address. An unused address should be ignored; if someone is trying to resolve the address, it is likely they have captured a packet. Another possibility is the tool antisniff from L0pht Heavy Industries. This is a commercial tool, but a version is available for noncommercial uses. There are subtle changes in the behavior of an interface when it is placed in promiscuous mode, and this tool is designed to look for those changes. It can probe the systems on a network, examine their responses, and usually determine which devices have an interface in promiscuous mode.

Another approach is to restructure your network for greater security. To the extent you can limit access to traffic, you can reduce exposure to packet capture. Use of virtual LANs can help, but no approach is really foolproof. Ultimately, strong encryption is your best bet. This won't stop sniffing, but it will protect your data. Finally, it is always helpful to have clearly defined policies. Make sure your users know that unauthorized packet capture is not acceptable.

1.8. Microsoft Windows

In general, it is inadvisable to leave packet capture programs installed on Windows systems unless you are quite comfortable with the physical security you provide for those machines. Certainly, packet capture programs should never be installed on publicly accessible computers using consumer versions of Windows. The programs WinDump95 and WinDump are ports of tcpdump to Windows 95/98 and Windows NT, respectively. Each requires the installation of the appropriate drivers. They are run in DOS windows and have the same basic syntax as tcpdump. As tcpdump has already been described, there is little to add here. ethereal is also available for Windows and, on the whole, works quite well. The one area in which the port doesn't seem to work is in sending output directly to a printer. However, printing to files works nicely, so you can save any output you want and then print it.

One of the more notable capture programs available for Windows platforms is netmon (Network Monitor), a basic version of which is included with Windows NT Server. The netmon program was originally included with Windows NT 3.5 as a means of collecting data to send to Microsoft's technical support. As such, it was not widely advertised. Figure 1-5 shows the packet display window. The basic version supplied with Windows NT Server is quite limited in scope.
It restricts capture to traffic to or from the server and severely limits the services it provides. The full version is included as part of the Systems Management Server (SMS), part of the BackOffice suite, and is an extremely powerful program. Of concern with any capture and analysis program is what protocols can be effectively decoded. As might be expected, netmon is extremely capable when dealing with Microsoft protocols but offers only basic decoding of Novell protocols. (For Novell protocols, consider Novell's LANalyzer.) One particularly nice feature of netmon is the ability to set up collection agents on any Windows NT workstation and have them collect data remotely. The collected data resides with the agent until needed, thus minimizing traffic over the network. The program is not installed by default; it can be added as a service under network configuration in the setup window. It is included under Administrative Tools (Common). The program, once started, is very intuitive and has a strong help system.
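The manual PROMISC check described earlier can be scripted. Below is a minimal Python sketch that scans ifconfig-style output for the flag; the regular expression assumes BSD-style output like the example shown above, and the exact format varies by platform:

```python
import re

def promiscuous_interfaces(ifconfig_output: str) -> list[str]:
    """Return names of interfaces whose flags include PROMISC."""
    promisc = []
    for line in ifconfig_output.splitlines():
        # A new interface stanza starts at column 0,
        # e.g. "ep0: flags=8943<UP,BROADCAST,...>"
        m = re.match(r"^(\w+):\s+flags=\w+<([^>]*)>", line)
        if m and "PROMISC" in m.group(2).split(","):
            promisc.append(m.group(1))
    return promisc

# Using the captured output shown above (continuation lines are indented,
# so only the stanza headers are inspected):
sample = (
    "ep0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500\n"
    "\tinet 172.16.2.236 netmask 0xffffff00 broadcast 172.16.2.255\n"
)
print(promiscuous_interfaces(sample))  # ['ep0']
```

In practice you would feed this function the output of running ifconfig on each host you want to audit, remembering that a capable attacker can hide a sniffer from local checks like this one.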
Troy Hunt, in his article “SSL is not about encryption,” says that SSL is about assurance and “establishing a degree of trust in a site’s legitimacy.” I have mixed feelings about the title, but agree with the points that Hunt makes. Here are some highlights: - Users assume that high-profile sites (e.g., Facebook, Twitter, Dropbox) provide assurance even though they do not provide positive feedback of assurance. That is, although they use SSL, they do not present their logon pages over HTTPS and, as such, no positive security indicators are provided to the end-user. - Some sites provide implicit assurance by providing an indication in the Web page (e.g., the ubiquitous padlock icon to indicate that the site is secure). This means nothing; it can be misleading and create a false sense of security. - Every major browser has the ability to proactively advise the end-user of the validity and authenticity of the site by providing positive explicit assurance. - Due to man-in-the-middle attacks, not loading the logon form over HTTPS gives zero assurance of the authenticity of the site before you submit your credentials. This is further backed up by the OWASP SSL best practices. - My favorite: “SSL is the only outwardly facing assurance that we have. It’s the one thing that’s ubiquitously used to create confidence in the integrity of the data and assurance of the site we’re transacting with.”
2.4.5 What are the most important attacks on symmetric block ciphers?

There are several attacks which are specific to block ciphers (see Question 2.1.4). Four such attacks are differential cryptanalysis, linear cryptanalysis, the exploitation of weak keys, and algebraic attacks.

Differential cryptanalysis is a type of attack that can be mounted on iterative block ciphers. These techniques were first introduced by Murphy [Mur90] in an attack on FEAL-4 (see Question 3.6.7), but they were later improved and perfected by Biham and Shamir [BS91a] [BS93b], who used them to attack DES (see Section 3.2). Differential cryptanalysis is basically a chosen plaintext attack (see Question 2.4.2); it relies on an analysis of the evolution of the differences between two related plaintexts as they are encrypted under the same key. By careful analysis of the available data, probabilities can be assigned to each of the possible keys, and eventually the most probable key is identified as the correct one.

Differential cryptanalysis has been used against a great many ciphers with varying degrees of success. In attacks against DES, its effectiveness is limited by the very careful design of the S-boxes during the design of DES in the mid-1970s [Cop92]. Studies on protecting ciphers against differential cryptanalysis have been conducted by Nyberg and Knudsen [NK95] as well as Lai, Massey, and Murphy [LMM92]. Differential cryptanalysis has also been useful in attacking other cryptographic primitives such as hash functions (see Section 2.1.6).

Matsui and Yamagishi [MY92] first devised linear cryptanalysis in an attack on FEAL (see Question 3.6.7). It was extended by Matsui [Mat93] to attack DES (see Section 3.2). Linear cryptanalysis is a known plaintext attack (see Question 2.4.2) which uses a linear approximation to describe the behavior of the block cipher. Given sufficient pairs of plaintext and corresponding ciphertext, bits of information about the key can be obtained, and increased amounts of data will usually give a higher probability of success.

There have been a variety of enhancements and improvements to the basic attack. Langford and Hellman [LH94] introduced an attack called differential-linear cryptanalysis that combines elements of differential cryptanalysis with those of linear cryptanalysis. Also, Kaliski and Robshaw [KR94] showed that a linear cryptanalytic attack using multiple approximations might allow for a reduction in the amount of data required for a successful attack. Other issues such as protecting ciphers against linear cryptanalysis have been considered by Nyberg [Nyb95], Knudsen [Knu93], and O'Connor [Oco95].

Weak keys are secret keys with a certain value for which the block cipher in question will exhibit certain regularities in encryption or, in other cases, a poor level of encryption. For instance, with DES (see Section 3.2), there are four keys for which encryption is exactly the same as decryption. This means that if one were to encrypt twice with one of these weak keys, then the original plaintext would be recovered. For IDEA (see Question 3.6.7), there is a class of keys for which cryptanalysis is greatly facilitated and the key can be recovered. However, in both these cases, the number of weak keys is such a small fraction of all possible keys that the chance of picking one at random is exceptionally slight. In such cases, they pose no significant threat to the security of the block cipher when used for encryption. Of course, for other block ciphers, there might well be a large set of weak keys (perhaps even with the weakness exhibiting itself in a different way) for which the chance of picking a weak key is too large for comfort. In such a case, the presence of weak keys would have an obvious impact on the security of the block cipher.
Algebraic attacks are a class of techniques that rely for their success on block ciphers exhibiting a high degree of mathematical structure. For instance, it is conceivable that a block cipher might exhibit a group structure (see Section A.3). If this were the case, then encrypting a plaintext under one key and then encrypting the result under another key would always be equivalent to single encryption under some other single key. If so, then the block cipher would be considerably weaker, and the use of multiple encryption would offer no additional security over single encryption; see [KRS88] for a more complete discussion. For most block ciphers, the question of whether they form a group is still open. DES, however, is known not to be a group; see Question 3.2.5.
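The difference-tracking idea behind differential cryptanalysis can be illustrated at toy scale. The sketch below builds the difference distribution table (DDT) for a hypothetical 4-bit S-box (the S-box values here are made up for illustration, not taken from any real cipher); a real attack chains high-probability entries from such tables through many rounds of the cipher:

```python
# Toy illustration of difference analysis: for every input XOR-difference
# dx, count how often each output XOR-difference dy occurs. A perfectly
# "flat" table would leak nothing; the peaks are what an attacker exploits.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]  # hypothetical 4-bit S-box

def ddt(sbox):
    """ddt[dx][dy] = number of inputs x with S(x) XOR S(x ^ dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

table = ddt(SBOX)
# A zero input difference always gives a zero output difference:
assert table[0][0] == 16
# The largest non-trivial entry shows the best one-round differential:
best_count, best_dx, best_dy = max(
    (table[dx][dy], dx, dy) for dx in range(1, 16) for dy in range(16))
print(best_count, hex(best_dx), hex(best_dy))
```

Careful S-box design, as was done for DES, aims to keep the largest entries in this table as small as possible.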
All About VLAN Memberships

The administrator creates VLANs and assigns switch ports to each VLAN. This type of VLAN is referred to as a static VLAN. With a little more effort, the administrator can enter the hardware addresses of all host devices into a database so that VLANs can be assigned dynamically whenever a host is plugged into a switch.

What are Static VLANs?

Usually VLANs are created as static VLANs, and the best part is that they are quite safe and secure. The VLAN assigned to a switch port can be changed manually by an administrator only; otherwise it remains the same. This kind of VLAN configuration is quite easy to set up and monitor compared to others, and it performs well in a network environment where it is possible to control the movement of users within the network. In order to configure the ports, it is best to make use of network management software.

In the figure you can see that the administrator has configured each switch port with a VLAN membership depending on which VLAN the host requires membership in; it does not matter where the device is physically located. The administrator also decides which broadcast domain each host becomes a member of.

Keep in mind that every host must have correct IP address information. For example, each host in VLAN 2 must be configured with an address in the 172.16.20.0/24 network. Another important thing to remember is that, when you plug a host into a switch, you must verify the VLAN membership of that port. If the membership does not match the one needed by that particular host, then the host will not be able to reach the network services it needs, such as a workgroup server.

An eye on dynamic VLANs

A dynamic VLAN, on the other hand, can determine a node's VLAN assignment automatically. With capable management software, dynamic VLANs can be built based on hardware (MAC) addresses, applications, or protocols. The choice is simply yours!

For instance, suppose we enter MAC addresses into a centralized VLAN management application. If a node is then connected to an unassigned switch port, the VLAN management database can look up the hardware address and assign the switch port to the relevant VLAN. This makes management and configuration easier when users move around, because the switch will automatically assign them to the right VLAN. But in order to enjoy this benefit, a lot more work is required early on, when the database is being set up.

Cisco administrators can use the VLAN Management Policy Server (VMPS) service to set up a database of MAC addresses that can then be used for dynamic VLAN addressing. The VMPS database maps MAC addresses to VLANs.
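The dynamic-assignment lookup can be sketched as a simple query against a MAC-to-VLAN table. The table contents and the default-VLAN fallback below are illustrative assumptions, not actual VMPS behavior:

```python
# Sketch of dynamic VLAN assignment: when a host appears on an unassigned
# port, the switch consults a central MAC-to-VLAN database (as a VMPS
# would). Addresses and the fallback policy here are hypothetical.
MAC_TO_VLAN = {
    "00:60:97:06:22:22": 2,   # engineering host -> VLAN 2 (172.16.20.0/24)
    "00:60:97:06:33:33": 3,   # accounting host  -> VLAN 3
}
DEFAULT_VLAN = 1  # policy choice: unknown hosts land on a default VLAN

def assign_vlan(mac: str) -> int:
    """Return the VLAN for a host's MAC, falling back to the default."""
    return MAC_TO_VLAN.get(mac.lower(), DEFAULT_VLAN)

print(assign_vlan("00:60:97:06:22:22"))  # 2
print(assign_vlan("AA:BB:CC:DD:EE:FF"))  # 1 (unknown host -> default)
```

The up-front work the article mentions is exactly the work of populating and maintaining this table for every authorized host.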
One of the most common threats to business and individual systems is phishing. This form of hacking is well known, and many users have educated themselves on the more traditional methods used by hackers. This has forced hackers to come up with different phishing techniques, and one method that is causing problems is spear phishing.

Spear phishing is a specialized type of phishing that, instead of targeting a mass number of users as normal phishing attempts do, targets specific individuals or groups of individuals with something in common, e.g., an office. Generally a hacker will first pick a target and then try to learn more about the people involved. This could include visiting a website to see what a company does, who they work with, and even who the staff are. Or they could try hacking a server in order to get information.

Once they have some sort of information, usually a name, position, address, and even information on subscriptions, the hacker will develop an email that looks similar to one that another organization might send, e.g., a bank. Some hackers have been known to create fake email accounts and pose as a victim's friend, sending emails from the fake account. These emails are often similar to official correspondence and will always use personal information, such as addressing the email to you directly instead of the usual 'dear sir or madam'.

The majority of these emails will request some sort of information or talk about an urgent problem. Somewhere in the email will be a link to the sender's website, which will look almost exactly like the real thing. The site will usually ask you to input personal information, e.g., an account number, name, address, or even passwords. If you went ahead and followed this request, then this information would be captured by the hacker.

From previous attack cases and reports, the majority of spear phishing attacks are finance related, in that the hacker wants to gain access to a bank account or credit card. Other cases include hackers posing as help desk agents looking to gain access to business systems. Should someone fall for this tactic, they will often see personal information captured, accounts drained, or even their whole identity stolen. Some spear phishing attacks aren't after your identity or money; instead, clicking on the link in the email will install malicious software onto the user's system.

We are actually seeing spear phishing being used increasingly by hackers as a method to gain access to business systems. In other words, spear phishing has become a favored way for people to steal trade secrets or sensitive business data. Like most other types of phishing emails, spear phishing attempts can be easy to block. Here are five tips on how you can avoid falling victim to them. If you are looking to learn more about spear phishing or any other type of malware and security threat, get in touch.
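One simple heuristic that mail filters apply to catch the deceptive links described above is comparing the domain a link displays with the domain it actually points to. The Python sketch below illustrates the idea with made-up domains; real filters combine many more signals:

```python
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but whose href
    actually points somewhere else -- a classic phishing tell."""
    # The visible text may lack a scheme ("www.mybank.com"), so add one
    # before parsing; urlparse needs it to recognize the host part.
    shown = urlparse(display_text if "//" in display_text
                     else "http://" + display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# Looks like the bank, but goes to the attacker (hypothetical domains):
print(link_mismatch("www.mybank.com", "http://mybank.example.net/login"))  # True
print(link_mismatch("www.mybank.com", "http://www.mybank.com/login"))      # False
```

A check like this only helps when the displayed text names a domain at all; spear phishing emails that say "click here" need other defenses, such as user awareness and sender authentication.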
0.5.11 Metaphone Algorithm

Like the previously discussed Soundex algorithm, Metaphone is a system for transforming words into codes based on phonetic properties. However, unlike Soundex, which operates on a letter-by-letter scheme, Metaphone analyzes both single consonants and groups of letters called diphthongs. Metaphone was invented by Lawrence Philips and first described in Computer Language magazine in December 1990.

The Metaphone algorithm operates by first removing non-English letters and characters from the word being processed. Next, all vowels are discarded unless the word begins with an initial vowel, in which case all vowels except the initial one are discarded. Finally, all consonants and groups of consonants are mapped to their Metaphone codes. The rules for grouping consonants and mapping the groups to Metaphone codes are fairly complicated; for a full list of these conversions, check out the comments in the source code section.
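A heavily simplified sketch of the first two stages described above (strip non-letters, then drop all vowels except a leading one) might look like this; the full consonant-mapping table is deliberately omitted, since that is where most of Metaphone's complexity lives:

```python
def metaphone_prep(word: str) -> str:
    """Simplified early stages of Metaphone: strip non-letters, then
    drop every vowel except an initial one. The real algorithm then
    maps consonants and consonant groups (e.g. 'TH', 'SH') to phonetic
    codes via a much larger rule table, which is not reproduced here."""
    letters = [c for c in word.upper() if c.isalpha() and c.isascii()]
    if not letters:
        return ""
    vowels = set("AEIOU")
    out = [letters[0]]  # an initial vowel (if any) is kept
    out += [c for c in letters[1:] if c not in vowels]
    return "".join(out)

print(metaphone_prep("algorithm"))  # 'ALGRTHM'
print(metaphone_prep("phonetic"))   # 'PHNTC'
```

Note that this sketch keeps the initial letter whether or not it is a vowel; only non-initial vowels are removed, matching the description above.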
You need to learn a whole new vocabulary when you start talking with your company's facilities team about lowering data-center energy use.

If you thought IT acronyms were hard to remember, wait until you sit down with your facilities team to discuss your data center's electric bill. You need to learn a whole new vocabulary when you start talking about lowering the building's energy use. Here's a crib sheet of a dozen of the most commonly used energy terms and acronyms so you can learn the jargon for going green.

1. AC/DC
Yes, this is the name of Australia's greatest rock band, but it's also a key trend in data-center design. AC stands for alternating current, and DC stands for direct current. Leading-edge data-center designers are looking at power supplies based on DC power, rather than today's AC power, because DC power promises to be more energy efficient.

2. Carbon footprint
No relation to Sasquatch, although to corporate executives it can be an equally large and scary beast. A company's carbon footprint is the amount of CO2 emissions its operations produce. In setting goals to reduce their carbon footprint, many companies target their data centers because they consume 25% or more of the electric bill.

3. CFD
It sounds like the acronym for the Chicago Fire Department, but this version stands for computational fluid dynamics. CFD high-performance-computing modeling has been used for a long time in the design of airplanes and weapon systems. Now it's being applied to air flow in data centers for optimal air-conditioning design.

4. Chiller
This isn't what you drink at the beach on a hot day. Rather, it's a machine that uses chilled water to cool and dehumidify air in a data center. Of all the components of a data center's air-conditioning system, this is the one that consumes the most electricity, as much as 33% of a data center's power.

5. Close-coupled cooling
This sounds like a technique that would come in handy on Valentine's Day. In fact, it's a type of data-center air-conditioning system that brings the cooling source as close as possible to the high-density computing systems that generate the most heat. Instead of cooling down the entire room, close-coupled cooling systems located in a rack cool the hot air generated by the servers in just that rack.

6. CRAC
This is not what you sometimes see when a plumber bends over, although it's pronounced the same way. We're talking about a computer-room air-conditioning system. CRAC units monitor a data center's temperature, humidity and air flow. They consume around 10% of a data center's power.

7. DCiE
This acronym has nothing to do with the nation's capital, although its pronunciation is similar. DCiE is the Data Center Infrastructure Efficiency metric (also called DCE, for Data Center Efficiency). DCiE is one of two reciprocal metrics embraced by The Green Grid industry consortium; the other is Power Usage Effectiveness (PUE, below). (See "Two ways to measure power consumption.") DCiE shows the power used by a data center's IT equipment as a percentage of the total power going into the data center. A DCiE of 50% means that 50% of the total power used by a data center goes to the IT equipment, and the other 50% goes to power and cooling overhead. The larger the DCiE, the better.

8. kWh
Electric power is sold in units called kilowatt hours; 1 kWh is the amount of energy delivered in one hour at a power level of 1,000 watts. This abbreviation for "kilowatt hour" is mostly used in writing rather than conversation.

9. PDU
The acronym PDU stands for power distribution unit, a device that distributes electric power. PDUs function as power strips for a data center and consume around 5% of the power in a typical center.

10. PUE
Not pronounced like the reaction to a bad odor, but one letter at a time. Power Usage Effectiveness is one of two reciprocal metrics embraced by The Green Grid industry consortium; the other is Data Center Infrastructure Efficiency (DCiE, above). PUE is the ratio of the total power going into a data center to the power used by the center's IT equipment. For example, a PUE of 2 means that half of the power used by the data center is going to the IT equipment and the other half is going to the center's power and cooling infrastructure. Experts recommend a PUE of less than 2. The closer a PUE is to 1, the better.

11. REC
Pronounced like the short version of the word recreation, this acronym means renewable energy certificates or renewable energy credits. RECs are tradable commodities showing that 1 megawatt-hour of electricity was purchased from a renewable source, such as solar, wind, biomass or geothermal. An increasing number of companies are buying RECs to offset the amount of electricity generated from fossil fuels that their data centers consume.

12. UPS
We're not talking about the boys in brown, although the acronym is pronounced the same way. We're talking about uninterruptible power supply, which provides battery backup if a data center's power fails. It's essential that UPS equipment be energy efficient, because it consumes as much as 18% of the power in a typical data center.
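The two reciprocal metrics, PUE and DCiE, reduce to a simple calculation:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total power / IT power.
    Lower is better; the ideal is 1.0 (zero overhead)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: IT power as a percentage
    of total facility power -- the reciprocal of PUE. Higher is better."""
    return 100.0 * it_equipment_kw / total_facility_kw

# A data center drawing 1000 kW in total, 500 kW of it for IT equipment:
print(pue(1000, 500))   # 2.0  -> half the power is power/cooling overhead
print(dcie(1000, 500))  # 50.0 -> i.e. a DCiE of 50%
```

The numbers in the example match the definitions above: a PUE of 2 and a DCiE of 50% describe the same facility.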
Take a few minutes from your day today and watch this video published by NASA. Three years ago (Feb. 11, 2010), NASA launched the Solar Dynamics Observatory (SDO), which provides high-resolution data of the sun. This video collects several clips from the SDO, showcasing events such as solar flares, storms and eruptions. Depending on which wavelength the SDO is recording at, scientists get different color views and "can track how material on the sun moves." As NASA explains, "such movement, in turn, holds clues as to what causes these giant explosions, which, when Earth-directed, can disrupt technology in space." Pretty amazing stuff, and much easier to look at than trying to view the sun up in the sky on a sunny day. Keith Shaw rounds up the best in geek video in his ITworld.tv blog.
Tech Insight: Making Data Classification WorkData classification involves much more than simply buying a product and dropping it in place. Here are some dos and don'ts. The topic of data classification is one that can quickly polarize a crowd. The one side believes there is absolutely no way to make the classification of data and the requisite protection work -- probably the same group that doesn't believe in security awareness and training for employees. The other side believes in data classification as they are making it work within their environments, primarily because their businesses require it. The difficulty in choosing a side lies in the fact that both are correct. In the average corporate network, data classification is extremely difficult, if not impossible. Data sprawl across unkempt network shares, desktops, and mobile devices makes it difficult for IT to identify and secure. When left in the hands of users, most organizations make classification schemes too difficult for users to know how to label the information they're responsible for. The opposite is true when dealing with organizations that are related to or part of the Department of Defense or medical and pharmaceutical companies that have very stringent data classification and handling procedures. Data classification is part of the corporate culture. It is part of the employees' indoctrination into the company and required as part of their daily work lives. And the classifications are well defined, so there is little confusion as whether or not something should be considered sensitive or not. For classification efforts to work, there needs to be a small set of categories for which data can be classified. Any more than a handful and users are likely to become confused, or frustrated, and misclassify something. Those classifications need to be based around the value of the data and the risk associated with the data falling into the wrong hands, being destroyed, or losing its integrity. 
Simple guidelines need to be established so that employees can easily recognize how something should be handled when they encounter it or when they are creating new data.

Don't classify everything

Classification programs fail when management and the implementers get stuck in a "classify everything" mindset. Attempts to seek out all data and classify it from the start can quickly become time-consuming and futile, depending on the level of data sprawl. It's easier to start with the core business processes and workflows to see where classification can occur. Sometimes it needs to happen at a macro level, where entire systems are designated as sensitive, instead of at the file and individual database level. This may mean that tighter, more granular controls must be implemented on file shares or entire servers to provide an adequate level of protection.

With things like email, however, it's easier to have the user classify the message when he or she creates it. Depending on the solution, the user can check a box or include a specific keyword in the subject or body of the message to trigger automatic encryption or prevent the content from being forwarded outside of the company. Automated classification systems can be used to label emails as sensitive based on their content, but they are more prone to error if the keywords are not well maintained.

Similarly, solutions exist to integrate with users' workflows as they create and modify Microsoft Office documents. The documents can be labeled based on the defined classifications, and those labels are then used by controls on the file and email servers to ensure that only authorized users can access them.

User training is critical

Even with automated and manual solutions available for data classification, how is it that some organizations have successfully implemented a classification program when so many others have failed miserably? It's because they focus on user training and awareness from the very beginning.
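The keyword-trigger approach described above can be sketched in a few lines. The labels ("[CONFIDENTIAL]" and so on) and the action flags are hypothetical examples for illustration, not any particular product's scheme:

```python
# Sketch of keyword-triggered message handling: a label in the subject or
# body selects an automatic control. Labels and actions are hypothetical.

CLASSIFICATION_ACTIONS = {
    "[CONFIDENTIAL]": {"encrypt": True, "block_external_forward": True},
    "[INTERNAL]": {"encrypt": False, "block_external_forward": True},
    "[PUBLIC]": {"encrypt": False, "block_external_forward": False},
}

DEFAULT_ACTION = {"encrypt": False, "block_external_forward": False}

def controls_for_message(subject, body):
    """Return the handling controls triggered by the first label found."""
    text = subject + "\n" + body
    for label, action in CLASSIFICATION_ACTIONS.items():
        if label in text:
            return action
    return DEFAULT_ACTION
```

A small, well-maintained label set like this is exactly the "handful of categories" the article recommends; the fewer the labels, the less users can misclassify.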
Employees are involved early on in determining classification schemes and guidelines that make sense to them. Focus groups are put together from different areas of the enterprise to see how well users interpret the proposed classifications and to ensure there is no confusion over how to classify the documents and emails they create.

Once the classifications have been developed, technical solutions need to be tested to find the best fit. Of the organizations I've talked to, most have found a mix of automated and manual techniques works best, but it depends on what technologies are currently in place (e.g., Exchange and Outlook), how employees generate and work with information that needs to be classified (Microsoft Office and SharePoint), and integration capabilities with those workflows. Test groups of users need to be selected to try the products that make the shortlist, to determine ease of use and clarity in labeling things within the classification scheme, and to ensure that the products do not hinder productivity.

If your organization is looking into developing a data classification program, it's not a decision to take lightly, as it involves much, much more than simply buying a product and dropping it in place. Users need to be involved from the beginning to ensure that classification schemes and guidelines are straightforward and easy to understand. Automated tools need to be tested to make sure they can identify and locate the types of data that are important to your organization. And manual classification solutions need to be put into users' hands early, to make sure they are usable and do not hinder productivity.
The Society for Worldwide Interbank Financial Telecommunication, or SWIFT, provides an organizational platform for facilitating international payments. U.S. and foreign financial institutions use SWIFT messages to initiate, process, receive, and settle payment orders. The amount of information exchanged via SWIFT is immense. More than 9,000 financial institutions in 209 countries rely on SWIFT to process international payments, and an average of 17,000,000 SWIFT messages are sent in a given day. SWIFT messages contain sensitive financial information about consumers, businesses, and governments and for that reason raise unique financial privacy concerns. In recent years, governments such as the United States have obtained access to the SWIFT database, including transactions involving citizens as well as foreign residents, in order to combat terrorism. However, certain countries have criticized and pushed back against such access out of concerns for their citizens’ privacy. In 2010, the United States and European Union reached an agreement whereby SWIFT message information will be made available only for the purpose of preventing, detecting, and prosecuting terrorism and only upon a showing that such information is necessary. More broadly, the Dodd-Frank Act provides for Federal Reserve supervision of systemically important payment and settlement activities, and it is generally expected that the international payments system will receive more attention from regulators in the future. For instance, recent Treasury rulemakings have requested further comment on the subject of non-U.S. payment and settlement providers.
Oracle Techniques

Oracle staffers did an audit and penetration test on our code when they received it and were satisfied with the parameter validation and user authentication code we had written. They then made three other important security changes to the application.

First, they configured the copy of Apache HTTP Server that is included with Oracle9i Application Server, as well as the application server itself, to display the same generic error page whenever any kind of error is encountered. This way, crackers probing for application flaws don't get any information-rich error messages that might hint where they should direct future activities.

Second, Oracle staff added an HTML sanitization routine to outgoing e-mail content to prevent a form of server-side scripting attack through e-mail. Since many e-mail clients automatically render HTML tags, e-mail becomes one more attack vector if it includes any user-supplied data.

Third, Oracle wrote a stored procedure that controls access to credit card and user account password data. The stored procedure uses the Triple Data Encryption Standard encryption routines shipped with Oracle's database to ensure that this data is stored on disk in encrypted form. "It's more difficult to invoke a PL/SQL procedure because you have to know the parameters," said John Abel, in Reading, England, the Oracle consultant who wrote the application. "It protects against SQL injection as well as a [database administrator] having a look around."

Abel also used Oracle9i Database's stored procedure wrap function to obfuscate the source code of the stored procedure itself, so that anyone who was able to access the database wouldn't be able to view the encryption code.

The Oracle test application can be accessed at www.oracle.openhack.com/openhack/index.jsp.
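The article credits a wrapped PL/SQL stored procedure with blocking SQL injection. The underlying principle, that user input is bound as a parameter rather than spliced into the query text, can be sketched with Python's standard-library sqlite3 module. The table and function names here are invented for illustration; this is not Oracle's code:

```python
import sqlite3

# Parameter binding keeps input out of the SQL parser, so a payload like
# "' OR '1'='1" is treated as data, not as query syntax.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, card_token TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 'tok_1'), ('bob', 'tok_2')")

def card_token_for(username):
    # The ? placeholder binds the value; no string interpolation occurs.
    row = conn.execute(
        "SELECT card_token FROM accounts WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None

print(card_token_for("alice"))          # tok_1
print(card_token_for("' OR '1'='1"))    # None -- the injection attempt fails
```

The stored-procedure approach goes one step further than binding: callers never see the SQL at all, which is what Abel means by needing "to know the parameters."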
Certain groups at higher risk of foodborne illnesses

Tuesday, Feb 25th 2014

Foodborne illnesses can occur for a number of reasons, including improper refrigeration and unsafe food handling. However, some individuals, including seniors, children, pregnant women and people with chronic diseases, are more at risk of contracting food poisoning than others.

According to Live Science contributor Fred Cicetti, a range of factors creates a higher risk level for these groups. For example, due to their age, senior citizens' immune systems may not perform as well as those of younger individuals. For this reason, an older person may become sick with food poisoning while a younger person with a well-functioning immune system is unaffected. Young children and infants do not have fully developed immune systems, and are therefore also among the groups more likely to become ill with food poisoning. Those with chronic diseases have diminished immune systems as well, putting them at higher risk. In addition, pregnant women are more likely to contract a foodborne illness because of natural changes in their metabolisms and circulation.

Causes and symptoms

A number of contaminants can cause foodborne illnesses, such as E. coli, noroviruses, rotavirus and Salmonella. Furthermore, improper food handling can create conditions that allow these bacteria and viruses to grow. Such practices include unsafe food processing, keeping an edible item too warm for too long and drinking unpasteurized liquids or contaminated water. Individuals can also become ill from consuming improperly prepared, undercooked or raw foods. A person suffering from a foodborne illness may be dehydrated or have diarrhea, which can be dangerous if not addressed within the first few days.

How to prevent foodborne illnesses

Individuals can lessen their likelihood of falling ill by following standard food preparation and storage practices.
A main cause of foodborne illnesses is improper refrigeration. Therefore, organizations within the food service industry should utilize temperature monitoring systems to ensure items are kept at the proper temperature. Additionally, the York Daily Record noted that certain items should be separated within storage units to prevent cross contamination. For example, raw meats, poultry and fish should not be stored in close proximity to vegetables. The source also suggested using an internal food thermometer to ensure that items are cooked to the proper level. Also, when transporting edibles, hot food should be kept above 140 degrees Fahrenheit, and cold food should be maintained at 40 degrees F or below.
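The holding-temperature guidance above (hot food above 140 degrees F, cold food at 40 degrees F or below) reduces to a simple threshold check, which is essentially what a temperature monitoring system alerts on. The names here are illustrative and not tied to any monitoring product:

```python
# Threshold check for the holding temperatures quoted above:
# hot food above 140 F, cold food at 40 F or below.

HOT_MIN_F = 140.0
COLD_MAX_F = 40.0

def holding_temp_ok(kind, temp_f):
    """Check a holding temperature against the guideline for its category."""
    if kind == "hot":
        return temp_f > HOT_MIN_F   # hot food must stay above 140 F
    if kind == "cold":
        return temp_f <= COLD_MAX_F  # cold food must stay at 40 F or below
    raise ValueError("kind must be 'hot' or 'cold'")
```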
One industry group is hoping to educate youngsters on how the technology and devices nearly all of them use actually work, in hopes of inspiring the next generation of IT professionals.

Todd Thibodeaux, CEO of CompTIA, said last week that the trade association is launching a new effort to fill a gap in STEM programs for grades 9 through 12: a better understanding of how IT works -- from smartphones to Facebook. "We've come into a period when use of the product and adoption of the product is the new geek, instead of understanding how the product and components of it work," Thibodeaux said. "We have this generation of kids who aren't quite as geeky as the ones who came before them."

A recent CompTIA survey of 1,002 teens and young adults found that nearly all respondents (97 percent) said they either love or like technology. Many teens also are more than just technology consumers, with 58 percent reporting that they help family members or friends with questions or troubleshooting computers, software and mobile devices.

Still, while most teens have a love affair with technology, most aren't interested in translating that love into a career, the study found. Only 18 percent of teens and young adults reported a definitive interest in an IT career, while 43 percent identified their interest in an IT career as a "maybe." Many respondents (47 percent) said they did not know enough about IT occupations, according to the report.

As a result, Thibodeaux said CompTIA will be going to kids in grades 9 through 12 to educate them on the processes that underlie technology, such as how much infrastructure underpins Facebook, how a text message works and how online gaming is developed. "Teens think they have to be massive science geniuses to work in IT and that there's no real upward career path mobility," Thibodeaux said. "All of those things are completely false."
Google is starting the Endangered Languages Project as part of the company's charitable efforts to preserve languages in danger of extinction and as part of its overall mission to "organize the world's information."

Google is starting and seeding a project to fight the extinction and loss of more than 3,000 endangered languages around the globe, to help preserve the history, cultures and knowledge of mankind. The effort was announced today in a Google blog post by Clara Rivera Rodriguez and Jason Rissman, two managers of the project.

Google says the new site can be used by people to find and share the most up-to-date and comprehensive information about endangered languages so that they don't disappear because they haven't been passed down to younger generations.

"Documenting the 3,000-plus languages that are on the verge of extinction (about half of all languages in the world) is an important step in preserving cultural diversity, honoring the knowledge of our elders and empowering our youth," the blog post stated. "Technology can strengthen these efforts by helping people create high-quality recordings of their elders (often the last speakers of a language), connecting diaspora communities through social media and facilitating language learning."

One example of an endangered language, according to the post, is the Miami-Illinois language, which was once used heavily by Native American communities in what is now the U.S. Midwest. The language is considered today to be extinct by some people, with its last fluent speakers dying in the 1960s, the post reported. It is being revived slowly, though, through the efforts of one man.

"Decades later, Daryl Baldwin, a citizen of the Miami Tribe of Oklahoma, began teaching himself the language from historical manuscripts and now works with the Miami University in Ohio to continue the work of revitalizing the language, publishing stories, audio files and other educational materials," the post stated.
"Miami children are once again learning the language and, even more inspiring, teaching it to each other. Daryl's work is just one example of the efforts being made to preserve and strengthen languages that are on the brink of disappearing."

In an interview, Rissman said Google unveiled the project as part of its philanthropic efforts to help organize the world's information and to make it more accessible to people everywhere. "This is more than information; this is language," with roots in cultural history and customs, he said. "We realize this is an urgent and global problem. We realize that some of our tools might make a difference," including storage space, collaboration and connectivity. YouTube is built into the site as a way to preserve content.
Information security systems based on quantum communications techniques are one of the holy grails of the industry, and scientists at the Defense Advanced Research Projects Agency aim to reach it with a program that could develop such a system in three years.

The main goal of the new program, called Quiness, is to demonstrate that quantum communications can generate secure keys at sustainable rates of 1-10 Gbps at distances of 1,000-10,000 km. The Quiness program will develop macroscopic quantum communications, like protocols that combine the security of single-photon-based quantum communications with the robustness against loss and noise of bright coherent pulses, DARPA stated.

DARPA said that the Quiness program has two secondary goals: to demonstrate that secure quantum communications can be extended to entirely new domains, such as underwater and through dirty air, and to extend quantum communications beyond key distribution to other practical, scalable quantum protocols.

"Contemporary information security is algorithmic, and as a result, not provably secure. Examples of algorithmic security include pseudo-random number generation and public key encryption. Quantum communications are, in principle, capable of providing a provably secure communications channel. Communications protected by quantum security can typically only be attacked 'in transit' and are not vulnerable to off-line attacks at some point in the future using newly developed techniques or computational resources," DARPA stated.

The issue is that single photons have proven extremely fragile in the face of loss and noise, effectively limiting the range of quantum communications to thousands of secure bits per second at a range of several hundred kilometers.
In contrast, optical communications based on bright coherent states routinely achieve unsecured communications rates exceeding 10^10 bits per second over distances exceeding 10,000 km, DARPA stated. Successful Quiness proposals are expected to present a method for decoupling loss from secure bit rate (such that 10 dB of loss results in far less than a factor-of-10 decrease in secure bit rate), DARPA said.

All critical program areas should produce prototypes on 6-9 month cycles and deliver them to a central testbed. Such testbeds should simulate realistic conditions in fiber and/or free-space environments through the use of, for example, recirculating loops containing fibers, amplifiers, and transparent switches. In addition, the testbed should ideally be able to simulate realistic sources of noise and loss, DARPA said. At the government's option, a large-scale testbed may be provided for long-distance, high-rate tests under realistic conditions for long-haul communications.
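The "decoupling" goal can be made concrete with the standard dB-to-transmittance conversion. In conventional single-photon quantum key distribution, the secure key rate scales roughly in proportion to channel transmittance (a deliberate simplification that ignores protocol details), which is exactly the coupling Quiness aims to break:

```python
import math

def transmittance(loss_db):
    """Convert channel loss in dB to linear transmittance."""
    return 10 ** (-loss_db / 10)

def naive_rate(rate_at_zero_loss_bps, loss_db):
    """Roughly linear rate scaling of conventional single-photon QKD
    (an approximation for illustration, not a protocol-level model)."""
    return rate_at_zero_loss_bps * transmittance(loss_db)

# 10 dB of loss leaves only 10% of the light -- and, in the conventional
# scaling, only about 10% of the secure bit rate. Quiness wants far less
# than that factor-of-10 penalty.
print(transmittance(10))  # 0.1
```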
Andrea Pellegrini, Valeria Bertacco, and Todd Austin of the University of Michigan's Electrical Engineering and Computer Science Department found that they could work out pieces of the private key employed in the RSA encryption algorithm, as implemented within OpenSSL, by inducing variations in the power supply of a hardware device as it was running the software to process encrypted messages. Using this technique, they were able to expose four bits of the key at a time, and assemble the entire 1024-bit key in 104 hours using a cluster of 81 2.4-GHz Pentium 4 computers.

"If the hardware lab in a secure system is compromised, not only would it be possible to extract secret information about the software, but it would also be extremely hard for the software to detect that an attack is underway," said the scientists in a paper that will be published this week at the Design Automation and Test in Europe conference. "The work presented in this paper further underscores the potential danger that systems face due to fault-based attacks and exposes a severe weakness to fault-based attacks in the OpenSSL libraries," they concluded.

OpenSSL is said to be preparing a patch to solve the problem. The bug is relatively easy to fix, according to reports: it would involve using randomization techniques and adding an error-checking step to the software. However, the researchers have said that it might be possible to apply the method to other cryptography libraries. OpenSSL is an open source implementation of the SSL encryption and certification system, and is used on hundreds of thousands of systems across the world.
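The error check described in the reported fix resembles a standard defense against fault attacks: re-verify the private-key result with the public key before releasing it, so a glitched computation produces no output an attacker can analyze. A toy sketch with textbook-sized RSA parameters follows; this is an illustration of the general technique, not OpenSSL's actual code:

```python
# Toy sketch of the verify-before-release defense against fault attacks.
# Textbook-sized RSA numbers, for illustration only.

N, E, D = 3233, 17, 413  # n = 61 * 53; e * d = 1 (mod lcm(60, 52))

def sign(message, fault=False):
    sig = pow(message, D, N)
    if fault:
        sig ^= 1  # simulate a bit-flip induced via the power supply
    # Error check: verify with the public key before releasing the result.
    if pow(sig, E, N) != message:
        return None  # withhold the faulty signature; it would leak key bits
    return sig

assert sign(65) is not None           # normal operation succeeds
assert sign(65, fault=True) is None   # glitched result is suppressed
```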
After installation of a fiber optic cabling system, the link's transmission characteristics need to be tested, including the most important parameters: link attenuation, connector insertion loss, return loss, and so on. Following is a brief introduction to the measurement of the key physical parameters during the fiber optic cabling process.

The key physical parameters of a fiber optic link:

1. Attenuation is the reduction in optical power as light travels along the fiber.

2. Total attenuation of a fiber optic link: the loss, in decibels, is calculated from the ratio of the power launched into the fiber (P_in) to the power at the fiber's output end (P_out).

3. The loss is proportional to the length of the fiber, so the total attenuation reflects not only the loss of the fiber itself but also the length of the fiber.

4. Cable loss coefficient (α): to characterize the fiber's attenuation independently of length, the loss coefficient is defined as the loss per unit length (dB/km).

5. Attenuation measurements: connecting the fiber to the light source and optical power meter unavoidably introduces additional losses, so a field test must first set a reference point (zeroing). There are several methods for setting the reference point, chosen mainly according to the link under test. In a fiber optic cabling system, since the fiber runs are usually not long, the test method pays particular attention to the connectors and to the measurement of fiber optic patch cords.

Reflection loss, also known as return loss, is the ratio, in decibels, of the light reflected backward at a fiber optic connector to the input light. The larger the return loss, the better, in order to reduce the effect of reflected light on the light source and the system. An effective way to improve return loss is to process the fiber end face into a spherical or angled spherical shape.
Insertion loss is the ratio, in decibels, of the output optical power to the input optical power after the light signal passes through the connector. The smaller the insertion loss, the better. Insertion loss is measured in the same way as attenuation.

In a word, to complete a measurement of optical loss, a calibrated optical light source and a standard optical power meter are indispensable.
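The attenuation and loss-coefficient definitions above reduce to two one-line formulas. The powers and lengths in this sketch are illustrative values:

```python
import math

def loss_db(power_in_mw, power_out_mw):
    """Total link loss in dB: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

def attenuation_coefficient(total_loss_db, length_km):
    """Cable loss coefficient (alpha) in dB/km."""
    return total_loss_db / length_km

# Halving the power costs about 3 dB:
halved = loss_db(1.0, 0.5)                # ~3.01 dB
# A 7 dB total loss over a 20 km run:
alpha = attenuation_coefficient(7.0, 20)  # 0.35 dB/km
```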
At Cycle Computing, we know that access to compute resources at the right scale is a key driver of scientific research. Sometimes it accelerates research, but sometimes it enables work that would otherwise be impossible. A great example of this is a project recently undertaken by the NASA Center for Climate Simulation (NCCS) in partnership with Amazon Web Services (AWS) and Intel. Researchers at NASA were studying the carbon storage in the Sub-Sahara region of Africa. The traditional method is to count and measure the shrubs and trees in the region, but over a 10 million square mile area, that’s an impossibly labor-intensive task. Instead, the researchers performed digital analysis of high-resolution satellite data. This project was made possible by leveraging several CycleCloud features. First, the data transfer tools uploaded the image data from the NASA datacenter into S3. As file transfers completed, a custom plugin submitted the computation jobs. CycleCloud autoscaled spot instances across multiple types and availability zones in order to quickly and cheaply get the resources needed to process the data. Once the data was uploaded to S3, scientists could re-process as needed. NCCS recently reprocessed 43 terabytes of image data in just three days for less than $2,000. This summer, the NCCS is re-processing with an additional data layer that will improve accuracy three-fold. The features and flexibility of CycleCloud will be a key part of performing this important climate research.
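The quoted figures (43 TB reprocessed in three days for under $2,000) imply a sustained throughput and a cost ceiling that are easy to work out:

```python
# Back-of-the-envelope arithmetic on the figures quoted above.

TB = 43          # terabytes reprocessed
DAYS = 3         # elapsed time
COST_USD = 2000  # upper bound on cost quoted in the article

seconds = DAYS * 24 * 3600
throughput_mb_s = TB * 1e6 / seconds  # decimal units: 1 TB = 1e6 MB
cost_per_tb = COST_USD / TB

print(f"sustained throughput ~ {throughput_mb_s:.0f} MB/s")  # ~166 MB/s
print(f"cost ceiling ~ ${cost_per_tb:.0f} per TB")           # ~$47 per TB
```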
Sir Tim Berners-Lee, the inventor of the World Wide Web, recently made an appearance on the BBC to talk about the future of the Web and how HTML5 is going to impact the technology he essentially began. His projections for HTML5's impact are astonishing in their own right: HTML5, according to his remark in the interview, will make "every single Web page out there...like a computer." Yet Berners-Lee also had some stern warnings about the future of the Internet as we know it.

Basically, Berners-Lee projects a future in which, thanks to HTML5, individual websites can develop such capability that they are essentially little computers in their own right. Where previously the Web was more of a static, read-only experience -- still a substantial innovation in its own right -- HTML5 will allow for significant gains in overall functionality, and make the Web capable of so much more than it is even today.

HTML5 will offer users a variety of new features, such as improved offline storage and canvas drawing capabilities without the need for Flash or other plug-ins. Better yet, it will provide streaming support for both audio and video native to the browser. Forms will become smarter too -- welcome news indeed for people who use a lot of forms online -- as will geolocation tools that better tailor content to your immediate area, especially welcome for mobile device users. Perhaps the biggest destabilizing element represented by HTML5 is its focus on Web applications, which have significant power to destabilize the concept of the individually branded app store by making at least some apps available on any mobile device that can access the Web.

But while Berners-Lee had high praise for the future of online development, he also had some dire warnings to dispense as well.
He's been speaking out for some time now about government regulation of the Internet, especially its potential to shut down innovation under the weight of red tape and choking laws. Laws regarding the storage of data, like those being recommended in an Australian proposal, have great potential for harm to the average user, and considering the sheer amount of money associated with the Internet as a whole, there are plenty of corporate and governmental interests eager to control that flow of information in their favor.

Thus, Berners-Lee not only presented a bright and shining future of the Web as a new power in information and services, but also warned about potential threats to this still comparatively new technology. There certainly are perils associated with use of the Web, from viruses and online scams clear up to the machinations of corporations and governments, but this technology is rapidly changing, and providing value for users the world over. Where will the Internet go from here? In a few years, we may not even recognize it. But the possibilities are certainly impressive, assuming the perils can be appropriately managed.

Edited by Rachel Ramsey
Making data more directly accessible to developers, with greater freedom to use it in ways not anticipated when it was compiled, is the hottest fashion statement in development technology. With rapid refinement of tools for working with XML on data's supply side, and with data manipulation becoming better integrated with mainstream code on the demand side, there's a trunk full of ways for developers to win.

The business-suit sleekness of an enterprise application is today often marred by ugly seams, where a procedural language has been clumsily stitched to nonprocedural SQL statements manipulating relational data. Further opportunities for ugliness arise with the growing use of XML, a nonrelational data representation, but one that programmers would like to query with the same ease and performance they get from a SQL database.

Most programming languages depend on sequence and hierarchy for data reference and navigation, for example, in accessing arrays. The tables of relational databases have neither sequence nor hierarchy, and that's just the beginning of the differences that make it difficult to put together a unified look.

Donald Chamberlin, an IBM fellow who co-invented SQL in the 1970s with colleague Raymond Boyce, summarized key differences between relational data and XML in an IBM Systems Journal article in October 2002. Relational data, Chamberlin observed, "tend to have a regular structure, which allows the descriptive metadata for these data to be stored in a separate catalog. XML data, in contrast, are often quite heterogeneous, and distribute their metadata throughout the document."
Relational tables typically have a flat structure, Chamberlin noted; that is, each column of each record represents a single-valued instance of some fundamental data type, such as an integer or a character string. In the time since Chamberlin's paper, databases have adopted structured types to handle more complex multivalued data, and foreign keys have always enabled one-to-many relationships among tables. Even so, lookups of arbitrary attribute values in a database using these facilities are not as easy as with XML documents using multilevel nested elements.

"XML documents have an intrinsic order, whereas relational data are unordered except where an ordering can be derived from data values," Chamberlin added in the paper, further observing that a relational data set typically has a value in almost every column of every record while XML data are often "sparse," with many of their possible data elements left blank. But there's still great demand for XML's versatility and the ease it offers for repurposing data for multiple tasks. XML addresses a genuine need.

Data's XQuery adventure

Developers confronting an XML-demarcated data stream have been faced, until recently, with three unattractive choices. At one extreme, programmers could burn machine cycles upfront to "shred," as the process is popularly called, the multilevel XML into a collection of relational tables. At the other extreme, they could defer all processing by storing the entire XML corpus as a single chunk of textual data, delving into it as needed later on. Both these approaches leave open the door to use of standards-based technologies down the line. The third choice, in contrast, limits that flexibility by adopting the XML representations of a specific database platform.

A better way, though, has been devised by IBM's Chamberlin and others with their development of XQuery, both a query and a transformation language. Released on Sept.
15 as a World Wide Web Consortium working-draft specification, XQuery goes beyond queries to meet the needs of data integration and reformatting. (The working-draft XQuery specification is online at www.w3.org/TR/xquery, while an informative page of FAQs is at www.stylusstudio.com/xquery/xquery_faq.html.) Before it reached even its current intermediate level of formalization, XQuery had already attracted widespread interest from developers. As reported in eWEEK in March, a privately sponsored survey of 550 developers found more than half already using XQuery and another third planning to use it within the year. That survey was commissioned by the DataDirect Technologies division of Progress Software Corp., which released last month its DataDirect XQuery 1.0 embeddable component for Java application developers. The product works with databases from IBM, Microsoft Corp., Oracle Corp. and Sybase Inc., as well as MySQL AB. Also available from DataDirect is the Stylus Studio IDE (integrated development environment) for XQuery programming, combining XML editing with XQuery debugging and other related capabilities. Meanwhile, the increasingly mainstream environment of Microsoft's C# may soon offer access to nearly any kind of data source with a single syntax and with the full syntactic and semantic support of Microsoft's Visual Studio environment. Microsoft's LINQ (Language Integrated Query) technology, previewed at the company's Professional Developers Conference in Los Angeles last month, extends C# (and Visual Basic .Net, as well as potentially any other .Net language) to unify interactions with objects, XML-formatted data and relational data under a single umbrella. "A developer who is working with a collection of customer objects and wants to query a list of names from that collection would need to write five or 10 lines of code to accomplish that simple task with today's technology.
With LINQ, that list can be obtained by writing just one line of code, Select Name From Customers," said Microsoft Technical Fellow Anders Hejlsberg in a statement accompanying his conference presentation. eWEEK Labs regards all such proposals with suspicion: Database lore is rife with tales of poorly optimized queries that interacted with disparate table sizes or varying data bandwidths to produce crippling performance problems. To the degree that LINQ succeeds in enabling data abstraction, it will do so by concealing underlying data representation and by putting the actual means of data access, which Hejlsberg derides as "plumbing," into a box labeled "No Developer-Serviceable Parts Inside." One must recall, however, that exactly the same concerns were raised when C replaced assembly language programming or, for that matter, when databases themselves were introduced into general enterprise use. The question that most developers will want answered is, "Will LINQ get me the application I want in less time, for less money, while Moore's Law offsets the likely performance overheads?" The other issue accurately raised by some observers is that of developer lock-in. Rather than being adept at writing SQL and injecting it into any development environment, a developer may instead become proficient only at working with a universal abstraction of data that's available only within the .Net environment. Whether LINQ is a better-tailored suit or a straitjacket is very much yet to be seen. Technology Editor Peter Coffee can be reached at email@example.com.
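LINQ itself is a C# and Visual Basic .Net feature, but the shape of Hejlsberg's one-line query is easy to mimic in any language where queries are ordinary expressions. A rough Python analogue, using a hypothetical in-memory Customer collection:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    city: str

# A hypothetical in-memory collection standing in for any data source.
customers = [Customer("Ada", "London"), Customer("Grace", "Arlington")]

# Hejlsberg's "Select Name From Customers" collapses to one expression
# once querying is an ordinary part of the language:
names = [c.name for c in customers]
print(names)  # ['Ada', 'Grace']
```

The point is not the comprehension syntax itself but that the query is checked and composed by the host language rather than pasted in as an opaque SQL string.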
Intel researchers say they have a transistor design that could someday enable faster and more power-efficient chips. Intel may have found its transistor of the future. The company's researchers feel that a three-dimensional, or tri-gate, transistor, which combines the use of advanced materials and manufacturing techniques, could be the answer to delivering future chips that are both speedy and energy efficient, Intel researchers said in a presentation at the VLSI Symposium on Technology and Circuits, which began on June 12. During a discussion of their work, the researchers said using tri-gate transistors built with this mixture would allow them to continue keeping pace with Moore's Law, the tenet that says chip transistor counts will double every two years and thus increase performance, for at least several more chip generations. "That combination of this [transistor design, material and manufacturing techniques] is really what makes it special," said Mike Mayberry, Intel vice president and director of component research, in Honolulu, Hawaii. The tri-gate design discussed at the conference pairs high-k (high electrical capacitance) gate dielectrics with metal gate electrodes and a manufacturing technique called strained silicon, a combination offered to cut power consumption. Transistors have a source, a drain and a gate. The channel, linking the source and drain, provides the path for electricity to follow between them. The metal gate electrode, in this case, works to keep electricity inside the channel, while strained silicon, which manipulates the silicon lattice the chip is built upon, speeds up the flow of electrons inside the chip. Although he characterized the tri-gate design as one option Intel researchers have developed, Mayberry said that the combination can be used to help chips stay within their power consumption targets even as they move to new manufacturing levels.
Those transitions allow each chip to pack on more transistors, and therefore boost performance, by arranging them more closely. Intel researchers, tasked with keeping the chip maker on pace with Moore's Law, continually look ahead for potential barriers. One such barrier, called the short channel effect, can be mitigated by the tri-gate design that researchers have created, Mayberry said. The short channel effect comes into play as planar transistor gate widths shrink to minute proportions. Eventually, electricity, in standard planar transistor designs, finds it easier and easier to circumvent the gate, and so-called leakage current (the amount of energy burned when a transistor is in the off position) increases. "If we didn't do the research ahead of time, then if we got to a particular place we'd be stuck," Mayberry said. Thus researchers turned to the tri-gate design, which surrounds the channel on three of four sides. The tri-gate transistor, its specific mix of materials and the use of strained silicon all work together to block the short channel effect and thus allow the transistor to operate more efficiently. Ultimately, "It allows you to turn the device off more strongly and off more cleanly," Mayberry said. "Plus leakage through that is significantly reduced." When optimized for sheer speed, transistors using the design can run up to 45 percent faster than Intel's current state-of-the-art 65-nanometer transistors, he said. Yet, when optimized for power, they can cut leakage by 50 times. An in-between position would see the transistor using 35 percent less power while running at the same frequency, according to tests on proof-of-concept SRAM (static random access memory) cells, he said. Still, while it shows great promise, the transistor design has not been officially adopted for use in products.
Intel researchers can offer the new design to the manufacturing group responsible for implementing each new manufacturing technology transition, but the group would have to select the approach. It has not decided one way or the other just yet, Mayberry indicated. "We've not made the decision for which [manufacturing] node it will go in," he said. Intel's 45-nanometer manufacturing, due in 2007, will not use the new transistor design, although it is a strong candidate for the 32-nanometer or 22-nanometer manufacturing processes, scheduled to arrive in 2009 and 2011, respectively, he said.
Ebola Resurgent in West Africa as Health Authorities Seek Answers and a Vaccine By CHHS Research Assistant Jules Szanton Guinea, Liberia, and Sierra Leone are confronting a resurgence of Ebola after several encouraging months when the virus seemed to be nearly extinguished from West Africa. Public health experts suspect a number of factors are leading to a rise in the infectious disease. The resurgence lends new urgency to the effort to develop and test an Ebola vaccine, a badly needed tool in the struggle against the highly infectious disease. By late spring, the West African Ebola outbreak seemed to be under control. The World Health Organization declared Liberia to be Ebola-free on May 9, 2015. Guinea and Sierra Leone—which, along with Liberia, are among the countries most devastated by the current epidemic—have yet to fully rid their countries of Ebola. In April, however, President Ernest Bai Koroma of Sierra Leone spoke hopefully of a “battle to get to zero cases,” and Guinean health authorities seemed to have fought the virus to a standstill. Since then, however, the situation has worsened. The virus has returned to Liberia, where 11 people are currently being treated. Last week, Guinea and Sierra Leone reported 13 and 14 new cases respectively. Disturbingly, the virus is once again being transmitted in those countries’ capital cities—places where the virus had previously been eliminated. Experts are still unsure what is behind the Ebola resurgence, although anecdotal reports suggest that the disease is being transmitted to humans from infected dogs killed by hunters. West Africans have traditionally hunted bats, monkeys, rodents, and dogs as bush meat. This presents dangers, since Ebola can infect humans who come into contact with bodily fluids of infected animals. Public health authorities also worry that the resurgent disease could be coming from pockets of territory where they incorrectly believed that Ebola had been eliminated. 
In places where Ebola is transmitted from person to person, health workers often contract the disease after coming into contact with an infected patient’s bodily fluids. Similarly, mourners at traditional West African funerals who touch or wash an Ebola victim’s dead body are at risk of transmission. A third possibility is that Ebola is being transmitted sexually. The virus stays in a survivor’s semen for months after he is symptom free, which means that the virus could be transmitted by patients who had seemingly recovered. Ebola’s resurgence is the latest indication of how difficult it is for public health authorities to eliminate a highly contagious viral disease without a vaccine. For years, researchers seeking an Ebola vaccine struggled to attract funding from government and private-sector sources. The current outbreak, which began last year, has spurred several attempts to create a vaccine to protect against Ebola. Just this week, a team of researchers at the National Institutes of Health and the University of Texas found that an inhalable Ebola vaccine was effective in monkeys. There is, of course, no guarantee that the vaccine will be effective on human patients. Until a vaccine proves successful in human patients, public health authorities will continue to struggle to contain the Ebola outbreak.
What is Telex? Telex is a next generation Internet freedom technology. It's designed to help citizens of repressive governments freely access the online services and information of their choice. The Telex approach differs from other tools available today: Rather than working at network endpoints, Telex works through Internet service providers and other operators of core Internet infrastructure, which helps make it very difficult to detect and block.

Update, July 2013

We're ramping up for a new phase of Telex development. We'll be updating our original, lab-scale prototype to make way for a larger testbed deployment with a partner ISP. We look forward to sharing more information as these plans move forward!

What makes Telex different from previous approaches:
- Telex operates in the network infrastructure — at any ISP between the censor's network and non-blocked portions of the Internet — rather than at network end points. This approach, which we call “end-to-middle” proxying, can make the system robust against countermeasures (such as blocking) by the censor.
- Telex focuses on avoiding detection by the censor. That is, it allows a user to circumvent a censor without alerting the censor to the act of circumvention. It complements services like Tor (which focuses on hiding with whom the user is communicating, rather than hiding the fact that the user is attempting anonymous communication at all) instead of replacing them.
- Telex employs a form of deep-packet inspection — a technology sometimes used to censor communication — and repurposes it to circumvent censorship.
- Other systems require distributing secrets, such as encryption keys or IP addresses, to individual users. If the censor discovers these secrets, it can block the system. With Telex, there are no secrets that need to be communicated to users in advance, only the publicly available client software.
- Telex can provide a state-level response to state-level censorship.
We envision that friendly countries would create incentives for ISPs to deploy Telex. Government Internet censors generally use firewalls in their network to block traffic bound for certain destinations, or containing particular content. For Telex, we assume that the censor government desires generally to allow Internet access (for economic or political reasons) while still preventing access to specifically blacklisted content and sites. That means Telex doesn't help in cases where a government pulls the plug on the Internet entirely. We further assume that the censor allows access to at least some secure HTTPS websites. This is a safe assumption, since blocking all HTTPS traffic would cut off practically every site that uses password logins. Many anticensorship systems work by making an encrypted connection (called a “tunnel”) from the user's computer to a trusted proxy server located outside the censor's network. This server relays requests to censored websites and returns the responses to the user over the encrypted tunnel. This approach leads to a cat-and-mouse game, where the censor attempts to discover and block the proxy servers. Users need to learn the address and login information for a proxy server somehow, and it's very difficult to broadcast this information to a large number of users without the censor also learning it.

How Telex Works

Telex turns this approach on its head to create what is essentially a proxy server without an IP address. In fact, users don't need to know any secrets to connect. The user installs a Telex client app (perhaps by downloading it from an intermittently available website or by making a copy from a friend). When the user wants to visit a blacklisted site, the client establishes an encrypted HTTPS connection to a non-blacklisted web server outside the censor’s network, which could be a normal site that the user regularly visits. Since the connection looks normal, the censor allows it, but this connection is only a decoy.
The client secretly marks the connection as a Telex request by inserting a cryptographic tag into the headers. We construct this tag using a mechanism called public-key steganography. This means anyone can tag a connection using only publicly available information, but only the Telex service (using a private key) can recognize that a connection has been tagged. As the connection travels over the Internet en route to the non-blacklisted site, it passes through routers at various ISPs in the core of the network. We envision that some of these ISPs would deploy equipment we call Telex stations. These devices hold a private key that lets them recognize tagged connections from Telex clients and decrypt these HTTPS connections. The stations then divert the connections to anticensorship services, such as proxy servers or Tor entry points, which clients can use to access blocked sites. This creates an encrypted tunnel between the Telex user and Telex station at the ISP, redirecting connections to any site on the Internet. Telex doesn't require active participation from the censored websites, or from the noncensored sites that serve as the apparent connection destinations. However, it does rely on ISPs to deploy Telex stations on network paths between the censor's network and many popular Internet destinations. Widespread ISP deployment might require incentives from governments.

Development so Far

At this point, Telex is a concept rather than a production system. It's far from ready for real users, but we have developed proof-of-concept software for researchers to experiment with. So far, there's only one Telex station, on a mock ISP that we're operating in our lab. Nevertheless, we have been using Telex for our daily web browsing for the past four months, and we're pleased with the performance and stability. We've even tested it using a client in Beijing and streamed HD YouTube videos, in spite of YouTube being censored there.
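The key property of the tag can be sketched in a few lines. This is not the actual Telex protocol or its parameters, just a toy illustration of the public-key steganography idea, using a deliberately insecure 127-bit group: anyone can produce a tag from public information alone, but only the private-key holder can distinguish a tagged connection from random bits.

```python
import hashlib
import secrets

# Toy sketch of a public-key steganographic tag -- NOT the real Telex
# protocol. The 127-bit Mersenne prime group is wildly insecure and is
# chosen only to keep the example readable.
p = 2**127 - 1   # a prime modulus (toy size)
g = 3            # generator used for the demonstration

# The Telex station publishes g^x mod p; the private x never leaves the ISP.
station_priv = secrets.randbelow(p - 2) + 1
station_pub = pow(g, station_priv, p)

def make_tag(pub):
    """Client side: build a tag from public information alone."""
    r = secrets.randbelow(p - 2) + 1
    ephemeral = pow(g, r, p)     # to an observer, just random-looking bits
    shared = pow(pub, r, p)      # Diffie-Hellman shared secret
    mac = hashlib.sha256(shared.to_bytes(16, "big")).digest()[:16]
    return ephemeral, mac

def station_recognizes(tag, priv):
    """Station side: only the private-key holder can spot a tagged flow."""
    ephemeral, mac = tag
    shared = pow(ephemeral, priv, p)
    return mac == hashlib.sha256(shared.to_bytes(16, "big")).digest()[:16]

tag = make_tag(station_pub)
print(station_recognizes(tag, station_priv))                     # True
print(station_recognizes((tag[0], b"\x00" * 16), station_priv))  # False
```

In the real system the tag is hidden inside fields of the TLS handshake that normally carry random values, which is what makes the marked connection indistinguishable from an ordinary HTTPS session.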
Telex illustrates how it is possible to shift the balance of power in the censorship arms race, by thinking big about the problem. We hope our work will inspire discussion and further research about the future of anticensorship technology.
What is it?

RSS is a lightweight XML format, used to standardise news and other material so that updates on website content can be sent to end-users who have requested them. RSS also enables content to be syndicated to other websites. From the user’s point of view, RSS has been described as a “content personalisation tool”. Readers and aggregators in client software such as browsers check the “feeds” from the originating websites and display anything new. Effectively a mini-database of headlines and other summaries of new content, RSS is also being explored as a mechanism for content distribution services, which Microsoft’s Simple List Extensions will support. Money is being invested in medical and financial applications. RSS has had two tipping points in its history. The first was when the New York Times adopted it to provide news feeds, rapidly followed by other media organisations, including CNN and the BBC. The second was Microsoft’s decision to include it in Internet Explorer 7. A 2005 survey by Nielsen/Netratings found that even among tech-savvy blog users, about 66% had never heard of RSS. To date, only about 5% of people use RSS to get news and information delivered to them. With Internet Explorer 7, RSS will hit the mainstream. This carries the usual downsides: the Microsoft implementation is good enough, rather than good, and Microsoft has created its own extensions which, though covered by Creative Commons licensing, are likely to skew future development in a Windows-centric direction.

Where is it used?

By newspapers, broadcasters and other media companies, Google and Yahoo, also bloggers.

Where did it originate?

RSS had proprietary forebears, such as Apple’s Meta Content Framework and Microsoft’s Channel Definition Format, but the true line grew out of a project abandoned by Netscape.
Rich Site Summary was cast adrift at version 0.91, just as the internet’s early adopters had started to take an interest. It was picked up by UserLand, a supplier of web authoring products. However, a breakaway faction had already created RSS 1.0 (standing for RDF Site Summary), so when UserLand had a product to release, this had to be called RSS 2.0 – which sounds like a successor to RSS 1.0, but isn’t; in this case it stands for Really Simple Syndication. Since 2003, the RSS 2.0 specification has been owned by the Harvard Law School’s Berkman Center for Internet and Society.

What makes it special?

RSS provides a way of promoting websites without costly advertising and can be used to set up ad-hoc content sharing partnerships.

How difficult is it to master?

RSS is straightforward for those with a basic grasp of XML and/or HTML/XHTML. Most tutorials involve just a few hours’ work.

What systems does it run on?

Internet Explorer 7, Apple’s Safari, Mozilla’s Firefox and the Opera browser can all handle web feeds. IBM has included RSS capability in the latest releases of Lotus Notes and Domino. Being “lightweight”, RSS is ideal for portable devices such as PDAs and mobile phones.

What is coming up?

Microsoft is building RSS technologies into Longhorn, its next-generation server operating system.
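The "lightweight" claim is easy to verify: a complete RSS 2.0 feed and the code of a minimal reader both fit in a few lines. A Python sketch, with an invented feed standing in for a real site's:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed (invented items, for illustration only).
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item><title>First headline</title><link>http://example.com/1</link></item>
    <item><title>Second headline</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

# An aggregator's core job: pull the headlines out of each polled feed
# and display anything new.
root = ET.fromstring(feed)
headlines = [item.findtext("title") for item in root.findall("./channel/item")]
print(headlines)  # ['First headline', 'Second headline']
```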
A year ago (see "Capital Currents," August 2005), I wrote about the new DBS frequency band at 17 GHz, wondering when the FCC would start the multi-year rulemaking process necessary to adopt technical standards and award licenses. The process has now begun. It includes some unique technical issues, but it ignores the juicy political issues–like who will be eligible for licenses, and whether adult programming will be prohibited. So one unique issue is that, for the first time, the FCC is faced with "reverse band operations," whereby a frequency band (in this case, 17.3-17.7 GHz) is being used for both satellite uplinks and downlinks. There is a risk of interference from the 17 GHz feeder link stations into nearby 17 GHz DBS subscriber receivers in the new band. This is a particular problem during rainy conditions, because the raindrops ("hydrometeors") will scatter the uplink transmissions, and some of the signal will be deflected downward. The FCC believes this kind of operation is feasible because DirecTV and EchoStar have only a few feeder link earth stations. But the new generation of spot-beam satellites will change that. The DBS obligations to carry local broadcast stations will require more feeder link stations, and some will be located in or near major metropolitan areas. For example, DirecTV recently added feeder link stations in Los Angeles, Calif.; Castle Rock, Colo.; Winchester, Va. and St. Paul, Minn. And if the FCC ever acts on the pending proposals to create additional orbital slots for the 12.2-12.7 GHz band, even more uplink earth stations will be needed. But in addition to earth-station-to-earth-station interference, there is the risk of space-station-to-space-station interference. Satellites operating in the new band will be transmitting downlink signals at 17 GHz, in the same band that the DirecTV and EchoStar satellites are already using to receive the uplink programming. 
That's a particular problem if some of the orbital slots for the two bands are co-located, or if they are close to one another. If the FCC does not impose any ownership restrictions on the new band licensees, it's easy to conceive of DirecTV wanting to operate a 17 GHz DBS satellite at the same orbital location as its 12 GHz satellite, so that its customers can use the same dish to receive both satellites. But it isn't clear whether that's technically feasible. And that leads to the question of what orbital spacing will be required for the new band. The 12 GHz band uses an orbital spacing of 9 degrees. That allows the use of very small dishes, about 18 inches in diameter. If the satellites were closer together, bigger dishes would be needed, because smaller dishes have a wider beamwidth, and would receive interference from the adjacent satellite, while larger dishes would be able to reject that interference. But the beamwidth or resolving power of a dish varies with frequency, and improves as the frequency increases. So at 17 GHz, an 18-inch dish could work with satellites spaced as close as 4 degrees apart. If the FCC were to allow DirecTV and EchoStar to operate satellites in the new band, the optimal spacing might be 4.5 degrees, so that some of them could be co-located with existing DBS satellites–provided, of course, that co-location is technically feasible. If existing DBS operators are not eligible, then co-location is less important. But here's the odd thing about this new FCC Notice of Proposed Rulemaking. This 65-page document has only one page discussing licensing procedures, and it never mentions the issue of eligibility. In contrast, in the past, the FCC has prohibited some existing licensees and operators (for example, cable operators and broadcasters) from applying for certain kinds of licenses. There are evidently three licensing approaches under consideration: First-come, first-served; a processing round; or an auction. 
Under a recent court decision, DBS auctions might be illegal, so scratch that. The FCC says that it has three pending license applications for the new band: from DirecTV in 1997, from Echostar in 2002 and from Intelsat in 2005. It's possible that the FCC will grant the DirecTV, EchoStar and Intelsat applications under the first-come, first-served policy. That should still leave a few orbital slots for Canada, Mexico and other U.S. companies. Oh, but wait, I forgot. Under commitments the U.S. made to the World Trade Organization in 1997, non-U.S. companies can also own satellites that operate over the U.S. and can provide service to U.S. locations. Anyway, licensing procedures and eligibility are likely to be a very controversial area in this proceeding. Finally, there was nothing in the FCC's Notice about a "family tier" or adult programming. According to rumors, every decision the current FCC chairman makes is colored by his desire to constrain adult programming. So, don't be surprised if there is a second round–a political round–in this FCC proceeding, after the technical issues are resolved. That's when licensee eligibility, adult programming and other political issues will be handled. And that's when the fireworks will start. Have a comment? Contact Jeff via e-mail at: firstname.lastname@example.org
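The dish-spacing arithmetic in the column above follows a standard antenna rule of thumb: the half-power beamwidth of a parabolic dish is roughly 70 wavelengths divided by the dish diameter, with the factor of 70 only an approximation. A quick Python check for an 18-inch dish reproduces the narrowing beam at 17 GHz that makes closer orbital spacing workable:

```python
# Rule-of-thumb half-power beamwidth of a parabolic dish antenna:
#   beamwidth (degrees) ~= 70 * wavelength / diameter
# The factor of 70 is a common engineering approximation, not exact.
C = 299_792_458.0  # speed of light, m/s

def beamwidth_deg(freq_hz, diameter_m):
    return 70.0 * (C / freq_hz) / diameter_m

dish = 18 * 0.0254  # an 18-inch consumer dish, in metres

# The beam narrows as frequency rises, so adjacent-satellite interference
# falls off sooner and tighter orbital spacing becomes feasible.
print(round(beamwidth_deg(12.45e9, dish), 1))  # 3.7 (12 GHz DBS band)
print(round(beamwidth_deg(17.5e9, dish), 1))   # 2.6 (new 17 GHz band)
```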
The value of big data is no longer a secret to most companies. The main problem with big data, though, is that it’s, well, big. That is, the volume of data that companies would like to understand is so large and coming so fast that just organizing, manipulating and analyzing it is a problem and, sometimes, prohibitive. Conventional relational databases often can’t handle today’s big data, or can’t process them in a reasonable amount of time. The traditional approach to solving this problem has been to come up with more efficient and powerful systems to process larger and larger amounts of data. For example, massively parallel-processing (MPP) databases, distributed file systems and cloud-based infrastructures have all been applied to the problem. Even with these solutions, the size and growth of big data continue to be a challenge. Several computer science researchers at MIT, however, are taking a new approach to the problem: they’ve come up with a method to, effectively, make big data smaller. In a paper titled The Single Pixel GPS: Learning Big Data Signals from Tiny Coresets, Dan Feldman, Cynthia Sung, and Daniela Rus outline this new approach. The basic idea is to take big data and extract a coreset, defined as “a smart compression of the input signal,” then query and analyze these compressed data. Their compression method also has the benefit of being applicable to data as it’s received, say daily or hourly, in manageable chunks. Put another way, they take an incoming stream of data and identify patterns via statistical estimation (e.g., regression analysis). By then representing the true (big) data with this much smaller set of approximations (along with a small set of randomly selected data points), they end up with a data set that can be managed and analyzed using traditional tools and techniques and should provide similar results to analyzing the original data.
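The paper's coreset construction is considerably more sophisticated, but the flavor of the idea can be sketched simply: summarize each chunk of the stream by a fitted regression line plus a few randomly sampled points, then answer queries from the summary rather than the raw data. A toy Python illustration (the chunk data and helper names are invented for this sketch):

```python
import random

random.seed(0)

# A simplified illustration of the coreset idea, NOT the MIT authors'
# algorithm: each chunk of a large stream is replaced by a least-squares
# line plus a handful of randomly sampled points, and queries are then
# answered from that compressed summary.

def compress_chunk(chunk, sample_size=5):
    """Summarize one chunk as (n, slope, intercept, small random sample)."""
    n = len(chunk)
    x_bar = (n - 1) / 2
    y_bar = sum(chunk) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(chunk))
    sxx = sum((x - x_bar) ** 2 for x in range(n))
    slope = sxy / sxx
    return {"n": n, "slope": slope,
            "intercept": y_bar - slope * x_bar,
            "sample": random.sample(chunk, sample_size)}

def approx_mean(summary):
    # Mean of the fitted line over x = 0..n-1 is its value at the midpoint.
    return summary["intercept"] + summary["slope"] * (summary["n"] - 1) / 2

# A noisy upward-trending signal stands in for one chunk of the stream.
chunk = [0.5 * t + random.uniform(-1, 1) for t in range(1000)]
summary = compress_chunk(chunk)

# 1,000 points reduce to 3 numbers plus a 5-point sample, yet this
# particular query is answered essentially exactly, since a least-squares
# line always passes through the mean of the data it fits.
print(approx_mean(summary), sum(chunk) / len(chunk))
```

Because each chunk compresses independently, the scheme matches the paper's streaming setting: summaries of daily or hourly chunks accumulate while the raw data can be discarded.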
It’s a potentially revolutionary approach that could be applied to a wide range of big data problems.
Amazing Facts for General Knowledge

A collection of 90 amazing facts to increase your general knowledge. These general knowledge facts are fun and free.

1. The word "queue" is the only word in the English language that is still pronounced the same way when the last four letters are removed.
2. Beetles taste like apples, wasps like pine nuts, and worms like fried bacon.
3. Of all the words in the English language, the word "set" has the most definitions.
4. What is called a "French kiss" in the English-speaking world is known as an "English kiss" in France.
5. "Almost" is the longest word in the English language with all the letters in alphabetical order.
6. "Rhythm" is the longest English word without a vowel.
7. In 1386, a pig in France was executed by public hanging for the murder of a child.
8. A cockroach can live several weeks with its head cut off.
9. Human thigh bones are stronger than concrete.
10. You can't kill yourself by holding your breath.
11. There is a city called Rome on every continent.
12. It is against the law to have a pet dog in Iceland.
13. Your heart beats over 100,000 times a day.
14. Horatio Nelson, one of England's most illustrious admirals, was throughout his life never able to find a cure for his sea-sickness.
15. The skeleton of Jeremy Bentham is present at all important meetings of the University of London.
16. Right-handed people live, on average, nine years longer than left-handed people.
17. Your ribs move about 5 million times a year, every time you breathe.
18. The elephant is the only mammal that can't jump.
19. One quarter of the bones in your body are in your feet.
20. Like fingerprints, everyone's tongue print is different.
21. The first known transfusion of blood was performed as early as 1667, when Jean-Baptiste transfused two pints of blood from a sheep to a young man.
22. Fingernails grow nearly 4 times faster than toenails.
23. Most dust particles in your house are made from dead skin.
24. The present population of 5 billion plus people of the world is predicted to become 15 billion by 2080.
25. Women blink nearly twice as much as men.
26. Adolf Hitler was a vegetarian, and had only ONE testicle.
27. Honey is the only food that does not spoil. Honey found in the tombs of Egyptian pharaohs has been tasted by archaeologists and found edible.
28. Months that begin on a Sunday will always have a "Friday the 13th."
29. Coca-Cola would be green if coloring weren't added to it.
30. On average a hedgehog's heart beats 300 times a minute.
31. More people are killed each year from bees than from snakes.
32. The average lead pencil will draw a line 35 miles long or write approximately 50,000 English words.
33. More people are allergic to cow's milk than any other food.
34. Camels have three eyelids to protect themselves from blowing sand.
35. The placement of a donkey's eyes in its head enables it to see all four feet at all times.
36. The six official languages of the United Nations are: English, French, Arabic, Chinese, Russian and Spanish.
37. Earth is the only planet not named after a god.
38. It is against the law to burp, or sneeze in a church in Nebraska, USA.
39. You're born with 300 bones, but by the time you become an adult, you only have 206.
40. Some worms will eat themselves if they can't find any food!
41. Dolphins sleep with one eye open!
42. It is impossible to sneeze with your eyes open.
43. The world's oldest piece of chewing gum is 9000 years old!
44. The longest recorded flight of a chicken is 13 seconds.
45. Queen Elizabeth I regarded herself as a paragon of cleanliness. She declared that she bathed once every three months, whether she needed it or not.
46. Slugs have 4 noses.
47. Owls are the only birds that can see the colour blue.
48. A man named Charles Osborne had the hiccups for 69 years!
49. A giraffe can clean its ears with its 21-inch tongue!
50. The average person laughs 10 times a day!
51. An ostrich's eye is bigger than its brain.
52. In the weightlessness of space a frozen pea will explode if it comes in contact with Pepsi.
53. The increased electricity used by modern appliance parts is causing a shift in the Earth's magnetic field. By the year 2327, the North Pole will be located in mid-Kansas, while the South Pole will be just off the coast of East Africa.
54. The idea for "tribbles" in "Star Trek" came from gerbils, since some gerbils are actually born pregnant.
55. Male rhesus monkeys often hang from tree branches by their amazing prehensile penises.
56. Johnny Plessey batted .331 for the Cleveland Spiders in 1891, even though he spent the entire season batting with a rolled-up, lacquered copy of the Toledo Post-Dispatch.
57. Smearing a small amount of dog feces on an insect bite will relieve the itching and swelling.
58. The Boeing 747 is capable of flying upside-down if it weren't for the fact that the wings would shear off when trying to roll it over.
59. The trucking company Elvis Presley worked at as a young man was owned by Frank Sinatra.
60. The only golf course on the island of Tonga has 15 holes, and there's no penalty if a monkey steals your golf ball.
61. Legislation passed during WWI making it illegal to say "gesundheit" to a sneezer was never repealed.
62. Manatees possess vocal cords which give them the ability to speak like humans, but don't do so because they have no ears with which to hear the sound.
63. SCUBA divers cannot pass gas at depths of 33 feet or below.
64. Catfish are the only animals that naturally have an ODD number of whiskers.
65. Replying more than 100 times to the same piece of spam e-mail will overwhelm the sender's system and interfere with their ability to send any more spam.
66. Polar bears can eat as many as 86 penguins in a single sitting.
67. The first McDonald's restaurant opened for business in 1952 in Edinburgh, Scotland, and featured the McHaggis sandwich.
68. The Air Force's F-117 fighter uses aerodynamics discovered during research into how bumblebees fly.
69. You *can* get blood from a stone, but only if it contains at least 17 percent bauxite.
70. Silly Putty was "discovered" as the residue left behind after the first latex condoms were produced. It's not widely publicized for obvious reasons.
71. Approximately one-sixth of your life is spent on Wednesdays.
72. The skin needed for elbow transplants must be taken from the scrotum of a cadaver.
73. The sport of jai alai originated from a game played by Incan priests who held cats by their tails and swung at leather balls. The cats would instinctively grab at the ball with their claws, thus enabling players to catch them.
74. A cat's purr has the same romance-enhancing frequency as the voice of singer Barry White.
75. The typewriter was invented by Hungarian immigrant 'Qwert Yuiop', who left his "signature" on the keyboard.
76. The volume of water that the Giant Sequoia tree consumes in a 24-hour period contains enough suspended minerals to pave 17.3 feet of a 4-lane concrete freeway.
77. King Henry VIII slept with a gigantic axe.
78. Because printed materials are being replaced by CD-ROM, microfiche and the Internet, libraries that previously sank into their foundations under the weight of their books are now in danger of collapsing in extremely high winds.
79. In 1843, a Parisian street mime got stuck in his imaginary box and consequently died of starvation.
80. Touch-tone telephone keypads were originally planned to have buttons for Police and Fire Departments, but they were replaced with * and # when the project was cancelled in favor of developing the 911 system.
81. Human saliva has a boiling point three times that of regular water.
82. Calvin, of the "Calvin and Hobbes" comic strip, was patterned after President Calvin Coolidge, who had a pet tiger as a boy.
83. Watching an hour-long soap opera burns more calories than watching a three-hour baseball game.
84. Until 1978, Camel cigarettes contained minute particles of real camels.
85. You can actually sharpen the blades on a pencil sharpener by wrapping your pencils in aluminum foil before inserting them.
86. To human taste buds, Zima is virtually indistinguishable from zebra urine.
87. Seven out of every ten hockey-playing Canadians will lose a tooth during a game. For Canadians who don't play hockey, that figure drops to five out of ten.
88. A dog's naked behind leaves absolutely no bacteria when pressed against carpet.
89. A team of University of Virginia researchers released a study promoting the practice of picking one's nose, claiming that the health benefits of keeping nasal passages free from infectious blockages far outweigh the negative social connotations.
90. Among items left behind at Osama bin Laden's headquarters in Afghanistan were 27 issues of Mad Magazine. Al Qaeda members have admitted that bin Laden is reportedly an avid reader.
According to a recent Georgia Tech study, top mobile Internet browsers lack critical security information, and its absence could put users at risk. As Phys.org reported, even expert users could be fooled by imposter websites because mobile browsers fail to meet security guidelines recommended by the World Wide Web Consortium (W3C) for browser safety. The absence of a graphical indicator in a mobile browser's URL field was the main concern outlined by the researchers in "Measuring SSL Indicators on Mobile Browsers: Extended Life, or End of the Road?" "We found vulnerabilities in all 10 of the mobile browsers we tested, which together account for more than 90 percent of the mobile browsers in use today in the United States," Patrick Traynor, assistant professor in Georgia Tech's School of Computer Science, told Phys.org. "The basic question we asked was, 'Does this browser provide enough information for even an information-security expert to determine security standing?' With all 10 of the leading browsers on the market today, the answer was no." Usually seen as a small lock icon in a desktop or laptop computer's browser, a graphical indicator typically signals to the user that they have a secure connection to an authentic website. Without such an indicator, it is easier for a user to fall victim to an Internet scam in which their personal or financial information is compromised. In practice, mobile browser users are three times more likely to access phishing sites than users of standard browsers, one of the researchers said.
This is a three-part series examining why File Integrity Monitoring is essential to the security of any business' IT. This first part examines the need for malware detection, addressing the inevitable flaws in anti-virus systems.

Malware Detection – How Effective is Anti-Virus?

When malware hits a system - most commonly a Windows operating system, but increasingly Linux and Solaris systems are coming under threat (especially with the renewed popularity of Apple workstations running Mac OS X) - it will need to be executed in some way in order to do its evil deeds. This means that some kind of system file - an executable, driver or DLL - has to be planted on the system. A Trojan will make sure that it gets executed without further user intervention by replacing a legitimate operating system or program file: when the program runs, or the OS performs one of its regular tasks, the Trojan is executed instead. On a user workstation, third-party applications such as internet browsers, PDF readers and mundane user packages like MS Word or Excel have been targeted as a vector for intermediate malware. When the document or spreadsheet is opened, the malware can exploit vulnerabilities in the application, enabling further malware to be downloaded and executed. Either way, there will always be a number of associated file changes: legitimate system files are replaced, or new system files are added to the system. If you are lucky, you won't be the first victim of this particular strain of malware and your AV system - provided it has been updated recently - will have the necessary signature definitions to identify and stop the malware. When this is not the case - and bear in mind that millions of new malware variants are introduced every month - your system will be compromised, usually without you knowing anything about it, while the malware quietly goes about its business, damaging systems or stealing your data.
FIM – Catching the Malware That Anti-Virus Systems Miss

That is, of course, unless you are using file integrity monitoring. Enterprise-level FIM provides an ideal host intrusion detection technology, reporting any unusual filesystem activity. Unusual is important, because many files will change frequently on a system, so it is crucial that the FIM system is intelligent enough to understand what regular operation looks like for your systems and only flag genuine security incidents. By extension, the same principles of integrity checking can be applied to other breach or host intrusion detection indicators, such as registry keys/values, Windows security policy, user accounts, service and process lists, installed software and updates and, of course, the Linux equivalents of these in terms of configuration file settings. However, exclusions and exceptions should be kept to a minimum, because FIM is at its best when it is operated with a 'zero tolerance' approach to changes. Malware is formulated with the objective that it will be effective, and this means it must both be successfully distributed and operate without detection. The challenge of distribution has seen much in the way of innovation. Tempting emails with malware bait in the form of pictures to be viewed, prizes to be won and gossip on celebrities have all been successful in spreading malware. Phishing emails provide a convincing reason to click and enter details or download forms, and specifically targeted spear-phishing emails have been responsible for duping even the most cybersecurity-savvy user. Whatever the vector used, once malware is welcomed into a system, it may then have the means to propagate within the network to other systems. So early detection is of paramount importance. And you simply cannot rely on your anti-virus system to be 100% effective, as we have already highlighted. FIM provides this 'zero tolerance' to filesystem changes.
There is no second-guessing of what may or may not be malware: every filesystem change is reported, making FIM effective in detecting any breach that touches the filesystem. FIM is ideal as a malware detection technology because it is not prone to the 'signature lag' or zero-day vulnerabilities that are the Achilles' heel of anti-virus systems. As with most security best practices, the advice is always that more is better, and operating anti-virus (even with its known flaws) in conjunction with FIM will give the best overall protection. AV is effective against legacy malware, and its automated protection will quarantine most threats before they do any damage. But when malware does evade the AV, as some strains always will, real-time FIM can provide a vital host intrusion detection safety net.
Definition: An algorithm to find the greatest common divisor, g, of two positive integers, a and b, together with coefficients, h and j, such that g = ha + jb. See also Euclid's algorithm.

Note: These coefficients are useful for computing modular multiplicative inverses. After [CLR90, page 811].

Cite this as: Paul E. Black, "extended Euclid's algorithm", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/extendEuclid.html
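The definition maps directly to a short recursive implementation. Here is a sketch in Python, keeping the entry's g = ha + jb notation; the `mod_inverse` helper illustrates the Note about modular multiplicative inverses.

```python
def extended_euclid(a, b):
    """Return (g, h, j) such that g = gcd(a, b) = h*a + j*b."""
    if b == 0:
        return a, 1, 0
    # gcd(a, b) = gcd(b, a mod b); back-substitute the coefficients.
    g, h, j = extended_euclid(b, a % b)
    return g, j, h - (a // b) * j

def mod_inverse(a, m):
    """Modular multiplicative inverse of a mod m (exists iff gcd(a, m) == 1)."""
    g, h, _ = extended_euclid(a, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return h % m
```

For example, `extended_euclid(240, 46)` yields coefficients satisfying `h*240 + j*46 == 2`, and `mod_inverse(3, 7)` returns 5, since 3 * 5 = 15 ≡ 1 (mod 7).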
Some time ago I was working on an IPv6 implementation, and in that period I wrote an article about NDP (you can read it here). After a while I received some comments that it was not written very well, so I reviewed a huge part of it. It looks like my English was far worse two years ago than I was really aware of 🙂 In the reviewing process I realised that NDP's use of Solicited-Node multicast addresses was not clearly explained. This is the follow-up article, which should explain how and why Solicited-Node multicast addresses are used in NDP. After all, this kind of multicast address is there to enable the IPv6 neighbor discovery function of NDP to work properly.

A Solicited-Node multicast address is an IPv6 multicast address used on the local L2 subnet by NDP, the Neighbor Discovery Protocol. NDP uses this kind of multicast address to discover the link-layer addresses of other nodes present on that subnet.

NDP replaces ARP

As we know, NDP in IPv6 networks replaced the ARP function from IPv4 networks. In the IPv4 world, ARP used broadcast to send this kind of discovery message and find out about neighbours' addresses on the subnet. With IPv6 and NDP, the use of broadcast is not really a good solution, so we use a special type of multicast group address that all nodes join to enable NDP communication.

Why is broadcast not a good solution? ARP sends its requests to the broadcast MAC address ff:ff:ff:ff:ff:ff. That kind of message is received by everyone on the L2 segment, although only one neighbour needs to respond with an answer. All the others still have to receive the message, process it and discard the request afterwards. This can cause network congestion if the amount of broadcast traffic becomes excessive at some point. And all this on an IPv4 network. Imagine if we implemented the same ARP mechanism in IPv6. An average IPv4 L2 segment is a subnet with, let's say, 192.168.1.0/24, which gives us 254 IPv4 addresses (254 hosts) on the L2 segment.
In IPv6, a "normal" L2 network segment will usually use a /64 subnet, which allows 2^64 addresses. Broadcast between so many possible devices would kill our network segment; that is the main reason broadcast does not even exist in the IPv6 protocol, and it is why NDP needs to use something better, like multicast, to reach all nodes on the segment. Just a quick reminder: there is no broadcast address type in IPv6. There are only:
- Unicast addresses. A packet is delivered to one host.
- Multicast addresses. A packet is delivered to multiple hosts.
- Anycast addresses. A packet is delivered to the nearest of multiple hosts with the same IPv6 address.

Solicited-Node multicast addresses are our answer. A Solicited-Node multicast address is generated from the last 24 bits of an IPv6 unicast (or anycast) address of an interface. The number of devices on an L2 segment that are subscribed to each Solicited-Node multicast address is very small, typically only one device. This enables us to reduce almost to zero the interruption of the "wrong" hosts by neighbour solicitation requests, compared to ARP in IPv4.

There is an issue here with the switches to which our IPv6 L2 segment devices are connected. Those switches need to be multicast aware and implement MLD snooping. MLD snooping enables a switch to send traffic that is addressed to a Solicited-Node multicast address only on the ports that lead to devices subscribed to that multicast group. If we do not think of MLD, Ethernet switches will tend to flood the multicast frames out of all switch ports, converting our nice multicast setup into a broadcast mess.

How a Solicited-Node multicast address is created

We take the last 24 bits of our interface's unicast or anycast address and append them to the prefix FF02::1:FF00:0/104. The interface's unicast or anycast address may be EUI-64 SLAAC generated or DHCPv6 configured.
NDP will do its thing and calculate the Solicited-Node multicast address for that interface, and the node joins that multicast group. The prefix takes the first 104 bits, so the last byte of the penultimate field (the 00 in FF00) and the final field come from the interface address; in other words, the last 24 bits of the multicast address begin right after the FF byte. Every address generated this way falls in the multicast range from FF02:0:0:0:0:1:FF00:0000 to FF02:0:0:0:0:1:FFFF:FFFF.

A host joins the Solicited-Node multicast group for each of its unicast and anycast addresses, on all of its interfaces, which is basically what enables normal NDP protocol function.

Let's say that we have one interface with the address fe80::2bb:fa:ae11:1152; the associated Solicited-Node multicast address is ff02::1:ff11:1152. So in this example our host must join the multicast group represented by this address.
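The derivation described above is easy to verify in code. A small sketch using Python's standard `ipaddress` module (the function name is my own): mask off the last 24 bits of the unicast address and OR them into the ff02::1:ff00:0/104 prefix.

```python
import ipaddress

SOLICITED_NODE_PREFIX = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(unicast):
    """Append the last 24 bits of a unicast/anycast IPv6 address
    to the ff02::1:ff00:0/104 prefix."""
    last24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    return ipaddress.IPv6Address(SOLICITED_NODE_PREFIX | last24)

print(solicited_node("fe80::2bb:fa:ae11:1152"))  # ff02::1:ff11:1152
```

Feeding in the article's example address fe80::2bb:fa:ae11:1152 reproduces ff02::1:ff11:1152, and any two addresses that share their last 24 bits map to the same group, which is why more than one device can occasionally subscribe to the same Solicited-Node address.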
The amount of information available at our fingertips in the form of digital data has swelled to levels never seen before. At the same time, real-time communication has increased exponentially. A whole class of applications has emerged that demands the transmission of high-speed data. Necessity may be the mother of invention: optical fiber networks have been invented and deployed to solve the problem of high-volume data exchange. And multimode fiber patch cables have grown to be the first choice among the different connectors linking the wired carriers with endpoint devices. What are the speed-hungry and volume-hungry data-centric applications that have created this demand? Some examples are the Internet, local-area multi-computer networks, phone networks and ATM networks. There are many more applications with an intense hunger for fast communication resources. For practical purposes, these communication channels need a high-speed network that can carry enormous volumes of data with minimal attenuation and extreme accuracy. Modern fiber optic cable technology provides exactly this sort of communication. Multimode patch cables are used to connect the data transmitted over the network to the devices it is meant to reach. These patches may also be used to connect the loose ends of two fiber optic cables. The patch cables have to be multimode when the requirement is to support multimode optical fibers. What is a multimode cable in fiber optics? A multimode fiber is one in which multiple modes of light can be carried simultaneously across the cable, so the network can carry numerous data packets at any instant of time. Multimode mainline network cables are usually short, since the target with these cables is to support high-speed, high-power multiuser systems in a localized area.
The patches are compatible with the network cables, enabling the system to remain aligned with the network objectives. Consequently the multimode patches support multiple user applications transferring data simultaneously, while retaining the qualities of standard single-mode patches, such as high network speed, low network hindrance and little external interference. It is also interesting to note that the endpoint devices these patch cables connect can be heterogeneous in nature: the aperture an endpoint device requires and the types of applications supported may be diverse. There exist several different kinds of multimode fiber patch cables you can use, based upon the requirements. Depending upon the exact reason why you have to install the patch on your fiber optic network, you will need to select your patch and go ahead with the required installation.
Meeting the Challenges
By Alex Fuss | Posted 2009-09-24

Nature is full of quantum computers, but we are just beginning to figure out how to control them. When we do, we'll be able to solve once-intractable problems.

The challenge for companies and organizations working today to build these machines, such as D-Wave Systems, the Princeton Center for Theoretical Physics and The Mitre Corp., is to capture, observe and control qubits in large enough quantities to be useful. D-Wave has a quantum computer it expects to commercialize in one or two years, according to the late Chris Hipp, who had been director of marketing. The company is working on a 128-qubit chip made of niobium, but it needs to be able to control at least 1,000 connected qubits to handle complex, multivariable, combinatorial optimization problems. Princeton and Mitre are taking a more controlled approach to harnessing quantum electrons. They expect to have a working version in their labs in a year or two. Some pressing real-world problem areas include:

Code Breaking: How can the Defense Intelligence Agency, CIA and FBI break the codes our enemies and criminals create with public key encryption methods? Using classical computers to derive the private key for a 50-digit number would take 3 million years, according to Julian Brown in his book The Quest for the Quantum Computer. Quantum computers could do it in minutes.

Package Delivery: How can FedEx or UPS maximize the number of packages they deliver, while minimizing the number of hangars, terminals and planes; the amount of fuel; the number of employees; and the time required to satisfy customers and beat the competition?
Air Traffic Control: How can the Federal Aviation Administration maximize the number of planes it can keep in the air and land safely, with the fewest number of people and the least amount of radar and communications equipment—given the constraints of specific planes, airports and runways, as well as weather and emergency conditions—while minimizing delays and fuel consumption? Project Scheduling: How can a large consulting firm use its staff resources efficiently, given specific project requirements, various consultants’ skills, vacation schedules and geographical constraints, while maximizing revenue by getting all projects in on time and under budget? Pattern Matching: How can Google, Yahoo, Microsoft or the Air Traffic Security Administration compare images taken at different times from different angles—from cameras with different resolutions—and determine if the images are of the same person or thing? This requires examining thousands of pixels per pair; mapping those pixels into features; and abstracting the features to compare them to a database of known objects. Today, this can only be done quickly and reliably with human help. The human brain is designed for rapid pattern matching: People recognize individuals even if they have gained weight, grown a beard or changed their hair color. Classical computers struggle to do this and cannot help with the challenges listed above because they require more computing power than a serial-oriented digital processor can muster. Quantum computers promise to change all that. Quantum computation is found all around us in nature, which is made up of subatomic particles that we are just beginning to figure out how to control. When we eventually do, we’ll be able to solve once-intractable problems. Alex Fuss is a managing partner at DigitalThis, a consulting firm specializing in leading-edge technologies and strategic implementations.