An Analysis of Shamir's Factoring Device
Robert D. Silverman
May 3, 1999

At a Eurocrypt rump session, Professor Adi Shamir of the Weizmann Institute announced the design for an unusual piece of hardware. This hardware, called "TWINKLE" (which stands for The Weizmann INstitute Key Locating Engine), is an electro-optical sieving device which will execute sieve-based factoring algorithms approximately two to three orders of magnitude faster than a conventional fast PC. The announcement only presented a rough design, and there are a number of practical difficulties involved with fabricating the device. It runs at a very high clock rate (10 GHz), must trigger LEDs at precise intervals of time, and uses wafer-scale technology. However, it is my opinion that the device is practical and could be built after some engineering effort is applied to it. Shamir estimates that the device can be fabricated (after the design process is complete) for about $5,000.

What is a sieve-based factoring algorithm?

A sieve-based algorithm attempts to construct a solution to the congruence A^2 = B^2 mod N, whence GCD(A-B, N) is a factor of N. It does so by attempting to factor many congruences of the form C = D mod N, where there is some special relation between C and D. One attempts to factor each of C and D over a fixed set of prime numbers called a factor base. This yields congruences of the form:

P1^a1 * P2^a2 * ... * Pk^ak = p1^b1 * p2^b2 * ... * pm^bm mod N,

where Pi are the primes in the factor base associated with C and pi are the primes in the factor base associated with D. These factored congruences are found by sieving all the primes in the factor base over a long sieve interval. One collects many congruences of this form (as many as there are primes in the two factor bases), then finds a set of these congruences which, when multiplied together, yields squares on both sides. This set is found by solving a set of linear equations mod 2. Thus, there are two parts to a sieve-based algorithm: (1) collecting the equations by sieving, and (2) solving them.
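The core identity — a congruence of squares yields a factor via a GCD — can be illustrated with a toy Python sketch. Fermat's difference-of-squares search below is only a stand-in for the sieving machinery described above, and the helper name is my own:

```python
from math import gcd, isqrt

def fermat_congruence(n):
    """Find A, B with A^2 congruent to B^2 (mod n) by Fermat's method
    (a toy stand-in for the sieving stage), then split n via gcd(A - B, n)."""
    a = isqrt(n) + 1
    while True:
        b2 = a * a - n          # so a^2 is congruent to b2 (mod n)
        b = isqrt(b2)
        if b * b == b2:         # b2 is a perfect square: a^2 = b^2 (mod n)
            return gcd(a - b, n)
        a += 1

print(fermat_congruence(8051))  # 83, since 8051 = 90^2 - 7^2 = 83 * 97
```

Real sieve-based algorithms do not search for one lucky square; they assemble it by multiplying together many partially factored congruences, which is why the linear-algebra step exists at all.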
The number of equations equals the sum of the sizes of the factor bases. A variation allows somewhat larger primes in the factorizations than those in the factor bases. This has the effect of greatly speeding the sieving process, but makes the number of equations one needs to solve much larger. One could choose not to use the larger primes, but then one needs a much larger factor base, once again resulting in a larger matrix.

It should be noted that sieve-based algorithms can also be used to solve discrete logarithm problems as well as to factor. This applies to discrete logs over finite fields, but not to elliptic curve discrete logs. Solving discrete logs takes about the same amount of time as factoring does for same-sized keys. However, the required space and time for the matrix are much larger for discrete logs: one must solve the system of equations modulo the order of the field, rather than mod 2.

What has been achieved so far with conventional hardware?

Recently, a group led by Peter Montgomery announced the factorization of RSA-140, a 465-bit number. The effort took about 200 computers, running in parallel, about 4 weeks to perform the sieving; it then took a large CRAY about 100 hours and 810 Mbytes of memory to solve the system of equations. The size of the factor bases used totaled about 1.5 million primes, resulting in a system of about 4.7 million equations that needed to be solved.

How long would RSA-140 take with TWINKLE?

Each device is capable of accommodating a factor base of about 200,000 primes and a sieve interval of about 100 million. RSA-140 required a factor base of about 1.5 million, and the sieve interval is adequate, so about 7 devices would be needed. One can use a somewhat smaller factor base, but a substantially smaller one would have the effect of greatly increasing the sieving time. This set of devices would be about 1000 times faster than a single conventional computer, so the sieving could be done in about 6 days with 7 devices.
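The "solving" stage means finding a subset of factored congruences whose exponent vectors sum to zero mod 2, so that the product is a square on both sides. A minimal sketch using integer bitmasks and plain Gaussian elimination over GF(2) — the function name and toy relations are illustrative, and real implementations use structured solvers such as block Lanczos rather than this:

```python
def find_dependency(rows):
    """Given relation exponent vectors mod 2 (as int bitmasks), return the
    indices of a subset whose XOR is zero, i.e. whose product is a square."""
    basis = {}                          # pivot bit -> (reduced row, index set)
    for i, r in enumerate(rows):
        combo = {i}                     # which original rows built this one
        while r:
            p = r.bit_length() - 1      # highest set bit = pivot column
            if p not in basis:
                basis[p] = (r, combo)   # new independent row
                break
            br, bc = basis[p]
            r ^= br                     # eliminate the pivot
            combo ^= bc                 # track indices (symmetric difference)
        else:
            return sorted(combo)        # row reduced to zero: dependency found
    return None

# exponent vectors mod 2 for four toy relations over a 3-prime factor base
print(find_dependency([0b101, 0b011, 0b110, 0b101]))  # [0, 1, 2]
```

Any set of k+1 relations over a k-prime factor base must contain such a dependency, which is why one collects slightly more relations than there are primes in the factor bases.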
The matrix would still take 4 days to solve, so the net effect would be to reduce the factorization time from about 33 days to 10 days, a factor of 3.3. This is an example of Amdahl's law, which says that the maximum speedup a parallel algorithm can achieve is limited by its serial parts. The time to solve the matrix becomes a bottleneck. Even though the matrix solution for RSA-140 required only a tiny fraction of the total CPU hours, it represented a fair fraction of the total ELAPSED time: about 15% of the elapsed time with conventional hardware for sieving, and about 40% of the elapsed time with TWINKLE devices. Note further that even if one could sieve infinitely fast, the speedup obtained would only be a factor of 8 over what was actually achieved.

How long would a 512-bit modulus take with TWINKLE?

A 512-bit modulus would take 6 to 7 times as long for the sieving as RSA-140 and require factor bases 2 to 3 times the size. The size of the matrix to be solved grows correspondingly, and the time to solve it grows by a factor of about 8. Thus, 15 to 20 devices could do the sieving in about 5-6 weeks; doubling the number of devices would cut the sieving time in half. The matrix would take another 4 weeks and about 2 Gbytes of memory to solve. The total time would be 9-10 weeks. With the same set of conventional hardware as was used for RSA-140, the sieving would take 6 to 7 months and the matrix-solving resources would remain the same. Please note that whereas with RSA-140 solving the matrix would take 40% of the elapsed time, with a 512-bit number it would take just a bit more. This problem will get worse as the size of the numbers being factored grows.

How well will TWINKLE scale to larger numbers?

A 768-bit number will take about 6000 times as long to sieve as a 512-bit number and will require a factor base which is about 80 times larger. The length of the sieve interval would also increase by a factor of about 80.
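The Amdahl's-law arithmetic for the RSA-140 timings quoted earlier is easy to check numerically. A rough sketch — the day counts are rounded from the figures above, and the function is my own shorthand:

```python
def total_days(sieve_days, matrix_days, sieve_speedup):
    """Elapsed time when only the (parallelizable) sieving is accelerated."""
    return sieve_days / sieve_speedup + matrix_days

# RSA-140: roughly 4 weeks (~28 days) of sieving plus ~4 days for the matrix
baseline = total_days(28, 4, 1)              # ~32 days end to end
twinkle = total_days(28, 4, 28 / 6)          # sieving compressed to ~6 days
print(baseline / twinkle)                    # ~3.2, the "factor of 3.3"
print(baseline / total_days(28, 4, 1e12))    # ~8: limit with instant sieving
```

The serial matrix phase caps the overall gain no matter how fast the sieving devices become, which is exactly the bottleneck argument made above.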
Thus, while about 1200 devices could accommodate the factor base, they would have to be redesigned to accommodate a much longer sieve interval. Such a set of machines would still take 6000 months to do the sieving. One can, of course, reduce this time by adding more hardware. The memory needed to hold the matrix would be about 64 Gbytes, and the matrix would take about 24,000 times as long to solve.

A 1024-bit number is the minimum size recommended today by a variety of standards (ANSI X9.31, X9.44, X9.30, X9.42). Such a number would take 6 to 7 million times as long to do the sieving as a 512-bit number. The size of the factor base would grow by a factor of about 2500, and the length of the sieve interval would also grow by about 2500. Thus, while about 45,000 devices could accommodate the factor base, they would again have to be redesigned to accommodate much longer sieve intervals. Such a set would still take 6 to 7 million months (over 500,000 years) to do the sieving. The memory required to hold the matrix would grow to 5 to 10 Terabytes, and the disk storage to hold all the factored relations would be in the Petabyte range. Solving the matrix would take "about" 65 million times as long as with RSA-512. These are rough estimates, of course, and can be off by an order of magnitude either way.

What are the prospects for using a smaller factor base?

The Number Field Sieve finds its successfully factored congruences by sieving over the norms of two sets of integers. These norms are represented by polynomials. As the algorithm progresses, the coefficients of the polynomials become larger, and the rate at which one finds successful congruences drops dramatically. Most of the successes come very early in the running of the algorithm. If one uses a sub-optimally sized factor base, the 'early' polynomials do not yield enough successes for the algorithm to succeed at all. One can try sieving more polynomials, and with a faster sieve device this can readily be done.
However, the yield rate can drop so dramatically that no additional amount of sieving can make up for the too-small factor base. The situation is different if one uses the Quadratic Sieve. For this algorithm all polynomials are 'equal', and one can use a sub-optimal factor base. However, for large numbers, QS is much less efficient than NFS. At 512 bits, QS is about 4 times slower than NFS. Thus, to do 512-bit numbers with these devices, QS should be the algorithm of choice rather than NFS. However, for 1024-bit numbers, QS is slower than NFS by a factor of about 4.5 million. That's a lot. And the factor base will still be too large to manage, even for QS.

What are the prospects for speeding the matrix solution?

Unlike the sieving phase, solving the matrix does not parallelize easily. While the sieving units can run independently, a parallel matrix solver would require the processors to communicate frequently, and both bandwidth and communication latency would become a bottleneck. One could try reducing the size of the factor bases, but too great a reduction would have the effect of vastly increasing the sieving time. Dealing with the problems of matrix storage and matrix solution time seems to require some completely new ideas.

Key Size Comparison

The table below gives, for different RSA key sizes, the amount of time required by the Number Field Sieve to break the key (expressed in total number of arithmetic operations), the size of the required factor base, the amount of memory per machine to do the sieving, and the final matrix memory. The time column is useful for comparison purposes. It would be difficult to give a meaningful elapsed time, since elapsed time depends on the number of machines available. Further, as the numbers grow, the devices would need to grow in size as well. RSA-140 (465 bits) will take 6 days with 7 devices, plus the time to solve the matrix.
This will require about 2.5 * 10^18 arithmetic operations in total. A 1024-bit key will be 52 million times harder in time, and about 7200 times harder in terms of space. The data for numbers up to 512 bits may be taken as accurate. The estimates for 768 bits and higher can easily be off by an order of magnitude.

| Keysize | Total Time | Factor Base | Sieve Memory | Matrix Memory |
|---------|------------|-------------|--------------|---------------|
| 428     | 5.5 * 10^17 | 600K | 24 Mbytes  | 128 Mbytes |
| 465     | 2.5 * 10^18 | 1.2M | 64 Mbytes  | 825 Mbytes |
| 512     | 1.7 * 10^19 | 3M   | 128 Mbytes | 2 Gbytes   |
| 768     | 1.1 * 10^23 | 240M | 10 Gbytes  | 160 Gbytes |
| 1024    | 1.3 * 10^26 | 7.5G | 256 Gbytes | 10 Tbytes  |

The idea presented by Dr. Shamir is a nice theoretical advance, but until it can be implemented and the matrix difficulties resolved, it will not be a threat to even 768-bit RSA keys, let alone 1024-bit keys.
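The growth rates quoted throughout — roughly 6000x the sieving work at 768 bits and 6 to 7 million times at 1024 bits, relative to 512 bits — are consistent with the standard heuristic NFS running-time estimate L_N[1/3, (64/9)^(1/3)] = exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)). A quick sanity-check sketch; the constants are heuristic and the ratios should only be read as order-of-magnitude figures:

```python
from math import exp, log

def nfs_cost(bits):
    """Heuristic NFS work factor for an RSA modulus of the given bit length."""
    ln_n = bits * log(2)
    return exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

for bits in (768, 1024):
    print(bits, nfs_cost(bits) / nfs_cost(512))
# 768-bit: roughly 6e3 times the 512-bit effort; 1024-bit: roughly 7e6 times
```

Lower-order factors ignored by this formula easily shift the answers by an order of magnitude, which is exactly the caveat the article itself attaches to its 768-bit and 1024-bit estimates.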
Hardly 15 years old, Wi-Fi technology has become the fuel for the hundreds of millions of connected devices we rely on every day. It’s allowed us to push the limits of wireless technology and innovate in unimagined ways. In fact, a new economic study released today finds that unlicensed spectrum generated $222 billion in value to the U.S. economy in 2013 and contributed $6.7 billion to U.S. GDP. Far from its days of being labeled a user of junk spectrum, Wi-Fi has gone from being a technology stepchild to a technology superstar. Today, more data is carried over Wi-Fi than any other platform – more than cellular and wireline combined.

But Wi-Fi and the unlicensed spectrum it relies on can’t keep up with demand. Because Wi-Fi relies on only a small amount of usable unlicensed spectrum, congestion is increasing. That limits the number of devices that can work on the same hotspot and how much data each device can receive. But there is a solution – allowing more sharing of spectrum to keep pace with our rapidly growing collection of data-hungry devices. Opening up more spectrum for Wi-Fi use would also jump-start the next generation of Wi-Fi technology, which could make gigabit speeds possible on Wi-Fi networks for the first time.

For Wi-Fi to keep marching forward, it desperately needs access to more spectrum. That is why NCTA is joining a new coalition called WiFiForward, made up of companies, organizations and public sector institutions calling on policymakers to solve this Wi-Fi spectrum crunch. Other members of the coalition include Google, Comcast, Microsoft, the American Libraries Association, the International Venue Managers Association, the Consumer Electronics Association, Arris and others. The benefits of freeing up unlicensed spectrum for Wi-Fi have already been proven.
But lightning-fast next-generation Wi-Fi will spur even more innovation and unleash a torrent of new technologies in fields as diverse as communications, entertainment, health care, agriculture, energy, transportation and the Internet of Things. As a nation, we have the opportunity to be global leaders in Wi-Fi speed, accessibility and technology. But we must act now to free up the unlicensed spectrum that’s needed to make that happen.
As a new year approaches we must prepare for new Internet security threats. Every year, new and innovative ways of attacking computer users emerge and continue to increase in volume and severity. To know where we are going, it is helpful to look at where we have been. Finding trends in Internet security has become a valuable, if not necessary, activity for companies developing software to protect computer users. Attacks have increased in sophistication and are often tailored to their specific victims. Trend tracking has shown that in 2008, the Web became a primary conduit for attack activity. According to Symantec’s Top Internet Security Trends of 2008, attackers have become more difficult to track as they have shifted away from mass distribution of a small family of threats to micro-distribution of large numbers of threats.

Spam and Phishing

This may be the most well-known form of computer breaching, and yet it is still the healthiest and fastest-growing of attacks. In 2004, Bill Gates predicted that spam would be resolved within two years. In 2008, we were seeing spam levels at 76 percent until the McColo incident in November 2008, at which time spam levels dropped 65 percent. The battle with spammers has turned into an all-out war, and spammers are showing no sign of surrendering.

Spammers take advantage of current events, such as the presidential election, the Chinese earthquake, the Beijing Olympic Games and the economy. They use these widely socialized issues as headlines to lure people into clicking on a link to malware or sending money for unrealistic charitable campaigns. Social networks are only feeding the beast by making it easier for spam attacks to propagate quickly through a victim’s social network.

Phishing walks hand in hand with spam, as it utilizes current events to make the bait more convincing. Another phishing tactic, particularly recognized over the last year, is to offer users a false sense of security by targeting .gov and .edu domains.
Although cybercriminals cannot register names under these top-level domains, they find ways to compromise the Web servers to gain control. Once control is gained, the problem becomes harder to fix because the domain cannot simply be deactivated. Lengthy measures must be taken to have the organization remove the compromised page from its website and secure its servers. The time it takes to make these changes allows the phishing page to remain active and hit more victims.

Fake and Misleading Applications

Fake security and utility programs, a.k.a. “scareware,” promise to secure or clean up a user’s home computer. The applications produce false and often misleading results, and hold the affected PC hostage to the program until the user pays to remedy the pretend threats. Even worse, such scareware can be used as a conduit through which attackers install other malicious software onto the victim’s machine.

In 2008, the Identity Theft Resource Center (ITRC) documented 548 breaches, exposing 30,430,988 records. The significance of this data is truly spotlighted by the realization that it took only nine months in 2008 to reach the 2007 total. What is most interesting about data breaches is that most are not malicious in nature. In many cases, inadvertent employee mishandling of sensitive information and insecure business processes are the most common ways that data is exposed. This can be attributed to the increase in mergers, acquisitions and layoffs resulting from the turbulent economic climate of 2008.

What to Watch for in 2009

Looking at the attack trends and techniques malware creators favored in 2008 helps us predict what to expect in 2009. Some of these new attacks are already starting to show up, and users need to be aware so that they can stay safe online in 2009. As we have learned, current events are utilized as headlines to bait victims. In 2009, it is easily predicted that the economic crisis will be the basis of new attacks.
We expect to see an increase in emails promising easy-to-get mortgages or work opportunities. Unfortunately, the people already being hit hard by the economy, who have lost jobs and had homes foreclosed, will also become the primary prey of these scams.

Advanced Web Threats

The number of available Web services is increasing, and browsers are continuing to converge on a uniform interpretation standard for scripting languages. Consequently, we expect the number of new Web-based threats to increase. User-created content can host a number of online threats, from browser exploits to distribution of malware/spyware and links to malicious websites.

The widespread use of mobile phones with access to the Web will make Web-based threats more lucrative. We have already seen attacks disguised as free application downloads and games targeting smartphones. We expect to see more truly malicious mobile attacks in 2009.

Social networks will enable highly targeted and personalized spam by phishing for username accounts and/or using social context as a way to increase the “success rate” of an online attack. In 2009, we expect an upgrade in spam to the use of proper names, segmented by demographic or market. The upgraded spam will resemble legitimate messages and special offers, created from personal information pulled from social networks, and may even appear to come from a social networking “friend.” Once a person is hit, the threat can easily spread through their social network. Enterprise IT organizations need to be on the alert for these types of attacks because today’s workforce often accesses these tools using corporate resources.

The battle against Internet security threats will continue to rage on, and tactics on both sides will become more sophisticated over time.
Although no one can be certain of what the future holds, we can look back and learn from our past to identify trends that can help make educated predictions for where future attacks may be heading.
Black Box Explains... Shielded vs. unshielded cable

The environment determines whether cable should be shielded or unshielded. Shielding is the sheath surrounding and protecting the cable wires from electromagnetic leakage and interference. Sources of this electromagnetic interference (EMI)—commonly referred to as noise—include elevator motors, fluorescent lights, generators, air conditioners, and photocopiers. To protect data in areas with high EMI, choose a shielded cable. Foil is the most basic cable shield, but a copper-braid shield provides more protection. Shielding also protects cables from rodent damage.

Use a foil-shielded cable in busy office or retail environments. For industrial environments, you might want to choose a copper-braid shield. For quiet office environments, choose unshielded cable.
When lives are on the line, medical professionals must be able to trust the security of the devices and systems they use. As the Internet of Things continues to grow exponentially, more and more medical devices will become connected, making them potentially vulnerable to cyberattack. As medical professionals come to rely more heavily on these ‘connected’ devices, a nightmare scenario emerges in which the unexpected breakdown of those devices can create life-threatening emergencies.

In 2015, the world witnessed the compromise of drug infusion pumps via cyberattacks on hospital networks, in which remote hackers altered the drugs being pumped into patients, putting them in potentially life-threatening situations. As malware of this type continues to spread, these types of medical attacks will only increase. Healthcare organizations need to arm themselves with cybersecurity products based on artificial intelligence and machine learning that can prevent malware from EVER causing harm to systems and patients.

In this white paper from Cylance, you will learn:
- How cybercriminals are exploiting ‘connected’ medical devices
- How network configuration and medical device software design flaws are putting patients in danger
- The growing malware threat to the medical industry
- How FDA regulations are not moving fast enough to protect patients
- What healthcare professionals can do to secure every endpoint in their facility
This is particularly true for makers and users of information technology, such as PCs and monitors. Data from diverse sources, from the U.K. government to Greenpeace, underscores why senior executives in the IT industry need to treat "green IT" as a necessity, not an option.

"Certainly in the U.K., there are a number of (energy) issues which are sort of escalating the cost of the fuel bills," said Edwards. "The electricity bill for an organization is going up year-on-year, then the IT manager has to find ways to close that gap between budget and rising costs."

To put the above figures into perspective, Luxembourg's electricity consumption in 2005 was around six terawatt hours. According to GEM, an independent source of information relating to green tariffs, a large office block housing 1,250 employees is likely to consume around 2.5 gigawatt hours annually. The Carbon Trust (a non-profit organization funded by the U.K. government) estimates that office equipment now accounts for around 15% of total energy use in the U.K., and that this figure is likely to rise to 30% by 2020 unless businesses act now.

"A quick call to one or two subscribers left no doubt that, while it's an issue that's been thought about in a broad sense in many organizations, particularly in the manufacturing sector, it's something that really isn't biting home yet with IT managers," said Edwards.

PC monitors and system units account for around two-thirds of office energy consumption, while photocopiers and printers consume around 25% of the total. Switching to flat-screen monitors, for example, not only saves electricity but also cuts the amount of power needed to cool the building—a big savings, as PCs themselves continue to generate more heat through faster and denser chipsets.
These figures are based on an office with 10 PCs, a photocopier, a fax machine and a laser printer, and with larger organizations typically having much higher ratios of PC equipment to printers and photocopiers, this is clearly where management needs to focus. Leaving computers, lights, and other office equipment switched on wastes an estimated £150 million worth of electricity in English offices every year. A typical desktop PC with a CRT monitor will consume around £240 worth of electricity over a four-year life span. A laptop, however, will consume only around a quarter of this amount, depending on its specification, so organizations should consider their equipment purchases carefully. A good place to start when considering IT purchases is to buy Energy Star-rated equipment that automatically switches off when not in use and can be turned back on by network admins remotely.

"So something has to be done to bring down the cost that every organization has to incur," said Edwards. "And, obviously, if you can do it through simple means, whether that's changing the configuration of desktop PCs so they don't use screen savers but instead use the power management function, then that's relatively low-hanging fruit."

The Energy Star logo has been around for many years, and so one would have thought that the IT industry has its "green house" well in order. However, the environmental group Greenpeace doesn't think so, and has published a report naming and shaming leading suppliers of mobile and PC equipment, based on their global policies and practices on eliminating harmful chemicals and on taking responsibility for their products once they have been discarded by consumers. Top of the PC manufacturers' list came Dell, with Lenovo being the worst of all companies ranked.

So look at the whole picture when making purchases, said Edwards. Going green can save a lot of money in the long run versus just buying the cheapest equipment up front.
It also has the added benefit of making your employees happier. "The necessity, from an organizational point-of-view, to be completely honest, is that of cost," said Edwards. "Clearly, that's the primary motive here but also companies employ people and, increasingly, there's more of a desire of 'doing our bit'."
HOUSTON, TX--(Marketwired - May 12, 2014) - Across the country, teenagers are heading to school long before their biological clocks are ready. While many high schools start at 8:00 am, some start as early as 7:00 am -- or even earlier. Dr. Gail Gross, Ph.D., Ed.D., is a human behavior, parenting, and education expert who supports the late start movement, urging high schools to push back start times by at least 30 minutes.

"If we want to give our high school students the best advantage for optimal learning and productivity, we need to address the biological differences in teenage sleep cycles," Dr. Gross says. "Allowing teenagers to get the sleep they need and start school just 30 minutes later can make all the difference."

While adults need 7 to 8 hours of sleep, teens -- whose bodies are still growing -- need between 8.5 and 9.5 hours each night. According to Dr. Gross, their circadian rhythm, which regulates the sleep hormone melatonin, directs them to a later bedtime and awakening. When teens get enough sleep, the benefits are bountiful:
- Sleep restores the brain and metabolism, while helping teens' memory, learning, and emotional balance.
- Sleep has been known to help stave off depression, erratic behavior, truancy, absenteeism, impaired cognitive function, obesity, and even car accidents.
- Sleep helps students have better focus, impulse control, homework results, improved attendance, concentration, sociability, and alertness during the day.

When teens do not get enough sleep, the problems can be very serious and affect almost every aspect of their lives.
- Teens lose the ability to focus and stay on task.
- Teens experience fatigue and mental lapses.
- Teens may experience symptoms of ADHD, including hyperactivity and attention deficit.

According to Dr. Gross, when teenagers are stressed through the loss of sleep, the amygdala enlarges, making them more emotional in decision-making, while the hippocampus, where learning and memory live, narrows.
As a result, not sleeping long enough can affect not only decision-making ability, but also creativity.

"We know late school starts work," Dr. Gross says. "When researchers studied the impact of a 25-minute delay in school start time on teens, students experienced significant reductions in daytime sleepiness, as well as improvements to mood and focus, and student caffeine intake dropped."

In just the last two years, high schools in California, Oklahoma, Georgia, and New York have adopted later start times. They join schools in Connecticut, North Carolina, Kentucky and Minnesota that have already implemented later start times. The Seattle school board is currently researching the issue, with advocates hopeful that a later start time will go into place by the 2016-2017 school year, if not sooner.

For more insight from Dr. Gail Gross, please visit her website at www.drgailgross.com. Dr. Gail Gross is a nationally recognized family and child expert, author, and educator who is frequently called upon by broadcast, print, and online media to offer expert insight on breaking news as well as topics involving family relationships, education, behavior, and development issues.
The Bombardier Dash 8 or Q-Series, previously known as the de Havilland Canada Dash 8 or DHC-8, is a series of twin-engined, medium-range, turboprop airliners. Introduced by de Havilland Canada in 1984, they are now produced by Bombardier Aerospace. Over 1,000 Dash 8s of all models have been built, with Bombardier forecasting a total production run of 1,192 aircraft of all variants through to 2016. The Dash 8 was developed from the de Havilland Canada Dash 7, which featured extreme short take-off and landing performance. With the Dash 8, DHC focused on improving cruise performance and lowering operational costs. The engine chosen was the Pratt & Whitney Canada PW100.

The aircraft has been delivered in four series. The Series 100 has a maximum capacity of 39, the Series 200 has the same capacity but offers more powerful engines, the Series 300 is a stretched, 50-seat version, and the Series 400 is further stretched to 78 passengers. Models delivered after 1997 have cabin noise suppression and are designated with the prefix "Q". Production of the Series 100 ceased in 2005, and the Q200 and Q300 in 2009. Bombardier is considering launching a stretched version of the Q400. (Wikipedia)

Bombardier | Date: 2015-06-19

A method of limiting belt slip in a continuously variable transmission (CVT) of a vehicle. The CVT is operatively connected to an engine. The method includes determining a slip speed of a belt of the CVT, determining the accumulated energy based on the slip speed of the belt and an engine torque produced by the engine, and controlling the engine torque in an intervention mode when the accumulated energy is greater than a threshold energy. Controlling the engine torque in the intervention mode comprises at least one of cycling the engine torque and limiting the engine torque. Systems and vehicles for performing the method are also disclosed.

Bombardier | Date: 2015-07-30

A method of operating a vehicle. The vehicle includes an engine.
The method includes determining whether the engine is to be operated at idle and determining a current mode of operation of the vehicle. The current mode of operation is any one of a plurality of modes of operation including at least a first mode and a second mode. The method includes, if the engine is to be operated at idle, operating the engine at a first idle speed if the current mode of operation of the vehicle is the first mode of operation; and operating the engine at a second idle speed if the current mode of operation of the vehicle is the second mode of operation. The first idle speed is greater than the second idle speed.

Bombardier | Date: 2015-07-23
A track system for a vehicle traveling on a ground. An endless track is connected to a frame by a plurality of wheels rotatably connected to the frame. A track contact area portion supports the vehicle on flat ground. A corner idler wheel is rotatably mounted on a shaft movably connected to the frame by an adjuster. A track portion extends from the corner idler wheel towards the contact area portion, defining an attack angle with respect to the flat ground. The corner idler wheel is thereby movable at least between a first position having a first track tension and a first attack angle, and a second position having a second track tension and a second attack angle. The first and the second track tensions are substantially the same. The second attack angle is larger than the first attack angle. Vehicles having the track system are also disclosed.
Bombardier | Date: 2015-08-28
A vehicle has a frame, an engine connected to the frame, an output shaft driven by the engine, a bracket resiliently mounted to the engine, a countershaft rotationally supported by the bracket, a driving pulley disposed on the output shaft and rotating therewith, a driven pulley disposed on the countershaft and rotating therewith, a drive belt looped around the driving and driven pulleys to transfer torque from the driving pulley to the driven pulley, the driving pulley, the driven pulley and the drive belt together forming a continuously variable transmission, and at least one ground engaging member operatively connected to the countershaft.

Bombardier | Date: 2015-02-20
A cross-over switch system for switching a monorail vehicle between two fixed guideway beams is provided. The cross-over switch system includes two articulated beams and a median beam. Each of the two articulated beams is constructed of a chain of pivotably interconnected segments and is pivotably connected to a different one of the two parallel fixed guideways. The median beam, which is pivotable at its center, is located at a median distance between the two parallel fixed guideway beams. When in a switching mode, the median beam is pivoted and each segment of each articulated beam is also pivoted so that each of the articulated beams abuts an opposed end of the median beam.
Video data-sharing library opens up big data for behavioral research - By Rutrell Yasin - Jul 22, 2013 A cadre of researchers, digital library and computer scientists are creating a web-based video library to encourage widespread data sharing in the behavioral sciences where video is commonly used but rarely shared. Databrary, the largest open-source video-data sharing project of its kind, will let researchers store and openly share videos and related information about their studies, according to the organization. Researchers and clinicians can use Databrary to browse, download and re-analyze video data. The goal of the National Science Foundation and National Institutes of Health-sponsored project is to accelerate the pace of scientific discovery and make more efficient use of public investments in scientific research. To better understand the complexities of behavioral development, scientists analyze an average of 12 hours of video per week. But researchers and clinicians seldom share this recorded data, experts said. "By creating tools for open video data sharing, we expect to increase scientific transparency, deepen insights and better exploit prior investments in developmental and behavioral research," said Karen Adolph, a member of the Databrary team and professor of psychology and neural science at New York University. Video data sharing will open up a whole new world of big data research to developmental scientists, noted Adolph, whose research examines the process of learning and development in infant motor-skill acquisition. "Because raw video data are so rich and complex, research teams will be able to access a wealth of data from studies around the world and pursue countless lines of inquiry into behavior and its development," Adolph said. "Researchers can build on each other's efforts to learn from prior examples, test competing hypotheses, and repurpose data in ways unimagined by the original researcher." 
Other leading members of the team include Rick Gilmore, associate professor of psychology at Penn State, and David Millman, director of Digital Library Technology Services at NYU. "Video can be combined with other data sources like brain imaging, eye movements and heart rate to give a more complete and integrated picture of the brain, body and behavior,” said Gilmore, who studies visual perception and brain development at Penn State. In addition to the web-based data library, the project also involves enhancing an existing, free, open-source software tool called Datavyu, that researchers can use to score, explore and analyze video recordings, team members said. Using the Datavyu tool, researchers can mine video recordings for new information and discover previously unrecognized patterns in behavior, the researchers said. Videos contain faces and voices, so only authorized researchers who have signed a written agreement with Databrary will have full access to the library. People depicted in recordings also must give written permission for their information to be shared. The Databrary project is part of a series of big data and data science initiatives underway at NYU. The university's Division of Libraries and Information Technology Services are providing infrastructure and curation support in a close partnership with the project. Databrary will be housed at NYU. Other project partners include NYU's Center for Data Science and Penn State's Social, Life, and Engineering Sciences Imaging Center. Databrary also provides a response to the growing federal mandate for the management and sharing of data from federally funded research, officials said. The NIH's support comes from the Eunice Kennedy Shriver National Institute of Child Health and Human Development. "I am very excited that NICHD is supporting this endeavor," said Lisa Freund, branch chief for the child development and behavior branch. 
"Databrary has tremendous potential for enhancing developmental behavioral science and facilitating discoveries that wouldn't be possible without such a sharing infrastructure." Rutrell Yasin is a freelance technology writer for GCN.
The 3D Torus architecture and the Eurotech approach

The ability of supercomputers to run jobs progressively faster meets the computational needs of both scientific research and an increasing number of industry sectors. Processor power is the centerpiece in determining the performance of an HPC system, but it is not the only factor. One of the key aspects of parallel computers is the communication network that interconnects the computing nodes. The network guarantees fast interactions between CPUs and allows the processors to cooperate: this is essential to solve complex computational problems quickly and efficiently. Together with speed, HPC systems are increasingly asked to be more available. Downtime can affect a high-performance machine quite badly. It appears clear that a reliably average machine with great uptime is better than a high-performance one with a low MTBF (mean time between failures): ultimately, the former would process more jobs in a week than the latter. One additional challenge with large systems is scalability, the ability to add nodes to a cluster without affecting performance and reliability, or affecting them as little as possible. Petascale and then exascale installations require, and will require, hundreds of thousands of cores to work together efficiently. It is also paramount for future machines to consume less energy, since the cost and availability of electrical power are becoming the most demanding challenge to overcome on the road to exascale computing.

3D torus connectivity

In a computer cluster, the way the nodes are connected together can provide decisive help in solving the issues mentioned above. Despite being available for quite a while, the torus architecture now has the potential to surge from niche applications to the mainstream. This is because, like never before, we now face severe challenges posed by a rising number of nodes.
The problem, before being one of performance, is one of topology and scalability. The more a system grows, the more switched fat-tree topologies show their limits in cost, maintainability, power consumption, reliability and, above all, scalability. Connecting nodes in a 3D torus configuration means that each node in a cluster is connected to the adjacent ones via short cabling. The signal is routed directly from one node to the other with no need for switches. 3D means that the communication takes place in 6 different "directions": X+, X-, Y+, Y-, Z+, Z-. In practical terms, each node can be connected to 6 other nodes: in this way, the graph of the connections resembles a three-dimensional matrix. Such a configuration allows nodes to be added to a system without degrading performance. Each new node joins as an extension of the grid, linked to it without extensive cabling or switching. While scaling linearly with little or no performance loss is strictly true only for problems that rely heavily on nearest-neighbor communication, it is also true that, by avoiding switches, hundreds of nodes can be added without causing clogged links or busy fat-tree switch leafs. This is without considering that adding a node to a large system involves much less work and potential trouble on a 3D torus network than on a switched fat-tree one. The pairwise connectivity between nearest-neighbor nodes of a 3D torus configuration helps reduce latency and the typical bottlenecks of switched networks. Because the connections between nodes are short and direct, link latency is very low: this positively affects machine performance, especially for local-pattern problems, which map effectively onto the matrix mentioned above. The switchless nature of the 3D torus facilitates fast communication between nodes. Switches are also potential points of failure, so decreasing their number should improve the operational functioning of a system.
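The six-direction neighbor scheme described above can be sketched in a few lines of code. This is a minimal illustration only (the function and variable names are hypothetical, not part of any Eurotech API): each node is a coordinate on a periodic three-dimensional lattice, and the modulo arithmetic implements the wrap-around links that close the torus.

```python
def torus_neighbors(node, dims):
    """Return the six nearest neighbors of `node` on a 3D torus.

    `node` is an (x, y, z) coordinate and `dims` is the lattice size
    (NX, NY, NZ). Periodic (wrap-around) boundaries give every node
    exactly six neighbors: X+, X-, Y+, Y-, Z+, Z-.
    """
    x, y, z = node
    nx, ny, nz = dims
    return [
        ((x + 1) % nx, y, z),  # X+
        ((x - 1) % nx, y, z),  # X-
        (x, (y + 1) % ny, z),  # Y+
        (x, (y - 1) % ny, z),  # Y-
        (x, y, (z + 1) % nz),  # Z+
        (x, y, (z - 1) % nz),  # Z-
    ]

# A node on the face of a 4x4x4 torus still has six neighbors:
# the wrap-around links it back to the opposite face.
print(torus_neighbors((3, 0, 2), (4, 4, 4)))
```

Note how a node on an "edge" of the lattice is no different from any other node; it is exactly this uniformity that lets the grid grow by attaching new nodes at the boundary.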
In other words, the 3D torus makes a system more agile and quicker to react to failures: if a connection or a node fails, the affected communication can be routed in many different directions. The inherent nature of the 3D torus is the connectivity of each node to its nearest neighbors, forming a three-dimensional lattice that guarantees multiple paths from one node to another. By eliminating costly and power-hungry external spine and leaf switches, as well as their accompanying rack chassis and cooling systems, torus architectures also reduce installation costs and energy consumption. When it comes to applications that can fully benefit from the 3D torus configuration, we touch on one of the caveats of this intelligent connection scheme. The 3D torus delivers its maximum performance on a subset of problems that is rather large but specific. These are local-pattern problems, which typically model systems whose behavior depends on adjacent systems. Typical examples are computer simulations of lattice QCD and fluid dynamics. More generally, many Monte Carlo simulations and embarrassingly parallel problems can exploit the full performance advantage of the 3D torus architecture, making the range of possible applications quite vast, especially within scientific research. Problems that require all-to-all communication between nodes are less able to exploit the full performance of the 3D torus interconnect. However, independently of the type of application and problem, the 3D torus still bears the massive advantage of scalability and serviceability, while also increasing system availability and reducing power consumption. For large systems, it may be so advantageous to adopt the 3D torus architecture that a perfect match between the topology and the problem, the match that would maximize computational performance, may well become secondary.
The Eurotech approach

It is rather interesting to analyze what Eurotech, a leading European computer manufacturer, has done with the 3D torus network of its supercomputer line Aurora. Eurotech wanted to leverage the benefits of the 3D torus for its high-end products, while at the same time leaving users the flexibility and freedom to run all the applications they need. Taking these diverging requirements into account, Eurotech and its scientific partners took an approach called Unified Network Architecture in designing the Aurora datacenter clusters. This fundamentally means that Aurora systems have three different networks working concurrently on the same machine: two fast independent networks (InfiniBand and the 3D torus) and a multi-level synchronization network. The coexistence of InfiniBand and the 3D torus provides flexibility of use: depending on the application, one or the other network can be utilized. The synchronization networks act at different levels, synchronizing the CPUs and de facto reducing or eliminating OS jitter, hence making the system more scalable. Torus topologies have traditionally been implemented with proprietary, costly application-specific integrated circuit (ASIC) technology. Eurotech chose to drive the torus with FPGAs, injecting more flexibility into the hardware, and to rely on both a GPL and a commercial distribution for the 3D torus software. The 3D torus network is managed by a network processor implemented in the FPGAs, which interfaces with the system hub through two x8 PCI Express Gen 3 connections, for a total internal bandwidth of 120 Gb/s. Each link in the torus architecture is physically implemented by two lines (main and redundant) that can be selected in software to configure the machine partitioning (full 3D torus or one of the many available 3D sub-tori). In this way, redundant channels allow system repartitioning on the fly.
The possibility of partitioning the system into sub-domains makes it possible to create system partitions that communicate on independent tori, effectively creating different execution domains. In addition, each subdomain can benefit from a dedicated synchronization network. As an example of partitioning, if a backplane with 16 boards is considered, the available topologies for partitioning the machine into smaller sub-machines with periodic boundaries (sub-tori) are:

- Half Unit: 2 x [1:2*NC] x [1:2*NR]
- Unit: 4 x [1:2*NC] x [1:2*NR]
- Double Unit: 8 x [1:2*NC] x [1:2*NR]; Rack: 8 x 2*NC x 2; Machine: 8 x 2*NC x 2*NR
- Chassis: 16 x [1:NC] x [1, 2, 2*NR]; Rack: 16 x NC x 2; Machine: 16 x NC x 2*NR

where NC is the number of chassis in a rack (8) and NR is the number of racks in a machine (from 1 to many hundreds).

Partitioning, FPGAs, redundant channels and synchronization networks are some of the unique characteristics that Eurotech wanted in the torus architecture, to create Intel-based clusters with the flavor of a special machine.
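To make the partition formulas concrete, a small sketch can evaluate them for given NC and NR. This is purely illustrative, assuming the 16-board backplane described above; the dictionary keys and function names are hypothetical, not Eurotech terminology.

```python
def partition_shape(kind, nc, nr):
    """Dimensions (X, Y, Z) of some of the sub-torus partitions listed
    above, where nc = chassis per rack and nr = racks per machine."""
    shapes = {
        "rack_8":     (8, 2 * nc, 2),       # Rack:    8 x 2*NC x 2
        "machine_8":  (8, 2 * nc, 2 * nr),  # Machine: 8 x 2*NC x 2*NR
        "rack_16":    (16, nc, 2),          # Rack:    16 x NC x 2
        "machine_16": (16, nc, 2 * nr),     # Machine: 16 x NC x 2*NR
    }
    return shapes[kind]

def node_count(shape):
    """Total nodes in a partition is just the product of its dimensions."""
    x, y, z = shape
    return x * y * z

# With NC = 8 chassis per rack and NR = 4 racks:
print(node_count(partition_shape("machine_16", 8, 4)))  # 16 * 8 * 8 = 1024
```

This also shows why the scheme scales so directly: growing the machine by one rack simply extends the Z dimension of the torus, with no change to the X and Y cabling.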
What Are the Business Benefits of SANs?

There's more than one way to implement storage area networks and network-attached storage.

By Richard Barker

The computing industry is about to go through another major change, catalyzed by high-bandwidth Fibre Channel and scalable switches and hubs. It comes under the umbrella terms of storage area networks (SAN) and the associated concept of intelligent network-attached storage (NAS) devices. In some ways, these technologies are simply natural evolutions of today's network systems. But they also provide an opportunity to run businesses and systems in a fundamentally different, storage-centric way. A SAN is a separate Fibre Channel-based network that connects storage devices to a heterogeneous set of servers on a many-to-many basis. A SAN also enables direct storage-to-storage interconnectivity, separating storage devices from servers and making storage generally available across the network. In its simplest form, a SAN provides server-to-storage access across fiber: a "SCSI on steroids" capability. However, in a more sophisticated form, a SAN enables disk or tape arrays to be accessed by two or more similar servers at high speeds across Fibre Channel (see Figure 1). This provides immediate performance and connectivity benefits and should eliminate redundant data. Other benefits include the opportunity to relocate backup, restore, file migration, and data replication processes from servers and local/wide area networks, and the ability to move data directly between storage devices across the SAN. This frees up server power for business applications and network capacity for users. These benefits are realized without changing existing applications, database management systems, or user connections to applications via existing local area networks. A SAN simply moves storage from servers to a separate network; applications get "local" data-access speed on the SAN, though the data may be many kilometers away.
A NAS device is a box that contains disk or tape arrays. It intelligently manages storage for performance, reliability, and space for one or more servers. A NAS device can be connected via SCSI or file-system interfaces to a SAN to provide storage for multiple homogeneous or heterogeneous servers. These self-managing boxes can also perform local disk-to-tape data transfer without going across the SAN. A NAS device offers heterogeneous access to two or more servers, and the ability to migrate data without affecting any other part of the system (see Figure 2). In a SAN architecture, availability can be dramatically improved in several ways (e.g., enabling one of several servers to take over for a failed server). RAID technology is inherent in a SAN structure, providing improved reliability and performance. And Fibre Channel and sophisticated switched fabric technology ensure no single point-of-failure on the network. Further, over time, users will be able to replace or repair any component of SAN during normal operation. Another way of implementing a SAN provides storage-centric connectivity via a LAN or a WAN to application servers (see Figure 3). Each application server connects to a pool of shared on-line, off-line, and intelligent storage devices via the fiber fabric. Other than data redundancy for resiliency purposes, duplicate data can be removed, saving disk space. Backup, hierarchical storage management (HSM), and replication can then take place within the storage pool and optional NAS devices. Further, should an application server fail, spare servers can take over processing. Therefore, the benefits of SAN and NAS technology include: - High-performance access to global data - Data access from heterogeneous servers - High levels of scalability - Ease of management and change - Cost reduction by removal of duplication - High availability So, what does this mean for your business? 
SANs may enable you to operate in a radically new way, especially if your company is geographically dispersed. Businesses that would potentially benefit include supermarket chains that need to switch pricing on an hourly basis, global car manufacturers that need to cut three months off production time, or defense or health organizations that require ready access to shared data in life-threatening situations. For these organizations, data is the life-blood of the business. Today, data may be stored in large data warehouses, but it is more likely to be in widely dispersed, isolated locations (e.g., data feeds from outside the company, Intranet web sites, multiple applications, and remote databases and email servers). Unmanaged data duplication results in inaccurate information, while isolated data makes it impossible to get a true business "picture" in a timely manner. A SAN provides access to isolated "islands" of data through high-speed network communications. Another business requirement is continuous data availability. The industry needs to trade in its "system availability" attitude for a "data and application availability" approach. The SAN model provides automatic data redundancy, automatic backups, and disaster recovery copies. User-level replication can add further resiliency. Clustered servers with shared access to data can dynamically switch users and applications among peer servers, dramatically improving availability. Companies also require consistent service, that is, the availability of all user applications as well as consistent performance. Here, automated monitoring and management tools are critical, enabling users to identify, isolate, and fix faults without human intervention and without affecting service. Given the scope of SANs, these tools must also be able to monitor trends and pre-empt failures or service problems.
A few years down the road, SANs and related cluster technology will have evolved to the point where all hardware will be able to be replaced on the fly and maintenance will be able to be performed on production systems. Operating system and application software will have to follow suit so that upgrades can be done without bringing systems down. SAN technology may also prove to be the catalyst for a new breed of global applications. Initial examples may include worldwide Intranet sites that have completely replicated copies of the entire corporate web site on each SAN "continent." These sites would enable subsidiaries or departments to update information locally on the Intranet; all replicas would be updated worldwide shortly thereafter. These applications will create security issues within and across SAN "continents," necessitating new security tools. SANs will take several years to mature. The key to success is the establishment of a "storage-centric" environment that is supported by a high-performance, low-latency fabric that provides users with highly available access to clusters of application servers with many-to-many connectivity to shared storage. Figure 1: SANs enable disk or tape arrays to be accessed by two or more similar servers at high speeds across Fibre Channel. Figure 2: A NAS device offers heterogeneous access to two or more servers, and the ability to do data migration without affecting other parts of the system. Figure 3: Backup, HSM, and replication can take place within the storage pool and optional NAS devices.
Loh M., Too J.L. and Falser S. (Center for Offshore Research and Engineering; Falser S. also with Royal Dutch Shell), and 3 more authors. Energy and Fuels | Year: 2015

In a previous study using a single wellbore production system, it was demonstrated that a combination of depressurization and wellbore heating is more efficient than depressurization alone, where the endothermic dissociation process rapidly consumes the specific heat of the formation, leading to a sharp decrease in the dissociation rate. This study extends the work on gas production and explores the feasibility of a novel dual wellbore production scheme, where heating and depressurization are conducted on separate wellbores. The drawback with combining heating and depressurization on a single wellbore is that the produced fluids flow in the opposite direction to the heat from the wellbore, and this forced convection may slow the dissociation process. Gas production tests are carried out using the dual wellbore system with different combinations of pressure and temperature at the depressurization and heating wellbores, respectively. The ensuing experimental results showed that both increased depressurization and heating can lead to optimized gas production. A production scheme with a higher depressurization compared to a lower depressurization at the same wellbore heating is generally more energy-efficient, while a higher wellbore temperature at the same depressurization resulted in more gas produced but no improvement in efficiency. Although a dual wellbore scheme has been an established practice in the petroleum industry, this is likely the first employed in hydrate gas production tests. © 2014 American Chemical Society.
Over the years much has been written about how users are the weakest link in security, and there are surely not many people who would disagree. Despite this, companies often undervalue the importance of educating personnel about the important security issues of the day. It is easy to see what the reasons might be; companies have spent considerable time and money deploying firewalls and other technical controls. This makes sense to IT staff and it is often what people expect IT to spend their time on. Users, however, are an integral part of the business and often require more tender loving care than we give them credit for. No one will dispute that the technical work is required, but unless the organisation provides security awareness training to users in an effort to help protect the organisation's information, the risks are high. The purpose of this article is to highlight the dangers of the growing phenomenon of social engineering, and to offer some practical advice for dealing with it. In a nutshell, social engineering is a method of gaining access privileges to an organization and its assets by querying personnel over communications media such as telephone, e-mail, chat, bulletin boards, face-to-face conversations and so on, from a fraudulent "privileged" position. The methodology employs a number of techniques to determine the level of 'security awareness' that exists in the organization under review. In fact, reformed computer criminal and security consultant Kevin Mitnick popularized the term social engineering, pointing out that it's much easier to trick someone into giving you his or her password for a system than to spend time hacking in. He claims it to be the single most effective method in his arsenal. I've worked on many projects over the years where we've attempted to gain access to a network and the data on it using social engineering techniques.
One of the more common tactics used involves calling end-users and impersonating IT staff and other, usually non-existent, companies. The percentage of usernames and passwords given away by staff always astonishes me; typically we have a success rate of 75% or more. This holds across private- and public-sector companies and medium to large organizations, and works equally well against high-end business managers who are likely to have remote access. When this is combined with scanning for publicly accessible services, it can prove a highly effective way to gain remote access to a system or network. External PPTP and SSL VPNs are prime examples of such services. But how much work does an attacker have to do to get usernames and passwords from end-users? In my experience, unless a company already has a security awareness program in place, a few hours on the Internet and a few phone calls is all that's needed. Before making calls, an attacker need only spend a couple of hours researching user names, phone numbers and addresses on the Internet. This is followed up with a few initial phone calls to reception to get the names of the IT manager, the number for IT support and maybe even some names of the IT support staff. All this information is ideal for some name-dropping in the phone calls to add an air of authenticity, especially if you are pretending to conduct an authorised third-party audit or to actually be from the IT support team. A few calls to users later and the chances are you will have some usernames and passwords, all without having to worry about technical things like password cracking or any other mathematically challenging work. Of course, unless the user has received some security awareness training, why wouldn't they give their username and password to someone who says they are working in IT support, especially if the IT manager (whose name could be dropped in the telephone call) has authorized this as part of a major fictitious incident?
In some companies it is common practice for the IT support team to ask users for their password to resolve support cases more quickly, which makes users even more likely to give away their password when asked for it. If all this seems unlikely to catch your well-trained users out, perhaps a more focused and targeted phishing attack may be more effective. This modus operandi is surprisingly effective and has the potential to harvest more usernames and passwords from even the most savvy of users. At first glance, the technicalities involved in setting up a phishing attack may appear a bit complicated, but for the technically adept it should not take more than a couple of hours. After harvesting a number of user names and email addresses from the Internet, the only task left is to send off some emails and wait for a bite! In a recent project for a client, within 24 hours 10% of the users emailed had supplied their usernames and passwords to our bogus website. Could you be sure you or your users would not do the same? Without security-aware users it is unlikely that this type of attack would even be noticed. The main indicators that the users could have picked up on were the fact that we used an HTTP (not HTTPS) site and that the survey website was hosted on an external server, with the link being in the form http://IPAddress/itsurvey.html. Worryingly, however, if a company has any cross-site scripting problems on its web server, it would be possible to use a link containing the real company web site address rather than just an IP address. If further justification were required that security awareness training should be implemented, it is recommended as part of many security standards, including ISO 27001 and the Payment Card Industry standard PCI. Both standards mandate that staff shall be aware of information security threats and issues and shall be equipped to support organizational security policy.
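The two red flags described above, a plain-HTTP link and a raw IP address in place of a hostname, can even be checked programmatically. The sketch below is illustrative only (the function name and example URL are hypothetical), but it shows the kind of simple heuristic a mail gateway or an educated user could apply to a suspicious link:

```python
import ipaddress
from urllib.parse import urlparse

def link_red_flags(url):
    """Return the phishing indicators present in `url`: an unencrypted
    HTTP scheme, and/or a raw IP address used as the host."""
    parsed = urlparse(url)
    flags = []
    if parsed.scheme == "http":
        flags.append("unencrypted HTTP")
    try:
        # ip_address() raises ValueError for ordinary domain names.
        ipaddress.ip_address(parsed.hostname or "")
        flags.append("raw IP address as host")
    except ValueError:
        pass
    return flags

# 203.0.113.5 is a documentation-reserved example address.
print(link_red_flags("http://203.0.113.5/itsurvey.html"))
```

Of course, as the article notes, a cross-site scripting flaw can let an attacker present a link with the genuine company hostname, so checks like this reduce risk but never replace user awareness.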
The bottom line is that by taking the time to run a short and simple end-user awareness program you will benefit from seeing remarkable changes in the behavior of end-users. A 30-minute seminar run once a year can inform and educate your personnel on the dangers lurking on the other end of the phone or in that friendly email. An educated, security savvy user is more of a friend and less of a liability!
Most new products begin life with a marketing pitch that extols the product’s virtues. A similarly optimistic property holds in user-centered design, where most books and classes take for granted that interface designers are out to help the user. Users themselves are assumed to be good-natured, upstanding citizens somewhere out of the Leave it to Beaver universe. In reality, however, the opposite is often true. Products have substantial flaws, technology designers seek ways to extract money from users, and many users twist well-intentioned technology in ways the designers never expected, often involving baser instincts. These realities should come as no surprise to security professionals, who are usually most effective when assuming the worst in people. One emerging technology that is sure to be abused is augmented reality. Augmented reality technologies overlay computer-generated data on a live view of the real world. Anticipated application domains include entertainment, travel, education, collaboration, and law enforcement, among numerous others. Augmented reality bears great promise, as exemplified by Google’s highly optimistic “Project Glass: One day…” video. In the video, a theoretical descendant of Google’s Project Glass helps the user navigate a city, communicate, learn the weather, and otherwise manage his day. A day after Google posted the video, YouTube user rebelliouspixels posted a parody video, “ADmented Reality,” that remixed Google’s Project Glass vision with Google Ads, offering a pragmatic, advertisement-laden view of future augmented reality. As we look to the future, this less optimistic view likely will be closer to the mark. It is important for the security community to start considering unintended, malicious, and evil applications now, before we see widespread adoption of augmented reality technologies.
In this article, we combine augmented reality with reasonable assumptions of technological advancement, business incentives, and human nature to present less optimistic, but probable, future augmented reality applications. Admittedly, some are dystopian. We end with suggestions for the security and usability communities to consider now — so that we may be better prepared for our future of augmented reality and the threats and opportunities it presents. We do not intend to propose science fiction, but instead consider technologies available today or likely to arrive in the next five to ten years. Unless otherwise stated, we assume the capabilities and overall popularity of today’s iPhone/iPad – always-on networking, high-resolution video cameras, microphones, audio, voice recognition, location awareness, the ability to run third-party applications, and processing support from back-end cloud services – but resident in a lightweight set of eyewear with an integrated heads-up display.

Learning from the past

As we consider potential misuse and risks associated with augmented reality, we can learn a great deal from past desktop applications and current iPhone and Android apps to gain insight into both human nature and technical possibilities. From this analysis we identify at least three primary threat categories. The first category is the simplest: current applications that are easily ported to future systems, with little to no augmentation. The next category includes hybrid threats that are likely to evolve due to enhanced capabilities provided by augmented reality. The final category, and the hardest to predict, is entirely new applications which have little similarity to current applications. These threats will lean heavily on new capabilities and have the potential to revolutionize misuse.
In particular, these applications will spring from widespread use, always-on sensing, high-speed network connectivity to cloud-based data sources, and, perhaps most importantly, the integration of an ever-present heads-up display that current cell phones and tablets lack. Regardless of which category new threats emerge from, we assume that human nature and its puerile and baser aspects will remain constant, acting as a driving force for the inception of numerous malicious or inappropriate applications. This section lists potential misuse applications for augmented reality. Of course, we do not mean to imply that Google or any other company would endorse or support these applications, but such applications will likely be in our augmented future nonetheless.

Persistent cyber bullying

In the world defined by Google Glasses, users are given unparalleled customizability of digital information overlaid on top of the physical environment. Through these glasses this information gains an anchor into the physical space and allows associations that other individuals can also view, share, vote on, and interact with just as they would via comments on YouTube, Facebook, or restaurant review sites. Persistent virtual tagging opens up the possibility of graffiti or digital art overlaid upon physical objects, but seen only through the glasses. However, hateful or hurtful information could just as easily be shared among groups (imagine what the local fraternity could come up with) or widely published to greater audiences just as it can today, but it gains an increasing degree of severity when labeling becomes a persistent part of physical interactions. Imagine comments like “Probably on her period” or “Her husband is cheating” being part of what appears above your head or in a friend’s glasses, without your knowledge. Such abuse isn’t limited to adult users. The propensity for middle and high school age youth to play games that embarrass others is something to be expected.
The bright future predicted by Google may be tainted by virtual “kick me” signs on the backs of others, floating behind them in the digital realm.

Lie detection and assisted lying

Augmented reality glasses likely will include lie detection applications that monitor people and look for common signs of deception. According to research by Frank Enos of Columbia University, the average person performs worse than chance at detecting lies based on speech patterns, while automated systems perform better than chance. Augmented reality can exploit this. The glasses could conduct voice stress analysis and detect micro-expressions in the target’s face, such as eye dilation or blushing. Micro-expressions are very fleeting, occurring in 1/15 of a second, beyond the capabilities of human perception. However, augmented reality systems could detect these fleeting expressions and help determine who is attempting to hide the truth. An implication is that people who use this application will become aware of most lies told to them. It could also create a market for applications that help a person lie. Gamblers, students, and everyday people will likely use augmented reality to gain an unfair advantage in games of chance or tests of skill. Gamblers could have augmented reality applications that count cards, assist in following the “money card” in Three Card Monte, or provide real-time odds assessments. Students could use future cheating applications to look at exam questions and immediately see the answers. [Image: Future augmented reality applications will likely assist cheating; in this notional example, the student sees the answers by simply looking at the test.] Theft and other related crimes may also be facilitated by augmented reality. For example, persistent tagging and change detection could be used to identify homes where the occupants are away on vacation. We anticipate augmented reality will perform at levels above human perception.
Applications could notice unlocked cars or windows and alert the potential criminal. When faced with a new type of security system, the application could suggest techniques to bypass the device, a perverted twist on workplace training. The Google Glass video depicted the user calling up a map to find a desired section of a book store. We anticipate similar applications that might provide escape routes and locations of surveillance cameras.

Law enforcement detection

We also anticipate other applications to support law-breaking activities. Today’s radar and laser detectors may feed data into drivers’ glasses, as may collaboratively generated data provided by other drivers about locations of traffic cameras and speed traps. Newer sensors, such as thermal imaging, may allow drivers to see police cars hidden in the bushes a mile down the road. License plate readers and other machine vision approaches will help unmask undercover police cars. Counter-law-enforcement applications will certainly move beyond just driving applications and may assist in recognizing undercover or off-duty police officers, or even people in witness protection programs. Front- and rear-looking cameras would allow users to see behind them, and collaborative or illicit sharing of video feeds would allow users to see around corners and behind walls. Average citizens may use their glasses to record encounters with police, both good and bad. Law enforcement variants of augmented reality may dramatically change the interaction between police officers and citizens. The civil liberties we enjoy today, such as freedom of speech and protection against self-incrimination, will certainly be affected by impending augmented reality technology. What might be relatively private today (such as our identity, current location, or recent activity) will be much more difficult to keep private in a world filled with devices like Google Glasses. A key enabler of future augmented reality systems is facial recognition.
Currently, facial recognition technology is in a developmental stage, deployed only at national borders or other areas of high security. Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, claims that current facial recognition technology is becoming more capable of recognizing frontal faces, but struggles with profile recognition. Current technology also has problems recognizing faces in poor lighting and at low resolution. We anticipate significant advances during the next decade. Law enforcement agencies, like the police department in Tampa, Florida, have tested facial recognition monitors in areas with higher crime rates, with limited success. The primary cause behind these failures has been the inability to capture a frontal, well-lit, high-resolution image of the subject. This obstacle blocking effective facial recognition would be quickly removed in a world where augmented reality glasses are common and facial images are constantly being captured in everyday interactions. While facial recognition via augmented reality (through glasses or mobile devices) might seem harmless at first glance, a deeper look into this new technology reveals important unintended consequences. For example, a new form of profiling may emerge, as a police officer wearing augmented reality glasses might recognize individuals with prior criminal records for which the subjects have already served their time. Without augmented reality, that police officer would likely never have recognized the offenders or known of their crimes. Of course augmented reality may be very beneficial to law enforcement activities, but it raises serious questions about due process, civil liberties, and privacy. The end result may be a chilling effect on the population as a whole, both guilty and innocent.

Dating and stalking

Augmented reality opens the floodgates to applications for dating and stalking.
Having a set of eyeglasses that records and posts your location on social networks means that everybody you know can see where you are. For example, a man sits down at a bar and looks at another woman through his glasses, and her Facebook or Google+ page pops up on his screen (since she did not know to limit her privacy settings). While augmented reality brings vastly new and exciting opportunities, the technology threatens to eliminate the classic way of meeting and getting to know people: by actually spending time with them. Consider an application that already exists, “Girls Around Me,” which uses data from social networking sites to display locations of nearby girls on a map. According to Nick Bilton of The New York Times, this application “definitely wins the prize for too creepy.” The evolution of such applications combined with augmented reality opens up numerous other possibilities. Perhaps the glasses will suggest pick-up lines based on a target’s interests, guess people’s ages, highlight single women (or married women), make people more attractive (virtual “beer goggles”), or provide “ratings” based on other users’ feedback. Lie detection applications will likely be in frequent use, and misuse. Expect continuous innovation in this domain.

We anticipate that augmented reality will be used to emulate or enhance drug use. History has taught us that recreational drugs will always be in demand, as will additional means of enhancement. Some may recall the combination of drugs with Pink Floyd laser light shows. Others may have experimented with Maker SHED’s Trip Glasses, which suggest users “Enjoy the hallucinations as you drift into deep meditation, ponder your inner world, and then come out after the 14-minute program feeling fabulous,” or the audio approaches suggested by Brad Smith’s DEFCON 18 “Weaponizing Lady GaGa” talk. Augmented reality will open up significant and sought-after possibilities.
Let’s face it, porn is a driving force behind Internet and technological growth, and we believe the same will hold true for augmented reality. Augmented reality will facilitate sexual activities in untold ways, including virtual sexual liaisons, both physical and virtual, local and at a distance. Advanced sensors may allow penetration of clothing or the overlay of exceptionally endowed features on individuals in the real world, perhaps without their knowledge. The advice frequently given in public speaking classes, “Imagine the audience naked,” takes on entirely new meaning in this era.

There are more than 300 million people in the United States alone and more than that number of mobile phones. Imagine if even one third of this group actively wore and used augmented reality glasses. That would mean 100 million always-on cameras and microphones wielded by adults, teenagers, and children, continually feeding data to cloud-based processors. Virtually no aspect of day-to-day life would be exempt from the all-seeing eye of ubiquitous and crowdsourced surveillance. Businesses will be incentivized to collect, retain, and mine these data flows to support business objectives, such as targeted advertising, and governments will covet and seek access to this data for security and law enforcement aims. The implications for the privacy of the individual citizen and the chilling effect on society as a whole could be profound.

People have long been concerned about the danger of billboards when driving, because they take drivers’ eyes off the road. Text messaging while driving is widely illegal because of the distraction it causes. Now consider augmented reality glasses with pop-up messages that appear while a person drives, walks across a busy intersection, or performs some other activity requiring their full attention. For anybody wearing the glasses, text messaging or advertising alerts and similar interruptions would be very distracting and dangerous.
You’ve likely seen, on many occasions, drivers attempting to use their cell phones and their resultant erratic driving. Augmented reality devices encourage such “multitasking” behavior at inappropriate times. The results will not be pretty. People today do stupid things (see the movie Jackass for textbook examples), and in the future, people will continue to do stupid things while wearing augmented reality glasses. One commenter on Google’s YouTube video, PriorityOfVengence1, suggested that someone might even commit suicide wearing Google Glasses. The context of this comment refers to the end of the video when the main character is on a roof video chatting with his girlfriend and says “Wanna see something cool?” PriorityOfVengence1’s comment received over sixty thumbs up in just three days. While some might laugh at the comment, it highlights a disturbing potential reality. What if people spiraling into depression began streaming their suicide attempts by way of their glasses? It is certainly possible — this and many other variations of augmented reality voyeurism should be anticipated. The focus of this article is on user applications that behave in accordance with the user’s wishes. However, if we expand our assumptions to allow for malicious software, options become even more interesting. With malicious software on the augmented reality device, we lose all trust in the “reality” that it presents. The possibilities are legion, so we will only suggest a few. The glasses could appear to be off, but are actually sharing a live video and audio feed. An oncoming car could be made to disappear while the user is crossing the street. False data could be projected over users’ heads, such as a spoofed facial recognition match from a sexual offender registry. For related malware research on today’s mobile technology see Percoco and Papathanasiou’s “This is not the droid you’re looking for…” from DEFCON 18 to begin envisioning additional possibilities. 
The era of ubiquitous augmented reality is rapidly approaching, and with it amazing potential and unprecedented risk. The baser side of human nature is unlikely to change, nor are the profit-oriented incentives of industry. Expect the wondrous, the compelling, and the creepy. We will see all three. However, we shouldn’t have to abdicate our citizenship in the 21st century and live in a cabin in Montana to avoid the risks augmented reality poses. As security professionals we must go into this era with eyes wide open, take the time to understand the technology our tribe is building, and start considering the implications for our personal and professional lives before augmented reality is fully upon us. To live in the 21st century today, online access, a social networking presence, and instant connectivity are near necessities. The time may come when always-on augmented reality systems such as Google Glasses are a necessity to function in society; before that time, however, we must get ahead of the coming problems. The first few kids who walk into their SAT exams wearing augmented reality glasses and literally see the answers are going to open Pandora’s box.
Report Shows That Headsets Reduce Exposure to Cell Phone Radiation
by Charlie Schick

A recent study by a UK consumers' association reported that headsets tripled the specific absorption rate (SAR) of radiation in the head during cell phone use. The conclusion was that the wire of the headset acted as an antenna and channeled unshielded radiation to the head. Naturally, the UK Department of Trade and Industry was concerned, and hired a company (SARtest) that regularly tests for SAR. SARtest repeated the consumer association study and expanded it to a wider range of headsets and phones. SARtest found that headsets not only do not increase SAR levels in the head, they can actually reduce emitted radiation by 65 to 70%. The study further revealed that what caused some of the "worst-case" indications of high SAR levels was merely the improper arrangement of cell phone antennas and headset cords—looping the headset cord around the antenna, for example. The study also revealed that most of the radiation is emitted toward the back of the phone. Therefore, the head is already well protected, especially with the newer phones with internal antennas. Using a headset, with the phone dialpad facing the body, protects the head even more. Of course, the directionality of the radiation does bring into question the amount of exposure to the hand. I've been taught that the hand can usually take more radiation than many other parts of the body, mostly due to tissue types. SARtest suggests further study of SAR involving other parts of the body in addition to the head.

A summary of the findings:
- With normal use, headsets offer substantial reductions in SAR compared with conventional cell phone use, where the phone, rather than a headset, is held against the head.
- Certain configurations have been found to cause increased SAR levels in the head when using headsets with cell phones.
But such configurations involved only low SAR levels, and appeared in the area of the cheek rather than in the ear or near the brain. These conditions are considered to be highly unlikely and not typical of normal use.
- If a cell phone is used with a headset, with the phone held against the body—in a pocket, for example—placing the dialpad toward the body further reduces exposure.

I read the SARtest study, which suggests that headsets reduce exposure to cellular phone radiation, very closely. It contained interesting photographs depicting test setups and graphs of radiation intensities. Having had years of experience reading research papers, I found this study to be well designed, and the results and conclusions seem valid. I am still not convinced as to whether the levels of SAR from cellular phones are actually harmful. We are surrounded by electromagnetic radiation from many sources, such as basic electrical wiring, computer monitors, blow dryers, ordinary telephones, and virtually any place where electrical current is flowing. As far as I can tell, this study does not state whether any of these SAR levels are harmful. Nonetheless, it does reinforce my recommendation for using headsets with cell phones. Not only do headsets free your hands for other tasks—including driving your car—they reduce the amount of radiation to your head. Hello Direct offers cellular phone headsets. It's your call.
Upon first glance, a utility meter might seem like the furthest thing from a security threat that you could imagine. After all, what harm could come from a device that measures the amount of electricity or gas your building consumes? The reality, however, is that in today's ultra-connected world, this kind of naive thinking could lead to a serious disaster. That's because utility meters, like the majority of other devices currently on the market, are ultimately linked to a much larger global network. Unfortunately, many of these devices are rife with security loopholes that can be exploited by criminals and used to facilitate much larger attacks on utility companies, government, healthcare systems and other critical infrastructure. While a single device such as a utility meter, machine or access point might not seem like much of a security concern, a criminal who illegally controls one could gain a foot in the door to a much larger operation and could compromise critical infrastructure systems, such as a power grid, ultimately leading to a loss of life or cascading system failures. One type of security measure that every organization should be leveraging as protection against network-device-related espionage is the public key certificate, more commonly referred to as a digital certificate. Not easily cloned, a digital certificate is a strong identity that uniquely identifies a device. Certificates help protect the identities of computers, machines or devices that interact with critical infrastructure, cloud-based services, mobile platforms and network infrastructure, preventing a third party from using a spoofed identity to manipulate and gain access to a network. Digital certificates are issued by an independent third-party certification authority (CA) for the purpose of providing an independent source of public key authentication.
For more information on how you can leverage the protection of digital certificates in your enterprise, please visit www.entrust.com/enterprise.
Researchers at Texas A&M University say they have a new method for finding domain-fluxing botnets, which evade detection by constantly alternating domain names. Dr. Narasimha Reddy, who works in the university's Department of Electrical and Computer Engineering, collaborated with student Sandeep Yadav and Ashwath Reddy, as well as with Supranamaya "Soups" Ranjan of Narus Inc., to develop the new method. It can be used to detect botnets like Conficker, Kraken and Torpig, which use so-called DNS domain-fluxing for their command and control infrastructure. Domain-fluxing bots generate random domain names; a bot queries a series of domain names, but the domain owner registers just one. As an example, the research points to Conficker-A, which generated 250 domains every three hours. In order to make it harder for a security vendor to pre-register the domain names, the next version, Conficker-C, increased the number of randomly generated domain names per bot to 50,000. The research also finds that Torpig bots "employ an interesting trick where the seed for the random string generator is based on one of the most popular trending topics in Twitter." Kraken, according to the report, employs a much more sophisticated random word generator and constructs English-sounding words with properly matched vowels and consonants. The randomly generated word is combined with a suffix chosen randomly from a pool of common English noun, verb, adjective and adverb suffixes, said researchers. Current detection methods require botnet researchers to reverse-engineer the bot malware and figure out the domains that are generated on a regular basis in order to get to the C&C. Security vendors have to pre-register all the domains that a bot queries every day, even before the botnet owner registers them. It's a time-intensive process, researchers argue in their report.
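The scheme described above can be illustrated with a toy sketch (this is not Conficker's actual algorithm; the function name and parameters are invented for illustration). Because every bot seeds its pseudorandom generator from the same date, each infected machine independently derives the same candidate list, and the botmaster need register only one domain from it in advance:

```python
import datetime
import random
import string

def candidate_domains(date, count=250, length=10, tld=".com"):
    """Toy domain-generation algorithm (DGA): bots that seed the PRNG
    with the same date compute identical candidate domain lists."""
    rng = random.Random(date.toordinal())  # shared, date-derived seed
    return ["".join(rng.choice(string.ascii_lowercase) for _ in range(length)) + tld
            for _ in range(count)]

today = datetime.date(2017, 1, 16)
domains = candidate_domains(today)
# Any two "bots" computing the list for the same date agree exactly,
# while the next day's list is entirely different.
assert domains == candidate_domains(today)
assert domains != candidate_domains(datetime.date(2017, 1, 17))
```

This is also why defenders who reverse-engineer the generator can pre-compute tomorrow's candidate domains, at the cost of registering or monitoring all of them.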
Texas A&M officials say Reddy's method looks at the pattern and distribution of alphabetic characters in a domain name to determine whether it's malicious or legitimate. This allows them to spot botnets' algorithmically generated domain names. "Our method analyzes only DNS traffic and hence is easily scalable to large networks," said Reddy. "It can detect previously unknown botnets by analyzing a small fraction of the network traffic." Botnets using both IP fast-flux and domain fast-flux can also be detected by the proposed technique, according to Reddy. IP fast-flux is a round-robin method where malicious websites are constantly rotated across several IP addresses, changing their DNS records to prevent their discovery by researchers, ISPs or law enforcement. Using the method, Reddy's team discovered two new botnets. One generates 57-character random names, and the other generates names by concatenating two dictionary words. CERT, a nationwide network security coordination lab based at Carnegie Mellon University, is building a tool based on Reddy's technique and plans to distribute it for public use.
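The article doesn't give the exact statistic used, so the following is only a rough sketch in the same spirit, not the published method: a hypothetical single-name heuristic that measures how far a name's letter distribution diverges (KL divergence) from typical English letter frequencies. Algorithmically generated names like "xkqzjvwqpt" tend to score higher than dictionary-like names:

```python
import math
from collections import Counter

# Approximate relative letter frequencies in English text (percent).
ENGLISH = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
           's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
           'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
           'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
           'q': 0.1, 'z': 0.07}
TOTAL = sum(ENGLISH.values())

def kl_score(label):
    """KL divergence from the label's observed letter distribution to the
    English baseline; higher values suggest a machine-generated name."""
    letters = [c for c in label.lower() if c.isalpha()]
    counts = Counter(letters)
    n = len(letters)
    score = 0.0
    for c, k in counts.items():
        p = k / n                 # observed frequency within the label
        q = ENGLISH[c] / TOTAL    # English baseline frequency
        score += p * math.log(p / q)
    return score

# A dictionary-word label should score lower than a random-looking one.
print(kl_score("facebook") < kl_score("xkqzjvwqpt"))
```

The actual detector operates on groups of names seen in DNS traffic rather than on single labels, which is what makes it robust at network scale.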
In an effort to preserve The Alamo, one of the state’s most popular tourist attractions -- and one vulnerable to aging and erosion -- researchers are looking to get a better understanding of how the structure is being affected by erosion, heat and cold, and are also building a more thorough record of the historical site using 3-D laser scanning and photogrammetry. The preservation of The Alamo is important for historical reasons, but doing so is also one of the newly appointed duties of the Texas General Land Office, said Communications Director Mark Loeffler. “It’s the crossroads of Texas history, there’s no doubt,” he said. “With over 300 years of history, dating back to the Spanish Colonial period, it’s played a part not only in Texas history but the history of this region, the American Southwest, unlike almost any other place, so it’s clearly worth saving.” Students and professors at Texas A&M University began work on the project at the end of 2012, and they’re now looking to complete the data collection phase and begin analyzing what was collected. Students from Texas A&M University at Kingsville, the University of Texas and the University of Texas at San Antonio are also involved in the project. “We were basically taking records to have accurate information about the state of the surface of the building as it is now,” said Carolina Manrique, a PhD student at Texas A&M University’s College of Architecture. Another group, she said, is looking at documents like drawings, plans and texts to create yet another resource that the site’s conservator will be able to draw from. By combining the information gathered from direct data collection and academic research, Manrique said they will be able to “triangulate” the most accurate information possible about the site.
By using 3-D laser scanning and photogrammetry, the team was able to collect high-resolution models and images that a conservator will be able to use and zoom in on to answer specific questions about the structure. And by creating 2-D and 3-D models of the building as it has existed at various points in history, Manrique said researchers and conservators will have a rich set of data they can draw from. The team is creating models of the building in 1836, the year the historic battle happened; 1885, the year San Antonio became the site’s custodian; 1961, the year the Historic American Buildings Survey created detailed drawings of the site; and today. In addition to aiding the building’s conservation, Manrique said all this information could unlock some new layers of history that may have otherwise been lost. “This is all about the history of the place,” she said, adding that by making this data more accessible, more people will be able to participate in the region’s preservation, historical and cultural efforts. The project received funding, which has been available since mid-2012, through the Ewing Halsell Foundation. And it took the leadership of Texas General Land Office to make the preservation effort a priority, Loeffler said. The preservation of the site, which attracts more than 2.5 million tourists each year, had been going well, but using new technologies and techniques to continue the effort was seen as a prudent decision, he said. “There’s always more that can be done, and that’s why the land office worked to try to make this funding available to pursue these projects."
NASA is today celebrating one of its most successful space programs ever: Voyager. Voyager 2 launched on Aug. 20, 1977, and Voyager 1 launched on Sept. 5, 1977, and between them they have explored all the planets of the outer solar system: Jupiter, Saturn, Uranus and Neptune; 48 of their moons; and the unique system of rings and magnetic fields those planets possess. And the craft continue to run smoothly and send back information from distances more than three times farther away than Pluto. NASA notes that even though most of the launch vehicle's 700-ton weight is due to rocket fuel, Voyager 2's travel distance of 4.4 billion miles from launch to Neptune results in a fuel economy of about 30,000 MPG. As Voyager 2 zips out of the solar system, this economy will get better. Some interesting facts about the Voyagers from NASA:
· Voyager 1 is the most distant human-made object, at about 9.7 billion miles from the sun. Voyager 2 is about 7.8 billion miles from the sun.
· Each spacecraft carries five fully functioning science instruments that study the solar wind, energetic particles, magnetic fields and radio waves as they cruise through this unexplored region of deep space. The spacecraft are too far from the sun to use solar power. They run on less than 300 watts, the amount of power needed to light up a bright light bulb. Their long-lived radioisotope thermoelectric generators provide the power.
· The Voyagers call home via NASA's Deep Space Network, a system of antennas around the world. The spacecraft are so distant that commands from Earth, traveling at light speed, take 14 hours one-way to reach Voyager 1 and 12 hours to reach Voyager 2. Each Voyager logs approximately 1 million miles per day. The antennas must capture Voyager information from a signal so weak that the power striking the antenna is only 1 part in 10 quadrillion. A modern-day electronic digital watch operates at a power level 20 billion times greater than this feeble level.
· Both Voyagers carry a greeting to any form of life, should that be encountered. The message is carried by a phonograph record -- a 12-inch gold-plated copper disk containing sounds and images selected to portray the diversity of life and culture on Earth. The contents of the record were selected for NASA by a committee chaired by Carl Sagan of Cornell University. Dr. Sagan and his associates assembled 115 images and a variety of natural sounds. To this they added musical selections from different cultures and eras, and spoken greetings from Earth-people in fifty-five languages.
· The total cost of the Voyager mission from May 1972 through the arrival at Neptune is $865 million. At first, this may sound very expensive, but the fantastic returns are a bargain when we place the costs in the proper perspective. It is important to realize that, on a per-capita basis, this is only 20 cents per U.S. resident per year, or roughly half the cost of one candy bar each year since project inception. The entire cost of Voyager is a fraction of the daily interest on the U.S. national debt.
· A total of 11,000 work-years will have been devoted to the Voyager project through the Neptune encounter. This is equivalent to one-third of the effort estimated to have gone into building the Great Pyramid at Giza for King Cheops.
· Each Voyager spacecraft comprises 65,000 individual parts. Many of these parts have a large number of "equivalent" smaller parts such as transistors. One computer memory alone contains over one million equivalent electronic parts, with each spacecraft containing some five million equivalent parts. Since a color TV set contains about 2,500 equivalent parts, each Voyager has the equivalent electronic circuit complexity of some 2,000 color TV sets.
· Like the HAL computer aboard the ship Discovery from the famous science fiction story 2001: A Space Odyssey, each Voyager is equipped with computer programming for autonomous fault protection.
The Voyager system is one of the most sophisticated ever designed for a deep-space probe. There are seven top-level fault protection routines, each capable of covering a multitude of possible failures. The spacecraft can place itself in a safe state in a matter of only seconds or minutes, an ability that is critical for its survival when round-trip communication times with Earth stretch to several hours as the spacecraft journeys to the remote outer solar system.
· In December 2004, Voyager 1 began crossing the solar system's final frontier. Called the heliosheath, this turbulent area, approximately 8.7 billion miles from the sun, is where the solar wind slows as it crashes into the thin gas that fills the space between stars. Voyager 2 could reach this boundary later this year.
· Barring any serious spacecraft subsystem failures, the Voyagers may survive until the early twenty-first century (about 2020), when diminishing power and hydrazine levels will prevent further operation.
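The signal-delay figures in the Deep Space Network item above follow directly from the quoted distances: divide miles by the speed of light, about 186,282 miles per second. A quick sanity check, using the distances as given in the article:

```python
MILES_PER_SECOND = 186_282  # speed of light in vacuum, miles/s

def one_way_delay_hours(miles):
    """One-way light-time for a radio signal over the given distance."""
    return miles / MILES_PER_SECOND / 3600

print(round(one_way_delay_hours(9.7e9), 1))  # Voyager 1: ~14.5 hours
print(round(one_way_delay_hours(7.8e9), 1))  # Voyager 2: ~11.6 hours
```

Both results agree with the "14 hours" and "12 hours" NASA quotes once rounded.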
It’s September, which means back-to-school season is here again – and what better way to mark the occasion than with a fun look at the many ways video conferencing is extending the reach of the classroom and improving our education system? We recently published a guide titled Six Reasons Why Video Conferencing Is Essential for Education, which gives a more in-depth rundown on the many ways in which video conferencing is vital for the modern school or university. Let’s talk about the top three ways video conferencing is changing the shape of education in the 21st Century!
- Video conferencing enables distance learning. Thanks to government investment programs and our grants program, more and more people are becoming familiar with the huge potential video conferencing represents in the field of distance learning. From children in rural areas finally having access to the same quality schooling as their urban and suburban peers, to college students able to attend universities a thousand miles away while still getting the same experience as their colleagues in the classroom, video conferencing is bridging distances and building opportunities above and beyond geographic constraints.
- Video conferencing enhances curriculum. Imagine going to school in Iowa and being transported to the African Serengeti to learn about wildlife, the Smithsonian Museum in Washington, DC to learn about our nation’s history, or even the International Space Station to discover far-off stars, all without leaving the classroom – and not in the passive way of a documentary film, but in a brand-new way that allows you to experience those places as though you were actually there. The interactive qualities of high-definition video conferencing make for some truly incredible opportunities, opportunities that cater to learning styles that have historically been underserved by traditional education methods.
- Video conferencing fosters global collaboration.
The world is more global than ever, and it’s never been more important for our young people to develop an appreciation for different cultures, different nations, and different peoples. Video conferencing provides a way for students to connect with people all over the world, face-to-face and heart-to-heart, better preparing them to be citizens and stewards of the world. The educational possibilities created by video conferencing are virtually limitless, and we’ve only just begun to appreciate how true that is. These bullet points are just three of the ways that video conferencing is changing the world of education. Want to learn more about this brave new world? Then be sure to check out our guide, Six Reasons Why Video Conferencing Is Essential for Education, today!
Introduction to the NEBOSH Online Oil and Gas Course
- 15th April 2016
- Posted by: Juan van Niekerk
- Category: Health and Safety
In 2010 the Deepwater Horizon oil spill occurred in the Gulf of Mexico, affecting 68,000 square miles of ocean. The main factors attributed to this disaster were:
- The use of inferior cement
- The failure of valves that were to stop oil from flowing to the surface
- A misinterpreted pressure test
- A faulty alarm that was meant to warn of gas leakages
- Systems that were meant to automatically shut down in case of a gas blowout had faulty switches and flat batteries.
These are all factors that could have been avoided, had the correct health and safety measures been in place and maintained. The spill resulted in 11 lost lives, catastrophic environmental damage and $42.2 billion in civil and criminal settlements. The oil and gas industry is considered to be one of the more hazardous fields to work in and, therefore, health and safety measures need to be followed to the letter as they are very closely scrutinised. A career in oil and gas can be extremely lucrative and, due to the hazards and risks involved, oil and gas safety professionals are highly regarded. The NEBOSH (National Examination Board in Occupational Safety and Health) online Oil and Gas course is aimed at those who are responsible for the health and safety aspects of the workplace in the oil and gas industry. It targets those who have experience in the oil and gas field and, therefore, is more technically challenging than other NEBOSH courses of the same calibre. Upon completion of the NEBOSH online Oil and Gas course, you will have gained the skills and knowledge necessary to design and implement effective health and safety measures relating to the oil and gas industry.
Many of the health and safety aspects covered in the course are unique to the industry, since some of the hazards and measures that need to be implemented differ from other sectors. The most common incidents are linked to transportation, followed by contact with equipment or objects, fires, explosions, exposure to harmful substances or environments and falls.
NEBOSH Oil and Gas Course focus
The NEBOSH online Oil and Gas course has a strong focus on the following areas:
- Learn from incidents – using the lessons learned from previous health and safety incidents to better implement preventative measures.
- Hazards characteristic to oil and gas – many of the hazards that exist in the oil and gas industry are unique, and need to be catered for.
- Risk management techniques used in the oil and gas industries.
- Safety cases and safety reports – creating cases for investigation and reports on incidents.
- Process Management – analysing the prevention of the release of hazardous chemicals.
- Failure modes – anticipating failure of health and safety precautions.
- How to safely store hydrocarbons.
- Operations involving furnaces and boilers.
- Explosion and fire risks relating to the oil and gas industry.
- Emergency response procedures.
- Marine and land transport.
Internationally recognised NEBOSH certificate
The certification that you will gain upon completion of the NEBOSH online Oil and Gas Course is recognised around the globe as the benchmark against which health and safety standards are measured in the oil and gas field. This will mean that, even if you do relocate to another country, you will still be able to continue practising as a NEBOSH International certified health and safety professional without the need to revise your certification. The legislation that is covered in this course is also applicable internationally and covers oil and gas safety for both onshore and offshore operations.
The recommended study time before taking on the exam for the NEBOSH online Oil and Gas Course is about 55 hours.
Oil and Gas Course Subject matter
The NEBOSH Oil & Gas Safety course consists of one course unit which is divided into five sections. The key topics that are covered during the course include the following:
- Health, safety and environmental management in context
- Hydrocarbon process safety 1
- Hydrocarbon process safety 2
- Fire protection and emergency response
- Logistics and transport operations
Although it is recommended that students first attain the NEBOSH National General Certificate or NEBOSH International General Certificate or equivalent and have knowledge of hydrocarbon production and process operations, there are no official prerequisites for undertaking this course via NEBOSH distance learning.
The NEBOSH Oil and Gas Exam format
The NEBOSH online Oil and Gas exam consists of one core unit (IOG1), which allows you two hours (120 minutes) to answer as many of the questions as possible. You will be required to answer one long answer question and ten short answer questions. In order to gain your certification, you will need to score a minimum of 45 marks out of a possible 100 (45%). Once you have completed the exam, your paper will be marked by an approved NEBOSH examiner who will provide feedback on your paper and provide you with your final mark. The exams for the NEBOSH online Oil and Gas Course are available annually during the months of March, June, September and December.
Certification progression for Health and Safety
It is suggested that, after studying the NEBOSH online Oil and Gas Course, students consider undertaking the NEBOSH International Diploma, as this will give them an even broader scope of skills and knowledge in their health and safety careers, focussing on best practices when assessing, creating, implementing and managing health and safety systems in the workplace.
Career options for NEBOSH Oil and Gas students
Upon completion of the NEBOSH online Oil and Gas Course, you will be qualified to apply for positions such as:
- Safety Compliance Advisor
- Health and Safety Manager
- Shift Operations Technician
- Construction Manager
- Facilities Manager
- Electrical Contracts Manager
- Health and Safety Consultant
- Rig Engineer
- Health and Safety Site Lead
- Senior Site Safety Engineer
Undertaking studies towards a specific area of health and safety will ensure that you remain an in-demand professional and can have a massive positive impact on your health and safety career. The NEBOSH online Oil and Gas course will also add to your wealth of skills and knowledge if you are already employed in, or already hold a different certification in, the health and safety sector. Get in touch with our expert to discuss the NEBOSH Oil and Gas course.
VDE is an ethernet compliant virtual network that can be spawned over a set of physical or virtual computers over the Internet. The most notable VDE implementation is the open source project Virtual Square. The Virtual Square project is similar to the VMware virtual switch, or vSwitch, which works much like a physical Ethernet switch. It detects which virtual machines are logically connected to each of its virtual ports and uses that information to forward traffic to the correct virtual machines. A VMware vSwitch can be connected to physical switches using physical Ethernet adapters, also referred to as uplink adapters, to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network. But for those who don't want to shell out all that money for VMware licenses, Virtual Square offers the same functionality at no cost. The Virtual Square VDE is one of several tools developed within the Virtual Square project to provide an effective communication platform for virtual machine interoperability. The key features of VDE are:
- Consistent behavior with a real ethernet network.
- It enables interconnection between virtual machines, applications and virtual connectivity tools.
- Last but not least, it does not require administrative privileges to run.
VDE main components
The VDE network consists of the same architectural tools and devices as a real modern ethernet network. Here is a brief description of VDE components:
- VDE switch - Like a physical ethernet switch, a VDE switch has several virtual ports where virtual machines, applications, virtual interfaces, connectivity tools and - why not? - other VDE switches can be virtually plugged in.
- VDE plug - It is the program used to plug into a VDE switch. Data streams coming from the virtual network to the plug are redirected to standard output and data streams going to the VDE plug as standard input are sent into the VDE network.
- VDE wire - Any tool able to transfer a stream connection can become a VDE wire (e.g. cat, netcat, ssh and others).
- VDE cable - VDE components are interconnected via VDE cables that are made of one VDE wire and two VDE plugs, as happens in a physical ethernet network.
- VDE cryptcab - Informally, a VDE encrypted cable. Although it is possible to use tools like ssh or cryptcat to obtain an encrypted wire to interconnect VDE plugs, these tools provide encryption over connection-oriented streams, resulting in nested connection-oriented streams with poor performance and unjustified overhead. The idea behind cryptcab is the adoption of connectionless protocols to provide an encrypted cable facility.
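The forwarding behavior described above, for both the VMware vSwitch and a VDE switch, is ordinary MAC learning: remember which port each source address arrived on, forward known unicast frames to that port, and flood everything else. A toy sketch of that logic (illustrative only, not VDE's actual implementation):

```python
class LearningSwitch:
    """Toy model of the MAC-learning logic a virtual (or physical) switch uses."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # source MAC -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame should be forwarded out of."""
        # Learn: remember which port this source address lives behind.
        self.mac_table[src_mac] = in_port
        # Forward: known unicast goes to one port; unknown/broadcast floods.
        if dst_mac in self.mac_table and dst_mac != "ff:ff:ff:ff:ff:ff":
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
# VM A (port 0) broadcasts; the switch floods to every other port.
print(sw.handle_frame(0, "aa:aa:aa:aa:aa:aa", "ff:ff:ff:ff:ff:ff"))  # [1, 2, 3]
# VM B (port 2) replies to A; the switch already knows A is on port 0.
print(sw.handle_frame(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # [0]
```

A real switch also ages table entries out, but the learn-then-forward core is exactly this.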
Recently, The American Registry for Internet Numbers (ARIN) announced the exhaustion of the free IPv4 address pool. Below is a collection of news articles to keep you updated on this topic.
North America Just Ran Out of Old-School Internet Addresses
The Internet is rapidly running out of the most commonly used type of IP address, known as IPv4. ARIN announced it has run out of freely available IPv4 addresses. While this won’t affect normal Internet users, it will put more pressure on Internet service providers, software companies, and large organizations to accelerate their migration to IPv4’s successor, IPv6. Read entire article.
North America’s IPv4 address supply runs dry
The long-predicted exhaustion of IPv4 addresses has now taken place in North America, with the region’s authority left with no further supply of the 32-bit labels to issue. In the early days of the internet, the 4.3 billion possible IPv4 addresses appeared adequate. But as early as 1995 the Internet Engineering Task Force, or IETF, named the IPv6 successor protocol, and people have been warning of the consequences of the impending IPv4 address exhaustion for years. Read entire article.
No more IPv4, now what?
After several false scares and years of warnings, it has finally happened: the Internet has run out of IPv4 addresses. Now that all IPv4 addresses have been taken, anyone looking for a new IP must either buy one from someone else or adopt the IPv6 format. Some businesses have decided to purchase more IPv4 addresses on the secondary market instead of switching. In most cases blocks of IPv4 addresses can be bought for around $15 each. Buying more IPv4 addresses can only delay the inevitable for these businesses. Read entire article.
ARIN Issues the Final IPv4 Addresses in its Free Pool, Forcing Shift to IPv6
At 128 bits, IPv6 has a much larger address space than the current standard, IPv4, which is facing the threat of address exhaustion because of its small size.
IPv6 provides more than 340 trillion, trillion, trillion addresses, compared to the four billion IP addresses that are available with IPv4. IPv6 also provides more flexibility in allocating addresses and routing traffic, eliminating the need for network address translation. Read entire article.
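The size gap these articles describe is easy to verify: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so the address counts are 2^32 and 2^128. Python's standard ipaddress module confirms the figures quoted above:

```python
import ipaddress

ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,} addresses")   # about 4.3 billion
print(f"IPv6: {ipv6_total:.3e} addresses") # about 3.4e38 -- "340 trillion, trillion, trillion"

# The stdlib exposes the same figures as the size of the all-encompassing network:
print(ipaddress.ip_network("0.0.0.0/0").num_addresses == ipv4_total)  # True
print(ipaddress.ip_network("::/0").num_addresses == ipv6_total)       # True
```

"340 trillion, trillion, trillion" is 340 × 10^36 = 3.4 × 10^38, which matches 2^128 to two significant figures.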
SNMP is the Simple Network Management Protocol. It is used by network management frameworks to manage and monitor network devices, such as hubs and routers. Some computer systems also respond to SNMP queries. Early versions of SNMP used trivial and ineffective security measures, creating significant security exploits. Improper implementation of SNMP services on a given system can cause significant security weaknesses.
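The "trivial and ineffective security measures" refers mainly to SNMPv1/v2c community strings, which travel in cleartext inside every request. The hand-rolled BER encoding below is an illustrative sketch (not a production encoder; real code would use an SNMP library) showing that the community string sits plainly visible in the packet bytes:

```python
def ber_tlv(tag, payload):
    """Minimal BER tag-length-value encoder (short-form lengths only)."""
    assert len(payload) < 128  # enough for this sketch
    return bytes([tag, len(payload)]) + payload

def snmpv1_get(community, oid_bytes):
    """Build an SNMPv1 GetRequest; 'security' is just the community string."""
    version = ber_tlv(0x02, b"\x00")           # INTEGER 0 = SNMPv1
    comm = ber_tlv(0x04, community.encode())   # OCTET STRING, sent unencrypted
    req_id = ber_tlv(0x02, b"\x01")            # request-id
    err = ber_tlv(0x02, b"\x00")               # error-status
    err_idx = ber_tlv(0x02, b"\x00")           # error-index
    varbind = ber_tlv(0x30, ber_tlv(0x06, oid_bytes) + ber_tlv(0x05, b""))
    varbinds = ber_tlv(0x30, varbind)
    pdu = ber_tlv(0xA0, req_id + err + err_idx + varbinds)  # GetRequest-PDU
    return ber_tlv(0x30, version + comm + pdu)              # outer SEQUENCE

# OID 1.3.6.1.2.1.1.1.0 (sysDescr.0), pre-encoded
pkt = snmpv1_get("public", b"\x2b\x06\x01\x02\x01\x01\x01\x00")
print(b"public" in pkt)  # True -- the password equivalent is readable on the wire
```

Anyone sniffing the network segment can read the community string and reuse it, which is why SNMPv3 added real authentication and encryption.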
Vila F.,Center for Studies on Ports and Coasts of | Ruiz-Mateo A.,Center for Studies on Ports and Coasts of | Rodrigo M.,Center for Studies on Ports and Coasts of | Alvarez A.,Center for Studies on Ports and Coasts of | And 2 more authors. Desalination and Water Treatment | Year: 2011 Seawater desalination is a strong bet for developing regions like the Spanish Mediterranean, to satisfy the increasing fresh water demand, and for the Canary Islands, where desalination is the most important artificial water resource. The main characteristic of the waste brine resulting from the desalination process is its high salinity and, consequently, its higher density in comparison with that of the environment. Therefore, the discharge of the concentrated effluent into the sea may cause a negative impact on sea water quality and its ecosystems, especially regarding the sea grass meadows that cover the Mediterranean coast. In order to make the development of desalination plants sustainable, the Spanish Ministry of the Environment and Rural and Marine Affairs agreed to invest in Experimental Development Projects within its National Plan for Scientific Research, Development and Technological Innovation. The project entitled "Development and implementation of a methodology to reduce the environmental impact of brine discharges from desalination" has been approved and supported as a part of the Plan. This project is being carried out by the University of Cantabria in collaboration with the Spanish Centre for Experimentation on Public Works (CEDEX). The aim of this study is to find which discharge devices produce the greatest brine dilution, to make the environmental impact on the biocenosis as low as possible. Under this perspective, the behaviour of different jets and discharge devices was investigated in physical models.
To this purpose, different instrumentation has been used to measure the conductivity and velocity in the near and far field of the effluent, including a micro-scale conductivity and temperature instrument and a Doppler velocity profiler. Part of the experiments have been performed in a wave flume 30 m long, 1 m wide and 2 m high, simulating a 3D superficial discharge from a beach with a fixed slope. In this flume, different simulations, changing the wave variables (height and period, in an irregular JONSWAP wave train), have been tested. The results obtained with the physical model will be presented in this communication. © 2011 Desalination Publications. All rights reserved.
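The "irregular JONSWAP wave train" used in the flume tests is generated from the JONSWAP spectral shape, parameterized by significant wave height, peak period and a peak-enhancement factor. A sketch of the standard one-sided spectrum (textbook formulation with σ = 0.07/0.09 and the common 1 − 0.287 ln γ normalization; the lab-scale Hs and Tp values are illustrative, not taken from the paper):

```python
import math

def jonswap(f, hs, tp, gamma=3.3):
    """One-sided JONSWAP variance density S(f) in m^2/Hz.

    f: frequency (Hz), hs: significant wave height (m), tp: peak period (s),
    gamma: peak-enhancement factor (3.3 is the mean North Sea value).
    """
    fp = 1.0 / tp
    sigma = 0.07 if f <= fp else 0.09
    # Gaussian peak-enhancement exponent
    r = math.exp(-((f - fp) ** 2) / (2 * sigma ** 2 * fp ** 2))
    # Pierson-Moskowitz-type base shape scaled to Hs
    pm = (5.0 / 16.0) * hs ** 2 * fp ** 4 * f ** -5 * math.exp(-1.25 * (fp / f) ** 4)
    a_gamma = 1.0 - 0.287 * math.log(gamma)  # keeps Hs approximately consistent
    return a_gamma * pm * gamma ** r

# Spectrum at flume scale: Hs = 0.10 m, Tp = 1.5 s; energy peaks near fp = 1/Tp.
freqs = [0.1 * k for k in range(2, 30)]
spec = [jonswap(f, hs=0.10, tp=1.5) for f in freqs]
peak_f = freqs[spec.index(max(spec))]
print(round(peak_f, 2))  # close to 1/1.5 ≈ 0.67 Hz
```

An irregular wave train is then synthesized by summing sinusoids whose amplitudes follow this spectrum with random phases, which is what a wave-flume paddle controller does.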
Q. What is the name of the world’s first mass-produced commercial computer? The history of computing hardware is global and intensely interesting. From the early mechanical calculators, like the 1640s Pascal’s calculator, to the punch card data processors of the 1880s, continuous innovation has led to faster and more universal computing devices.
The first commercial computers
Ferranti Mark 1 – The title of first commercially available computer goes to Freddie Williams and Tom Kilburn of the United Kingdom in February 1951. Unfortunately, a change in government led to a loss in funding, resulting in only two units being sold (and one at a major discount).
UNIVAC 1 – The first mass-produced computer came in March 1951, selling 46 machines at more than $1,000,000 each (about $9.2 million in today’s dollars). The UNIVAC (UNIVersal Automatic Computer) was designed by J. Presper Eckert and John Mauchly and gained a lot of public awareness by correctly predicting the 1952 US presidential election.
LEO 1 – The first computer used in a business application goes to the Lyons Electronic Office 1 in September 1951. The LEO 1 was used for payroll, inventory and calculating production requirements for the J. Lyons and Co. food manufacturing company.
2012: 5 reasons it's not the end of the world (and 5 reasons to worry)
- By Kevin McCaney
- Dec 20, 2011
Foretelling the end of the world has been a favorite pastime of certain people almost since the beginning of the world. From Pope Innocent III (who predicted the end would be in 1284) to Cotton Mather (who flooded the zone with 1697, 1736, 1716, 1717, then sometime post-1728), to, supposedly, Isaac Newton (2060), to the fuss over Y2K, it seems the end has always been near. As we roll toward 2012, the doomsayers are at it again, focusing on Dec. 21. Books have been written, websites have been launched, a bombastic movie was made, and the city of Tapachula, Mexico, has even installed a digital clock counting down to the end of the world. Why? Because Mayan mathematicians probably figured that coming up with a 5,000-year calendar was a good day's work and left it at that. The Maya, an advanced ancient people whose civilization dates to about 2,000 B.C. and flourished between 250 and 900 A.D., developed a 5,126-year "Long Count" linear calendar for recording past and future events. The calendar started in 3114 B.C. (by our current Gregorian calendar) and extended to 2012 -- specifically, the winter solstice date of Dec. 21, 2012. After that, they'd have a bit of their own Y2K problem and would need a new calendar, but what was the rush? It was more than 1,000 years away. People writing computer code in the 1960s weren't worried about 40 years down the road; why should the Maya worry about a millennium? There was plenty of time to extend, adapt or replace it with a new calendar. The Maya never got around to it. The civilization, which covered a wide swath of Central America, was in decline by the year 900. And although the culture continued in spots (and Mayan peoples and languages exist today), its time of great influence was over, even before the Spanish Conquistadors arrived.
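The arithmetic behind the calendar span checks out. The cycle that ended on Dec. 21, 2012 is conventionally 13 baktuns of 144,000 days each (a figure from the standard Long Count correlation, not stated in the article itself):

```python
# 13 baktuns of 144,000 days each make up the Long Count cycle.
days = 13 * 144_000
years = days / 365.2425          # mean Gregorian year length
print(f"{days:,}")               # 1,872,000 days
print(round(years, 1))           # about 5125.4 -- the "5,126-year" calendar

# The count began in 3114 B.C.; because there is no year zero between
# 1 B.C. and A.D. 1, naive signed-year arithmetic needs the full 5,126:
print(-3114 + 5126)  # 2012
```

So the end date is just where the cycle's day count runs out, not a prediction of anything.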
Hundreds of years before the Mayan calendar needed an update, everyone had switched to the Gregorian calendar. So it seems there was little reason to start on a new one. It would be like updating WordStar long after everyone had switched to Microsoft Office or Google Apps. The doomsayers aren't deterred, of course. And in the midst of political unrest and failing economies around the globe, and budget cuts, furloughs and seemingly monthly threats of a government shutdown at home, anxiety is on the rise. But, hey, it doesn't have to be the end of the world. All we need is a little hope for the world to survive. Nobody ever committed suicide who had a good two-year-old in the barn, as the old horse racing saying goes. That's hope. And in the realm of technology, there are reasons to think things could be looking up in 2012. Here are five of them. But because technology is always a double-edged sword, we also include five reasons these developments could still be cause for worry.
Reason for hope: Check any list of IT concerns in just about any year, and security is at the top. And probably always will be. But network security just might be improving. The Federal Information Security Management Act, which has often been derided as a paperwork exercise, is making the transition from periodic assessments to the continuous monitoring everybody says it needs. And the Homeland Security Department has appointed an actual technologist with cybersecurity experience, Mark Weatherford, as its first deputy undersecretary for cybersecurity. It doesn't mean everyone's problems are solved, but it's a step in the right direction.
Reason to worry: There are many more fronts to cybersecurity, and not all of them are looking safe. Stuxnet-like threats to water and power plants, and other potential targets such as prisons, are one reason for concern. There also is the growth of mobile malware, and targeted attacks via phishing lures that people still fall for.
And, of course, there's China, which, for all its claims of innocence, seems to have left fingerprints on a lot of cyber attacks.
Reason for hope: Government has always collected loads of information, and the amount has increased exponentially in the computer age. But agencies didn't always know what they had, or where it was, or even how to find it. Recent strides in analytics software are changing that. Tools that can search across multiple databases and perform statistical analysis, predictive analysis and semantic analysis, for instance, are putting all that big data to work. Everything from environmental models to road maintenance to police work is better for it. IBM's Watson also is moving the ball forward on another front, with natural language processing that helped it excel on "Jeopardy!" and is now being used in medical research. The idea of government getting smarter and more effective might seem strange when one looks at the recent Congress, but at the operational level, it could be happening.
Reason to worry: Not to put too fine a point on it, but a smarter, more effective government is exactly what some people fear. Privacy tops the list of concerns, along with the idea that too much efficiency could skew the checks and balances that a society needs.
The mobile workplace
Reason for hope: As the economy and fiscal budgets make life harder for people, at least technology can make things a little easier for workers. Government agencies are working to accommodate smart phones and tablets in the enterprise, which could lead to more mobile computing and, by extension, more teleworking by employees. The Veterans Affairs Department, for instance, is buying up to 100,000 iPads and developing a security strategy to allow for iPads and iPhones. And the National Institute of Standards and Technology has developed a guide to improved remote access procedures to ensure security for mobile devices. Better mobile options mean a better life for employees.
Reason to worry: The more popular mobile devices and apps become, the tastier targets they are for attackers. And if security protocols aren’t in place and users aren’t diligent, those better employees can become real threats.
Supercomputing and super networks
Reason for hope: Supercomputing, as usual, has been making great strides. From employing Nvidia gaming-system chips to increase speed to efforts such as the National Science Foundation’s Extreme Science and Engineering Discovery Environment, a distributed computing infrastructure that will link researchers with advanced resources, high-performance systems show no signs of slowing down. And some research data can now ride on the Energy Department’s 100-gigabit transcontinental network, part of a DOE project that plans for an exabit capacity by 2020. Why should the average user rejoice? Because today’s supercomputing power becomes tomorrow’s everyday computing power, and bandwidth tends to follow the same course. The iPad 2, for example, has the same computing power as the Cray 2 supercomputer of 1985. Down the road, that kind of power and speed is something to look forward to. Reason to worry: Not much, but for all the best efforts in the United States, the fastest machines are, at the moment at least, in Japan and China.
The App Internet
Reason for hope: Feeding off the mobile computing trend, what’s called the App Internet combines smart devices, the Open Web, HTML 5 and cloud computing to deliver “context-aware” apps that know who you are and where you are, and have at least an idea of what you’re looking for — when the next train arrives, how to get to a good Japanese restaurant, what time the next movie starts. And if those apps can talk, like the Apple iPhone’s Siri, all the better for having a truly helpful personal digital assistant. Reason to worry: Will there be any way to hide from all this help?
People complained when it was revealed that iPhones had tracking capabilities, even if Apple said such tracking was only used to figure out where to place cell towers. But tracking seems to be the whole point of the App Internet. Think those apps will be the only entities that know where you are all the time? As with the potential downsides of other trends, it’s something to think about, but it doesn’t spell the end of the world. Privacy, maybe, but not the world.
Windows Task Manager is a standard utility of the Microsoft Windows NT/2000/XP/2003/Vista operating systems. It lets you monitor running applications and started processes in real time and evaluate CPU and network usage. To open the Task Manager, press CTRL+ALT+DEL or CTRL+SHIFT+ESC. The window that opens displays four tabs corresponding to the four activities the Task Manager monitors: Applications, Processes, Performance (usage of system resources) and Networking. Processes is the default tab. If no applications are running on the computer, the Task Manager displays only the service processes of the installed OS. On a home computer, it is a good idea to review the list of processes immediately after installing the OS. Later, if you suspect your computer is infected, you can view the list of running processes and rule out those that were present from the beginning. Descriptions of most processes can be found on the Internet, so whenever you have suspicions about a process, search the web for its definition. Each process is displayed with the following parameters: Image Name (as a rule it coincides with the name of the executable file), User Name (the user who started the process), CPU usage (the CPU column) and memory usage (the Mem Usage column). If necessary, you can end a process by clicking the End Process button. Suppose you have detected a suspicious process and read on the virus encyclopedia site www.securelist.com that the process unambiguously belongs to a virus or a Trojan program, but either no anti-virus program is installed on your PC or the installed anti-virus program does not detect it. In this case you should close all running applications and use the Task Manager to end the process manually.
To keep the suspicious process from reappearing, install anti-virus software (if none is installed), update its anti-virus databases and scan the hard drive for viruses. If the anti-virus program does not find a virus in the executable file of the suspicious process, e-mail the file to email@example.com.
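The baseline-and-compare routine the article recommends, recording the process list right after OS installation and later diffing the running list against it, can be sketched in a few lines. This is an illustrative sketch, not Kaspersky tooling: the process names are made up, and on a real Windows machine the two lists would be filled from the Task Manager or the `tasklist` command.

```python
# Sketch of the article's advice: keep a baseline of process names from a
# clean install, then diff the currently running names against it. Names
# here are illustrative samples, not a real machine's process list.

def unfamiliar_processes(baseline, current):
    """Return process names running now that were absent from the baseline."""
    return sorted(set(current) - set(baseline))

baseline = ["System", "smss.exe", "csrss.exe", "winlogon.exe",
            "services.exe", "lsass.exe", "svchost.exe", "explorer.exe"]
current = baseline + ["notepad.exe", "suspicious.exe"]

for name in unfamiliar_processes(baseline, current):
    print(name)  # candidates to look up in a virus encyclopedia
```

Anything the diff surfaces is only a candidate for investigation, which matches the article's advice to look the name up before ending the process.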
140 inaccurate characters: Social media and disasters - By Michael Hardy - Sep 22, 2011
Does social media spread misinformation during disasters? According to the Congressional Research Service, the answer is maybe. In a report issued Sept. 6, CRS reviewed the use of social media for disseminating information through official and unofficial channels. CRS found some evidence that when hundreds or thousands of people use Twitter and Facebook to report on disasters, some inaccurate information can creep in. “In some cases, the location of the hazard or threat was inaccurately reported,” analyst Bruce Lindsay wrote in the CRS report. “In the case of the March 2011 Japanese earthquake and tsunami, tweets for assistance were ‘retweeted’ after the victims had been rescued.” Other studies suggest that the presence of inadvertent inaccuracy is minimal, CRS reported. The more serious risk is deliberate misinformation. “Some individuals or organizations might intentionally provide inaccurate information to confuse, disrupt or otherwise thwart response efforts,” Lindsay wrote. "Malicious use of social media during an incident could range from mischievous pranks to acts of terrorism." For example, terrorists often call first responders after an attack so they can launch a second attack on the emergency crews. Terrorists might find social media to be a useful tool in those situations, Lindsay wrote. All in all, CRS offered tentative approval of the official use of social media to spread information about disasters and associated response, rescue and recovery efforts. However, because success stories are still largely anecdotal, the costs are unclear and some pitfalls might yet become apparent that are currently unknown, the study concludes that the use of social media warrants more study before the Federal Emergency Management Agency makes it part of its disaster strategy. Technology journalist Michael Hardy is a former FCW editor.
Avoiding Thin Ice
The idea of driving a truck across a frozen expanse of water may sound a little iffy to people living in warmer climates. But ice bridges, which traverse rivers and lakes, and ice roads, which travel along frozen rivers, are common transportation links in the northern regions of Canada, Alaska, Europe and Russia. In Canada's far north, ice roads and bridges are often the only economical way to get materials to and from remote towns and mining operations. Canada's Government of the Northwest Territories and contractors actually build and maintain ice bridges and roads throughout the winter season until the thaw comes in April, said Peter Dyck, fleet facilities officer of the Department of Highways. The bridges are built in layers, he said, and the Department of Highways and contractors use a ground-penetrating radar (GPR) system to track the thickness of the ice. The GPR system is pulled behind a snowmobile or other small vehicle. The system combines GPR data with GPS data so government officials can get a precise correlation between ice depth and physical location on the bridge. The data also can be fed into a GIS software package to create color-coded maps that display weak spots. Road repair and maintenance crews use the maps to target areas that need attention, Dyck said, and officials scan the ice for potentially fatal faults. "There's a certain panic to move loads across these roads before we close them in mid-April. In previous years, we've had traffic jams as a result of people showing up as late as midnight on the last day," he said. The department estimates that more than 4,000 heavy loads crossed ice bridges between January and March last year.
Eyes on Emissions
For the past five years, the Nevada Department of Motor Vehicles' emission control program has monitored vehicle emissions in the Las Vegas metropolitan area under a mandate of the EPA.
Nevada has used remote-sensing technology -- roadside cameras and emissions-monitoring equipment -- to track the tailpipes of approximately 20,000 autos in the Las Vegas area, said Lloyd Nelson, program manager. Every year, some public grumbling accompanies newspaper stories about the testing, Nelson said. But this year, the grumbling is decidedly louder, focusing on the roadside cameras. In April, state Sen. Mark James, R-Las Vegas, told the Las Vegas Review Journal that using the roadside cameras could violate a state law banning the use of cameras for traffic enforcement -- a law that James co-authored. The law prohibits the use of roadside cameras for traffic enforcement unless the camera is held by a law enforcement officer or mounted on a law enforcement facility or vehicle. But Nelson said the cameras are essential because the DMV needs to gather license plate numbers to get information on vehicles' year of manufacture. "The remote sensing is being used to evaluate [our] emissions program's performance, general research, evaluating the fleet in the area [and] evaluating certain vehicles that are high emitters. That's the focus that the DMV has taken over the last five years," Nelson said. James contends that using unmanned cameras to gather information that ultimately could result in a notification of suspended registration for failure to pass emissions tests is ultimately an enforcement action and doesn't comply with state law.
Counties Quake at Cable Revenue Shortfall
In March the Federal Communications Commission reclassified cable Internet connections as an information service instead of a cable service, and fallout from that ruling already is hitting Maryland counties. Comcast Cable told several counties they would no longer receive cable-modem franchise fees from the company because the FCC decision means the company is no longer obligated to collect the money. Counties say that could cost them a sizable chunk of change.
For example, Baltimore County, Md., officials said they could lose $830,000 next year if Comcast stops collecting cable-modem franchise fees. "The definition we created [in the franchise agreement] was that the county would receive a percentage of the gross revenue derived, in essence, from the wire," said Kevin Kamenetz, an eight-year member of the Baltimore County Council and the lead negotiator on cable issues for the county. "Any sources of income that our local Comcast entity receives as gross revenue derived from the transmission over the county rights of way would be subject to our franchise fee." Officials said the county told Comcast that its decision to cease collecting the fee is premature, given that the FCC's ruling isn't final and that if the FCC does reverse its decision, the county is due a refund of fees that haven't been paid. Several counties, through the state's association of counties, will lobby the FCC to change its decision about the classification of cable Internet services. "Congress has been pretty clear that they want to take any negotiating leverage from the local jurisdictions in the guise of free market competition," Kamenetz said. "Obviously, this may be a position that will be resolved in the courts or by Congress."
State Commission Addresses DSL Regulation
As broadband Internet connectivity becomes more commonplace in homes, public utilities' commissions could well play a larger role in regulating broadband providers. The California Public Utilities Commission waded into the broadband fray in March, ruling that CPUC has jurisdiction over quality of service issues, marketing of broadband services and business practices of providers. "State commissions are the place that people go to when they have complaints about the quality of their telephone service, and they don't differentiate DSL from their regular voice-grade telephone service," said Tom Long, adviser to CPUC President Loretta Lynch.
The decision stems from a formal complaint filed by smaller DSL providers against Pacific Bell, part of telecommunications giant SBC, about issues related to Pacific Bell's DSL service. "The issue for us, that was raised by the motion to dismiss by SBC, was whether federal actions or federal law had preempted the ability and right of the state commission to address the claims," Long said. "The answer is no. ISPs have raised claims that come under state law. Nothing under federal law says that we're the wrong place to address those -- the claims related to service quality, discrimination of service." SBC had argued that the FCC alone has jurisdiction over such issues because the services are provided under tariffs that the FCC oversees, and, based on that, states don't have regulatory authority over that service. Long noted the decision is not final; a full administrative hearing before the commission is scheduled and a final ruling should come in about six months. He said approximately 15 states have delved into some form of regulation for DSL providers, and about seven states have reached similar conclusions to the CPUC's. "As DSL is becoming a more important service around the country, commissions around the country are facing the same kinds of claims and cases," he said. "These issues are getting sorted out, and, in California, we're just starting to see formal complaints filed by parties."
The Centers for Disease Control and Prevention is tracking the approximate locations of cell phone users in West Africa who dial emergency call centers in an effort to predict the onset and spread of Ebola outbreaks. “The data is just the number of calls by cell tower, but from that you can get a rough idea of the area that the calls are coming in from, and then derive census, neighborhood data from that,” CDC spokeswoman Kristen Nordlund told Nextgov on Thursday. It’s one of the high-tech approaches the U.S. government is piloting to stop the spread of the disease. There is deep cell phone penetration in many parts of West Africa, where land lines sometimes are nonexistent. By collecting tower data from telecommunications providers, CDC officials can visualize the beginnings of an outbreak, explained Este Geraghty, chief medical officer at software mapping provider Esri. She’s working with the agency on response efforts. In Liberia, special call centers and a “4455” hotline number were set up for residents to ask Ebola-related questions and report cases. The Liberia Ministry of Health and telecom companies, with CDC support, "looked at the cell tower locations and tower traffic -- in other words, which tower the call came in through," Geraghty said. "It isn’t an exact location of the population with questions, but it does give them an idea of which part of the community questions are coming from -- and presumably populations of need that may not be identified through formal case investigations," she said. A spike in the number of calls could suggest a crisis.
‘Tower Dumping Sometimes Controversial’
In different circumstances, such “tower dumps” have sparked outcries over invasive surveillance. The New York Police Department recently was lambasted for examining all the calls made near the Brooklyn Bridge around the time miscreants replaced the American flags atop the landmark.
Nordlund, the CDC spokeswoman, said officials can only see that a call was made to the “4455” Ebola response number and the location of the tower it came through. No personal information is collected, she added – “just total calls per time period by tower.” Using Esri mapping software, public health officials intend to layer the call data over census information, such as population densities and hospital locations. "When you have a dense urban setting where the health system is struggling to cope with an outbreak like this,” such geography tools “become crucial to help guide the limited health care resources,” Geraghty said. Responders need to understand the potential scope of the contagion so they can position mobile diagnostic labs, beds and health care workers accordingly. CDC has said if 70 percent of Ebola patients are under care by late December, the outbreak could end by late January 2015. The pandemic so far has claimed about 3,865 lives, primarily in Liberia, Sierra Leone and Guinea, according to the World Health Organization. A patient in Texas became the first U.S. casualty yesterday. There are more than 8,033 cases of the illness.
Researchers Use Mobile Networks to Follow Virus’ Spread
A Sept. 29 article in the online medical journal “PLOS Currents” outlined the potential of mobile network data to restrain Ebola. Researchers mapped transportation hubs against aggregated call patterns of a million anonymous phone users on the Orange Telecom network in Cote d’Ivoire, Senegal. Each communication was pinpointed by identifying the geographic coordinates of the transmitting tower and the associated cell phone. "Understanding the potential routes of spread of the virus within a country are critical to national containment policies, and will strongly influence more regional spread across borders," the researchers wrote. However, there are limitations to this method that can muddy predictions, such as data confidentiality protections and data precision.
The information amassed can contain competitive information on a network operator’s designs and customer base, plus information about the customers’ travels and locations. Privacy constraints can be overcome during epidemics if companies provide aggregated, anonymous data sets, rather than tower dumps, the researchers suggested. “You get a rough area of geography based on the cell tower location,” CDC’s Nordlund said. Cell tower flow charts are but one of the graphing techniques federal health officials are employing to contain the virus.
Pentagon Repurposes WMD Tracker
The Defense Department has rejiggered a system geared for identifying weapons of mass destruction to instead flag the onset of Ebola outbreaks. The WMD biosurveillance prototype, dubbed Constellation, harvests, synthesizes and visualizes data of interest across the military and intelligence communities, according to Pentagon officials. Through Constellation, "information gathered from WMD threat reduction activities, when integrated with other relevant U.S. government and international partner information, will provide decision-makers and operational personnel a holistic view of the WMD landscape," Andrew Weber, assistant secretary of defense for nuclear, chemical and biological defense programs, told a House Armed Services panel this spring. Weber said Tuesday the Pentagon has built on the original concept to create an online portal, which will be used by nongovernmental organizations and governments "most affected by the Ebola outbreak" as well as Defense Department laboratories involved in the response. Many of these mapping exercises are aimed at "contact tracing," or the process of finding everyone who has come in direct contact with a sick Ebola patient. But such data points can be hard to compile because of Africa's terrain. One of the challenges is “continuing to gain and grow situational understanding over time,” Gen. David Rodriguez, commander of U.S. Africa Command, said Tuesday.
“Isolated places” create even more problems, he added. Satellites can help by offering a very high-level view of the threat. A sudden Ebola outbreak could be indicated, for instance, by an unusually crowded hospital parking lot as viewed from space, Nextgov's sister publication Defense One reported last month. Just as satellite imagery showed Russian forces massing along the Ukrainian border, high-resolution images from low Earth orbit can offer a glimpse of where and when more sick people are seeking treatment.
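The per-tower aggregation described earlier, where officials see only total calls per time period by tower, lends itself to a simple hotspot check: flag any tower whose latest call count jumps well above its recent average. The spike factor and the sample counts below are assumptions for illustration; the real analysis layers such counts over census and hospital data in GIS software.

```python
# Illustrative spike detection on anonymized per-tower call counts.
# A tower is flagged when its latest period's count exceeds a multiple
# of its average over earlier periods. The factor of 3 and the sample
# counts are assumptions, not CDC parameters.

SPIKE_FACTOR = 3.0

def spiking_towers(counts_by_tower):
    """counts_by_tower maps tower id -> list of call counts per time period."""
    flagged = []
    for tower, counts in counts_by_tower.items():
        history, latest = counts[:-1], counts[-1]
        avg = sum(history) / len(history)
        if latest > SPIKE_FACTOR * avg:
            flagged.append(tower)
    return sorted(flagged)

calls = {
    "tower_17": [4, 6, 5, 5, 40],    # sudden surge in hotline calls
    "tower_23": [12, 14, 11, 13, 15],
}
print(spiking_towers(calls))  # ['tower_17']
```

A flagged tower only narrows the search to a neighborhood; as the article notes, it gives an area of need, not exact locations of individuals.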
We live surrounded by light and electricity. Imagine a day with no electricity at home: the first thought that comes to mind is that there must be a fault that needs to be fixed. You need to know not only what a cable fault is but also what kind of instruments will get you out of that situation. This article gives a brief introduction to cable fault location as well as the working process of a typical cable fault locator. A cable consists of multiple wires or electric conductors held together by a covering. Cables are frequently used for the distribution of electrical energy. Although they are highly developed, malfunctions still occur in cable systems. Cable fault location is the process of locating faults, such as insulation faults in underground cables, and is an application of electrical measurement systems. It covers short circuit faults, cable cuts, resistive faults, intermittent faults and sheath faults. A power cable fault locator is designed to locate cable faults and supports pinpointing the fault location, route tracing, cable identification, voltage withstand testing and cable information management. Cable faults are damage to a cable that changes its resistance. If allowed to persist, this can lead to a voltage breakdown. There are different types of cable faults, which must first be classified before they can be located. Contact faults: contact between a conductor and the screen, or between multiple conductors, generates a varying resistance. Sheath faults: damage to the cable sheath that allows the surroundings to contact the cable screen. Moisture-caused faults: water penetrates the cable sheath and contacts the conductors.
Impedance changes at the fault location make measurement more difficult; the resistance usually lies in the low-ohmic range. Voltage disruptions: a voltage disruption is caused by a combination of series and parallel resistances, usually in the form of a wire break. All of the faults listed above will either cut off the electricity or let it exceed safe limits. As we know, each cable has a power-supply limit. When this limit is exceeded, or the wires become weak, a short circuit occurs, causing a spark and then a minor or major explosion. To fix this, a cable fault locator is needed. To locate a fault, the cable must first be tested: during testing, flash-overs are generated at the weak points in the cable, and the cable is then repaired either by providing new wiring or by increasing the wire strength. During operation, the cable fault finder receives signals from the locator, and the indicator on the instrument points to the position where digging and maintenance are required. Fiberstore now offers a full series of power cable fault locators that automatically detect cable faults, greatly reducing the time and training required to find these problems. Our cable fault location systems are applicable to all types of cable from 1 kV to 500 kV and all types of cable faults, such as short circuit faults, cable cuts, resistive faults, intermittent faults, sheath breaks, water trees and partial discharges. Learn more about exporters and suppliers of electrical and fiber optic testers.
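The article does not specify the pre-location method its instruments use, but one widely used technique, time-domain reflectometry (TDR), reduces to simple arithmetic: a pulse sent down the cable reflects off the fault, and half the round-trip time multiplied by the cable's propagation speed gives the distance. The velocity factor below is an assumed value; real cables are characterized individually.

```python
# Sketch of the TDR distance calculation, one common cable fault
# pre-location technique (not necessarily the one inside the
# instruments described above). The velocity factor is assumed.

SPEED_OF_LIGHT_M_PER_US = 299.79   # metres per microsecond
VELOCITY_FACTOR = 0.66             # assumed fraction of c for this cable type

def fault_distance_m(round_trip_us, velocity_factor=VELOCITY_FACTOR):
    """Distance to the fault: half the round trip at the propagation speed."""
    v = SPEED_OF_LIGHT_M_PER_US * velocity_factor
    return v * round_trip_us / 2

# A reflection arriving 5.0 microseconds after the pulse:
print(f"fault at roughly {fault_distance_m(5.0):.0f} m")
```

The crew would then pinpoint the exact dig spot around that distance with acoustic or electromagnetic methods, as the article's description of the locator's indicator suggests.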
Written by John Breeden II
Government workers are always a big target for hackers because of the information they protect. And of everything sitting inside government cubicles right now, keyboards and mice are probably the most trusted devices — or at least the most overlooked — when it comes to security. Well, you can regard those as a potential vulnerability now thanks to an emerging threat, and one with not too many defenses. Called BadUSB, this threat goes back to the Black Hat convention in July when researchers announced they had discovered a way to infect the firmware of USB devices. This lets them inject malicious code into machines those devices connect with. It also allowed keystroke logging and could even reprogram the compromised device so it reports itself as something else (like a camera saying that it’s a keyboard). The worst part is that because this is done by hacking the firmware, for the most part it’s undetectable and outside the realm of virus or malware scanners. Researchers Karsten Nohl and Jakob Lell presented their findings as a proof of concept, but they didn’t release any specifics at the time, fearing hackers would begin to exploit BadUSB before companies could work on a fix. They bought a little time, but not that much, apparently. Just last week another team of researchers cracked the secrets of BadUSB, too — only they posted the malicious code for everyone to see, and quite possibly use, on a public GitHub site. Their argument was that information should be made public and that hackers may have already discovered the BadUSB secrets. In any case, it’s in the wild now, and likely being modified and used by criminal hackers looking for a new tool. I thought that someone must have worked out a proper defense and went searching for folks. It took a while, but finally I came across the director of product management for IronKey, Mats Nahlinder.
According to Nahlinder, most of IronKey’s secure drives and products are safe from BadUSB because of a unique firmware check that happens every time one of its devices is inserted into a computer. “It’s quite simple how we do it,” Nahlinder said. “All of the software running inside the firmware on our products is digitally signed. BadUSB is insidious because it happens below the [operating system] level, which prevents malware scanners from detecting it. But we can stop it.” Nahlinder explained how the process works. IronKey takes all the code inside one of its devices and creates a cryptographic hash, which is a non-reversible operation. The certificate for that hash value is then encrypted itself and embedded inside the non-writable hardware cryptochip. When users insert the device into a computer, they automatically use a public key to decrypt the hash value. If any changes have been made to the firmware, even one single byte of data or one number switch in the code, the stored hash will no longer match the newly created one. “If there is no match, then the drive will refuse to start,” Nahlinder said. “The red LED light will illuminate to indicate that there is a fault, and the drive becomes inoperable.” One place where the government can make use of this technology to protect USB devices is with the new IronKey Workspace W700 mobile workspace, which creates Windows 8.1 desktops in a secure space anywhere in the world for traveling employees. The W700 recently earned FIPS 140-2 Level 3 Certification. Nahlinder explained that, as part of that, the device had to be able to maintain a noncorruptible firmware, which it does. If BadUSB finds its way onto one of their drives, the change in code would render the device useless. 
Of course that means the user is out one portable USB workstation, but their network — and more importantly their data — remains uncompromised and safe, a price most government agencies would be more than willing to pay for that level of protection and assurance. Unfortunately, neither IronKey nor its parent company Imation make mice or keyboards. But perhaps companies that do could follow the pattern of protection put forward by IronKey in its secure drives to lock down the firmware throughout the USB landscape. BadUSB only just got into the hands of the bad guys a week ago, yet it has the potential to be a huge security risk in the very near future. Adding an encrypted firmware checking process to a mouse would likely significantly increase its price, but the cost for doing nothing in the face of BadUSB could be so much higher.
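The compare-and-refuse logic Nahlinder describes can be modeled in a few lines. This is a simplification: IronKey verifies a digitally signed hash inside a hardware cryptochip, while this sketch uses a bare SHA-256 digest and illustrative firmware bytes to show only the "one changed byte disables the drive" behavior.

```python
import hashlib
import hmac

# Simplified model of a firmware integrity check: hash the firmware image
# and compare against the digest recorded at manufacture. Real devices
# verify an asymmetric signature in hardware; this sketch shows only the
# compare-and-refuse step. The firmware bytes are illustrative.

def firmware_digest(firmware: bytes) -> str:
    return hashlib.sha256(firmware).hexdigest()

def drive_starts(firmware: bytes, expected_digest: str) -> bool:
    """Refuse to start if even one byte of the firmware has changed."""
    return hmac.compare_digest(firmware_digest(firmware), expected_digest)

factory_image = b"\x7fFIRMWARE v1.0 ..."       # hypothetical factory image
expected = firmware_digest(factory_image)

print(drive_starts(factory_image, expected))            # untouched firmware
print(drive_starts(factory_image + b"\x00", expected))  # tampered firmware
```

Because the hash is non-reversible, an attacker who rewrites the firmware cannot also forge a matching stored value without the signing key, which is the property the article attributes to the design.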
Really? Cones for a traffic study?
We should have been suspicious when New Jersey Gov. Chris Christie first explained the closed lanes on the George Washington Bridge as being necessitated by a traffic study. Fact is, traffic planners rarely — if ever — shut down lanes to perform traffic studies these days. "In general we don't disrupt traffic unless there is an engineering reason to do so, whether that is construction or signal timing adjustments or things like that," said Doug McClanahan, traffic analyst with the Washington State Department of Transportation. "Our aim is to not get in the way of traffic at all. All 50 states are probably like that." To do their traffic studies without getting in the way of drivers, McClanahan and his colleagues turn to traffic microsimulation software, such as Corsim and VISSIM. Corsim is a Windows-based traffic simulation and analysis program developed by the Center for Microcomputers in Transportation (McTrans), which was established by the Federal Highway Administration. VISSIM, also a Windows application, was developed by PTV Planung Transport Verkehr AG in Karlsruhe, Germany. (The acronym is for German words that translate as "Traffic in cities - simulation model.") The first step in a traffic study, says McClanahan, is to collect data on existing patterns. That, too, can be accomplished without disrupting transportation. "There are a couple places where we get our data," said McClanahan. "If it's operational – that is, if it's today's data – we can get that through mechanical counters, individual human beings who count, in-road monitors and video cameras." Next, the data is plugged into a microsim program, which analyzes driver behavior and which allows an analyst to set parameters and change variables such as, yes, the number of lanes available for the traffic or the timing of traffic lights on a stretch of road. "We have analytical models and we have simulation-type models," said McClanahan.
"When we apply those, we can generally understand what's going to happen if we make a geometric or signal timing change." He adds that this is a relatively recent capability. "We've been able to do static models for 30 years," he noted. "But microsimulation is something we've really only gotten into in the last decade or so." As for the effect of closing lanes on a bridge? "We can simulate that, given time and budget," McClanahan said. "If the issue is emergent and we don't have time to do a simulation using computers, we would have to look to other methods." Posted by Patrick Marshall on Jan 21, 2014 at 12:20 PM
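McClanahan's point, that lane changes can be explored in software rather than with cones, can be illustrated with a toy model. The sketch below is a drastically simplified, macroscopic stand-in for what packages like Corsim and VISSIM do at the level of individual vehicles; all the arrival rates and lane capacities here are invented for illustration.

```python
def simulate_queue(arrival_rate, lanes, per_lane_capacity, steps):
    """Toy macroscopic queue model: cars arrive each time step and
    drain through `lanes` lanes, each passing `per_lane_capacity`
    cars per step. Returns the queue length at each step."""
    queue, history = 0, []
    capacity = lanes * per_lane_capacity
    for _ in range(steps):
        queue = max(0, queue + arrival_rate - capacity)
        history.append(queue)
    return history

# Closing lanes (3 -> 1) turns a free-flowing approach into a growing backlog.
print(simulate_queue(arrival_rate=30, lanes=3, per_lane_capacity=12, steps=5))
# -> [0, 0, 0, 0, 0]
print(simulate_queue(arrival_rate=30, lanes=1, per_lane_capacity=12, steps=5))
# -> [18, 36, 54, 72, 90]
```

A real microsimulation tracks each driver's acceleration, lane changes and reaction time, but the basic question it answers is the same: does capacity keep up with demand, and if not, how fast does the queue grow?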
Intel unveils experimental 'cloud computer' chip
By Herb Torrens | Dec 07, 2009

Intel pushed the outer limits of computing this week by unveiling a new experimental processor it described as a single-chip cloud computer. The chip features 48-core processor technology developed by Intel's Tera-scale Computing Research Program. It was co-created by Intel labs in India, Germany and the United States. The company hopes to engage researchers in the coming year by providing more than 100 of the experimental chips for research and development. Those efforts will include developing new software and programming models based on the chip's technology. Microsoft is also involved in the research, said Dan Reed, corporate vice president of Microsoft's Extreme Computing Group. The company is exploring market opportunities in "intelligent resource management, system software design, programming models and tools, and future application scenarios," Reed said in a released statement. The chip's connection to cloud computing was rather vaguely expressed in Intel's announcement. It states that computers and networks can be integrated on a single piece of 45-nanometer, high-k metal-gate silicon, which is about the size of a postage stamp. The smaller size might be useful for crowded data centers. In addition, the chip might introduce new data input, processing and output possibilities. "Computers are very good at processing data, but it requires humans to input that data and then analyze it," said Shane Rau, a program director at analyst firm IDC. "Intel is looking to speed up the computer-to-human interaction by basically getting the human element out of the way." According to Intel, that kind of interaction could lead to the elimination of keyboards, mouse devices and even joysticks for computer gaming. Intel's announcement even suggested that future computers might be able to read brain waves, allowing users to control functions by simply thinking about them.
However, Rau said there's still room for slowed-down human processes. "This process needs to be thought out very carefully, and that's one area where the slow [input/output] of humans may be an advantage," he said. Intel developed the chip based on the company’s recognition, mining and synthesis (RMS) approach, Rau said. "The technology announcement today is similar to Intel's announcement regarding an 80-core processor last year," Rau said in a telephone interview. "It's basically an effort known as RMS by Intel that puts silicon in the hands of the people and institutions that can create the building blocks for future computing devices and software." The chip is only designed for research efforts at the moment, an Intel spokesperson said. "There are no product plans for this chip. We will never sell it, so there won't be a price for it," the Intel spokesperson wrote in an e-mail message. "We will give about a hundred or more to industry partners like Microsoft and academia to help us research software development and learn on a real piece of hardware, [of] which nothing of its kind exists today." Herb Torrens is a freelance writer based in Southern California.
A team of researchers based at the University of Oxford has come a step closer to unraveling the puzzle of quantum computing after generating 10 billion bits of quantum entanglement in silicon for the first time. Entanglement is a feature of quantum physics whereby it's possible to link together two quantum particles so that a change in one is instantly reflected in the other, regardless of the distance between them. This spooky occurrence, known as quantum entanglement, lies at the heart of quantum mechanics, and harnessing this phenomenon is considered the key to creating future computational devices orders of magnitude more powerful than traditional machines. The scientists — an international team from the UK, Japan, Canada and Germany — used high magnetic fields and low temperatures to produce entanglement between the electron and the nucleus of an atom of phosphorus that had been embedded in a silicon crystal. The electron and the nucleus each behave like a tiny magnet, and each spin represents a bit of quantum information. These spins can then be coaxed into an entangled state. Stephanie Simmons of Oxford University's Department of Materials, first author of the report, explains: "The key to generating entanglement was to first align all the spins by using high magnetic fields and low temperatures. Once this has been achieved, the spins can be made to interact with each other using carefully timed microwave and radiofrequency pulses in order to create the entanglement, and then prove that it has been made." Co-author and team leader Dr. John Morton of Oxford University's Department of Materials, comments: "Creating 10 billion entangled pairs in silicon with high fidelity is an important step forward for us. 
We now need to deal with the challenge of coupling these pairs together to build a scalable quantum computer in silicon.” The use of phosphorus-doped silicon in this experiment is significant as it is the dominant material in modern computing chips, although it’s important to note that the type of silicon used in the experiment was not standard commercial-grade, but rather a high-purity crystal. A report of the research, entitled “Entanglement in a solid-state spin ensemble,” has been published online this week in the journal Nature.
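The entangled state the researchers describe has a compact mathematical signature. The sketch below is a toy calculation, not a model of the actual spin-resonance experiment: it builds the textbook Bell state for an electron-nucleus pair and checks a standard entanglement witness. For a maximally entangled pair, the state of either spin considered alone is completely mixed (purity 1/2), even though the joint state is perfectly defined.

```python
from math import sqrt

# Bell state (|00> + |11>)/sqrt(2) as a length-4 amplitude vector,
# indexed by i*2 + j for electron spin i and nuclear spin j.
bell = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]

# Reduced density matrix of the electron spin: trace out the nucleus.
# rho_e[i][k] = sum_j psi[i,j] * conj(psi[k,j])  (amplitudes are real here)
rho_e = [[sum(bell[i * 2 + j] * bell[k * 2 + j] for j in range(2))
          for k in range(2)] for i in range(2)]

# Purity Tr(rho^2): 1.0 for an unentangled (product) state,
# 0.5 for a maximally entangled pair.
purity = sum(rho_e[i][k] * rho_e[k][i] for i in range(2) for k in range(2))
print(round(purity, 6))  # -> 0.5
```

The purity of 1/2 is exactly what "each spin alone is a fair coin flip, yet the two always agree" looks like in the density-matrix language.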
Wi-Fi Tech of Yesterday

Wi-Fi has evolved significantly. If one goes back a decade, the Wi-Fi industry consisted mostly of 802.11g products and was about to shift to 802.11n. Wi-Fi existed mostly in portable PCs, some desktop PCs, and a small number of PDAs. That was single-stream Wi-Fi in narrow bands. 802.11n brought the option of multiple data streams with MIMO and the use of beamforming to enable a more robust connection, but only to one device at a time. The value of a Wi-Fi access point was limited by the capabilities of its clients: an 802.11n access point with 4X4 MIMO would only make use of 4X4 if there were 4X4 clients on that network, and there were practically none. The occasional 2X2 and 3X3 client – typically portable PCs and Chromebooks – would see a benefit of much higher data rates. 802.11n could work in either band, so the benefits would be greater if both the access point and client were dual-band and connected over the 5 GHz band. 802.11ac Wave 1 continued that push, except the use of 5 GHz became mandatory; 802.11ac only works in the 5 GHz band. Of course, products are typically dual-band for backwards compatibility. 802.11ac also allowed wider channels to be used, which enabled even higher data rates.

Wi-Fi Tech of Today

Today, Wi-Fi is shifting to 802.11ac Wave 2 with MU-MIMO and WiGig. Wi-Fi-enabled products now run the gamut from PCs, tablets, wearables, and a massive number of smartphones to TVs, STBs, smart home products, robots, and assorted IoT products. In the enterprise, the ability to connect anywhere from PCs, tablets, smartphones, and wearables that are densely packed in corporate environments is critical for ease of access to information. Broadband service providers must consider how they can best support home broadband customers with consumer access points. Mobile operators are using Wi-Fi hotspots and small cells supporting link aggregation between LTE and Wi-Fi. 
In a decade the Wi-Fi industry has gone from 20 and 40 MHz channels to 80 MHz channels (and even 160 MHz channels in 802.11ac Wave 2) and to roughly 2 GHz-wide channels with 802.11ad. More spectrum is part of the picture, but complex antenna techniques are also critical. To ease wireless network congestion in the home, enterprise, and service provider markets, two very different approaches are being leveraged by the Wi-Fi market:
- The use of MU-MIMO (multi-user MIMO) and wider channels by 802.11ac Wave 2 in the 5 GHz band.
- The use of beamforming with antenna arrays in ultra-wideband channels by 802.11ad, or WiGig, in the 60 GHz band.

5 GHz or 60 GHz? Both? Design Tradeoffs?

Boosting 5 GHz Wi-Fi with MU-MIMO and using WiGig in 60 GHz is not an either/or situation. The spectrum at 5 GHz will still in some cases get overcrowded, necessitating the use of 60 GHz spectrum. Conversely, the need to connect through walls will require the continued use of 5 GHz bands, unless a space is set up with at least one WiGig access point in each room backhauled over 10 Gigabit Ethernet or 100 Gigabit Ethernet wiring. Since this would be too expensive, it is more likely that the future will bring multiple WiGig access points in key rooms where video and VR are used, while 5 GHz Wi-Fi covers the rest. On the client side, however, design tradeoffs can and will be made. They will run the gamut from single-band to dual-band and tri-band. A smart home product might only use 2.4 GHz or both 2.4 GHz and 5 GHz. A portable PC will use all three bands, for example. A monitor might only use 60 GHz to act as a monitor and docking station . . . or the monitor might be tri-band so the portable PC connects only to the monitor(s) over 60 GHz for the display(s), connections to peripherals, and connection to the network/Internet where the monitor is connected to the Internet. 
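The rate gains from wider channels and more spatial streams follow from simple arithmetic. The helper below uses standard published 802.11ac PHY parameters (234 data subcarriers in an 80 MHz channel, 468 in 160 MHz, a 3.6 microsecond OFDM symbol with the short guard interval); it is an illustrative calculator written for this article, not taken from any vendor datasheet.

```python
def vht_phy_rate_mbps(streams, data_subcarriers, bits_per_symbol,
                      coding_rate, symbol_time_us=3.6):
    """Approximate 802.11ac (VHT) PHY rate: spatial streams x data
    subcarriers x coded bits per subcarrier x coding rate, divided
    by the OFDM symbol time (3.6 us with the short guard interval).
    Result is in megabits per second."""
    bits_per_ofdm_symbol = streams * data_subcarriers * bits_per_symbol * coding_rate
    return bits_per_ofdm_symbol / symbol_time_us

# 80 MHz channels carry 234 data subcarriers; 160 MHz carry 468.
# Top 802.11ac modulation is 256-QAM (8 bits/subcarrier) at rate-5/6 coding.
print(round(vht_phy_rate_mbps(1, 234, 8, 5 / 6), 1))  # -> 433.3 (1x1, 80 MHz)
print(round(vht_phy_rate_mbps(4, 468, 8, 5 / 6), 1))  # -> 3466.7 (4x4, 160 MHz)
```

Doubling the channel width doubles the subcarrier count, and each extra spatial stream multiplies the rate again, which is why Wave 2's combination of 160 MHz channels and more streams moves the headline numbers so dramatically.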
Use the Latest Wi-Fi Protocols for the Best Future Compatibility

One of the most important considerations for product planning is to place 802.11ac Wave 2 and 802.11ad (WiGig) in the context of their evolution. The next two key Wi-Fi protocols are 802.11ax and 802.11ay. (See our Wi-Fi market data for forecasts by protocol by product category.) 802.11ax will evolve Wi-Fi in the 2.4 GHz and 5 GHz bands and will add uplink MU-MIMO. It will be backwards compatible with all older 2.4 GHz and 5 GHz protocols. By ensuring the use of 802.11ac Wave 2 in products today, downlink MU-MIMO will allow these products to be as efficient as possible on the downlink. Older Wi-Fi protocols will really start to slow 802.11ax networks down. The inclusion of 802.11ad will allow current products to be future compatible with 802.11ay, which will be able to fall back to 802.11ad.

Two Free White Papers for More Information

ABI Research is still offering a free white paper on each of these technologies here: https://www.abiresearch.com/pages/mu-mimo-and-802-11ad/. The papers discuss the issues these technologies address and the solutions they provide, the ecosystem support from chipset and product vendors, and how rapidly they will grow in the Wi-Fi market across different product types.
Forget electronic contact lenses; Google's next step could be to inject vision-enhancing electronics directly into the eye. A patent filed by Google and Andrew Jason Conrad—currently the head of Verily, the life sciences unit of Google parent company Alphabet—calls for an "intra-ocular device" that would sit within the lens capsule of the eye. The device would not be a full-blown computer—the processor and controls would live in an external "interface device" such as a smartphone—but it would include sensors, an electronic lens, a battery, and "bio-interactive components." To power the device, Google's patent mentions an "energy harvesting antenna," which doesn't at all bring the Matrix to mind. Google says this antenna could "capture energy from incident radio radiation," and could optionally double as a way to communicate with the external device. Alternatively, the external device could be worn by the user or stashed beside a bed, and would power the eye sensor wirelessly. How would the "installation" work? Patients would get a dose of anesthetic, and then a surgeon would cut through the cornea and into the anterior chamber of the eye, introducing fluid into the lens capsule to help position the device. That fluid would then solidify, coupling the lens capsule and the device. Google describes being able to replace some or all of the patient's natural eye lens if necessary. The main purpose of this device would be to improve or restore vision in people with medical issues such as cataracts and presbyopia. But the patent also mentions other applications beyond the medical realm, including depth and focus sensors for "a virtual scene presented to the user." Why this matters: As with any patent filing, there's no guarantee that Google's intra-ocular device concept will materialize. But keep in mind the company previously patented smart contact lenses before working on a real version that measures blood glucose levels. 
Given enough time, this latest invention could be the HoloLens of the future—not worn atop your head, but squeezed into your eyeball. This story, "Google dreams of injecting electronics into eyeballs" was originally published by PCWorld.
According to Harvard Business School academics, it's a combination of skill, luck and good timing.

What makes some entrepreneurs successful and others unsuccessful? It's a combination of skill, luck and good timing, according to Harvard Business School academics. In their paper "Performance Persistence in Entrepreneurship," Paul Gompers, Anna Kovner, Josh Lerner and David Scharfstein looked into the factors behind an entrepreneur's success (in this case, starting a company that subsequently goes public). Here are the key points:

Serial entrepreneurs really are more successful than first-timers
• An entrepreneur who is backed by venture capital and succeeds in one venture has a 30% chance of succeeding in his next venture, according to the study.
• A rookie entrepreneur, by contrast, has only an 18% chance of succeeding, while those who failed before have a 20% chance of succeeding in a future endeavor.
So the lesson is "If at first you don't succeed, try again."

Success breeds more success
If you're a good entrepreneur then you'll succeed. But the perception of success may be a factor, too.
• Entrepreneurs who have had success in the past are more likely to attract capital and critical resources.
• In addition, higher-quality people and potential customers are more likely to be attracted to that firm, because they think it has a better likelihood of success. That investors choose to back it probably increases the venture's chances of success.
So success breeds further success, even if the entrepreneur was just lucky the first time out.

Market timing is a skill
"A good year" isn't just a term used for fine wine. Choosing the right time to set up a venture is a knack that successful entrepreneurs have.
• Of those computer companies that were set up in 1983, 52% eventually went public, i.e., they were successful.
• Of those computer companies that were created in 1985, only 18% went public – they missed the tide. 
If an entrepreneur set up a company in a “good industry year” (in which success rates were high) they are more likely to succeed in their next venture. They have a skill for choosing to start up in the right industry at the right time. Entrepreneurs who start up a new venture in a good industry year are more likely to invest in a good industry year in their next ventures, the study finds. Companies backed by top-tier venture capital firms are more likely to succeed The top VC firms help companies succeed, either because they are better at spotting good companies and entrepreneurs, they help it attract better resources or help it to formulate a better small business plan. Interestingly, top-tier VC firms were only observed as adding to the success of small business start ups by first-time entrepreneurs or by those that failed in a previous venture. Successful entrepreneurs don’t need a top-tier VC If an entrepreneur with a track record of success starts a company it is no more likely to succeed if it is funded by a top-tier VC firm than a lesser known firm. That’s because if successful entrepreneurs are better, then top-tier venture capital firms have no advantage identifying them (because success is public information), and they add little value. And if successful entrepreneurs have an easier time attracting high-quality resources and customers because of the perception that they’re successful, then top-tier venture capital firms add little value. In fact, another study cited by the Harvard academics found it was rare for serial entrepreneurs to receive backing from the same VC firm across all their ventures and that relationships with VC firms play little role in enhancing performance. Those entrepreneurs that invest in proper liability insurance to protect their ventures have a better chance of succeeding, because their plans are less likely to be blown out of the water by an unexpected and expensive legal disaster or claim. 
At Hiscox, we’re here to help small business entrepreneurs realize their dreams of success.
For SSL to work, the server computer requires a file that is more commonly known as an SSL certificate. In its most basic form, this certificate contains a public key along with unique mathematical codes that identify the web host; the matching private key is held securely by the server. Together, the public and private keys make it possible to create a secure channel that encrypts and decrypts data travelling between a client and the web host, so even if the data is hijacked halfway, all the hijacker would see is jumbled code. This certificate needs to be installed onto a web server so that it may begin to initiate secure sessions with client browsers. Once installed, a client browser is able to obtain the certificate from the web host and subsequently encrypt its transmission to the web host using the public key in the certificate.
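The public/private key relationship at the heart of a certificate can be shown with a toy RSA key pair. The primes below are tiny, chosen only so the numbers stay readable; real certificates use 2048-bit or larger keys, and TLS normally uses the key pair to protect a per-session symmetric key rather than encrypting the traffic directly.

```python
# Toy RSA key pair with tiny primes to illustrate the public/private
# key pair behind a certificate. Purely illustrative; never use keys
# this small in practice.
p, q = 61, 53
n = p * q                 # 3233: the modulus, part of the public key
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (in the certificate)
d = pow(e, -1, phi)       # private exponent (kept secret): 2753

def encrypt(m):
    """Anyone holding the public key (n, e) can do this."""
    return pow(m, e, n)

def decrypt(c):
    """Only the private-key holder (d) can undo it."""
    return pow(c, d, n)

message = 65
ciphertext = encrypt(message)  # -> 2790, the "jumbled code" on the wire
print(decrypt(ciphertext))     # -> 65
```

A browser performs the `encrypt` side with the public key it pulled from the certificate; only the server, holding `d`, can perform the `decrypt` side, which is what makes an intercepted transmission useless to a hijacker.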
What is a firewall?

Technically, a firewall definition could be that it's the part of a system or network that blocks unauthorized access but permits outbound communications. Most importantly, firewalls are intended to protect key IT assets from security threats such as denial-of-service attacks or data theft. Firewalls come in many varieties, and what makes one better than another will depend on numerous organization-specific factors. When IT Central Station users were asked what makes the best firewall, they described a number of factors that will help anyone make the right choice. Some security professionals want to know: what is the best free firewall? IT Central Station reviews suggest that this question should be asked only after one has assessed the basic requirements about usability and features. Visibility is offered as one of the most critical aspects of an effective firewall. Users want global reports and traffic visibility as well as application visibility. IT Central Station members also want the firewall to provide visibility into specific users' behaviors. Visibility as a key point of value cuts across different types of solutions, including Windows firewalls, firewall software and network firewalls. Ease of use and simplicity of administration also rated as high priorities for firewall buyers. A firewall should be easy to manage and configure. Easy installation is essential, as is integration. According to IT Central Station reviewers, firewalls typically function in complex, heterogeneous security environments. In parallel, solid vendor support is important. Reviewers noted that the first line of response to an issue with a firewall is almost always an in-house technical resource. That resource needs to be trained easily. If training is too cumbersome or if the firewall admin is a hard-to-find hire, the department will suffer. 
Firewall users list many specific functions as “must haves.” These include intrusion prevention (IPS), VPN, high throughput, data loss prevention, SSL, IPSEC, application control and web content filtering. Some users want a firewall to easily integrate with an LDAP server or RADIUS server. Anti-spam is desirable, as is anti-virus and anti-spyware protection. Users emphasize the importance of IPv6 native support as well as traffic shaping and bandwidth control.
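Underneath all these feature checklists, the core mechanic is easy to sketch: compare each packet against an ordered rule list and let the first match decide, with a default of denying unsolicited inbound traffic. The rules and field names below are invented for illustration; real firewalls match on much richer state (connections, applications, users).

```python
# Minimal sketch of rule-based packet filtering. Each rule matches a
# packet if every field the rule specifies agrees with the packet;
# unspecified fields are wildcards. No rule matched => deny (default).
RULES = [
    {"direction": "out"},                               # allow all outbound
    {"direction": "in", "port": 443, "proto": "tcp"},   # allow HTTPS in
    {"direction": "in", "port": 22,  "proto": "tcp"},   # allow SSH in
]

def allowed(packet):
    """Return True if any rule matches every field it specifies."""
    return any(all(packet.get(k) == v for k, v in rule.items())
               for rule in RULES)

print(allowed({"direction": "out", "port": 80,  "proto": "tcp"}))  # -> True
print(allowed({"direction": "in",  "port": 443, "proto": "tcp"}))  # -> True
print(allowed({"direction": "in",  "port": 23,  "proto": "tcp"}))  # -> False
```

Features like application control or user-based visibility amount to adding richer fields to the packet record and richer predicates to the rules; the match-and-decide loop stays the same.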
The IT world is already overflowing with new and unnecessary 2.0 labels, so healthcare IT professionals will likely view Health 2.0 with a fair amount of skepticism. Nevertheless, it is important to understand what people are calling “Health 2.0”, because organization administrators will be asking about it or wanting to implement it, while physicians and patients may already be using it to some degree. The Physician Executive Journal reports that about 80% of Internet users currently rely on online resources for healthcare information. What Is Health 2.0? Like many other 2.0 labels, Health 2.0 is a wide-ranging term with several divergent definitions. Most experts agree that Health 2.0 involves the application of Web 2.0 technology to the healthcare field. Some definitions simply end there, while others claim that Health 2.0 extends far beyond the Web and represents an ideological departure from traditional healthcare delivery. Info-Tech adheres to the more limited, undisputed definition.
Primer: LAMP
By David F. Carr | Posted 2003-07-01

Shorthand for Linux-Apache-MySQL-and (your choice of) Perl, Python or PHP. In effect, it's a "stack" of basic business software that is freely available to corporations. The LAMP stack consists of operating system, database, Web server and Web-scripting software. These layers are comparable with the ones that make up commercial stacks like Microsoft .NET.

Linux
The free, imitation Unix that Linus Torvalds invented while a university student. From its hobbyist roots, Linux has grown into a reliable operating system that now gets corporate support from startups like Red Hat and big companies like IBM.

Apache
The world's most-used Web server; it's controlled by a group called the Apache Software Foundation and has also been embedded in commercial products like IBM WebSphere.

MySQL
A popular Web database; it has yet to prove itself capable of supporting critical business needs, such as financial transactions. MySQL AB of Sweden backs the product and also sells a commercial version.

Perl, Python or PHP
Though open-sourcers aren't likely to agree on a single "best" programming language, PHP is increasingly popular. It's also the one that is the most similar to Java Server Pages (JSP) and Microsoft Active Server Pages (ASP). PHP, which originally stood for Personal Home Page, is yet another Web-scripting technology that mixes HyperText Markup Language display code with programming instructions.

If we're talking about a set of interlocking technologies on which developers can build, then yes, LAMP is a de facto platform. Still, each component is controlled by a different organization, making it less cohesive than, say, the Microsoft combination of operating system, database, Web server, programming languages and tools. 
Sun Microsystems can also sell you a fairly complete package (leaving the database to Oracle), though the Java platform does include competing implementations of the standards. Would you want to run your core financial systems on LAMP technologies? Probably not, given that until recently MySQL didn't even support the concept of a transaction. On the other hand, these technologies are already used to run high-volume Web sites, such as the O'Reilly Network, and they are certainly capable of supporting most intranet applications. Gartner Inc. analyst Mark Driver doesn't doubt that many big companies have open-source skunkworks applications. But, he says, "this sort of thing is most popular with organizations that are very price-conscious, capable of self-development and comfortable with peer-based support. That's typically not the Global 2000." Certainly. You can do Java development on Linux and take advantage of the Apache extensions for handling JSPs and Enterprise JavaBeans. You can use other Web application servers based on entirely different programming languages. You can run commercial databases like Oracle or DB2 on Linux. But if you want an end-to-end open-source solution, LAMP is the popular choice. Besides those developers who distrust Java as being not quite open enough, there are also those who happen to like PHP (or Perl or Python) as a matter of personal taste. By itself, LAMP really only defines software for Web applications. Although you can use it to build an application that connects to sophisticated middleware, the heavy-duty programming would likely have to be done in a language other than PHP, Python or Perl. The .NET and Java platforms, on the other hand, offer a way of writing both Web scripts and complex enterprise applications in the same language.
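The division of labor in the stack, a web script pulling rows from the database and weaving them into HTML, can be sketched in a few lines. Python and its built-in sqlite3 module stand in here for PHP and MySQL so the example is self-contained and runnable; a real LAMP page would do the same thing with a PHP script and a MySQL connection sitting behind Apache.

```python
import sqlite3

# Sketch of what the "M" and "P" layers do together: the script
# queries the database and mixes the result into HTML markup.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (title TEXT)")
db.execute("INSERT INTO articles VALUES ('Primer: LAMP')")

rows = db.execute("SELECT title FROM articles").fetchall()
page = "<ul>" + "".join(f"<li>{title}</li>" for (title,) in rows) + "</ul>"
print(page)  # -> <ul><li>Primer: LAMP</li></ul>
```

The "L" and "A" layers simply host this: Linux runs the process, and Apache routes an incoming HTTP request to the script and returns the generated markup to the browser.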
Research yields reliable test for preeclampsia
Tuesday, Sep 3rd 2013

Pregnancy is often a joyful time for families; however, complications can also pose serious risks. Researchers from the University of Manchester and Central Manchester University Hospitals NHS Trust have developed a system to test for preeclampsia, one of the riskiest conditions for expectant mothers. There was originally no way to determine the risk for first-time mothers; however, by analyzing urine samples, the researchers may have found a breakthrough in testing for the likelihood of the complication. According to the university report, samples taken at 15 weeks of pregnancy, before the traditional 20 weeks when symptoms begin to show, contained proteins that differed between women who developed the condition and those who didn't. The study found two specific proteins that had not been previously associated with preeclampsia but were consistently a predictor of risk. This breakthrough may facilitate early intervention and spur closer monitoring for mothers who have a high possibility of developing the condition. "We also hope to understand the biology of the disease better by determining why these proteins are higher in women with preeclampsia and whether they have a role in the development of the placenta," said Dr. Jenny Myers, one of the research team's leaders.

New hope for health

Preeclampsia has both mild and severe cases, which can make it difficult to detect if the symptoms are on the lighter side. Two of the most common indicators are high blood pressure and excess protein, along with headaches, nausea, changes in vision and upper abdominal pain, according to the Mayo Clinic. Because some of these symptoms are common in pregnancy, preeclampsia can be even more difficult to diagnose. 
The real cause of preeclampsia has not been determined; however, this testing development will help physicians better understand each individual's risk factors and ensure that the pregnancy continues smoothly, with consistent monitoring and treatment for the complication. The only cure for the condition is to deliver the baby, which can put the mother and child at risk for further complications. However, additional research could yield new insights into the disease's development and lead to more effective courses of treatment. Using environmental control systems has helped engineer numerous breakthroughs in medicine. Keeping a stable temperature ensures that samples remain viable for testing and that the results aren't a fluke. This helps researchers give the public straightforward answers and ways to change lives for the better. Diseases like preeclampsia may be difficult to treat; however, detection can easily lead to prevention and better medicine to significantly reduce the condition's risk in the future.
The What And Why Of Burst Buffers
May 19, 2015 Mark Funk

Burst buffers are a hot topic these days. At the most simple level, a burst buffer consists of the combination of rapidly accessed persistent memory with its own processing power. Specifically, this persistent memory, packaged with its own set of processors and their non-persistent memory (for example, DRAM), is connected to a chunk of symmetric multi-processor (SMP) compute through high-bandwidth links (such as PCI-Express). This burst buffer also sits between the SMP's non-persistent memory and the slower, but significantly larger, persistent memory used as more permanent storage. The burst buffer's purpose is to allow applications running on an SMP's fast processor cores to perceive that the application data – data first residing in the SMP's local and volatile memory – will be quickly saved on some persistent media. As far as the application is concerned, its data, once written into the burst buffer, has become persistent with a very low latency; the application did not need to wait long to learn that its data had been saved. If the power goes off after the completion of such a write, the data is assumed to be available for subsequent use. The data residing in the burst buffer is not the data's ultimate destination, but the application need not perceive that. The application believes that the quickly written data is held persistently, and it then becomes the responsibility of the burst buffer device to forward this same data to the larger and slower storage media. You can see this effect in this silly burst-buffer animation link (supported by MIT's Scratch); the left side is the application data, the right side (the goal) is the persistent storage, in this case, say, hard drives.

Why the term burst buffer?

In comparison to the total size of the target persistent storage, the burst buffer is small. But perhaps more important, the bandwidth into the burst buffer is often much larger than the bandwidth out of the burst buffer. 
Picture, for example, constantly filling up that animation's bucket with a big hose and draining it with a smaller one; it would not take long before the bucket fills up and you need to turn off the big hose. So, if your application were capable of consistently writing data into the burst buffer at a rate exceeding the bandwidth out of the burst buffer, the buffer would fill up and your application would need to wait. If instead you turn on the big hose only occasionally – in bursts – even the small hose, left on and emptying the bucket steadily, can keep the bucket from filling up. The "burst" in burst buffer says that your application can occasionally write data blocks as bursts into the burst buffer at very fast rates, and so with lower latency. But over longer periods of time, the average rate of data being written can be no faster than data can be read out of the burst buffer and into longer-term persistent storage. Referring again to our animation, for a while the application-cats can very quickly fill the bucket, but then they need to either stop or write more slowly into the buffer; the goal-cat needs time to empty out the bucket to make space. But it's not just bandwidth that we are playing with here. We are also talking about the latency of a write into persistent memory. Certainly a high-bandwidth link from the SMP's memory to that of the device containing the burst buffer's persistent memory is key, but suppose that this persistent memory were just a set of spinning disks? The data would get across the link fast but would then need to wait for the usual HDD latencies; these longer latencies are related to positioning the head over the correct cylinder and sector to actually do the write. So, within the burst buffer device, we are also implying the use of more rapid forms of solid-state memory acting as the persistent memory.
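The bucket dynamic can be put in rough numbers. Below is a toy simulation – all rates, capacities, and duty cycles are made-up illustrative values, not measurements of any real device. As long as the *average* inflow stays below the drain rate and each burst fits within the buffer's capacity, the application never stalls; sustained writing above the drain rate eventually fills the buffer.

```python
# Toy model of a burst buffer: ingest happens in fast bursts, drain is
# slow but continuous. All rates/sizes are made-up illustrative numbers.

def simulate(burst_gb_s, drain_gb_s, capacity_gb, burst_secs, idle_secs, total_secs):
    """Return the peak buffer occupancy (GB) and whether writes ever stalled."""
    level = 0.0
    peak = 0.0
    stalled = False
    for t in range(total_secs):
        in_burst = (t % (burst_secs + idle_secs)) < burst_secs
        if in_burst:
            level += burst_gb_s          # big hose on
        level -= drain_gb_s              # small hose always draining
        level = max(level, 0.0)
        if level > capacity_gb:
            stalled = True               # application would have to wait here
            level = capacity_gb
        peak = max(peak, level)
    return peak, stalled

# Bursting 10 GB/s for 5 s out of every 60 s, draining at 1 GB/s:
# average inflow (10*5/60 ≈ 0.83 GB/s) stays under the 1 GB/s drain.
print(simulate(10, 1, 100, 5, 55, 600))   # peaks well under capacity, no stall

# Bursting half the time overwhelms the drain and fills the buffer.
print(simulate(10, 1, 100, 30, 30, 600))  # stalls
```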
To empty the burst buffer, we'd also want relatively rapid reads of this same memory when the burst buffer device forwards its data to the longer-term storage devices. OK, got it. But why bother with a burst buffer in the first place? As best I can tell, this notion recently reached its greatest interest in support of HPC's need for data checkpointing. Because of the potential for failure of some type in really large systems driving massively parallel algorithms, these same algorithms need to occasionally checkpoint their state. This is just in case the algorithm needs to be restarted after such a failure; it wants to restart where it left off, not at the beginning. With traditional types of persistent storage devices, the pipe to persistent memory was neither wide enough nor fast enough for this checkpointing period to be perceived as a small fraction of the processing time. The aforementioned burst buffer helped this out a lot. There are, though, other uses. Comparative Anatomy: External Direct Attached Storage Devices Before we go on, let's do a bit of comparative anatomy to help picture this point. Let's compare this notion of a burst buffer to more conventional external DASD controllers. Interestingly, external DASD also happens to incorporate hardware not dissimilar to that of a burst buffer. As in the figure below, the external DASD controllers also have:
- Multiple processors
- Dynamic memory
- Non-volatile memory, and
- Direct connections to many units of HDDs or SSDs.
Communicating with these external DASD controllers, from the volatile memory of the SMPs, requires multiple links, often passing over multiple bus types. When an SMP writes data to the controller's persistent memory, the data flows over these links, into the dynamic and then into the non-volatile memory of the controller.
Once successfully there, the controller can respond back to the SMP that the write request is now held in a persistent state, again responding over the multiple links. As with the burst buffer, the external DASD controller then stages this data in preparation for writing to the HDD or SSD proper. At some level, the concept is very similar to that of the burst buffer. But each burst buffer device is very close to the SMP's memory; typically just one PCI-Express link away, and of higher bandwidth and lower latency than that available over the full set of busses and bus types to the external DASD controller. As a result, the application waiting for its data to become persistent becomes aware of this condition much sooner. This allows the application to return to processing more rapidly. Notice, also, in the preceding figure the use of what appears to be redundant external DASD controllers. They are indeed there for higher-availability redundancy. But, as can be seen in the following figure, each controller's NVRAM acts as front-end persistent storage for the other controller. In the event of a failure of one, provided the data had been written to the other's NVRAM – prior to being written to the HDD/SSD – the remaining controller can pick up the data for the other and forward that data to the HDD/SSD. In that sense, it is a write cache or even a type of burst buffer. Just like the burst buffer, the data must reside in this NVRAM persistent memory before the controller can respond to the host system that the data really is held in a persistent state. Also like the burst buffer, performance can become limited by the size of the NVRAM. Notice that the contents of the external DASD's DRAM – and so its back-up on the NVRAM – are also a staging area for data being written to the HDD/SSDs. Once written successfully, the area used in this memory is freed for subsequent use.
But if this memory becomes full, subsequent data flowing into the controller must wait; this wait, in turn, becomes visible to the application. Lastly, let's recall just why persistent storage devices even exist. If there is a failure, like a power failure, when the controller subsequently returns to an active state, it will continue where it left off, writing the staged data to the HDD/SSDs. Subsequent reads of this same data will find the updated data as though no failure had occurred. The Burst Buffer Approach So it would seem that with burst buffers we have a better solution for a rather specialized form of HPC application; a solution rather looking for a broader problem. But is it useful only there? Couldn't such a device, one providing very low latency access to persistent memory, be of more general use as well? Before going on, let's again address what we mean by persistent memory. In the event of – say – a power failure, the use of persistent memory implies that any data residing there can be perceived as safe and available for use, no matter the user, once the system is restarted. For example, in a database transaction, if the database manager is told that the data it has written is in persistent memory, the database manager allows the transaction to complete. A subsequent transaction started after the power is restored (and the system restarted) must be capable of successfully using the data from the previously completed transaction. So any data written into a burst buffer and perceived by the database manager as having become persistent must also be available for use, just as though it had been successfully written to the more traditional, slower disk drives. For what follows, let's assume that the burst buffer technology is capable of being responsive to such a system restart. We chose the notion of a database transaction in the preceding paragraph for a reason. Burst buffers can be useful in multiple ways in speeding database transactions as well.
To explain, let's consider a system with a large set of concurrently executing and relatively complex database transactions, ones which just happen to also be changing the database. Think here in terms of thousands of such concurrently executing data-modifying transactions per second. The way that such database transactions often work is that, as each bit of data is accessed and perhaps marked for subsequent change, the database manager is sprinkling locks all over the database. Any concurrently executing transaction subsequently running into a conflicting lock set by another transaction often results in a delay:
- The latter transaction waits for the lock to be freed by the former, or
- The latter transaction restarts in the hope that the lock will later be found to be freed.
The intent of this is to ensure that the latter transaction does not see the results of the former until the former transaction has successfully ended. The point is, the transactions running into lock conflicts appear to execute for a longer period of time than would have been the case without the lock conflicts. So, when and why do the locks get freed? The rules supported by database management systems often require that before the locks get freed, the data changes associated with each transaction – or actually something representing these changes – must first have made their way out to persistent storage. Said differently, a transaction's locks don't get freed and the transaction doesn't end until the transaction's state is known to be on persistent storage. So once again, before the locks can be freed, allowing conflicting transactions to make progress, a successfully executing earlier transaction must first wait for writes to persistent memory to complete. Without burst buffers – that is, with traditional forms of persistent memory – the earlier transaction must spend from many microseconds to multiple milliseconds just waiting on such writes.
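A back-of-envelope model makes the stakes visible. All numbers below are hypothetical, chosen only to show the shape of the effect: commit latency adds directly to lock hold time, and hold time in turn determines how many transactions contend for a hot lock at any instant.

```python
# Toy model: locks are held from first acquisition until the commit
# record is known to be persistent, so commit latency adds directly
# to the lock hold time. All numbers below are hypothetical.

def lock_hold_ms(work_ms, commit_latency_ms):
    return work_ms + commit_latency_ms

hdd_hold   = lock_hold_ms(0.5, 5.0)    # ~5 ms commit to rotating media
burst_hold = lock_hold_ms(0.5, 0.05)   # ~50 microsecond burst-buffer commit

# With T transactions/second each holding a hot lock for h ms, roughly
# T * h / 1000 transactions are holding (or waiting on) it at any instant.
def expected_contenders(tx_per_sec, hold_ms):
    return tx_per_sec * hold_ms / 1000.0

print(expected_contenders(2000, hdd_hold))    # ~11 piled up at once
print(expected_contenders(2000, burst_hold))  # ~1.1 at once
```

Shrinking the commit latency two orders of magnitude shrinks the pile-up on the hot lock by roughly the same factor, which is exactly the train-wreck avoidance described next.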
Once the writes are known to be persistent, the transaction completes and frees its locks. If this write time were to become much shorter, say via burst buffers, the locks would get freed sooner, potentially prior to the point in time where a later transaction first even needed a lock (thereby avoiding lock conflicts). A few milliseconds' delay; no big deal, right? Well, actually, it often is. This one delay means that other concurrently executing transactions, transactions which are holding their own locks, are also holding their locks longer. This, in turn, results in still other transactions waiting still longer. It's not just one transaction leaving its locks around; it can be thousands, and they begin conflicting with each other because of the increasing length of time they are being held. These can occasionally turn into a real train wreck. A massive system with a lot of processors, all ideally busy driving database transactions up to the limit of their compute capacity, is instead largely idle, its processors themselves waiting for the still-waiting database transaction locks to free up. Of course, these now waiting transactions aren't generating writes to disk, allowing the NVRAM to empty some. But would you rather hurry up and wait, or run fast almost all of the time? In the ideal, picture now an in-memory database – one where reads from disk are minimal – residing in the multi-terabyte memory of, say, a large NUMA-based SMP system. Let's also assume that we want the DBMS to maintain ACID properties. The transaction state of thousands of currently executing transactions – and subsequently their changed database data itself – is flowing quickly and at various rates into burst buffers. The trick for these devices is to ensure that data is also flowing into slower backing-store devices at a rate sufficient to keep some burst buffer storage always available. A Possible Future?
Published musings on the upcoming Power9 suggest that not only is PCI-Express-linked persistent storage possible, but future persistent storage could be directly attached to the processor chip's memory bus. Instead of asking some device to asynchronously copy data from the host system's DRAM into the burst buffer device, it would then seem possible to make data persistent with nothing more than a process-based memory copy operation (along with flushing the copied data out of the cache). Once out of the cache and residing in this storage, the data suddenly becomes persistent. Talk about a fast burst buffer! Fast persistence, yes, but it is not quite that simple. Keep in mind why we are making the data persistent in the first place. We are placing the data there with the intent that it be available after, say, the power gets cycled; this is after the point that the OS was shut down. So suppose the OS that wrote the data to this fast persistent storage just woke up. How does that OS know that the last time it was awake it had just completed a write which it wanted to save in case of such a calamity? Is this directly attached persistent memory somehow associated with a file system, or perhaps with a database log? Suppose also that this OS failed on this hardware system, but its replica woke up (or simply took over) on another system? What then of the data still residing in the persistent storage of the now-failed system? Can it be accessed? I think you get the idea. It's going to be great to have such rapidly accessible persistent storage in the relatively near future, but the normal expectations for persistent storage might need to change some if we also want to use such truly rapid persistent storage. Single-Level Object Store And Burst Buffers And now for something completely different. Most of us have got it into our heads that persistent storage implies some form of a file system.
Whatever we want to remain persistent goes into something like a file. Those of us who are programmers know that our data – our objects – reside in various address spaces, but then when we want to make them persistent, we flatten them out, remove them from our address space, and write them into a file. How inefficient is that? Why can't we just place our objects directly into persistent storage and know – even after a power cycle – that our objects are just there where we left them? Interestingly, there is an operating system that does just that, although that fact tends to be hidden within the operating system itself. This OS is very definitely object-based. It has a file system, of course, but files, directories, and libraries – and everything else for that matter – just happen to be objects of types known to this OS. This OS is now called IBM i, but for those who have been around for a while it was the System/38, then the AS/400, then the iSeries . . . damn those marketers. But better still, from the point of view of burst buffers and persistent storage in general, those objects reside persistently at a unique virtual address. Yes, no matter the location of that object – be it in DRAM, a burst buffer, SSD, HDD, you name it – that object is represented by an address. Even if the power is off and the OS inactive, that object is still subsequently accessible using that address. Indeed, from the moment that the object is created, that object is bound to its address. They call this notion Single-Level Store (SLS), a completely appropriate name. Why do I bring that up here in a discussion on burst buffers? At the time that this architecture was first being created, the basic storage model was one where volatile memory (a.k.a. DRAM, main store) was where the programs ran and objects were modified, and then something else wrote the data into persistent storage (also known as HDD, DASD) and it took a while to get there. It is one we are all used to.
As a result, the OS would allow some data to be explicitly committed to persistent storage. But, recalling that every byte in SLS has an address and that physical memory is managed as pages, as the pages are aged out of the DRAM, the objects residing there are also aged into persistent storage (and are subsequently re-accessible from there using that same address); in the fullness of time, any changed objects just automatically become persistent. So, from a read point of view, aside from performance, SLS makes the distinction between volatile and persistent memory transparent; if you want an object, no matter its location, the OS finds it using its persistent address. It follows that with objects being staged via burst buffers – for ultimate write to, for example, HDD – if the application subsequently needed an object back in DRAM, the OS would use the object's address and find that the object still existed in the burst buffer; as a result, it would access the object more rapidly from there. The location transparency provided by SLS, which historically has suffered from the performance penalty of HDD accesses, now becomes more performance-transparent as well. Object writes to burst buffers are done more rapidly. And for read accesses, if an object happens to still reside in the burst buffer, the reads are done more rapidly as well. So, coming full circle, if you have an object that you want to be persistent, with a capability like SLS supported, you write it as an object, keeping the object's organization as it is. And if that object store happens to be augmented by something like a burst buffer, you can be assured that it won't take too long for your application to know that the object really is persistently held. And, by the way, if you want that object to stay in DRAM in the meantime, you can have that advantage as well.
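The memory-copy-plus-flush idea from the Power9 discussion above can be approximated today, as a rough sketch only, with an ordinary memory-mapped file standing in for storage-class memory. The file path and sizes here are arbitrary, and a file-backed mapping is an analogy, not real memory-bus-attached persistent storage:

```python
# Rough analogy (a memory-mapped file stands in for storage-class
# memory): make data "persistent" with a memory copy plus a flush.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)            # back the mapping with one 4 KiB page

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[:16] = b"hello, persisted"     # a plain memory copy into the mapping
    m.flush()                        # force the page out to the backing store
    m.close()

# The data survives the mapping's lifetime, as a later reader can verify.
with open(path, "rb") as f:
    print(f.read(16))
```

Note this sketch sidesteps exactly the questions raised above: on restart, something (here, the file system and the path) must still tell you where the persisted bytes live and what they mean.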
- Benchmarking a Burst Buffer in the Wild, Nicole Hemsoth, March 26, 2015, https://www.nextplatform.com/2015/03/26/benchmarking-a-burst-buffer-in-the-wild/
- Burst Buffers Flash Exascale Potential, Nicole Hemsoth, May 1, 2014, http://www.hpcwire.com/2014/05/01/burst-buffers-flash-exascale-potential/
- Perspectives on the Current State of Data-Intensive Scientific Computing, Glenn K. Lockwood, June 24, 2014, http://glennklockwood.blogspot.com/2014/06/perspectives-on-current-state-of-data.html?spref=tw
- NERSC Burst Buffer, http://www.nersc.gov/research-and-development/storage-and-i-o-technologies/burst-buffer/
- On the Role of Burst Buffers in Leadership-Class Storage Systems, April 2012, http://www.mcs.anl.gov/papers/P2070-0312.pdf
After degrees in physics and electrical engineering, a number of pre-PowerPC processor development projects, a short stint in Japan on IBM's first Japanese personal computer, a tour through the OS/400 and IBM i operating system, compiler, and cluster development, and a rather long stay in Power Systems performance that allowed him to play with architecture and performance at a lot of levels – all told about 35 years and a lot of development processes with IBM – Mark Funk entered academia to teach computer science. He is currently professor of computer science at Winona State University. And, having spent far and away most of his career in the Rochester, Minnesota area, he continues that now working in IT for a major medical institution that is also located there. Reprinted with permission of Mark Funk. Original story posted here.
Understanding SOA Technologies Web services are most often used to implement an SOA. Fortunately, and unfortunately, there are many Web service standards to choose from. To follow up on our last installment, an introduction to SOA, we will now outline the most prevalent Web standards used in SOAs. Web services are defined protocols for data exchange via the Web. This does not necessarily mean that services will be exposed to the Internet, just that these are a set of agreed-upon "Web standards" that many products support. When deciding on which protocols to use, it is often the techies' recommendations that hold the most weight. They will likely recommend the one that is easiest to implement, the most widely supported, and the most likely to work well in your environment; in that order. To have a successful SOA deployment that stands the test of time and continues to be extensible, all three factors are extremely important, interoperability being paramount. The Web Services Interoperability Organization (WS-I) is dedicated to establishing best practices for Web standards, ensuring interoperability regardless of operating system, platform, or programming language. The WS-I is responsible for defining best-practice literature such as the WS-Security and WS-Transaction specifications. These help developers and businesses ensure they are conforming to practices that everyone else is adopting, which ensures interoperability. WS-I also publishes specifications, test suites, and examples of how to deploy these protocols. In essence, WS-I is a governing body comprising many organizations, such as Microsoft and IBM, whose mission is to promote interoperable Web services. Ensure you spend some time reading their literature after this article, which should give you enough background to understand what they are talking about. Web services are dependent on protocols to ensure communication is meaningful.
The content of the data sent between services must be agreed upon before either side can make sense of what it receives. SOAP is the most widely used protocol for exchanging data. SOAP uses XML, allowing either side to decipher what was sent and to format messages sent back and forth. We will cover a few architectures in a moment, but also refer to some Web service protocols. It is important not to confuse the two, so here is a quick primer. WSDL, the Web Services Description Language, is a language used to describe a particular Web service in a formatted way, so that programs can parse it. WSDL does not itself provide any functionality in the way of Web service interaction. The protocols themselves, such as SOAP, XML-RPC, or DCOM, define exactly how messages will be passed and how a program can understand the data it's given. There are two main types of architecture used in an SOA: the RPC family of protocols, and the Representational State Transfer (REST) methods. Remote Procedure Call methods allow developers to "call" functions on a remote system the same way they are used to when programming on a single system. The drawback to RPC-like services is that people tend to implement them like the programming languages they are familiar with on a given platform. It's even easier to call a remote procedure if it's similar to a local one, after all. This logic violates the concept of "loose coupling," which essentially means that remote procedures should not be dependent on any particular operating system or programming language. SOAP is the successor to XML-RPC, which is just a Remote Procedure Call protocol that wraps its messages in XML. SOAP uses HTTP to send data, which is nice and simple, but does have some drawbacks. Regardless, most Web services these days use HTTP for communication, largely because they build upon SOAP.
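To make the contrast concrete before moving on to REST, here is a small sketch using Python's standard-library XML-RPC helpers. The `weather.getTemp` method and the resource path are invented for illustration; no real service is being called, only the request framings are built:

```python
# RPC style: the request body is a serialized procedure call wrapped
# in XML - literally "a Remote Procedure Call that wraps its messages
# in XML", as described above. (Method name is hypothetical.)
import xmlrpc.client

rpc_body = xmlrpc.client.dumps(("London",), methodname="weather.getTemp")
print(rpc_body)

# REST style: no extra messaging layer. The request is just an HTTP
# verb plus a resource URL; the "message format" is HTTP itself.
rest_request = "GET /weather/London/temperature HTTP/1.1"
print(rest_request)
```

The RPC body names a procedure and marshals its arguments; the REST request names a resource and lets the verb carry the intent.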
The Representational State Transfer (REST) method fundamentally differs from remote procedure calls because of the level at which it operates. A REST call looks just like any other Web request via HTTP, unlike RPC calls, which look like standard function calls. The focus of REST is to operate on stateful resources rather than individual messages, which results in a more standard and widely understood method of interacting, like HTTP itself. REST handles passing blocks of simple data, where RPC passes complex procedures. RESTful services often use SOAP, but this is not required, as REST is just a method of interacting, not a protocol itself. REST does not require any additional messaging layers, like SOAP, but being able to use SOAP allows for quicker adoption and development times. To REST, or RPC The question of whether or not to use REST is certainly a good one. It is likely to be the method of the future, but your SOA needs to interoperate with every piece of software you currently use. REST adoption has been slow, largely because of Web server support. While a REST system can use WSDL to describe a SOAP message over HTTP, there just isn't enough support to truly use it. Apache, for example, does not even support the methods required to use REST without installing an add-on module. Other standards that are not part of the Web services family do exist, but as you may expect they are not widely supported. Jini, WCF, and CORBA are a few examples, and when a vendor approaches you with a product that only works with one of these technologies, run, don't walk, the other way. Web services are widely supported these days, and adoption is only growing. SOA itself is said to be new, unstable, and risky, but these risks are largely mitigated when you choose an appropriate Web services standard that is widely supported.
In the end, sticking with good old SOAP on top of some type of RPC-like system is the only viable mechanism for building an SOA with Web services these days. If you do, the chances of vendor lock-in are dramatically reduced.
Clay Calvert, the director of cybersecurity for MetroStar Systems, has a strategy for banking online designed to increase its security. MetroStar is a consultancy that has worked with government agencies – from the Federal Reserve Bank of Philadelphia to the FBI – to create systems that protect highly sensitive data from cyber attacks. Calvert banks online, but with one caveat: he only does it on his phone or tablet. At face value, this seems counterintuitive; aren't public networks easier to hack than a home or office Internet connection? But you are more of a problem than the network, according to Calvert. "The technology to defend [systems] has increased a lot. In fact, it's usually the human element that gets foiled." For example, security breaches are much more likely to happen when a human clicks a seemingly innocuous link or responds to a phony email. Cybersecurity systems operated by companies have been beefed up significantly since the beginning of the Internet age. Calvert will bank online via a tablet or smartphone because those devices generally require consumers to download any software from an app store. Those apps also ask your permission to access and interact with other data on your system.
The Linux kernel is a prominent example of a cooperative open source software project, relying on contributions from thousands of programmers worldwide. Linus Torvalds introduced the Linux kernel in 1991, and it serves as the basic platform for systems ranging from large servers to consumer electronic devices. Cisco leads the InfiniBand initiative, defining a complete Remote Direct Memory Access (RDMA) driver stack for Linux. Improving on work that began at TopSpin (acquired by Cisco in 2005), Cisco engineers have become the primary developers of RDMA for Linux, delivering more extensive support than is available for other operating systems. Cisco principal engineer Roland Dreier has contributed 0.5 percent of the changes to the Linux kernel, collaborating with more than 150 developers. Cisco and Linux Users Both Benefit Cisco is committed to the growth of Linux, and dedicates expertise and financial resources to help the community thrive and expand into new technology areas. This is consistent with the broader Cisco commitment to the open source community, fostering contributions that benefit the industry and consumers. The acquisition of IronPort solidified Cisco leadership in the information security market and enhanced the existing Cisco culture of open source contributions. Learn more about how Cisco is strengthening the open source community with extensive givebacks.
What is it? XSL (Extensible Stylesheet Language) is a way of transforming and formatting XML documents. Without a stylesheet, a processor would not know how to render the content of an XML document except as an undifferentiated stream of characters, according to the Worldwide Web Consortium (W3C). Cascading Style Sheets (CSS) can describe how XML documents should be displayed, although CSS is primarily intended for HTML. XSL is purpose-designed for XML and is far more sophisticated. It can, for example, be used to transform XML data into HTML/CSS documents. Far from replacing CSS, XSL builds upon and complements it. The two languages can be used together, and both use the same underlying formatting model, so designers have access to the same formatting features in both languages. Where did it originate? XSL began as an initiative to bring publishing functionality to XML. The working group included representatives from IBM, Microsoft and the University of Edinburgh. As well as CSS, XSL's heritage includes the ISO-standard Document Style Semantics and Specification Language (DSSSL). XSL became a W3C recommendation in 2001. What's it for? The XSL specification is in two parts: a language for transforming XML documents - XSLT - and an XML vocabulary for specifying formatting semantics - XSL Formatting Objects (XSL-FO). One use of XSL is to define how an XML file should be displayed by transforming it into a format recognisable to a browser, such as HTML. Each XML element is transformed into an HTML element. However, XSL does far more than simple formatting: it can also manipulate, evaluate, add or remove elements, and reassemble the information in the XML source document. What makes it special? CSS was designed for the needs of browsers and to be easy for browser manufacturers to implement.
XSL is a more complex proposition, and for this reason browser suppliers - Microsoft with Internet Explorer 5, for example - have not always kept up. How difficult is it to master? XSL should be an easy progression for people with XML skills, as it uses XML syntax. But it may be more challenging for people coming from a C or Java programming background. Where is it used? As well as transforming web development, XSL was intended from the outset to be used by print publishers. It handles all modern (and some ancient) alphabets, including Braille. What systems does it run on? XSL is supplier- and platform-neutral, but some implementations are more neutral than others. XSL-supporting browsers include Firefox, Mozilla, and Netscape. What's coming up? The W3C's XSL Working Group has started work on version 2.0 of XSL-FO. There are many free XSL tutorials. Try, for example, the W3C site or the Cover Pages. Many other sites deal in detail with the day-to-day problems of working with XSL, or explore new ways of using it. IBM's developerWorks is one such site, and publisher O'Reilly and Associates has a daunting array of articles on the subject, as well as XSL books. Rates of pay XSL is used with all mainstream development skills - Active Server Pages, Visual Basic, Java, Perl and other scripting languages. Roles range from web designers to consultants in City firms. The range of wages varies accordingly.
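To make the XSLT half of the specification concrete, here is a minimal, hypothetical stylesheet - the catalogue/book/title element names are invented for illustration - that transforms a simple XML document into an HTML list, each XML element mapped to an HTML element as described earlier:

```xml
<!-- A minimal, hypothetical stylesheet: turns each <book>'s <title>
     in a <catalogue> document into an HTML list item. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/catalogue">
    <html>
      <body>
        <ul>
          <xsl:for-each select="book">
            <li><xsl:value-of select="title"/></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Applied to `<catalogue><book><title>XSLT Basics</title></book></catalogue>`, an XSLT processor would emit an HTML page containing a single `<li>XSLT Basics</li>` item. The same `xsl:template`/`xsl:for-each` machinery is what lets XSL reorder, filter, and reassemble the source document, not merely style it.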
Copy wide characters from one buffer to another

#include <wchar.h>
wchar_t * wmemmove( wchar_t * ws1, const wchar_t * ws2, size_t n );

- ws1: A pointer to where you want the function to copy the data.
- ws2: A pointer to the buffer that you want to copy data from.
- n: The number of wide characters to copy.

Use the -l c option to qcc to link against this library. This library is usually included automatically.

The wmemmove() function copies n wide characters from the buffer pointed to by ws2 to the buffer pointed to by ws1. This function copies overlapping regions safely. The wmemmove() function is locale-independent and treats all wchar_t values identically, even if they're null or invalid characters. Use wmemcpy() for greater speed when copying buffers that don't overlap.

Returns: A pointer to the destination buffer (i.e. the same pointer as ws1).

Last modified: 2014-06-24
Lin S. and Huangpu J.J. (Fujian Agriculture and Forestry University; Fujian Provincial Key Laboratory of Agroecological Processing and Safety Monitoring) and 5 more authors. Pakistan Journal of Botany | Year: 2015

Pseudostellaria heterophylla is an important medicinal plant in China. However, cultivation of P. heterophylla using consecutive monoculture results in significant reductions in yield and quality. In this study, terminal-restriction fragment length polymorphism (T-RFLP) analysis and measurement of soil enzyme activities were used to investigate the regulation of soil micro-ecology to identify ways to overcome the negative effects of P. heterophylla consecutive monoculture. T-RFLP analysis showed that rice/P. heterophylla (RP) and bean/P. heterophylla (BP) crop rotation systems increased the number and diversity of microbial groups in P. heterophylla rhizosphere soil. In particular, the RP and BP crop rotations increased the number and abundance of beneficial bacterial species compared with two-year consecutive monoculture of P. heterophylla. The presence of these beneficial bacteria was positively correlated with soil enzyme activities, which increased in rhizosphere soils of the RP and BP crop rotation systems. The results indicated that crop rotation systems could increase activities of key soil enzymes and beneficial microbial groups and improve soil health. This study could provide a theoretical basis to resolve the problems associated with P. heterophylla consecutive monoculture. © 2015, Pakistan Botanical Society. All rights reserved.
HOW TO: Create A DNS Forward Lookup Zone

This article explains the steps to create a DNS Forward Lookup Zone and a DNS Host Record as an example scenario. You may use different names where appropriate.

Domain Name Server (DNS)

Create a new Forward Lookup Zone

To create a new forward lookup zone:
1. Start the DNS snap-in. To do this, click Start, point to Administrative Tools, and then click DNS.
2. Click the DNS Server object for your server in the left pane of the console, and then expand the server object to expand the tree.
3. Right-click Forward Lookup Zones and then click New Zone. The New Zone Wizard starts.
4. When the New Zone Wizard appears, click Next to continue.
5. Select "Primary Zone" to create a copy of the zone that can be updated directly on this server, and click Next to continue. Tip: You can select the check box at the bottom of the New Zone Wizard on a Domain Controller (DC) to store the zone information in Active Directory (AD).
6. From the "Active Directory Zone Replication Scope" dialog box, select one of the radio button options or accept the default of "To all domain controllers in the Active Directory domain Name.Com" and click Next.
7. From the "Forward or Reverse Lookup Zone" dialog box, select Forward lookup zone and click Next.
8. In the Zone Name box, type the name of the zone (for example, type newzone.com), and then click Next. NOTE: This name is typically the same as the DNS suffix of the host computers for which you want to create the zone.
9. From the "Dynamic Update" dialog box, select one of the radio button options or accept the default of "Allow only secure dynamic updates (recommended for Active Directory)" and click Next to complete the task.
10. Click Finish.
11. The new zone is listed under Forward Lookup Zones in the DNS tree.

Create a Host or "A" record

To create a host or "A" record:
1. Start the DNS snap-in.
2. Click the DNS Server object for your server in the left pane of the console, and then expand the server object to expand the tree.
3. Expand Forward Lookup Zones.
4. Under Forward Lookup Zones, right-click the zone that you want (for example, newzone.com), and then click New Host (A).
5. In the Name (uses parent domain name if blank) box, type the name of the host that you want to add. For example, if you want to add a host record for a Web server, type mysite.
6. In the IP address box, type the IP address of the host that you want to add. For example, type 192.168.1.161.
7. Select the Create associated pointer (PTR) record check box, and then click Add Host. The host record mysite.newzone.com was successfully created.
8. When you are finished adding hosts, click Done.
9. Run ipconfig /flushdns at the command prompt in order for the sites to be accessible.
10. When using registry-based configuration, changes are applied to DNS servers only when the DNS Server service is re-initialized, so restart the DNS Server service.
11. Now you can check the connectivity using the ping command at the command prompt. For example: ping mysite.newzone.com
Are IP Addresses Traceable?

Q: Can someone really tell who you are by using your IP address?

A: To fully answer this question, let's start by determining how your IP address is configured. An Internet Protocol (IP) address is a unique address used to identify computers on a network, similar to a street address for a house. There are two types of IP configurations: static and dynamic. A static IP address is manually assigned to a computer by an administrator and typically does not change. A dynamic IP address is generally a temporary address that is typically assigned randomly or by a server. If you are unsure whether your computer uses a static or dynamic IP address, most likely it is dynamic. Dynamic IP addresses usually are assigned on local area networks. Most Internet service providers (ISPs) will only assign a static IP address for specific purposes or needs.

Because most IP addresses are dynamic and assigned by your ISP, it would be difficult for anyone to trace an IP to a specific computer and find out information about you. Yes, it can be done, but your common Internet user will not have access to pull that information. Most dynamic IP addresses will be traced to your ISP and not directly to you. To obtain the actual name and address of the user of an IP address would require your ISP to look up this information, which will typically require a court order. In many situations, the only information you can obtain from an IP address would be the ISP the user is connected with and an approximate physical location, which is most likely the location of your ISP. If you are connecting to the Internet from work, your IP address can be easily routed to the company network you are connected to.

There are many Web sites you can use as an IP locator. They appear to be somewhat accurate, but usually will locate the area where your ISP is located, not where your computer is physically located.
The large ISPs that carry a majority of the Internet's traffic will locate all IP addresses owned by that ISP to the same city, making it geographically inaccurate. They also can be unreliable if your IP address has not been added to the IP locator's database.

Another way of finding information about the user of an IP is to run a trace route. Again, this will only provide you general information about the location of a user, which could only be the location of the ISP. This can be accomplished on a Windows computer by opening your command prompt (go to the Start menu, click Run, type "cmd" and press OK) and typing "tracert <IP address>" (replacing <IP address> with the actual IP address you are looking up). When you run a trace route, you will notice each jump that it takes to route your trace to the destination IP address. Each jump displayed in the trace route typically is a router or ISP that the request has to traverse during its route to the destination IP address. More than likely, the last jump displayed in the trace route will be either the final router or the ISP that the destination computer is using to connect to the Internet.

It is not technically possible to hide an IP address from a network. Hiding an IP address would mean your computer is no longer connected to the Internet. However, one way for you to anonymously connect to the Internet is by using an anonymous proxy server. An anonymous proxy server works as an intermediary between your computer and the Internet, making requests and receiving responses on your behalf. Your computer makes a request to the anonymous proxy server, which in turn makes the request to the Internet and relays the response back to your computer. The purpose of using this method is that all Internet requests will route to the IP address of the proxy server and not directly to your computer's IP address. Using an anonymous proxy server only requires a configuration setting in your Web browser.
There are many free, anonymous proxy servers on the Internet, but these may cause your Web surfing to suffer performance loss and bandwidth limitations. Finally, always make sure your Windows software is consistently updated. Andrew Bonslater, MCTS, MCSD, MCAD, is a solutions developer for mid- to large-sized organizations. He is a thought leader with Crowe Chizek in Chicago. You can reach him at editor (at) certmag (dot) com.
A Global Non-Profit Uses Data to Protect Animals

By Samuel Greengard | Posted 2016-03-02

Dedicated to protecting animals, Conservation International sorts data, identifies patterns and generates statistical models to obtain animal population data.

One of the biggest challenges associated with protecting endangered animals in tropical rainforests and elsewhere is documenting the numbers and gaining accurate information about conditions. Without automated cameras to capture images, there's no way to know what's going on. Yet, sorting through hundreds of thousands of images—sometimes even millions—is a next-to-impossible task. "The challenges and resource demands can be enormous," reports Jorge Ahumada, executive director of the Team Network at Conservation International, headquartered in Arlington, Va.

The 27-year-old organization focuses on 250 species of mammals and birds in 15 countries in tropical areas. These animals range from Golden Cats in Uganda to Asian elephants in Cambodia. "We set up thousands of cameras in order to monitor conditions and capture data about what is taking place," Ahumada says. Typically, the cameras are used at a location for about 30 days, and they snap between 20,000 and 40,000 images. Afterward, personnel in the field retrieve memory cards and upload the data. The software extracts exchangeable image file format (EXIF) data that produces time-stamped records for analysis. The system generates a 1 or 0 based on whether a species is captured in an image on a particular day. The organization currently holds about 2.5 million images, and the number grows daily. Overall, this equals upward of 4 terabytes of data. "The technical challenge is analyzing the huge volume of data collected by all the cameras and sensors and obtaining accurate animal counts," Ahumada explains. "In the past, we faced a huge hurdle trying to process all the data with a limited IT infrastructure."
In fact, in many cases, the staff had to tackle the task manually.

Taking Image and Data Processing to a Higher Level

To resolve these challenges, Conservation International turned to Hewlett-Packard, now HP Enterprise (HPE), to take its image and data processing capabilities to a higher level. The environmental organization uses a custom software solution, the Wildlife Picture Index Analytic System, to sort through all the data, identify patterns and generate statistical models. It is built on top of a Vertica Systems analytic database. The objective is to use statistical modeling to generate animal population data that's based on the "geometric mean" of specific species. (The "geometric mean" of n numbers is the nth root of their product: the square root for two numbers, the cube root for three, and so on.) "It's an approach heavily focused on data science," Ahumada says.

The result, says Eric Fegraus, senior director of technology and external relations for the Team Network at Conservation International, is a 30x improvement in processing speed. "We are able to sort through the data and get to meaningful results much faster," he says. Currently, the organization uses a single dashboard to conduct simulations and explore visualizations in order to better understand trends and conditions. It also aids policymakers and others in developing and managing programs.

The data-driven approach has produced valuable results. "In the past, there had been a lot of debate among conservation biologists and others about whether there are actual results in protected areas," Ahumada says. "Because we use a science-based approach, the data is critical for demonstrating that protected areas actually work." The organization is now looking to incorporate image-recognition software to further advance the capabilities of the technology. "We now have an IT infrastructure that allows us to put resources to use far more effectively and accomplish our mission," Fegraus reports.
Recently, cloud computing has attracted considerable attention. Cloud computing is becoming one of the most important computing and service paradigm. Cloud computing employs a group of interconnected computers which are dynamically provisioned and serve as one or more unified computing resources. Customers are able to access applications and data from a cloud at any place and at any time. Cloud computing appears to be a single point of access for all the computing needs of users. Cloud computing technology has already developed much and is continuously proceeding on the path of development. Cloud service providers are actively seeking to develop more robust cloud computing platforms for consumers and enterprises to facilitate the on demand access regardless of time and location. Some of the available cloud based technologies are providing virtualized computing environments which host different kinds of Linux based services. Another example is which provides a centralized storage for applications and data, so users could access all the information through a web based live desktop. Internet Data Center (IDC) is a common form to host cloud computing. An IDC usually deploys hundreds or thousands of blade servers, densely packed to maximize the space utilization. Running services in consolidated servers in IDCs provides customers an alternative to running their software or operating their computer services in house. The major benefits of IDCs include the usage of economies of scale to amortize the cost of ownership and the cost of system maintenance over a large number of machines. With the rapid growth of IDCs in both quantity and scale, the energy consumed by IDCs, directly related to the number of hosted servers and their workload, has enormously increased over the past ten years. The annual worldwide capital expenditure on enterprise power consumption has exceeded billions of dollars, and sometimes has even surpassed spending on new server hardware. 
The rated power consumptions of servers have increased by ten times over the past ten years. The power consumption of data centers has huge impacts on the environment. The surging demand has created an urgent need to design and deploy energy-efficient Internet data centers. Information scientists are constantly trying to find better solutions to reduce power consumption by data centers. Many efforts have been made to improve the energy efficiency of IDCs, including network power management, Chip Multiprocessing (CMP) energy efficiency, IDC power capping, storage power management solutions, etc. Among all these approaches, Virtual Machine (VM) technology has emerged as a focus of research and deployment.

Virtual Machine (VM) technology (such as Xen, VMware, Microsoft Virtual Server, and the new Microsoft Hyper-V technology) enables multiple OS environments to coexist on the same physical computer, in strong isolation from each other. VMs share the conventional hardware in a secure manner with excellent resource management capacity, while each VM hosts its own operating system and applications. Hence, the VM platform facilitates server consolidation and co-located hosting facilities. Virtual machine migration, which is used to transfer a VM across physical computers, has served as a main approach to achieving better energy efficiency in IDCs, because server consolidation via VM migrations allows more computers to be turned off. Generally, there are two varieties: regular migration and live migration. The former moves a VM from one host to another by pausing the original server, copying its memory contents, and then resuming it on the destination. The latter performs the same logical functionality but without the need to pause the server domain for the transition. In general, when performing live migrations, the domain continues its usual activities and, from the user's perspective, the migration is imperceptible.
Using VM and VM migration technology helps to efficiently manage workload consolidation, and therefore improves total IDC power efficiency. For cloud computing platforms, both power consumption and application performance are important concerns. The Green Cloud architecture is used by the cloud computing industry as an effective method to reduce server power consumption while achieving the required performance using VM technologies.

Reliability, flexibility, and ease of management are the essential features of Virtual Machine (VM) technology. Due to these features, VM technology has been widely applied in data center environments. Green Cloud is an IDC architecture which aims to reduce data center power consumption while at the same time guaranteeing performance from the users' perspective, leveraging live virtual machine migration technology. A Green Cloud automatically makes the scheduling decision on dynamically migrating/consolidating VMs among physical servers to meet workload requirements while saving energy, especially for performance-sensitive (such as response time-sensitive) applications. The Green Cloud architecture guarantees the real-time performance requirement while reducing the total energy consumption of the IDC. In the design of the Green Cloud architecture, several key issues are taken into consideration, including when to trigger VM migration and how to select alternative physical machines to achieve optimal VM placement. Green Cloud intelligently schedules workload migration to reduce unnecessary power consumption in the IDC, and balances performance and power in such a way that users hardly notice that their server workloads are being or have been migrated. The technology discussed above reduces hazardous impact on our planet, keeps our environment green, and at the same time saves a great deal of capital expenditure for a company that provides cloud computing services to various businesses.
These benefits are ultimately passed on to customers using cloud computing services.
Citrix loves giving back to communities. At this year’s Citrix Synergy conference, we hosted local Las Vegas middle school and high school classes to teach kids about micro-controllers and robotics. Using Citrix’s Octoblu IoT platform, a class of 5th graders was able to build an Intel Arduino-powered push-to-talk video walkie talkie in less than 90 minutes! At first, teaching an Internet of Things (IoT) class to thirty 5th graders sounded a little daunting; however, we built the curriculum in four phases that demonstrated success at each step of the process. This short course taught the basics in hardware and software in small (but fun) steps. Step 1: We taught the kids how to attach an LED to the Arduino micro-controller and blink it at any speed. Step 2: Next, we taught the kids how to connect a button to the LED light to turn it on and off. As you can tell from the photo below, the kids were enjoying their accomplishments! Step 3: Next, we shifted to software and taught the kids how to activate the laptop’s camera via WebRTC to snap photos of the person pressing the button. Step 4: The final part of this exercise was networking all of the chats into a single video chat room using Octoblu. Mission accomplished! The room full of 5th graders was buzzing with entertainment while chatting with their friends by pressing micro-controller buttons to communicate with one another! We would like to give a giant THANK YOU to Intel for donating the Arduino 101 (Curie-powered) micro-controllers and Dell for loaning us the Chromebooks to use during this event. We would also like to thank Luis Montes for flying to Las Vegas to teach this class and all of the CTPs and volunteers for helping us make this event a huge success! Last (but not least), a warm thank you to Jo Moskowitz for making this event possible!
After the problem-plagued 2000 presidential election, the Help America Vote Act of 2002 (HAVA) aimed to re-establish America's faith in voting procedures. However, some say HAVA did not do enough to restore voter confidence. A recently introduced bill aims to finish what HAVA started, but critics say the legislation will halt the progress HAVA intends to achieve. HAVA's mandate to replace outdated voting equipment caused a shift toward direct record electronic (DRE) voting machines, such as touchscreen machines, where voters directly cast their votes into electronic memory. Some say there is no way to ensure the voting machines are bug-free or have not been tampered with. "The nightmare scenario is that the voter votes, confirms the vote, and then the vote is recorded internally in the machine differently than the voter intended," said Stanford computer science Professor David Dill. "I don't think there is any technological basis for somebody to assure us that can't happen." HAVA required that voting machines produce a paper record of each vote and that each voter be able to confirm their vote before casting it. But HAVA does not specifically require that the voter verify the paper record. "Right now the voter verified [requirement] is one thing; the paper record might represent something else. So there is an audit gap," said Dill. If votes are changed or lost due to system errors or tampering, a printout at the end of the day will reflect erroneous vote tallies, he explained. "Because of ballot secrecy, once the voter leaves the voting booth, there is no one who can make sure that voter's vote is consistent with what was actually recorded inside the machine. The voter can't do it. Election officials can't do it. The vendor can't even do it." New legislation -- H.R. 2239, the Voter Confidence and Increased Accessibility Act of 2003 -- would require a voter-verified paper trail. The measure was introduced in the U.S. 
House of Representatives in May to amend HAVA, but some contend the bill causes more problems than it would solve. Besides the paper-trail requirement, the bill mandates surprise recounts in 0.5 percent of jurisdictions and a verification system that separates the vote generation function from that of vote casting for those with visual impairments. The measure also would ban wireless technology in voting machines and require source code be made available for inspection by any citizen. The recent press surrounding Johns Hopkins University and Rice University researchers who claim Diebold Election Systems' code may have fatal security flaws could bring this point to life because it is unclear whether the code examined by researchers was ever used in an election. If vendors were required to expose their source code, voters wouldn't be left guessing about its quality, but exposing the code without precautions could prove disastrous, said Dill. "The code is probably full of security holes, because the companies are depending on secrecy so heavily." But security for electronic voting devices shouldn't depend on secrecy of their source code, added Dill, regardless of the legislation. Even if the design is kept secret, he argued, the system should be secure even if the design were exposed. At least one voting system manufacturer said exposing source code would not trigger security concerns. John Groh, senior vice president of strategic alliances for Election Systems & Software (ES&S), said his firm would make open source code as secure as its current system if the law were to pass. However, there may be confusion over just how far the bill's source code requirement could stretch, and broadly distributing the source code could be an invitation for problems, according to Groh. "You do not want that in the election industry," he said. "Where people can take something and modify it. 
That is exactly where somebody would take a system, change how the source code and all of the operational functions work, and make it not count ballots correctly." Groh said the code for electronic voting devices is already subject to review by an independent testing authority, but exposing the source code to anyone who wants to examine it would threaten industry competition. "If you open that up and allow anybody to look at it, the competitors are going to be able to look at each other's and take the good from somebody else's," he said. He also said that to maintain the highest security, the code should only be available to those "directly responsible for testing and administrating elections."

But many computer scientists say the current certification process for electronic voting systems is too secretive. They contend citizens should be able to verify that the equipment is doing what it is supposed to do. "Publicly disclosing the design allows more people to inspect it and find problems," said Dill. On a Web page devoted to frequently asked questions about DRE voting systems, Dill and fellow computer scientists Rebecca Mercuri, Peter Neumann and Dan Wallach give the example of how "easter eggs" (hidden features inserted by programmers that are triggered by arcane combinations of commands and keystrokes) routinely clear vendors' quality assurance tests. Dill said a voter-verified audit trail is still necessary even if source code were exposed for inspection. "Security holes are sometimes still discovered the hard way," he said. "Open source is not a panacea, because it is practically impossible with conventional computer technology to make sure the open source software is what is really running on the computer."

Progress or Hindrance?
Others say requiring a voter-verified paper trail will halt implementation of DRE machines, which have numerous advantages, such as allowing disabled citizens to vote autonomously and facilitating voting in numerous languages, including some that lack written form. Critics argue that the audit requirement would add cost and weight to the machines; necessitate additional supplies, training and storage space; and introduce one more piece of technology that can malfunction and tie up the process.

"If you require something that hasn't been successfully implemented yet -- that is contemporaneous paper replica -- counties are going to have no choice but to stay with inferior paper-based systems," said Ohio State University law Professor Dan Tokaji, who was involved in California voting reform as a former attorney with the American Civil Liberties Union of Southern California.

Although HAVA requires a voting machine in each precinct that allows those with disabilities to vote independently, Tokaji said if H.R. 2239 were to pass, counties could revert to less accessible systems, such as optical scans. To avoid going to dual-system elections, Tokaji explained, counties could argue that they don't have to use DREs to accommodate those with disabilities because of the language of the statute. "The question of whether a contemporaneous paper replica will be required is passing a cloud over voting modernization. Counties are understandably afraid to take any action because they're afraid either Congress or the state may mandate something that turns out to be completely unworkable," Tokaji said.

Uncertainty about standards also has influenced voting machine vendors. "We have had to look at the architecture and make certain that if these types of bills get passed, we can shift gears and make the changes necessary to accommodate them," said ES&S' Groh.
Vendors have to be flexible to keep up with demand anyway, said Groh, but with the proposed changes, they have kept a close eye on legislation to stay ahead of possible requirements. The fact that industry players are offering solutions in advance of legislation means the paper trail may find its way into use even if the bill doesn't pass. The major vendors are developing machines capable of a voter-verified paper trail, so jurisdictions can opt for its implementation even if the law does not require it.

A Paperless Compromise

Paperless technology capable of achieving an audit trail -- it would capture an image of the screen as the vote is cast and save it separately from tabulated votes -- is being developed. Although the electronic version may not satisfy all audit trail advocates, the option would have advantages aside from eliminating the need for additional supplies and storage. The electronic audit trail could easily keep track of many languages, said Groh, which on paper could be problematic. "Are you going to require the printer to print in all those languages?" he said. "And then if it does print in all those languages, later when somebody goes to look at those ballots and interpret, you're going to need somebody that can interpret every language ballot that's there." In California, the Ad Hoc Touch Screen Task Force created a report for Secretary of State Kevin Shelley that put forth an electronic audit trail as a possible solution, but acknowledged it would take time to develop, certify and implement such systems.

Hurry up and Wait

One county has decided to wait for the legislation and technology ruckus to pass. The Sacramento County Board of Supervisors cancelled an RFP for touchscreen voting machines. Former Sacramento County Registrar of Voters Ernest Hawkins -- who retired Aug. 1 -- recommended the RFP cancellation to hold out for better equipment and to await resolution of the uncertainty over legislation.
Hawkins said even though vendors had developed or were in the process of developing better technology, the wording of the RFP would have required them to buy inferior equipment because the state had not yet certified the new equipment. "If we'd have gone forward and made a recommendation to the board and actually bought equipment at that time, the vendors would have been obligated to install equipment here that was not state of the art," Hawkins said. Uncertainty over federal legislation was also part of his decision to recommend cancellation. Hawkins said part of what HAVA produced was a new organization to develop standards, and the process of appointing its members is not yet complete. "We're probably not going to have standards for a year, so we were a little bit reluctant knowing that we were buying equipment that might not even meet the specs," said Hawkins. Hawkins said because HAVA hasn't been fully implemented and put to the test, passage of H.R. 2239 is unlikely at this point. "I think the tendency is to hold off doing anything until that organization is in place, until we have the presidential elections next year and we see how this new Help America Vote Act works, and whether or not there is more reform that is needed at the federal level before legislation will get serious hearing, review and passage," he said. "We're going through a period of hurry up and wait," Hawkins added. "Waiting to get everything fully into place before much of anything more will happen in my opinion."
<urn:uuid:a45721c4-dd89-4e39-9a14-28590dcdab33>
CC-MAIN-2017-04
http://www.govtech.com/policy-management/Whats-Next-for-Electronic-Voting.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969555
2,161
2.625
3
There is plenty of fear, uncertainty and doubt out there over the upcoming federal ban on incandescent light bulbs. The very thought of losing that pear-shaped giver of warm, yellow light drove Europeans to hoard Edison's invention as the European Union's Sept. 1 deadline on incandescent lamps approached. China's ban on incandescent lamps that use 100 watts or more of power starts Oct. 1. The ban expands to cover any light bulbs that use more than 60 watts in 2014 and to 15 watts in 2016. In the U.S., the Energy Independence and Security Act (EISA) of 2007 is not technically a ban. It's an energy efficiency standard that requires all screw-in light bulbs (also known as lamps) to use 30% less power, beginning with 100-watt bulbs this year. The end standard requires bulbs to use 65% less energy by 2020. If a manufacturer could produce an Edison incandescent bulb that used 30% less power today, the maker could sell it. Since manufacturers can't make such a bulb, the EISA essentially becomes a ban on inefficient lamps. When suppliers run out of stock, consumers and businesses will have to replace traditional bulbs with more energy-efficient alternatives. They will have three choices: halogen incandescent bulbs, compact fluorescent lamps (CFLs) or light-emitting diodes (LEDs). In the U.S., the EISA standard requirement for 100-watt bulbs began last January. The ban on 75-watt bulbs goes into effect Jan. 1, 2013. The deadline for the most popular bulbs, the 60-watt and 40-watt lamps, is Jan. 1, 2014, and will have the greatest impact on consumers, according to Philip Smallwood, senior lighting analyst at IMS Research. When the ban on 60-watt and 40-watt lamps begins in 2014, sales of incandescent bulbs are expected to drop off a cliff. For example, in 2011, about 1.1 billion bulbs were sold in North America (Canada also has an Edison bulb phaseout plan, but it has been put on hold). In 2014, North American sales are expected to drop to 200 million, according to IMS Research. 
In 2014, what's going to account for 200 million bulbs moving across checkout counters? There will be Canadian sales, and then there will be 22 types of traditional incandescent lamps that are exempt from the EISA, including appliance bulbs, heavy-duty bulbs, colored lights and three-way lamps. The EISA was signed into law by President George W. Bush. However, conservatives, ranging from radio commentator Rush Limbaugh to U.S. Rep. Michele Bachmann, have criticized the EISA on the basis that it's government intrusion into U.S. homes and a restriction on free choice. The first phase of EISA was to have begun on January 1, 2012, but Republican legislators made amendments in an appropriations bill that prohibited the Department of Energy (DOE) from spending money to enforce the rules in the 2012 and 2013 fiscal years. The sponsor of the legislation was Rep. Michael Burgess (R-Texas). Burgess also fought against EISA in 2007, when it was originally passed. "It's something the market place should determine. Let consumers make the choice. There was no reason for the government to make that choice for them," Burgess said in an interview with Computerworld. "It should be up to me if I want to make the decision if I want to run a light bulb that uses more kilowatt hours, so I can see better with my old, failing eyes, in my favorite chair. I should be able to do that," he said. "I get the fact that I work in a federal building and I get the fact that they get to determine the type of light I work under all day long, but at night time, when I go home and read ..., I should be able to read under whatever light I want." The DOE would not speak on the record about how that lack of funding would affect adherence to the standards. Even without funding for enforcement, manufacturers are honoring the standards and discontinuing their production of incandescent light bulbs, according to Smallwood. 
Some consumers also echo Burgess' concerns, namely that they'll have to replace the warm luminescence of a traditional incandescent bulb with the harsh, white light of LED lamps. But not all LED lamps emit the cold, bright light. "LEDs don't necessarily have white light, that is more of a concern with CFLs. There are several LED products that do produce warm light," Smallwood said. Another issue with CFL lamps is that they contain mercury, "which some people are concerned about," he added. CFL lamps last from 5,000 to 8,000 hours, well beyond the typical 1,000-hour lifespan of an incandescent bulb. One drawback of CFL lamps is that they die more quickly in environments where they're frequently turned on and off. "You have to leave them on at least 15 minutes in order not to kill the light," Smallwood said. By comparison, LEDs last well beyond all other lamps today, but the pricing is exorbitant. The LED equivalent to a 60-watt incandescent bulb costs $25 or about 20 times more, according to Smallwood. But the total cost of ownership is vastly less. For example, an LED lamp has an average lifespan of 25,000 hours, so an LED could last 25 years or more. By comparison, new energy-efficient halogen lamps produce the same type of yellow light as an incandescent lamp, and they look the same, but cost from $1.50 to $2 more per lamp. They also only last about 50% longer than current bulbs or about 1,500 hours versus an average 1,000 hours for incandescent bulbs. Lamp lifespan also creates a significant issue for manufacturers and retailers. While energy efficient lamps will have skyrocketing sales over the next few years, their ability to last 25 times longer will mean sales will drop off precipitously, Smallwood said. "If you buy an LED lamp and it's going to last you 20 years, that replacement market disappears," Smallwood said. One learning curve for consumers will be a change to the metric system after using the U.S. system of unit measurements. 
For LEDs, brightness is measured in lumens, not watts. Lumens are listed on an LED lamp's packaging. More lumens mean brighter light. To replace a 60-watt traditional bulb, consumers should look for bulbs that provide about 800 lumens, according to the DOE. To replace a 100-watt incandescent bulb, look for a bulb that emits 1600 lumens; for a 75-watt bulb, the equivalent would be an 1100-lumen LED; and for a 40-watt replacement, look for an LED equivalent that has 450 lumens. One advantage to lumens is that consumers can get a wider range of brightness. Instead of having to choose between a 100-watt or 75-watt lamp, bulbs using lumens run the gamut, offering a much finer brightness gradient. Energy savings from efficient lamps are also an advantage. An LED lamp is five times more energy efficient than an incandescent lamp. According to the DOE, the operating cost savings a consumer can achieve by switching to an energy efficient bulb is dramatic. For example, the operating cost per year for a 60-watt incandescent bulb is $4.80, a halogen incandescent bulb costs $3.50, a CFL bulb is $1.20 and an LED light is just $1. Beginning this year, on average, light bulbs sold in the U.S. will use 25% to 80% less energy as manufacturers begin flooding the market with new, compliant products. According to the DOE, upgrading 15 inefficient incandescent bulbs could save a homeowner about $50 per year. Since most of the bulbs also have longer life spans, the savings continue into the future. Nationwide, lighting accounts for about 10% of home electricity use. With new EISA standards, U.S. households in total could save nearly $6 billion in 2015. Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is email@example.com.
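The lumen equivalences and operating-cost figures above can be sketched as a small calculation. A minimal sketch: the 2-hours-per-day usage and $0.11/kWh electricity rate below are assumptions chosen so a 60-watt bulb lands near the article's $4.80-per-year figure; they are not DOE numbers.

```python
# Rough lumen equivalents for common incandescent wattages, per the DOE
# figures cited above.
LUMEN_EQUIVALENTS = {40: 450, 60: 800, 75: 1100, 100: 1600}

def annual_operating_cost(watts, hours_per_day=2, rate_per_kwh=0.11):
    """Estimate one bulb's yearly electricity cost in dollars.

    Default usage and rate are assumptions, not DOE values.
    """
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

# A 60 W incandescent vs. a ~10 W LED emitting the same ~800 lumens:
incandescent_cost = annual_operating_cost(60)  # ~$4.80/year
led_cost = annual_operating_cost(10)           # well under $1/year
```

Plugging in different rates or usage hours shifts the absolute dollars, but the roughly five-to-one ratio between incandescent and LED running costs holds.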
<urn:uuid:06213cfc-dbbf-4969-837d-730f0762b742>
CC-MAIN-2017-04
http://www.computerworld.com/article/2491457/sustainable-it/light-bulb-ban-leads-to-hoarding-in-europe--is-u-s--next-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948941
1,775
2.609375
3
To make the perfect cup of coffee, use temperature and humidity monitoring
Thursday, Apr 4th 2013

In comparison to other food items like meat, fish, dairy or produce, the need to use temperature and humidity monitoring equipment for coffee is less immediate. After all, foodborne illnesses spread through perishable food items cause millions to get sick and send thousands to the hospital every year in the United States, but coffee is far less likely to cause such harm. Still, baristas should know that, for those who rely on their morning cup of joe to get their day going, the perfect coffee can only be achieved through humidity and temperature monitoring. "[U]nderstanding coffee packaging can make your coffee a truly religious experience," master barista Giorgio Milos wrote in The Atlantic.

How to best deal with green coffee

Although most of us envision coffee as dark, hard beans or as a fine powder, coffee in its raw form is a green bean - this is what is extracted from the coffee tree at harvest time. Coffee roasters, specialty coffee houses or anyone else who might need to deal with these green beans should know proper storage practices. Writing in the Brazilian Journal of Plant Physiology, Karl Speer and Isabelle Kölling-Speer of Technische Universität Dresden's Institute of Food Chemistry noted that green Arabica and Robusta beans can contain significant amounts of fatty acids and moisture. Some strains contain as much as 30 grams of free fatty acids for every kilogram of green coffee bean. To keep the fatty acid content stable and to prevent the green beans from having spiked moisture levels, raw coffee should be stored under specific moisture and temperature conditions. Speer and Kölling-Speer found that while free fatty acid and water levels remained mostly consistent when beans were stored at 12 degrees Celsius (53.6 degrees Fahrenheit) for as long as 18 months, levels of both compounds spiked when the raw coffee was stored at 40 degrees C (104 F). 
Levels of cafestol - a compound in coffee which can give the beans a bitter flavor if found in large quantities - also spiked when stored at 40 C versus 12 C.

Properly storing roasted and ground beans

In order to bring out the coffee flavors most people associate with the plant and the drink, the beans must first be roasted. This process changes the internal chemical structure of the bean, and thus monitoring and storage needs have to change as well. According to Milos, roasting the beans dramatically boosts their carbon dioxide levels - dark roasts of beans can have as much as 10 liters of carbon dioxide for every 1 liter of coffee. To preserve these levels, roasted coffee needs to be kept away from oxygen at all costs. "Oxidation is part of staling, and it degrades quality by altering coffee's essential oils and aromatic components, ultimately creating a rancid taste akin to butter left out too long," he wrote. The effects of oxidation can also be brought on by heat and moisture, Milos wrote. The National Coffee Association recommends keeping whole and ground beans stored in airtight containers and kept in cool, dark and dry spots. Humidity monitoring is important at this stage, since excess moisture can damage beans. As a result, coffee ready for consumption should not be stored in a refrigerator or freezer unit. Milos said fresh coffee begins to lose its flavor and aromas when stored at room temperature after around 10 to 15 days. For instances when whole beans need to be kept around for longer than this duration, freezers can be useful. The NCA said whole roasted coffee beans can be frozen for up to one month so long as the containers are airtight and not used intermittently during that period. However, Real Simple reported that freezing can dramatically affect the quality of the bean and the final cup of coffee. 
"The cell structure changes, which causes a loss of the oils that give coffee its aroma and flavor," Scott McMartin, a member of the Starbucks Green Coffee Quality group, told the source.
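The monitoring practice described above amounts to checking each sensor reading against storage thresholds. A minimal sketch: the 12 °C ceiling follows the Speer/Kölling-Speer finding for green coffee (stable at 12 °C, degraded at 40 °C), while the 60% relative-humidity ceiling is an invented placeholder, not a figure from the study.

```python
# Storage thresholds for green coffee. The temperature ceiling reflects the
# study cited above; the humidity ceiling is an illustrative assumption.
GREEN_COFFEE_MAX_TEMP_C = 12.0
GREEN_COFFEE_MAX_RH_PCT = 60.0  # assumed, not from the study

def c_to_f(celsius):
    """Convert Celsius to Fahrenheit (12 C -> 53.6 F, as in the article)."""
    return celsius * 9 / 5 + 32

def check_reading(temp_c, humidity_pct):
    """Return a list of alert messages for one sensor reading."""
    alerts = []
    if temp_c > GREEN_COFFEE_MAX_TEMP_C:
        alerts.append(
            f"temperature {temp_c:.1f} C ({c_to_f(temp_c):.1f} F) "
            f"exceeds {GREEN_COFFEE_MAX_TEMP_C:.0f} C"
        )
    if humidity_pct > GREEN_COFFEE_MAX_RH_PCT:
        alerts.append(
            f"humidity {humidity_pct:.0f}% exceeds {GREEN_COFFEE_MAX_RH_PCT:.0f}%"
        )
    return alerts
```

With these thresholds, the study's "safe" condition (12 °C) raises no alerts, while its "spoiled" condition (40 °C, plus high humidity) raises two.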
<urn:uuid:9c933074-364a-4054-978c-47144ac40020>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/cold-storage/to-make-the-perfect-cup-of-coffee,-use-temperature-and-humidity-monitoring-417054
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00240-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950302
815
2.796875
3
Google Cultural Institute Commemorates U.S. Civil Rights

The Institute was established in 2010 to help preserve and promote culture online, to make important cultural material available and accessible to everyone and to digitally preserve it to educate and inspire future generations, according to the Institute. The museum's exhibits already cover a wide swath of the history of the world's cultures, as well as a huge and growing collection of art, artifacts and more from around the world. The Google Cultural Institute hosts marvelous online collections of artwork and cultural treasures that are in hundreds of museums, cultural institutions and archives around the world, according to the group. Google created the organization to help show the collections virtually to people around the globe. The Google Cultural Institute includes the Art Project, with some 40,000 images of world-renowned and community-based artwork from more than 40 countries; the World Wonders Project, which includes images of modern and ancient heritage sites from around the globe using Street View, 3D modeling and other Google technologies; and archive exhibitions featuring massive collections of information from institutions and museums the world over, much of which cannot always be put on public display, according to Google. In March, the Institute launched an online "Women in Culture" project that tells the stories of known and unknown women who have impacted our world as part of the company's commemoration of International Women's Day on March 8. The fascinating online feature included 18 new exhibits that showcase detailed stories about amazing women throughout our history. The online institute features a collection of more than 57,000 pieces of art. In November 2013, the Google Cultural Museum showcased the five handwritten versions of Abraham Lincoln's Gettysburg Address online in commemoration of the 150th anniversary of his famous and moving 272-word speech. 
The five versions were placed online in a special gallery for viewers to read and review. Five different copies of the Gettysburg Address were written by Lincoln and given to five different people, each named for the person to whom they were given, according to AbrahamLincolnOnline.org. In December 2013, the Google Cultural Institute gained more artwork for its online collections, including a new assortment of pieces that challenge the visual perceptions of viewers.
<urn:uuid:9ab9f4cb-a6a6-4ef4-83ab-e81a8ce8c845>
CC-MAIN-2017-04
http://www.eweek.com/cloud/google-cultural-institute-commemorates-u.s.-civil-rights.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00360-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955516
439
3.40625
3
What Bad Bots Do
By Kim S. Nash | Posted 2006-04-06

In moments, hackers with bot code can break into vulnerable computers, turn them into zombies, steal information and spread the infection. While you scramble to secure your network--and the vital data on it--botmasters sell access to your hacked machines.

Bots aren't always bad. Using C++, Assembler or other low-level languages that produce compact code, a programmer can create a bot to do mundane tasks online—maybe check stock quotes or compare prices at e-commerce sites. Search company Google uses its Googlebot, for example, to collect and index documents on the Web. In the hands of hackers, however, bots make trouble. Ancheta, who an uncle and cousin say is self-taught on computers, didn't write his own bot code from scratch. According to his plea, he modified Rxbot, a bot strain well known among hackers and available for download at several Web sites. Most botmasters, in fact, rely on pre-written code refined over time by other hackers, says Dmitri Alperovitch, a research scientist at CipherTrust. This is akin to how the legitimate open-source community works, Alperovitch says, where many people pool knowledge to improve a product, "but [it's] not as public." Stealing another page from the mainstream computing world, botmasters prefer modular systems, where instructions for different tasks can be plugged into or removed from bot code depending on what the user wants to do with it. "He might want to harvest CD keys or e-mail addresses, take information from the software registry or find code for doing denial-of-service," Alperovitch explains. The bot code can install other software that records keystrokes or finds these pieces of information itself, he says: "All these are pluggable modules." To his version of Rxbot, Ancheta added instructions to seek out computers with a specific weakness, according to the plea. 
Rxbot can be tweaked to exploit several unpatched Windows vulnerabilities, including LSASS. LSASS itself should be a crucial safeguard, as it was built to handle local security and authentication, so people without passwords can't log on to individual PCs. But as Microsoft revealed in an April 2004 security bulletin labeled "critical," LSASS suffers from a buffer overflow problem that, if left unpatched, opens any computer running Windows XP or Windows 2000 to hijack. A hacker "could take complete control of an affected system, including installing programs; viewing, changing, or deleting data; or creating new accounts that have full privileges," the bulletin warned. A buffer is a limited amount of memory allocated to a certain task. Software creates buffers to hold data the program might need later. If you can fool the program you're targeting into overflowing that region, it's possible to inject malicious programming instructions into the machine's memory. A hacker attacking LSASS can flood its buffer with hundreds of lines of nonsense text laced with real programming instructions telling the system to do what he wants. In this case, he'd want to be authenticated as a valid user. The garbage text crashes LSASS but leaves the instructions in memory for the computer to execute like any other execution request, such as booting up or opening a file. As recently as last November, 19 months after Microsoft put out its initial patch, the LSASS buffer overflow was the most exploited vulnerability in networks facing the outside world, according to Qualys, a security company in Redwood Shores, Calif. Qualys studies computers at 2 million IP addresses worldwide and manages security problems for customers such as DuPont, Hershey and eBay. Stephen Toulouse, a security program manager at Microsoft, contends that the issue isn't technical error anymore—patches exist—but a human one. "This speaks more to the importance of making sure software is up to date," Toulouse says. 
"Criminals will look at even the oldest of vulnerabilities and try it." Once Ancheta's bot infiltrated an exploitable computer, the code instructed the computer to connect to a private IRC channel he had created to direct his zombie computers, according to his plea. The password to the channel was embedded in the bot code. He "owned" these machines, in hacker lingo. Typically, Ancheta would send over IRC a command code for the activity he wanted the bot to perform—open a certain port and start sending spam, or continue scanning a range of Internet Protocol addresses for PCs with particular software flaws, for example. At any given time, several dozen to several thousand bots would go to this spot looking for instructions. Newer bot attacks are even more insidious, says Gary McGraw, chief technology officer at Cigital, a software quality consultancy in Dulles, Va., and author of the book Software Security. Bots can now come as rootkits—code that embeds itself into the operating system and can modify key functions performed by the system. In a setup like Ancheta's, the bot program is visible, at least to a technology professional who knows where to look. It sits in an area on the operating system known as the user space, along with common applications like Web browsers and word processors. But when a bot is coded as a rootkit, the bot inserts itself into what is called the kernel space, close to a computer's core operating system. The kernel is where behind-the-scenes programs such as network drivers communicate with the operating system or access the computer's hardware. Because a rootkit can modify key functions performed by the operating system, it can conceal the bot code. For example, if antivirus software requests from the operating system access to a particular memory location to check it for malicious code, the rootkit can intercept the request and provide the security software with fake data saying, in essence, that everything is OK. 
Bots that "can't be seen" by current antivirus software, McGraw says, can live longer on an infected system. Worse, a highly skilled botmaster can use a rootkit to insert a bot into a computer's hardware, McGraw says. Specifically, the Erasable Programmable Read-Only Memory chip in every computer, which holds data when the power is turned off, can be violated, in a technique called "flashing the EPROM." If the computer survives this procedure, it becomes permanently infected. Yet the simplest means of infiltrating a corporate network is still to supplement bug exploits with trickery, says a person interviewed by Baseline who claims to be SoBe, Ancheta's accomplice in Boca Raton, Fla. That is, getting people like the one at Auburn University to click on a link. For example, according to SoBe, an employee may take his laptop home to browse the Web over a weekend. He doesn't know it, but bot code rides into his system when he downloads a freeware application for, say, tracking local weather. Also unknown to him is the fact that the botmaster then uploaded a virus that will spam an instant message to the employee's buddy list when he plugs into the corporate network on Monday. The message might read, "Hey, check out my new pictures," and give a hyperlink that, when clicked, sets off a bot. Since the note is from a friend, many of the people receiving it will indeed click, thereby infecting themselves and growing the botnet. Such social engineering, say security experts, is highly effective and growing. "It happens all the time with very annoying frequency," says TransUnion chief security officer Lines. SoBe agrees. "Basically, some of these spamming methods rely on friendship," he says. "You don't use an exploit to infect people, you use their stupidity."
<urn:uuid:ffbcb635-42c6-4b66-acf1-0ff104390e1c>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Projects-Management/Security-Alert-When-Bots-Attack/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00268-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939784
1,623
2.890625
3
Clustering software helps center untangle DNA mysteries. The DNA sequence of the entire human genome was mapped as of April, but the battle to keep that massive amount of data available and protected rages on at places such as The Genome Sequencing Center at Washington University Medical School. Established in 1993 with a grant from the National Human Genome Research Institute at the National Institutes of Health, The Genome Sequencing Center contributed about 25 percent of the completed human genome information. In the course of this work, the GSC experienced what all major scientific research efforts and most commercial entities are going through these days: skyrocketing data proliferation. The GSC's data store grew from somewhere in the gigabyte range to some 8 terabytes over the past few years. High availability to data is crucial to ongoing work, according to officials at the center, and data loss is unacceptable, given that the costs of the research and investments in computational resources that go into mapping DNA are just too high. To put that into dollar terms, according to Kelly Carpenter, senior technical manager at the center, in St. Louis, every piece of DNA mapped translates to about $200,000 in initial technology investment and subsequent upkeep. "You look at any file folder on the system, and, I figured it out, it comes out to about $200,000 for each," Carpenter said. "You lose that, you lose $200,000." To protect those six-figure file folders, the center turned to Oracle Corp.'s Oracle9i RAC (Real Application Clusters) managed with the Database Edition of Veritas Software Corp.'s Advanced Cluster heterogeneous file system software. This cluster software is the foundation for Oracle RAC environments that run on Solaris or HP-UX. The GSC put this software at the center of a new Fibre Channel storage area network running on Solaris and Linux operating systems. 
The motivation behind these technology choices, Carpenter said, was to provide a high-availability environment that would enable the research center to cut costs, beef up performance, lower management costs and stay on top of the massive data growth associated with gene mapping. The GSC is now running two Sun Microsystems Inc. Sun Fire V880 servers in an array, each with four processors, and is in the process of migrating off an older cluster that consists of two Sun E3500 servers. Previously, the GSC used a high-availability Oracle HA Cluster platform. In that type of parallel-server setup, one server runs the Oracle9i database while another sits idle, waiting to take over if the production version fails. Such a setup is expensive, Carpenter said, given that half the server resources involved are seldom used. Another factor contributing to the high cost of Oracle HA is that it is difficult to set up and administer, Carpenter said. The failover capabilities of RAC with Veritas clustering software are also much smoother than those of Oracle HA, Carpenter said. With Oracle HA, if a query dies when a database instance goes down, researchers would be "dead in the water" until another Oracle instance came back up, which could take minutes, Carpenter said. With RAC, it's "more like a bump," he said. "For the client, it's cool," Carpenter said. "If the client is doing a query from some nice pretty GUI, if the physical server it's talking to dies, the client can figure out it died and will automatically reissue the query from the beginning on another server, without any interruption. Depending on how long the query is supposed to run, something that normally takes a few seconds to run, for example, will run a little slower. But by the time they ask, 'Is the server down?' they say, 'Oh, wait! It's done.'" The GSC also uses Veritas NetBackup software with FlashBackup and Shared Storage options to protect and restore some 285 million files. 
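The client behavior Carpenter describes can be sketched as a retry loop: detect that the node serving a query has died and reissue the query against another node. This is a minimal sketch only; the node names and the `run_query` callable are illustrative stand-ins, not a real Oracle RAC client API.

```python
# Hypothetical two-node cluster, mirroring the GSC's pair of V880 servers.
NODES = ["node-a", "node-b"]

def query_with_failover(sql, run_query, nodes=NODES):
    """Run sql via run_query(node, sql), reissuing on the next node on failure."""
    last_error = None
    for node in nodes:
        try:
            return run_query(node, sql)  # normal case: this node answers
        except ConnectionError as err:
            last_error = err             # node died mid-query; try the next one
    raise last_error                     # every node failed
```

From the caller's point of view, a node failure is just "a bump": the query runs a little slower because it restarts on another node, but it still returns.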
Since the migration to the current cluster and storage setup in June, the GSC credits the FlashBackup option with improving backup performance from 24 hours to 4 hours and with reducing its catalog from 150GB to 30GB. Those gains are significant. But one of the main points of installing Veritas Advanced Cluster software to handle the Oracle RAC setup comes down to ease of use, Carpenter said, with its GUI that allows database administrators to easily fail over servers without having to resort to command lines to bring things down. Obviating the need to enter big chunks of commands into command-line interfaces brings total cost of ownership down by reducing ongoing management costs, Carpenter said.
<urn:uuid:29a3f82b-ab7c-47ae-95cc-07e699cebeea>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Data-Storage/Data-at-the-Ready
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00388-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939119
935
2.703125
3
A cyber incident in a large, complex industrial control system can have serious consequences, and all security technologies have limitations. This means we can always be more secure, or less. How, then, should we evaluate security funding requests for industrial sites? How do we know how much is enough?

The abstract, qualitative models that most of us use for cyber threats are poorly understood by business decision-makers, and are not easily compared to risk models for threats such as earthquakes and flu pandemics. We could force-fit cyber risks into more conventional models by "making up" numbers for the probability of serious incidents, but "made up" numbers yield poor business decisions. Most business leaders, though, do understand cyber attack scenarios and their consequences, and find them much more useful than qualitative models or "made-up" probabilities. To communicate industrial cyber risks effectively, an assessment process should distill complex risk information into a small, representative set of high-consequence attack scenarios. Business decision-makers can then "draw a line" through the set, selecting which combinations of attacks, consequences and risks to accept, and which to mitigate or transfer. Join us to explore using attack scenarios to communicate risks, consequences, and costs to business decision-makers.
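The "draw a line" step above can be pictured as ranking a small set of scenarios by consequence and splitting them at a threshold the decision-makers choose. This is illustrative only: the scenario names and dollar figures below are invented for the sketch, not taken from any real assessment.

```python
# Invented example scenarios, each with an estimated consequence in $M.
SCENARIOS = [
    {"name": "HMI defacement", "consequence_musd": 0.2},
    {"name": "ransomware halts production", "consequence_musd": 40.0},
    {"name": "safety-system tampering", "consequence_musd": 200.0},
]

def draw_line(scenarios, accept_below_musd):
    """Rank scenarios by consequence; split them at the decision-makers' line."""
    ranked = sorted(scenarios, key=lambda s: s["consequence_musd"], reverse=True)
    mitigate = [s for s in ranked if s["consequence_musd"] >= accept_below_musd]
    accept = [s for s in ranked if s["consequence_musd"] < accept_below_musd]
    return mitigate, accept
```

Moving the threshold up or down is exactly the business decision the article describes: which consequences to accept, and which to mitigate or transfer.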
<urn:uuid:ed17e9cc-f9f2-4f7f-a5c5-ab85d11adcda>
CC-MAIN-2017-04
https://www.brighttalk.com/channel/11329/securityweek
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00204-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943949
256
2.671875
3
Big data is everywhere we look these days. Businesses are falling all over themselves to hire 'data scientists,' privacy advocates are concerned about personal data and control, and technologists and entrepreneurs scramble to find new ways to collect, control and monetize data. We know that data is powerful and valuable. But how? This article is an attempt to explain how data mining works and why you should care about it. Because when we think about how our data is being used, it is crucial to understand the power of this practice. Without data mining, when you give someone access to information about you, all they know is what you have told them. With data mining, they know what you have told them and can guess a great deal more. Put another way, data mining allows companies and governments to use the information you provide to reveal more than you think. To most of us data mining goes something like this: tons of data is collected, then quant wizards work their arcane magic, and then they know all of this amazing stuff. But, how? And what types of things can they know? Here is the truth: despite the fact that the specific technical functioning of data mining algorithms is quite complex -- they are a black box unless you are a professional statistician or computer scientist -- the uses and capabilities of these approaches are, in fact, quite comprehensible and intuitive. Read the full story on data mining at The Atlantic
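The core point above, that what you disclose lets others guess what you didn't, can be shown with a toy example: vote using every past record that shares an item with a new customer's basket. All of the data here is invented for illustration.

```python
from collections import Counter

# Invented past records: (items the customer disclosed, attribute they didn't).
RECORDS = [
    ({"diapers", "baby wipes"}, True),
    ({"diapers", "formula"}, True),
    ({"beer", "chips"}, False),
    ({"diapers", "juice"}, True),
    ({"chips", "soda"}, False),
]

def guess_has_infant(basket):
    """Guess the undisclosed attribute from item overlap with past records."""
    votes = Counter()
    for items, has_infant in RECORDS:
        if basket & items:       # any shared item counts as one vote
            votes[has_infant] += 1
    if not votes:
        return None              # no overlapping evidence at all
    return votes.most_common(1)[0][0]
```

Real data-mining algorithms are far more sophisticated, but the shape is the same: the information you provide, pooled with everyone else's, reveals more than you told anyone.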
<urn:uuid:94750817-554e-4d96-86ef-21d394dea821>
CC-MAIN-2017-04
http://www.nextgov.com/big-data/2012/04/everything-you-wanted-to-know-about-data-mining-but-were-afraid-to-ask/50955/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00471-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951983
287
2.90625
3
There was a time when the word "supercomputer" inspired the same sort of giddy awe that infuses Superman or the Superconducting Supercollider. A supercomputer could leap tall buildings in a single bound and peer into the secrets of the universe. And chief among this race of almost mythical machines was the Cray. Seymour Cray's first computer, the Cray-1, debuted in 1976, and was the embodiment of all the power that crackled around the supercomputer. It weighed 10,500 pounds. Thirty humans were necessary to help install it. And its first users built nuclear weapons: Model No. 1 went to Los Alamos National Laboratory. Eventually Cray sold 80. I love this description of its capabilities and style from the National Center for Atmospheric Research (which got Cray's third machine): With the help of newly designed integrated silicon chips, the Cray-1 boasted more memory (one megabyte) and more speed (80 million computations per second) than any other computer in the world. The Cray’s bold look also set the machine apart. Its orange-and-black tower, curved to maximize cooling, was surrounded by a semicircle of padded seats—dubbed an "inverse conversation pit" by one observer—that hid the computer’s power supplies. One megabyte of memory! 80 million computations per second! Current smartphones blow away that kind of performance. But still, there's something to the Cray. The physical form was relatively easy to put together. They used a CNC machine, painted the wood model, and covered the "semicircle" with pleather. The hardware was easy to get a hold of, too. "It wasn’t difficult to find a board option that could handle emulating the original Cray computational architecture. Fenton settled on the $225 Spartan 3E-1600, which is tiny enough to fit in a drawer built into the bench," GigaOm writes. "Considering the first Crays cost between $5 and 8 million, that’s a pretty impressive bargain." The thing that turned out to be tricky, actually, was the software. 
No one had preserved a copy of the Cray operating system. Not the Computer History Museum. Not the U.S. government. It was just gone. Fenton searched high and low, eventually finding an old disc pack that contained a later version of the Cray OS. Restoring the software to usable condition proved a ridiculously ornate task, which Tantos, a Microsoft engineer, took over. And, after a year of work, they're finally getting somewhere: [Tantos] rewrote the recovery tools, plus a simulator for the software and supporting equipment like printers, monitors, keyboards and more. For the greater part of the last year, he arduously reverse engineered the OS from the image. Despite a few remaining bugs, the Cray OS now works. For them and for us, this should serve as a reminder that computing eats its own history. The Cray was the supercomputer, and not a single historian or archivist can show you the working object itself. Luckily, the world has nerds like Chris Fenton and Andras Tantos. Thanks.
<urn:uuid:68a94126-4108-4161-8ec4-66dcdd23c4cb>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2014/01/these-two-guys-tried-rebuild-cray-supercomputer/76838/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00287-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956567
669
2.96875
3
Criminals looking to steal data or disrupt commerce don’t only home in on large corporations. Small and midsize businesses (SMBs), in fact, are just as attractive a target. In 2013, there were about 28 million SMBs in the U.S., two-thirds of which contributed about $7.5 trillion to the U.S. economy. This makes them a lucrative and vulnerable target for cybercriminals, simply because many of them are not paying attention. Crime committed through the Internet falls into two broad categories: information theft and digital vandalism. Theft includes financial information, product or strategic proprietary information, customer records and transaction histories. Once stolen, this information is used to either directly steal funds from the SMB or its customers, or is sold to other criminals. Phishing is a form of information theft that entices a user to reveal sensitive information such as passwords or credit card numbers by masquerading as a trusted entity. Digital vandalism includes denial of service (DoS) attacks, viruses or other types of malware, often intended to simply disrupt a business. All forms of cybercrime exact damaging costs. Assessing costs to smaller enterprises For a small business, customer information theft can paralyze operations or put a company out of business. A single incident that damages a firm's reputation or compromises the integrity of its electronic storefront could result in unrecoverable losses. The average direct cost to a small business for a single attack in 2013 was almost $9,000, but that excludes brand damage and other soft costs. SMBs incur nearly four times the per capita cybercrime costs of larger firms, according to Ponemon. To many SMBs, these costs can prove fatal. A 2012 National Cyber Security Alliance study showed that 36 percent of cyber attacks are conducted against SMBs. Of those, up to 60 percent go out of business within six months of an attack. 
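Figures like those above can be turned into a budgeting argument with annualized loss expectancy (ALE), a standard risk metric: the expected cost of a single incident (SLE) times the expected number of incidents per year (ARO). The sketch below reuses the article's roughly $9,000 average direct cost; the incident rate is a hypothetical assumption, since the article gives attack shares, not per-firm frequencies.

```python
def annualized_loss_expectancy(single_loss, annual_rate):
    """ALE = SLE x ARO: expected yearly cost of a recurring incident type."""
    return single_loss * annual_rate

# Article figure: ~$9,000 average direct cost per attack for a small business.
sle = 9_000
# Hypothetical assumption: one successful attack every two years on average.
aro = 0.5
print(annualized_loss_expectancy(sle, aro))  # 4500.0
```

An SMB can compare this expected yearly loss directly against the yearly cost of preventive controls when deciding how much protection is enough.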
Yet 77 percent of SMB owners believe their companies are safe from cyber security breaches. Cybercrime is an unfortunate side effect of the information age. Where physical goods or cash once contained all the value targeted by thieves, today information holds even greater value. Businesses must be diligent to protect against electronic theft. SMBs must assess their potential exposure to cybercrime and take actions to prevent and blunt attacks. Although the precise costs of an attack differ based on an SMB's size and the circumstances surrounding that attack, the following sections describe the types of costs that could be incurred by an SMB in the wake of such an unhappy event. 1. Business lost during attack A security breach often means shutting down the SMB's electronic operations for some period of time. An online retailer subjected to a DoS attack could be shut down for several days or weeks while determining the attack's origin and taking corrective action. A customer data breach in which credit card information was stolen would likely cause a similar lock-down. Corrective action often depends on a service provider's responsiveness, which can make it a frustrating, time-consuming and costly affair. Costs are likely to include total revenue losses for at least several days. 2. Loss of company assets Bank account numbers and passwords stolen during a breach can lead to theft of account funds. SMB owners may wrongly assume that banks will cover the loss, as consumer credit card companies do. In fact, an SMB will lose any stolen funds, which could cost a business its working capital. Proprietary information, such as product designs, customer records, company strategies or employee information, is often compromised or stolen outright. All of these assets have incalculable value to a business, and their loss can be crippling. 3. Damage to reputation Another cost that's difficult to quantify is reputation damage after an attack. 
The much-publicized Target breach that compromised 100 million customer records cost that firm roughly $148 million in direct cash costs, after insurance payments. Yet the damage to Target's reputation will linger for a long time, making people hesitant to share personal information, use their credit cards or shop at the store. Forrester Research estimated that Target's total costs would exceed $1 billion. This scenario could be worse for an SMB. For example, consider a resort operator that relies heavily on its website to attract new customers, book reservations and maintain its brand. If that site is hacked and infected with malicious links, it will be quarantined, placed in a "sin bin", by search engines for a fairly long period, making it harder for customers to find the website. Even after the operator resolves the hack, it could take months for the resort's virtual reputation to be restored. And that's on top of losses in revenue and good will from customers affected during the attack. 4. Litigation costs SMBs aren't likely to be sued if their customers' information is stolen unless they failed to implement reasonable protection measures. In the Target case, for example, consumers, and the banks that held their credit cards, filed class action lawsuits. In the latter case, a US judge ruled that Target played a "key role" in allowing hackers to gain access to its data center, which enabled the banks to continue their lawsuits. Certainly, Target is not an SMB, but a small business needs to recognize the need to protect its customers' information. Taking reasonable measures ("exercising due diligence" in legal terms) should offer protection against future litigation in the unfortunate event of a data breach. 5. Protection costs: staff, firewalls, encryption and software The most important cost of cybercrime should also be the first outlay: prevention. Businesses of any size need to implement a strategy to protect against the reality of cybercrime. 
For the smallest of SMBs—a one-person proprietorship—that could be as simple as using robust password protection on all systems and utilizing low-cost protection software, perhaps as little as $50/year. For larger businesses, costs scale with size. Use of security information and event management solutions (SIEMs), intrusion prevention systems (IPSs), network intelligence systems and data analytics can greatly reduce cyberattack costs, some report by as much as a factor of six. Expert advice: Do something The biggest risk facing an SMB manager is inaction. Ignoring cybercrime does not make it go away and places the business in jeopardy. Protective actions against cybercrime are now more important than the locks on a store's front door. Failure to put an electronic protection plan in place appropriate to the SMB's size and business model is equivalent to leaving the front door wide open with a pile of cash in plain sight. Don’t let that cash get away: put it under lock and key. Chris Janson is a technologist with over 25 years of industry experience working in engineering, marketing and management roles for companies large and small. He has published many articles on communications networks and their use in government, finance, education and other industries. He speaks at industry conferences, serves on the boards of OpenCape Corporation and Rural Telecom Congress and has taught courses at Northeastern University in Boston. Ed Tittel has been working in IT for over 30 years. He's the author of over 100 computing books, including the Exam Cram series of certification prep titles. He also blogs regularly for the IT Knowledge Exchange ("Windows Enterprise Desktop"), PearsonITCertification, GoCertify and Tom's IT Pro. For more info about Ed, please visit his website at www.edtittel.com. This story, "5 costly consequences of SMB cybercrime" was originally published by CIO.
<urn:uuid:51130fb0-886a-404d-bb90-88e22769b04b>
CC-MAIN-2017-04
http://www.itnews.com/article/2908864/security0/5-costly-consequences-of-smb-cybercrime.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00103-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958702
1,557
2.96875
3
Park Y.-G., Daejeon Regional Korea Food and Drug Administration | Lee E.-M., Daejeon Regional Korea Food and Drug Administration | Kim C.-S., Daejeon Regional Korea Food and Drug Administration | Eom J.-H., Daejeon Regional Korea Food and Drug Administration | And 4 more authors. Journal of the Korean Society of Food Science and Nutrition | Year: 2010 The Korean government will set up a nationwide food safety system with strict control of hazardous nutrients like sugar, fatty acids and sodium, as well as an advanced nutrition education system. In addition, a school food service rate of almost one hundred percent forced the government to consider more effective ways to upgrade the nutritional status of school meals. The objective of our study was to provide data on the content and consumption of sugar in school meals for the nationwide project. For this purpose, we surveyed the sugar content of 842 school meal menus and their intake level for 154 days in 8 schools in Daejeon and Chungcheong Province. Sugar contents, the sum of the quantities of 5 sugars commonly detected in food, were analysed with HPLC-RID (Refractive Index Detector). Sugar intakes were calculated by multiplying the intake of each menu by the sugar content of that menu. The sugar content was highest in the desserts, which include fruit juices, dairy products and fruits. Sugar content of side dishes was high in sauces and braised foods. Sugar intake from one dish is high in beverages and dairy products, and one-dish meals contribute greatly to sugar intake because of their large amount of meal intake. The average lunch meal intakes of second grade and fifth grade elementary school students were 244 g/meal and 304 g/meal, respectively. The meal intake of middle school students was 401 g/meal. The average sugar intake from one day's school lunch was 4.22 g (4.03 g for elementary and 5.31 g for middle school students), which is less than 10% of the daily sugar reference value for Koreans. 
The results of this study provide exact data on sugar intake patterns, based on sugar content matched directly to the meals consumed by the students.
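The intake calculation described in the abstract (each menu's consumed amount multiplied by its sugar content) can be sketched as follows. The menu items and values below are hypothetical, and sugar content is assumed to be expressed in grams per 100 g of food, which is a common convention; the study itself does not state its units here.

```python
def sugar_intake_g(menus):
    """Total sugar eaten: sum of each menu's intake (g) times its sugar content (g per 100 g)."""
    return sum(intake_g * sugar_per_100g / 100 for intake_g, sugar_per_100g in menus)

# Hypothetical lunch: rice (200 g, 0.1 g/100 g), braised side dish (80 g, 3.0 g/100 g),
# fruit juice (150 g, 9.0 g/100 g)
lunch = [(200, 0.1), (80, 3.0), (150, 9.0)]
print(round(sugar_intake_g(lunch), 2))  # 16.1
```

As in the study, the beverage dominates the total even though it is not the largest portion by weight.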
<urn:uuid:257334a0-4c40-4d0e-9efd-ce44691e9204>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/daejeon-regional-korea-food-and-drug-administration-1971161/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00187-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959319
441
2.96875
3
Displaying attractive diagrams: quick hints for choosing a graph layout algorithm from Dojo Diagrammer, by Adrian Vasiliu. Many types of complex business data can best be visualized as a set of nodes and interconnecting links, more commonly called a graph or a diagram. Examples of graphs include business organization charts, workflow diagrams, telecom network displays, and genealogical trees. The mathematical concept of graphs is so general that it is used as a modeling tool in almost any domain. When we need to get a visual understanding of graph data, there is a need for the automatic visualization of graphs. The purpose is to optimize the display by obeying common rules in a given field and by maximizing readability. Among the new visualization components included in the WebSphere Application Server Feature Pack for Web 2.0 & Mobile v1.1, Dojo Diagrammer allows applications to display and edit graphs (diagrams) and provides a comprehensive set of graph layout algorithms for the automatic placement of the nodes and/or to ensure the links have optimal shapes. Now, we may wonder: with automatic arrangement of diagrams, do humans, that is, developers or end-users, have no role to play anymore? Well, there is still a place for humans to take decisions... The following decisions are in the hands of the developer (if appropriate, the developer can offer these choices to the end-user thanks to a configuration GUI): The Hierarchical layout organizes the nodes in horizontal or vertical levels, in such a way that the majority of the links flow uniformly in the same direction. Typical examples: The Tree layout is the obvious choice for representing hierarchies, that is, for hierarchical data. It can also be used if the graph is not a (pure) tree; however, in this case it only takes care of the shape of those links that contribute to the pure tree part of the graph, the so-called "spanning tree" of the graph. 
Your graph does not represent a hierarchy, is not a tree, and the orientation of links does not matter? Then the Force-Directed layout is likely to fit. An example: This algo is slower than the Hierarchical or Tree layouts, therefore it is not recommended for very large graphs. The Circular layout is mostly designed for telecom applications where nodes are partitioned into clusters. The clusters are displayed as rings or stars and positioned in a radial tree-like fashion. An example: In most telecom applications, the nodes represent network devices that have predefined cluster ids. Hence, the layout algorithm allows specifying the clusters as input data. For cases when no cluster ids are available, the layout algorithm is also able to automatically calculate appropriate clusters from the graph topology. The Grid layout is the obvious choice when your graph has no links, or you want nodes to be placed on a grid or matrix while ignoring the links. An example: Short and Long link layout Both are pure link layouts, that is, they do not move the nodes; they only reshape the links in such a way that crossings and overlaps are reduced or avoided. But why two different algos, Short link layout and Long link layout? The answer is that they have different (mostly complementary) strengths. That said, it is also useful to be aware of some characteristics of each, which help in making the choice. In brief, the names "Short Link layout" and "Long Link layout" refer to the fact that the first one fits better when most links are "short", that is, they connect nodes that are close to each other. A more in-depth comparison follows: Short link layout Long link layout The two screenshots above hold for the same graph data. At a quick look, the results of the two link layouts are relatively equivalent, but at a closer look, they differ in terms of link-link crossings, link-node overlaps, symmetry of connection points with respect to the node box, and number of link bends (how many turns). 
Each of these two link layout algorithms is useful in various use cases and for various application requirements. All layout algorithms can be applied to nested graphs, that is, graphs with nodes that contain another graph. Most often, the best results for such graphs are obtained using the Hierarchical layout. For Business Process applications, Dojo Diagrammer also provides support for a special case of nested graphs, called swimlanes. An example: Finally, all layout algorithms are provided with many configuration parameters to cover a wide range of needs. As quick guidance, a future publication will provide hints about the parameters that matter the most and their impact on performance, in particular for deployment on mobile devices. Stay tuned... Dojo Diagrammer offers many other features; be sure to check the showcase sample that comes with the WebSphere Application Server Feature Pack for Web 2.0 & Mobile v1.1. You can install the showcase on your local server after downloading the Feature Pack (here are direct links to the installation instructions for WAS 8.0 and WAS 7.0). There is also an online version of the showcase here, which includes the live versions of the following graph layout samples: Graph Layout Browser, Organization Chart, Business Process Diagram, Graph Layout Explorer, Client-side Graph Layout for Mobile, and Server-side Graph Layout for Mobile.
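To make the Force-Directed layout mentioned above more concrete, here is a generic textbook-style sketch of the idea, not Dojo Diagrammer's actual implementation: every pair of nodes repels, every linked pair attracts like a spring, and the positions are iterated until they settle. All constants and names are illustrative.

```python
import math
import random

def force_directed(nodes, edges, iterations=200, k=1.0, step=0.05, seed=42):
    """Minimal force-directed layout: pairwise repulsion plus spring attraction on edges."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for n in nodes}
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                         # repulsion between every node pair, ~ k^2 / d
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        for a, b in edges:                      # spring attraction along each link, ~ d^2 / k
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        for n in nodes:                         # damped move toward the net force
            pos[n][0] += step * disp[n][0]
            pos[n][1] += step * disp[n][1]
    return pos

layout = force_directed(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")])
```

The O(n^2) repulsion loop also shows why, as noted above, force-directed layouts are not recommended for very large graphs.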
<urn:uuid:241576b1-bca7-4a50-81e1-8947cfc63d47>
CC-MAIN-2017-04
https://www.ibm.com/developerworks/community/blogs/94e7fded-7162-445e-8ceb-97a2140866a9/entry/starting_guide_for_nice_diagrams_quick_hints_for_choosing_a_graph_layout_algorithm_from_dojo_diagrammer22?lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00095-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893522
1,106
2.5625
3
Reduced Instruction Set Computer (RISC) and Explicitly Parallel Instruction Computing (EPIC) are 64-bit processor technologies that provide high performance platforms for server operations. Both are based on the concept of a smaller number of fixed length instructions providing higher performance. The EPIC technology, used in the Intel Itanium processor, adds parallel instruction processing, much of which is dictated by compiler optimizations rather than hardware. While the overall percentage of servers using RISC/EPIC processors continues to shrink, the market for processors offering higher performance than current x86-based platforms continues to exist. These systems also provide greater margins and more customer lock-in than the commodity servers.
<urn:uuid:35be2ceb-db2c-415b-965f-7d189040aeeb>
CC-MAIN-2017-04
https://www.infotech.com/research/sun-takes-a-risc-lead
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00333-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906911
135
3.0625
3
By Patrick Toomey Direct link to keychaindumper (for those that want to skip the article and get straight to the code) So, a few weeks ago a wave of articles hit the usual sites about research that came out of the Fraunhofer Institute (yes, the MP3 folks) regarding some issues found in Apple’s Keychain service. The vast majority of the articles, while factually accurate, didn’t quite present the full details of what the researchers found. What the researchers actually found was more nuanced than what was reported. But, before we get to what they actually found, let’s bring everyone up to speed on Apple’s keychain service. Apple’s keychain service is a library/API provided by Apple that developers can use to store sensitive information on an iOS device “securely” (a similar service is provided in Mac OS X). The idea is that instead of storing sensitive information in plaintext configuration files, developers can leverage the keychain service to have the operating system store sensitive information securely on their behalf. We’ll get into what is meant by “securely” in a minute, but at a high level the keychain encrypts (using a unique per-device key that cannot be exported off of the device) data stored in the keychain database and attempts to restrict which applications can access the stored data. Each application on an iOS device has a unique “application-identifier” that is cryptographically signed into the application before being submitted to the Apple App store. The keychain service restricts which data an application can access based on this identifier. By default, applications can only access data associated with their own application-identifier. Apple realized this was a bit restrictive, so they also created another mechanism that can be used to share data between applications by using “keychain-access-groups”. 
As an example, a developer could release two distinct applications (each with their own application-identifier) and assign each of them a shared access group. When writing/reading data to the keychain a developer can specify which access group to use. By default, when no access group is specified, the application will use the unique application-identifier as the access group (thus limiting access to the application itself). Ok, so that should be all we need to know about the Keychain. If you want to dig a little deeper Apple has a good doc here. Ok, so we know the keychain is basically a protected storage facility that the iOS kernel delegates read/write privileges to based on the cryptographic signature of each application. These cryptographic signatures are known as “entitlements” in iOS parlance. Essentially, an application must have the correct entitlement to access a given item in the keychain. So, the most obvious way to go about attacking the keychain is to figure out a way to sign fake entitlements into an application (ok, patching the kernel would be another way to go, but that is a topic for another day). As an example, if we can sign our application with the “apple” access group then we would be able to access any keychain item stored using this access group. Hmmm…well, it just so happens that we can do exactly that with the “ldid” tool that is available in the Cydia repositories once you Jailbreak your iOS device. When a user Jailbreaks their phone, the portion of the kernel responsible for validating cryptographic signatures is patched so that any signature will validate. So, ldid basically allows you to sign an application using a bogus signature. But, because it is technically signed, a Jailbroken device will honor the signature as if it were from Apple itself. 
Based on the above description, so long as we can determine all of the access groups that were used to store items in a user’s keychain, we should be able to dump all of them, sign our own application to be a member of all of them using ldid, and then be allowed access to every single keychain item in a user’s keychain. So, how do we go about getting a list of all the access group entitlements we will need? Well, the keychain is nothing more than a SQLite database stored in: And, it turns out, the access group is stored with each item that is stored in the keychain database. We can get a complete list of these groups with the following query: SELECT DISTINCT agrp FROM genp Once we have a list of all the access groups we just need to create an XML file that contains all of these groups and then sign our own application with ldid. So, I created a tool that does exactly that called keychain_dumper. You can first get a properly formatted XML document with all the entitlements you will need by doing the following: ./keychain_dumper -e > /var/tmp/entitlements.xml You can then sign all of these entitlements into keychain_dumper itself (please note the lack of a space between the flag and the path argument): ldid -S/var/tmp/entitlements.xml keychain_dumper After that, you can dump all of the entries within the keychain: If all of the above worked you will see numerous entries that look similar to the following: Service: Dropbox Account: remote Entitlement Group: R96HGCUQ8V.* Label: Generic Field: data Keychain Data: SenSiTive_PassWorD_Here Ok, so what does any of this have to do with what was being reported on a few weeks ago? We basically just showed that you can in fact dump all of the keychain items using a jailbroken iOS device. Here is where the discussion is more nuanced than what was reported. The steps we took above will only dump the entire keychain on devices that have no PIN set or are currently unlocked. 
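The access-group query shown above is plain SQL, so its effect is easy to demonstrate against any SQLite database. The sketch below builds a toy in-memory stand-in for the genp (generic passwords) table, since the real keychain database's path and full schema are device- and OS-version-specific; only the agrp column name comes from the article, and the other column names and sample rows are assumptions for illustration.

```python
import sqlite3

# Toy stand-in for the keychain's "genp" table (column names beyond agrp are assumed).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE genp (svce TEXT, acct TEXT, agrp TEXT, data BLOB)")
db.executemany(
    "INSERT INTO genp VALUES (?, ?, ?, ?)",
    [
        ("Dropbox", "remote", "R96HGCUQ8V.*", b"secret"),
        ("AirPort", "home-ap", "apple", b"wifi-pass"),
        ("MobileMail", "me", "apple", b"imap-pass"),
    ],
)

# The query from the article: every distinct access group present in the keychain.
groups = sorted(row[0] for row in db.execute("SELECT DISTINCT agrp FROM genp"))
print(groups)  # ['R96HGCUQ8V.*', 'apple']
```

This list of groups is exactly what keychain_dumper's -e option turns into an entitlements XML file for ldid to sign.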
If you set a PIN on your device, lock the device, and rerun the above steps, you will find that some keychain data items are returned, while others are not. You will find a number of entries now look like this: Service: Dropbox Account: remote Entitlement Group: R96HGCUQ8V.* Label: Generic Field: data Keychain Data: <Not Accessible> This fundamental point was either glossed over or simply ignored in every single article I happened to come across (I’m sure at least one person will find the article that does mention this point :-)). This is an important point, as it completely reframes the discussion. The way it was reported it looks like the point is to show how insecure iOS is. In reality the point should have been to show how security is all about trading off various factors (security, convenience, etc). This point was not lost on Apple, and the keychain allows developers to choose the appropriate level of security for their application. Stealing a small section from the keychain document from Apple, they allow six levels of access for a given keychain item: CFTypeRef kSecAttrAccessibleWhenUnlocked; CFTypeRef kSecAttrAccessibleAfterFirstUnlock; CFTypeRef kSecAttrAccessibleAlways; CFTypeRef kSecAttrAccessibleWhenUnlockedThisDeviceOnly; CFTypeRef kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly; CFTypeRef kSecAttrAccessibleAlwaysThisDeviceOnly; The names are pretty self descriptive, but the main thing to focus in on is the “WhenUnlocked” accessibility constants. If a developer chooses the “WhenUnlocked” constant then the keychain item is encrypted using a cryptographic key that is created using the user’s PIN as well as the per-device key mentioned above. In other words, if a device is locked, the cryptographic key material does not exist on the phone to decrypt the related keychain item. 
Thus, when the device is locked, keychain_dumper, despite having the correct entitlements, does not have the ability to access keychain items stored using the “WhenUnlocked” constant. We won’t talk about the “ThisDeviceOnly” constant, but it is basically the most strict security constant available for a keychain item, as it prevents the items from being backed up through iTunes (see the Apple docs for more detail). If a developer does not specify an accessibility constant, a keychain item will use “kSecAttrAccessibleWhenUnlocked”, which makes the item available only when the device is unlocked. In other words, applications that store items in the keychain using the default security settings would not have been leaked using the approach used by Fraunhofer and/or keychain_dumper (I assume we are both just using the Keychain API as it is documented). That said, quite a few items appear to be set with “kSecAttrAccessibleAlways”. Such items include wireless access point passwords, MS Exchange passwords, etc. So, what was Apple thinking; why does Apple let developers choose among all these options? Well, let’s use some pretty typical use cases to think about it. A user boots their phone and they expect their device to connect to their wireless access point without intervention. I guess that requires that iOS be able to retrieve their access point’s password regardless of whether the device is locked or not. How about MS Exchange? Let’s say I lost my iPhone on the subway this morning. Once I get to work I let the system administrator know and they proceed to initiate a remote wipe of my Exchange data. Oh, right, my device would have to be able to login to the Exchange server, even when locked, for that to work. So, Apple is left in the position of having to balance the privacy of user’s data with a number of use cases where less privacy is potentially worthwhile. 
We can probably go through each keychain item and debate whether Apple chose the right accessibility constant for each service, but I think the main point still stands. Wow…that turned out to be way longer than I thought it would be. Anyway, if you want to grab the code for keychain_dumper to reproduce the above steps yourself you can grab the code on github. I’ve included the source as well as a binary just in case you don’t want/have the developer tools on your machine. Hopefully this tool will be useful for security professionals that are trying to evaluate whether an application has chosen the appropriate accessibility parameters during blackbox assessments. Oh, and if you want to read the original paper by Fraunhofer you can find that here.
<urn:uuid:048c208c-b776-4413-bd58-3ed3911a5b0b>
CC-MAIN-2017-04
https://labs.neohapsis.com/tag/encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00113-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922968
2,255
2.59375
3
Security & Firewall Features Firewalls used to consist of simple mechanisms for controlling access into and out of the company. Traditional firewall techniques such as port blocking and NAT still play a massive part; however, on their own they do not have the in-depth power to stop today’s threats. The primary job of a firewall is to protect the company’s network from internet threats and to enforce company security policies. The security policy will dictate what applications, services, ports and IP addresses are allowed and disallowed via the firewall. Companies need firewalls that not only consist of advanced protection tools, but also have other built-in capabilities, such as VPNs, WAN optimisation, failover and high availability, VLAN support, dynamic routing, logging and reporting, and other very handy utilities. Choosing a firewall can be a daunting task when you look at the vast number of firewalls on the market. Do you go with a market-leading firewall from Cisco, Juniper, Palo Alto or Checkpoint, paying £10,000 more for a brand name, or do you purchase a cheaper firewall which is still playing catch-up and has not really had enough coverage and reviews in the world of security? Also, when you consider that this product will be the main entrance point to and from your company, you have to ensure you have chosen a solid firewall with a proven reputation. An example of choosing the correct firewall would be choosing the correct door for your property. If you were to purchase a door, which door would you invest in? Would you invest in a door made out of solid material, which is heavy duty and bullet proof, from a world-class manufacturer with promising reviews and references? Or do you choose a door from your local DIY shop which looks the part but is actually hollow in the middle, comes with weak hinges, is built from cheap material and comes with a warded lock which is very easy to circumvent? 
Then there are door manufacturers that sit in the middle: doors that are promised to provide good security but have not really been put to the test, with no independent references for the manufacturer. And when you question the manufacturer about its marketing claims that the door meets certain security standards, you find that the tests were run internally by the developers themselves and not by a known third-party reviewer.

Firewalls come with different techniques for stopping many of today's threats, and come in different platforms and setup methods depending on the environment they will accommodate. This is why you need to spend some time researching each firewall, working out what you and your company require from one, and carefully choosing the right firewall for your company based on a balance of factors such as the reputation of the vendor and the overall cost of the product.

If you are looking for a home-based firewall, visit Home Firewall Guide. For recommendations on network firewall vendors for your business or company, visit Which Network Firewall Guide, or for a general buyers' guide on network firewalls visit Network Firewall Buyers Guide. If you are looking for a security package for your home or business computers, you may be better off with a full internet security suite, which consists of a firewall, anti-virus software and other security features.
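The policy-enforcement job described above, deciding which services, ports and IP addresses get through, boils down to matching traffic against an ordered rule table. The sketch below is a toy first-match filter, not any vendor's actual rule engine; the rule format, networks and ports are invented for illustration.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str            # "allow" or "deny"
    src: str               # source network, CIDR notation
    port: Optional[int]    # destination port, or None for "any"

# A tiny example policy: LAN hosts may browse the web; everything else
# is dropped by the final catch-all rule. First matching rule wins.
POLICY = [
    Rule("allow", "10.0.0.0/8", 443),
    Rule("allow", "10.0.0.0/8", 80),
    Rule("deny",  "0.0.0.0/0",  None),   # default deny
]

def check(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule that matches the packet."""
    for rule in POLICY:
        if ip_address(src_ip) in ip_network(rule.src) and rule.port in (None, dst_port):
            return rule.action
    return "deny"   # implicit default if the table somehow has no match

print(check("10.1.2.3", 443))     # allow
print(check("203.0.113.9", 443))  # deny
```

Real firewalls evaluate far richer state (connection tracking, application identity, deep packet inspection), but the first-match-wins table is still the basic shape most rule sets take.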
Extending the Network: Remote-Access Tools

Basically, remote-access tools serve the function of allowing users outside a local network to log into, access and use resources on that network as if they were locally attached. Historically, remote access developed as a technology that permitted users to dial into banks of modems attached to a server, which let them set up virtual network sessions through the dial-in connection. They could either access resources only on the dial-in server (limited remote access), or they could interact with any network resources available through that server as if they were logged in on a PC attached to the network.

For reasons primarily related to low-bandwidth connections, some remote-access implementations created an interesting software architecture (plus associated hardware, in some special cases). Remote-access sessions were designed to move only mouse movements and keyboard input from the remote-user side of the connection, and to move only screen updates (and in some special cases, other data or file transfers) from the server side of the connection. This kept the amount of data moving between client and server to a minimum and enabled users to function more or less normally, as if they really did have a local connection.

On the server side, this could mean setting up virtual machines on a one-per-user basis, so that the server would emulate a separate, distinct virtual computer for each user and supply the CPU horsepower, memory, storage and networking access for each machine as needed. More specialized implementations included “PC on a card” hardware, so that remote users actually took control of a real PC inside the server instead of a virtual emulation. Either way, users could log into, access and interact with applications, file stores, other servers and even printers as if they were plugged right into the local network. Remote access can have other meanings as well.
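The bandwidth saving behind that input-out, screen-updates-back design can be made concrete with some rough arithmetic. All the numbers below (screen size, refresh rate, fraction of the screen that changes) are illustrative assumptions, not measurements of any particular product.

```python
# Rough, assumed numbers: a 1024x768 16-bit screen, 10 screen refreshes
# per second, and only ~2% of the screen actually changing per refresh
# during typical interactive use.
FRAME_BYTES = 1024 * 768 * 2          # one full uncompressed frame
REFRESHES_PER_SEC = 10
CHANGED_FRACTION = 0.02               # assumed "dirty" portion per refresh

full_frames = FRAME_BYTES * REFRESHES_PER_SEC                     # ship everything
updates_only = int(FRAME_BYTES * CHANGED_FRACTION) * REFRESHES_PER_SEC

print(f"full frames:  {full_frames / 1e6:.1f} MB/s")
print(f"updates only: {updates_only / 1e6:.2f} MB/s")
print(f"reduction:    {full_frames / updates_only:.0f}x")
```

Even with these crude assumptions, shipping only the changed region cuts traffic by well over an order of magnitude, which is why the design worked tolerably even over dial-up links.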
One popular interpretation of the term describes a scenario where a remote user on a laptop or other mobile computing device establishes a connection with his primary desktop to access files, run applications and perform other activities as if he were sitting in front of the primary machine. Also known as “remote control software,” this sort of thing usually falls outside more stringent interpretations of remote access. While many products offer this kind of functionality (for example, Symantec’s PCAnywhere, LapLink or Microsoft’s Remote Desktop Connection), most of them do not offer full-blown Web-based, virtual-private-network-(VPN)-based implementations that feature strong authentication and encryption capabilities as well. That’s why only two products in that category are mentioned in the list of top remote-access tools: Citrix Online’s GoToMyPC and NetOp RemoteControl. The Citrix product is included because of strong security, an elegant implementation (any Web-capable client will do) and a very compact implementation. NetOp Remote Control gets the nod because of strong security, easy deployment and use across all kinds of connections, and support for Windows, Macintosh, Linux/UNIX, Solaris and legacy systems. Nevertheless, lots of alternatives abound. Some of the companies that played heavily in this space are still quite active in the remote-access world, but their offerings, their typical avenues for remote access and the platforms they support have changed dramatically. Microsoft’s Remote Access Server (RAS), which began life as a dial-in-only solution, adapted by adding Internet access and channel aggregation capability on the server side. Today, the corresponding service is known as the Routing and Remote Access Service (sometimes abbreviated as RRAS) and includes both dial-in support for those who still need it and VPN-based access for those who wish to establish remote-access connections over the Internet. 
Along the way, Microsoft has considerably strengthened security for remote connections, upping its use of more secure protocols (IPSec), authentication mechanisms (RADIUS and/or Kerberos), encryption mechanisms and so forth. Citrix is another company with deep roots in the remote-access field. Always viewed as a high-end provider of Microsoft alternatives, Citrix MetaFrame server technology is still regarded as a leading high-end server solution for remote access. Increasingly, the company gives users more alternatives on both client and server sides, with support for Web-based client access complementing Windows-specific capabilities, and with a UNIX implementation of the server technology available, as well as support for all current Windows server versions. At the same time, the company has also embraced the same kinds of enhanced and strengthened protocols and authentication as Microsoft and other leading players in the remote-access space. As the Internet became increasingly popular and bandwidth became cheaper, straight-through dial-up options began to lose their appeal for many applications. Why make clients place expensive long-distance calls to access a server directly when they can access the Internet almost anywhere by placing a local call (if they can’t use a faster broadband connection as so many do nowadays)? As long as the server can also access the Internet to establish the “other end” of a remote connection, this ends up being cheaper, easier and, in many cases, faster than old-fashioned dial-up anyway. When no other connection type is available, dial-up remains better than nothing. But fewer remote-access users than ever before rely solely on dial-up to make connections these days. That’s because ubiquitous Internet access plus secure VPN connections have revolutionized remote access. 
Simply put, this technology makes long-distance dial-up unnecessary in nearly all cases (and even when local dial-up is used, VPNs make connection types more or less transparent, aside from bandwidth issues). As the name suggests, a VPN turns a public connection into something that’s as secure as a private connection would be by applying rigorous encryption and protection to all the communications traffic it ferries. This eliminates most privacy and confidentiality concerns and makes the Internet more suitable as a business communications and remote-access medium. The advent of secure VPN technology has completely changed the face of remote access. For one thing, it’s no longer necessary to emulate virtual machines to provide network access (though there’s nothing stopping anyone from using emulation-based remote-access software across a VPN link, either). Other than VPN client software, no special additional software is required, because remote clients function just as if they were locally connected to any networks that a VPN server can reach. There’s also been a profound change in the style and behavior of most remote-access clients in the past few years. The overwhelming trend is toward Web-based client interfaces, which essentially present access to applications and services through a browser window. Among other things, this removes a lot of client platform dependencies from the client side of the equation, so that even Web-enabled PDAs or cell phones can play the client role for remote access (within limits). Likewise, as long as a Web browser works, a Macintosh, a Linux/UNIX machine or a PC can play the client role with equal facility—provided, of course, that the user understands how the remote-access interface looks and behaves, and knows how to operate it properly. Be aware that high-end, full-function remote-access solutions are still not cheap (except for remote-control implementations and implementations with limited functionality). 
It’s not unusual to have to spend $10,000 or more for server-side software and client licenses, not including server or communications costs. This often translates into per-seat costs between $200 and $500, depending on the products chosen and related licensing terms.
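The core VPN idea described earlier (wrap every packet in encryption and an integrity check before it crosses the public network) can be sketched in a few lines. This is a deliberately simplified toy, not a real tunnel protocol: real VPNs use vetted constructions such as IPSec or TLS, and the hash-derived keystream below is for illustration only, never for production use.

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"pre-shared secret between the two tunnel endpoints"

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from repeated hashing. Illustrative only, NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(payload: bytes) -> bytes:
    """Encrypt a packet and append an authentication tag (encrypt-then-MAC)."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(SHARED_KEY, nonce, len(payload))
    ciphertext = bytes(p ^ s for p, s in zip(payload, stream))
    tag = hmac.new(SHARED_KEY, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag

def unwrap(packet: bytes) -> bytes:
    """Verify the tag, then decrypt. Raises ValueError on tampering."""
    nonce, ciphertext, tag = packet[:16], packet[16:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet failed authentication")
    stream = _keystream(SHARED_KEY, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

packet = wrap(b"GET /payroll HTTP/1.1")
assert unwrap(packet) == b"GET /payroll HTTP/1.1"
```

The point of the sketch is the shape, not the crypto: each endpoint sees only opaque, authenticated blobs on the wire, which is what makes a public connection behave like a private one.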
Today is the 80th anniversary of the Communications Act of 1934. The Act was established to regulate telephone, telegraph and radio so that all U.S. citizens could receive basic communication services. It contains seven sections, Title I through Title VII. Title II, the section on common carrier regulation, has been making headlines recently, with some fiercely pushing to apply this regulatory regime to the Internet. But should an eight-decade-old dense list of rules really apply to the Internet – the most technologically advanced communication network the world has ever known?

To give you an idea of national trends in 1934, just consider:
- a loaf of bread cost 7 cents
- a gallon of gas cost 10 cents
- average yearly wages were just under $2,000
- rent was about $20 a month
- the primary national media were radio and newspapers
- there were fewer than 5,000 TV sets in operation

Title II was meant to regulate simple communications technologies – telegraphs, radios and telephones – that bear no resemblance to the complicated network of networks that defines today's multifaceted, global Internet. The needs of those technologies were wildly different from those required to implement, grow and maintain fiber-optic networks with thousands of interconnection points that carry billions of bits every day, ranging from tiny emails to massive rich media and video streaming packets. The Internet is a futuristic and ever-expanding network that is entirely ill-suited to the permission-based regulatory model of Title II. We're in the midst of a technological and communication revolution that could not even be dreamt of in a world 80 years ago. So why are some looking 80 years into the past as the model to keep it growing?
Cyber Security is a scareware program from the same family as Total Security. This rogue is promoted through the use of malware as well as fake online anti-malware scanners. When installed via Trojans, it will be installed onto your computer without your permission. When promoted via the web, you will see a pop-up that states that your computer is infected and that you should download and install Cyber Security to protect your computer. When the program is installed it will be configured to start automatically when you start Windows and perform a scan of your computer. When the scan has finished, Cyber Security will state that there are numerous infections on your computer, but will state it cannot remove anything unless you first purchase the program. This method of showing fake scan results is just a way for the developers of Cyber Security to trick you into thinking that your computer has a security problem, in the hopes that you will then purchase the program. As the only security problem on the computer is Cyber Security itself, you should not purchase this program.

Cyber Security also installs an Internet Explorer Browser Helper Object that is used to hijack your browser when you are surfing the web. When browsing the web you will randomly be redirected to an about:blank page where you will be shown a red screen with a message stating that this website has been reported to be unsafe, which will then suggest that you update your web protection software. When you click on that link you will be brought to a site that is attempting to sell Cyber Security to you. This browser hijack attempts to impersonate Firefox's and Google's Safe Browsing feature that alerts you when you visit unsafe sites. In Cyber Security's case it does not matter whether the site you are visiting is legitimate or not; it will still randomly show the message so that you think you are at risk.
While Cyber Security is running it will also show numerous alerts and screens that are devised to make you think that there is a major security problem on your computer. One tactic is to randomly display alerts from your Windows taskbar that contain fake messages in various languages: Privacy violation alert! Cyber Security has detected numerous privacy violations. Some programs may send your private data to an untrusted internet host. Click here to permanently block this activity and remove the possible threat (Recommended) System files modification alert! Important system files of your computer may be modified by malicious program. It may cause system instability and data loss. Click here to block unauthorized modification and remove potential threats (Recommended). Spyware activity alert! Spyware.IEMonster activity detected. It is spyware that attempts to steal passwords from Internet Explorer, Mozilla Firefox, Outlook and other programs, including logins and passwords from online banking sessions, eBay, PayPal. It may also create special tracking files to log your activity and compromise your Internet privacy. It's strongly recommended to remove this threat as soon as possible. Click here to remove Spyware.IEMonster. Systemdateien wurden geandert! Irgendeinen wichtigen Systemdateien wurden von gefahrvolles Programm geandert. Das kann Systemsinstabilitat und Datenverzicht zur Folge haben. Klicken Sie hier an um unberechtigten Modifikationen durch die Loschung der Gefahrdungen zu blockieren (empfehlt). activite du Logiciel espion! Logiciel espion. Lactivite de IEMonster est decouverte. Cest un Logiciel espion, qui tente de s'emparer des mots de passe d Internet Explorer, Mozilla Firefox, Outlook et d autres programmes, y compris des logins et des mots de passe des operations bancaires en ligne, eBay, PayPal. Il peut egalement creer des poursuites speciales des fichiers pour enregistrer votre activite et compromettre votre intimite a l'Internet. 
Il est fort recommande d eradiquer la menace le plus vite possible. Cliquez ici pour supprimer le Logiciel espion. IEMonster. Cyber Security will also display a window that will impersonate the legitimate Microsoft Windows Security Center. The difference is that the imposter will suggest that you purchase Cyber Security to secure your computer. Last, but not the least in a long line of deceptive tactics, Cyber Security will randomly display a screen saver that impersonates Windows crashing with a Blue Screen of Death that contains a message that Spyware caused it. Then the screen saver will show your computer rebooting with a message that you should purchase Cyber Security to protect yourself. The text you will see in the screen saver crash is: ***STOP: 0x000000D1 (0x00000000, 0xF73120AE, 0xC0000008, 0xC0000000) A spyware application has been detected and Windows has been shut down to prevent damage to your computer If this is the first time you've seen this Stop error screen, restart your computer. If this screen appears again, follow these steps: Check to make sure your antivirus software is properly installed. If this is a new installation, ask your software manufacturer for any antivirus updates you might need. Windows detected unregistered version of %product% protection on your computer. If problems continue, please activate your antivirus software to prevent computer damage and data loss. *** SRV.SYS - Address F73120AE base at C00000000, DateStamp 36b072a3 Beginning dump of physical memory... It is important to understand that this is just a screen saver and your computer is not actually crashing and rebooting. As you can see, Cyber Security uses numerous tactics to try and have you think that there is a serious security problem on your computer. The reality, though, is that the only serious problem is Cyber Security itself. Therefore, please do not purchase the program due to the alerts this program shows you. 
If you have already purchased the program, then please contact your credit card company and dispute the charges. Last, but not least, please use the guide below to remove Cyber Security and any related malware from your computer for free.

Self Help Guide
- Print out these instructions as we will need to close every window that is open later in the fix. Due to this malware infecting Internet Explorer, it is suggested that you use Firefox or another browser when following these instructions.
- Before we can do anything we must first end the Cyber Security process so that it does not interfere with the cleaning process. To do this, please download RKill.com to your desktop from the following link. RKill Download Link - (Download page will open in a new tab or browser window.) When at the download page, click on the Download Now button labeled iExplore.exe download link. When you are prompted where to save it, please save it on your desktop.
- Once it is downloaded, double-click on the iExplore.exe icon in order to automatically attempt to stop any processes associated with Cyber Security and other rogue programs. Please be patient while the program looks for various malware programs and ends them. When it has finished, the black window will automatically close and you can continue with the next step. If you get a message that Rkill is an infection, do not be concerned. This message is just a fake warning given by Cyber Security when it terminates programs that may potentially remove it. If you run into these infection warnings that close Rkill, a trick is to leave the warning on the screen and then run Rkill again. By not closing the warning, this typically will allow you to bypass the malware trying to protect itself so that Rkill can terminate it. So, please keep running Rkill until the malware is no longer running. You will then be able to proceed with the rest of the guide. Do not reboot your computer after running Rkill as the malware programs will start again.
- Just to be sure, we will use another program to verify that the processes are indeed terminated. To do this we must first download and install a Microsoft program called Process Explorer. Normally, we would have you use the Windows Task Manager, but this rogue will disable the ability to run it. Please download Process Explorer from the following link and save it to your desktop: Process Explorer Download Link
- You should now have the Procexp.exe file on your desktop. You now need to rename that file to iexplore.exe. First delete the current iExplore.exe file that is on your desktop and then right-click on Procexp.exe and select Rename. You can now edit the name of the file and should name it iexplore.exe. Once it is renamed you should double-click on the file to launch it.
- Once the program is running, you should be presented with a screen similar to the one below.
- Scroll through the list of running programs until you see a process named tsc.exe. When you see this process, select the tsc.exe process by left-clicking on it once so it becomes highlighted. Then click on the red X button as shown in the image below. Newer versions of this executable may use names consisting of random numbers or characters. If you see a process whose name is composed of random numbers or characters and has a shield icon or a padlock icon next to it, then you have found the process you need to terminate. If you do not see any processes using random characters or with the name tsc.exe, please continue to step 9.
- When you click on the red X to kill the process, Process Explorer will ask you to confirm whether you are sure you want to terminate it, as shown in the image. At this point you should press the Yes button in order to kill the process.
- At this point you should download Malwarebytes Anti-Malware, or MBAM, to scan your computer for any infections or adware that may be present.
Please download Malwarebytes from the following location and save it to your desktop: Malwarebytes Anti-Malware Download Link (Download page will open in a new window)
- Once downloaded, close all programs and windows on your computer, including your web browser.
- Double-click on the Malwarebytes setup file on your desktop (named mb3-setup followed by a version number). This will start the installation of MBAM onto your computer.
- When the installation begins, keep following the prompts in order to continue with the installation process. Do not make any changes to default settings and when the program has finished installing, make sure you leave Launch Malwarebytes Anti-Malware checked. Then click on the Finish button. If Malwarebytes prompts you to reboot, please do not do so.
- MBAM will now start and you will be at the main screen as shown below. Please click on the Scan Now button to start the scan. If there is an update available for Malwarebytes it will automatically download and install it before performing the scan.
- MBAM will now start scanning your computer for malware. This process can take quite a while, so we suggest you do something else and periodically check on the status of the scan to see when it is finished.
- When MBAM is finished scanning it will display a screen that displays any malware that it has detected. Please note that the infections found may be different than what is shown in the image below due to the guide being updated for newer versions of MBAM. You should now click on the Remove Selected button to remove all the selected malware. MBAM will now delete all of the files and registry keys and add them to the program's quarantine. When removing the files, MBAM may require a reboot in order to remove some of them. If it displays a message stating that it needs to reboot, please allow it to do so. Once your computer has rebooted, and you are logged in, please continue with the rest of the steps.
- You can now exit the MBAM program.
Your computer should now be free of the Cyber Security program. If your current anti-virus solution let this infection through, you may want to consider purchasing the PRO version of Malwarebytes Anti-Malware to protect against these types of threats in the future.
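The guide above tells you to watch for processes whose names are "composed of random numbers or characters." That judgment can be partly automated with a simple heuristic: genuinely random names tend to mix digits into the stem and have high character variety. The sketch below is a rough illustrative heuristic, not part of Rkill or Process Explorer, and the entropy threshold is an arbitrary assumption.

```python
import math
from collections import Counter

def shannon_entropy(name: str) -> float:
    """Bits of entropy per character in the (lowercased) name."""
    counts = Counter(name.lower())
    total = len(name)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_random(process_name: str) -> bool:
    """Flag names like '83xk2v1q.exe' while passing 'explorer.exe'.

    Heuristic: random strings mix digits into the stem and show high
    per-character entropy. The 2.8-bit threshold is an assumption.
    """
    stem = process_name.lower().removesuffix(".exe")
    has_digits = any(ch.isdigit() for ch in stem)
    return has_digits and shannon_entropy(stem) > 2.8

for name in ["explorer.exe", "tsc.exe", "83xk2v1q.exe", "svchost.exe"]:
    print(f"{name:14s} random-looking: {looks_random(name)}")
```

A heuristic like this produces false positives on legitimate names that happen to contain digits, which is exactly why the guide also tells you to check for the rogue's shield or padlock icon before terminating anything.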
People often say that "you are judged by who you associate with." It appears that message also resonates among our fellow great apes, the chimpanzees. Maybe that shouldn't be a surprise. After all, the chimps are right below humans on the evolutionary ladder -- many still use BlackBerries -- so they've worked out similar social structures and status hierarchies. For example, like male humans, male chimps learn to form "coalitions" in order to direct aggression at other male chimpanzees and exert dominance. In the chimp (as well as the human) world, dominance equals more sex. So not only do the most dominant male chimps (and humans) have more "mating" opportunities, members of the most successful coalitions also score more (roadies). But you don't have to go all "alpha" to enjoy the benefits of a dominant lifestyle, as many a personable guy (and chimp, if they could talk) will tell you. A new study by Ian Gilby at Duke University in North Carolina finds that "male chimpanzees with central positions in the coalitionary network were most likely to father offspring and increase in rank. Specifically, those who formed coalitions with males who did not form coalitions with each other were the most successful." Wait, what? "Central positions in the coalitionary network"? Here's an explanation from a press release by the Springer journal Behavioral Ecology and Sociobiology: Gilby and his colleagues studied data from wild chimpanzees gathered over 14 years from the Kasekela community in Gombe National Park in Tanzania. They wanted to test the hypothesis that male coalitionary aggression leads to positive benefits via increased dominance rank and improved reproductive success. 
Of the four measures they used to characterize a male's coalitionary behavior, the only one that was related to both of these factors was "betweenness" -- a measure of social network centrality -- which reflects the tendency to make coalitions with other males who did not form coalitions with each other. The only non-alpha males to sire offspring were males that had the highest "betweenness" scores. These males were also more likely to increase in rank, which is associated with higher reproductive success. In other words, the chimps with the highest "betweenness" rating were like the popular kid in high school who transcended cliques, or a politician who successfully forges bonds with the financial, labor and religious communities. It all adds up to a winning campaign and a steady stream of mistresses. Gilby and his fellow researchers say the study results would indicate that "male chimpanzees may recognize the value of making the 'right' social connections." Let's just hope the chimps keep it real and don't get too carried away trying to prove their social status, like some other great apes I could mention. The world doesn't need members of another species touting their Klout scores.
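The "betweenness" measure mentioned above has a precise graph-theoretic definition: a node's betweenness is, summed over every other pair of nodes, the fraction of shortest paths between that pair that pass through it. A minimal brute-force sketch follows; the coalition graph and the names in it are invented for illustration, not the Gombe data.

```python
from itertools import combinations

# A toy coalition network: an edge means "these two males form coalitions".
EDGES = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "E"), ("C", "E"), ("D", "F")]

graph: dict = {}
for u, v in EDGES:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def all_shortest_paths(s, t):
    """Enumerate every shortest path from s to t (fine for tiny graphs)."""
    best, found = None, []
    stack = [[s]]
    while stack:
        path = stack.pop()
        if best is not None and len(path) > best:
            continue          # already longer than the best known path
        if path[-1] == t:
            if best is None or len(path) < best:
                best, found = len(path), [path]
            elif len(path) == best:
                found.append(path)
            continue
        for nxt in graph[path[-1]]:
            if nxt not in path:
                stack.append(path + [nxt])
    return found

def betweenness(node):
    """Sum over s-t pairs of the fraction of shortest paths through node."""
    score = 0.0
    for s, t in combinations(graph, 2):
        if node in (s, t):
            continue
        paths = all_shortest_paths(s, t)
        score += sum(node in p for p in paths) / len(paths)
    return score

ranked = sorted(graph, key=betweenness, reverse=True)
print(ranked[0])  # the best-connected "broker" in this toy network
```

In this toy graph, node A brokers between otherwise-unconnected cliques and tops the ranking -- the structural position the study associates with reproductive success.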
The Air Force this month said it was looking to award up to two preliminary design contracts worth up to a total of $214 million for development of the Space Fence, which ultimately will help protect Earth by detecting objects in low and medium Earth orbit heading our way. The S-band Space Fence is part of the Department of Defense's effort to track and detect space objects, which can consist of thousands of pieces of space debris as well as commercial and military satellite parts. The Space Fence will replace the current VHF Air Force Space Surveillance System built in 1961. The Space Fence program, which will ultimately cost more than $3.5 billion, will be made up of a system of geographically dispersed ground-based sensors to provide timely assessment of space events, said program manager Linda Haines in an Air Force release. The Space Fence will use multiple S-band ground-based radars -- the exact number will depend on operational performance and design considerations -- that will permit uncued detection, tracking and accurate measurement of orbiting space objects. "That will allow us to reduce susceptibility to collision or attack, improve the space catalog accuracy and provide safety of flight," she stated. "The Space Fence is going to be the most precise radar in the space situational surveillance network," Haines said. "The S-band capability will provide the highest accuracy in detecting even the smallest space objects." The idea is to avoid additional space collisions, which would otherwise add to the thousands of existing objects and debris already in space. All these objects present potential threats to communication or GPS satellites, or even NASA's International Space Station and the shuttle, the Air Force stated. Northrop Grumman, Lockheed Martin and Raytheon got $30 million from the Air Force to start developing the first phase of a global space surveillance ground radar system in 2009.
The Air Force Space Command wants to have the Space Fence running by 2015. The need for such technology is growing. NASA's Orbital Debris Program Office this month said the number of debris objects officially cataloged from the 2007 Chinese anti-satellite test against the Fengyun-1C spacecraft has now surpassed 3,000. By mid-September 2010, the tally had reached 3,037, of which 97% remained in Earth orbit, posing distinct hazards to hundreds of operational satellites, the office stated. The Orbital Debris Program Office this summer said that while over 4,700 space missions have taken place worldwide since the 1960s, only 10 missions account for one-third of all cataloged objects currently in Earth orbit, and six of those 10 debris-producing events occurred within the past 10 years. Debris from Chinese, U.S. and former Soviet Union spacecraft makes up the majority of the junk floating in space. Approximately 19,000 objects larger than 10 cm are known to exist, NASA stated. Follow Michael Cooney on Twitter: nwwlayer8
The modern corporation serves mainly as a conduit for money and power to the hands of a select few. These institutions have so many resources at their command that it is now a simple matter for them to undermine government. This is not the opinion of a wild-eyed socialist but of a lawyer who used to work for these entities. Abraham Lincoln, who is mostly famous for other things, wrote: "Corporations have been enthroned. An era of corruption in high places will follow...until wealth is aggregated in a few hands...and the Republic is destroyed." Does this ring a bell? Today Americans take it for granted that corporations are the only way for them to exist. Few people even question the idea that what's good for business is good for America, even though there is plenty of evidence to the contrary. This idea is so ingrained that Presidents Obama and Bush and their heads of the Treasury, Federal Reserve, SEC and more all put the best interests of giant banks and investment firms ahead of the people and everything else. It is easy to believe these people enriched Goldman et al. because they were corrupt, but the sadder truth may be that they genuinely believed what they were doing was best for the people of the nation. The Founding Fathers never imagined the modern corporation. There is no mention of them in the Constitution because at the time of Independence there were only seven chartered business corporations in the entire nation. And these were nothing like the today's corporations. As writer Jonathan Rowe noted: "The first corporations in the Western tradition were monasteries, boroughs, guilds, and the like. They were vehicles of community and social cohesion; they sought to restrain the tendencies toward self-seeking—not provide an institutional amplifier for them. By the time of the American Revolution, this form had evolved into a kind of franchise, chartered by the legislature to perform a specific public function, such as running a toll road or a bridge." 
That's why corporations aren't mentioned in the Constitution. The Founding Fathers thought corporations were groups of people making money by performing public functions. They were extensions of government that would be restrained by the same checks and balances applied to the country's other governing bodies. Imagine the Judiciary corrupting the Executive branch.

"That assumption fell apart in the period that came to be described as 'Jacksonian Democracy.' The practice of granting charters one by one through legislation had given rise to corruption and abuse, and the Jacksonians opened up the corporate form to all comers, through general incorporation laws." The intent was to end monopolies, not to release corporations from their responsibilities to the community. "While the rights of private property are sacredly guarded," wrote Jacksonian Chief Justice Roger Taney in 1837, "we must not forget that the community also has rights."

At the time, the laws governing corporations all included limits on size and scope, and corporations couldn't buy stock in other corporations. A number of presidents, including James Madison, Teddy Roosevelt, William Howard Taft and Woodrow Wilson, all called for federal chartering of large corporations. Instead, in order to lure companies, the states got into a race to the bottom, with each one imposing fewer legal requirements than the one before. That's how Delaware became the corporate capital of America. Pay your fee, file your papers and you are left alone. That's how America got where it is today, with Goldman Sachs, Citizens United and the long-term well-being of the nation being sold off in exchange for wealth for the few.

So what to do? I am going to go along with Madison, Roosevelt, Taft and Wilson (you have no idea how much it hurts to write that) and the idea of Federal charters for corporations that actually require them to be responsible. It would at least be a start.
There are several kinds of industrial applications for fiber optic cable. A thin fiber of glass or plastic through which light can propagate to carry data and sound is called an optical fiber. These optical fibers are as thin as a human hair. When they are assembled together, they form a cable that can be used for transmitting information and signals. Optical fiber is widely used in the telephone and telecom industry. Optical lighting is also indispensable in medical, aerospace and military applications. Other systems, such as intrusion detection alarms, sense movement by sending light through optical fiber. Thanks to their large data-carrying capacity, these cables are particularly important in local area networks (LANs). Applications such as machine vision lighting are enabled via optical lighting. A major advantage of these cables is their lower cost compared with traditional copper wire. Here are some other important advantages of fiber optic cable:

Long-distance data transmission
High bandwidth can be reached even over long distances using this cable. These cables can carry critical signals without loss of data. They also do not get jammed, making them ideal for mission-critical operations such as sending flight signals.

Immune to electromagnetic interference
Since these cables use light, not electricity, to transmit signals, electromagnetic interference does not usually affect the data transmission process.

Ideal for secure data transmission
It is a known fact that electromagnetic interference (EMI) can also cause data leaks. This is a potential threat for sensitive data transfer operations. It may not always be possible to shield a wire, and even shielding cannot guarantee 100% safety. By contrast, an optical cable has no external magnetic field, so signal tapping is not easily achieved. This makes optical cable the preferred component for secure transmission of sensitive data.
No spark hazard
Electrical wiring constantly needs to be safeguarded against a potential spark hazard. This isn't the case with optical fiber cables, as they are inherently safe. This attribute is especially significant in industries such as chemical processing or oil refineries, where the risk of explosion is high. Signals sent over these cables do not spark.

No heat issues
Fiber optics can carry small amounts of light without the risk of producing heat. Thus, fiber optic cables are safe to use in surgical probes that are inserted inside a patient's body to study internal organs. These same cables are also used during surgeries to relay laser pulses. With no heat or shock hazard, such cables are safe to use during the most critical surgeries. This attribute makes optical cables safe for use in machine vision lighting applications too.

These are a few fundamental advantages of optical cables. There are several other benefits that a professional optical cable manufacturer will be able to discuss with you, along with more information about products such as LSZH cable and armored fiber cable for all your industrial applications.
Save Yourself From "Yourself": Stop Spam From Your Own Address

I just got junk email … from me! It is surprisingly common for users to receive Spam email messages that appear to come from their own address (i.e. "email@example.com" gets a Spam email addressed so it appears to be from "firstname.lastname@example.org"). We discussed this issue tangentially in a previous posting: Bounce Back & BackScatter Spam – "Who Stole My Email Address"? However, many users wonder how this is even possible, while others are concerned that their Spam filters are not catching these messages.

How can Spammers use your email address to send Spam?

The way that email works at a fundamental level, there is very little validation performed on the apparent identity of the "Sender" of an email. Just as you could mail a letter at the post office and write any return address on it, a Spammer can compose and send an email message with any "From" email address and name. This is in fact very easy to do, and Spammers use this facility with almost every message that they send. So, while you do own your domain name and can lock down the accounts you are using to send and receive email, there is no way to prevent someone else from sending an email message that purports to be from you or some address at your domain. The best you can do is to use SPF and/or DKIM or PGP or S/MIME digital signatures to allow your recipients to verify the messages if they want to (though most recipients may not know how to use these technologies). E.g., with SPF and/or DKIM, recipients (including yourself) can use Spam Filters to determine that these messages were not authorized and can thus discard them as fraudulent.

Why do Spammers send you Spam that appears to be from you?

Sending email to you that appears to be from you is an increasingly popular Spamming trick.
As spam filters get more and more complicated, people have taken to adding their own email addresses and/or their domain names to their spam filtering allow lists. The intention is to ensure that no email from other people in their organization (or that they send to themselves) is ever caught in the spam filter by mistake — because no one in their domain is sending spam, right?

The problem is that as soon as you add your own email address or domain name to your spam filtering allow list, all email from these addresses will sail through your spam filters (as requested). This includes all Spam email where the sender address is forged to appear to be from you. It is not really from you, but the only thing that the Spam filter's allow lists care about is whether the From address is on your allow list or not. So, users who see that their spam filters are being ineffective against email that appears to be "from themselves" probably have their email address or domain name on their own allow list and thus have exempted all of that email from filtering.

What are the alternatives to having yourself on your allow list?

Of course, most people do not want to take their domain or address off of their allow list for the very reason they put it there in the first place … they don't want to risk having their internal email caught in the filters. So, what can they do that will meet this requirement and still allow the forged messages to be filtered? The best thing to do is to add only the Internet addresses (IP addresses) of any servers from which you send email (e.g. SMTP servers and WebMail servers) to your allow list instead (if your spam filter allow list supports this). This way, messages sent from the servers that you and your coworkers actually use for sending email will be allowed (and thus you will not lose internal email); however, messages sent from other servers (even if those messages appear to be "from you") will be subject to the normal filtering process.
This will stop most of the forged spam for good, especially if you add DKIM and SPF to further assist your Spam filter in identifying fraudulent messages.

But aren't DKIM and SPF good enough?

It is true that DKIM and SPF can be used to block email sent from servers that are not authorized to send email from your domain; however, not everyone is willing to allow their filters to be so harsh as to block all messages that fail an SPF or DKIM test … as such failures can happen for many different reasons. As a result, failed SPF and DKIM checks commonly make a message more spam-like, but do not always force the message to be considered spam. Contact your filtering provider if you want to update your spam filter so that SPF or DKIM failures will cause the message to be rejected.

So, what do we recommend?

The simplest way to take care of this situation is to:

- Use Premium Email Filtering with SPF-protected Allow Lists to stop this kind of spam completely.
- Make sure you have robust, reliable spam filtering software, and make sure that it's enabled.
- Make sure that any catch-all email aliases are turned off (the ones that accept all email to unknown/undefined addresses in your domain and deliver them to you anyway — these are giant spam traps).
- Make sure that your email address and your domain name are NOT on your own allow or white list(s).
- Make sure that, if you are using your address book as a source of addresses to allow, that your own address is NOT in there (or else don't white list your address book).
- Add the Internet IP address(es) of the servers from which you do send email to your allow list, if possible. Contact your email provider for assistance in obtaining this list and updating your filters with it.
- Add SPF to your domain's DNS.
- Use DKIM.

If you want to go further, consider use of technologies such as PGP or S/MIME for cryptographic signing of individual messages, and consider "closed" email systems … where only the participants can send messages to each other.
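The "allow by sending server, not by From address" recommendation can be sketched in a few lines. This is an illustrative toy, not LuxSci's actual filter; the networks and function names below are made up for the example:

```python
import ipaddress

# Hypothetical allow list of your own sending-server networks
# (your SMTP and WebMail hosts); these addresses are examples only.
ALLOWED_SERVER_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/28"),    # example SMTP cluster
    ipaddress.ip_network("198.51.100.25/32"),  # example WebMail host
]

def allow_listed(sending_ip: str) -> bool:
    """Return True if the connecting server is one of ours.

    Messages from these servers bypass the spam filter; everything
    else -- including mail forged to appear 'from' your own domain
    but sent from an unknown host -- goes through normal filtering.
    """
    ip = ipaddress.ip_address(sending_ip)
    return any(ip in net for net in ALLOWED_SERVER_NETWORKS)

# A forged message claiming to be from you, sent from a stranger's host:
print(allow_listed("192.0.2.77"))    # → False (filtered normally)
# A genuine internal message relayed through your own SMTP cluster:
print(allow_listed("203.0.113.5"))   # → True (bypasses the filter)
```

Note that the check never looks at the From header at all, which is exactly why forged sender addresses no longer get a free pass.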
Crowd-funded ArduSat Satellite Offers Space-based Experiment Time to Science Students ZARAGOZA, Spain & SAN FRANCISCO–(BUSINESS WIRE)–With the successful space launch of ArduSat aboard a H-IIB rocket, the first open satellite platform that allows private citizens to design and run their own applications in space is now on its way to the International Space Station (ISS). “Our Radiation Sensors were developed originally to measure radioactivity levels on Earth. We adapted them to meet the satellite’s restrictions in terms of weight, size and power control, in a nice collaboration between Libelium and ArduSat engineering teams” Included in the standard payload of the two 10cm x 10cm orbiters launched this week are Radiation Sensor Boards designed by Libelium that will monitor radiation levels generated by space phenomena such as sun storms and background activity. This sensing technology acts as a Geiger counter measuring gamma particles produced anywhere in space. Once the ArduSat is released into orbit at an altitude of more than 300km, students from a dozen schools across the United States and select schools in Brazil, Guatemala, India, Indonesia and Israel will access and control the satellites for their science experiments, beginning this fall. One of six pre-built experiments uses the Libelium Geiger counter to detect high-energy radiation levels from space. “Our Radiation Sensors were developed originally to measure radioactivity levels on Earth. We adapted them to meet the satellite’s restrictions in terms of weight, size and power control, in a nice collaboration between Libelium and ArduSat engineering teams,” said David Gascón, co-founder and CTO of Libelium. “We’re making space exploration affordable and accessible to everyone, with a space platform that lets the users innovate. 
The spirit of discovery and sharing that inspires open source development fits perfectly with this aim and makes it come to life," said Peter Platzer, CEO of NanoSatisfi, ArduSat's parent company.

ArduSat Pre-built Experiments include:
- Test for orbital mechanics and dynamics using the Accelerometer + Gyroscope
- Build a 3D model of Earth's magnetic field using the Magnetometer
- Measure temperature changes in space (i.e. cold snap) using the IR Temperature Sensor
- Detect high-energy radiation levels using the Geiger counter
- Build a spectrograph of Earth's Albedo (reflection coefficient) using the Spectrometer
- Take a picture from space

For more information on the capabilities of Libelium's Open Source Sensor Platform go to: http://www.libelium.com/waspmote

Libelium designs and manufactures open source hardware for wireless sensor networks so that system integrators, engineering and consultancy companies can deliver reliable Smart Cities solutions with minimum time to market. All Libelium's products are modular, easy to deploy and include extensive documentation and support through a global community of developers. Libelium's customers range from startups to large international corporations in North America, Asia and Europe. Libelium's open source DIY hardware division, Cooking Hacks, is dedicated to making electronics affordable, easy to learn and fun. Cooking Hacks serves a worldwide community of developers, designers, engineers, hobbyists, inventors and makers who love to create electronics with sensors, robotics, actuators, Arduino and Raspberry Pi. Established in 2006, Libelium is privately held and has headquarters in Zaragoza, Spain. www.libelium.com

NanoSatisfi democratizes access to space exploration, images and data by providing individuals access to a user-programmable in-orbit satellite for $250/week.
With the ArduSat, the company is providing unique educational opportunities using cutting-edge Space technologies, to drive economic competitiveness and inspire a brand new generation of Science, Technology, Engineering and Math (STEM) professionals. http://nanosatisfi.com.
Biffi M., CNRS Functional Ecology & Environment Laboratory | Charbonnel A., CNRS Functional Ecology & Environment Laboratory | Buisson L., CNRS Functional Ecology & Environment Laboratory | Blanc F., Conservatoire d'Espaces Naturels Midi-Pyrénées, Toulouse, France | And 2 more authors.
Aquatic Conservation: Marine and Freshwater Ecosystems | Year: 2016

The implementation of effective and appropriate protection actions is frequently hindered by a lack of thorough knowledge of species ecology, especially in the case of endemic, vulnerable and elusive species. Using a recently updated and unpublished dataset describing the spatial distribution of the Pyrenean desman (Galemys pyrenaicus) in its northern range (French Pyrenees), an Ecological Niche Factor Analysis (ENFA) was conducted to provide a quantitative estimate of local habitat use by this endangered semi-aquatic mammal. A comparative approach was used to investigate potential differences in habitat use among the three main hydrological regions of the French Pyrenees. The Pyrenean desman was identified as a marginal and specialist species concerning the selection of its local habitat in the French Pyrenees. Key habitat variables corresponded mainly to river-bed characteristics (i.e. high heterogeneity of shelters and river substrates, fast-flowing water facies, low amounts of fine sediment) and river-bank characteristics (i.e. high proportion of rocks, low proportion of earth). A difference in habitat selection between the three hydrological regions of the French Pyrenees was also highlighted. A decrease in marginality and specialization from west to east, as well as differences in the habitat variables driving the ecological niche of the Pyrenean desman and its range, suggested a spatial structure in desman populations regarding local ecological factors. These results stress the importance of effective and sustainable river management for the habitat quality of this endangered species and also demonstrate the importance of taking into account the variability in habitat preferences that can exist between geographically distinct populations. This finding has important implications for conservation planning, which should thus be conducted at the population level instead of the traditional species level in order to target the specific needs of each hydrological region. © 2016 John Wiley & Sons, Ltd.
In part one of this series, I covered the basic concepts of data deduplication. Before getting to the next installment, I wanted to take a second and apologize to the readers for the long delay between posts. It's a long story, but the good news is I am back and ready to go. Here in part two, I dig a bit deeper into the guts of how deduplication works and why it's important in ensuring that our personal and business data is continually and efficiently protected.

This is a concern for almost every person who backs up their data to the cloud or shares information with friends and family over the network. In the case of some businesses, cloud service providers might be the preferred method to back up files in the event of a disaster or hardware failure. This is why many service providers and enterprises rely upon the method of deduplication to keep storage costs in check. Expanding upon my last blog on the business benefits of deduplication, let's dive into the details of how this technology works by looking at the different compute methods vendors are using within their dedupe offerings.

The most common methods of implementing deduplication are:
- File-based compare
- File-based versioning
- File-based hashing
- Block or sub-block versioning
- Block or sub-block hashing

File system-based deduplication is a simple method to reduce duplicate data at the file level, and usually is just a compare operation within the file system or a file system-based algorithm that eliminates duplicates. An example of this method is comparing the name, size, type and date-modified information of two files with the same name being stored in a system. If these parameters match, you can be pretty sure that the files are copies of each other and that you can delete one of them with no problems. Although this example isn't a foolproof method of proper data deduplication, it can be done with any operating system and can be scripted to automate the process, and best of all, it's free.
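Such a script can be a handful of lines. The sketch below (Python; the function name is mine, purely for illustration) groups files by name, size and modification time — the same cheap, deliberately non-foolproof comparison just described:

```python
import os
from collections import defaultdict

def probable_duplicates(paths):
    """Group files by (name, size, mtime).

    Files sharing all three attributes are *probably* copies of each
    other -- a quick metadata-only check, with no content inspection.
    """
    groups = defaultdict(list)
    for path in paths:
        st = os.stat(path)
        key = (os.path.basename(path), st.st_size, int(st.st_mtime))
        groups[key].append(path)
    # Return only the groups that actually contain more than one file.
    return [group for group in groups.values() if len(group) > 1]
```

Feeding it a directory walk would flag candidate duplicates for manual review; because metadata can coincide by accident, a real cleanup would verify contents before deleting anything.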
Based on a typical enterprise environment running the usual applications, you could probably squeeze out between 10 percent to 20 percent better storage utilization by just getting rid of duplicate files. Example: File1.txt and File2.txt are the same size and have the same creation time. Most likely, one is a duplicate.

File-based delta versioning and hashing

More intelligent file-level deduplication methods actually look inside individual files and compare differences within the files themselves, or compare updates to a file and then just store the differences as a "delta" to the original file. File versioning associates updates to a file and just stores the deltas as other versions. File-based hashing actually creates a unique mathematical "hash" representation of files, and then compares hashes for new files to the original. If there is a hash match, you can guarantee the files are the same, and one can be removed. A lot of backup applications have the versioning capability, and you may have heard it called incremental or differential backup. Some backup software options (Tivoli Storage Manager is a good example) always use the versioning method to speed backup. You do a full backup the first time, and from then on, only the changes in the data need to be stored. IBM calls this "progressive" backup. Other software solutions use similar techniques to reduce wide area network (WAN) requirements for centralized backup. Intelligent software agents running on the client (desktop, laptop or workstation) use file-level versioning or hashing at the client to send only delta differences to a central site. Some solutions actually send all data updates to the central site and then hash the data once it arrives, storing only the unique data elements.
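File-based hashing of this kind can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation: it hashes each file's contents with SHA-256 and keeps only the first copy of each unique hash.

```python
import hashlib

def file_hash(path, chunk_size=1 << 20):
    """SHA-256 of a file's contents, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def deduplicate(paths):
    """Keep one copy per unique content hash; report the duplicates."""
    index = {}       # hash -> first path seen with that content
    duplicates = []  # (duplicate_path, original_it_matches)
    for path in paths:
        digest = file_hash(path)
        if digest in index:
            duplicates.append((path, index[digest]))
        else:
            index[digest] = path
    return duplicates
```

The `index` dictionary here plays the role of the hash index discussed next: every new hash is looked up in it before anything is stored.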
Most products that use a "hashing" mechanism also require an index to store the hashes so that they can be looked up quickly to compare against new hashes to see if the new data is unique (i.e., not already stored), or there is a hash match and the new data element does not need to be stored. These indexes must be very fast, or handled in such a manner that as the unique data stored increases and becomes fragmented, the solution doesn't slow down during the hash lookup and compare process. Different solutions from various vendors use diverse hashing algorithms, but the process is basically the same. The term "hashing the data" means "creating a mathematical representation of a specific dataset that can be statistically guaranteed to be unique from any other dataset." The way this is done is to run each dataset through a generally understood and approved cryptographic hash function, so that the resulting mathematical "hash" can be used as a lookup within the index to see if any new data hashes match any stored data hashes, so the new data can be ignored.

Block delta versioning and hashing

Block-based solutions work based on the way data is actually stored on disk and do not need to know anything about the files themselves, or even the operating system being used. Block delta versioning and hashing solutions can be used on files (unstructured data) and databases (structured data). Block delta versioning works by monitoring updates on disk at the block level, and storing only the data that changed in relation to the original data block. Block-level delta versioning is how snapshots work. Each snapshot contains only the changes to the original data. Block-level delta versioning can also be used as a method to reduce data replication requirements for disaster recovery (DR) purposes.
Let's say your company wants to keep the remote data up to date every six hours, so you have to replicate changes every six hours to the DR location. If a block of data on disk at the local site is updated hundreds of times during the time delta between the last replication and the new one, but the replication solution uses block delta versioning, only the last update to the block needs to be sent, which can greatly reduce the amount of data traveling from the local site to the DR site.

Block-level hashing works similarly to file-level hashing, except in this case, every block or chunk of data stored on the disk is mathematically hashed and the hashes are indexed. Every new block of data being stored is also hashed, and the hashes are compared in the index. If the new data hash matches a hash for a block already stored, the new data does not get stored, thus eliminating duplicates.

Sub-block delta versioning and hashing

Sub-block-level delta versioning and hashing methods work exactly the same as the block method, except at a more granular level. Sub-block delta versioning works at the byte level and can be many times more efficient in reducing duplicate data than block level. For example, open system servers running Windows, Unix and Linux format disks into sectors of 512 bytes each. The smaller you chunk the data, the more probable it is that you will find a duplicate, but as smaller chunks are used, more hashes are required. There is usually a tradeoff between the deduplication ratio and the size, and therefore speed, of the hash index. A block of data on a Windows server usually takes up eight 512-byte sectors for each four-kilobyte block of data being stored. Since one of the smallest updates to a disk usually occurs at the sector level, if only one sector is updated, then why mark the entire block as updated?
A sub-block delta versioning solution that monitors updates at the sector level is eight times more efficient than one that simply tracks block updates, and it can be up to 64 times more efficient than other replication solutions that sometimes use 32K tracks as the smallest monitored update. An update to a single 512-byte sector on a solution that tracks updates at the 32K track level would need to send the entire track. That's a lot of "white space" and can be an inefficient use of network bandwidth and storage. Sub-block-level delta versioning is also known as "micro-scanning" in the industry.

To hash or not to hash

Hashing-based dedupe solutions typically provide great results in reducing storage requirements for a particular data set, but there is one huge disadvantage over delta versioning. Since everything is stored as a jumble of mathematical hashes, objects and indexes, it requires the data to be "re-constituted" prior to being usable again for applications. This re-constitution process takes time, which may have a negative impact if the data needs to be recovered NOW. Micro-scanning solutions have a slightly lower overall ratio for a particular dataset, but the data is always in the native format of the application and is always immediately available for use. This is important when quick application recovery is the goal. Another benefit of micro-scanning is the ability to restore only the sectors required to recover any lost or corrupted data, so massive databases like data warehouses can sometimes be recovered over the network almost instantly.

In the final part of this deduplication series, I will examine the various implementation methods of data deduplication.

Chris Poelker on Data Deduplication:
2. Deep Dive
3. Implementation methods (free registration required)

This article is published as part of the IDG Contributor Network.
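As a closing illustration of the block-level hashing method described earlier, here is a toy in-memory store (Python; the class name and the fixed 4 KB block size are illustrative, not any product's design). Each unique block is stored once, and a "file" is recorded only as the sequence of its block hashes:

```python
import hashlib

BLOCK_SIZE = 4096  # a typical 4 KB block, as in the Windows example above

class BlockStore:
    """Toy block-level dedupe store.

    Every 4 KB block is hashed; a block whose hash is already in the
    index is never stored a second time.
    """

    def __init__(self):
        self.blocks = {}  # hash -> raw block (the unique data elements)
        self.files = {}   # name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if its hash is not already indexed.
            self.blocks.setdefault(digest, block)
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        # "Re-constitution": rebuild the original bytes from the index.
        return b"".join(self.blocks[h] for h in self.files[name])
```

Writing two files that share content stores the shared blocks only once, while `read` performs the re-constitution step discussed under "To hash or not to hash."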
The Internet is a powerful tool for education. But teachers are often unable to use it effectively because of clumsy Web filtering tools installed in schools, says Professor Craig Cunningham, of National-Louis University. Schools routinely install Internet filtering software, designed to protect students from porn, hate speech, and other inappropriate content, as well as shield the children from sexual predators, and from wasting time on social media sites when they should be learning, Cunningham said. But the filters are put in place without adequate forethought, blocking sites that should be accessible, and vice-versa. Schools don't take an active role in deciding which sites should be blocked, abdicating that responsibility to the private, for-profit vendors who sell the products. The result is that students are deprived of education, Cunningham said.

Cunningham gave a presentation in Second Life, part of a Smarter Technology series of educational talks. (Disclaimer: I have a personal connection with the subject matter and hosts of this talk. Scroll down to the bottom of this post for details.)

Instead of simply blocking sites, students should be permitted, or even encouraged, to access objectionable material, under teacher supervision, to help them learn, Cunningham said. The sites that are edgiest are the most likely to be most educational. "It comes down to education versus prohibition. Do you prohibit students from accessing materials, or do you educate them by letting them access the materials?" he said. "True learning occurs at the margins, true learning occurs at the situations where people encounter materials with which they're unfamiliar, and don't understand, and have misconceptions about."

Schools put filtering software in place as part of the requirement of the Children's Internet Protection Act (CIPA), a U.S. law passed in 2001 covering public schools and libraries.
The law requires filtering to protect children against "inappropriate" and "harmful materials" on the Internet, as well as protecting students' "safety and security" when using e-mail, chatrooms and "other forms of direct electronic communications." Because of First Amendment restrictions, the law gives the U.S. government no oversight authority over the nature of the filters. In theory, that leaves the rules up to state and local government and the teachers. In practice, local authorities generally install Web filtering from private vendors, who make the decisions about what to filter on their own, often keeping the lists of censored sites secret.

Despite the abuse, Cunningham said filtering is necessary. "I'm not arguing that all filtering is bad. But when filtering reaches the point that teachers and students are prevented from accessing materials that in their mind has educational value, that's more like censoring than filtering," Cunningham said. Filtering should be limited to pornography, which is the only thing the law requires.

The U.S. is not alone in enacting Internet filtering. In Cuba, if a computer user at a government-controlled Internet cafe types certain words, the word processor or browser is automatically closed and the user gets a state-security warning. By comparison, in Chicago, if a student or school employee accesses an inappropriate site, a siren sounds, similar to a warning of a natural disaster. The noise is audible to everyone around, and it continues until the site is shut down. Students can disable the siren. An administrator gets an e-mail when inappropriate site access occurs. Inappropriate sites include porn, as well as social networking sites such as MySpace or YouTube.

Filters use a variety of techniques. Some block all sites except those on a whitelist. Others only block sites on a blacklist. Some block sites with banned words, phrases, or even images -- algorithms recognize when a photo is mostly skin.
Other filters block some words or phrases from being typed by users, although that's rare. Most companies use a variety of these techniques. Some filters block all newsgroups, social networking sites, and some search engines. Some filters block translation sites, because those might be used to access inappropriate content in foreign languages, Cunningham said. Some schools give teachers a password to override filters for 30 minutes, which Cunningham said is a great idea. "There are a lot of districts that don't do that, though. In Chicago, nobody has that authority. You have to submit a form to the central district and wait three or four weeks to get a response." Web filtering denies students equal access to education. "The U.S. has areas like Chicago that are relatively liberal and diverse and tolerant of ideas (although the schools are not like that), and then you have small-town Kansas where everyone is white, everyone is a Republican, everyone is a Christian, and that kid is going to be raised in an environment where he has no access to alternative points of view on homosexuality and religion. So you're denying that kid an education -- you're literally doing that," Cunningham said. Web filtering also leads to inequities in education based on household income. Students from more affluent areas have access to the Internet at home and, often, more enlightened parents who can let them access information blocked in schools and libraries. Poorer students without home access don't have those opportunities, Cunningham said. Children need to be educated to face the challenges of the 21st Century, not protected from inappropriate content, Cunningham said. He quoted from a National Research Council study, "Youth, Pornography, and the Internet": "Swimming pools can be dangerous for children. To protect them, one can install locks, put up fences, and deploy pool alarms.
All these measures are helpful, but by far the most important thing one can do for one's children is to teach them to swim." Download the presentation. Disclaimers: I do Internet marketing consulting for Palisade Systems, which makes network security tools that include Web filtering. World2Worlds, which hosts the Smarter Technology event, also provides hosting for Copper Robot, a series of interview programs I host in Second Life. And Smarter Technology is published by Ziff Davis Enterprise, which competes with Computerworld's parent company, International Data Group. Mitch Wagner on Internet censorship in schools
No one doubts that programming, or at least technical literacy, is critical for today's students. But how early should one start slinging code? A blog post by 18-year-old Kenny Tran titled "Didn't Begin When I Was 12" started a conversation on YCombinator. Tran wonders if, at 18, he's too old to become a good programmer, since many started younger. Is 18 too old to start and be successful? Is 12 too young? Many make the argument that the younger the better. Alfred Thompson, in his Microsoft Education blog, says middle school is the time to start. A 13-year-old asks if that's too young on Yahoo Answers. If you do start children early, how?

"Aiden and I built our first Lego NXT robot the other day. We programmed it together. He's five." -- Steve Dembo on blogs.msdn.com

"I started when I was 16, so you're not too young. You can never be too young for programming!" -- Abdur-Rahman on answers.yahoo.com

"I learned to code at about 12, but it took over my life. I don't think I talked to a single girl between the ages of 12 and 19, never mind went to parties." -- barrkel on news.ycombinator.com

"People are already telling you that eighteen is not too old to learn programming. And they are correct. People start at forty and fifty." -- mechanical_fish on news.ycombinator.com

"My own children (13 & 17) have TAUGHT THEMSELVES programming because I thought they were too young when they felt they were ready." -- Erica Roberts on blogs.msdn.com

"I started, well, pre-kindergarten. Or in my teens, for anything major and self-guided. Or at about twenty for producing professional results. Or in my late twenties, as someone who could do research-level algorithms work. Or today. If I stop using a language for a while, I have to relearn it." -- glimcat on news.ycombinator.com

"I think kids should be introduced to the idea as soon as they're old enough to follow instructions, like making brownie mix using the recipe on the box. If you can do that, you can learn some very basic programming concepts." -- Liz Krane on blogs.msdn.com

"You need to look up the language Python. There are a ton of free tutorials and guides on the internet for it, and I have confidence you can learn it." -- Charlie Strangler on answers.yahoo.com

"you DON'T start by learning languages, you start by learning programming." -- Colanth on answers.yahoo.com

If you're a programmer, tell us how old you were when you started.
How to Improve Your Cellular Reception
by Jim Hanks

How often have you stood in the middle of a densely populated city only to find that you cannot get a signal with your cell phone? Most of us blame the poor coverage on our service providers. But is it always their fault? Sometimes it's not. In this article I'll give you a few pointers that will enable you to get a signal where you normally cannot. But, first, let me present some background information on how cellular service works.

Sending and receiving cellular signals

The name "cellular phone" is derived from the practice of dividing wireless coverage areas into hexagonal regions or cells. For instance, if you live within a community that is 100 square miles (10 miles x 10 miles), your provider might divide the area into 4 cells, each approximately 5 miles wide. However, if this area is in a densely populated city, obstructions and system traffic will encourage providers to set up cells that are much smaller (often they are only a city block wide). In the center of each cell, a base station is positioned, which consists of radio equipment attached to a cellular tower. This tower sends and receives signals over the range of frequencies assigned to each cellular service provider by the FCC. The technology used to transmit these signals is similar to FM radio technology, except that cellular transmissions are sent in both directions. When you attempt to make a call, your provider assigns your phone a frequency on which to communicate with a base station. In order to initiate (or receive) a call, one of these frequencies (or channels) must be available. So, why is your phone often unable to get a channel?

Transmission power levels

Cell phones and cellular base stations both transmit at fairly low power levels, thereby limiting the distances that their signals can travel. Now, you're probably wondering, "Why would providers intentionally limit the range of their signals by using low power transmissions?"
Well, there are 2 reasons: First, the FCC assigns each provider a limited number of frequencies to be used for calls. By dividing areas into cells and using low-power transmissions, these providers can use the same frequency for different calls in nonadjacent cells, since short-range transmissions ensure that signals will never overlap. Second, high-power transmissions (like those you'd send with a CB radio) require much stronger batteries. Most people don't want to carry car batteries on the backs of their mobile phones. So, given that there are a limited number of channels, the challenge for providers is to cram as many calls as they can into a single frequency. This is where digital technology becomes useful.

Digital vs. analog

Digital (or PCS) systems[1] are becoming the predominant wireless technologies because they allow more information to be transmitted on a single channel. Much like how music is stored on a CD, a digital wireless network repeatedly samples a voice call and converts it into binary code (a series of 1s and 0s). This code is compressed into digital packets and sent using only a portion of the frequency band. In digital format, up to 10 calls can be held on the same channel, along with features such as Caller ID and voice mail. On the other hand, analog signals work by transmitting pulses of a voice call, much like cassette tapes do. Since analog signals require their own channels and cannot support the same features as digital signals, analog technology is quickly becoming obsolete. However, because analog signals use a lower frequency band (around 800 MHz) than PCS signals (around 1900 MHz), analog systems have greater range. Right now, analog systems also offer truer voice quality; but as technology improves, digital systems will sample at higher rates and approach the quality of analog. If you have no idea what type of system your phone uses, here's a quick way to figure it out.
If you enter a poor coverage area and you hear static until you lose your call, you are using an analog system. If your caller's voice has that underwater, garbled sound as you begin to lose him or her, you are on a digital system. The reason for this garbled sound is that as you lose reception, digital packets are being dropped. After a certain number of packets have been destroyed, the digital system terminates your call.

Other factors influencing reception (and a few remedies)

Besides the transmission technology, there are other reasons you might be getting poor reception. Fortunately, many of these problems are easily corrected. Many people think that a poor connection is often caused by a glut of subscribers using the network at the same time. This belief is relatively unfounded. In digital systems, increased traffic doesn't usually impact voice quality since you can't set up a conversation unless the system is available. Once a frequency has been assigned to you and you initiate a call, space is allocated to your phone. For the most part, this is also true of analog systems. Unless a provider in the area has faulty network design, the only case in which high system traffic affects you is when you cannot receive the initial signal necessary to initiate a call. If this is the case, just keep trying. Your call clarity will be fine once you get a signal. Buildings, structures, and mountains can all obstruct the tower-to-subscriber path. As such, a slight correction of where your antenna is pointing is often the difference between service and no service. If you're on foot and buildings are your problem, improve your reception by making calls at street intersections. And don't make calls from deep inside buildings or before walking into an elevator. If your company cannot receive cellular calls within your building, you might want to talk to your provider about getting a bi-directional amplifier.
These devices are often used for conventions, when people within structures require cellular coverage. You may experience interference from nearby electronic devices (such as computer screens, blenders, or power saws) while they are in operation. Walk away from these devices while you're on a call, or simply turn them off. Everything from humidity to storms can affect the quality of a transmission. Arid days will deliver slightly diminished range because radio waves travel better through moist atmosphere. But if you need to make an important call when humidity is high and there is lightning in the area, expect problems. I have no advice for combating weather. Sometimes you can't beat Mother Nature. If you're experiencing poor reception in a region that you know has good coverage, check your antenna. Many phones must have their antennae either completely pushed in or fully extended in order to maintain clear connections. If your antenna is only partially extended, you'll hear static or your call will get dropped. If your antenna is fully extended yet you still have constant problems in high coverage areas, there might be a problem with your phone. Call the customer service number found in your phone's instruction manual. Often, your battery can be strong enough to attempt a call, but not strong enough to find a signal. Try to keep your battery charged to at least 2 bars on your battery indicator. Buying high-quality batteries (such as lithium-ion batteries) will give you more talk time and therefore more time during which you can obtain a signal. Unlike most countries, the U.S. did not adopt a standardized network when it jumped into the digital wireless world. As a result, the U.S. is experiencing more growing pains than those seen in countries such as Finland (where GSM is the standard) and Japan (where CDMA is the standard). U.S. mobile phones support GSM, iDEN, TDMA, or CDMA.
Each of these cellular technologies can impact voice quality, as can the actual phone or system software. But the reasons why are pretty complicated, and preferences are largely a matter of opinion. So, if you've extended your antenna, you've recharged your battery, and the sun is shining down as you stand on a crest in the middle of Central Park, and you're still getting lousy reception, my last suggestion is to complain to your provider. Getting your company to install additional transmitters isn't always just a matter of the squeaky wheel getting the grease; often providers are unaware of coverage holes. To notify your provider of poor reception areas, you can usually reach customer service departments for free by dialing 611 from your cellular phone. Or, if you can't get enough reception to do so, here are contacts for 3 of the major providers: P.O. Box 755, Atwater, CA 95301; Southwestern Bell Wireless.

[1] PCS stands for Personal Communications Service. This service usually includes extra features such as Caller ID, voice mail, call forwarding, and web browsing. PCS systems use digital technology, yet operate on a different frequency than standard digital systems. Because the frequency assigned to PCS systems is higher, these systems have a shorter range and thus require more cells (and towers) than generic digital networks do.
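The cell-division arithmetic from the beginning of the article (a 10 x 10 mile community split into cells about 5 miles wide, giving 4 cells) can be checked with a short sketch. This assumes square cells purely to keep the arithmetic simple (real cells are hexagonal), and the class and method names here are my own, not from any carrier's tooling:

```java
public class CellMath {
    // Rough cell count for a square service area divided into square cells.
    // This reproduces the article's back-of-the-envelope example only; it is
    // not how real hexagonal cell planning is done.
    static int cellCount(double areaSideMiles, double cellWidthMiles) {
        int cellsPerSide = (int) Math.ceil(areaSideMiles / cellWidthMiles);
        return cellsPerSide * cellsPerSide;
    }

    public static void main(String[] args) {
        // 10 x 10 mile community with 5-mile cells -> 4 cells, as in the text.
        System.out.println(cellCount(10, 5)); // prints 4
        // City-sized cells (say, a tenth of a mile wide) multiply quickly.
        System.out.println(cellCount(10, 0.1));
    }
}
```

Shrinking the cell width is exactly the "densely populated city" case the article describes: the same area needs far more cells, and far more towers.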
Portable computers' feature reach exceeds current battery technology's grasp. Fuel cells should render the battery obsolete for these systems before the decade is over. At December's Eco-Products 2003 show, Hitachi announced its entry into the race to obsolete the battery. The company is now one of two major mobile technology vendors racing to ensure the battery is obsolete as a power source for portable computers long before this decade is over. Battery technology has languished in the face of laptop and handheld computers' rapid advancement. Power-conservation efforts have resulted in laptop-battery life that ranges from about two hours (for models equipped with desktop processors) to 7 hours (for IBM's Centrino-based T-series laptops), but we are still a long way away from being truly wireless. PDAs showed promise (Palm's initial handheld computers lasted up to a week), but as performance demands rise and power-hungry wireless technologies expand, handheld computers are quickly dropping to the top of the laptop range for battery life. The industry needs a change, and it needs it quickly, particularly for emerging platforms like modular PCs, tablet computers, and handheld media players (you'll hear more about these at January's CES in Las Vegas), all of which will need battery life measured in days, rather than hours, to meet their full market potential. Fuel cells are currently considered to be the fix for this problem. Basically little solid-state generators that run on diluted alcohol, these ultra-small machines address the battery issue by replacing or supplementing today's batteries. Representing a lower fire hazard than the lithium-ion batteries currently in laptops, this technology has had three major problems to overcome. The first is size. Until recently, a fuel-cell generator has been too big to use in a portable device.
However, advancements by Hitachi and Toshiba have dropped the size down to a point where the cell can be about the size of an AA battery for handheld devices and about the size of an extended battery for laptops. The second hurdle has been the dilution of the fuel itself. The FAA in particular has been nervous about the problems associated with a flammable liquid in planes, and coming up with a fuel mixture that satisfies its concerns while providing an adequately powerful source of energy has until lately been elusive. Right now, the industry is working with a mix of about 20 percent alcohol to water but wants 30 percent or better to meet performance goals. Finally, the devices have simply been too expensive. While they will never be as inexpensive as batteries, they need to drop to a price that people will be willing to pay. One nice thing about these devices is that they are relatively environmentally friendly. Their "exhaust" is water and carbon dioxide, and some designs (like Toshiba's) are endothermic. (In other words, the harder the device works, the colder it gets.) If you take into account the heat problems with which laptop and modular computer makers are struggling today, this technology could have the unique ability to address one problem (overheating) while it also addresses the need for portable power. When these products hit in 2005, expect them to come in two forms: internal, which replace or supplement the battery technologies used today; and external, which can serve as a power source for devices that were not designed for fuel cells at all. Hitachi is leading with the smaller devices and Toshiba with the larger; both plan to have product market-ready in 2005. (There are other firms I haven't mentioned that are also clearly in this race.) I'm looking forward to asking the flight attendant for two shots of vodka: one for me and one for my laptop computer.
(A laptop that's able to drink me under the table is an interesting concept we will have to explore during New Year's celebrations.) This time of year it is traditional that we look to the future. If fuel cells hit their design and release goals, that future is not only bright, it is well-powered. Rob Enderle is the principal analyst for the Enderle Group, a company specializing in emerging personal technology.
Big data: Will you know it when you see it? - By Shawn McCarthy - Mar 28, 2014 We've all heard of big data. While few of us may agree on exactly what the term really means or how large a data set needs to be in order to qualify as big data, most of us understand that big data is a data set so large and intricate that it can't be managed with traditional IT solutions such as database tools, spreadsheets or storage management structures. As someone who works to verify market sizes and trends, I've spent some time reviewing how big data currently is defined. To qualify as big data, it's not just a matter of how data is counted, but how the information flows and how decisions are made for big data use cases. Over the past few years IDC has worked to establish a specific definition for big data, fully realizing that the definition must shift as some solutions become more mainstream and as the upper limits of big data continue to grow. Currently to make the big data grade, the data collected first needs to meet one of three criteria: - There needs to be more than 100 terabytes of data collected in the set. - The data generated needs to exceed 60 percent growth per year. - The data received is delivered in near real-time, via ultra-high-speed streaming. Then, no matter which of the three criteria has been met, the data also needs to be deployed on a dynamically adaptable infrastructure. If it also meets this standard, it must meet one of the following criteria: First, the data must originate from two or more formats and/or data sources. Second, the data is delivered as a high-speed streaming connection, as in sensor data used for real-time monitoring. That's certainly a long list of qualifiers. So it's no wonder there is ongoing debate about what big data means. However, I strongly believe that only data collections (and associated IT systems) that meet these criteria qualify as big data under IDC's definition. 
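The qualification criteria above can be collapsed into a single boolean check. The following is only an illustration of the article's description of IDC's definition; the class, method, and parameter names are mine, not IDC's:

```java
public class IdcBigDataCheck {
    // Sketch of the two-stage test described in the article:
    // stage 1: size (>100 TB), growth (>60%/yr), or near-real-time streaming;
    // stage 2: dynamically adaptable infrastructure, plus either two or more
    // formats/sources or high-speed streaming delivery.
    static boolean qualifies(double terabytes,
                             double annualGrowthPct,
                             boolean nearRealTimeStream,
                             boolean dynamicallyAdaptableInfra,
                             int distinctFormatsOrSources,
                             boolean highSpeedStreamingDelivery) {
        boolean sizeOrSpeed = terabytes > 100
                || annualGrowthPct > 60
                || nearRealTimeStream;
        if (!sizeOrSpeed || !dynamicallyAdaptableInfra) {
            return false;
        }
        return distinctFormatsOrSources >= 2 || highSpeedStreamingDelivery;
    }

    public static void main(String[] args) {
        // 150 TB on adaptable infrastructure, drawn from three sources: qualifies.
        System.out.println(qualifies(150, 10, false, true, 3, false)); // true
        // The same data on static infrastructure does not make the grade.
        System.out.println(qualifies(150, 10, false, false, 3, false)); // false
    }
}
```

The point of writing it out is that none of the criteria stands alone: a huge archive on static tape silos fails the check, which is exactly the legacy-data-center caveat raised later in the article.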
This is an important point, because with this type of definition, the size of the government big data market can start to be measured and growth, changes and technology preferences noted. As agencies have learned, there are unique challenges involved in managing extremely large data sets, including the way the data is gathered, managed, stored, searched, analyzed and transferred. A whole new IT market is evolving with new tools and technologies designed specifically to work with these oversized sets of information. Cloud computing has helped light a fire under big data because government agencies can quickly have access to the large data storage systems and big data analysis tools they need. By working with information in a single collected set, rather than separately analyzing smaller sets, agencies have found that it's possible to spot trends, to notice correlations between data sets and to analyze real-time changes in the information. For these reasons, technologies such as broad- and narrow-scope data analysis, analytics and data visualization have become closely aligned to big data. Here are just a few examples of big data in the federal space: The NASA Earth Observing System Data Information System. EOSDIS manages data from EOS missions from the point of data capture to delivery to end users at near-real-time rates. It includes the collection, storage and dissemination of several terabytes of data each day. Battlespace networks. Within the Defense Department, battlespace is a term used to describe DOD's unified military strategy, where armed forces within the military's theatre of operations can communicate, share data and make decisions. It includes integrated air, land, sea and space components. The military's battlespace networks are a prominent generator of big data, which is shared via networks, satellites and in some cases huge arrays of hard drives on reconnaissance aircraft that can be offloaded as soon as the plane lands. 
Big Data to Knowledge (BD2K). This National Institutes of Health initiative is meant to help biomedical scientists leverage big data from multiple medical and scientific research communities. It's worth noting that a large amount of data is already located in government data centers. Some people might describe such data stores as big data. But much of the information in legacy data centers is located on a variety of storage types, including both active databases and older tape silos. In its current format, many of these collections would not meet the definition of big data described here. With the federal government now using large cloud-based resources such as Amazon, RackSpace and CleverSafe as cloud providers, we expect to see more vendors partnering with commercial cloud providers to develop cloud-based real-time data processing as a service. This will make it easier for vendors to pitch cloud-based big data solutions that can be ramped up fairly quickly, as long as business and analytical needs can be clearly defined.
Welcome back everybody! I'm back and better than ever with a new round of fresh hacks to share with you! So, with that out of the way, let's talk about what we'll be doing today. There are many services that require passwords in order to access their goodies. Often times we, the attackers, need to steal these passwords in order to take said goodies. One of these services is SSH (Secure SHell). SSH allows for the remote management and use of things like network devices and servers. If we could find the SSH password, we could have control over the target system! Normally, we could look for some password disclosure vulnerability or do some social engineering. But, when all else fails, we can use brute force to try and crack the password the hard way. Today we'll be building a tool that will go through a list of possible passwords to see if they're correct. We'll be building our password cracker in Java, so let's get started!

Step 1: Downloading JSch

To make a long story short, Java does not natively support the SSH protocol. This means that we'll have to use a third-party package in order to build our password cracker. The package we'll be using is JSch. This will allow us to perform the SSH logins, so we need to download it and import it in our Java code. You can download it by running the following command:

wget https://sourceforge.net/projects/jsch/files/jsch.jar/0.1.54/jsch-0.1.54.jar/download -q --show-progress -O jsch.jar

We should get output that looks like this: Now that we've downloaded the package we need, we can get to actually coding our password cracker!

Step 2: Importing Packages

In Java, we need to import quite a number of packages before we can get started building. This step is rather simple to explain, we're just going to import a bunch of packages. So, let's do that now: We can see here that we import a small number of packages, ending with our newly downloaded JSch package. Now that we have our packages, we can get started on the exciting stuff!
Step 3: Declaring Class and Checking Host

In Java, all functions for a certain program must be stored under the class for that program. So, since our program's name is sshbrute, our class name will also be sshbrute. Pretty simple, right? After we declare our class, we're going to make our first function. This function will attempt to connect to a given port on the target system. This is to ensure that the port specified by the attacker is, in fact, open. So, let's take a look at this code: Let's break this down really quick. First, we declare our sshbrute class, nothing special there. Next, we make a function named checkHost. This function opens a socket and attempts to connect to a port given as an argument (this connection attempt does have a timeout set). Let's move on to the next section!

Step 4: Reading a Wordlist

The way this password cracker will work is that it will attempt to log in to an SSH service with a set of passwords. This set of passwords is called a wordlist. These are normally stored in plain text files, so we need a function to read a text file and extract all the passwords we need to try. Let's take a look at it: First of all, our function takes a single argument, a file path. This will be the path to the wordlist file we need to read. Next, it declares an array list to store the passwords in. An array list is like a dynamic array, so we don't have to give it a buffer, we can just add things to it (that makes our job much easier). After declaring our array list, we open up the wordlist file with a buffered reader. We then read the file line-by-line and add each line to the array list until there are no more lines left in the file. Once this is complete, we return our completed array list. Now that we can read and store a wordlist, we can build the function to try them.

Step 5: Attempting Logins

Before we try all of these passwords, we need a function that will accept one password and try it out.
This will keep everything organized in our final function. We'll take a look at the code, then break it down: This function is rather simple. We simply dissected the example code given by the JSch developer website and ripped out the code that is used to log in to SSH. This function will make a new session, configure the password and key checking, and attempt to log in to the service. It will then disconnect from the service and return true or false. Now that we have all our base functions, we can finally make our main function.

Step 6: Build the Main Function

Every Java program must have a main function. This is the function that will be executed when we run our program. We'll start the main function by taking some command line arguments and assigning some variables. Let's take a look at the first half of our main function: We start by checking for the correct number of arguments; if they're missing, we provide a very basic usage message to the user. If the correct number of arguments is supplied, we declare two variables; one being the host address, the other being the port running the SSH service (normally this is port 22, but an admin may configure it to run on a different port for added security). We then do some checking on the first argument and fill out our variables accordingly. Now that we've got this out of the way, we can see the second half of our main function: In the second half of our main function, we use all the functions we made earlier. First, we call the checkHost function to make sure the target is up and running. We've also assigned the target username to its own variable. We then make a new array list and store the result of our wordlist-reading function in it. Next, we print that the cracking has started, along with some information about the attack. Once this print happens, the cracking begins! We start by making a for loop that will iterate through the length of our wordlist.
For each iteration, it will pull that password out of the wordlist and pass it to the crackPass function. If the login is successful, we inform the user and shut down the program. Otherwise, we keep going until we run out of passwords. There we have it, our SSH password cracker is complete! Now we move on to the final step.

Step 7: Testing it Out

Before we end our session today, we're going to test out our new password cracker. I have a simple server set up on my local network running OpenSSH server. So let's crack this password! First, we need to compile our Java code into a class file that we can execute: We can see here that we need to use the -cp flag to force the JSch package to be used in the compilation. Then, we execute the program while again forcing it to use the JSch package. Now that we have our program compiled, we need a wordlist to use. Let's make a simple wordlist now: Nothing really special here, just using some commands to make a very small wordlist. Now that we have a wordlist, we can use it to crack the SSH password: We then execute the program again (forcing the JSch package) and pass all our arguments. We see the functions executing before our eyes for a minute before it returns that the credentials were found. We successfully cracked an SSH password! That's it for this one, I'll see you all soon with interesting new attacks!
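For readers who want a runnable skeleton of the flow described above, here is a self-contained sketch of the wordlist reading (Step 4) and the brute-force loop (Step 6). Since I can't assume you have the JSch jar on hand, the actual SSH login is swapped for a pluggable check; the class and method names here are my own, and in the real tool the predicate would be the JSch-based crackPass function from Step 5:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class SshBruteSketch {

    // Step 4: read one candidate password per line into a dynamic list.
    // Taking a Reader keeps the sketch self-contained; wrap a FileReader
    // around the wordlist path to match the article's file-based version.
    static List<String> readWordlist(Reader source) {
        List<String> passwords = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                passwords.add(line);
            }
        } catch (IOException e) {
            throw new RuntimeException("could not read wordlist", e);
        }
        return passwords;
    }

    // Step 6: try each candidate until one succeeds. In the real tool,
    // tryPassword would be the JSch-based crackPass function; it is
    // pluggable here so the sketch runs without the library.
    static String crack(List<String> wordlist, Predicate<String> tryPassword) {
        for (String candidate : wordlist) {
            if (tryPassword.test(candidate)) {
                return candidate; // credentials found
            }
        }
        return null; // ran out of passwords
    }

    public static void main(String[] args) {
        List<String> words = readWordlist(new StringReader("letmein\nhunter2\nqwerty"));
        // Stand-in for the SSH login: pretend the real password is "hunter2".
        String found = crack(words, pw -> pw.equals("hunter2"));
        System.out.println(found); // prints hunter2
    }
}
```

Against a real target you would replace the stand-in predicate with a call into JSch's Session API (as in Step 5), passing the host, port, and username taken from the command-line arguments.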
Nowadays, the introduction of high-definition digital video formats and the requirements for carrying high-speed signals have ushered in a higher demand for quality cable construction all the way down the line. A new level of care and expertise in installation is required to avoid damage and the accompanying loss of bandwidth. Here we take a look at the current field of cables and connectors being offered to meet these new challenges. Cable is the medium through which information usually moves from one network device to another. There are several types of cable which are commonly used with LANs. In some cases, a network will utilize only one type of cable; other networks will use a variety of cable types. The type of cable chosen for a network is related to the network’s topology, protocol, and size. Understanding the characteristics of different types of cable, and how they relate to other aspects of a network, is necessary for the development of a successful network. The mainstream types of cable currently used in networks are Unshielded Twisted Pair (UTP) cable, Shielded Twisted Pair (STP) cable, coaxial cable, and fiber optic cable. Twisted pair cable comes in two varieties: shielded and unshielded. The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The cable has four pairs of wires inside the jacket. Each pair is twisted with a different number of twists per inch to help eliminate interference from adjacent pairs and other electrical devices. The tighter the twisting, the higher the supported transmission rate and the greater the cost per foot. Coaxial cable can support greater cable lengths between network devices than twisted pair cable. The two types of coaxial cabling are thick coaxial and thin coaxial. Fiber optic cable transmits light rather than electronic signals, eliminating the problem of electrical interference.
This makes it ideal for certain environments that contain a large amount of electrical interference. It has also made it the standard for connecting networks between buildings, due to its immunity to the effects of moisture and lightning. Fiber optic cable can transmit signals over much longer distances than coaxial and twisted pair, and it can carry information at vastly greater speeds. This capacity broadens communication possibilities to include services such as video conferencing and interactive services. The cost of fiber optic cable is comparable to copper cabling; however, it is more difficult to install and modify. A connector has several functions. It aligns the fiber with emitters in transmitters, adjacent fibers in splices, and photo-detectors in receivers. Various connector styles have been developed, each with its own advantages, disadvantages, and capabilities. All fiber optic connectors have four basic components: the ferrule, connector body, cable, and coupling device. For multimode networks, such as those used in buildings and campuses, the ST is the most common fiber optic connector. This connector has a long cylindrical ferrule for holding the fiber and a bayonet mount. The ST connector is considered the most popular multimode connector because it is cheap and easy to install. The SC connector, a snap-in connector that latches with a simple push-pull motion, is used in single mode systems. This connector shows excellent performance and is also available in a duplex configuration. The MU connector is more popular in Japan and looks like a miniature SC with a 1.25-mm ferrule. The LC connector is a standard ceramic ferrule connector, half the size of an ST connector. This connector is used in single mode systems, performs well, and is easily terminated with any adhesive. The E2000/LX5 is a connector similar to the LC, but with a shutter over the end of the fiber.
Used in multimode systems only, the MT-RJ connector is duplex, with both fibers in a single polymer ferrule. Pins are used for alignment, with male and female versions. Connectors come in single mode and multimode varieties. Single mode fiber optic connectors can have a PC, UPC, or APC polish, while multimode fiber optic connectors have only a PC or UPC polish. PC, UPC, and APC refer to how the ferrule of the fiber optic connector is polished. Multimode connectors usually have a black or beige boot; single mode PC and UPC connectors are usually blue or black, while single mode APC connectors are green. Insertion loss is an important specification of fiber optic connectors: the smaller, the better. APC insertion loss is smaller than UPC, and UPC is smaller than PC. Both cables and connectors are important components of a fiber optic network. They are also the key parts used in fiber optic patch cords and fiber optic pigtails. While there are many types of cables and connectors, the choice of cable and connector depends entirely on the network you install.
A project currently under way will help Washington landowners and local officials make informed decisions about where it is safe to build in and around areas prone to flooding. The Federal Emergency Management Agency (FEMA) and the Department of Ecology (Ecology) are updating existing flood hazard maps in several Washington counties that have high risk, flood-prone areas. In many Washington communities, it has been about 20-30 years since flood hazard maps have been updated. The new maps will depict flood hazards more accurately, including changes in flooding patterns. The mapping project is part of a nationwide effort by FEMA. Ecology is helping FEMA put the new maps into a digital, electronic format. The more detailed maps will eventually be available on the Internet. The new digitized maps will better represent all the geographical features and hazards within a particular flood plain. The work will involve a series of engineering assessments, computer modeling, geographic information system (GIS)-based mapping, and public meetings. According to FEMA Regional Administrator Susan Reinertson, flood plains often cover more than a single county, city, town and related urban growth boundaries. "We're continuing to improve the quality and accuracy of national flood hazard data by developing Geographic Information System-based products with the best technologies," Reinertson said. "The new digitized maps will provide communities with flood maps and data that are more reliable, easier to use and more readily available." Completed digital flood hazard maps are available now to jurisdictions in Island, Ferry, Kitsap and Whatcom counties. Preliminary digital maps are available to jurisdictions in Adams, Clark, Grant, King, Pierce and Snohomish counties. New flood hazard maps for jurisdictions in Clallam, Cowlitz, Grays Harbor, Lewis, Skagit, Spokane and Yakima counties are scheduled to be revised within the next two years. 
For jurisdictions in Asotin, Benton, Chelan, Columbia, Douglas, Ferry, Franklin, Garfield, Jefferson, Klickitat, Lincoln, Mason, Okanogan, Pacific, Pend Oreille, San Juan, Skamania, Stevens, Thurston, Wahkiakum, Walla Walla and Whitman counties, new flood hazard maps are to be revised in the next three to five years. "These maps are vital in helping local governments make decisions about where homes, businesses and utilities can be built safely -- and where they shouldn't be built due to past and potential flooding," said Ecology's Dan Sokol, who coordinates the National Flood Insurance Program for the state. "The revised maps will help save lives and property and reduce economic harm. It's critical that our maps be as accurate as possible." Reinertson said the new digital maps will help FEMA and Ecology in a number of ways. "The new digital maps are a vast improvement over the old paper maps," Sokol said. "They will be more comprehensive, show more on-the-ground information, and be more accessible to citizens and officials who need the information." FEMA had planned to update the flood hazard maps in all 39 Washington counties. The federal agency was going to convert its existing paper maps into electronic format. However, FEMA changed its policy after the public raised concerns that high-risk areas needed improved, more accurate hazard maps.
Check out this short documentary about a Turing Machine built out of Legos, by Jeroen van den Bos and Davy Landman at the CWI (Centrum Wiskunde & Informatica) in Amsterdam, Netherlands. The machine was built for the CWI exposition "Turings Erfenis" ("Turing's Legacy") to honor the 100th birthday of Alan Turing. As the documentarian notes, "Alan Turing was a brilliant mathematician who helped define the theoretical model of the computer as we know it today. He was a visionary, one of the few people of his time who recognized the role the computer would play for humanity. The Turing Machine (1936) is an adequate model of a computer. It can do anything the computers of today or tomorrow can do." More details on the project: www.legoturingmachine.org More details on the making of the video: www.ecalpemos.nl/2012/06/18/lego-turing-machine-video Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
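The quoted claim — that a Turing machine "can do anything the computers of today or tomorrow can do" — is easier to appreciate with a tiny simulator. The sketch below is not the Lego machine's rule set; it is an invented one-state machine that simply flips the bits of its tape, to show how little machinery the model needs (a tape, a head, a state, and a transition table):

```java
import java.util.HashMap;
import java.util.Map;

public class TinyTuringMachine {
    // A transition maps (state, symbol) -> (next state, symbol to write, head move).
    static class Rule {
        final String next; final char write; final int move;
        Rule(String next, char write, int move) {
            this.next = next; this.write = write; this.move = move;
        }
    }

    // Run the machine until no rule applies or the head leaves the tape;
    // return the final tape contents.
    static String run(Map<String, Rule> rules, String tape, String start) {
        char[] cells = tape.toCharArray();
        String state = start;
        int head = 0;
        while (head >= 0 && head < cells.length) {
            Rule r = rules.get(state + "," + cells[head]);
            if (r == null) break;          // no matching rule: halt
            cells[head] = r.write;         // write symbol under the head
            head += r.move;                // move the head left (-1) or right (+1)
            state = r.next;                // change state
        }
        return new String(cells);
    }

    public static void main(String[] args) {
        // One-state bit flipper: flip each cell and move right until the tape ends.
        Map<String, Rule> rules = new HashMap<>();
        rules.put("s,0", new Rule("s", '1', 1));
        rules.put("s,1", new Rule("s", '0', 1));
        System.out.println(run(rules, "0110", "s")); // prints 1001
    }
}
```

Everything a real computer does reduces, in principle, to a (much larger) table of rules like this one — which is what the Lego machine demonstrates physically.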
A group of students with the University of New South Wales have built an electric car that currently holds the record for the fastest solar powered vehicle. In 2011, they built a car that was able to reach 55mph. Now with the new vehicle, named eVe, the group is hoping to break the record for the highest average speed over a 310 mile distance. The current record is 45mph. eVe, the new electric test car, is powered by Li-ion batteries, solar panels, and electric motors that produce an amazing 97% efficiency. The car is capable of reaching a max speed of 87mph. If driving at highway speeds, the car uses about as much electricity as a kitchen toaster. It’s also got an impressive range of 500 miles if it uses the battery pack along with help from the solar panels. The body is made of carbon fiber, which is extremely light – a mere 661lbs. The front wheels are made of carbon fiber and the rear wheels are aluminum. The group of students is hoping that some of their ideas on efficiency will eventually make it to production vehicles sometime in the future.
1. Blocking Social Media In this video, you’ll learn how to block access to social media websites using FortiGuard categories. You’ll need an active license for FortiGuard Web Filtering services. Web filtering with FortiGuard categories allows you to take action against a group of websites in a certain category. In this example, you’ll learn how to block websites that fall into the social media category. Computers on your internal network will not have access to any websites that fall into FortiGuard’s social media category. You can go to the FortiGuard website to find out which websites are included in a category. If there are any additional specific websites and subdomains that you’d like to block, you can combine FortiGuard categories with our Static URL filter. For more information about categories visit the FortiGuard Center at http://www.fortiguard.com/webfilter. For more information about Static URL filters visit the Cookbook website at: http://cookbook.fortinet.com/blocking-facebook-54/. Visit Fortinet's docum
While computer systems have undoubtedly changed the way we do business, there are downsides to this, with one of the biggest being security. It’s challenging to keep your systems completely secure, especially since new threats are uncovered on a near daily basis. That’s why you need to keep abreast of new potential threats and adopt trusted IT security services too. One of the latest issues to come to light is a device that can infect your computer when connected to a USB port. While USB threats aren’t anything new – USB thumb drives are well known to be used by some employees to copy and take important files with them when they leave the office – this latest threat is a little different. Hackers have developed a USB stick that can bypass Windows Autorun features and infect your system. How do these drives work? As you may have noticed, when you connect a device like an external hard drive to your computer via the USB port, Windows will not automatically run or open the drive. Instead, you will get a window with a number of options, including: Open folder to view files, Download pictures, Play files, etc. The reason for this is that hackers figured out a number of years ago how to put a virus on a USB stick which, when plugged into the computer, would be auto-run (started up) by Windows and infect the system. Hackers have recently figured out how to trick this feature. What they have done is create a flash drive that looks like a USB memory stick. Only, when you plug it into a computer, Windows thinks it’s a plug-and-play peripheral like a keyboard, and will allow it to run. There is memory on the stick where hackers can write and store a virus or infection, which will then run, infecting the system. What does this mean for my company? Because these devices are nearly indistinguishable from real memory drives, it is nearly impossible to spot and therefore stop them from infecting systems.
Because these drives are currently hard to find and infection rates are generally low, many companies probably don’t have to worry too much. However, you can bet that these drives will become more popular in the near future, so you should still be aware of this risk and understand that these drives exist. Some companies have started to take action by disabling USB ports, monitoring what employees plug into their machines, and even providing employees with tamper-proof USB drives. One thing you might have to concern yourself with is if you allow employees to bring in their own drives. In general, if you take steps to ensure that the drives being used are legitimate and approved by the company, this shouldn’t be much of a problem. Of course, keeping your security systems and anti-virus scanners up to date and functioning is always a good idea. If you would like to learn more about this security threat and what you can do to stop it, including how we can help minimize risks, please contact us today to see how our systems can help you.
A massive security risk in wireless mouse and keyboard dongles is leaving billions of PCs and Macs, and millions of enterprise networks, at risk. Using an attack which Bastille researchers have named “MouseJack,” hackers can remotely hijack the mice from up to 100 meters away. Once paired, the MouseJack operator can insert keystrokes or malicious code with the full privileges of the PC owner and infiltrate networks to access sensitive data. The attack is at the keyboard level; therefore, PCs, Macs and Linux machines using wireless dongles can all be victims. Affected vendors include Logitech, Dell, HP, Lenovo, Microsoft, Gigabyte and AmazonBasics, but most non-Bluetooth wireless dongles are vulnerable. “MouseJack poses a huge threat, to individuals and enterprises, as virtually any employee using one of these devices can be compromised by a hacker and used as a portal to gain access into an organization’s network,” said Chris Rouland, founder, CTO, Bastille. “The MouseJack discovery validates our thesis that wireless internet of things (IoT) technology is already being rolled out in enterprises that don’t realize they are using these protocols.” As protocols are being developed so quickly, they have not been through sufficient security vetting, he added: “The top 10 wearables on the market have already been hacked and we expect millions more commercial and industrial devices are vulnerable to attack as well. MouseJack underscores the need for security across the entire RF spectrum as exploitation of IoT devices via radio frequencies is becoming increasingly popular among the hacker community.” The MouseJack vulnerability affects a large percentage of wireless mice and keyboards, as these devices are ubiquitous and often found in sensitive environments. While some vendors will be able to offer patches for the MouseJack flaw with a firmware update, many dongles were designed not to be updatable.
Consumers will need to check with their vendor to determine if a fix is available or consider replacing their existing mouse with a secure one. “Wireless mice and keyboards are the most common accessories for PCs today, and we have found a way to take over billions of them,” said Marc Newlin, Bastille’s engineer responsible for the MouseJack discovery. “MouseJack is essentially a door to the host computer. Once infiltrated, which can be done with $15 worth of hardware and a few lines of code, a hacker has the ability to insert malware that could potentially lead to devastating breaches. What’s particularly troublesome about this finding is that just about anyone can be a potential victim here, whether you’re an individual or a global enterprise.” Photo © anaken2012
Technology Primer: OpenStack
Course Length: 0.5 day, Instructor Led
Cloud computing is transforming enterprise IT as well as communication service provider networks, and OpenStack is the open source Infrastructure as a Service (IaaS) solution for building and managing shared clouds. This course provides a conceptual understanding of the benefits, capabilities, and high-level architecture of the OpenStack IaaS. Then we explain the functionality provided by each of the key services, such as Keystone, Nova, Glance, Neutron, Cinder, and Swift, as well as Heat orchestration. Finally, we will discuss OpenStack orchestration and telemetry services and how OpenStack integrates with NFV and SDN. This course is designed for professionals in the industry who need to develop a high-level understanding of OpenStack. After completing this course, the student will be able to:
• Explain the motivation for implementing IaaS
• Define IaaS and Cloud Computing Options
• Identify the benefits and applications of IaaS and OpenStack
• Diagram OpenStack’s Logical and Physical architectures
• Discuss roles of various OpenStack Services
• Describe how OpenStack IaaS can provide redundancy for a tenant Virtual Machine
• List capabilities of Role Based Authentication and Control for OpenStack user management
• Discuss how OpenStack integrates with NFV and SDN
• Describe OpenStack orchestration and Telemetry services
1. OpenStack IaaS Architecture and Services
1.1. Brief history and releases
1.2. OpenStack architecture
1.3. OpenStack services
2. Virtualization and Cloud Fundamentals
2.1. Physical vs. Virtualized
2.2. Hypervisor – What and why?
2.2.1. Resource Virtualization
2.3. Virtual machines vs. containers
3. OpenStack Capabilities and Limitations
3.1. Key capabilities
3.1.2. Role-based authentication
3.1.3. Lifecycle management
3.1.4. VM instantiation
3.1.5. Message queue (RabbitMQ)
3.2. Limitations and disadvantages
4. OpenStack IaaS Operations
4.1. Cloud segregation techniques
4.2. End-to-end operation of creating a tenant network
4.3. IaaS operational management
4.4. Telemetry service
5. Putting it all together
5.1. Integration with NFV and SDN
Virtual LANs (VLANs) seem to be one of the easiest topics to configure in the CCNA course. However, there can be some tricky things that are often overlooked. First, you must create your VLANs, then configure trunking, and lastly assign ports to VLANs. To begin, there are three modes you can enter to configure VLANs: VLAN Database mode, VLAN Configuration mode, and Interface Configuration mode. VLAN Database mode is deprecated, so it may not be available in future releases. % Warning: It is recommended to configure VLAN from config mode, as VLAN database mode is being deprecated. Please consult user documentation for configuring VTP/VLAN in config mode. VLAN 5 added: VLAN Configuration mode is a sub-mode inside of global configuration mode that allows you to configure all of the basic options. Now for Interface Configuration mode, you have to issue the command switchport access vlan vlan#. Usually this only applies a VLAN to a port; however, if the VLAN in question isn’t inside the VLAN database, then it will be created. SW1(config)#interface fastEthernet 0/5 SW1(config-if)#switchport access vlan 5 % Access VLAN does not exist. Creating vlan 5 By the way, the VLAN database is a file inside of flash (not NVRAM) called vlan.dat. (Later we will have a post about VLAN Trunking Protocol (VTP) so the management of these VLANs can be simplified in a larger network.) Secondly, your next objective will be to create your trunks. This can be done dynamically or manually; however, in many cases it should be done manually. It is accomplished with the interface command switchport mode trunk. SW1(config)#interface fastethernet 0/10 SW1(config-if)#switchport mode trunk Also, a majority of the time, the Ethernet interface must be Fast Ethernet capable or faster. Although there are two different types of encapsulation that can be used for trunking, newer model switches only support IEEE 802.1Q. (In a future blog I will discuss options that are necessary for modifying 802.1Q trunks.)
Lastly, you should apply the VLAN to the interface with the command switchport access vlan vlan#. SW1(config)#interface fastEthernet 0/15 SW1(config-if)#switchport access vlan 50 You can verify this with the show vlan brief command, the show interface switchport command, or the show interfaces status command. The output of these commands shows different information; however, each lists the VLAN assigned to the interface. Also, the commands for configuring trunks and assigning ports to VLANs are located in the running-config and, (later) the startup-config. (There also will be future posts about the appearance of these different show commands.) Just remember, with configuring VLAN information there are different things that have to be taken into account: there are different modes for creating VLANs, and considerations for configuring trunks and assigning VLANs to interfaces. Author: Jason Wyatte
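Pulling the three steps together, a minimal end-to-end session might look like the following sketch. The interface numbers and the VLAN name are illustrative, not from a specific device:

```
! Step 1: create the VLAN in VLAN configuration mode
SW1(config)#vlan 50
! "USERS" is an example name, not required
SW1(config-vlan)#name USERS
SW1(config-vlan)#exit
! Step 2: manually configure a trunk toward the neighboring switch
SW1(config)#interface fastEthernet 0/10
SW1(config-if)#switchport mode trunk
SW1(config-if)#exit
! Step 3: assign an access port to the VLAN
SW1(config)#interface fastEthernet 0/15
SW1(config-if)#switchport access vlan 50
SW1(config-if)#end
! Verify the result
SW1#show vlan brief
```

Note that if VLAN 50 had not been created in Step 1, the switchport access vlan 50 command would create it automatically, as discussed earlier.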
One of the first initiatives for secure booting was the Unified Extensible Firmware Interface (UEFI) Initiative. UEFI is a superior replacement for the Basic Input Output System (BIOS) and a secure interface between the operating system and the hardware firmware. The UEFI Initiative was a joint effort by many companies to minimize the risks of BIOS attacks from malware that may compromise the system. It was started by Intel, which termed it the Extensible Firmware Interface (EFI), for its Itanium-based systems, since BIOS lacked the inherent capability to secure vulnerable firmware. One of the aforementioned BIOS attacks was the Mebromi rootkit, a class of malware that plants itself in the BIOS. Like the BIOS, the UEFI is the first program in the booting process and is installed during the manufacturing process of the hardware. UEFI has the inbuilt capability to read and understand disk partitions and different file systems. UEFI has several advantages, including the ability to boot from hard disks larger than 2 TB using a GUID Partition Table, excellent network booting, and CPU-independent architecture and drivers. It uses the GUID Partition Table, with globally unique identifiers to address partitions, and can boot from hard disks with a capacity of around 9.4 ZB (1024x1024x1024 GB). Secure boot is a UEFI protocol that ensures the security of the pre-OS environment. The security policy integrated in the UEFI works by validating the authenticity of components. UEFI has a modular design that gives system architects and hardware designers greater flexibility in designing firmware for cutting-edge computing and the demand for higher processing capabilities. The sequence of booting remains the same: a computer boots into the UEFI, certain actions follow, and ultimately the operating system is loaded. Furthermore, the UEFI controls the boot and runtime services and the various protocols used for communication between services.
The UEFI resembles a lightweight operating system that has access to all the computer’s hardware and various other functions. The transition from EFI to UEFI continued with Itanium 2 systems, followed by System x machines, and now the new Intel and AMD series ship with inherent UEFI capabilities. Once we power on a UEFI-capable computer, code execution starts; the firmware configures the processor and other hardware and gets ready to boot the operating system. To date, UEFI has been used with 32/64-bit ARM, AMD and Intel chips, and for each of these platforms the boot code has to be compiled specifically for the target platform. UEFI offers support for older extensions like ACPI, which makes it backward compatible with components that do not depend on a 16-bit runtime environment. Once a system is powered on, the firmware checks the signature of the firmware code that exists on hardware components like hard disks, graphics cards and network interface cards. Next, Option ROMs prepare and configure the hardware peripherals for handoff to the operating system. It is during this process that the firmware checks the signatures embedded inside each firmware module against a database of signatures already in the firmware. If a match is found, that particular hardware module is allowed to execute. Hence, it works through a checklist, matching the signatures against the firmware database, and denies further action if a particular component signature is found in the Disallowed list, which means that it may be infected with malware. The main database is actually segmented into an Allowed and a Disallowed list. The Allowed list contains the trusted firmware modules, while the Disallowed list contains hashes of malware-infected firmware, whose execution is blocked to maintain the integrity and security of the system.
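The Allowed/Disallowed matching described above amounts to hashing a firmware module and looking the hash up in two lists, with the Disallowed list taking precedence. The following is a simplified illustrative sketch of that lookup (the list contents, verdict strings, and module bytes are invented for illustration; real UEFI databases hold signatures and certificates, not just bare hashes):

```java
import java.security.MessageDigest;
import java.util.HashSet;
import java.util.Set;

public class FirmwareCheckSketch {
    // Hex-encoded SHA-256 of a firmware module's bytes.
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Decide whether a firmware module may execute, mimicking the
    // Allowed/Disallowed list lookup described in the text.
    static String verdict(byte[] module, Set<String> allowed, Set<String> disallowed)
            throws Exception {
        String hash = sha256Hex(module);
        if (disallowed.contains(hash)) return "BLOCKED"; // known-bad module wins
        if (allowed.contains(hash))    return "EXECUTE"; // trusted module
        return "UNKNOWN";                                // policy-dependent
    }

    public static void main(String[] args) throws Exception {
        byte[] module = "example-gpu-option-rom".getBytes(); // invented bytes
        Set<String> allowed = new HashSet<>();
        allowed.add(sha256Hex(module));            // pre-loaded at manufacture
        Set<String> disallowed = new HashSet<>();  // empty "bad" list for the demo
        System.out.println(verdict(module, allowed, disallowed)); // EXECUTE
    }
}
```

The key property the sketch preserves is the precedence order: a hash present in both lists is still blocked, which is how the firmware maintains integrity even if a once-trusted module is later found to be infected.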
The original equipment manufacturer installs a unique signature and keys during the manufacturing process for the secure booting process. This trust relationship is built on a digital certificate exchange, commonly known as Public Key Infrastructure (PKI). PKI is the core infrastructure of the secure boot feature in UEFI. A Public Key Infrastructure is a set of hardware, software and policies used to create, manage and distribute digital certificates with the help of a Certificate Authority (CA). The secure boot feature requires the firmware to be at UEFI version 2.3.1 or higher. The secure booting feature mainly addresses rootkits and malware that may target system vulnerabilities even before the operating system loads. This feature also protects systems from bootloader attacks and firmware compromises. A cryptographic check takes place at boot time to verify that the operating system trying to boot is genuine and not compromised by malware or rootkits. A while ago there was a dispute between Microsoft and the Free Software Foundation, in which the latter accused the former of trying to use the secure boot feature of UEFI to prevent the installation of other operating systems, such as different Linux versions, by requiring that computers certified for Windows 8 ship with secure boot enabled through a Microsoft private key. Microsoft controls the key signing authority, and anyone who wanted to boot an operating system on hardware certified for Microsoft Windows would have to pay Microsoft to have it signed. The computer hardware would itself have a copy of Microsoft’s public key and would use it to verify that the signature genuinely came from Microsoft. If any modifications were made, the verification would fail and the computer would refuse to carry the boot process any further.
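The cryptographic check at boot time boils down to verifying a digital signature over the bootloader image with a public key embedded in the firmware, as in the paragraph above. Below is a minimal stdlib sketch of that verification step; the key pair is generated on the fly purely for illustration (standing in for the vendor's key), and the image bytes are invented:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.Signature;

public class BootVerifySketch {
    // Verify a bootloader image against its signature using the platform's
    // embedded public key -- the essence of the secure boot check.
    static boolean verify(byte[] image, byte[] sig, PublicKey pub) throws Exception {
        Signature v = Signature.getInstance("SHA256withRSA");
        v.initVerify(pub);
        v.update(image);
        return v.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        // Generate a throwaway key pair standing in for the OS/OEM vendor key.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair vendor = gen.generateKeyPair();

        // The vendor signs the bootloader image before shipping.
        byte[] image = "bootloader-v1".getBytes(); // illustrative image bytes
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(vendor.getPrivate());
        s.update(image);
        byte[] sig = s.sign();

        System.out.println(verify(image, sig, vendor.getPublic())); // genuine: true
        image[0] ^= 1;                              // tamper with the image
        System.out.println(verify(image, sig, vendor.getPublic())); // tampered: false
    }
}
```

This is exactly why a modified bootloader fails to start under secure boot: any change to the image invalidates the signature, so verification fails and the firmware halts the boot.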
Microsoft then denied that this strategy was built to prohibit the installation of other operating systems. It further said that there was the option either to disable secure boot or to boot Windows 8 with the secure boot feature enabled. The developers of the open source community were concerned, since most Linux vendors did not have the power to get their certificates into the UEFI system. Red Hat, Ubuntu and SUSE would no doubt have implemented their certificates in the UEFI, but the problem lies with communities like Slackware, NetBSD and others. The main concern was that there are many UEFI motherboard manufacturers, and getting certificates included in each of them would not be an easy task for non-commercial open source communities, since it would require a lot of time and money. All the binaries needed to be signed with certificates from the binaries’ vendor, and this was indeed a tough task. And the certificate that signed those binaries had to be imported into the UEFI, which would enable that particular operating system to function securely. The problem would arise when a hardware vendor would not allow disabling secure boot from the setup menu and did not install certificates from other operating systems. In that case, users who buy computers with such capability would not be able to make use of open source Linux operating systems, through either dual boot or single boot, since the secure boot feature would need the certificate from that particular operating system. The protests have taken the form of Facebook pages like “Stop the Windows 8 Secure Boot Implementation” and campaigns like “Will your computer’s Secure Boot turn out to be Restrictive Boot?”
Until the public key of each open source operating system was available to the hardware vendor, GNU/Linux users would be unable to enjoy the combination of Secure Boot with the inherent security of Linux, and if the vendor did not provide an option to disable Secure Boot on a given machine, life would become very difficult for Linux users. Such a rollout would prohibit technical users from running their own custom Linux builds and restrict them to whatever the computer's manufacturer permits. The Certificate Authority (CA) would be chosen by the computer manufacturer, who would ultimately decide whether a particular operating system is allowed or not. A simple solution to this controversy would be to make the user the CA, giving him or her the authority to decide which operating systems to trust under Secure Boot. On the other hand, this would expose non-technical users to the danger of being tricked into using a malicious operating system. Everything has its pros and cons, and that is how technology goes. Fortunately, nothing is settled yet, and Microsoft is still trying to find an approach that does not harm the Free Software Foundation and the open source community. Red Hat, in collaboration with Canonical (the Ubuntu community) and The Linux Foundation, published a white paper titled "UEFI Secure Boot Impact on Linux." For further information regarding Linux and Red Hat, check out the Linux certification courses offered by the InfoSec Institute. The Red Hat and Canonical team further warned that personal computers will ship with Secure Boot enabled, which would ultimately be a problem for open source distributions. Although Microsoft clearly denies this, the Linux Foundation remains strongly opposed to the initiative.
Microsoft is open to implementing an option to disable Secure Boot in the UEFI model, but at the same time it does not strongly support it. The issue becomes even more troublesome when a user wants to dual-boot Linux alongside Windows. Red Hat, along with the Linux Foundation, has worked with hardware vendors and Microsoft to develop a UEFI Secure Boot mechanism that would allow users to run the Linux of their choice. Throughout its research initiative, Red Hat's main aim was not only to support Red Hat and Fedora but also to let users run any distribution they choose. Red Hat developer Matthew Garrett put forward a solution in which Microsoft would provide keys for all Windows operating systems and Red Hat would similarly provide keys for Red Hat Enterprise Linux and Fedora; Ubuntu and others could participate by paying a nominal fee of $99, which would let them register their own keys for distribution to firmware vendors. We have covered the advantages of the Secure Boot feature of UEFI, but there are drawbacks to consider as well. Secure Boot requires every component of the system to be signed, including not only the bootloader but any hardware drivers as well. If component vendors wished to sign their own drivers, they would need to ensure that their key is installed on all hardware they wish to support. For laptops, a straightforward solution would be to have all drivers signed with the OEM's keys. At the same time, this approach would be problematic for new hardware vendors and would bar them from entering the market until they had distributed their keys to the major OEMs. An alternative approach would be to have drivers signed by a key included on the majority of platforms, which would spare hardware vendors per-platform issues. Also, if Secure Boot must be disabled to boot an alternate OS, the process is limited to those who are technologically savvy, i.e. not the masses.
Another disadvantage of the signing process is that if a signing key is disclosed and falls into the wrong hands, it may be used to boot a malicious operating system despite Secure Boot's restrictions. To contain the damage, the compromised key would have to be blacklisted, which would in turn prevent any operating system signed with it from booting; if the same happened to a hardware vendor's key, its drivers would no longer validate and the system would halt. We thus arrive at the conclusion that UEFI Secure Boot is a valuable addition to a Linux setup, increasing protection at the root level against malicious software; its only limitation is that it must not hinder user freedom by restricting the choice of operating system. The sad part is that the current Secure Boot model deters easy installation of Linux and inhibits users from experimenting with the whole system. After a long research initiative, the open source community observed that the current Secure Boot implementation is designed around the hardware vendor, who retains full control over security restrictions. It therefore recommended that the original equipment manufacturer allow Secure Boot to be easily enabled and disabled at the user's choice. (This means that Secure Boot could be disabled through the operating system, with the option to re-enable it through a firmware interface, much like a BIOS setup screen.) This would help the open source community while also serving the goals of the Secure Boot initiative.
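The allow-and-blacklist behavior described above can be modeled in a few lines. This is a hypothetical sketch of the policy logic only: the database names `db` and `dbx` mirror the UEFI signature-database variables, but the structure and lookups here are illustrative, not the actual EFI variable layout.

```python
import hashlib

def fingerprint(key_bytes: bytes) -> str:
    # Identify a signing key by the hex digest of its bytes.
    return hashlib.sha256(key_bytes).hexdigest()

# 'db' holds the allowed signing keys; 'dbx' holds revoked (blacklisted)
# ones. Key names here are made up for illustration.
db = {fingerprint(b"Microsoft UEFI CA 2011"),
      fingerprint(b"Fedora Secure Boot CA")}
dbx = {fingerprint(b"leaked vendor key")}

def boot_policy(signing_key: bytes) -> str:
    fp = fingerprint(signing_key)
    if fp in dbx:       # revocation is checked first, so a compromised
        return "deny"   # key cannot boot even if it also appears in db
    if fp in db:
        return "allow"
    return "deny"       # unknown keys are denied by default

print(boot_policy(b"Fedora Secure Boot CA"))   # allow
print(boot_policy(b"leaked vendor key"))       # deny
print(boot_policy(b"unknown key"))             # deny
```

The design choice worth noting is that the blacklist takes priority over the allow list: this is what makes revoking a leaked key effective, at the cost of also breaking every legitimate binary signed with it.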
Deep Defender Best Practices Guide: What is a rootkit? Attackers use rootkits to replace administrative tools and obtain root access, inserting malicious software that evades detection. To prevent problems with rootkits, you need to develop a better understanding of the issues. View this white paper to learn more about how rootkits operate and how they can be detected.
Cyberbullying is defined differently by different organizations around the world. The US National Crime Prevention Council, for example, defines it as the use of cell phones, the Internet, or other devices to send or post text messages or images intended to embarrass or hurt a person. StopCyberbullying.org, an organization focused on stopping cyberbullying among children and teens, defines it as a situation that arises when a child, tween, or teen is repeatedly tormented, embarrassed, harassed, or otherwise targeted by another child or teen using email, text messaging, or any other type of digital technology. No matter how the term is defined, however, everyone concerned with this growing trend agrees that it is a serious matter. In the US alone, there have already been several cases of suicide among children aged 12-17 attributed to cyberbullying. In each case, the children chose to end their lives after months of harassment, physical abuse, and cyberbullying by schoolmates. Moreover, in most of these cases the parents didn't even know the severity of the situation or had been told by authorities that the situation wasn't that serious. The issue of cyberbullying has been gaining more and more attention nationally and internationally. Some states in the U.S. have updated existing laws or created new ones to address this particular problem. Organizations, government agencies, and corporations are also beginning to take part in helping stamp out cyberbullying. We have created the following articles to help parents, teachers, and students learn more about bullying and how to prevent this alarming trend. There are also several organizations that provide information and resources for parents, educators, and students; we have listed some of them below. If you know of an excellent resource and would like to see it added to our list, email us at email@example.com.