An API (application programming interface) is a defined set of rules and functions that enables two applications to communicate or, as the acronym suggests, interface with each other. As such, an API is a common boundary that facilitates integration between applications, allowing for the rapid and efficient sharing of information.
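To make that concrete, here is a minimal sketch of what consuming a typical web API looks like in Python. The endpoint URL and response fields are hypothetical, and the third-party requests library is assumed to be installed; almost any JSON-over-HTTP API follows the same request/response pattern.

```python
import requests  # assumed installed: pip install requests

# Hypothetical endpoint; real APIs publish their own URLs and field names.
response = requests.get(
    "https://api.example.com/v1/products/42",
    headers={"Accept": "application/json"},
    timeout=10,
)
response.raise_for_status()  # raise an exception on HTTP 4xx/5xx errors

product = response.json()  # parse the JSON body into a Python dict
print(product.get("name"), product.get("price"))
```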
APIs are used widely in ways that affect the daily lives of most users of web applications. Consumers are keen users of the new breed of social tools such as Twitter, Facebook, Google and LinkedIn that enable high levels of effective interaction - and all of those services rely on APIs to deliver information and connect users.
APIs are key enablers for the extended enterprise that is a reality today owing to the widespread use of mobile devices and cloud-based technology delivery mechanisms. They allow organisations to share information with users wherever they are, using whatever device or platform they wish. APIs enable organisations to provide customers with real-time information about their products and services, making customer outreach a more social, proactive activity that can provide the organisation with competitive advantage.
APIs are also the cornerstone for enabling the promise of the 'Internet of Things', where all kinds of objects will be connected via internet protocol, from smartphones to cars to household appliances to sensors in all manner of industrial equipment, or smart technology such as meters. Technology vendor Cisco has estimated that this could lead to there being some 50 billion devices connected to the internet by 2020. APIs are also crucial for initiatives such as Smart Cities, which aim to promote the smarter use of resources and provide a way to give the public easier access to information. In order to make this dream a reality, open and interoperable digital services initiatives are required. In Europe, the European Commission is funding the CitySDK project, which aims to foster smarter participation, mobility and tourism, and which has developed an SDK with APIs that enable faster, more efficient application development possibilities.
There are many technology vendors that are actively exploring the promise offered by APIs. One such vendor is CA Technologies, which sees APIs as being crucial to its business, enabling its focus on mobility and mobile apps, cloud technologies, the Internet of Things and the development of smart technologies in general. It believes that APIs are vital for enabling the growth in opportunities related to such new, disruptive technologies and the promise that they offer.
To back up its vision, CA Technologies recently acquired API security and management vendor Layer 7 Technologies. The technology is positioned within CA Technologies' security business unit, where it is a natural fit with CA Technologies' web access management technology. However, CA Technologies states that it has broader implications beyond the use of APIs for enabling identity and access management capabilities. The API capabilities will enable more secure, accelerated application development and delivery in which security functions can be more effectively built into the software development lifecycle and more easily tested to improve the overall quality of software and applications. It sees a huge role for APIs in improving application governance and control overall.
CA Technologies is not the only vendor that sees the importance of APIs, but this acquisition does serve to underscore the overall importance and promise of APIs. As APIs continue to grow in importance, legacy applications and the data that they hold can be more rapidly repurposed to make those applications and the data available to all components of the extended enterprise - including mobile devices, partners, customers and developer communities - for the benefit of everyone involved.
Climate Warming Tipping Point
We’ve all heard about global warming. The earth is warming — no, it’s cooling. One thing’s for sure: my head is spinning. It’s gotten hard to keep track of what kind of climate change we’re really in for. James White, Professor of Geological Sciences, University of Colorado at Boulder, talked with me about what he thinks may be a tipping point.
Prof. White says that we may be in for an abrupt climate change, which is when the climate system shifts modes very rapidly (in a few years), by large amounts (5 to 10 degrees C annual temperature, doubling or halving of precipitation, etc.). This type of change is particularly scary, as it is unpredictable and potentially devastating.
The timing of thresholds is hard to predict. Arctic sea ice extent dropped dramatically this past year, for example. Models can predict that change, but not the exact timing.
Is there anything we can do to prepare for abrupt climate change?
The amount of carbon dioxide (CO2) in the air has a direct effect on the amount of warming our planet experiences.
- At 380 ppm CO2, we are 100 ppm above the preindustrial level.
- 100 ppm is the difference we see in ice cores over the past 1 million years between glacial periods and interglacial periods.
- The climate system has inherent inertia, and thus most of the response to this increase in greenhouse gases is yet to come.
- Increased levels of greenhouse gases will change climate. This is simple physics, and not a matter of scientific debate.
- Quantifying feedbacks is key to understanding our future. Small changes in greenhouse gases can cause warming that triggers sea ice melt, which in turn triggers much more warming as blue water replaces reflective, white ice.
- Climate change from greenhouse gases is only one measure of human impacts on the planet. We make as much nitrogen fertilizer as all bacteria in the world.
- We produce as much sulfate (a cooling aerosol that is a major cloud condensing nucleus) as all phytoplankton in the ocean.
- We have power, we have the domination we’ve long sought… but now we need to accept the responsibility that comes with that power. It’s time to mature and start doing that.
Have we already exceeded a level of increased greenhouse gases that will ultimately result in large changes in sea level (many meters), rainfall patterns, temperatures, and other parts of the climate system? The past would tell us yes.

James White – A biography
James started the INSTAAR Stable Isotope Lab in 1989. In recent years, his research has helped to show that large climate changes tend to occur in the natural system as abrupt and rapid shifts in mode, probably driven by internal adjustments in the Earth climate system rather than slow and gradual adjustments to changing external conditions, such as the amount of energy received from the sun.
His research has also helped to show that land plants are capable of removing large amounts of carbon dioxide from the atmosphere, amounts that equal our input of CO2 from fossil fuel burning on short time scales. Such large changes in the uptake of CO2 by plants are a key piece in the puzzle we must solve to address future CO2 levels and climate change.
James has written research papers on the following topics and has contributed to a number of publications:
- Global scale climate and environmental dynamics.
- Carbon dioxide concentrations and climate from stable hydrogen isotopes peats and other organics.
- Climate from deuterium excess and hydrogen isotopes in ice cores.
- Isotopes in general circulation models.
- Modern carbon cycle dynamics via isotopes of carbon dioxide and methane.
Energy Department procures 20 petaflop computer
New machine will be almost 20 times more powerful than today's fastest machine
- By Joab Jackson
- Feb 03, 2009
The Energy Department's National Nuclear Security Administration (NNSA) has chosen IBM to build a supercomputer that will be almost 20 times more powerful than today's fastest supercomputer.
To be housed at Lawrence Livermore National Laboratory, the computer, nicknamed Sequoia, will run 1.6 million IBM Power processors and is expected to be able to execute 20 quadrillion floating-point operations per second (20 petaflops). The machine is expected to be fully operational by 2012.
IBM will also provide NNSA with a starter supercomputer to test applications that will be used on Sequoia. Named Dawn, this computer will be operational in 2009 and be capable of executing 500 trillion floating point operations per second (500 teraflops).
Sequoia will occupy 3,422 square feet. Its 98,304 compute nodes will be housed in 96 refrigerator-sized racks, and the compute nodes will be connected with fiber optics. Sequoia will run the Linux operating system. IBM predicts Sequoia will also be an energy-efficient supercomputer able to offer 3,050 calculations per watt of energy.
“These powerful machines will provide NNSA with the capabilities needed to resolve time-urgent and complex scientific problems, ensuring the viability of the nation’s nuclear deterrent into the future. This endeavor will also help maintain U.S. leadership in high performance computing and promote scientific discovery,” NNSA Administrator Thomas D’Agostino, said in a statement.
NNSA will use Sequoia to model the state of the nation’s nuclear weapons stockpile, to ensure that it is safely maintained. The Energy Department's Advanced Simulation and Computing program will also use the machine to run very large suites of complex simulations to model the weather and other complex phenomena.
At present, the world's most powerful computer is widely thought to be the Los Alamos National Laboratory's Roadrunner, another IBM system that topped the most recent Top 500 ranking of the world's most powerful computers. Roadrunner is a 129,000-processor machine with a sustained performance of more than 1.1 petaflops.
IBM will build and test the machine at its Rochester, Minn. plant.
Joab Jackson is the senior technology editor for Government Computer News.
A Cyber History Of The Ukraine Conflict

The CTO for the US Cyber Consequences Unit offers a brief lesson in Russian geopolitics and related cyber flare-ups, and explains why we should be concerned.
For the second time in recent history Russia has flexed both its military and cyber muscles. The latest incident is playing out in The Autonomous Republic of Crimea (Ukraine). The previous incident occurred in South Ossetia (Georgia) in 2008. Both countries were once integral pieces of the vast Soviet empire, which crumbled more than two decades ago. Russia has also flexed its cyber power in the former Soviet states of Estonia (2007) and Kyrgyzstan (2009).
Over the years, the international community has closely monitored each of these worrisome incidents. The Georgian incident was especially troublesome, because it was the first time cyber attacks were used in concert with traditional military operations, which included tanks storming across the border of a sovereign nation.
My post-analysis of this incident concluded that 11 Georgian websites were knocked offline prior to the Russian military invasion. The official website of the President of the Republic of Georgia and several media outlets (e.g., www.news.ge) were among those impacted by the initial cyber barrage. The attack method used to disrupt these key sites was a distributed denial-of-service (DDoS) attack, launched from botnets controlled by Russian cyber criminals -- most likely cooperating with the Russian government. The attacks on these targets didn’t wane for the entire duration of the Russian military campaign against Georgia; they stopped immediately after Russia and Georgia signed a preliminary ceasefire agreement.
Flash forward to today and the situation in Ukraine. While the current state of affairs there is complicated, it’s clear that Russia isn’t running the same cyber playbook it used in Georgia. For instance, when Russian forces invaded Crimea they didn’t blind the Ukrainian government with massive cyber attacks. Such attacks were not launched, because the strategic and operational environments in Ukraine and Crimea were much different from those in Georgia.
In the current crisis, Russian forces severed the Internet and other communication channels that connect the Crimean peninsula with the rest of Ukraine. Some cyberwar experts have referred to this incident as a cyber attack, although information surrounding it points to physical sabotage by a military force, for example, cutting cables and destroying equipment. What this means is that the recent incident wasn’t a cyber attack in and of itself, even though it interfered with communication services delivered by cyber technology.
Jamming or cyber attacks?
There have also been numerous reports that the mobile phones belonging to key Ukrainian government officials are being targeted. The Russian military has the capability to employ sophisticated electronic warfare techniques (e.g., jamming), which would disrupt cellular communications within Ukraine. This type of jamming normally hits a wide range of frequencies over a large geographic area. Based on open-source reporting it’s unlikely that the mobile phones in question were victims of military jamming. It’s more likely that Russian intelligence or pro-Russian sympathizers targeted these specific mobile phones through a Ukrainian cellular provider.
There is some historical precedent that supports this argument. For instance, in January protestors in Ukraine received an ominous text message, which read: "Dear subscriber, you are registered as a participant in a mass disturbance."
This text message was only sent to individuals located in a specific geographical location in Kiev. Ukrainian cellular providers have denied providing subscriber metadata to the government. Based on the January incident it’s highly probable that someone -- Russia -- targeted the mobile phones of Ukrainian government officials via subscriber information, such as telephone number, or the international mobile equipment identity (IMEI) number. But without additional details about these isolated incidents, it’s difficult to confirm that the mobile phones of these government officials were impacted by cyber attack.
Over the last few months Ukrainian websites (within the TLD .ua) have seen their fair share of defacements. Evidence indicates that Muslim hacking groups with pro-Syrian or anti-Israeli agendas conducted the majority of the defacements. A recent round by a group named Cyber Berkut is particularly troubling. Based on the targets attacked and symbolism used it’s very clear that the Cyber Berkut is pro-Russian. Some of the group’s tactics, techniques, and procedures (TTPs) are similar to those used in cyber operations in 2007 and 2008 by the Kremlin against Estonia and Georgia.
While these attacks are truly unsettling, they provide only a small window into the cyber capabilities of the nations embroiled in this conflict. Tomorrow’s attacks may paint a sharper picture of those cyber capabilities and how they are wielded on the battlefield. What is clear is that "cyber" will continue to play an important role in future military operations.
John Bumgarner is Chief Technology Officer for the U.S. Cyber Consequences Unit, an independent, non-profit research organization that investigates the strategic and economic consequences of possible cyber attacks. He has work experience in information security, intelligence, ...
A few years ago, if you knew how to use a multimeter, you were probably an electronics technician or an engineer. But today, the handy piece of test equipment known as a digital multimeter should be in the toolbox of every handy person. The multimeter is great for determining the working status of many appliances. Multimeters are available in either digital or analog models. Digital multimeters display readings as numbers. Analog multimeters indicate the value with a needle over a scale.
Have you ever wondered why a light doesn’t turn on?
If the lightbulb is not bad, is the lamp out of order?
Is the lamp cord bad?
Here is one simple example of where a digital multimeter comes in handy. A useful function of a multimeter is the resistance, or “continuity” test function. To avoid the chance of electrical shock, it is important to never use the continuity test function on any appliance that has live voltage connected. Always make sure the appliance is first NOT connected to any power source.
To test a switch, place a test probe on each side (pole) of the switch. When you move the switch from the off to the on position, the multimeter reading should change from infinity (open circuit) to zero (closed circuit). If not, then the switch is not working properly. To test a motor, touch a test probe to each pole. Again, a reading at or near zero indicates that the motor has continuity, current can pass through, and the motor windings are good.
After carefully studying the instructions of the digital multimeter, you will be able to move on to more detailed electrical tests. A multimeter can measure alternating current (AC or household current) or direct current (DC or battery current) in a live circuit. It can also check voltage. A multimeter can test 120 volts AC in a home circuit, or it can test DC batteries to learn if they are weak or fully charged.
Take the time now to learn how to properly use a basic multimeter. You will be glad you did when someday you would like to verify the cause of an everyday electrical problem.
A basic digital multimeter is available at Fiberstore.com for a reasonable price, depending on your test requirements. Buy your digital multimeter from us with confidence.
By: Joel Carleton, CSID Director of Cyber Engineering
We’ve all heard that it’s important to pick long, complicated passwords. What you may not realize is why this becomes crucial in the context of a breach. While ensuring you don’t pick from some of the most common passwords is important, it’s still not enough.
Some background information on how passwords work: while we still see websites storing passwords unencrypted (in this case, if you are part of a breach, the complexity of your password makes no difference), it is most common for websites to encrypt your password with a one-way hash. Put simply, this is a method that takes your password and transforms it into a long string of characters that is then stored in the website’s database. The website does not know your original password. When you log in to the website it applies the transformation and compares the long string to what it has stored in the database. If they match, then it knows you have entered the correct password.
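As a minimal sketch of that flow (using plain SHA-256 purely to illustrate the one-way transformation; a production site should add a per-user salt and a deliberately slow scheme such as bcrypt or PBKDF2):

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way transformation: the website stores only this hex string.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At signup, the site stores the hash, never the original password.
stored_hash = hash_password("correct horse battery staple")

# At login, it re-applies the same transformation and compares.
attempt = "correct horse battery staple"
print(hash_password(attempt) == stored_hash)  # True -> password accepted
```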
When a company is breached, a common result is the selling and/or sharing of that company’s user accounts. They could be publicly disclosed, shared in criminal forums and chat rooms, or sold to the highest bidder. The breached company may have taken steps to secure your account credentials, but the strength of your password can be your best friend or worst enemy. When a breach happens on a website where the passwords have been hashed, the criminal steals a list of user ids/emails and associated hashed passwords. They do not yet have your original password. The criminal has to decrypt the hash to retrieve the original password. While there are many sophisticated techniques at the criminals’ disposal, one of the most popular is referred to as the “brute force” method. Every possible password is tried. Given the short and simple passwords that are routinely used, the criminal can quickly decrypt the majority of the encrypted passwords.
To find out just how simple it is to decrypt a password, try to Google the encrypted hash of a common password, “d8578edf8458ce06fbc5bb76a58c5ca4”. It’s pretty easy to see what the original password is even without using brute force guessing software.
Let’s assume you’ve chosen something more complicated. For passwords with 6 characters, how many brute force guesses are necessary? Assuming your password has at least mixed upper and lower case letters, there are about 19 billion possible passwords (52^6). There are two things that make cracking this type of password trivial for the criminal:
- They do not have to attempt to log in to the website for each of their guesses. It would be impossible to make the necessary number of attempts to log in. They are able to make as many guesses as they want without anyone knowing what they are doing because they have the hashed password.
- Computers are very good at making very fast guesses. An average computer with an upgraded graphics card can make 500 million guesses a second. Your 6-character password can be guessed in 38 seconds or less. Adding numbers and the full set of non-alphanumeric characters, the password can still be guessed in 26 minutes or less. (The arithmetic is sketched in the snippet below.)
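A back-of-the-envelope check of those figures, assuming the article’s rate of 500 million guesses per second (exact results depend on which character sets you count):

```python
GUESSES_PER_SECOND = 500_000_000  # the cracking rate assumed above

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to try every possible password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

print(worst_case_seconds(52, 6))            # mixed-case letters: ~40 seconds
print(worst_case_seconds(95, 6) / 60)       # all printable ASCII: ~25 minutes
print(worst_case_seconds(62, 10) / 86_400)  # 10-char alphanumeric: ~19,000 days
```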
Parting advice: the easiest way to make your passwords better is to make them longer (at least 9 characters). If you still use only alphanumeric characters but your password is 10 characters, a criminal would need over 18,000 days to crack it. Hopefully he won’t have this much time on his hands and will move on to an easier target!
The increasing amount of information individuals share on social networking Web sites also could put them at greater risk of identity theft, according to identity management professionals.
The amount of personal information posted on social networking sites has made it easier for criminals and others to collect data and impersonate individuals online, said identity specialists speaking on Thursday at a panel in Washington hosted by the technology lobbying group TechAmerica.
"The definition of personal identifiable information will continue to expand," said Rick Kam, president of the consulting firm ID Experts. "Our approaches must also evolve."
The number of phishing incidents where individuals are asked to enter their personally identifiable information into a third-party Web site has increased sharply in recent years, said Dianne Usry, deputy director for incident management at the Internal Revenue Service's Office for Privacy, Information Protection and Data Security.
To comply with an Office of Management and Budget mandate intended to combat the increase in identity theft, the IRS is limiting its use of Social Security numbers both on printed documents and as a way to authenticate online visitors to its Web sites. Last year the IRS decreased the number of documents and letters with Social Security numbers by 8 million.
"The IRS will never get away from paper," Usry said. "We're actually more concerned about the possibility of a data breach from paper documents than from online."
The IRS does not keep statistics on the number of phishing attempts that successfully steal personal data, but most domestic phishing sites usually are shut down within three hours, she said. International sites take longer to shutter.
"The criminals are more active and so are we," Usry said. "We hope awareness is going up along with activity."
Social Security numbers are no longer the only target of online criminals, according to the panel members. Social networking sites such as Twitter and Brightkite allow individuals to post a stream of updates that include where they are. The popular photo-sharing Web site Flickr allows users to see exactly where a photo was taken. By aggregating the data about an individual's activities and movements, someone can create a detailed account about the person's work or personal life, according to Ian Glazer, a senior analyst for identity and privacy strategies at Burton Group.
"Individuals and organizations should treat their location as an enterprise asset," Glazer said, adding that disclosures made on social networking sites like Facebook could reach much larger audiences than users intended.
Also on the rise is medical identity theft, which accounts for 3 percent of all identity theft, according to Dan Steinberg, an associate at Booz Allen Hamilton. Steinberg said medical identity theft is especially troubling because in addition to financial damage, the act can result in physical injury or loss of life.
One of the most common forms of this type of theft is when an individual uses someone else's information to seek medical care, either with or without their consent. The impostor's patient information is then added to the authentic patient's record, creating the possibility that the victim might receive a misdiagnosis or mistreatment when he or she visits a doctor or hospital.
Steinberg said health care providers can prevent this by verifying the identity of patients before providing care. Many providers now request identification when patients arrive, but the practice is not widely followed.
Hueso R., Perez-Hoyos S., Sanchez-Lavega A. (University of the Basque Country); Wesley A. (Acquerra Pty. Ltd.); and 20 more authors.
Astronomy and Astrophysics | Year: 2013
Context. Regular observations of Jupiter by a large number of amateur astronomers have resulted in the serendipitous discovery of short bright flashes in its atmosphere, which have been proposed as being caused by impacts of small objects. Three flashes were detected: one on June 3, 2010, one on August 20, 2010, and one on September 10, 2012. Aims. We show that the flashes are caused by impacting objects that we characterize in terms of their size, and we study the flux of small impacts on Jupiter. Methods. We measured the light curves of these atmospheric airbursts to extract their luminous energy and computed the masses and sizes of the objects. We ran simulations of impacts and compared them with the light curves. We analyzed the statistical significance of these events in the large pool of Jupiter observations. Results. All three objects are in the 5-20 m size category depending on their density, and they released energy comparable to the recent Chelyabinsk airburst. Model simulations approximately agree with the interpretation of the limited observations. Biases in observations of Jupiter suggest a rate of 12-60 similar impacts per year and we provide software tools for amateurs to examine the faint signature of impacts in their data to increase the number of detected collisions. Conclusions. The impact rate agrees with dynamical models of comets. More massive objects (a few 100 m) should impact with Jupiter every few years leaving atmospheric dark debris features that could be detectable about once per decade. © ESO, 2013.
3.1.8 How is the RSA algorithm used for authentication and digital signatures in practice?
The RSA public-key cryptosystem can be used to authenticate (see Question 2.2.2) or identify another person or entity. The reason it works well is because each entity has an associated private key which (theoretically) no one else has access to. This allows for positive and unique identification.
Suppose Alice wishes to send a signed message to Bob. She applies a hash function (see Question 2.1.6) to the message to create a message digest, which serves as a "digital fingerprint" of the message. She then encrypts the message digest with her private key, creating the digital signature she sends to Bob along with the message itself. Bob, upon receiving the message and signature, decrypts the signature with Alice's public key to recover the message digest. He then hashes the message with the same hash function Alice used and compares the result to the message digest decrypted from the signature. If they are exactly equal, the signature has been successfully verified and he can be confident the message did indeed come from Alice. If they are not equal, then the message either originated elsewhere or was altered after it was signed, and he rejects the message. Anybody who reads the message can verify the signature. This does not satisfy situations where Alice wishes to retain the secrecy of the document. In this case she may wish to sign the document, then encrypt it using Bob's public key. Bob will then need to decrypt using his private key and verify the signature on the recovered message using Alice's public key. Alternately, if it is necessary for intermediary third parties to validate the integrity of the message without being able to decrypt its content, a message digest may be computed on the encrypted message, rather than on its plaintext form.
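The sign-then-verify flow can be sketched with deliberately tiny textbook numbers; a real RSA key uses primes hundreds of digits long plus padding schemes omitted here, and the digest is reduced modulo n only so it fits this toy key. The point is simply that signing uses the private exponent d while verification uses the public exponent e.

```python
import hashlib

# Toy RSA key (textbook example): p = 61, q = 53 -> n = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753

def digest(message: bytes) -> int:
    # Hash the message, then reduce mod n so it fits the tiny toy key.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"Meet at noon. -- Alice"
signature = pow(digest(message), d, n)  # Alice signs with her PRIVATE key

# Bob verifies with Alice's PUBLIC key (n, e):
assert pow(signature, e, n) == digest(message)              # genuine message
assert pow(signature, e, n) != digest(b"Meet at midnight")  # tampering detected
```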
In practice, the public exponent in the RSA algorithm is usually much smaller than the private exponent. This means that verification of a signature is faster than signing. This is desirable because a message will be signed by an individual only once, but the signature may be verified many times.
It must be infeasible for anyone to either find a message that hashes to a given value or to find two messages that hash to the same value. If either were feasible, an intruder could attach a false message onto Alice's signature. Hash functions such as MD5 and SHA (see Question 3.6.6 and Question 3.6.5) have been designed specifically to have the property that finding a match is infeasible, and are therefore considered suitable for use in cryptography.
One or more certificates (see the question on certificates in this FAQ) may accompany a digital signature. A certificate is a signed document that binds the public key to the identity of a party. Its purpose is to prevent someone from impersonating someone else. If a certificate is present, the recipient (or a third party) can check that the public key belongs to a named party, assuming the certifier's public key is itself trusted.
Some hard drives are more prone to being dropped than others. Laptops are frequently in danger of taking a tumble off your lap and landing on the floor. Your external hard drive can be knocked off the desk by an errant hand, pet cat, or child. There are many points of possible failure when a hard drive is dropped. If you are in need of dropped hard drive recovery services, our engineers can help.
What Happens When I Drop My Hard Drive?
To understand how a dropped hard drive fails, it’s important to understand how a hard drive works. Inside your hard drive are thin, delicate disks with a magnetic coating. These are the drive’s data storage platters. They are like CDs, but smaller and denser. Laptop hard drives, which are the most prone to being dropped, have platters made out of glass. The hard drive’s spindle motor spins these platters at around 5,400 to 7,200 revolutions per minute.
The platters are divided into sectors. These are magnetically-charged pieces of the surface that contain the actual data on the hard drive. Small electrically-charged coils of copper wire mounted on long arms sweep across the radius of the platters as they spin. These are the magnetic read/write heads. These heads are kept away from the platters when the drive is not in use. A ramp guides them into their proper positions above the platters.
A set of failed read/write heads from a dropped hard drive recovery case
Unlike the needle on a record player, these heads are never supposed to touch the surfaces of the platters. Instead, they hover a tiny distance above the platters on a cushion of air. This distance is about the equivalent of a couple dozen atoms laid end-to-end.
To imagine what the read/write heads are doing, think about a plane flying at top speed a few feet above the ground. The margins for error are razor-thin. It’s unsurprising that the read/write heads are one of the most common failure points for a hard drive.
At the instant you drop your laptop or external hard drive, the device experiences an instantaneous moment of weightlessness. Then gravity does its work. The higher the height your device falls from, the more speed it builds up. And then it hits the ground, and it stops—very quickly.
You may be familiar with Newton’s second law of motion: Force equals mass times acceleration. The faster an object’s velocity changes, the more force is exerted on it. As the saying goes, it’s not the fall that kills you.
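A rough worked example of that law (every number below is an illustrative assumption, not a measured value): a drive falling from desk height onto a hard floor, stopping within about a millimetre, decelerates at roughly a thousand times the acceleration of gravity.

```python
import math

g = 9.81               # gravitational acceleration, m/s^2
height = 1.0           # assumed fall from desk height, metres
stop_distance = 0.001  # assumed ~1 mm of give in a rigid floor, metres

impact_speed = math.sqrt(2 * g * height)                # from v^2 = 2gh
deceleration = impact_speed ** 2 / (2 * stop_distance)  # from v^2 = 2as
print(f"{impact_speed:.1f} m/s impact, roughly {deceleration / g:,.0f} g")
```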
If your hard drive is running when your laptop or external drive takes a tumble, the read/write heads could crash onto the surfaces of the platters. This is the one thing they are absolutely not supposed to do.
The heads might only briefly impact the platters, making some dings and scraping out a few sectors here and there. Or they might make prolonged contact with the platters and gouge huge tracks out of them. This is called rotational scoring. Severe rotational scoring can be devastating.
The heads can also clamp down on the platters and stop them from spinning. This can cause the hard drive spindle motor to seize up.
Manufacturers of both laptop computers and hard drives have come up with ways to mitigate the damage done to a dropped hard drive. Many laptops today have accelerometers inside them. These features, such as Apple’s Sudden Motion Sensor, send signals to the hard drive and warn it to take its heads away from the platters if the laptop enters free fall. Hard drive manufacturers are also now building free fall sensors into the drives themselves.
This doesn’t stop the drive from failing if it falls hard enough. The heads can become damaged by a fall even if they’ve been tucked away. But it does reduce the chances of the platters containing your precious data becoming damaged.
There are some external hard drive manufacturers who make their enclosures as tough as possible. These products are typically aimed at nature photographers and anyone with rough and adventurous lines of work. These hard drives are much more well-insulated from shocks and drops than your typical external hard drive. But as long as a hard drive has moving parts, dropping your hard drive always carries risks.
How Do Gillware’s Dropped Hard Drive Recovery Services Work?
At Gillware Data Recovery, we offer free evaluations for all dropped hard drive recovery scenarios. We even offer free inbound shipping for anyone living in the continental US.
There’s no way of telling how severe the damage to the drive has been until we’ve brought it to our hard drive recovery cleanroom and had our technicians look at it. A dropped hard drive could need its read/write heads replaced. Its platters might need burnishing to clear debris off of them. The spindle motor may have failed. Our professional hard drive recovery engineers may have to deal with some or all of these failure points in order to get your drive up and running.
Charles, one of our cleanroom data recovery engineers, inspects the 2.5-inch platters of a dropped laptop hard drive for any signs of rotational scoring.
We take note of the hard drive’s condition and determine the cost and probability of success for dropped hard drive recovery on a case-by-case basis. We then present you with a price quote. We don’t ask for any payment at this point, just acknowledgment of the cost.
If you approve the price quote, our hard drive recovery cleanroom engineers do the work to get your hard drive into shape. This can require one repair to the drive or many repairs. Our in-house data recovery software tool HOMBRE helps our engineers read the drive safely and efficiently. As part of our dropped hard drive recovery services, we salvage as much as we can from the drive. We’ve made our entire data recovery operation as financially risk-free as possible. We only consider our recovery efforts successful if we recover your important data. If necessary, we can work with you and show you our results before you pay, so you can help us determine the success of the recovery. When the recovery is complete and successful, we take your recovered data and put it on a new hard drive and ship it to you.
Why Choose Gillware Data Recovery?
Dropped hard drive recovery is a unique and highly specialized skill. Our cleanroom data recovery engineers have spent years and thousands of hours working on hard drives. Just about every brand and model of hard drive has come into our cleanroom at least once.
Our engineers have solved thousands of dropped hard drive recovery cases in the years we’ve been in business. We stand by our financially risk-free data recovery guarantee. If we can’t get your data back at a price that makes sense to you, you don’t owe us a dime.
Ready to Have Gillware Assist You with Your Dropped Hard Drive Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network.
This week’s security news takes a look at the danger of malicious apps, outdated software, mobiles and unprotected POS.
Bitdefender Labs has reported that over nine million registered users of Tinder could be at risk from a series of bots which have invaded the dating app and are now spreading dangerous downloads.
The malicious schemes attempt to lure users with tempting profiles and pictures, some using pictures stolen from an Arizona-based photography studio.
Kaspersky Lab has reported that a third of all phishing attacks are aimed directly at stealing money.
The firm’s research revealed that in 2013, 31.45 per cent of phishing attacks exploited the names of leading banks, online stores and online payment systems.
The most attractive targets were banks, which were used in 70.6 per cent of all financial phishing attacks, with Amazon.com the most popular cover for phishing attacks impersonating online stores – its name was used in 61 per cent of online trade-related phishing attacks.
In addition, Kaspersky found that phishers are increasingly using social networking sites – the number of attacks using fake Facebook pages and other social networking sites grew by 6.8 percentage points and accounted for 35.4 per cent of total attacks.
To further educate technology users on the danger of cyber attacks, Kaspersky has also launched an interactive cyberthreat map that visualises cyber security incidents occurring worldwide in real time.
Elsewhere, a F-Secure survey has suggested that many businesses are risking company assets by using outdated software.
The security firm reported that ninety-four per cent of small and medium size businesses (SMBs) it spoke to think it is important to keep software updated, but only 59 per cent of businesses stated that their software is always up to date – with 63 per cent blaming a lack of available resources for the outdated software.
Other risks are present for smartphone users: security specialist Avira announced today that it has added three features to its new premium Avira Antivirus Security Pro app to protect the 95 per cent of adults who currently use a mobile in the UK.
Mobile Point of Sale (MPOS) devices are also at risk, and can be easily hacked, leaving banks, retailers and millions of customers exposed to serious fraud around the world, claims security firm MWR InfoSecurity.
MWR Labs researchers have demonstrated that it is possible to compromise MPOS terminals with multiple attacking techniques using micro USBs, Bluetooth and a malicious programmable smart card.
Jon, head of research at MWR InfoSecurity, said: “What we have found reveals that criminals can compromise the MPOS payment terminal and get full control over it.
“This shows that card holders paying at MPOS terminals worldwide are potentially at risk. Banks and retailers should also be wary when implementing this technology as it could leave them open to serious fraud.”
According to Experian and the UK's fraud prevention service CIFAS in their Fraudscape report, the changing nature of identity-related crimes since 2009 has had a substantial effect on the demographics of fraud victims.
The report states that more financially-secure social groups are more likely to be victims of identity-related crimes. Most frequently targeted by identity fraudsters is the 'Alpha Territories' group (an average of 764 victims each year per 100,000 adult population) – the group consists of people with "substantial wealth who live in the most sought-after neighbourhoods".
Other common targets are the 'Liberal Opinions' group (513 victims per 100,000), who are "young, well-educated city dwellers", and the 'Professional Rewards' group (424 victims per 100,000) – said to be "experienced professionals in successful careers enjoying financial comfort in suburban or semi-rural homes".
NARA map shows elections past and future
- By Emily Cole
- Nov 05, 2012
NARA's interactive electoral college map, here set to show Ronald Reagan's landslide 1984 win, allows users to look up past elections and test scenarios in the current contest.
The 2012 election may be hard to predict, but there are still ways for citizens to estimate how many Electoral College votes each candidate will receive come Tuesday. On Nov. 1, the National Archives’ Office of the Federal Register released an interactive map that displays current, historical and hypothetical election data in a visual manner.
Since 2004, the National Archives has displayed electoral vote predictions as a calculator in text form, but this year has switched to the easier-to-use map interface. The Federal Register will also use the map to display electoral votes as they come in on Tuesday, Nov. 6. Users may then share their predictions via Facebook or Twitter, and the Electoral College website keeps a running tally of each prediction.
“We really designed the new section to display information and for people to use as an educational tool. Social media was not our primary focus, but we recognize that it can be an important tool, so we’re happy we were able to add that function,” Federal Register Staff Attorney Miriam Vincent said.
The site, titled “2012: Make Your Prediction,” features a map of all 50 states, with the option to click once to turn the state red for Romney, and click twice to make the state blue for Obama. Election data from every presidential election starting in 1964 is available as red and blue maps with specific popular and electoral vote amounts.
The Federal Register has also posted a video explaining how the Electoral College process works. The video is not subject to any copyright restrictions, allowing its use and free distribution. At press time, the statistics for web traffic were not available.
Emily Cole is an editorial intern for FCW.
Software Problems and How Docker Addresses Them
Organizations are leveraging Docker to become more agile, responsive and leaner as they continue to compete in a challenging software environment. Using Docker allows organizations to create easily deployable software systems that can run on individual or clustered computer systems on a wide variety of platforms. Learn how Docker makes it easy to update, test and debug software with this white paper and gain foundational knowledge about Dockerfile, Docker images and containers.
Docker is a new approach to old, but increasingly troublesome problems in the software industry, namely:
- How can we deploy ever more powerful and complex software systems that are used by tens, hundreds, or thousands of users concurrently?
- How can we create, update, and maintain this software, while giving developers the platforms to run the software for testing and debugging?
- How can we facilitate testing and create automated systems that detect bugs and performance problems?
- How can we deploy these systems, doing the system administration equivalent of changing tires on a moving car, to help users who depend on our software to always be available?
- How can we use the lessons we learned in creating more powerful and flexible hardware to help us solve our software problems?
That last question is an important one, because hardware went through a similar evolution to address similar issue and, Docker was partly inspired by the hardware evolution. As hardware became more powerful, the IT industry used that extra power to solve problems inherent in running complex systems.
The hardware was "chopped up" into virtual machine (VM) software that for all intents and purposes is a separate machine, indistinguishable from those running on traditional machines (bare metal computers). VMs solved the problem of making computers more efficient and cost-effective. You can buy one or more large boxes, then divide their capacity into multiple smaller VMs to run the system. As the system grows and changes, just change the space and power allocated to the VM.
To be as portable as possible a VM defines the operating system to be used, the number of CPUs that need to be allocated, the amount of memory that must be assigned, and any local storage space reserved for it. When these resources become available the VM boots up the operating system, starts any other necessary programs, then is ready to run anything a bare metal computer might run.
On the software side, new systems are also becoming more complex and as such they are increasingly depending on other software systems (for example, an application that depends on an image-rendering library). These dependencies must be managed carefully, because a misconfigured system may not run, may run incorrectly, or may have security vulnerabilities. One way to manage dependencies is to use the VM approach: just package up the desired software, along with all the software it depends on, into a VM image. Then when the system boots up, everything is in the correct place at the correct version level with the correct configuration. The trouble with the VM approach for deploying software is that a server must be built with each package. The packages end up getting bigger and more complex to justify the resources dedicated to starting and running the VM. Also, deploying software as a VM means that all the resources for the VM itself must be allocated, making it difficult to have multiple VMs on a developer's laptop, for example.
Meanwhile, with the concept of minimum viable products (MVPs) and agile approaches to development, project stakeholders and end users are demanding faster cycle times and more responsive software teams. Rather than waiting months or years for products, users are demanding product releases in weeks, days, and maybe even hours or minutes. Think of how often companies like Google, Twitter, Amazon, and Facebook change their software. In these companies, there is no concept of a release being frozen, tested, and then released. The software is continuously changing. And because there is no "frozen" version of the software, each new iteration of the software must be packaged so it can be deployed quickly.
At the same time, the idea that a company would develop a single software system based on one programming language, one set of libraries, for one operating system, is no longer standard practice. Now, each group working on its own modules within a software system can choose the tools and environments that best meet the modules' needs. This new paradigm means that all the dependencies for any software, such as libraries and run-time environments, as well as the new code itself, must be part of the release.
Kaspersky Lab, a leading developer of secure content management solutions, announces the successful patenting of cutting-edge information security technology in the US. The technology in question effectively detects and deletes malicious software and removes any trace of its effects by running automatically generated scripts.
Today’s computers are exposed to a growing number of increasingly complex and rapidly changing malicious programs. Greater emphasis is now being placed on automatic protection methods that ensure fast data processing and prompt responses to threats. However, such technologies often generate false positives or suffer from low levels of new threat detection.
The recently patented technology from Kaspersky Lab is an effective combination of existing and newly developed methods to combat malicious software. Its automated methods are effective at processing large volumes of data. Moreover, processing and storing large volumes of information is advantageous in that it helps optimize and train the protection system, while security experts have the option of adjusting and fine-tuning the protection system as it operates.
This combination produces a synergy effect that saves resources and provides a high level of malware detection. Use of empirical data and the system’s learning capabilities enables a gradual specialization and perfection of its functions.
The cutting-edge technology was invented by Oleg Zaitsev, a senior technical specialist at Kaspersky Lab. The patent for the new technology and its implementation was registered as No. 7,540,030 by the US Patent and Trademark Office on 26 May, 2009.
The patented system automatically aggregates statistics on programs and their activities. Information is collected from event logs, system scan results and user records about quarantined files. The data are used to identify malware, automatically generate scripts to remove detected threats and carry out an in-depth analysis of the system.
The scripts generated by the system can be improved by computer security specialists, which may be beneficial in cases where the system does not have sufficient knowledge to develop and take decisions in complex situations. This allows subsequent problems of a similar nature to be resolved automatically. In other words, as the amount of statistical data collected increases with time, the system operates more effectively.
“This technology helps increase response times to newly emerging threats and streamlines the user’s communication with the technical support service. The fuzzy logic and artificial intelligence systems incorporated in the patented technology build up knowledge that can be classified and used to carry out self-training,” says the inventor of the technology, Oleg Zaitsev.
Kaspersky Lab currently has more than 30 patent applications pending in the US and Russia related to a range of innovative technologies developed by company personnel.
About Kaspersky Lab
Kaspersky Lab is the largest antivirus company in Europe. It delivers some of the world’s most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. The Company is ranked among the world’s top four vendors of security solutions for endpoint users. Kaspersky Lab products provide superior detection rates and one of the industry’s fastest outbreak response times for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky® technology is also used worldwide inside the products and services of the industry’s leading IT security solution providers. Learn more at www.kaspersky.com. For the latest on antivirus, anti-spyware, anti-spam and other IT security issues and trends, visit www.viruslist.com. | <urn:uuid:51fa77a9-70e1-4348-b798-89101c7c1bce> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/business/2009/Kaspersky_Lab_patents_cutting_edge_technology_to_combat_increasingly_sophisticated_malware | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00141-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9112 | 704 | 2.609375 | 3 |
Volunteer grid computing is in the news again. The distributed computing project known as GIMPS (Great Internet Mersenne Prime Search) has tracked down the largest known prime number after a five-year hunt. Over 17 million digits in length, the number is most easily represented as 2 raised to the power of 57,885,161, minus one.
The Great Internet Mersenne Prime Search (GIMPS) was created in January 1996 by MIT graduate George Woltman with the goal of identifying Mersenne primes using his software Prime95 and Mprime. Now in its 17th year, GIMPS is the longest continuously-running project of its kind. At its peak, the volunteer grid employs 360,000 CPUs, with a top output of 150 trillion calculations per second.
Interestingly, the search for the biggest Mersenne prime has created a rivalry of sorts between the University of Central Missouri and the University of California, Los Angeles. This most recent discovery was made by volunteer Curtis Cooper's computer, one of several University of Central Missouri machines engaged in the search. It's the third record prime for the university, where Dr. Cooper is a professor. Their earlier discoveries took place in 2005 and 2006.
Then in 2008 UCLA mathematicians unearthed a 12,978,189 digit prime number. They held onto that record for five years, until the University of Central Missouri came back with the next biggest Mersenne prime.
As this progression demonstrates, not only are Mersenne primes rare, but each successive one gets harder and harder to find. The proof itself took 39 days of continuous computing. Results were independently validated by three different researchers in four separate verification tests.
The Prime Objective
A prime number is a positive integer that can only be divided by one and itself. The prime series starts out 2, 3, 5, 7, 11, 13, and so on. The integer 6 would not be a prime because it is divisible by 2 and 3.
A Mersenne prime is a prime that can be expressed as 2 raised to the power of P, minus 1 (that is, 2^P - 1), where P is itself a prime. Examples of Mersenne primes are 3, 7, 31, and 127, which correspond to P = 2, 3, 5, and 7 respectively.
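Whether 2^P - 1 is prime is checked with the Lucas-Lehmer test, which GIMPS's software implements in heavily optimized, FFT-based form. A naive Python sketch, practical only for small exponents, looks like this:

def is_mersenne_prime(p: int) -> bool:
    """Return True if 2**p - 1 is prime; p must itself be prime."""
    if p == 2:
        return True               # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1              # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m       # the Lucas-Lehmer recurrence
    return s == 0

# P values that yield Mersenne primes: 2, 3, 5, 7, 13, 17, 19, ...
print([p for p in (2, 3, 5, 7, 11, 13, 17, 19) if is_mersenne_prime(p)])

Note that P = 11 is correctly rejected: 2^11 - 1 = 2047 = 23 x 89, so a prime exponent does not guarantee a Mersenne prime.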
According to GIMPS, “Mersenne primes have been central to number theory since they were first discussed by Euclid in 350BC. The man whose name they now bear, the French monk Marin Mersenne (1588-1648), made a famous conjecture on which values of P would yield a prime. It took 300 years and several important discoveries in mathematics to settle his conjecture.”
As of this latest discovery, there are now 48 known Mersenne primes. GIMPS has found the last 14 of them.
Computing for a Cause
Volunteer computing relies on the principles of grid computing to allow average netizens to contribute computational resources to a variety of causes. The projects – and there are lots of them – harness the idle processing power of thousands of machines (most often personal computers or university systems) operated by volunteers who have installed a software client on their systems.
Besides GIMPS, other popular projects include SETI@home, Folding@home and the World Community Grid. But these are just a small sampling of the many noteworthy candidates.
Most are strictly volunteer endeavors, but some do provide incentives, and GIMPS is one that does. The project offers two discovery awards: $3,000 for a new prime with fewer than 100,000,000 digits, and $50,000 for the first prime discovered having at least 100,000,000 digits. The discovery made by Dr. Cooper's machine (which located a 17,425,170-digit prime) is eligible for the $3,000 award.
The monies for the second and more-substantial award level would actually come from the Electronic Frontier Foundation. As part of its Cooperative Computing Awards project, EFF is providing $150,000 to the first “Internet user” to locate a 100 million digit prime number. GIMPS participants agree in advance that if their computer is the one to identify the next winner, they will split the money with the GIMPS foundation (to be used for future awards) and with a mathematics-related charity.
Prime hunting is no mere mathematical navel-gazing; there are implications in higher math theory. Prime numbers are also important to public-key cryptography, which is widely used to secure online transactions.
A military application for the snappily titled Power Efficiency Revolution for Embedded Computing Technologies (PERFECT).
Researchers at Georgia Tech are helping the Defense Advanced Research Projects Agency (DARPA) to develop energy-efficient computers that can last 75 times longer than present-day computers.
The computer is being developed as part of an initiative called Power Efficiency Revolution for Embedded Computing Technologies (PERFECT), which is still in the elementary stages.
The success of this project could result in smaller and more efficient systems which could be used in aircraft and ground vehicles as well as used by soldiers on the ground.
David Bader, School of Computational Science and Engineering executive director of high-performance computing, said, "The program is looking at how do we come to a new paradigm of computing where running time isn’t necessarily the constraint, but how much power and battery that we have available is really the new constraint."
Georgia Tech's part in the DARPA-led PERFECT effort is called Graph Analysis Tackling power-Efficiency, Uncertainty and Locality (GRATEFUL), and it focuses on algorithms that will create graphical representations of large volumes of data in the most energy-efficient way.
The main focus would be to reduce power consumption by cutting the level of data collection.
The ultimate goal of the project is an algorithmic framework that would enable the creation of smaller devices with supercomputer-like capabilities.
Computer crooks and spammers are abusing a little-known encoding method that makes it easy to disguise malicious executable files (.exe) as relatively harmless documents, such as text or Microsoft Word files.
The “right to left override” (RLO) character is a special character within Unicode, an encoding system that allows computers to exchange information regardless of the language used. Unicode covers all the characters for all writing systems of the world, modern and ancient. It also includes technical symbols, punctuation, and many other characters used in writing text. For example, a blank space between two letters, numbers or symbols is expressed in Unicode as “U+0020”.
The RLO character (U+202e in unicode) is designed to support languages that are written right to left, such as Arabic and Hebrew. The problem is that this override character also can be used to make a malicious file look innocuous.
This threat is not new and has been known for some time, but an increasing number of email-based attacks are taking advantage of the RLO character to trick users who have been trained to be wary of clicking on random .exe files, according to Internet security firm Commtouch.
Take the following file, for example, which is encoded with the RLO character:
Looks like a Microsoft Word document, right? This was the lure used in a recent attack that downloaded Bredolab malware. The malicious file, CORP_INVOICE_08.14.2011_Pr.phyldoc.exe, was made to display as CORP_INVOICE_08.14.2011_Pr.phylexe.doc by placing the Unicode right-to-left override character just before the “d” in “doc”.
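A small Python sketch makes the rendering trick concrete. One mechanical detail worth noting: for the visible tail of a name to read “exe.doc”, the characters actually stored after the override must be “cod.exe” (the exact sequence in any particular attack sample may differ from the filenames quoted above):

RLO = "\u202e"   # the Unicode right-to-left override control character

# The stored name really ends in ".exe" -- it is an executable.
stored = "CORP_INVOICE_08.14.2011_Pr.phyl" + RLO + "cod.exe"

print(repr(stored))              # shows the raw code points, '\u202e' included
print(stored)                    # a bidi-aware display renders the tail as "exe.doc"
print(stored.endswith(".exe"))   # True: the operating system still sees an .exe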
The U.S. Air Force's Civil Air Patrol takes thousands of aerial pictures following a natural disaster, but those images alone often aren't sufficient to assess the full scale of damage to buildings and communities.
Imagine, though, if those images could be combined with geotagged pictures taken by disaster victims on the ground culled from Twitter, Flickr and Instagram.
That’s the dream of a team of short-term government technologists known as Presidential Innovation Fellows who are working with the Federal Emergency Management Agency on the project. The image database known as GeoQ could also be used to assess which roads into a disaster site are most passable, improving response times for emergency workers traveling in and out of disaster zones, said Jackie Kazil, a fellow working on the project.
“We could look at structures to see what it looks like for citizens on the ground and we could make decisions about where to put resources more quickly than we do today,” Kazil said.
Kazil outlined the fellows' plans for GeoQ at the White House's Safety DataPalooza on Tuesday, a conference devoted to using open government data to improve citizens' safety. Many of the projects discussed at the conference grew out of the White House's open data initiative, which aims to open up government-gathered data to the public in machine-readable formats so developers can build Web and mobile tools with it.
The burgeoning open data market could ultimately grow the economy by up to $3 trillion annually, according to researchers, as companies, nonprofits and independent developers build useful new tools in sectors such as health care and energy.
Agencies announced several new open data initiatives during Tuesday’s event. Among them:
- The Food and Drug Administration announced plans to open up numerous data sets, including data about medication error reports.
- The Labor Department announced plans for a “data jam” to use Occupational Safety and Health Administration data to reduce job-related hearing loss.
- The Consumer Product Safety Commission announced a Safer Products App Challenge aimed at building tools to raise awareness of reports of product dangers submitted through SaferProducts.gov.
Tea Ceremony
Sen no Rikyu
Sen no Rikyu is the founder of the Japanese tea ceremony.
He stressed simplicity, rusticity and other humble qualities in the tea ceremony. He was himself a man of simple tastes who led a cultivated and disciplined life. He was also a man of honour: he committed ritual suicide after having differences with the Shogun (the military ruler of Japan). Before he died, he gave his tea scoop as a keepsake to his disciple Oribe, who named it "Tears".
This video was posted three years ago, but some other sites have picked it up recently to show how smart some people were in 1995, at the beginning of the Internet's boom.
As part of a public service announcement, a bunch of fifth grade students talk about what the future of the Internet would look like, and why it was important (back then) that people get Internet access. And wouldn't you know, one of the students even talks about cats (I would have died if someone had predicted the importance of bacon as well).
These fifth graders would be about 26 or 27 years old now, so I'm sure most of them are using the Internet exactly as they predicted. Or maybe they're blogging about the Internet.
The true transition of the global Internet from IPv4 to IPv6 is expected to span many years, and during this period of transition many organizations introducing IPv6 into their infrastructure will support both IPv4 and IPv6 concurrently. There is no one-size-fits-all transition strategy for IPv6. An incremental, phased approach allows for a significant period in which IPv4 and IPv6 co-exist, using one or more transition mechanisms to ensure interoperability between the two protocol suites. The most often used methods of performing this transition are operating in a dual-stack environment and using tunneling or translation between the two versions of the Internet Protocol (IP).
IPv4/IPv6 Dual Stacks
Because IPv6 represents a conservative extension of IPv4, it is relatively easy to write a network stack that supports both IPv4 and IPv6 while sharing most of the code. Such an implementation is called a dual stack, and a host implementing a dual stack is called a dual-stack host.
The term “dual stack” means that the host or router runs both IPv4 and IPv6 at the same time. A dual-stack host has both an IPv4 and an IPv6 address associated with each Network Interface Card (NIC), and it can send IPv4 packets to other IPv4 hosts and IPv6 packets to other IPv6 hosts. A dual-stack router, in addition to the usual IPv4 addresses and routing protocols, also has IPv6 addresses and routing protocols configured; to support both IPv4 and IPv6 hosts, the router can then receive and forward both IPv4 packets and IPv6 packets.
The dual stack approach can be a reasonable plan of attack to migrate an enterprise to IPv6 for communications inside the enterprise. The routers could be easily migrated to use dual stacks and today most desktop operating systems (OS) support IPv6. In some cases, the upgrade may require new software or hardware, and these conditions could cause a slower migration schedule. This situation is not necessarily a bad thing because a support staff could require additional time to learn how IPv6 works.
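As a small illustration of dual-stack behavior on the host side (the host name below is a placeholder), a resolver on a dual-stack machine returns both AAAA (IPv6) and A (IPv4) records, and an application can connect over either family; in Python:

import socket

# A dual-stack host resolves both IPv6 (AAAA) and IPv4 (A) records.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])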
There are a number of factors that will affect the success and duration of the transition process. At the top of that list is training. IPv6, while built on many of the fundamental principles of IPv4, is different enough that most IT personnel will require formalized training. The level of training required will vary and depend upon the role a member of the organization’s IT staff plays in developing, deploying, and supporting IPv6 integration. (Check out this post discussing training issues for more on this).
The term tunneling refers to a means to encapsulate one version of IP in another so the packets can be sent over a backbone that does not support the encapsulated IP version. For example, when two isolated IPv6 networks need to communicate over an IPv4 network, dual-stack routers at the network edges can be used to set up a tunnel that encapsulates the IPv6 packets within IPv4, enabling the IPv6 systems to communicate without having to upgrade the IPv4 network infrastructure that exists between the networks.
Several types of IPv6-to-IPv4 tunnels exist. Three of the following types of tunnels are used by routers, while the fourth type (Teredo) is used by hosts.
- Manually Configured Tunnels (MCT): This is a simple configuration in which tunnel interfaces, a type of virtual router interface, are created with the configuration referencing the IPv4 addresses used in the IPv4 header that encapsulates the IPv6 packet. Manually configured tunneling is used when network administrators set up the tunnel by hand within the endpoint routers at each end; it is usually more deterministic and easier to debug than automatic tunneling, and is therefore recommended for large, well-administered networks (a configuration sketch follows this list). Any changes to the network, like renumbering, must be manually reflected on the tunnel endpoints. Tunnels result in additional IP header overhead because they encapsulate IPv6 packets within IPv4 (or vice versa).
- Dynamic 6to4 Tunnels: This term refers to a specific type of dynamically created tunnel, typically done on the IPv4 Internet in which the IPv4 addresses of the tunnel endpoints can be dynamically found, based on the destination IPv6 address.
- Intra-site Automatic Tunnel Addressing Protocol (ISATAP): ISATAP is another dynamic tunneling method, typically used inside an enterprise. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address. Unlike 6to4 and Teredo, which are inter-site tunneling mechanisms, ISATAP is an intra-site mechanism, meaning that it is designed to provide IPv6 connectivity between nodes within a single organization.
- Teredo Tunneling: Teredo is a tunneling protocol designed to grant IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. It defines a way of encapsulating IPv6 packets within IPv4 UDP datagrams that can be routed through NAT devices and across the IPv4 Internet. Many hosts are currently attached to the IPv4 Internet through one or several NAT devices, usually because of the IPv4 address shortage. In such a situation, the only available public IPv4 address is assigned to the NAT device, and a 6to4 tunnel endpoint would need to be implemented on the NAT device itself; however, many NAT devices currently deployed cannot be upgraded to implement 6to4 for technical or economic reasons. Teredo alleviates this problem by encapsulating IPv6 packets within UDP/IPv4 datagrams, which most NATs can forward properly. Thus, IPv6-aware hosts behind NATs can serve as Teredo tunnel endpoints even when they don't have a dedicated public IPv4 address. In effect, a host that implements Teredo can gain IPv6 connectivity with no cooperation from the local network environment.
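To make the first of these concrete, here is a rough sketch of a manually configured IPv6-over-IPv4 tunnel on a Cisco router (the addresses are documentation examples, and exact syntax varies by IOS release):

interface Tunnel0
 description Manually configured IPv6-over-IPv4 tunnel to the remote site
 ipv6 address 2001:DB8:ACAD::1/64
 tunnel source 192.0.2.1          ! local IPv4 endpoint
 tunnel destination 198.51.100.2  ! remote IPv4 endpoint
 tunnel mode ipv6ip               ! IPv6 carried directly in IPv4 (protocol 41)

A mirror-image configuration on the far-end router, with source and destination swapped, completes the tunnel.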
Translating Between IPv4 and IPv6 with NAT-PT
Both classes of IPv6 transition features discussed so far (dual stack and tunnels) rely on the end hosts supporting at least IPv6, if not both IPv4 and IPv6. However, in some cases an IPv4-only host needs to communicate with an IPv6-only host. A third class of transition features is needed in this case: a tool that translates the headers of an IPv6 packet to look like an IPv4 packet, and vice versa.
In Cisco routers, Network Address Translation-Protocol Translation (NAT-PT), which is defined in RFC 2766, can be used to perform the translation. To do its work, a router configured with NAT-PT must know what IPv6 address to translate to which IPv4 address and vice versa. This is the same kind of information held in the traditional NAT translation table. And, like traditional NAT, NAT-PT allows static definition, dynamic NAT, and dynamic PAT, which can be used to conserve IPv4 addresses.
As indicated previously, NAT-PT is defined in RFC 2766, but due to numerous problems it has been made obsolete by RFC 4966 and deprecated to historic status. It is typically used in conjunction with a DNS Application-Level Gateway (DNS-ALG) implementation.
While almost identical to NAT-PT, Network Address Port Translation-Protocol Translation (NAPT-PT), which is also described in RFC 2766, translates ports as well as addresses. This is done primarily to prevent two hosts on one side of the mechanism from using the same exposed port on the other side of the mechanism, which could cause application instability or security flaws.
As a final note, to support IPv6, all of the routing protocols had to go through varying degrees of changes, with the most obvious being that each had to be changed to support longer addresses and prefixes.
Author: David Stahl | <urn:uuid:ac817128-bd05-45dd-8e7b-64ed4ee24b6c> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/11/24/ipv6-transition-methods/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00343-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922085 | 1,651 | 3.0625 | 3 |
NEW YORK, NY--(Marketwired - Apr 23, 2014) - Dr. Ian Mortimer's diligent research, which unearths significant facts that have been lost in the course of history, marks his historical biographies -- Edward III: The Perfect King, Henry IV: The Righteous King and Henry V: The Warrior King -- with surprising insights about his subjects that will intrigue readers. These three books, now in digital format and published by RosettaBooks, take on some of England's most revered monarchs in an effort to show the men behind the legends.
"In these three books I tried hard to break down the traditional objectivity of the scholarly historian and to understand what made these men tick as human beings. Collectively, each book reveals the vision, passions and courage of three of England's greatest medieval kings," said Mortimer.
Mortimer shows in Henry IV: The Righteous King (The Fears of Henry IV: The Life of England's Self-Made King) that perceptions can change over the course of a monarch's reign. Although Henry IV is often remembered as a king who subverted divine rule by deposing the rightful monarch Richard II, he was actually hailed as a hero at the time for relieving the people of a tyrannical ruler. Because he had gained the throne by force, his allies felt able to devise schemes to do the same to him, and he survived eight plots to dethrone or kill him in the first six years of his reign.
Because of his contributions to the modern justice system, art and architecture, and language itself, Edward III is often praised as one of England's most influential kings. Edward III: The Perfect King (The Perfect King: The Life of Edward III, Father of the English Nation) reveals that among his accomplishments, a streak of brutality allowed the king to rise to power in the first place after overthrowing his guardians at age 17, usurping his father's crown, and ordering his uncle beheaded.
Celebrated by Shakespeare and the nation as one of England's greatest heroes, Henry V's legend has reached epic proportions over the centuries. However, Mortimer reveals in Henry V: The Warrior King (1415: Henry V's Year of Glory) that especially at the Battle of Agincourt, he used the insecurity of the Catholic Church and cruel force to solidify his authority.
"History can sometimes warp over decades and centuries, and scholars like Ian Mortimer remind us to look at the details to see the true picture," said Arthur Klebanoff, CEO of RosettaBooks.
About RosettaBooks: RosettaBooks is the leading independent digital publisher. Its prominent author collections include 52 works of Winston Churchill, 35 titles by renowned science fiction author Arthur C. Clarke, 20 works by Kurt Vonnegut, 12 titles from international bestselling business author Stephen R. Covey and 18 works by Robert Graves, celebrated 20th century English poet, critic, and author of I, Claudius and Claudius, the God. RosettaBooks also publishes eBook lines in collaboration with AARP, Harvard Health Publications and Mayo Clinic. Publisher of ten Kindle Singles, including Ray Bradbury's The Playground, RosettaBooks has launched nine of them to bestseller status. RosettaBooks is an Inc. 500 company, on the exclusive list of the fastest growing private companies in the United States. For more information, please visit RosettaBooks.com and follow the e-publisher on Facebook and Twitter. | <urn:uuid:82c36aa1-d529-4b70-a28a-4fb6a9cb7835> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/british-historian-ian-mortimer-reveals-realistic-portraits-english-monarchs-now-ebook-1901957.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00251-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952708 | 704 | 2.734375 | 3 |
A New CodeRed Modification Carries a Trojan in Its Pocket
05 Aug 2001
Kaspersky Lab, an international data-security software developer, announces the detection of a new variant of the "CodeRed" ("Bady") Internet worm.
Unlike the two previous worm variants, CodeRed.c installs a Trojan program on an infected computer that opens general access to the C: and D: drives. The latest variant also takes a new approach to choosing IP addresses when attacking a system. The earlier CodeRed variants selected addresses purely at random, so that many attempts at penetration were thwarted from the start; in the new variant this happens only 1 time out of 8. In the remaining cases, the target computer's IP address is derived from the IP address of the computer perpetrating the attack.
Kaspersky Lab has not yet received any reports of CodeRed.c having been detected "in the wild."
You can read a more in-depth description of the CodeRed worm in the Kaspersky Virus Encyclopedia. | <urn:uuid:c220a06a-ca2d-4564-9888-80a1829a239d> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2001/A_New_CodeRed_Modification_Carries_a_Trojan_in_Its_Pocket | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923677 | 223 | 2.859375 | 3 |
Homebrew NMS: Put It Together with Perl and Net::SNMP
Understanding SNMP is key to understanding what's going on with your network, and critical to any tool you build. Perl's Net::SNMP helps gather the data.
Before we explore the Perl module Net::SNMP, let us have a quick refresher on SNMP.
The Simple Network Management Protocol agent listens on UDP port 161 for incoming requests. SNMP traps, by contrast, are sent unsolicited to UDP port 162 on a central management station that is configured to listen for traps. Traps can be as simple as "link down," which is normally sent by a network device, or can be as complex as you'd like.
The magical goo that defines what the numbers in SNMP object identifiers (OIDs) mean is called the Management Information Base, or MIB. A MIB defines what type of value is allowed in each object (integer, string), as well as what the object is called. A MIB essentially turns numbers into meaningful information, and it also defines allowed values.
Taking SNMP for a Walk
On to the useful stuff. Let's try out an SNMP query, using out old friend snmpwalk, which comes with the net-snmp software on most *nix hosts. The snmpget command is useful to get a single object, but if you want all information available down a certain branch of the tree, snmpwalk will do just that.
% snmpwalk -v2c -c public nermal 1.3.6.1.2.1.1.6
SNMPv2-MIB::sysLocation.0 = STRING: "System administrators office"
Running snmpwalk on a specific OID, where nothing follows in the tree, is akin to using snmpget. However, if we bump it up a notch, we can get the entire system MIB:
% snmpwalk -v2c -c public nermal 1.3.6.1.2.1.1
SNMPv2-MIB::sysDescr.0 = STRING: SunOS nermal.domain.com 5.10 Generic_118833-24 sun4u
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.3
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (338266757) 39 days, 3:37:47.57
SNMPv2-MIB::sysContact.0 = STRING: "System administrator"
…
Experiment with your network gear, and with the private (instead of mgmt) branch under internet, to see what's available. From a Cisco switch we can get everything we'd ever want to know, including the ARP table, the bridge table, and MAC addresses with port locations; those are all available through standards-based OIDs. The interesting Cisco-specific goodies are in the private tree, under 1.3.6.1.4.1.9.
Port numbers are difficult, since Cisco uses an interface identifier and a few layers of indirection to identify a specific port, but it is possible. Let's start off easier. We'll fetch the port descriptions (string assigned to a port) from our Cisco switch, so that we can ensure they are accurate based on our formatting standards and MAC address discovery information. | <urn:uuid:e2de5cf9-b356-4714-b432-c3ccff9fa85f> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/netos/article.php/3697071/Homebrew-NMS-Put-It-Together-with-Perl-and-NetSNMP.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.853909 | 708 | 2.703125 | 3 |
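A minimal Net::SNMP sketch for that job might look like the following (the switch name and community string are placeholders; the walk targets ifAlias, 1.3.6.1.2.1.31.1.1.1.18, the IF-MIB column that holds the operator-assigned port descriptions):

#!/usr/bin/perl
use strict;
use warnings;
use Net::SNMP;

# ifAlias from the IF-MIB ifXTable: one entry per interface index.
my $ifalias = '1.3.6.1.2.1.31.1.1.1.18';

my ($session, $error) = Net::SNMP->session(
    -hostname  => 'switch1',    # hypothetical switch
    -community => 'public',
    -version   => 'snmpv2c',
);
die "Session error: $error\n" unless defined $session;

my $result = $session->get_table(-baseoid => $ifalias);
die 'Request error: ' . $session->error . "\n" unless defined $result;

for my $oid (sort keys %$result) {
    my ($ifindex) = $oid =~ /(\d+)$/;    # the last sub-identifier is the ifIndex
    printf "ifIndex %s: %s\n", $ifindex, $result->{$oid};
}
$session->close();

Each returned OID ends in the interface index, so the output can be compared directly against your naming standards and MAC-discovery data.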
The crimping tool is used for creating a joint between two metal pieces or other materials with good malleability. The joint formed by crimping needs to be strong to ensure that the application works properly. Crimping tools are available in different types to support various types of applications. FiberStore is a good crimping tool supplier: we provide many types of crimping tools in all possible varieties so that purchasers are able to make the right choice when shopping in our store.
For example, a structured cabling system lays stress on following practices that add elegance, discipline, method and reliability to cabling, and the tools used to install such a system go a long way toward this and must sustain extensive use in the field. A modular crimping tool can be used to crimp RJ-45 (related products: rj45 plug) and RJ-11 type modular connectors. It is a highly compact and rugged tool meant for continuous use in the field, and its parallel-action design maintains accurate alignment while crimping.
Hand operated crimpers are the most frequently used of all crimping tool types. Most are designed in a basic plier pattern with one or a number of crimping points machined into their jaws. This type of tool is typically used to effect smaller crimps on steel cables, electrical connections and terminations, preinsulated lugs and ferrules, and RJ type plugs. The crimp points on hand crimpers are either half round compression or cup and tab crimp type designs. This type of crimper is generally used to crimp steel or copper ferrules or sleeves to join two lengths of steel or electrical cable.
Types of Crimping Tools
First and foremost, you must offer these tools in different types, since different customers will come and ask for a specific type. Learn about the different types of crimping tools from the list below:
1.Cable tie tools are the crimping tools used to tighten the ties around the bundles of wires or cables.
2.Compression crimp tools are used for terminating twisted-pair modular plugs and coaxial compression connectors.
3.RJ45 crimp tools are used for crimping the wires of various connectors like RJ45, RJ-11, RJ-12 and so on.
4.Point-to-cup tools are used with round-section crimp sleeves. There are also cup-to-cup tools available in different varieties, such as standard-duty, heavy-duty and bench-press tools.
Crimp Tool Operation
A crimping tool is an essential part of the crimping process, the other parts being the terminal and the wire. Terminal sizes are largely universal and can accept many gauges of wire, which can also vary widely within the same nominal size. As such, the crimp tool is a means of compressing the terminal onto both the wire's insulation (for positioning) and the wire's brush (for conduction).
The quality of the tooling largely determines the quality of the crimp. Common considerations include whether the volume of crimping justifies an automated wire-stripping and crimping machine, or whether the application is better served by an on-site, handheld crimping tool. Many tools have two crimping cavities to properly roll the terminal's crimps, and possibly more if there are two conductors in the wire. Some crimp tools feature several gauge sizes and possibly a stripper to enhance the crimper's utility. Crimp tools may also feature interchangeable dies; die-less crimpers are meant for general applications.
FiberStore supplies a wide variety of specialized cable crimping tool, modular crimping tool, network cables crimpers which are all at very competitive price to help you get the job done right. For more information, please contact our sales representative right now. | <urn:uuid:cd1d2cb0-2506-4592-8d63-c8cd0411c782> | CC-MAIN-2017-04 | http://www.fs.com/blog/crimping-tool-types-and-operation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917886 | 784 | 2.625 | 3 |
There are two commonly used fiber optic testers: the optical time domain reflectometer (OTDR) and the visual fault locator.
The OTDR is the most classic fiber tester and provides the most information when testing optical fiber. The OTDR itself is essentially a one-dimensional, closed-loop optical radar that measures from one end of the fiber: it emits high-intensity, narrow optical pulses into the fiber while a high-speed optical probe records the returned signal. The instrument gives a visual interpretation of the optical link: the OTDR curve shows splice (continuation) points, the size and location of connectors, the location of points of failure, and loss.
The OTDR evaluation process has many similarities with that of the optical multimeter. In fact, an OTDR can be considered a very specialized combination of test instruments: a stable high-speed pulse source and a high-speed optical probe. The selection of an OTDR can focus on the following attributes.
1) The operating wavelength, fiber type and connector interface.
2) The expected loss of the connection and the required scan range.
3) The spatial resolution.
Most visual fault locators are handheld instruments suitable for both multimode and single-mode fiber-optic systems. They use OTDR techniques to locate fiber fault points, mostly at distances under 20 km, and display the distance to the point of failure directly. Typical applications include wide area networks (WANs), communication systems with a range of up to 20 km, fiber-to-the-curb (FTTC), and the installation and maintenance of single-mode and multimode fiber optic cable, including military systems. For locating connector failures and bad splices in single-mode and multimode cable systems, the visual fault locator is an excellent tool. It offers simple one-button operation and can detect up to seven events.
An OTDR displays fiber loss as a function of distance. With an OTDR, technicians can see an outline of the entire system and can identify and measure fiber spans, splice points and connectors. For diagnosing fiber faults, the OTDR is the most classic, but also the most expensive, instrument. Unlike end-to-end tests with an optical power meter or optical multimeter, an OTDR can measure fiber loss with access to only one end of the fiber. The position and size of features on the OTDR trace give the attenuation of the system: the location and loss of any connector, splice, fiber bend, or break.
An OTDR can be used in the following three ways:
1) To understand the characteristics of a cable (length and attenuation) before it is laid.
2) To obtain the signal trace waveform of a section of optical fiber.
3) To locate serious points of failure when problems increase or connections worsen.
The visual fault locator is a special, simplified version of the OTDR. It can automatically find fiber faults without the OTDR's complex steps, and its price is also just a fraction of an OTDR's.
It is possible that one of the largest asteroid impacts in the history of the Earth has been located in Antarctica. A land mass concentration detected by satellite mapping of the gravitational field suggests a crater that is around 500 km in diameter (300 miles) and was probably created about 250 million years ago. It has been further suggested that this large impact event may have contributed to the greatest known mass extinction, called the Permian-Triassic extinction, which occurred in the same time frame. This extinction has been previously attributed to volcanic activity in an area known as the Siberian Traps. It has been noted that asteroid strikes may contribute to volcanic activity by sending a shock wave through the planet to the antipodal (opposite) area where it disrupts the surface allowing volcanic venting. The theory of this article has not yet been confirmed. Several kilometers of ice on top of the land mass complicate the process of gathering evidence and make drilling problematic.
Big Bang in Antarctica: Killer Crater Found Under Ice – [researchnews.osu.edu]
Ancient mega-catastrophe paved way for the dinosaurs, spawned Australian continent
COLUMBUS, Ohio — Planetary scientists have found evidence of a meteor impact much larger and earlier than the one that killed the dinosaurs — an impact that they believe caused the biggest mass extinction in Earth’s history.
The 300-mile-wide crater lies hidden more than a mile beneath the East Antarctic Ice Sheet. And the gravity measurements that reveal its existence suggest that it could date back about 250 million years — the time of the Permian-Triassic extinction, when almost all animal life on Earth died out.
Its size and location — in the Wilkes Land region of East Antarctica, south of Australia — also suggest that it could have begun the breakup of the Gondwana supercontinent by creating the tectonic rift that pushed Australia northward.
Wilkes Land crater – [wikipedia.org]
The Wilkes Land mass concentration (or mascon) is centered at 70°S 120°E and was first reported at a conference in May 2006 by a team of researchers led by Ralph von Frese and Laramie Potts of Ohio State University. The team used gravity measurements by NASA's GRACE satellites to identify a 300 km (200 mi) wide mass concentration and noted that this mass anomaly is centered within a larger ring-like structure visible in radar images of the land surface beneath the Antarctic ice cap. This combination suggested to them that the feature may mark the site of a 480 km (300 mi) wide impact crater buried beneath the ice.
New details on the east Antarctic gravity field from the Gravity Recovery and Climate Experiment (GRACE) mission reveal a prominent positive free-air gravity anomaly over a roughly 500-km diameter subglacial basin centered on (70°S, 120°E) in north central Wilkes Land. This regional inverse correlation between topography and gravity is quantitatively consistent with thinned crust from a giant meteorite impact underlain by an isostatically disturbed mantle plug. | <urn:uuid:061cf211-9622-4d3d-ace4-9854ace67932> | CC-MAIN-2017-04 | http://www.hackingtheuniverse.com/space/asteroid-belt/antarctic-asteroid-crater | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925657 | 655 | 4.4375 | 4 |
A Heapsort is based on a heap data structure. A heap is a complete binary tree: every successive level of the tree must fill up from left to right, and an entire level must be full before any nodes at that level can have children. In a heap, the parent nodes always have a greater (or lower) key value than their children. A heap in which the parents are always greater than their children is called a max-heap, whereas the opposite is called a min-heap.
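A short Python sketch of heapsort built on a max-heap (sift the array into heap order, then repeatedly swap the maximum to the end) may make this concrete:

def sift_down(a, root, end):
    """Restore the max-heap property for the subtree rooted at `root`."""
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                        # pick the larger child
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heapsort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # build the max-heap
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):           # move the max to the end, shrink heap
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)

data = [9, 2, 7, 4, 5, 1]
heapsort(data)
print(data)   # [1, 2, 4, 5, 7, 9]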
In what’s being heralded as a breakthrough in bioinformatics research, European researchers have developed a new software suite, called eXtasy, that automatically generates the most likely cause of a given genetic disorder. A team of scientists from iMinds, STADIUS, and Katholieke Universiteit Leuven (KU Leuven) built the new tool using advanced artificial intelligence techniques that enable the automated analysis of huge amounts of genetic data.
The research is directly relevant to the millions of people who are affected by a hereditary disease, roughly 5 percent of the world's population. Pinpointing treatments for hereditary disorders has been difficult because, until recently, the genetic basis of the disease could be identified in only half of the presented cases. The lack of a definitive diagnosis made treatment more problematic and created further strain on patients, who had to undergo unnecessary treatments in the effort to find an effective one.
The unraveling of the human genome has enabled quicker and more accurate diagnoses of hereditary diseases. But despite the huge potential, the advance came with its own challenges, related to processing all that data.
“The genomes of two healthy individuals show no less than four million differences or mutations,” notes an official release from the partners. “Most of these mutations are harmless, but just one extra, malignant mutation can be enough to cause a genetic anomaly. Existing analytical methods simply do not have the means to reliably and quickly find this needle in the haystack.”
The eXtasy software suite cuts this problem down to size by tracing the origins of genetic disorders twenty times more accurately than previous analytical methods, while demonstrating a 10-fold reduction of false positives.
“eXtasy uses advanced artificial intelligence to combine whole sets of complex data into a global score that reflects how important a certain mutation is for a certain disease. This data can consist of networks of interacting proteins, but could also include scientific publications or even scores that estimate how harmful a mutation is for the protein in question,” explains Prof. Dr. Yves Moreau of iMinds – STADIUS – KU Leuven. “In this way, we can detect disease-causing mutations twenty times more accurately, and provide patients and their families with a much faster and more conclusive diagnosis. We hope this can considerably improve and accelerate the treatment of millions of patients.”
The breakthrough is part of the emerging era of personalized medicine. As Dr. Yves Moreau attests, accurate diagnoses are essential for customized treatments. Dr. Moreau will be speaking more about this important topic at TEDx Brussels (on October 28) as part of a talk entitled “Mining a Million Genomes.”
iMinds is an independent research group, founded by the Flemish government, that works with a number of public and private institutions – including KU Leuven in Flanders, Belgium – to stimulate digital innovation. ESAT-STADIUS (the Department of Electrical Engineering – STADIUS Center of Dynamical Systems, Signal Processing and Data Analytics) is a division within iMinds that focuses on such domains as industrial automation, digital communications, the processing of biomedical signals and bioinformatics. | <urn:uuid:df68bdac-39ce-40fa-b593-b5b822899f80> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/10/28/software-pinpoints-disease-causing-genes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00003-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928676 | 674 | 3.078125 | 3 |
The Difference Between a Shared and a Dedicated IP Address
Every domain name that is a home to a website has an IP address assigned to it. An IP address is the real address of a website. Domain names were developed because it is difficult to remember long IP numbers like 22.214.171.124.
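As a quick illustration (the domain names below are placeholders; substitute real ones to try it), a simple lookup shows the IP address a name points to, and on shared hosting many names resolve to the same address:

import socket

for host in ("site-one.example", "site-two.example"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror:
        print(host, "-> (placeholder name; not resolvable)")

# On a shared IP, every hosted site name resolves to the same address;
# a dedicated IP resolves to one site alone.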
A shared IP is an IP address that is used for multiple sites; it can host all of the sites on a web server. Because the IP address is shared by various sites on the server, the actions of one site owner can affect everyone on the server. For example, if an IP address is blacklisted for spamming, this blacklists mail for all sites using the shared IP address. We work hard to prevent and resolve these issues immediately and take corrective action against anyone who abuses the system. You will not be able to install an SSL certificate if your site is on a shared IP.
Many site owners are able to host their site on a shared IP without ever being affected by another owner on the server. If mail is critical to you, we recommend using a mail hosting provider that offers a robust mail server, such as Google Apps. This also helps because if the shared IP address becomes blacklisted, your email will not be affected, since your mail is hosted elsewhere. Google Apps is able to offer more mail storage and better spam and junk filtering than the service that is available with your web hosting account with us. For instructions on configuring Google Apps please read How to Configure Google Applications.
A dedicated IP is an IP address that is assigned to one site. Large websites or e-commerce sites have dedicated IP addresses that host only one domain. If a site handles payment processing directly and not through a third party such as PayPal, the site must use SSL and therefore will need to be on a dedicated IP address. | <urn:uuid:0dd7c610-7b1a-4f31-86b9-981f2fd36341> | CC-MAIN-2017-04 | https://support.managed.com/kb/a584/differences-between-shared-and-dedicated-ip-addresses.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00453-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954704 | 374 | 2.78125 | 3 |
If you have multiple languages or keyboards configured through Windows and you find yourself pressing keys on the keyboard but having different characters appear on the monitor, check the language bar. Sometimes the change can be very subtle. If you entered a letter from a different alphabet, it would catch your eye immediately. A lot of times, however, the letters will be the same but in a different location on the keyboard. Punctuation seems to be the most common change; typing a semicolon results in a parenthesis, etc.
There is a good chance that the language has switched from English or your usual keyboard setup. The buttons are then entering correct information, it’s just not the information you want. Simply click on the language bar or its minimized form (the blue square with two letter initials) and then click on the language you want from the menu that pops up.
“How did this happen?” you say. If you didn't change it, why is it no longer English (or your language of choice)? The keyboard shortcut Alt+Shift cycles through the configured language list, so most likely an accidental press of those keys is the reason you're now typing in Swedish.
10 Ways to Prevent or Mitigate SQL Injection Attacks
SQL injection attacks could allow hackers to compromise your network, access and destroy your data, and take control of your machines.
What Is SQL Injection?
The principle behind SQL injection is pretty simple. When an application takes user data as an input, there is an opportunity for a malicious user to enter carefully crafted data that causes the input to be interpreted as part of a SQL query instead of data.
For example, imagine this line of code (with $username and $password standing in for the user-supplied values):

SELECT * FROM Users WHERE Username='$username' AND Password='$password'

which is designed to show all records from the table "Users" for a username and password supplied by a user. Using a Web interface, when prompted for his username and password, a malicious user might enter:
1' or '1' = '1
resulting in the query:
SELECT * FROM Users WHERE Username='1' OR '1' = '1' AND Password='1' OR '1' = '1'
The hacker has effectively injected a whole OR condition into the authentication process. Worse, the condition '1' = '1' is always true, so this SQL query will always result in the authentication process being bypassed.
<Code sample sourced from OWASP http://www.owasp.org/index.php/Main_Page>
Using characters like ";" to append another query onto the end of an existing one, and "--" to comment out (and therefore cut off) part of an existing query, a hacker could potentially delete entire tables or change the data they contain. He could even issue commands to the underlying OS, thereby taking over the machine and using it as a staging post to attack the rest of your network. In summary, the consequences of a SQL injection attack could be:
- Loss of data confidentiality
- Loss of data integrity
- Loss of data
- Compromise of the entire network
What Can Be Done to Prevent SQL Injection Attacks?
The most important precautions are data sanitization and validation, which should already be in place. Sanitization usually involves running any submitted data through a function (such as MySQL's mysql_real_escape_string() function) to ensure that any dangerous characters (like " ' ") are not passed to a SQL query in data.
Validation is slightly different, in that it attempts to ensure that the data submitted is in the form that is expected. At the most basic level this includes ensuring that e-mail addresses contain an "@" sign, that only digits are supplied when integer data is expected, and that the length of a piece of data submitted is not longer than the maximum expected length. Validation is often carried out in two ways: by blacklisting dangerous or unwanted characters (although hackers can often get around blacklists) and by whitelisting only those characters that are allowed in a given circumstance, which can involve more work on the part of the programmer. Although validation may take place on the client side, hackers can modify or get around this, so it's essential that you also validate all data on the server side as well.
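The most reliable way to keep input as data is a parameterized query, where placeholders bind values outside the SQL text. A minimal sketch with Python's built-in sqlite3 module (other languages' database APIs provide equivalents) shows the earlier attack failing:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Username TEXT, Password TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice', 's3cret')")

username = "1' or '1' = '1"   # the malicious input from the example above
password = "1' or '1' = '1"

# The ? placeholders bind the input as data, never as SQL syntax, so the
# injected OR condition is compared literally and matches no rows.
rows = conn.execute(
    "SELECT * FROM Users WHERE Username=? AND Password=?",
    (username, password),
).fetchall()
print(rows)   # [] -- the injection fails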
But sanitization and validation are far from the whole story. Here are ten ways you can help prevent or mitigate SQL injection attacks:
- Trust no-one: Assume all user-submitted data is evil and validate and sanitize everything.
- Don't use dynamic SQL when it can be avoided: use prepared statements, parameterized queries (as illustrated above) or stored procedures instead whenever possible.
- Update and patch: vulnerabilities in applications and databases that hackers can exploit using SQL injection are regularly discovered, so it's vital to apply patches and updates as soon as practical.
- Firewall: Consider a web application firewall (WAF) either software or appliance based to help filter out malicious data. Good ones will have a comprehensive set of default rules, and make it easy to add new ones whenever necessary. A WAF can be particularly useful to provide some security protection against a particular new vulnerability before a patch is available.
- Reduce your attack surface: Get rid of any database functionality that you don't need to prevent a hacker from taking advantage of it. For example, the xp_cmdshell extended stored procedure in MS SQL spawns a Windows command shell and passes in a string for execution, which could be very useful indeed for a hacker. The Windows process spawned by xp_cmdshell has the same security privileges as the SQL Server service account (a snippet disabling xp_cmdshell appears after this list).
- Use appropriate privileges: don't connect to your database using an account with admin-level privileges unless there is some compelling reason to do so. Using a limited access account is far safer, and can limit what a hacker is able to do.
- Keep your secrets secret: Assume that your application is not secure and act accordingly by encrypting or hashing passwords and other confidential data including connection strings.
- Don't divulge more information than you need to: hackers can learn a great deal about database architecture from error messages, so ensure that they display minimal information. Use the "RemoteOnly" customErrors mode (or equivalent) to display verbose error messages on the local machine while ensuring that an external hacker gets nothing more than the fact that his actions resulted in an unhandled error.
- Don't forget the basics: Change the passwords of application accounts into the database regularly. This is common sense, but in practice these passwords often stay unchanged for months or even years.
- Buy better software: Make code writers responsible for checking the code and for fixing security flaws in custom applications before the software is delivered. SANS suggests you incorporate terms from this sample contract into your agreement with any software vendor. | <urn:uuid:508c9216-9a4a-4701-bff8-d239f4069cc3> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/netsecur/article.php/3866756/10-Ways-to-Prevent-or-Mitigate-SQL-Injection-Attacks.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00482-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913668 | 1,161 | 3.4375 | 3 |
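As an example of shrinking the attack surface on Microsoft SQL Server, the xp_cmdshell procedure mentioned above can be switched off with sp_configure (it is disabled by default on modern versions; run this only after confirming your applications do not depend on it):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;   -- 0 disables the extended stored procedure
RECONFIGURE;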
Flash memory is an electronic (meaning no moving parts), non-volatile computer storage device that can be electrically erased and reprogrammed. It was developed from electrically erasable programmable read-only memory (EEPROM).
There are two main types of flash memory, named after the negated-AND (NAND) and negated-OR (NOR) logic gates.
Whereas EEPROMs needed to be completely erased before being rewritten, NAND type flash memory may be written and read in blocks which are generally much smaller than the entire device.
The NOR type allows a single machine word or byte to be written or read independently.
The NAND type is primarily used in main memory cards, USB flash drives, solid-state drives and similar products, for general storage and transfer of data. The NOR type, which allows true random access and therefore direct code execution, is used as a replacement for the older EPROM and as an alternative to certain kinds of ROM applications.
NAND or NOR flash memory is also often used to store configuration data in numerous digital products – a task previously made possible by EEPROMs or battery-powered static RAM.
IHS iSuppli is a major provider of diverse global market and economic information. According to a market brief it released on February 1, 2013, cell phones as a whole are positioned to become the world's single largest consumer of flash memory in 2013.
IHS iSuppli sees this as another sign that smartphones hold a preeminent position in the global technology market.
Ryan Chien, analyst for memory and storage at IHS, said, "With smartphones accounting for an ever-increasing portion of the global cell phone business, the mobile handset market is demanding more and more memory, particularly flash. This is causing the cell phone business to eclipse all other application markets for flash usage. Indeed, the shift in flash demand is reflective of a widespread transition in technology markets to focus more on mobile platforms like smartphones."
We have seen that, every day, people find more uses for their smartphones. They've become pocket-sized mini computers. Not only are they used for phone calls, but also for music, videos and browsing the Internet. With all this comes the need to save and keep more information.
Considering the way that smartphones are being used today, it’s not surprising that people want more memory. Flash memory fits the bill quite nicely.
In 2012, flash storage cards had the largest market share of flash memory utilization. IHS iSuppli sees that number coming down, putting flash storage cards in the third place slot in 2013 with about a 19.7 percent share. Solid state drives (SSD) are ranked number two with a 20.6-percent share. This is up two spots from last year.
All in all, tablets, MP3 players, GPS units and handheld gaming devices are among the other devices that round out the remainder of flash memory consumption. Cell phones, as noted, will take over the number-one slot for flash memory utilization.
The use of NAND flash in various applications was a major theme at the Consumer Electronics Show (CES) and Storage Visions events, both held last month in Las Vegas. Toshiba and Seagate Technology discussed their individual efforts in hybrid hard drives. These are different from cache SSDs: the hybrids feature the flash memory component integrated within the hard drive – not outside it. Both companies said they believe 8-gigabyte, single-level-cell NAND caches to be sufficient for most user needs; Intel believes in 24 gigabytes of NAND.
Kingston Technology introduced a 1-terabyte USB 3.0 flash drive, at a time when most manufacturers haven’t even showcased 512-gigabyte models.
According to IHS iSuppli, as the intersection of flash, storage and the cloud deepens in the consumer and enterprise environments, a bounce-back for the NAND industry is imminent this year, with revenue projected to climb to a record $22 billion – up from $20 billion last year.
Revenue in 2012 had contracted after industry takings of $21 billion in 2011.
Edited by Braden Becker | <urn:uuid:544eae64-f1f5-455e-a07d-50f70df0848b> | CC-MAIN-2017-04 | http://www.mobilitytechzone.com/topics/4g-wirelessevolution/articles/2013/02/04/325415-largest-consumer-flash-memory-2013-will-be-smartphones.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00482-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925974 | 861 | 3.0625 | 3 |
This weekend the Met Office is to host the European arm of the NASA International Space Apps Challenge, which will see teams compete to "build, create, and invent new solutions to challenges of global importance."
Weather boffins the Met have a number of apps in the mix from staff members, including Growers Nation by Radar Products scientist Selena Georgiou. Growers Nation is an app to determine what produce to grow, and when, given the soil type and current seasonal conditions.
"This app aims to get more people around the world involved and enthusiastic about growing produce sustainably. This would be done by using the available space in their gardens, school or university grounds or work places that are not currently being used to their potential," the Met said of the Growers Nation app.
Another comes from the Met's Weather Impact Research Scientist Jo Robbins: #HazardMap, which provides real-time hazard mapping by scraping social media sites.
The Met is also using the event to promote the recently launched DataPoint web service, which provides access to UK weather data and observations.
Check out the NASA International Space Apps Challenge here. | <urn:uuid:8e2393b7-1418-4ab7-a136-1c8aeb6a4bff> | CC-MAIN-2017-04 | http://www.pcr-online.biz/news/read/met-office-hosts-uk-nasa-space-apps-challenge/028216 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00298-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932938 | 222 | 2.5625 | 3 |
Critical infrastructure systems – many of them aging and outdated – continue to show fraying around the edges, opening up the power grid, water plants, industrial control systems and more to nefarious activity. This is despite high-profile reporting on the problem and scrutiny from the Obama Administration, which continues to carry out the information-sharing initiatives laid out in February's Executive Order on the subject.
The problem is that many of the systems are connected in ways that are considered outdated, and often get overlooked as threat vectors. “SCADA systems are potentially more vulnerable to exploitation given that, when they were developed, internet use was yet to explode,” explained Ross Brewer, vice president and managing director for international markets at LogRhythm, in an email. “The focus of control system security has therefore been typically limited to physical assets, rather than cybersecurity."
Researchers Chris Sistrunk and Adam Crain, who are part of a consulting firm called Automatak, began a fact-finding mission last April using a custom "fuzzer" for detecting vulnerabilities in SCADA systems. They have since found 25 flaws that could allow attackers to do everything from causing power outages to blocking operator visibility into substation operations, so that the control center, without knowing it, starts making decisions based on outdated operational information. That, in turn, paves the way for a shielded attack.
While most of the known issues would not give attackers complete control of the servers that run utilities, some of them do allow for complete hijacking, they said. The most serious issue found so far is a buffer overrun vulnerability that would allow arbitrary code to be injected remotely, so that attackers would "own" the server.
Automatak has submitted its findings to the US Department of Homeland Security’s Industrial Control System-CERT, and has notified the vendors. Nine of the potential exploits have been patched so far.
“While cyber-attacks on SCADA systems may be rare when compared to the extraordinary number of incidents involving web applications or enterprise IT networks, the damage they are able to cause is disproportionately severe,” said Brewer. “The software is primarily responsible for critical operations and national infrastructures and, if exploited, could seriously damage the operations of electricity, water and power suppliers. The potential implications of a hack are terrifying and could not only result in the loss of data, but can also cause damage to physical assets and in certain scenarios, the loss of life.”
Some of the most notorious cyber-attacks in recent years – such as the Stuxnet and Flame viruses – have been SCADA breaches. And just last November, one researcher uncovered 25 vulnerabilities in a matter of hours. Adding insult to injury is the fact that traditional perimeter cybersecurity tools, such as anti-virus software, have shown their shortcomings time and time again.
“The Flame virus, for example, avoided detection from 43 different anti-virus tools and took more than two years to detect,” Brewer said. “Instead, organizations must have tools in place that allow them to identify threats, respond and expedite forensic analysis in real time.”
Brewer advocates continuous monitoring of all log data generated by IT systems in order to automatically baseline normal, day-to-day activity across systems and multiple dimensions of the IT estate – to identify any and all anomalous activity immediately. | <urn:uuid:8f3cf28e-1ae0-4b2b-9dfb-babd503129ed> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/25-new-scada-flaws-emerge-in-critical/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959992 | 701 | 2.625 | 3 |
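As a toy illustration of that baselining idea (a generic sketch, not LogRhythm's actual algorithm), "normal" can be modeled as the mean and standard deviation of a rolling window of log-event counts, with anything several deviations away flagged immediately:

from collections import deque
from statistics import mean, stdev

class LogBaseline:
    """Flags event counts that deviate sharply from recent history."""

    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of counts
        self.threshold = threshold           # deviations considered abnormal

    def observe(self, count):
        anomalous = False
        if len(self.history) >= 10:  # wait for enough history to judge
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(count - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(count)
        return anomalous

baseline = LogBaseline()
for minute, count in enumerate([100, 103, 98, 101, 99, 102, 97, 100, 104, 99, 101, 5000]):
    if baseline.observe(count):
        print("anomalous event volume in minute", minute, ":", count)

Real deployments baseline many dimensions at once (users, hosts, ports, time of day), but the principle is the same: learn what normal looks like, then surface the outliers in real time.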
When people discuss Bitcoin, one of the properties often considered is its presumed anonymity. In this respect, it is often compared to cash. However, it should be recognized and understood that Bitcoin is not as anonymous as cash; far from it, actually. Its anonymity relies on the concept of pseudonyms, which delivers some (unjustified) sense of anonymity, but very weak anonymity in practice.
A pseudonym is an identifier of yours which is not directly linked to you by any mapping that is readily available. If you have a nickname which does not clearly resemble your real name, and if you make sure that the mapping from your nickname to your real name is not trivial (e.g., by not using your real name and your nickname interchangeably), then your nickname is a pseudonym.
Bitcoin uses a similar concept to identify players. Each participant in the system has a random-looking string -- the hash of his public key -- which is used as his identity. This string identifies you in the Bitcoin system. When Bitcoin money changes hands, it is moved from one such identity to another, and a public ledger indicates which identity paid which identity and how much. There is no clear mapping between your real-life identity and your identifier in the Bitcoin system; moreover, you can have as many identities as you like in the Bitcoin system, and move funds between those identities as you like.
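As a simplified Python sketch of that idea (real Bitcoin derives an address from a secp256k1 public key via SHA-256 and RIPEMD-160 plus an encoding step; here a random byte string stands in for the key):

import hashlib
import secrets

public_key = secrets.token_bytes(33)  # stand-in for a real elliptic-curve public key
identity = hashlib.sha256(public_key).hexdigest()
print(identity)  # the pseudonym: nothing in it points to a real-world name

The string itself reveals nothing about its owner; as the rest of this essay argues, it is the pattern of its use that does.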
The Bitcoin use of pseudonyms does not provide anonymity, certainly not of the type offered by cash.
Both academic and practical research has shown methods of "unmasking" pseudonyms, that is, of mapping them back to real identities, based on their being persistent across uses. When a pseudonym is used multiple times, its level of anonymity erodes. Each particular event in which it is used potentially narrows the circle around its owner. For example, imagine that you are "anonymously" surfing the web, identifying yourself only with a pseudonym. As you repeatedly use your pseudonym, it can be used to link your surfing actions over time. After some time, your real identity may be inferred from your surfing pattern alone. Narrowing in on your identity is especially easy if you also surf to one or more uncommon websites. Also, one instance in which you surf to a location that discloses your real identity burns the entire pseudonym mask forever, and retroactively.
This situation is the same with Bitcoin, and is even more severe, for two reasons:
First, Bitcoin security is based on crowd-sourcing: the entire ledger of transactions is always available to the entire public, so finding patterns is made easier. Unlike with cash, all changes of hands are clearly documented. The fact that you can have many identities does not make a difference when all your internal transfers are documented along with the external ones.
The second reason applies if you ever buy or sell Bitcoins for "real" currency. Trading Bitcoins for other currencies is done by entities that are bound by banking, fraud-prevention and money-laundering-prevention regulations, and that thus obtain your real-world identity. As soon as a single Bitcoin finds itself out of your system, or gets into your system, your relevant Bitcoin identity is eternally revealed, as well as, most likely, any other Bitcoin identity you maintain.
We conclude that Bitcoin is not as anonymous as cash. For cash, there is no clear recorded trail of all its changes of hands. Consequently, even when you withdraw or deposit cash, you only surrender information about your present ownership of the bills and coins you trade, nothing about what you did or will do with them. Moreover, I stress that Bitcoin may be considered less anonymous than credit cards. The credit card company indeed knows all about your transactions, without needing to de-anonymize any pseudonym. However, it is only the credit card issuer that has this information (at least theoretically). In the Bitcoin case, revealing your identity requires some de-anonymization effort, but this effort can be undertaken by anyone on earth, using the all-public records.
Bitcoin has the power to change the economy, if it ever gains critical mass and is fully commercialized. It has many advantages over traditional currency, such as being decentralized and free of artificial inflation. Bitcoin has its advantages; but cash-like anonymity is not one of them.
Is Bitcoin pretending to be as anonymous as cash?
If yes, your essay is quite amazing!
Bitcoin was never officially claimed to be anonymous. However, it is rightfully claimed to be decentralized and unbacked by any state or financial institution. These features led some people to an implicit assumption of anonymity. For example, Bitcoin is accepted by some websites that sell illegal goods over the net, as well as by criminals running extortion activities.
Also, the common idea that Bitcoin will help evade sanctions and taxation has its roots in an assumption of anonymity.
There are solutions for the lack of anonymity. Use coin remixers like CoinJoin; blockchain.info provides a free, hassle-free, opt-in remixer service.
Cable ties are a necessity for supporting and holding cables, wires and other objects in place. They are helpful for those devices that spend most of the day separated from their respective fiber patch cords and chargers. Cable ties also have countless other applications: they are used to hold chicken wire to fence posts and to hold items like spare pens in secure bunches, and they are becoming increasingly popular materials for both art and fashion.
General Structure Of Cable Ties
Cable ties consist of a long, straight, thin section with small, regularly-spaced teeth along one side (the “top”). At one end is the “head”, which is generally square in shape with an opening through the middle which contains a flexible ratchet-like device that rides up the teeth and prevents backslide. The opposite end of the cable tie is usually tapered to fit easily into the head. Once the loose end is fed through the head and the teeth are engaged, the resulting loop cannot be undone. A looped cable tie can only be pulled tighter.
A broad range of sizes, styles and colors of cable ties is available.
Cable ties are typically constructed of nylon or other plastic materials, though stainless steel and other non-plastic materials are also common.
Metal cable ties are designed to be resistant to weather effects, like rain and sun, over long periods of time. They resist corrosion, and can even withstand large amounts of radiation and extreme heat. Metal ties allow multiple heavy objects to be bound together, with some of the stronger varieties boasting tensile strengths of up to eight hundred pounds. This means that you could bind dozens of PVC pipes or even a string of cinder blocks and raise them off the ground with no fear of the strap breaking. They have an uncoated temperature range of about one thousand degrees Fahrenheit down to two hundred below. This means they can be used virtually anywhere, for almost any period of time, without losing any of their integrity. They can even be used underground for years, and can also be used in air handling areas.
Most cable ties are single-use only. In some instances, it is possible to release the ratchet with the judicious use of a small screwdriver or similar implement; however, this tends to make the cable tie weaker and more likely to break upon reuse. Some specialty cable ties are designed with easily-releasable ratchets that make them reusable without weakening, but these tend to be rare.
Cable ties have unlimited applications in households, offices, manufacturing, the automobile industry, the marine industry, the electronics industry and many more. Each cable tie is suitable for a specific type of application. One good example is the stainless steel cable tie, which is recommended for applications with exposure to chemicals, salts and acids. Another example is Velcro ties and straps, used where the application is not permanent and the user intends to reuse the product elsewhere.
FiberStore offers cost-effective cable ties in different types, sizes and colors to match your different applications. In order to utilize the right capacity and strength of cable ties, users must know the requirements of each application, or contact us. You can also find other cable management systems at fs.com.
On Mon, Oct 15, 2012 at 5:46 PM, Vernon Hamberg wrote:
The easiest example I use - the almost-only-one, actually, is when I use
the qsort API - when you call qsort, you tell it what data is to be
sorted, how long each element is, and a pointer to a procedure that will
compare 2 elements and return either that they are the same or that one
or the other is larger. That procedure is the callback - maybe we can
think of it as a CALL BACK to the caller of qsort. qsort walks through
the elements and passes pairs of them to the callback procedure.
I have never really liked the explanation that the called function is
"calling back" to the original caller. Yes, that's where the term
came from, but I don't think it emphasizes the right things.
When people talk in those terms, it sounds like there are TWO distinct
parties: (1) calling code and (2) called code. The calling code
provides the "callback function" as a parameter to the called
function. This explanation implies the callback function "belongs to"
or "is a part of" the calling code. And in a way it (usually) is,
because the calling code is (usually) in one source member, and that
source member (usually) does contain the code for the callback
function. And I suppose the called code is often in another member.
(See the picture in the Wikipedia entry for what I mean. They have
a box called "application program" and a box called "software
library". The application program calls the software library, which
"calls back" into the application program.)
To me, this is not the best way to think about it. I like to think of
it as THREE distinct parties: (1) a caller (or calling function), (2)
a called function, and (3) a callback function. We are really talking
about three functions (I'm going to use the term function because
that's the term used by most languages). The callback function must
be *known* to the caller (otherwise how would the caller pass it to
the called function?) but there is no reason to think of it as
belonging to or being a part of the caller. In some cases it can be,
but it's often not, so I'd recommend thinking of the callback function
as its own, separate beast, that happens to be known to the original
caller. And frankly, I would recommend not really thinking too hard
about what "calling back" means. It's really JUST ANOTHER FUNCTION,
and it happens to be passed as a parameter from the calling function
to the called function.
So again, the Wikipedia picture provides some context. I think
about those ovals, not about the enclosing boxes.
Now, it can be the case, especially in dynamic languages, that the
caller is actually creating the callback function on the fly, at
run-time. This might seem like craziness from a traditional RPG
standpoint, but as Scott mentioned, it's all over the place in
dynamic languages.
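If a concrete example helps, here is that three-party shape in a few
lines of Python (my sketch, not RPG, but the structure carries over):

from functools import cmp_to_key

# (3) The callback: just another standalone function, known to the caller.
def compare_length(a, b):
    return len(a) - len(b)   # negative/zero/positive, qsort-style

# (1) The caller hands the callback to (2) the called function, sorted(),
# which invokes it on pairs of elements without knowing anything else
# about where it came from.
words = ["pear", "fig", "banana", "kiwi"]
print(sorted(words, key=cmp_to_key(compare_length)))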
I'm probably stepping out on a limb - maybe one can look at it as a kind
of exit program. That's a user-defined program called at some point in a
system or application process. Close enough definition, I hope!
I don't think this is a bad limb to be on. It's another example
mentioned by Scott, and I think it's a useful way to think about it
for people who are comfortable with exit programs but not too familiar
with callbacks.
The widespread implementation of small computing devices inside the realm of the Internet of Things (IoT) makes it a perfect seeding, breeding and weeding ground for Linux with its heritage in hobbyist development.
Heart of the matter
Linux-driven open source and commercial single board computers and modules sit "at the heart of" the IoT phenomenon. This is the suggestion tabled by Eric Brown, writing on Hackerboards this month, as he explores the usage of Linux in IoT environments.
These machines are usually found in the form of gateways or hubs that aggregate sensor data from typically wireless, sensor-equipped endpoints, says Brown.
“Sometimes these endpoints run Linux as well, but these are more often simpler, lower-power MCU-driven devices such as Arduino-based devices. Linux and Windows run the show in the newly IoT-savvy cloud platforms that are emerging to monitor systems and analyze data fed from the gateways in so-called fog ecosystems,” he explains.
Indeed, the embedded Linux community has long been aligning itself towards implementations of the open source codebase that can be productively brought to bear in small-scale devices.
The embedded Linux community is a complex and diverse universe with sub-communities devoting their working project time to aspects of functionality that go way beyond user interfaces and device networking.
Entire sub-cultures exist inside the embedded Linux world devoted to data scheduling, filesystems, log files and time stamp technology… and so on.
Smart device, dumb device
The future for embedded Linux in the IoT is open to debate. Linux creator Linus Torvalds argues that, with all the dumb or stupid (i.e. not smart) devices in the IoT, Linux has a key role to play in inter-machine networking and intercommunication at the central level.
“You also need smart devices. The stupid devices talk different standards. Maybe you won’t see Linux on the leaf nodes, but you’ll see Linux in the hubs,” said Torvalds.
Technologies to consider here include RIOT, a free and open source operating system developed by a grassroots community of companies, academics and hobbyists distributed all around the world.
Software developers can work with RIoT’s issue tracker technology to inform the community about bugs and enhancement requests. They can also subscribe to the notifications mailing list to get informed about new issues, comments and pull requests. This type of collaboration and community-based information sharing is fundamentally important and will help decide which embedded software IoT projects win or fail.
Not everyone is a fan of embedded Linux given the open source operating system’s inherent ability to be ‘forked’ and re-channeled by the community itself.
Forking must (if it is to be productive) always equal growth, additional functionality and wider connectivity under a wider umbrella of security compliance also… and this, sometimes, is a big ask.
Linux loves the IoT for sure, but Linux in the IoT has some challenges ahead. | <urn:uuid:b837eca8-6384-4604-bba8-ce87b9ddd910> | CC-MAIN-2017-04 | https://internetofbusiness.com/iot-hearts-linux/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00224-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904298 | 631 | 2.703125 | 3 |
What is IT Benchmarking?
IT benchmarking is the process of using standardized software, representing a known workload, to evaluate system performance. Benchmarks are designed to represent customer workloads, such as database or Web applications. They enable a variety of hardware and software configurations to be compared. Many benchmarks are integrated with cost components, so price and performance can be evaluated.
Performance benchmarks can be likened to government mileage estimates for automobiles. Actual performance in a customer environment with a customer workload will be different. Just because a particular database benchmark says a configuration can support 5,000 concurrent users or 8,000 transactions per second does not mean that is what a customer will experience with their own configuration. Some planners consider it a rule of thumb that actual results are unlikely to exceed published results.
The major components of a benchmark are: 1) a workload, with associated metric(s); 2) a set of conditions, commonly called "run rules"; and 3) reporting requirements.
Predict Performance with Benchmarking
For performance analysts and capacity planners, benchmarks can enhance the ability to estimate system hardware requirements and predict performance. Commercial capacity planning software bases its what-if analysis of performance scenarios on published benchmark results. The number of possible benchmarks is limited only by the imagination, but they fall into three general categories: 1) industry-standard; 2) vendor-oriented; and 3) customer-sponsored or internal benchmarking.
Industry-Standard Benchmarking
ISBs (industry-standard benchmarks) are developed, maintained and regulated by independent organizations. These benchmarks typically execute on a wide variety of hardware and software combinations. The most well-known ISB organizations are SPEC (the Standard Performance Evaluation Corporation) and the TPC (Transaction Processing Performance Council). Typically, hardware and software vendors are heavily represented in the membership of these organizations. The groups solicit input from members and the IT community when benchmarks are created and updated to reflect changes in the marketplace. Some common ISBs are:
- TPC-C, representing a database transaction workload
- SPEC jAppServer, representing a multitier, Java 2 Platform, Enterprise Edition application server workload
- SPEC CPU2006, representing CPU-intensive workloads
- SPC (Storage Performance Council) benchmarks, representing storage-intensive workloads
What Is Power And Cooling?
As data grows, so do data centres. And as data centres grow, so do their power demands. High‑density blade servers can deliver more processing power in a smaller footprint, but they also bring higher power costs—three to five times as much as previous‑generation equipment. With energy required both to power the computing infrastructure and to cool heat‑generating servers, conservation isn’t easy. But as utility rates rise, power and cooling accounts for more of data centre operating costs. IDC estimates that for every dollar spent on new hardware, organizations must spend an additional 66 cents on power and cooling. To keep these costs down, a complete power and cooling strategy is necessary. With the right solutions in place, you can get better control of your climate and your budget.
A data centre infrastructure management (DCIM) system collects and manages information about a data centre's equipment, resource use and operational status. This information is then distributed, integrated, analyzed and applied in ways that help managers meet organization and service-oriented goals and optimize their data centre's performance. In practice, DCIM systems may vary widely in focus, and complete solutions are likely to consist of a framework or suite of products, from one or many suppliers, that are designed to interoperate or complement each other. The close interworking of IT and mechanical/electrical systems will increasingly lead to the deployment of solutions that span data centre facility infrastructure, physical IT assets and virtual IT assets.
How Will Power And Cooling Solutions Benefit My Organization?
By implementing a comprehensive power and cooling strategy, you can:
- Reduce utility costs
- Improve utilization and efficiencies of systems
- Improve system cooling
- Improve infrastructure agility to meet future demands
- Increase visibility and manageability of data centre
How Can I Help My Organization Embrace Power And Cooling Solutions?
When evaluating power and cooling solutions, it is important to ask the following questions:
- How are you currently meeting environmental policy initiatives?
- How many physical servers do you have, and how well are they utilized?
- How many resources are dedicated to data centre maintenance and support?
- When will you run out of capacity, and how do you know when you do?
- How do you measure the impact of a Data Centre Event?
What Comprises A Power And Cooling Solution?
Designing a modular, energy‑efficient power and cooling system reduces costs and limits downtime. In a dense server environment, placing your data centre’s cooling mechanisms near equipment generating the most heat will ensure your systems are running at optimized levels. Also, right‑sizing power requirements can help maximize your hardware lifecycle.
Scalable Three‑Phase Power
Paralleling capabilities on internally scalable UPSes from 10 kW to 2 MW enable right-sizing and on-demand adjustments as your capacity needs change, backed by proven, high-performance UPS hardware. Highly granular modularity (2 kW, 4 kW, 10 kW, 12 kW, 25 kW or 66 kW increments) is ideal for data centres that need to scale their power up or down quickly.
Power Distribution Units (PDUs)
Power distribution is an effective way to increase energy efficiency and improve power management. Metered and switched rack PDUs, for example, are designed to help data centre managers control power capacity and functionality.
In an ever-changing data centre, it is important to have cooling systems that can easily adapt to new environments. Modular systems can be moved from place to place and supplement traditional cooling to counteract hot spots.
You gain better control and management of your data centre’s availability and efficiency so you can better align your IT equipment to your organization’s needs.
Getting Started With Power and Cooling
Your Account Manager and certified Solution Architects are ready to assist you with every phase of choosing and leveraging the right solution for your IT environment. Our approach includes:
- An initial discovery session to understand your goals, requirements, and budget
- An assessment review of your existing environment and definition of project requirements
- Detailed vendor evaluations, recommendations, future design and proof of concept
- Procurement, configuration, and deployment of the final solution
- Ongoing product lifecycle support
Power protection solution offers reliability for school district officials.
The Organization: West Des Moines Community Schools
The Location: West Des Moines, Iowa
The Project: CDW redesigns West Des Moines Community Schools' outgrown electrical and cooling system. Get the story »
Smart Growth, Smart I.T. at Multi-Color Corp
The Organization: Multi-Color Corp.
The Location: Batavia, Ohio
The Project: Multi-Color Corporation's newly consolidated server infrastructure takes on a string of global acquisitions and delivers. Get the story »
State of the Arts Protection
The Organization: Salt Lake Community College
The Location: Salt Lake City, UT
The Project: Salt Lake Community College relies on APC’s power and cooling technology to keep its brand-new Center for Arts and Media data center humming. Get the story »
IT Lends Children's Miracle Network Hospitals A Hand
The Organization: Children's Miracle Network Hospitals
The Location: Salt Lake City, UT
The Project: Children's Miracle Network Hospitals intends to grow by a factor of four over the next decade, and they are willing to make the necessary investment in infrastructure to do so. Get the story »
Turning the Engine
The Organization: International Speedway Corporation
The Location: Daytona Beach, FL
The Project: Racing giant accelerates operations with new IT infrastructure. Get the story »
Backup Data Center Offers Upfront Benefits
The Organization: Drexel University
The Location: Philadelphia, PA
The Project: Drexel University uses new facility as springboard to upgraded infrastructure. Get the story »
Nothing But Net
The Organization: Houston Rockets
The Location: Houston, TX
The Project: The Houston Rockets bolster the network infrastructure at the Toyota Center for the 2013 NBA All-Star Game -- and beyond. Get the story »
Take a Ride on a Better Network
The Organization: Palace Entertainment
The Location: Newport Beach, CA
The Project: Hammering out a network optimization solution to accommodate 40 locations in 11 states. Get the story »
Foundation for the Future
The Organization: West Islip Public Schools
The Location: Long Island, NY
The Project: Upgrade network and connectivity to offer benefits to the classroom and beyond. Get the story »
An Upgrade That’s Really Making the Grade
The Organization: Toccoa Falls College
The Location: Toccoa Falls, GA
The Project: New switches and more APs to boost WLAN speed and increase coverage. Get the story »
Room to Grow
The Organization: Joliet Township High School
The Location: Joliet, IL
The Project: When a school district started to outgrow their data centers, they turned to CDW. Get the story »
The Organization: Pico Quantitative Trading
The Location: New York, NY
The Project: Partnering with a technology partner with the offerings and insight to enable growth. Get the story »
Convergence Conquers Complexity
The Organization: Total Wine & More
The Location: Potomac, MD
The Project: Grow a converged data center platform for incremental agility, efficiency and savings. Get the story »
The Organization: NAVIS
The Location: Bend, OR
The Project: Use Cisco UCS to scale computing power to meet spikes in demand. Get the story »
Ready to Roam
The Organization: Pinckney Community Schools
The Location: Livingston County, MI
The Project: Network infrastructure overhaul to achieve its long‑term teaching and learning goals. Get the story »
The Organization: Rothstein Kass
The Location: Roseland, NJ
The Project: Use Cisco UCS to combine pervasive virtualization and automated management tools. Get the story » | <urn:uuid:907954c2-2d86-4ac9-a4b3-452713bc1804> | CC-MAIN-2017-04 | https://www.cdw.ca/it-solutions/ca/network-power-cooling.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00188-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.870426 | 1,652 | 2.65625 | 3 |
An impact resistance and fatigue-testing study reports that Telene, a thermoset polydicyclopentadiene (PDCPD) used as a matrix in a glass fiber composite, demonstrated improved impact strength and fatigue resistance for products subject to high stress factors. The study, released by the Department of Materials Engineering at the University of Leuven, Belgium, found that Telene exhibited 50% greater resistance in impact testing and four times longer life in fatigue testing than epoxy samples did, while maintaining the same overall tensile strength. The study compared Telene with an equivalent epoxy composite commonly used in the laminate process. While the epoxy-based laminates used with glass fibers revealed early local damage and loss of mechanical properties, Telene reportedly retained its mechanical characteristics throughout the extended testing protocol, until break. An abstract and the full study are available from ScienceDirect. This story is reprinted from material from Telene, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
Abstract: Washington State University researchers have developed a novel nanomaterial that could improve the performance and lower the costs of fuel cells by using fewer precious metals like platinum or palladium. Led by Yuehe Lin, professor in the School of Mechanical and Materials Engineering, the researchers used inexpensive metal to make a super low density material, called an aerogel, to reduce the amount of precious metals required for fuel cell reactions. They also sped up the time to make the aerogels, which makes them more viable for large-scale production. Their work is published in Advanced Materials. Hydrogen fuel cells are a promising green energy solution, producing electricity much more efficiently and cleanly than combustion engines. But they need expensive precious metals to fuel their chemical reactions. This need has limited their acceptance in the marketplace. Aerogels, which are sometimes also called liquid smoke, are solid materials that are about 92 percent air. Effective insulators, they are used in wet suits, firefighting gear, windows, paints and in fuel cell catalysts. Because metal-based aerogels have large surface areas and are highly porous, they work well for catalyzing in fuel cells. The WSU team created a series of bimetallic aerogels, incorporating inexpensive copper and using less precious metal than other metal aerogels. Researchers introduced the copper in the bimetallic system through their new, one-step reduction method to create hydrogel. The hydrogel is the liquid-filled form of aerogel. The liquid component is carefully and completely dried out of the hydrogel to create aerogel. Their method has reduced the manufacturing time of hydrogel from three days to six hours. "This will be a great advantage for large scale production," said Chengzhou Zhu, a WSU assistant research professor who created the aerogel. The research is in keeping with WSU's Grand Challenges, a suite of research initiatives aimed at large societal issues. It is particularly relevant to the challenge of sustainable resources and its theme of energy.
ASM Fellow Richard D. Sisson has received Worcester Polytechnic Institute's 2016 Board of Trustees' Award for Outstanding Research and Creative Scholarship. The award recognizes continuing excellence in research and scholarship by faculty members over a period of at least five years. Prof. Sisson is the George F. Fuller Professor of Mechanical Engineering, director of Worcester Polytechnic Institute's Manufacturing and Materials Engineering Programs, and technical director of the WPI Center for Heat Treating Excellence. He is internationally recognized for his research in materials science and engineering, as well as manufacturing. His pioneering work spans several areas of physical metallurgy, and he has written more than 250 journal articles and 250 technical presentations. Prof. Sisson is currently principal investigator for a multi-million-dollar, multi-institution project aimed at developing new metallurgical methods and new lightweight alloys to help the military build more effective and durable vehicles and systems. He has won numerous national and international awards, and has served as president of the ASM Heat Treating Society.
Pollens, the bane of allergy sufferers, could represent a boon for battery makers: Recent research has suggested their potential use as anodes in lithium-ion batteries. "Our findings have demonstrated that renewable pollens could produce carbon architectures for anode applications in energy storage devices," says Vilas Pol, an associate professor in the School of Chemical Engineering and the School of Materials Engineering at Purdue University. Batteries have two electrodes, called an anode and a cathode. The anodes in most of today's lithium-ion batteries are made of graphite. Lithium ions are contained in a liquid called an electrolyte, and these ions are stored in the anode during recharging. The researchers tested bee pollen- and cattail pollen-derived carbons as anodes. "Both are abundantly available," says Pol, who worked with doctoral student Jialiang Tang. "The bottom line here is we want to learn something from nature that could be useful in creating better batteries with renewable feedstock." Research findings are detailed in a paper that appears today in Nature's Scientific Reports. Whereas bee pollen is a mixture of different pollen types collected by honeybees, the cattail pollens all have the same shape. "I started looking into pollens when my mom told me she had developed pollen allergy symptoms about two years ago," Tang says. "I was fascinated by the beauty and diversity of pollen microstructures. But the idea of using them as battery anodes did not really kick in until I started working on battery research and learned more about carbonization of biomass." The researchers processed the pollen under high temperatures in a chamber containing argon gas using a procedure called pyrolysis, yielding pure carbon in the original shape of the pollen particles. They were further processed, or "activated," by heating at lower temperature — about 300 C — in the presence of oxygen, forming pores in the carbon structures to increase their energy-storage capacity. The research showed the pollen anodes could be charged at various rates. While charging for 10 hours resulted in a full charge, charging them for only one hour resulted in more than half of a full charge, Pol says. "The theoretical capacity of graphite is 372 milliamp hours per gram, and we achieved 200 milliamp hours after one hour of charging," he says. The researchers tested the carbon at 25 C and 50 C to simulate a range of climates. "This is because the weather-based degradation of batteries is totally different in New Mexico compared to Indiana," Pol says. Findings showed the cattail pollens performed better than bee pollen. The work is ongoing. Whereas the current work studied the pollen in only anodes, future research will include work to study them in a full-cell battery with a commercial cathode. "We are just introducing the fascinating concept here," Pol says. "Further work is needed to determine how practical it might be." Electron microscopy studies were performed at the Birck Nanotechnology Center in Purdue's Discovery Park. The work was supported by Purdue's School of Chemical Engineering. The electron microscopy studies at Birck were funded by a Kirk exploratory research grant and were conducted by doctoral students Arthur D. Dysart and Vinodkumar Etacheri. An XPS measurement was conducted by Dmitry Zemlyanov at Birck. Other support came from the Hoosier Heavy Hybrid Center of Excellence (H3CoE) fellowship, funded by U.S. Department of Energy. Release Date: February 5, 2016 Source: Purdue University
News Article | April 7, 2016
Carbon fibers derived from a sustainable source, a type of wild mushroom, and modified with nanoparticles have been shown to outperform conventional graphite electrodes for lithium-ion batteries. Researchers at Purdue University have created electrodes from a species of wild fungus called Tyromyces fissilis. "Current state-of-the-art lithium-ion batteries must be improved in both energy density and power output in order to meet the future energy storage demand in electric vehicles and grid energy-storage technologies," says Vilas Pol, an associate professor in the School of Chemical Engineering and the School of Materials Engineering. "So there is a dire need to develop new anode materials with superior performance." Batteries have two electrodes, called an anode and a cathode. The anodes in most of today's lithium-ion batteries are made of graphite. Lithium ions are contained in a liquid called an electrolyte, and these ions are stored in the anode during recharging. Pol and doctoral student Jialiang Tang have found that carbon fibers derived from Tyromyces fissilis and modified by attaching cobalt oxide nanoparticles outperform conventional graphite in the anodes. The hybrid design has a synergistic result, Pol says. "Both the carbon fibers and cobalt oxide particles are electrochemically active, so your capacity number goes higher because they both participate," he says. The hybrid anodes have a stable capacity of 530 milliamp hours per gram, which is one and a half times greater than graphite's capacity. Findings are detailed in a paper appearing online in the American Chemical Society's Sustainable Chemistry & Engineering journal. One approach for improving battery performance is to modify carbon fibers by attaching certain metals, alloys or metal oxides that allow for increased storage of lithium during recharging. Tang got the idea of tapping fungi for raw materials while researching alternative sources for carbon fibers. "The methods now used to produce carbon fibers for batteries are often chemical heavy and expensive," Tang says. He noticed a mushroom growing on a rotting wood stump in his backyard and decided to study its potential as a source for carbon fibers. "I was curious about the structure so I cut it open and found that it has very interesting properties," he says. "It's very rubbery and yet very tough at the same time. Most interestingly, when I cut it open it has a very fibrous network structure." Comparisons with other fungi showed the Tyromyces fissilis was especially abundant in fibers. The fibers are processed under high temperatures in a chamber containing argon gas using a procedure called pyrolysis, yielding pure carbon in the original shape of the fungus fibers. The fibers have a disordered arrangement and intertwine like spaghetti noodles. The interconnected network brings faster electron transport, which could result in faster battery charging. Electron microscopy studies were performed at the Birck Nanotechnology Center in Purdue's Discovery Park. The work was supported by Purdue's School of Chemical Engineering. The electron microscopy studies at Birck were funded by a Kirk exploratory research grant and were conducted by former postdoctoral research associate Vinodkumar Etacheri.
| <urn:uuid:a931834b-ffd4-4ce8-9aed-25167773cee4> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/materials-engineering-204393/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00188-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952146 | 2,247 | 3.03125 | 3 |
Learn More: Study With Others
We have all heard the statement, “Two minds are better than one.” And that statement can often be true while studying. Studying with a friend or a classmate is an excellent source of encouragement and motivation—even if he or she is studying a different subject. However, the best reason to study with a friend or classmate is to expand your own and his or her knowledge by sort of picking each other’s brains.
Performing non-surgical brain-picking is a great way to gain loads of outstanding information, advice and ideas. And it is usually easy to find someone with more knowledge or experience to study with during certification courses because students’ skill and knowledge levels usually vary. Learning from friends or classmates can help complete one’s understanding of the material because you are learning from different perspectives.
Also, teaching other people the material is a great way to improve your knowledge of a subject because you will most likely have to thoroughly explain your answers. People who have difficulty retaining information can definitely improve retention through training others or by listening to others explain the information to them again.
But—as always—be careful as to who you choose to collaborate with. Studying with a really good friend could be distracting and, therefore, probably wouldn’t be quality study time. Also, it is definitely not a good idea to study with a romantic interest. Be sure to pick someone that is serious about studying, someone that you could learn from—ideally, they should be able to learn from you as well—and obviously someone you get along with. | <urn:uuid:8a35d2a5-8bb3-44af-a2ce-e45f6268e840> | CC-MAIN-2017-04 | http://certmag.com/learn-more-study-with-others/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00032-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972966 | 328 | 3 | 3 |
There are always a few little touches left to make your Linux even a bit more secure, involving suid, nouser, sudo and so on. Now, this article is newbie-friendly, but it also requires some small amount of knowledge. Fear not, for I shall explain everything as painlessly as I can. So sit back, grab yourself your favorite drink, some peanuts and relax. 3, 2, 1…
Let's discuss suid. Yes, suid, which stands for 'set-user-ID' root programs. As you can guess, these programs run as root regardless of who is executing them. The reason suid programs are so dangerous is that interaction with the untrusted user begins before the program is even started. There are many ways to confuse a program, using things like environment variables, signals, or anything you want. Exactly this 'confusion' of a program is a frequent cause of buffer overflows. More than 50% of all major security bugs leading to the release of security advisories are attributable to suid programs. And some distributions are shipped with hundreds of these suid programs, most of which you'll probably never use. Of course, there are a few which are necessary so that a normal user might perform operations which are normally done by root. Now let's get to the root of the problem…
How can you find out about the suid programs on your system? The thing to do is to get a list of all suid programs on your system and start the boring task of going through them. Unfortunately, I can't tell you here which you need, might need or don't need. But, again, fear not, for logic is your best friend here. Just browse through the list of all suid programs and find those that you use frequently, sometimes or never. But, I must warn you, the list could be looooong. OK, here we go; type the following line (of course, as root):
find / -type f -perm +6000 -ls
After a while (it depends on the amount of suid programs on your system), this will print a long listing of every suid and sgid file, along with its permissions, owner and path. (Note: on newer versions of GNU find, the +6000 syntax is deprecated in favour of -perm /6000.)
Now, let’s pretend that you want to remove the suid permission on /bin/ping, as you don’t plan on using it:
chmod -s /bin/ping
That's it! Feel free to browse through the man pages of chmod to find out more if you want (that's 'man chmod'). Now the most annoying fact is that you'll have to do it for ALL the suid programs that you don't plan on using.
The other issue is files which don't belong to any user, or don't belong to a group. These are also dangerous, as they provide more ways to manipulate your system. Also, an unowned file may be a signal indicating an intruder on your system. Let's find them:
find / -nouser -o -nogroup
Nothing? Heh, that's exactly what we expect! And if you find any, feel free to change the ownership of the file to any user you want, or to delete it. If you want to change the ownership, you might want to check out the command 'chown', of course by typing 'man chown'.
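For example, to hand an orphaned file over to a user and group of your choosing (the names and the path here are made up):
chown someuser:somegroup /path/to/orphaned-file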
Now, the last but certainly not the least important: sudo. By configuring sudo you can enable normal users (any user other than root) to perform certain actions usually reserved for root. Did you ever want to shut down your PC as a normal, average user (this is for example purposes only, as I don't recommend it for security reasons) or perform any other privileged action? Well, that's exactly why I recommend configuring sudo. The file /etc/sudoers contains all that information. Now, as describing sudo and sudoers could eat up more than an article, I'm not going to go into more detail about them; I'll leave that to you. Remember, man pages are your friend, so 'man sudo' and 'man sudoers', and in one afternoon you'll fix it perfectly. Problems? Don't have sudo? Just go here and download it. Keep exploring!
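To give you a taste, a single /etc/sudoers line like the one below (the user name is hypothetical, and you should always edit the file with 'visudo') would let a normal user run the shutdown command as root:
alice ALL=(root) /sbin/shutdown
With that line in place, alice can run 'sudo /sbin/shutdown -h now' without ever logging in as root.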
Google surveyed 2,000 web users to find the most common types of passwords they use. The results aren't really shocking, but they strongly suggest it would be very easy to break into the accounts of anyone you know.
That, and lots of people own pets.
Here are the most common things people base their passwords on, according to the Google Apps survey:
- Pet's name
- Significant dates (like a wedding anniversary)
- Date of birth of close relation
- Child's name
- Other family member's name
- Place of birth
- Favorite holiday
- Something related to favorite football team
- Current partner's name
- The word "password"
Step one in creating stronger passwords: Use a random password generator (like those built into password management programs like LastPass and 1Password), so you can resist the urge to use "Fido" to secure your online accounts. Pass this along to someone you love.
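If you'd rather not install anything, a few lines of Python's standard library will do the same job (a sketch; adjust the length and character set to taste):

import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)  # a 16-character random password

The secrets module uses a cryptographically strong random source, which is exactly what you want for passwords.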
If You Could Read My Mind
By Samuel Greengard | Posted 2016-02-19
Humans have risen to the top because we're the smartest creatures on this planet. But what happens when a silicon-based species is smarter and less fragile?
It's abundantly clear that we're careening into a new era of information technology. Artificial Intelligence (AI) is advancing in quantum leaps.
In late January, Google announced that its DeepMind computer had annihilated a human champion in the ancient Chinese game of Go, which is far more complex than chess. Some experts say that this advance in neural network technology represents an even bigger achievement than when IBM's Deep Blue beat world chess champion Garry Kasparov in 1997.
Now there's word that robots will likely be able to read our minds by 2030. Nita Farahany, a professor of law and philosophy at Duke University, stated at the 2016 World Economic Forum (WEF) that brain function combined with an EEG device (which detects and records electrical activity in the brain) could be used to unlock and operate computers and other electronic devices. For instance, a user might think of a song or an object, and the device would recognize it and unlock itself.
While this sounds somewhere between incredible and mind-bending—after all, it could introduce the possibility of unhackable authentication—it also raises a hornet's nest of concerns. Based on today's attacks on government and corporate systems, and data that's increasingly accessible through clouds, it's reasonable to wonder whether the technology would raise the stakes on data privacy, security and warfare to new, completely unimaginable levels.
Of course, between now and 2030, machines could be running our businesses, thus unleashing wave after wave of unemployment. A WEF report, "The Future of Jobs," predicts that as many as 7.1 million jobs could vanish by 2020, while 2.1 million new jobs could be created—primarily in highly specialized areas, such as computing, mathematics, architecture and engineering. It doesn't take a certified accountant to figure out there's a fundamental problem here.
Where will all of this take us? Stephen Hawking, Elon Musk and others are now warning about AI surpassing human intelligence over the next few decades. Of course, at a certain point, the question will become whether humans are even necessary and have any value in a world where machines can do just about everything better.
Humans have risen to the top of the evolutionary pyramid because we're the smartest creatures on this planet. But what happens when another species—one that's silicon based on top of it—is smarter and less fragile? Then all bets are off. Musk has even gone so far as to question whether humans will simply become a "biological boot loader for digital superintelligence."
Sometimes, a series of incremental gains eventually leads to a net loss. Let's hope this isn't the case with AI.
Google adds a layer to climate modeling efforts
Earth Engine taps 25 years of satellite imagery in battle against deforestation
- By Kevin McCaney
- Dec 03, 2010
Google has added to the emerging effort toward global climate modeling with the Dec. 2 release of Google Earth Engine, which makes 25 years of satellite images available for studying changes and mapping trends in the Earth’s environment.
The images contain what Google says are trillions of measurements collected by Landsat satellites over the last quarter century, and the company is developing tools to help scientists interpret and analyze the data. A principal focus of the project is identifying areas of deforestation.
Google is offering the platform for free, and also will provide 20 million CPU hours to allow scientists and developing countries to use the tools, reports Juliet Eilperin in the Washington Post.
The release of Google Earth Engine, which the company has been working on for two years, coincides with the annual United Nations Framework Convention on Climate Change, being held in Cancun, Mexico. One of the convention’s goals is to agree on a way to compensate countries with rainforests for protecting them, Eilperin writes; the data from the platform’s applications could help validate successful efforts.
Advances in geospatial and mapping systems have helped fuel a number of efforts to make use of satellite, sensor and other data to more accurately map the Earth’s surface and environment, for purposes from supporting first responders in local emergencies to tracking global climate.
The National Oceanic and Atmospheric Administration (NOAA), for example, earlier this year launched a prototype Web portal for scientific climate data at Climate.gov. In October, NOAA announced plans to build a $27.6 million, state-of-the-art supercomputer at its Environmental Security Supercomputing Center in West Virginia. The new high-performance center is part of the agency's expanding climate modeling efforts, which have been supported in part by $170 million in stimulus funds.
NOAA also this year opened a new supercomputer for climate research, nicknamed Gaea, or Mother Earth, at Oak Ridge National Laboratory in Tennessee, and has plans to upgrade the 260-teraflop machine to petascale capability.
Meanwhile, NASA recently began testing a potentially ground-breaking, 40 gigabits/sec trial network at its Center for Climate Simulation at the Goddard Space Flight Center in Greenbelt, Md., home to one of the world’s premier climate modeling groups.
Kevin McCaney is a former editor of Defense Systems and GCN.
Most discussions of big data focus on how to find the nuggets of valuable information contained in a company’s storage media. If you just apply the right algorithm with the right equipment, you’ll get a return on your investment—right? In some cases, yes, but will your journey into this realm yield big data value, or will it just result in big data hoarding?
The flood of data is growing: from increasingly high-resolution video (along with cameras everywhere) to various gadgets that record all manner of information, companies and consumers are awash in bits. And with the Internet of Things on the horizon, the problem is set to become much worse. Imagine every appliance, electronic device and electrical outlet in your home—along with many other items—fitted with a processor to enable measurements and intercommunication with each other and with the Internet. Now imagine all the additional data this scenario entails.
An EMC-sponsored forecast by IDC predicts that "[f]rom 2005 to 2020, the digital universe will grow by a factor of 300, from 130 exabytes to 40,000 exabytes, or 40 trillion gigabytes (more than 5,200 gigabytes for every man, woman, and child in 2020)." Obviously, that data won't be evenly distributed—some companies (such as cloud providers) will store a larger share of it. Whether all of it can be stored economically is debatable, but assuming traditional storage is inexpensive enough or a new technology emerges to handle more data, another (perhaps greater) problem remains: what to do with it all.
Big Data Pack Rats
If you’ve ever lived in a house for a number of years and then moved, you have probably experienced the shocking realization that you have accumulated lots of junk. In a noble desire to avoid waste, we often accumulate worthless goods on the rationale that “they might come in handy someday.” For many individuals, getting rid of stuff takes a conscious effort. How much easier it is to be a pack rat when it comes to data, which can easily be moved out of sight.
The mantra of big data is that all these ones and zeroes contain information that can yield significant business value—if we can just figure out how to extract it. And therein lies the problem: most of the data contributes little value and is just taking up space. For some businesses, the value to be gained from analyzing massive amounts of information is insufficient to justify the costs of implementing a big data analytics system.
Even if a company decides not to pursue big data analytics, data-storage needs will continue growing unless it implements some approach to eliminating useless data. But how should useful and useless data be differentiated? Companies are thus in a difficult situation: the rising flood of data may be worth storing for analysis (which requires investment in a platform to process that data), but in either case, identifying the useful data is difficult.
Fixing Data Hoarding
If you're not interested in big data analytics, but instead want to simply cut storage costs by eliminating useless data, you must determine a means of differentiating the good from the bad. (Eliminating useless data can aid big data analytics in many cases as well.) But this is not a trivial matter. Although older data may tend to be less useful than newer data, simply deleting files older than a certain date is sure to destroy much valuable information. Attempts to differentiate by file type, frequency of use, location in storage, source, size and so on all run into the same problem. Going through data file by file is a tedious task that is probably uneconomical.
Some tradeoffs could be devised where, for instance, data of a certain type and older than a certain date is deleted on the assumption that the costs saved outweighs any value lost. Of course, short of quantifying ahead of time the value that is lost, justifying such approaches objectively is difficult. Individuals (consumers and employees) can take some steps to delete information they know to be useless (the 40 draft versions of a report, for instance), but this approach fails to address data created automatically by sensors, monitoring equipment and so on. Unfortunately, identifying and disposing of useless data may seem more costly (or at least more troublesome) than simply paying for additional storage and ignoring the problem. Data hoarding thus has momentum for many companies and individuals.
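As a rough sketch of that tradeoff approach, the script below (Python, standard library only) reports files of presumed-low-value types older than a threshold as deletion candidates. The starting directory, file types and age limit are all assumptions to be tuned to your own retention policy, and nothing is deleted without human review:

    import os
    import time

    ROOT = "/data/archive"                   # hypothetical starting directory
    STALE_TYPES = {".log", ".tmp", ".bak"}   # types presumed low-value when old
    MAX_AGE_DAYS = 2 * 365                   # anything older is a candidate

    cutoff = time.time() - MAX_AGE_DAYS * 86400
    candidates = []
    for dirpath, _, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stale_type = os.path.splitext(name)[1].lower() in STALE_TYPES
            if stale_type and os.path.getmtime(path) < cutoff:
                candidates.append((path, os.path.getsize(path)))

    # Report first; a human should review the list before any os.remove() call.
    total_gb = sum(size for _, size in candidates) / 1e9
    print(f"{len(candidates)} candidate files, {total_gb:.1f} GB reclaimable")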
Big Data Analytics: To Analyze or Not To Analyze
Processing large amounts of data is costly: it requires primary storage, backup storage, capital costs in both storage and processing equipment and software, labor costs in implementing the system and ongoing costs to run everything. Implementing a big data platform should, like any business decision, be justified by a legitimate potential for returns. You need more than just a little bit of valuable insight from your reams of data—you need to cover the platform costs and provide enough of a return to justify focusing on this pursuit instead of something else.
Yes, your massive amounts of data probably have some useful insights to offer if you look long and hard enough, but you might also be better off simply deleting a large portion of that data. By doing so, you save on storage and backup costs—value that must be considered when deciding whether big data analytics has something to offer your company. As Baseline notes, deleting data (rather than storing it all) can also yield cost benefits in the event of litigation.
Determining whether big data analytics offers your company a source of valuable insights depends on a number of factors, including your industry, budget, goals and so on. For all companies (with, perhaps, a few exceptions), a campaign against data hoarding can help by reducing storage costs, as well as enabling a greater focus for those choosing to pursue big data. Unfortunately, determining what data to delete is problematic, largely because of the sheer amount of it.
The best approach to ending data hoarding—whether you're a company or an individual—is to accept that you must make some tradeoffs. If you delete data, even if you're extremely careful and do it by hand (a time-consuming and, arguably, wasteful approach), you'll probably kick yourself one day over something of value that you "should've known better than to delete." Accept that this will happen, but consider also the costs saved in the meantime. By reducing your data load, you cut storage costs—value that could easily outweigh the tidbits of information you would have otherwise been able to use later.
Big data appeals to the "it might come in handy someday" attitude that so many of us harbor. Unfortunately, this attitude can be costly in numerous ways. Big data analytics is an area that may prove to offer enough value to justify itself, but it is also surrounded by a lot of hype. In the meantime, all your company's data is soaking up money in storage and backup costs. The question each company must answer is whether the value of maintaining that data—and, perhaps, the value of big data analytics—is enough to justify the costs. Are you in search of big data value, or are you just big data hoarding?
SQL BOOLEAN type
From Ingres Community Wiki
This is an Ingres 10.0 DBMS project to implement the SQL predefined data type named BOOLEAN.
ISO/IEC 9075-2:2003 Information technology — Database languages — SQL — Part 2: Foundation (SQL/Foundation)
From section 4.5 "Boolean types":
"The data type boolean comprises the distinct truth values True and False. Unless prohibited by a NOT NULL constraint, the boolean data type also supports the truth value Unknown as the null value. This specification does not make a distinction between the null value of the boolean data type and the truth value Unknown that is the result of an SQL <predicate>, <search condition>, or <boolean value expression>; they may be used interchangeably to mean exactly the same thing."
Section 4.5 also describes comparison and assignment of booleans and operations involving booleans.
MySQL implements BOOLEAN (and BOOL) as "synonyms for TINYINT(1) with non-zero values equating to true. It also implements the boolean constants TRUE and FALSE as "aliases for 1 and 0."
PostgreSQL implements a BOOLEAN type as well as the constants TRUE and FALSE. For input or assignment, either the unquoted constant keywords or character strings such as 'y', 'true', 'no', or '0' can be used. The canonical output is unquoted 't' or 'f'.
Oracle implements BOOLEAN for variables in PL/SQL, but does not support the type for table column definition.
SQL Server and IBM DB2 do not support BOOLEAN.
The iicommon.h header defines DB_BOO_TYPE "boolean" and describes it as follows:
"Datatype ID 1 is used for the internal "boolean" type. This is only used as the result type of the comparison function instances, when executed via the adf_func() call."
The project aims to extend the use of the internal type so that it can be used in CREATE TABLE statements, database procedures and other appropriate contexts. For example,
CREATE TABLE example (column1 BOOLEAN NOT NULL);

CREATE PROCEDURE example_proc (flag BOOLEAN NOT NULL) AS
DECLARE
    var1 BOOLEAN;
BEGIN
    ...
END;
In addition, the keywords TRUE and FALSE will be implemented so that they can be used in various SQL contexts and the parser will understand the boolean nature of a BOOLEAN column or variable. For example,
INSERT INTO example VALUES (FALSE);
UPDATE example SET column1 = TRUE;
SELECT * FROM example WHERE column1 IS TRUE;
...
var1 = TRUE;
WHILE var1 IS NOT FALSE ...
Connectivity Driver Implementations
- Information regarding the ODBC implementation is available here.
- Information regarding the .NET data provider implementation is available here.
- Information regarding the PHP driver implementation is available here
Also supported with OpenAPI & JDBC.
Ingres Enhancement Number
Issue 132541 requests implementation of "Search condition boolean and boolean data types."
Issue 133014 requests implementation of a BOOLEAN data type to ease migrations to Ingres.
DDS Review Summary
- The strings 'FALSE' and 'TRUE' will be accepted for input into a BOOLEAN column, in addition to the SQL literals FALSE and TRUE.
- The unquoted strings FALSE and TRUE will be shown in the SQL Terminal Monitor output for a BOOLEAN column.
- CAST(integer AS BOOLEAN) will be accepted for values 0 and 1. No other non-standard coercions will be supported.
- UNKNOWN will only be supported as part of IS UNKNOWN or IS NOT UNKNOWN operators.
- Product Management will be asked to decide whether support for BOOLEAN will be extended to the character-based front-ends.
- Other items discussed:
- ORDER BY boolean will result in grouping rows such that FALSE will come first, TRUE second and NULL last.
- A pre-10.0 client will get an error if it tries to access a 10.0+ server result that includes a BOOLEAN column.
- The various drivers developed or supported by Ingres Corp. (PHP, Python, ODBC, etc.) will be updated to support BOOLEAN. We cannot guarantee the same for third-party drivers such as Perl.
This will require testing through SQL Terminal Monitor, embedded SQL, OpenAPI, JDBC, .NET, and ODBC.
BOOLEAN Data Type
BOOLEAN can be used as a data type when defining a column in a table or a variable in a database procedure.
Boolean columns accept as input the literal values FALSE and TRUE, 0 and 1 (which correspond to false and true, respectively), and the strings 'FALSE' and 'TRUE'.
IS UNKNOWN is a synonym for IS NULL when dealing with Boolean values.
The input is not case sensitive.
Terminal Monitor output for a BOOLEAN column shows the unquoted strings FALSE and TRUE.
ORDER BY BOOLEAN results in grouping rows in this order: FALSE, TRUE, NULL.
CASE expressions can be used with BOOLEAN columns or literals. For example:
CASE expr WHEN cond1 THEN expr2

and

CASE WHEN search_cond1 THEN expr1
accept FALSE or TRUE in condN or search_condN or part thereof, and exprN can include BOOLEAN columns or literals.
The CAST function supports casting BOOLEAN to and from character types and from the integer values 0 and 1. For example:
- CAST (BOOLEAN AS character_type) is allowed.
- CAST(character_type AS BOOLEAN) is accepted if the character type is the string 'FALSE' or 'TRUE', regardless of case.
- CAST(integer AS BOOLEAN) is accepted for values 0 and 1.
- CAST(integer_type AS BOOLEAN) is accepted if the integer type has the value 0 or 1.
For casting to strings, the data type must be of sufficient length (for example, CHAR(5) for FALSE) or silent truncation occurs (unless the string_truncation=fail is used at connect time). The shortcut CHAR(expr) returns a single character (that is, 'F' or 'T') because it is interpreted as CAST(expr AS CHAR(1)).
Internally, the BOOLEAN type is stored as a single-byte integer that can take only the values 0 and 1.
Ingres Star, Ingres Replicator, and OpenAPI support the BOOLEAN data type.
The BOOLEAN data type is supported by Ingres connectivity drivers, including ODBC, JDBC, .NET Data Provider, PHP, and Python.
Note: A pre-10.0 client will get an error if it tries to access a 10.0 or higher server result that includes a BOOLEAN column.
This feature adds to or changes the syntax of many statements, including:
- ALTER TABLE
- COPY TABLE
- CREATE INTEGRITY
- CREATE TABLE
- CREATE TABLE…AS SELECT
- DECLARE GLOBAL TEMPORARY TABLE
- INSERT INTO
- REGISTER TABLE
- WHERE clause of SELECT, DELETE, UPDATE
- JOIN source ON search_condition
The CREATE INDEX statement allows an index to be created on BOOLEAN columns.
Support of the BOOLEAN data type helps migrations from other database products.
Here are examples of using the BOOLEAN data type when creating a table or procedure:
CREATE TABLE example (column1 BOOLEAN NOT NULL);
CREATE PROCEDURE example_proc (flag BOOLEAN NOT NULL) AS
DECLARE
    var1 BOOLEAN;
BEGIN
    ...
END;
Here is an example of using the literals FALSE and TRUE in an SQL context:
INSERT INTO example VALUES (FALSE);
UPDATE example SET column1 = TRUE;
SELECT * FROM example WHERE column1 IS TRUE;
...
var1 = TRUE;
WHILE var1 IS NOT FALSE ...
Connectivity Guide Updates
The Ingres Connectivity Guide for Ingres 10, chapter "Understanding ODBC Connectivity" will be updated to reflect support for the Boolean data type.
The Ingres ODBC driver supports SQL_C_BIT and SQL_BIT for BOOLEAN data types. Use unsigned char (UCHAR) to define Boolean fields. Also acceptable are char, CHAR, or SCHAR.
The Ingres Connectivity Guide for Ingres 10, chapter "Understanding .NET Data Provider Connectivity" will be updated to add the IngresType.IngresDate enumeration value to the Data Types Mapping Table.
The following information will be added to the Data Types Mapping Table:
| IngresType | Ingres Data Type | Description | .NET Data Type |
| Boolean | boolean | Boolean values of true and false | Boolean |
No back doors, promise CERN, Harvard and MIT brains
An e-mail service invented by the researchers from CERN, Harvard and MIT claims to be fully secure even from sophisticated snoopers such as the NSA.
Founded last year in Geneva, Switzerland, ProtonMail was built with the intention of creating a better protected e-mail system.
Being Swiss-based, and therefore beyond EU and US jurisdiction, ProtonMail's servers promise users a legal force shield for privacy.
"We use only the most secure implementations of AES, RSA, along with OpenPGP. Furthermore, all of the cryptographic libraries we use are open source. By using open source libraries, we can guarantee that none of the encryption tools we are using have clandestinely built in back doors. We are constantly consulting security experts including IT scientists at CERN (the European Organization for Nuclear Research)," ProtonMail’s website said.
Jason Stockman, co-founder of the Swiss-based e-mail service asks end-users not to lose sleep over the ironclad security details.
The symmetric encryption incorporated in the system ensures that mail can be sent, without much fretting, to users of less protected e-mail services. When a non-ProtonMail user receives an encrypted message, a link is sent along with it to be loaded into the browser, where the message is decrypted with the help of a password shared by the sender of the mail.
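A minimal sketch of the idea behind that last step, symmetric encryption under a key derived from a shared password, might look like the following in Python with the third-party cryptography package. This illustrates the general technique only; it is not ProtonMail's actual implementation:

    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password: str, salt: bytes) -> bytes:
        # Stretch the shared password into a 256-bit key.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    salt = os.urandom(16)
    token = Fernet(key_from_password("shared secret", salt)).encrypt(b"hello")

    # The recipient, given the salt and the token, needs only the password:
    plain = Fernet(key_from_password("shared secret", salt)).decrypt(token)
    assert plain == b"hello"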
Head-of-line blocking (HOL blocking) in networking is a performance issue that occurs when a whole queue of packets is held up by the first packet in line. It can happen especially in input-buffered network switches, where out-of-order delivery of packets can occur. A switch can be composed of input buffered ports, output buffered ports and switch fabric.

When first-in first-out input buffers are used, only the packet at the head of the queue is eligible to be forwarded. If that first packet cannot be forwarded, every packet queued behind it must wait too, even when their own output ports are free. That is basically what HOL blocking is.
As time goes by, networks with more and more virtualised servers and other devices are becoming ever more complicated, and overlay technologies are rising to save the day for network administrators.

Virtual Extensible LAN (VXLAN) is an encapsulation technology used to run an overlay network on top of an existing Layer 3 network. An overlay network is a virtual network that is set up on top of the current Layer 2 network and uses additional Layer 3 technologies to support flexible computer architectures. VXLAN makes it easy for network engineers to scale out a cloud computing environment while logically separating cloud applications and tenants. A cloud computing environment is multitenant by definition: every tenant needs its own separately configured logical network, which in turn needs its own network ID, or identification.
What the hell does that mean?

What is this VXLAN actually doing? To put it simply, VXLAN can create a logical network that connects your virtual machines across different networks. It enables us to build a Layer 2 network for our VMs on top of our Layer 3 network, which is why VXLAN is called an overlay technology. In a "normal" network, if a virtual machine needs to reach another virtual machine on a different subnet, you need a Layer 3 router to make the connection between networks. With VXLAN we can use a VXLAN gateway of some sort to connect them without ever exiting into the physical network.
Image: VXLAN frame – taken from blog.cisco.com website
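To make the encapsulation concrete, here is a hedged sketch using the Scapy packet library in Python (recent versions include a VXLAN layer). The addresses and the VNI value are invented; UDP port 4789 is the IANA-assigned VXLAN port:

    from scapy.all import Ether, IP, UDP, VXLAN

    # Inner frame: what the two VMs believe they are sending on their
    # shared Layer 2 segment.
    inner = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") /
             IP(src="192.168.10.5", dst="192.168.10.6") /
             UDP(sport=12345, dport=80))

    # Outer headers: the VXLAN tunnel between the two physical hosts (VTEPs).
    # The 24-bit VNI identifies the tenant's logical network.
    frame = (Ether() /
             IP(src="10.0.0.1", dst="10.0.0.2") /
             UDP(sport=49152, dport=4789) /
             VXLAN(vni=5000) /
             inner)

    frame.show()  # inspect the layered packet rather than sending it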
Isolating Traffic inside a VLAN Using Private VLANs
In the article VACL – VLAN Access Lists we mentioned one way to provide security on a switch device like a Cisco Catalyst switch. In this article we will see another way of providing security: the use of private VLANs – PVLANs.

The whole idea is to make it possible to group VLANs inside VLANs. You can see from the picture on the right that this gives you the opportunity to make groups of computers or servers inside a main (primary) VLAN. For example, it is possible to have two servers in VLAN 10, both on the same subnet, yet separated from each other into two secondary VLANs, VLAN 4 and VLAN 5.
Killer robots and runaway nanotechnology will be taken seriously by leaders of a new institution at Cambridge University. The Centre for the Study of Existential Risk will examine sometimes fantastical threats that could wipe out the human species. Led by Cambridge's philosopher Huw Price, cosmology and astrophysics professor Martin Rees, and Skype co-founder Jaan Tallinn, the center will also focus on events like extreme weather or a meteor striking Earth.
“Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future,” the founders wrote.
Price and Tallinn recently wrote an article outlining the dangers of artificial intelligence. Though the idea has been fodder for science fiction for decades, the academics contended that an explosion of machine intelligence is possible and could have dire consequences. If robots were able to learn to be more intelligent than humans and began creating their own hardware and software, humans would simply be left to sit back and watch as their species died off, they wrote.
“If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival,” they wrote.
As technology advances, trying to maintain privacy on the Internet has become increasingly difficult. Nowadays there are so many different ways to be tracked online. Many everyday activities now involve computers and the Internet. Cell phones, emails, web browsers, search engines, and social media sites are just a few examples of technologies that produce digital footprints as a natural byproduct of using them.
When what we do on the Internet is combined with other data about us, it creates a profile that can be tracked, and therein lies the problem of online privacy. It begins with large data brokers collecting information about us and building massive databases to store what they've found. Giant corporations like Google and AT&T also collect data about us, which is stored, analyzed, indexed, and sold as a commodity to data brokers and even shared with the federal government. The brokers, in turn, might sell it to other entities and have built enormous databases of our personal information.
The Department of Homeland Security (DHS) monitors private emails and collects our sensitive information, too. Even local school districts across the United States have begun data mining students and building student databases, despite federal student-privacy protections guaranteed by the Family Educational Rights and Privacy Act. Then these entities share the information with each other; corporations sell it to governments and vice-versa, and companies sell it to other companies, all without our knowledge or consent.
Although described with a rather fatalistic conclusion, security researcher Bruce Schneier touched on these privacy and security issues in a blog post entitled “Our Internet Surveillance State.” He summarized some of the challenges we face as follows:
In today's world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect -- occasionally demanding that they collect more and save it longer -- to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they're not going to give up their positions of power, despite what the people want.
At some point the lack of better privacy laws could present a very real threat. As senior security consultant Paul Hill told The New York Times, "They may have the best of intentions now, but who knows what they will look like 20 years from now, and by then it will be too late to take it all back."
The good news is there are measures we can take to protect our privacy. Maintaining privacy on the Internet is an important layer of security and keeps your digital life hidden from others. So we’ve put together some useful tips that may help you avoid being tracked on the Internet.
Use a Proxy Server
All computers participating in a network that uses the Internet Protocol for communication are assigned a numerical label called an Internet Protocol address (IP address). One function of an IP address is for identification purposes. So long as your Internet use can be tied to your unique IP address, this information can be used to track you.
Hiding your IP address is possible by connecting to a Virtual Private Network (VPN). These services route your data stream to a proxy server, which obscures your identity from websites and your Internet service provider by removing your IP address before the data is sent to its destination. Additionally, these services encrypt data traveling to and from their servers so it appears like random bits to anyone who would be monitoring wireless networks in public locations.
Tor is a free anonymity network, not strictly a VPN, that will help you maintain privacy on the Internet by routing your traffic through a series of relays. Take advantage of Tor to prevent others from discovering your location or browsing habits.
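As an illustration, once a local Tor client is running, applications can send traffic through its SOCKS proxy, which listens on port 9050 by default. A hedged Python sketch using the third-party requests library (installed with its SOCKS extra, requests[socks]) might look like this:

    import requests

    # Route the request through a locally running Tor client.
    # "socks5h" resolves DNS through the proxy too, which avoids DNS leaks.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }
    resp = requests.get("https://check.torproject.org/",
                        proxies=proxies, timeout=30)
    print(resp.status_code)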
Encrypt Confidential Data
Losing confidential data during the exchange of messages and files between computers is a risk many of us worry about. Luckily, there are solutions to help keep this information hidden. One tool in particular, Pretty Good Privacy (PGP) software, exists to promote awareness of privacy issues and make the job of encrypting and signing data files easy to accomplish. PGP is considered by many the de-facto standard for email encryption today, with millions of users worldwide.
Sending messages using PGP software will help you maintain privacy on the Internet. You can download and install PGP software from the International PGP homepage.
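For instance, assuming GnuPG is installed and the recipient's public key has already been imported into the local keyring, the third-party python-gnupg wrapper can encrypt a message in a few lines. This is a sketch of the general workflow, not a complete mail integration:

    import gnupg

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    # Encrypt to a recipient whose public key we imported earlier.
    encrypted = gpg.encrypt("Meet at noon.", recipients=["friend@example.com"])
    assert encrypted.ok, encrypted.status

    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into e-mail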
Tweak Your Social Media Privacy Settings
If you're aiming to be more conscious of your Internet privacy, the default settings for your social media profiles could use some tweaking. We've previously written about some ways to improve your privacy online with a list of Facebook privacy settings that you should pay close attention to, and also mentioned how to enable Twitter's Do Not Track settings. In addition to updating your social media privacy settings, it’s a good idea to log off sites like Facebook and Twitter whenever you’re not using them.
Block Third-Party Cookies
Web browsers are beginning to include features that allow you to block third-party tracking, which has gained traction in recent years. Do Not Track settings are available in Safari, Firefox, Google Chrome, Opera, and other browsers. For details on blocking third-party cookies using your web browser, head over to this article that explains what Do Not Track is, and why you should care.
Block Third-Party Flash Content
Most people are aware that their web browser stores cookies. However, most people are not aware that Flash Player has its own method of using cookies, and that the information in these cookies can be shared among websites. Head over to this web page, which displays your Flash security and privacy settings, and make some changes to keep these cookies off your computer. Uncheck “Allow third-party Flash content to store data on your computer” to block these cookies from being stored. (Note: the downside to unchecking this setting is that it may prevent Flash content from playing on some websites.)
When it comes to maintaining privacy and security on the Internet, most of us want balance and transparency. Only in rare cases would people want their entire computer so locked it becomes a hassle to execute normal tasks. Other users may be fine with sharing some information with services they trust. But while we wait for better privacy laws to catch up with new advances in technology and protect us against intrusive data mining practices, use these tips to avoid letting your data get in the wrong hands.
What concerns you most about maintaining privacy on the Internet? Are there other privacy tools or security tips that you would recommend to others? Share your comments below!
7.15 What are covert channels?
Covert communication channels (also called subliminal channels) are often motivated as being solutions to the "prisoners' problem." Consider two prisoners in separate cells who want to exchange messages, but must do so through the warden, who demands full view of the messages (that is, no encryption). A covert channel enables the prisoners to exchange secret information through messages that appear to be innocuous. A covert channel requires prior agreement on the part of the prisoners. For example, if an odd-length word corresponds to "1" and an even-length word corresponds to "0", then the previous sentence contains the subliminal message "101011010011".
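The word-length convention in that example is easy to mechanize. Here is a small Python sketch of a decoder for the scheme exactly as described; the cover text passed in is arbitrary:

    def decode_parity_channel(cover_text: str) -> str:
        # Odd-length word -> "1", even-length word -> "0", per prior agreement.
        words = (w.strip(".,;:!?'\"()") for w in cover_text.split())
        return "".join("1" if len(w) % 2 else "0" for w in words if w)

    hidden = decode_parity_channel(
        "A covert channel requires prior agreement on the part of the prisoners.")
    print(hidden)  # 101011010011, matching the example above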
An important use of covert channels is in digital signatures. If such signatures are used, a prisoner can both authenticate the message and extract the subliminal message. Gustavus Simmons [Sim93a] devised a way to embed a subliminal channel in DSA (see Section 3.4) that uses all of the available bits (that is, those not being used for the security of the signature), but requires the recipient to have the sender's secret key. Such a scheme is called broadband and has the drawback that the recipient is able to forge the sender's signature. Simmons [Sim93b] also devised schemes that use fewer of the available bits for a subliminal channel (called narrowband schemes) but do not require the recipient to have the sender's secret key.
Classless routing uses a mask, or subnet mask, to define the point where the network portion stops and the host portion starts. The default subnet mask for Class A is 255.0.0.0, which says I have eight bits; the first octet is where the network portion stops. The mask can be defined two ways. One is by spelling it out just as we did here, 255.0.0.0; the other is a shorthand called CIDR notation, which just gives you the number of ones in the mask (/8). Either way works just fine.
Looking at the actual values of the mask, notice on the left that the lowest number we could have in any one of those octets would be 0. Keeping in mind it’s a series of 1’s starting from the left, the very next number we could have is 128.
If we wanted a 24-bit mask, it would look like this:

255.255.255.0 (/24)

That's the default for a Class C address. The mask tells us how far over to look to separate the network bits from the host bits.
How do we determine the size of a network? The size of the network is determined by the number of host bits. The more host bits you have, the larger your network will be. If you have one host bit, it could be either a 0 or a 1. If we had two bits, we could have 00, 01, 10, or 11, so we have four combinations with two bits. The size is determined by 2 raised to the number of host bits. If we have 2 bits, that's 2^2, or 4 possible combinations. If we have 8 host bits, it's 2^8, which equals 256, and that would be the size of that network.
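As a quick sketch of that arithmetic in Python:

    def network_size(prefix_len: int) -> int:
        # IPv4 has 32 bits total; whatever the mask doesn't claim is host bits.
        host_bits = 32 - prefix_len
        return 2 ** host_bits

    print(network_size(24))  # 256 addresses in a /24 (Class C default)
    print(network_size(8))   # 16,777,216 addresses in a /8 (Class A default)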
Guest Blogger: Jill Liles
Confusing Convenience with Security: SSH Keys
Secure Shell (SSH) keys are a common part of accessing Unix systems. If you’re at all concerned about your privileged passwords and are unaware of what’s going on in your Unix systems, you need to put some focus specifically on your organization’s use of SSH keys.
SSH keys provide access to Unix servers by means of a public/private key pair and an associated pass phrase. The IT pro has the private key and pass phrase, with the server hosting the public key. Both the key and pass phrase are required to facilitate access to the Unix server, providing what can be considered a very secure method of access.
So, what’s wrong with SSH Keys?
If they’re used correctly, nothing at all. So, let’s look at some basic security concepts and apply them to SSH keys to identify how they should be used, and then compare that with how they’re actually used to see if security is really being maintained.
First off, if your goal is to authenticate an individual to allow access to a given Unix server, the password (or pass phrase, in this case) should only be known by one person. Second, if using some method of two-factor authentication, the password (which is something you know) should be accompanied by something you have which, in this case, is the private key – again, by only one person.
Here's where SSH keys as a security mechanism start to fall apart. An SSH key isn't tied to an individual user outside the Unix server; only to an account on the Unix server. So that means literally anyone with the key who knows the pass phrase can access the server!
Additionally, it’s commonplace for organizations to use the same pass phrase with multiple keys (with each key granting access to a different server). So, now if you have half the puzzle (the pass phrase) you only need to get access to the key.
To make matters worse, it’s not an easy task to replace keys – it’s largely a manual process, which is why the same pass phrase gets used multiple times and why they’re not often changed.
So look at where your Unix authentication is today: pass phrases that never expire, used by multiple users, on countless keys that grant access to all your Unix servers. You realize that what started out as a very cool method of secure access has merely become a way of conveniently giving insecure access over what was intended to be a secure method of access. You've got trouble.
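One practical first step is simply measuring how widely keys are shared. The sketch below (Python, standard library only; the path layout is a hypothetical single-host example) fingerprints every public key found in users' authorized_keys files and flags any key that grants access to more than one account:

    import glob
    import hashlib
    from collections import defaultdict

    seen = defaultdict(set)  # key fingerprint -> accounts it can log in to

    for path in glob.glob("/home/*/.ssh/authorized_keys"):
        account = path.split("/")[2]  # /home/<account>/.ssh/authorized_keys
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0].startswith(("ssh-", "ecdsa-")):
                    fingerprint = hashlib.sha256(parts[1].encode()).hexdigest()[:16]
                    seen[fingerprint].add(account)

    for fingerprint, accounts in seen.items():
        if len(accounts) > 1:
            print(f"key {fingerprint} is shared across accounts: {sorted(accounts)}")

Run across a fleet of servers, the same idea reveals keys reused between machines, which is usually the first surprise such an audit turns up.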
To learn more on how to properly manage SSH Keys as part of an overall privileged password management strategy, download chapter 3 of the ebook, “Six Critical Capabilities for Password Management”.
Author: Nick Cavalancia
http://news.com.com/2100-1001-912785.html

By Robert Lemos
Staff Writer, CNET News.com
May 14, 2002, 6:05 AM PT

BERKELEY, Calif.--Law enforcement and intelligence agents may have a new tool to read the data displayed on a suspect's computer monitor, even when they can't see the screen.

Markus Kuhn, an associate professor at Cambridge University in England, presented research Monday showing how anybody with a brawny PC, a special light detector and some lab hardware could reconstruct what a person sees on the screen by catching the reflected glow from the monitor.

The results surprised many security researchers gathered here at the Institute of Electrical and Electronics Engineers' (IEEE) Symposium on Security and Privacy because they had assumed that discerning such detail was impossible. "No one even thought about the optical issues" of computer information "leakage," said Fred Cohen, security practitioner in residence for the University of New Haven. "This guy didn't just publish, he blew (the assumptions) apart."

Many intelligence agencies have worried about data leaking from classified computers through telltale radio waves produced by internal devices. And a recent research paper outlined the threat of an adversary reading data from the blinking LED lights on a modem. Kuhn's research adds the glow of a monitor to the list of dangers.

Eavesdropping on a monitor's glow takes advantage of the way that cathode-ray tubes, the technology behind the screen, work. In most computer monitors, a beam of electrons is shot at the inside of the screen, which is covered in various phosphors, causing each pixel to glow red, green or blue, thereby producing an image. The beam scans from side to side, hitting every pixel--more than 786,000 of them at 1024-by-768 resolution--in sequence; the screen is completely scanned anywhere from 60 to 100 times every second.

The light emitted from each pixel of phosphor will peak as the pixel is hit with electrons, creating a pulsating signal that bathes a room. By averaging the signal that reflects from a particular wall over nearly a second and doing some fancy mathematical footwork, Kuhn is able to reconstruct the screen image.

Not so fast

Yet Kuhn, who is still completing his doctoral thesis, is quick to underscore the problems with the system. "At this point, this is a curiosity," he said. "It's not a revolution."

First off, Kuhn performed the experiments in a lab at a short distance--the screen faced a white wall 1 meter away, and the detector was a half meter behind the monitor. There have been no real-world tests where, for example, other light sources are present and the detector is 30 feet across a street.

Other light sources, including the sun, make things much more difficult if not impossible. Normal incandescent lighting, for example, has a lot of red and yellow components and tends to wipe out any reflections of red from the image on a screen. And several countermeasures are effective, including having a room with black walls and using a flat-panel liquid-crystal display. LCD monitors activate a whole horizontal line of pixels at once, making them immune to this type of attack.

Still, other researchers believe that Kuhn may be on to something. "Anyone who has gone for a walk around their neighborhood knows that a lot of people have a flickering blue glow emanating from (their) living rooms and dens," said Joe Loughry, senior software engineer for Lockheed Martin.
While Kuhn calculated that the technique could be used at a range of 50 meters at twilight using a small telescope, a satellite with the appropriate sensors could, theoretically, detect the patterns from orbit, said several security experts. That could open a whole new can of worms for privacy. If Kuhn's technique proves to be practical, the result of the research could be a new round of battles between law enforcement agencies and privacy advocates in the courts over whether capturing the faint blue glow from a home office is a breach of privacy. Until that's resolved, the safest solution is to compute with the lights on. - ISN is currently hosted by Attrition.org To unsubscribe email majordomoat_private with 'unsubscribe isn' in the BODY of the mail.
The Perils of Web Services
In case you don't already know, Web services are modular chunks of functionality that organizations publish and allow trading partners to access. Many of today's popular Web applications use Web services as the behind-the-scenes engine for their more complex functionality.
This raises the troubling question: How do we secure these new interfaces we are developing?
The Big Gun Threats
In order to secure something you need to first understand the threats to which it may be vulnerable. Web services have an interesting threat profile. They are standard pieces of functionality, typically written in .NET or Java, and often connect to file systems and databases like the programs we are accustomed to writing. As a result, Web services are not exempt from the major threats that we concern ourselves with when securing traditional software. Attack vectors like the buffer overflow, SQL injection and other parameter tampering threats also apply to Web services. However, Web services introduce a few more, including:
WSDL Scanning: A WSDL (Web services definition language) document is used to describe the Web service to connecting parties. Our trading partners use these documents to discover what pieces of functionality are available to them and how to format their requests to the Web service. Care needs to be taken when creating and publishing these documents. Often the documents are automatically generated from the code, and functionality not meant to be exposed to outside entities is included in our WSDL. This may allow an attacker unintended access to functionality.
XPath Injection: XPath is a language for querying information from XML documents. Similar to SQL Injection, if user input is not properly sanitized, it is possible for a malicious user to influence the XPath query being run by the software to garner more information than he/she would normally have access to.
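To illustrate, here is a hedged Python sketch using the lxml library; the XML structure and user input are invented. The first query splices user input directly into the XPath expression, while the second binds it as an XPath variable so it can never alter the query's structure:

    from lxml import etree

    doc = etree.fromstring(
        "<users>"
        "<user name='alice' role='admin'/>"
        "<user name='bob' role='guest'/>"
        "</users>"
    )

    user_input = "x' or '1'='1"  # a classic injection attempt

    # Vulnerable: the input is concatenated into the query text.
    unsafe = doc.xpath("//user[@name='" + user_input + "']")
    print(len(unsafe))  # 2 -- the injected 'or' subverted the predicate

    # Safer: the input is bound as an XPath variable, never parsed as syntax.
    safe = doc.xpath("//user[@name=$n]", n=user_input)
    print(len(safe))    # 0 -- treated as a literal (and absent) name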
Recursive Payload: The communication sent back and forth via Web services is all XML based, giving the attacker a new avenue of attack. Knowing that the Web service will need to parse the XML message in order to process the request, an attacker can send a request which contains a huge number of nested opening tags without ever supplying the closing tags. The Web service, when trying to parse this message, will often consume too many system resources or even crash, as it needs to track each open tag until the matching close tag occurs. This can cause a denial of service to the Web service.
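One common mitigation, alongside enforcing message-size and nesting-depth limits at the gateway, is to parse untrusted XML with a hardened parser. A brief sketch with the third-party defusedxml package, which rejects entity-expansion bombs that plain parsers happily process (depth and size limits still need to be enforced separately):

    import defusedxml.ElementTree as safe_et

    def parse_request(raw_xml: bytes):
        # Raises an exception on entity bombs and similar abusive constructs
        # instead of exhausting memory or CPU.
        return safe_et.fromstring(raw_xml)

    tree = parse_request(b"<request><action>getQuote</action></request>")
    print(tree.find("action").text)  # getQuote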
Opening pieces of functionality to third parties is wrought with threats, both old and new. For this reason it is paramount that developers understand these threats and how to protect their applications from potential attack. The biggest roadblock to securing Web services is understanding that it is difficult to do so.
The three tenets of security are confidentiality, integrity and availability (CIA). In the world of Web services, availability is the most straightforward to achieve. Typical attacks against Web services availability would be based on bad data, which is designed to choke the application and cause it to crash. Developers need to define strict rules for their input to act as guidelines for validation. Any and all data is then validated against these rules prior to use by the system. This will help protect against availability attacks. Although protecting the availability of Web services is no simple task, it is much easier than protecting confidentiality and integrity.

With confidentiality, we want to ensure that only the intended audience is able to access information. Integrity means that we know where the data came from and that it has not been altered in transit. For this to occur, we need to have strong authentication that allows the system to validate a true identity, and authorization, which grants access permission to only authorized users. When we attempt to implement these measures in Web services we find ourselves falling down a rat hole of acronyms and cobbled pieces which only address part of the issue.
Often the first place security professionals tend to look for help is the WS-Security (Web services security) standard. WS-Security is a proposed standard for dealing specifically with confidentiality and integrity for Web services. I've seen many implementations of Web services which attempt to sprinkle magic SSL/TLS security dust on the problem to make it go away. But using a protocol like HTTPS to transmit the messages between the requestor and the Web service only provides point-to-point security; it does not address security for the message after it reaches the other point. We need so-called end-to-end security, and WS-Security attempts to provide us with that.
WS-Security allows users to attach timestamps to messages to ensure freshness and prevent replay attacks. There is a mechanism included for encrypting messages which provides the needed confidentiality. There is also a mechanism for digitally signing messages, which authenticates the sender and ensures the message has not been tampered with; meeting our integrity requirement. It also allows us to attach security tokens to a message such as username/password or X.509 certificates which can be used for authentication.
While a step in the proverbial right direction, WS-Security does have drawbacks such as performance issues and key management and distribution concerns. The most glaring however, is that it does not provide any authorization to know if the requestor has access to the information and functionality they are requesting. For this we can link in SAML (security assertion markup language), turn to XACML (eXtensible access control markup language); or use both.
SAML and XACML attempt to provide a means to create access control policies that can be enforced by the system. This allows restricted access to certain data and functionality based on a requestors identity. Both SAML and XACML can create policies which describe proper access controls for data and operations. The problem is that these access control policies are not easy to create, understand or manage. It is also difficult to determine which language to use.
As inferred from the challenges mentioned above, securing a Web service is a daunting task. Unfortunately, there is little help for developers in determining how best to integrate these components into their programs. Developers are left to flail about, hoping to stumble upon secure implementations. The typical response I've seen is to either delegate security to the network appliances or to ignore it altogether. Neither of these options presents a desirable situation. As a security community we have to make it easier for developers to create secure code. It is our responsibility to shed light on the issue and not to leave them alone in the dark.
John Carmichael leverages his strong lab development, programming and security process skills to deliver secure software development training courses to some of the world's largest organizations including Adobe, EMC and MassMutual. Prior to joining Security Innovation, John was a systems analyst who led various Web development labs and product training for both technical and non-technical audiences.
Is Smart Grid coming soon to your neighborhood? Don’t count on it. With respect to consumer Smart Meters, only about 25% of the installed base has been replaced or approved for replacement.
Smart Grid’s fundamental concept is that the addition of digital technologies to the electric utility distribution system will permit the monitoring, analysis, control and communications required to maximize throughput while reducing consumption. This will enable utilities to distribute electricity as efficiently as possible and allow end users to consume electricity as economically as possible.
While the benefits of Smart Grid are well understood and the technologies to implement the Smart Grid vision have been developed and are rapidly maturing, deployments to date are limited. Let’s be clear, however: Smart Grid is a global market where some countries are moving much faster than the USA and where global manufacturers and software companies with sophisticated offerings are staking claims to a share of the market. The worldwide Smart Grid product and services market has been estimated at $69 Billion in 2009 growing to $186 Billion by 2015.
If Smart Grid is such a good thing, why is it not being deployed more rapidly in the USA?
- The Return on Investment is in years or decades, not months – However, in August Black & Veatch showed that Commonwealth Edison (ComEd) customers could save $2.8 Billion on their electric bills over the 20 year life of smart meters, based on the results of a one year pilot.
- Utilities are not early adopters of technology – Try and sell to a utility a product that has not been deployed somewhere else without failing for ten years. The culture of the electric utility industry has been to emphasize reliability over operations optimization.
- The electric utility industry in the USA is highly fragmented – Privately-owned, local-government owned, and co-op utilities operate independently and with a monopoly in their markets.
- Rate increases are controlled by regulatory bodies – Utilities make investment decisions independently but must justify cost recovery to state regulatory agencies before raising rates to cover Smart Grid costs. State regulatory agencies are subject to political forces, primarily the enthusiasm of the consumer for an investment.
- Smart Grid is not entertainment – There is no Steve Jobs selling Smart Grid ‘pods’ to the masses. The average electricity consumer is woefully undereducated about Smart Grid benefits and consequently lukewarm in his or her enthusiasm for rate increases to cover Smart Grid investments.
Industry watchers believe that as deployment of Smart Grid technologies becomes more prevalent, that acceptance and then enthusiasm will accelerate the trend.
DLT Solutions’ vendor partners are actively supporting the Smart Grid trend with products and solutions:
Oracle Utilities Network Management System 1.11, released in August, offers new modeling and analysis features to improve distribution-grid management for electric utilities. These enhancements provide real-time views of grid activity to help align electricity generation with demand.
Understanding potential outcomes under multiple possible conditions, such as storms, extreme temperatures and fluctuations in commercial and consumer usage, helps utilities improve contingency plans to maintain continuous operations through events that previously resulted in outages. Version 1.11 also provides greater flexibility in integrating geographic information systems (GIS).
With v 1.11 utilities can incorporate electricity from renewable resources like solar roofs and backyard windmills, allowing utilities to use local generation to meet renewable resource targets.
The new supervisory control and data acquisition (SCADA) integration options in v 1.11 alleviate reliance on a particular vendor’s SCADA system, allowing utilities the option to retain their current SCADA while moving efficiently and cost effectively into the smart grid era. Oracle delivers the flexibility to change to a new SCADA system without disrupting major smart grid capabilities.
Also last month, Autodesk released two products for use by utilities in their Smart Grid design work. AutoCAD Utility Design allows a designer to create a design based on business rules and share the design as a template with others in the organization. The product ships with more than 8,000 rules, enabling the designer to more rapidly develop standards-compliant design artifacts.
Autodesk Infrastructure Design Suite Premium 2012 offers model-based design, analysis, and visualization capabilities for a comprehensive Building Information Modeling (BIM) solution for infrastructure and utility design. This includes electric and gas transmission lines and distribution networks, power generation facilities, and substation design projects.
Symantec addresses both the data management and cybersecurity issues inherent in Smart Grid deployments. National recognition of cyber threats to critical infrastructure, particularly SCADA systems, requires that utilities address network security within their operations control centers. Symantec solutions for the control center enable network managers to prevent, detect, and respond to threats that would disable the grid.
Smart Grid meter data will require unprecedented data management capability by utilities. With the regulatory requirement for 7 year retention of smart meter data received at 15 minute intervals, it has been estimated that 28 petabytes of storage will be required for a 10 million meter deployment. Protecting the privacy of data while making it available for timely analysis is a challenge that Symantec is addressing.
To ensure that data received from throughout the electricity distribution system is trusted, encryption is desirable. With Verisign User Authentication Symantec offers a set of solutions, including managed services, that addresses this need.
Energy providers need to automate the management of endpoints to ensure that systems and devices have the latest security patches without a labor intensive manual update. Symantec solutions provide antivirus, white listing, hardening, and patch management facilities to accomplish this critical task.
Power Analytics (formerly EDSA)
Power Analytics’ Paladin SmartGrid software platform is designed specifically for on-line management and control of “hybrid” power infrastructures, integrating traditional utility power with on-site power generation. The platform optimizes energy consumption in multi-energy-source sites, whether they are focused on minimizing annual costs, carbon footprint, peak load or public consumption.
As more organizations move to supplement their utility power with on-premise power generation (including solar power, wind turbines, and battery storage), Paladin SmartGrid serves as a “master controller” for intelligent grid design, monitoring real-time power quality, utilization and capacity; monitoring transactions between public electric service and micro grid infrastructure; and maintaining rates and pricing information for managing private-public exchange. | <urn:uuid:e5c6bc40-5a82-4c46-8c39-53268deb46cf> | CC-MAIN-2017-04 | http://blogs.dlt.com/smart-grid-mirage-reality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00576-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925044 | 1,292 | 2.515625 | 3 |
Two types of engineering information are widely used in highway departments: (1) video images collected through photo or video logging and (2) tabulated site data. The video images provide visual information for pavement management, highway signing and marking improvement, and accident analysis. The tabulated site data contains information on construction and rehabilitation history, pavement layer information, pavement width and type, average daily traffic, accident history and signing, and marking inventory.
Currently, analog-based video information is limited
in accessibility and usability. Simultaneous and synchronized access to both visual information and tabulated data is presently not possible. The capability of multiple-use access cannot be provided by existing systems. The analog nature of the video signals also presents difficulties in integrating the visual information with other types of data.
In 1995, recognizing the shortcomings of current photo logging systems and the potential of new technologies, the Arkansas State Highway and Transportation Department initiated a research project to develop a full-digital, networkable and MultiMedia-based Highway Information System (MMHIS). The Civil Engineering Department of the University of Arkansas, Fayetteville, received the contract to conduct this research at the Intergraph Transportation Lab, donated by Intergraph Corp. Reasearch support also came from the Mack-Blackwell Transportation Center, founded by a grant from the U.S. Department of Transportation and located at the university.
MMHIS utilizes advanced technologies in digital video, computer networking and video server to combine video and tabulated site data into a comprehensive information source. MMHIS also provides full-motion digital video at 30 frames per second and synchronized site data, instead of the still images provided by previous systems.
The highway video is presented along with the corresponding site data. Data sets shown in the table above the video present location and accurate information on the road. There are also two windows showing two graphs containing roughness and rutting data. Dynamic graphing is built into the two graphing windows, allowing the user to view the values of various attributes, such as roughness and rutting with curves, showing values for the entire road section. A vertical bar on the curves indicates the location of the road.
The following operations can be conducted with MMHIS:
Running the video. While the video is playing, information in the site data window will change accordingly. The video's play speed, the video window's size and many other factors can also be configured through the menu button next to the play/stop button.
Dragging the video to a new location of the highway. Use the mouse to drag the slider on the slider bar at the bottom of the video window.
Changing the data update rate for site data table. The fastest data update rate is every 25 meters. The actual displaying rate and quality of video motion is limited by the machine speed and the distance spacing among adjacent records in the database. Currently the database contains records for every 25 meters.
Opening another query. The user can open multiple video windows simultaneously, and windows of each query can be resized and repositioned. The system allows the running of one query's video while all the other queries' videos are frozen.
Multiple query in MMHIS. The system allows users to choose which way to proceed at intersections or exit ramps. When the vehicle approaches an intersection or an exit ramp, the highway video pauses and MMHIS displays arrows to show possible turning movements. Users can click on one of the arrows to make turns. If none of the arrows is clicked for a certain time (10 seconds), the system will take the default option of the "through movement" and the video continues.
User selectable turning movements in MMHIS hardware and software environment. MMHIS runs on IBM compatibles with an operating system of Microsoft Windows NT version 3.51 or later.
The current system also uses motion JPEG format to store and replay the highway-section videos. An MMHIS-capable computer requires a motion JPEG encoding/decoding board to work with the video files. Engineering site data are stored in Microsoft Access Version 7 format. Microsoft's Visual C++ version 4.2 is used to develop the main operating environment of the MMHIS. Open Database Connectivity drivers are used as the database interface for this system.
MMHIS provides unprecedented multimedia data viewing capabilities to highway engineers. It allows a highway agency to efficiently examine road and roadside structures without taking certain field trips. In addition, MMHIS is effective in communicating design and improvement ideas among engineers and managers and to the general public.
Additional work underway includes:
Applying more advanced video techniques to reduce the storage requirement and improving video quality, such as MPEG-2.
Further enhancing the dynamic graphing capability by providing space-time 3-D zooming functions.
Studying approaches to using 3-D terrain visualization techniques to display statewide terrain surface, so that MMHIS queries will be readily conducted on a 3-D surface map in a GIS environment.
Kelvin C.P. Wang, Robert P. Elliot and Xuyang Li are in the Department of Civil Engineering at the University of Arkansas.
PROBLEM/SITUATION: Integrating video into a
highway information system.
SOLUTION: Multimedia system which uses digital video.
JURISDICTIONS: Arkansas State Highway and Transportation Department; University of Arkansas.
VENDORS: Intergraph, Microsoft.
[ June Table of Contents] | <urn:uuid:09d03314-5b70-43f2-93ba-03d7dd52e196> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Full-Digital-Blacktop.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00208-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888451 | 1,117 | 2.796875 | 3 |
What’s inside a cloud?
Imagine you could easily pick out, which cloud telephony vendors you needed to speak to according to your hardware or software requirements, simply because you understood how the cloud was structured and the prime benefit each part presented.
It’s not impossible…
A previous blog gave a brief definition of cloud computing. The next question to tackle is “What actually makes up the cloud”. Once you have an understanding of cloud structure, it will be far easier for you to absorb, which cloud telephony players you need to talk to in order to replace different ‘in-house’ or ‘on-premise’ systems and equipment.
A good way to view the cloud is to think of it being comprised of three distinct layers, each of which are essentially service levels – independently playing the role of the hardware and/or software you already have installed.
1. Central to any cloud is Infrastructure-as-a-Service (IaaS). This relates to the physical infrastructure, which as far as end-users (perhaps you) are concerned, is abstracted to provide storage, networking and compute resources. Examples would be Amazon EC2 and Rackspace
2. A second level is Platform-as-a-Service (PaaS), which could be an operating system or computer language interpreter that enables bespoke applications to be written and deployed. Example would be Google Voice, Aculab Cloud
It may help you to better understand each of these by exploring their role in relation to the benefits they provide. | <urn:uuid:b6c45ae8-1e6d-47ab-a0d8-6e9f46620123> | CC-MAIN-2017-04 | http://blog.aculab.com/2011/05/chapter-1-what-you-need-to-know-about.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00382-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961038 | 321 | 3.234375 | 3 |
Fault Tree Analysis Made Easy
If you are ITIL certified, you’ve heard of Fault Tree Analysis, or FTA. But if you’re like most, you probably have no idea how to actually perform or use FTA!
Simply put, FTA is a method for discovering the root causes of failures or potential failures. FTA then helps you understand how to fix or prevent the failure.
FTA is an analysis that starts with a top-level event, like a service outage. You then work it downward to evaluate all the contributing faults, and the causes of faults, that may ultimately lead (or have led) to its occurrence. You use the fault tree diagram to identify countermeasures to eliminate the causes of the failure.
FTA requires nothing more complex than paper, pencil, and an understanding of the service at hand. You will need accurate Configuration Information (CI) contextual information in order to get the most value from FTA. The following 6 simple steps can help you resolve tough design issues or problems quickly and easily.
- Select a top level event for analysis. Try to be specific, for example, “Email server down for more than 4 hours.” Sources of top level events include:
- Problem/Known Error Records
- Service Outage Analysis (SOA)
- Potential failures from brainstorming and a Technical Observation Post (TOP)
- “What-if” scenarios based on Service Level Agreements, etc.
- Identify faults that could lead to the top level event. Continuing the above example, some possible faults leading to an outage lasting more than 4 hours might be “loss of power,” another might be “hardware failure.” List all the faults under the top-level event in boxes and connect the fault boxes to the top-level event box by drawing lines.
- For each fault, list as many causes as possible in boxes below the related fault. Continuing the example above, in the case of “loss of power,” some causes might be “electrical outage,” “power supply failure,” and so on. Connect the boxes to the appropriate fault box.
- Two logic operators – And and Or, also known as logic gates – are used to represent the sequencing of faults and causes. For example, “Email server down for more than 4 hours” could be caused by “loss of power” Or “hardware fault.” Another might be “loss of building power” And “battery backup exhausted.” Update faults and causes by grouping logically related items using And or Or between faults and events; and faults and causes. Re-draw the lines from top-level event to logic gates to faults to logic gates to causes. The result is a graphical fault tree diagram as follows:
- Continue identifying causes for each fault until you reach a root cause, or one that you can do something about. For example, the root cause of “power supply failure” might be “filter clogged”; the root cause of “battery backup exhausted” might be “battery backup too small.”
- A root cause is one you can do something about; so now you need to think of the countermeasures you might apply to each root cause. List countermeasures for each root cause in a box under the root cause. For example, for “filter clogged,” a countermeasure might be “clean filter monthly.” Link the countermeasure to the root cause by drawing a line.
And that's it! Now you have a fault tree! Fault trees show how an event can occur, and what you can do about it from a design or change perspective. For Problems, you also have a possible root cause and a solution!
As you see, FTA is very simple. Don’t let its simplicity fool you, however. If you want to get fancy, you can play with probability statistics to try to get even more precise – determining the “chance” that a fault or cause could occur. Very precise calculations are possible. But even if you do not get fancy, you will have taken a powerful step toward preventing problems in the first place, or resolving tough problems. Often the act of creating a fault tree generates excellent ideas and possible solutions where before there were none.
FTA can be used by Technical Observation Post (TOP) teams, Problem Managers, Availability Managers, and even IT Service Continuity Management teams with a minimum of training. The graphical nature of FTA makes it easy to understand and easy to maintain in the face of Changes.
All in all, FTA is a powerful tool if you are trying to “Do IT Yourself.” | <urn:uuid:71f9ba39-c558-49e1-b855-989f56fe0cfc> | CC-MAIN-2017-04 | http://www.itsmsolutions.com/newsletters/DITYvol4iss47.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00108-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921873 | 989 | 3.078125 | 3 |
2.3.5 What improvements are likely in factoring capability?
Factoring (see Question 2.3.3) has become easier over the last 15 years for three reasons: computer hardware has become more powerful, computers have become more plentiful and inexpensive, and better factoring algorithms have emerged.
Hardware improvement will continue inexorably, but it is important to realize hardware improvements make the RSA cryptosystem more secure, not less. This is because a hardware improvement that allows an attacker to factor a number two digits longer than before will at the same time allow a legitimate RSA algorithm user to use a key dozens of digits longer than before. Therefore, although the hardware improvement does help the attacker, it helps the legitimate user much more. However, there is a danger that in the future factoring will take place using faster machines than are currently available, and these machines may be used to attack RSA cryptosystem keys generated in the past. In this scenario, the attacker alone benefits from the hardware improvement. This consideration argues for using a larger key size today than one might otherwise consider warranted. It also argues for replacing one's key with a longer key every few years, in order to take advantage of the extra security offered by hardware improvements. This point holds for other public-key systems as well.
Recently, the number of computers has increased dramatically. While the computers have become steadily more powerful, the increase in their power has not compared to their increase in number. Since some factoring algorithms can be done with multiple computers working together, the more computers devoted to a problem, the faster the problem can be solved. Unlike the hardware improvement factor, prevalence of computers does not make the RSA cryptosystem more secure
Better factoring algorithms have been more help to the attacker than have hardware improvements. As the RSA cryptosystem and cryptography in general have attracted much attention, so has the factoring problem, and many researchers have found new factoring methods or improved upon others. This has made factoring easier for numbers of any size, irrespective of the speed of the hardware. However, factoring is still a very difficult problem.
Increasing the key size can offset any decrease in security due to algorithm improvements. In fact, between general computer hardware improvements and special-purpose hardware improvements, increases in key size (maintaining a constant speed of RSA algorithm operations) have kept pace or exceeded increases in algorithm efficiency, resulting in no net loss of security. As long as hardware continues to improve at a faster rate than the rate at which the complexity of factoring algorithms decreases, the security of the RSA cryptosystem will increase, assuming users regularly increase their key sizes by appropriate amounts. The open question is how much faster factoring algorithms can get; there could be some intrinsic limit to factoring speed, but this limit remains unknown. However, if an "easy" solution to the factoring problem can be found, the associated increase in key sizes will render the RSA system impractical.
Factoring is widely believed to be a hard problem (see Question 2.3.1), but this has not yet been proven. Therefore, there remains a possibility that an easy factoring algorithm will be discovered. This development, which could seriously weaken the RSA cryptosystem, would be highly surprising and the possibility is considered remote by the researchers most active in factoring research
There is also the possibility someone will prove factoring is difficult. Such a development, while unexpected at the current state of theoretical factoring research, would guarantee the security of the RSA cryptosystem beyond a certain key size.
Even if no breakthroughs are discovered in factoring algorithms, both factoring and discrete logarithm problems (see Question 2.3.7) can be solved efficiently on a quantum computer (see Question 7.17) if one is ever developed. | <urn:uuid:8ba017a3-79ad-4b9f-a767-0d95cd8ea2c9> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-improvements-are-likely-in-factoring-capability.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00502-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950092 | 777 | 3.0625 | 3 |
126.96.36.199 What is Cipher Block Chaining Mode?
In CBC mode (see Figure 2.3), each plaintext block is XORed with the previous ciphertext block and then encrypted. An initialization vector c0 is used as a "seed" for the process.
CBC mode is as secure as the underlying block cipher against standard attacks. In addition, any patterns in the plaintext are concealed by the XORing of the previous ciphertext block with the plaintext block. Note also that the plaintext cannot be directly manipulated except by removal of blocks from the beginning or the end of the ciphertext. The initialization vector should be different for any two messages encrypted with the same key and is preferably randomly chosen. It does not have to be encrypted and it can be transmitted with (or considered as the first part of) the ciphertext. However, consider the vulnerability described in Question 188.8.131.52.
The speed of encryption is identical to that of the block cipher, but the encryption process cannot be easily parallelized, although the decryption process can be.
PCBC (Propagating Cipher Block Chaining) mode is a variation on the CBC mode of operation and is designed to extend or propagate a single bit error in the ciphertext. This allows errors in transmission to be captured and the resultant plaintext to be rejected. The method of encryption is given by
ci = Ek(ci-1 Åmi-1 Åmi)
and decryption is achieved by computing
mi = ci-1 Åmi-1 ÅDk(ci).
There is a flaw in PCBC [Koh90], which may serve as an instructive example on cryptanalysis (see Section 2.4) of block ciphers. If two ciphertext blocks ci-2 and ci-1 are swapped, then the result of the ith step in the decryption still yields the correct plaintext block. More precisely, by (2.1) we have
mi = Dk(ci) Å(ci-1 ÅDk(ci-1)) Å(ci-2ÅDk(ci-2)) Åci-3 Åmi-3.
As a consequence, swapping two consecutive ciphertext blocks (or, more general, scrambling k consecutive ciphertext blocks) does not affect anything but the decryption of the corresponding plaintext blocks. Though the practical consequences of this flaw are not obvious, PCBC was replaced by CBC mode in Kerberos version 5. In fact, the mode has not been formally published as a federal or national standard. | <urn:uuid:df430165-e730-4266-b925-2ccb1f537336> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-cipher-block-chaining-mode.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00226-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918152 | 545 | 3.65625 | 4 |
The Smart Way to Safer Hospitals
Hospitals in Scandinavia were early adopters of this technology, and Germany has recently issued healthcare smart cards to its entire 80 million-strong population. In the UK, many hospitals are now waking up to the benefits of using contactless smart cards to control physical access to their buildings and logical access to the IT systems that house confidential patient data.
"So as well as safeguarding the security of patients' personal information, using a smart card for logical access can also create efficiencies in terms of time.”
- Holly Sacks
- HID Global
In the past, it was relatively easy for an intruder to walk unchallenged around a hospital, accessing areas meant only for authorised staff. In rare cases, this led to security breaches where babies were removed from paediatric wards. Contactless smart cards are addressing this physical access problem by using encryption to offer differing levels of building access to certain staff. For example, a cardio-thoracic surgeon would require access to the operating theatre, while a registrar might need access to all the wards in the hospital.
Medical professionals can also use their smart card to access sensitive patient data on a network. So as well as safeguarding the security of patients' personal information, using a smart card for logical access can also create efficiencies in terms of time. If a doctor can access crucial IT systems with just a smart card, this saves on time wasted in remembering and entering usernames and passwords and frees up more time for patient care. It also helps healthcare professionals to demonstrate that they are storing and managing patient details in a safe and secure way to comply with the Data Protection Act.
Smart cards can come in contact or contactless form, and can offer three levels of security: single, dual or three-factor authentication. With single-factor authentication, using the card on its own will give access to a system or open a door. Dual-factor authentication - the most common level of smart card authentication in UK hospitals - adds on an extra level of security in the form of a PIN code. Three-factor authentication goes a step further, using a PIN and an extra security measure such as a biometric scan. Contactless smart cards are traditionally used for physical access control and are now being adopted for logical access control as well.
One surprising area where this technology is making an impact is infection control – a topic that is never far from the headlines. We've all seen the bottles of antibacterial hand gel that now stand at the doorway to every hospital ward, and no one can have missed the government swine flu posters that landed on every doormat across the country. Just think about a doctor on her morning ward round. In just a few hours, a doctor could see as many as 20 patients on five different wards, accessing different areas of the hospital and different computer systems as she goes. With this many potential touch points, it's easy to see how infection can be spread. Contactless smart cards – where the card is passed in front of a reader device – are playing a key role in limiting this spread of infection. After all, if your pass card never touches the reader, it can't spread germs.
With this many advantages, adopting contactless smart technology seems like a no-brainer. But some hospitals are still using the most basic form of secure access control: the magnetic stripe – or ‘mag-stripe' – card, where magnetic data is stored on the back of the card.
While mag-stripe cards are cheap to produce, they can end up more expensive in terms of maintenance. Magnetic stripe cards come in contact with the reader when inserted, and any debris that collects on the card inevitably ends up inside the reader and on its contact pins. They are also susceptible to magnetic interference and wear and tear: constant swiping through the card reader causes the stripe to deteriorate and eventually fail. This type of card is also very restricted in terms of its data storage capacity compared to that of smart cards, some of which now have up to 164K of memory.
But perhaps their biggest disadvantage is that they are very easy to clone. You can even buy a mag-stripe reader from a high-street store that will let you take data off one of these cards and use it to create an unlimited number of clones.
It's fair to say that the cost of upgrading to contactless smart cards can be a barrier to deployment for some hospitals, where funding priorities can mean that management has to choose between upgrading physical and logical access systems and having another 30 patient beds. On the other hand, is it really possible to put a price on effective infection control or security in a maternity ward?
When you weigh up the costs of contactless smart card technology against the benefits, it can offer outstanding value to the healthcare sector, saving time and money, protecting patients and staff and safeguarding their personal data. Portable and secure, contactless smart cards are fast becoming a valuable tool for safeguarding physical security and guaranteeing the privacy of sensitive electronic information.
HID Global is exhibiting at Infosecurity Europe 2010, the No. 1 industry event in Europe held on 27th – 29th April in its new venue Earl's Court, London. The event provides an unrivalled free education programme, exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk | <urn:uuid:5a2168e2-3e75-4528-9fdc-97444a9faa8c> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/smart-way-safer-hospitals | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00530-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954061 | 1,112 | 2.78125 | 3 |
Referencing cells in an Excel spreadsheet and using the data quickly for calculations automatically is one of the spreadsheet programs greatest abilities. What if your information is held in another worksheet or even another workbook? Can you reference the cells in different sheets or even different files? Yes, indeed you can. In this article, we’ll start with the basics and you’ll find the formulas needed to do just that, screenshots demonstrating it, and two excel files to download to help see the formula in action.
A normal reference (utilizing cells within the same sheet), as an example, for cell D3 to be the product of the contents of cell D1 and the contents of cell D2 would use this formula:
By default, a new workbook starts with 3 worksheets respectively named Sheet1, Sheet2, and Sheet3. When you start using multiple sheets, you should rename the sheets something unique and descriptive and delete any unused sheets. Right-click on the tab of a sheet in the bottom-left corner in order to Rename a sheet to something describing the content or the function of the worksheet. You can also choose to delete a sheet from the right-click menu if you need to remove the sheet from the workbook.
If you need to rename the sheet later, references using the name will automatically be updated, so no need to worry if you change your mind about a name.
Linking to Another Sheet in the Same Workbook
You can follow along with this example by downloading the linking_sheets.xls workbook.
Linking_sheets.xls is a workbook that contains three worksheets inside of it: Main, Triple, and TripledCost. Main contains the Quantity, Cost, and total. Quantity and cost are manually entered while total is calculated by multiplying the quantity by the cost for each row.
The Triple sheet pulls the Quantity from the main sheet for column A. This way if the quantity were to change, you would only have to change it in the main sheet and all your formulas in the other sheets that are derived from that quantity are updated automatically. Column B in this sheet just takes the value in column A of the same row and multiplies it by 3. As you can see in the screenshot, the formula to pull from a cell in another sheet is:
Or, in this example:
The third sheet, TripledCost, pulls the tripled quantity (column B) from the Triple sheet for column A. It then populates column B with the cost from the Main sheet’s column B. Column C is calculated by multiplying column A by column C.
Row 6, Column A:
Row 6, Column B:
Row 6, Column C:
If the quantity or cost were to change, you would only have to update them in the Main sheet and all of the other calculations in the workbook that derive their values from that information would be automatically updated.
Linking to a Sheet in a Different Workbook
Open both Excel files that you want to use. One will be the source and the other will be the destination. The file with the data you want will be the source and the file where you want the data will be the destination. When you have both files open, switch to the View tab and click the button “View Side by Side”. This should automatically arrange the two workbooks next to each other neatly.
Now, go to the cell where you want the data and select it. In the formula for that cell enter ‘=’.
Now you can enter the formula following this formula, assuming the files are in the same directory. If not, you can specify the path between the equals sign (=) and the left square bracket ([).
Or in this example:
An easier and more accurate way is just to click the cell you want in the source worksheet after you entered the equals sign in the previous step. You should see a formula in the same format as the one above fill in the formula bar as a dashed border highlights the source cell. Click Ok and you’ll be all set.
The Fill Handle, in my experience, doesn’t work in the same way for external sheets with automatically incrementing the formula. If somebody knows a fix to this, please explain in the comments. You’ll also want to be careful about renaming files as this will not automatically update workbooks referencing them.
The linking_wkbks.xls file has one sheet. The first column is pulled from the Total Cost column of the TripledCost sheet of the linking_sheets.xls file. Column B is just manually entered numbers and Column C is calculated from Column B minus (-) Column A. | <urn:uuid:b91fb18e-6c14-4fbc-9f3d-c7a11f0266db> | CC-MAIN-2017-04 | https://www.404techsupport.com/2009/11/referencing-cells-in-excel/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00254-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.890281 | 977 | 3.09375 | 3 |
It is fairly common to hear about switch being non-blocking. It’s because almost all switches today are non-blocking. But what that means? When I asked people around me on what exactly non-blocking switch means, they were unable to get to the same conclusion.
I was going through a lot of different internet places and vendor documents before I wrote this here, but, do not hesitate to add something in comments if you have different view on the subject.
Line-rate switch means the same as if you would said wire-speed switch. It basically means that this switch has the forwarding capacity that supports concurrently all ports at full port capacity. It should be true for minimum packet sizes to. Non-blocking switch means the same thing. Non-blocking Switch internal bandwidth can handle all the port bandwidths, at the same time, at full capacity. Sometimes for high end switches non-blocking is also refereed to switch architecture ability to significantly reduce head-of-line blocking (HOL blocking).
Little about speed names
Wire speed or wire rate simply means that you can take two switch ports of the same “speed” and send data between them with no packet loss at maximum port supported rate. Backplane bandwidth is a measure of the internal architecture bandwidth of the switch. It is most often the measure of the total switching capacity of the system internally. Forwarding rate is usually the measure of how many 64-byte packets forwarding engine can process. Is measured in packets-per-second (pps).
Little about diversities in naming the quantities
Speed is always used but does not means anything to precise in the networking world. Speed is mistakenly used to represent bandwidth capacity of a link or application data flow. I think that speed is best defined as cross reference of ration between bandwidth and latency. Bandwidth is a measure of how many data bits can pass in a given interval between two network nodes. Measured in bps (bits per second). For downloading a file with FTP, bandwidth is a concern. You want your data to download fast. Latency in networking is a measure of how long it takes a unit of data put into the network on one end to come out on the other end. Latency is usually measured in milliseconds (ms). Usually applications that strive for low latency are time sensitive apps like VoIP calls apps. For talking over VOIP network, latency is a concern. VoIP packets are small, but you need them to arrive fast. High latency will make delays between speaker speaking and receiver hearing. | <urn:uuid:7ad24bb2-40d2-4de7-bc5d-b0366adcd15b> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2015/what-is-a-non-blocking-switch | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00098-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961098 | 516 | 2.953125 | 3 |
Contractors of large businesses are constantly on the lookout for ways to expand their business, because it is a best practice. Small businesses can also ensure compliance with the best practice of looking for ways to expand. By doing this, small businesses can take advantage of opportunities by adapting to the requirements of the federal government. The main requirement of the Federal Government includes compliance with the North American Industry Classification System (NAICS) Code.
The NAICS Code:
This code was assigned by the Census Bureau as a means of ensuring best practices in business activities. The NAICS codes must be fulfilled when businesses are completing mandatory registration, and before applying for a government contract. Other trade associations, government agencies and regulation boards may assign their unique codes depending on their requirements for best practices. The database of businesses registered with the NAICS code helps contracting officers when they search for suppliers.
The North American Industry Classification System was founded by three organizations to ensure compliance with best practices in marketing for businesses with Federal Government. These were the Office of Management and Budget’s Economic Classification Policy Committee, Statistics Canada, and Mexico’s National Institute of Statistics. In addition, assistance with more information and geography was also used in creating this code with best practices. The classification of businesses using NAICS is by using a six digit code.
The Code Structure
The code structure was designed using best practices. The first two digits of the code represent one of the twenty industry sectors. The third digit is the industry subsector, the fourth is the industry group, and the fifth is the industry itself. The sixth digit represents Mexico, Canada or the United States.
How to Get the Code
Obtaining the code is one of the easies best practices. The NAICS code can be obtained from the NAICS Association or the Census Bureau websites. These are easy to find using the search engines online.
Used of NAICS Code
The NAICS code system was developed for determining the economic status by collecting and publishing statistical data for best practices. Assigning the NAICS code ensures that vendors meet the code requirements and business standard. The vendors must have the code because it allows the best practice of registering for the Central Contractor Registry. It also ensures compliance with policies and regulations governing businesses with Federal Government. Most importantly, the NAICS code is used by contractors for searching for suppliers.
The Federal Government requires NAICS registration in order to facilitate best practices and proper statistical information on commercial businesses in business with them. This is why the NAICS code is an important best practice and must be obtained. | <urn:uuid:b2b75578-30e2-4860-8449-6735a5193950> | CC-MAIN-2017-04 | http://www.best-practice.com/compliance-best-practices/corporate-compliance/federal-government-requirements-for-marketing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00180-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936432 | 517 | 2.609375 | 3 |
Certain types of bacteria in the gut can leverage the immune system to decrease the severity of stroke, according to new research from Weill Cornell Medicine. This finding can help mitigate stroke -- which is the second leading cause of death worldwide. In the study, published March 28 in Nature Medicine, mice received a combination of antibiotics. Two weeks later, the researcher team -- which included collaborators at Memorial Sloan Kettering Cancer Center -- induced the most common type of stroke, called ischemic stroke, in which an obstructed blood vessel prevents blood from reaching the brain. Mice treated with antibiotics experienced a stroke that was about 60 percent smaller than rodents that did not receive the medication. The microbial environment in the gut directed the immune cells there to protect the brain, the investigators said, shielding it from the stroke's full force. "Our experiment shows a new relationship between the brain and the intestine," said Dr. Josef Anrather, the Finbar and Marianne Kenny Research Scholar in Neurology and an associate professor of neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine. "The intestinal microbiota shape stroke outcome, which will impact how the medical community views stroke and defines stroke risk." The findings suggest that modifying the microbiotic makeup of the gut can become an innovative method to prevent stroke. This could be especially useful to high-risk patients, like those undergoing cardiac surgery or those who have multiple obstructed blood vessels in the brain. Further investigation is needed to understand exactly which bacterial components elicited their protective message. However, the researchers do know that the bacteria did not interact with the brain chemically, but rather influenced neural survival by modifying the behavior of immune cells. Immune cells from the gut made their way to the outer coverings of the brain, called the meninges, where they organized and directed a response to the stroke. "One of the most surprising findings was that the immune system made strokes smaller by orchestrating the response from outside the brain, like a conductor who doesn't play an instrument himself but instructs the others, which ultimately creates music," said Dr. Costantino Iadecola, director of the Feil Family Brain and Mind Research Institute and the Anne Parrish Titzell Professor of Neurology at Weill Cornell Medicine. The newfound connection between the gut and the brain holds promising implications for preventing stroke in the future, which the investigators say might be achieved by changing dietary habits in patients or "at risk" individuals. "Dietary intervention is much easier to accomplish than drug use, and it could reach a broad base," Dr. Anrather said. "This is a little far off from the current study -- it's music of the future. But diet has the biggest effect of composition of microbiota, and once beneficial and deleterious species are identified, we can address them with dietary intervention."
Yao Y.,Rockefeller University |
Yao Y.,University of Minnesota |
Norris E.H.,Rockefeller University |
Mason C.E.,New York Medical College |
And 4 more authors.
Nature Communications | Year: 2016
Muscle-resident PDGFRβ+ cells, which include pericytes and PW1 + interstitial cells (PICs), play a dual role in muscular dystrophy. They can either undergo myogenesis to promote muscle regeneration or differentiate into adipocytes and other cells to compromise regeneration. How the differentiation and fate determination of PDGFRβ+ cells are regulated, however, remains unclear. Here, by utilizing a conditional knockout mouse line, we report that PDGFRβ+ cell-derived laminin inhibits their proliferation and adipogenesis, but is indispensable for their myogenesis. In addition, we show that laminin alone is able to partially reverse the muscle dystrophic phenotype in these mice at the molecular, structural and functional levels. Further RNAseq analysis reveals that laminin regulates PDGFRβ+ cell differentiation/fate determination via gpihbp1. These data support a critical role of laminin in the regulation of PDGFRβ+ cell stemness, identify an innovative target for future drug development and may provide an effective treatment for muscular dystrophy. © 2016, Nature Publishing Group. All rights reserved. Source
Kamel H.,Feil Family Brain and Mind Research Institute |
Hunter M.,York College |
Moon Y.P.,York College |
Yaghi S.,York College |
And 8 more authors.
Stroke | Year: 2015
Background and Purpose-Electrocardiographic left atrial abnormality has been associated with stroke independently of atrial fibrillation (AF), suggesting that atrial thromboembolism may occur in the absence of AF. If true, we would expect an association with cryptogenic or cardioembolic stroke rather than noncardioembolic stroke. Methods-We conducted a case-cohort analysis in the Northern Manhattan Study, a prospective cohort study of stroke risk factors. P-wave terminal force in lead V1 was manually measured from baseline ECGs of participants in sinus rhythm who subsequently had ischemic stroke (n=241) and a randomly selected subcohort without stroke (n=798). Weighted Cox proportional hazard models were used to examine the association between P-wave terminal force in lead V1 and stroke etiologic subtypes while adjusting for baseline demographic characteristics, history of AF, heart failure, diabetes mellitus, hypertension, tobacco use, and lipid levels. Results-Mean P-wave terminal force in lead V1 was 4452 (±3368) μV∗ms among stroke cases and 3934 (±2541) μV∗ms in the subcohort. P-wave terminal force in lead V1 was associated with ischemic stroke (adjusted hazard ratio per SD, 1.20; 95% confidence interval, 1.03-1.39) and the composite of cryptogenic or cardioembolic stroke (adjusted hazard ratio per SD, 1.31; 95% confidence interval, 1.08-1.58). There was no definite association with noncardioembolic stroke subtypes (adjusted hazard ratio per SD, 1.14; 95% confidence interval, 0.92-1.40). Results were similar after excluding participants with a history of AF at baseline or new AF during follow-up. Conclusions-ECG-defined left atrial abnormality was associated with incident cryptogenic or cardioembolic stroke independently of the presence of AF, suggesting atrial thromboembolism may occur without recognized AF. © 2015 American Heart Association, Inc. Source
Li S.,New York Medical College |
Labaj P.P.,University of Vienna |
Zumbo P.,New York Medical College |
Sykacek P.,University of Vienna |
And 12 more authors.
Nature Biotechnology | Year: 2014
High-throughput RNA sequencing (RNA-seq) enables comprehensive scans of entire transcriptomes, but best practices for analyzing RNA-seq data have not been fully defined, particularly for data collected with multiple sequencing platforms or at multiple sites. Here we used standardized RNA samples with built-in controls to examine sources of error in large-scale RNA-seq studies and their impact on the detection of differentially expressed genes (DEGs). Analysis of variations in guanine-cytosine content, gene coverage, sequencing error rate and insert size allowed identification of decreased reproducibility across sites. Moreover, commonly used methods for normalization (cqn, EDASeq, RUV2, sva, PEER) varied in their ability to remove these systematic biases, depending on sample complexity and initial data quality. Normalization methods that combine data from genes across sites are strongly recommended to identify and remove site-specific effects and can substantially improve RNA-seq studies. © 2014 Nature America, Inc. Source
Afshinnekoo E.,New York Medical College |
Afshinnekoo E.,Queens College, City University of New York |
Meydan C.,New York Medical College |
Chowdhury S.,New York Medical College |
And 51 more authors.
Cell Systems | Year: 2015
Summary The panoply of microorganisms and other species present in our environment influence human health and disease, especially in cities, but have not been profiled with metagenomics at a city-wide scale. We sequenced DNA from surfaces across the entire New York City (NYC) subway system, the Gowanus Canal, and public parks. Nearly half of the DNA (48%) does not match any known organism; identified organisms spanned 1,688 bacterial, viral, archaeal, and eukaryotic taxa, which were enriched for genera associated with skin (e.g., Acinetobacter). Predicted ancestry of human DNA left on subway surfaces can recapitulate U.S. Census demographic data, and bacterial signatures can match a station's history, such as marine-associated bacteria in a hurricane-flooded station. This baseline metagenomic map of NYC could help long-term disease surveillance, bioterrorism threat mitigation, and health management in the built environment of cities. © 2015 The Authors. Source | <urn:uuid:8f928be3-6b0d-4413-91f3-20a8af53eb65> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/feil-family-brain-and-mind-research-institute-942356/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91706 | 1,903 | 3.46875 | 3 |
IP Basics (e) - Flash
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
As the communications industry transitions to wireless and wireline converged networks to support voice, video, data and mobile services over IP networks, a solid understanding of IP and its role in networking is essential. IP is to data transfer as what a dial tone is to a wireline telephone. A fundamental knowledge of IPv4 and IPv6 networking along with use of VLANs is a must for all telecom professionals. A solid foundation in IP has become a basic job requirement in the carrier world. Starting with a brief history, the course provides a focused basic level introduction to the fundamentals of IP technology. It is a modular introductory course only on IP basics as part of the overall eLearning IP fundamentals curriculum.
This course is intended for those seeking a basic level introduction to the Internet Protocol (IP).
After completing this course, the student will be able to:
• Describe the purpose and structure of an IP address
• Describe network prefix
• Explain the purpose of CIDR Prefix
• Explain the purpose of Subnet Mask
• Describe IP Subnets
• Explain the IP header and its key fields
• Describe broadcasting in IP networks
• Describe multicasting in IP networks
1. IP Address
2. IP Subnets
3. IP Header
4. Multicast and Broadcast | <urn:uuid:5a6de8a7-c1de-4fd3-9032-4579765e4d0e> | CC-MAIN-2017-04 | https://www.awardsolutions.com/portal/elearning/ip-basics-e-flash?destination=elearning-courses | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00329-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.891395 | 314 | 3.65625 | 4 |
There are a million reasons why you might want to regulate the Active Directory under Windows 2000. In this article, I'll discuss some situations in which the default Active Directory permissions might not be appropriate. I'll then go on to explain how to make some security changes. Before we begin
Before we get started, it's important to have a little bit of background about Active Directory. As you're no doubt aware, Active Directory is a database that Windows 2000 uses to maintain various aspects related to the network. For example, all the user accounts are stored in the Active Directory. These accounts contain the traditional features, such as passwords and account policies, all of which are maintained within the Active Directory. However, unlike the Windows NT Security Accounts Manager, Active Directory is also useful from an end-user perspective--the Active Directory can contain a wealth of information about each user. For example, you can specify a user's department, phone number, birthday, or any other information you want people to know. It's possible to use the Active Directory database as a company directory.
Why restrict access?
Because of the type of information the Active Directory stores and can store, you may not want everyone to have access to everything. For example, suppose you use the Active Directory as a company directory. You probably want everyone to be able to read the company directory--but you don't want just anyone to be able to change it. For example, you wouldn't want a user to change another user's phone number. Each user should only have access to change his or her own information.
Likewise, you'll probably want to hide certain fields from most users. For example, you might restrict the home phone number field to managers or to the human resources department.
As I mentioned, the Active Directory's primary purpose is to manage various aspects of the operating system. Of course, this portion of the Active Directory is restricted by default. However, in some situations you may want to grant access to a portion of the system side of the Active Directory to various users. For example, suppose you decide that you want your help desk to be able to reset passwords, but you don't want to give them full administrative access. You can accomplish this by granting them access to a portion of the Active Directory, rather than adding them to the Administrators group or the Account Operators group.
Similarly, in a large company, a department may have a computer-savvy manager who is willing to take responsibility for managing that department's user accounts. Depending on the structure of your Active Directory, you can grant the manager permission to change passwords for his department only. You can also grant permission for that manager to add users to the groups associated with that department. By doing so, you've removed some of the administrative burden from the IT staff without jeopardizing your network's security. Basically, with Active Directory, it's easy to give users control over the aspects that you want them to control without granting them access to anything extra. Setting Active Directory security
Now that we've discussed why you might want to change some of your Active Directory permissions, let's take a look at how to do so. Unfortunately, space limitations prevent me from discussing all the intricacies of Active Directory security in this article. For now, let's look at a method for allowing your help desk staff to reset passwords without granting them excessive permissions. Follow these steps:
- Open the Active Directory Users and Computers tool from the Start|Programs|Administrative Tools menu.
- Select the Users folder and then select the Group command from the Console menu's New menu.
- Create a group called Help Desk. You can make the group domain local, global, or universal, depending on your needs.
- Navigate to Active Directory Users|your domain|Domain Controllers. Right-click on the Domain Controllers object and select the Delegate Control command from the resulting context menu. Doing so will launch the Delegation Of Control Wizard.
- Follow the prompts until you reach the screen that asks you to select a user or group. Select the Help Desk group and continue with the wizard.
- The next screen allows you to delegate common tasks, such as resetting passwords or managing user accounts.
You can use the Delegation Of Control wizard to easily add a permission that allows members of the Help Desk group to reset passwords, without giving the group full administrative privileges. If you need to grant someone authority beyond just the simple tasks listed in the Tasks To Delegate screen, you can select the Create A Custom Task To Delegate radio button and then click Next. Doing so will present you with a series of screen that let you delegate any user right or combination of rights that you can possibly imagine. // Brien M. Posey is an MCSE who works as a freelance writer and as the Director of Information Systems for a national chain of health care facilities. His past experience includes working as a network engineer for the Department of Defense. You can contact him via e-mail at Brien_Posey@xpressions.com. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:75f72d64-8a4c-45ce-800a-412654ef6e82> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/netos/article.php/621651/Active-Directory-Allowing-or-Denying-Access.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00539-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940089 | 1,065 | 2.5625 | 3 |
Biopesticides are an eco-friendly alternative to the synthetic pesticides and offer a unique and innovative approach to the management of weeds, using formulated microbial agents as an active ingredient. Microbes that are used in this approach include fungi, bacteria, viruses and nematodes.
The global biopesticides market is expected to reach USD XX million by 2021, up from USD 2,466.70 million in 2015, witnessing double-digit growth during the forecast period. Currently, the market is highly competitive, and to create a stronghold in the biopesticides market, many major biopesticide firms are pursuing strategies to acquire smaller firms.
The prevalence of chemical or synthetic pesticides in crop protection is expected to continue; however, human, animal and environmental health concerns are also likely to play a key role in driving the growth of the biopesticides market. Several countries are adopting a stringent approach when it comes to imports, with a special focus on regulating the quantity of pesticide residues. As a result, the demand for regulated food safety and quality is increasing, which is another reason for growers to adopt biopesticides in their farming practices. Despite their low toxicity and environmental impact, biopesticides face a major restraint in the form of high costs and low availability. They require frequent reapplication due to their short lifespan, and because of their high specificity, different biopesticides must be used for different pests. This increases costs for growers and, in turn, proves to be a constraint.
The future opportunity for companies lies in the increasing registrations of biopesticides in the various regional markets. The EPA requires fewer data and provides a simpler registration process, which is completed within a year, compared to the three-year tenure for synthetic pesticides. As a result, more companies are expected to take a voluntary step towards increasing their biopesticide portfolio. Also, the increasing awareness about the product advantages among the growers, along with the ease of access, will further drive the market growth.
The market is segmented based on the product type, formulation type, ingredient type, mode of application, crop and non-crop application and geography. By the product type, the market is segmented into bioherbicides, bioinsecticides, and biofungicides, with application in both crop-based and non-crop-based categories. Based on the main ingredient type, the market is divided into microbial, plant and biochemical-based biopesticides. Microbial biopesticides form the major part of the biopesticides market. Bt-based microbial pesticides are dominating the microbial pesticides segment with nearly 50% of the market share, followed by fungi (XX %), viruses (XX %) and nematodes (XX %).
Based on the type of formulation, the biopesticides market has been segmented into solid formulations and liquid formulations. Solid formulations increase crop yield but offer a low shelf-life. In contrast, liquid formulations provide an increased shelf-life and are preferred over solid formulations. Biopesticides can be used to treat plants via foliar sprays, seed treatment or soil treatment, or can be applied post-harvest. By geography, Europe is the largest market. However, the emerging economies in Asia-Pacific are likely to take the lead in the adoption of biopesticides. North America and South America are also expected to show significant growth in the segment.
Major companies in the sector include:
Recent Industry Insights
The biopesticides market is considered to be an emerging market, which is evident from the mergers and acquisitions by the industry giants. The acquisition of the two top companies AgraQuest and Prophyta by Bayer CropScience is one such example and has helped Bayer in making a significant impact on the biopesticides business sector. After releasing two products, Kodiak® Concentrate and Serenade SOIL®, in 2014, it plans to release Requiem™ by the end of 2017. Valent BioSciences entered into a North American distribution agreement for a biofungicide product range with the BioAg Alliance, comprising Monsanto and Novozymes. Likewise, Syngenta entered into a technical partnership with Pasteuria Bioscience for the development of a bionematicide. Companies such as Andermatt Biocontrol, Koppert Biological Systems, Certis USA, and many more, introduced biopesticides in the USA, Australia, Canada and South Africa in 2015.
Key Deliverables in the Study
The report holds importance for the following stakeholders:
The responsibility to protect sensitive private information is now legally mandated and has become a key focus of many regulations across multiple industries. Information security is vital to the success of an organisation's day-to-day operations and must be managed as a proactive, strategic business process throughout the entire enterprise - not an intermittent or point-in-time event for technology staff alone.
Love them or loathe them, log files play a central role in this. Logs are the lifeblood. They tell us the Who, the What, the Where, and the When. They give us insight. They give us answers. Very occasionally they might even make us laugh, when the computer jargon points out the very obvious or makes a simple fault sound incredibly serious.
Because of the widespread deployment of networked servers, workstations, and other computing devices, and the ever-increasing number of threats against networks and systems, the number, volume, and variety of computer security logs have increased greatly. This has created the need for computer security log management, which is the process for generating, transmitting, storing, analysing, and disposing of computer security log data.
Log files are critical to the successful investigation and prosecution of security incidents, therefore best practices recommend logging all events. However, enforcing such a policy can often overwhelm already overworked system administrators. The last thing you want is information overload. But it is true to say that logging only subsets is a risk. There are emerging solutions that do indeed gather a log for every event that takes place on the network, and provide an easy way to retrieve specific information if and when required.
Log files generally fall into one of three categories. Security software logs primarily contain computer security-related information, while operating system logs and application logs typically contain a variety of information, including computer security-related data.
- Anti-Virus Software
- Intrusion Detection & Prevention
- Remote Access Software
- Web Proxies
- Vulnerability Management Software
- Authentication Servers
- Network Devices
Operating systems (OS) for servers, workstations, and networking devices (e.g., routers, switches) usually log a variety of information related to security. The most common types of security-related OS data are:
System Events. System events are operational actions performed by OS components, such as shutting down the system or starting a service. Typically, failed events and the most significant successful events are logged. The details logged for each event also vary widely; each event is usually timestamped, and other supporting information could include event, status, and error codes; service name; and user or system account associated with an event.
Audit Records. Audit records contain security event information such as successful and failed authentication attempts, file accesses, security policy changes, account changes (e.g., account creation and deletion, account privilege assignment), and use of privileges.
Operating systems and security software provide the foundation and protection for applications, which are used to store, access, and manipulate the data used for the organization’s business processes.
Some applications generate their own log files, while others use the logging capabilities of the OS on which they are installed. Applications vary significantly in the types of information that they log.
Account information such as successful and failed authentication attempts, account changes (e.g., account creation and deletion, account privilege assignment), and use of privileges. In addition to identifying security events such as brute force password guessing and escalation of privileges, it can be used to identify who has used the application and when each person has used it.
Usage information such as the number of transactions occurring in a certain period (e.g., minute, hour) and the size of transactions (e.g., e-mail message size, file transfer size). This can be useful for certain types of security monitoring (e.g., a ten-fold increase in e-mail activity might indicate a new e-mail-borne malware threat; an unusually large outbound e-mail message might indicate inappropriate release of information).
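The ten-fold rule of thumb above is straightforward to automate. Below is a minimal, illustrative Python sketch of that idea (the threshold, counts and data source are assumptions for illustration, not taken from any particular product):

def is_anomalous(current_count, baseline_counts, factor=10):
    # Flag activity that exceeds the historical average by `factor` (e.g., ten-fold)
    if not baseline_counts:
        return False
    baseline = sum(baseline_counts) / len(baseline_counts)
    return baseline > 0 and current_count >= factor * baseline

# Example: hourly outbound e-mail counts for the past day, then a sudden spike
history = [120, 98, 135, 110, 101, 125]
print(is_anomalous(1400, history))  # True - possible mail-borne malware outbreak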
In determining which data is sufficient and appropriate to collect, organisations should implement processes that:
- Identify components and events that warrant logging.
- Establish the amount of data to be logged.
- Identify and establish mandated log retention timeframes.
- Implement policies for securely handling and analysing log files.
The issue of retention has become a difficult one for many organisations. Satisfying the reporting demands of government regulations and corporate security policies requires the retention of vast amounts of security data. Not only must you collect log and event data from security products like firewalls and identity management systems, auditors must also be able to go back several years to trace security violations. One effect of government regulations is that security information, including event logs and transaction logs, has now become legal records that must be produced when requested by legal authorities. This could potentially stretch data retention periods to the duration of the litigation process.
Penalties for non-compliance include monetary fines, civil liability and executive accountability. In some cases, such as with Sarbanes-Oxley, the statutes allow for fines that may reach into the millions of dollars. However, the largest penalties for non-compliance are likely to be the market-driven costs of having the company name associated with a security breach, and not being able to demonstrate reasonable security precautions with an acceptable compliance statement. The damaged trust relationship affects customer satisfaction, consumer confidence, and the organisation's ability to compete in the marketplace.
On top of retention requirements, log files must be secured and access restricted and monitored. In an attempt to conceal unauthorised access or attempted access, intruders will try to edit or delete log files. Efforts to secure log files should include:
- Encryption of data residing on database and in transit where necessary.
- Segregation of logged data to an independent server.
- Collection of data on Write Once Read Many (WORM) disks or drives.
- Secure storage of backup and destruction of log files.
Secure log files also assist in effective and timely identification and response to security incidents and to monitoring and enforcement policy compliance.
A good log management solution should provide a scalable and centralised process that can collect, normalise, aggregate, compress and encrypt log data from disparate sources such as routers, switches, firewalls, IDS/IPS, AV, spam/spyware filters, and Windows, UNIX and Linux systems to identify security breaches, hacker intrusion and/or any other activity that could potentially cripple valuable corporate assets. A good log management solution should also automate the process of producing reports, with relevant information that will indicate an anomaly or glitch. Having the system e-mail these reports to your inbox at set intervals can save trouble and, most importantly, time.
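Normalisation in this context typically means parsing each raw entry into a common schema before aggregation and analysis. A simplified Python sketch follows (the syslog-style line format and field names are assumptions for illustration):

import re

# Assumed format: "Jan 12 03:14:07 host1 sshd[412]: Failed password for root from 10.0.0.5"
PATTERN = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<process>[\w/-]+)\[\d+\]:\s"
    r"(?P<message>.*)"
)

def normalize(line):
    # Parse one raw log line into a common schema; return None if it doesn't match
    match = PATTERN.match(line)
    return match.groupdict() if match else None

print(normalize("Jan 12 03:14:07 host1 sshd[412]: Failed password for root from 10.0.0.5"))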
A solution that automatically mines and manages that data can provide immediate insight into network activity, helping IT departments respond rapidly to security events and other network availability problems. Additionally, with stricter requirements imposed by best practices frameworks and regulatory legislation, companies must find more reliable ways of managing and securely archiving complete log data for compliance purposes and legal protection. Reporting requirements for security information are going to increase. Regulations are sure to call for log data from additional sources. Plan now for performance to handle streams of security information without impacting application performance and storage capacity that offers efficient growth paths as the enterprise storage requirements grow.
Log files may not be pretty, but they make fantastic partners, working tirelessly in the background, never complaining, always on top! Sometimes, they can be difficult to make sense of. A centralised log management system will undoubtedly help.
Last time I gave a high level overview of SIP and also took a look at SIP network element types. This time, I'll be looking at SIP message and method types, and describing how SIP network elements communicate. If you can't remember the SIP network elements that I described last time, it's probably a good idea to take a quick look at my last post.
There are two overall types of SIP message:
Requests: a SIP request is sent from a SIP client (a UAC, such as a SIP phone) to a SIP server. Requests are used to invoke certain operations on the server.
Responses: responses are sent by servers to clients, and indicate the status of the request sent by the client. Responses can be either provisional or final, and can, for example, indicate the request has been successful, or that there has been an error.
Specific SIP response ranges include:
1XX (100-199): these are provisional or informational responses.
2XX: these indicate success.
3XX: these indicate redirection.
4XX: these indicate client errors.
5XX: these indicate server errors.
6XX: these describe global failures.
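Since the class of a response is carried entirely in the leading digit of the status code, mapping a code to its class is trivial. A short illustrative Python sketch:

def classify_sip_response(code):
    # Map a SIP status code to its response class per the ranges above
    classes = {
        1: "Provisional/informational",
        2: "Success",
        3: "Redirection",
        4: "Client error",
        5: "Server error",
        6: "Global failure",
    }
    return classes.get(code // 100, "Unknown")

print(classify_sip_response(180))  # Provisional/informational (180 Ringing)
print(classify_sip_response(486))  # Client error (486 Busy Here)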
As mentioned, SIP requests can be used to invoke operations. These operations are also referred to as methods, and the most common methods are as follows:
INVITE: this message type is used when a client wants to initiate a session. The INVITE is sent to a server (UAS), and the server processes the INVITE and returns an appropriate response. An example of the usage of an INVITE request (method) is one sent by a SIP phone in order to invite another SIP phone to participate in a voice call (session).
REGISTER: this message is used to register contact information with a registrar server, and the contact information is used to build a location database (the registrar server is the front end of the location service). When a user wants to initiate a session with another user, it must first locate that user, and to do this the location service can be consulted.
ACK: this message is the reply to the final response for an INVITE.
BYE: this is used to terminate a session.
CANCEL: this is used to terminate a request for which a final response has not yet been received. This could be used, for example, if a SIP phone sends an INVITE to a second phone in order to initiate a call, but before the second phone is answered (and a final response sent), the first phone sends a CANCEL to terminate the call initiation request.
OPTIONS: this is used by a UA, such as a SIP phone, in order to query another UA about its capabilities.
Other SIP methods include SUBSCRIBE, PRACK, INFO, REFER, NOTIFY, and UPDATE.
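To make the request format concrete, here is a skeleton INVITE, loosely modelled on the example in RFC 3261 (the addresses, branch and tag values are made up for illustration):

INVITE sip:userB@example.com SIP/2.0
Via: SIP/2.0/UDP pc1.example.com;branch=z9hG4bK776asdhds
Max-Forwards: 70
To: User B <sip:userB@example.com>
From: User A <sip:userA@example.com>;tag=1928301774
Call-ID: a84b4c76e66710@pc1.example.com
CSeq: 314159 INVITE
Contact: <sip:userA@pc1.example.com>
Content-Type: application/sdp
Content-Length: 142

The message body (not shown) would carry an SDP description of the session the caller is offering.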
If you are wondering how all these messages fit together in practice, here's an example of call setup and disconnect between a Cisco SIP gateway (GW1) and a SIP IP phone (User B), both of which are functioning as SIP UAs. Note that 'User A' and 'PBX A' in this example are not enabled for SIP. There are numerous other examples of message flows in the referenced document, but two others of particular interest may be call setup via a SIP proxy server, and call setup via a SIP redirect server. In these two examples, the PBXs and user phones are again not enabled for SIP - only the gateways and the proxy/redirect servers are SIP enabled, and the gateways are functioning as SIP UAs.
Next time I'll describe the Media Gateway Control Protocol (MGCP).
Privilege Guard v3.8 introduces the Drive Rule
The Drive Rule is a new validation rule that lets you match applications being executed from particular types of drive. Not too dissimilar in concept to the file path rule (where applications are matched based on their directory location), the drive rule lets you target the drive itself.
So what do we mean by drive?
Basically, anything which shows up under My Computer with a drive letter.
Why is that important?
As you know, storage comes in many forms, and all modern PCs and laptops allow extra storage, or peripherals, to be plugged into external ports. Any storage peripheral that is plugged in, or loaded in the case of a CD-ROM, will then register itself with Windows as a drive and pop up in Explorer, ready for the user to access.
Below is a summary of the different categories a drive would fall under:
- Fixed Disk – Any drive identified as an internal Hard Disk
- Network – Any drive mapped to a network share
- RAM Disk – Any drive identified as a RAM disk
- Removable Media – Any drive identified as generic Removable Media
- USB – Any drive detected as a USB connected device
- CD/DVD – Any drive identified as a CD or DVD optical drive
- eSATA Drive – Any drive detected as an eSATA connected device
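On Windows, the broad category of each mounted drive can be queried with the Win32 GetDriveType API. The Python sketch below is purely illustrative: GetDriveType reports a USB stick simply as removable media, so distinguishing USB or eSATA connections, as the drive rule above does, requires deeper device-level queries.

import ctypes
import string

# Win32 GetDriveTypeW return codes
DRIVE_TYPES = {
    0: "Unknown",
    1: "No root directory",
    2: "Removable media",
    3: "Fixed disk",
    4: "Network",
    5: "CD/DVD",
    6: "RAM disk",
}

kernel32 = ctypes.windll.kernel32
mask = kernel32.GetLogicalDrives()  # bitmask of drive letters currently present
for i, letter in enumerate(string.ascii_uppercase):
    if mask & (1 << i):
        root = letter + ":\\"
        print(root, DRIVE_TYPES.get(kernel32.GetDriveTypeW(root), "Unknown"))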
OK, so why should I be concerned?
There are a number of reasons why a drive type is cause for security concerns, specifically any drive that allows the user to transport code onto their computer. Below are some example scenarios which highlight just some of the issues:
Untrustworthy file systems
Non-standard media, such as homemade CDs/DVDs and USB sticks, have an unreliable file system format. In many cases it will be FAT, which does not include any security information, so robust policies designed to match on properties such as Trusted Owner may not be available.
Executing unknown code from personal media and devices should always be blocked by default.
TIP: The device rule, when used in combination with the Trusted Owner rule creates a robust layer of protection to prevent users executing code from untrusted devices, and also prevents users from attempting to bypass this by copying it to a trusted drive. NTFS security ensures that the user who introduces code becomes the owner, and in the case of a standard user, an untrusted owner, which will result in an automatic block by Avecto software.
Auto-run

Many USB devices, as well as CDs/DVDs, include auto-run capabilities, where a specific application on the media will execute automatically once connected or inserted. This is a common attack vector used by cybercriminals against unsuspecting users to gain control of a computer. For example, a malicious auto-run executable that installs a trojan or keylogger is presented on a CD to a target through social engineering, or on a USB stick dropped by an office entrance. Simply plugging the media into your desktop is all that it takes to seize control of the computer and open a backdoor for further exploitation.
Auto-run executables pose a significant security risk to any organization, and should be blocked by default.
Portable app installs
Portable apps offer users a convenient way of transporting their favorite application, web browser or game from computer to computer. They do not need to be installed (that's what makes them portable), and they are generally designed to run without admin rights (many apps only need admin rights because they require access to protected areas of the registry and filesystem).
Because of the lightweight nature of portable apps, they can very easily slip past application control mechanisms; you only need to take a look at the range of available portable apps to realize this can pose significant problems from both a security and license compliance perspective.
Portable apps allow users to run untrusted and unauthorized code, and should be blocked by default.
To block or not to block?
Applying a blanket stop on all of the above may be a great idea from a security perspective, but there are cases where users genuinely need to run code from CDs/DVDs (for example, a vendor installation disk). Likewise, many IT departments have genuine use cases for portable debugging tools. So a flexible, granular, policy-based level of control is required.
Privilege Guard 3.8 Drive Rule
The new drive rule can be used in combination with any of the other 18+ validation rules that Avecto offers, giving you a diverse set of criteria to target applications individually or by classification.
Firewall-style rules mean you can easily build a robust security model for dealing with unauthorized code introduced through unknown drive types, and strong validation rules allow whitelisting of trusted and authorized applications.
Edit: Privilege Guard has now evolved into the brand new security suite, Defendpoint, which encompasses Privilege Management, Application Control and Sandboxing. For more information, please visit www.avecto.com/defendpoint.
Many of VoIP's security vulnerabilities are nothing new; they are simply the consequence of routing voice traffic over IP networks. Traditional telephony has been spared the kind of denial of service (DoS) attacks and worms that have bedeviled the Internet since Robert Tappan Morris set the first worm loose in 1988. However, the transport medium changes everything, even if VoIP lets users make and receive telephone calls with the same ease as with traditional phone service.
"You have to consider the underlying infrastructure," Infonetics directing analyst for enterprise voice and data Matthias Machowinski says. "If worms and viruses bog down your network, it's a data security issue, of course, but that's also going to affect voice quality and reliability."
In fact, real-time traffic like voice is particularly susceptible to any attacks on the IP network carrying it. Few users, Machowinski notes, will notice a network hiccup when they're downloading an e-mail attachment, but the same minute delay could play havoc with voice data. The bottom line is that VoIP security is only as good as the overall security of the network it's on, but even that's just a starting point.
"VoIP inherits every one of the denial of service vulnerabilities that you have on the net," Zar says. "It's also vulnerable to DoS attacks that are protocol-aware." | <urn:uuid:861c0acc-14cb-468d-858b-53e3be681ad4> | CC-MAIN-2017-04 | http://www.networkcomputing.com/networking/protect-yourself-against-worst-voip-dangers/956838878 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00283-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964191 | 282 | 2.640625 | 3 |
Learn the practices of managing information privacy while preparing for the CIPM examination.
In this course, you will gain foundational knowledge on concepts of privacy and data protection laws and practice. You will learn common principles and approaches to privacy as well as understand the major privacy models employed around the globe. An introduction to information security concepts and information security management and governance will be covered including frameworks, controls, and identity and access management. You will also learn about online privacy as it relates to using personal information on websites and other internet-related technologies.
You will learn how to create a privacy program at an organizational level, develop and implement a framework, and establish metrics to measure program effectiveness. In an interactive format applying practices to a real-world scenario, you will review privacy program practices through the privacy life cycle: assess, protect, sustain and respond.
This two-day program covering practices in managing information privacy includes:
Note: Your contact information must be provided to the IAPP and will be used by IAPP for membership services fulfillment in accordance with IAPP's policies.
Receive face-to-face instruction at one of our training center locations.
Experience expert-led online training from the convenience of your home, office or anywhere with an Internet connection.
Train your entire team in a private, coordinated professional development session at the location of your choice.
Receive private training for teams online and in-person.
7.8 What is an undeniable signature scheme?
Undeniable signature schemes, devised by Chaum and van Antwerpen [CV90] [CV92], are non-self-authenticating signature schemes (see Question 7.2), where signatures can only be verified with the signer's consent. However, if a signature is only verifiable with the aid of a signer, a dishonest signer may refuse to authenticate a genuine document. Undeniable signatures solve this problem by adding a new component, called the disavowal protocol, in addition to the normal components of signature and verification.
The scheme is implemented using public-key cryptography based on the discrete logarithm problem (see Question 2.3.7). The signature part of the scheme is similar to other discrete logarithm signature schemes. Verification is carried out by a challenge-response protocol where the verifier, Alice, sends a challenge to the signer, Bob, and views the answer to verify the signature. The disavowal process is similar; Alice sends a challenge and Bob's response shows that a signature is not his. (If Bob does not take part, it may be assumed that the document is authentic.) The probability that a dishonest signer is able to successfully mislead the verifier in either verification or disavowal is 1/p where p is the prime number in the signer's private key. If we consider the average 768-bit private key, there is only a minuscule probability that the signer will be able to repudiate a document they have signed.
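To make the challenge-response mechanics concrete, here is a toy Python sketch of the confirmation protocol (the parameters are tiny, insecure illustration values - a real private key uses primes of 768 bits or more - and the disavowal protocol is omitted; pow(x, -1, q) requires Python 3.8+):

# Toy parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup
p, q, g = 467, 233, 4

x = 101               # Bob's private key
y = pow(g, x, p)      # Bob's public key

m = pow(g, 57, p)     # toy "message", embedded in the subgroup
s = pow(m, x, p)      # Bob's undeniable signature on m

# Confirmation: Alice picks secret exponents e1, e2 and sends Bob the challenge c
e1, e2 = 29, 71
c = (pow(s, e1, p) * pow(y, e2, p)) % p

# Only the true signer can respond correctly, using x^-1 mod q
d = pow(c, pow(x, -1, q), p)

# Alice accepts the signature iff the response equals m^e1 * g^e2 mod p
assert d == (pow(m, e1, p) * pow(g, e2, p)) % p
print("Signature confirmed with the signer's cooperation")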
The Internet of Things is upon us. And with it, a whole range of stacks and protocols to wirelessly drive the multitude of devices and applications. The wireless front has become a complex weave of multiple technologies. Due to the high number of handoffs and conversions, next-gen wireless networks are characterized by the extreme diversification of protocols and standards on one hand, and that of network architectures and topologies on the other. IMS, LTE, LTE-A, Wi-Fi, Small Cells, UTRAN, GERAN, HetNet, 2G/3G/4G wireless systems - there's definitely a lot going on out there.
Mobile technologies may be varied, but the convergence of the services they deliver is the norm today. Wireless carries data, video and voice for everything, everywhere. That's the trend. That's the Internet of Things.
Thankfully, there are resources out there that provide an overview of multiprotocol wireless networks. The following Mobile Networks Reference Poster illustrates the different protocols and standards and includes an extensive glossary of telecom acronyms. It is free and handy, and you can order your print copy now.
Energy aims to retake supercomputing lead from China
Department's next high-performance system, being built by IBM, expected to surpass the current leader
- By Henry Kenyon
- Feb 11, 2011
China currently holds the lead position for the world's fastest supercomputer, but not for long. The U.S. is working on a new class of computers that will greatly outperform all of the planet's current supercomputers. These machines will in turn pave the way for even faster computers scheduled to appear by the end of the decade.
Commissioned by the Energy Department's Argonne National Laboratory, the computer will be able to execute 10 quadrillion calculations per second, or 10 petaflops. Nicknamed Mira, the machine will be built by IBM and based on the upcoming version of the firm's Blue Gene supercomputer architecture, called Blue Gene/Q, Computerworld reported. The supercomputer will be operational in 2012.
According to Computerworld, the 10-petaflop performance will be vastly higher than today’s most powerful machine, the Tianjin National Supercomputer Center’s Tianhe-1A system, which has a peak performance of 2.67 petaflops.
There’s a new supercomputing champ in town
Making sense of exaflops
The added speed and computing muscle will allow Mira to conduct a variety of modeling and simulation tests that current machines cannot do. In a statement, IBM said the computer could be used in a variety of applications, such as modeling new, highly efficient batteries for electric cars or developing better climate models.
Argonne officials expect that Mira will not only be the fastest computer in the world, but the most energy efficient as well. These efficiencies will be achieved by a combination of new microchip designs and very efficient water cooling. The Argonne Leadership Computing Facility (ALCF), which will house Mira, won an Environmental Sustainability (EStar) award in 2010 for the innovative and energy efficient cooling designed for its current system. Laboratory officials predict that Mira will be even more efficient.
Mira is also a stepping stone in U.S. efforts to develop exascale computers — a class of machines that would be a thousand times faster than the upcoming petascale systems. Computerworld noted that by 2012, Mira will be one of three IBM systems able to operate at 10 petaflops or higher. The company is also developing a 20-petaflop machine called Sequoia for the DOE's Lawrence Livermore National Laboratory. Another IBM-built 10 petaflop machine in production is the Blue Waters system for the National Science Foundation-funded National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
We carry miniature computers around with us – but are we doing enough to protect them and our data? [Article written by Nick Booth, and published in The Review, October 2012]
We take them for granted these days, but smartphones are actually miniature computers with more processing power than NASA used to put a man on the moon. Thanks to mobile broadband, smartphones now outsell PCs and laptops because they can do anything that a desktop computer can do - and more.
But while we routinely protect our PCs from viruses and hackers, a third of mobile users have no protection, according to McAfee research. This makes us - and our devices - incredibly vulnerable. If a criminal can plant some software on your phone, they can take control of it, steal all your banking details, spy on you and run up huge phone bills on your account. And with the number of smartphone users increasing by the day and ever more services being created, there is an increasing need for vigilance.
Criminals only need to find one open window for the virus writer, hacker or identity thief to steal everything. So what are the windows of opportunity for criminals, and how do you close them down?

The most obvious way to safeguard your privacy is to passcode-protect your phone in case you lose it. Some handset vendors now offer biometric recognition; Motorola, for example, created a fingerprint sensor for the Atrix, its Android mobile phone.
Malware – rogue software used for criminal purposes – is the next biggest threat. Criminals can fool you into allowing rogue software onto your phone when you download apps, respond to texts or visit Facebook. As with desktop PCs, downloading apps from unknown sources is the biggest risk, as they can be conduits for malware.

But it is SMS texting, which is still phones' most-used feature, that creates a hacker's biggest opportunity to steal from you. Mobile malware can make your phone send thousands of premium-rate SMS texts and you won't even know it until your six-figure phone bill arrives. By the end of 2011, there were 130,000 malware apps in existence for Android phones alone, according to Trend Micro, and most were for SMS fraud.
Even legitimate mobile apps have their security vulnerabilities, and cybercriminals are finding these coding weaknesses and beginning to load their rogue code into them.
The moral is that you must never assume your software is safe, even if it comes from a reputable supplier. So how do you minimize the risk of falling prey to all these online threats? Here are some strategies to adopt.
Limit the number of downloads you make. The sites you visit most frequently are also likely to be havens for criminals, who try to exploit popular apps, URLs, attachments, social media or email. By clicking a link or downloading an attachment on your mobile device, you may end up installing mobile malware instead.
App stores are a danger area. Although the proprietors try to monitor their stores for malware, rogue software vendors can sneak in. Malware disguised as a stock market app – that was actually designed to steal information from the downloader’s device – made it into the iTunes App Store recently.
Apple users should avoid the temptation to “jailbreak” their iPhones using software that allows them to break out of the confines of iOS. This can lead to a malware invasion. If you use an Android phone, jailbreaking isn’t an issue as Android phones have no boundaries. That’s not to say they’re risk-free, however: in the last seven months of 2011, malware targeting Android grew by 3,325% and Android malware accounted for about 46.7% of unique malware samples, according to Juniper Networks. Google is now attempting to secure its App Market with an internal malware detector called Bouncer that scans apps submitted to the Android Market.
Even the most vigilant mobile users drop their guard at times, so it is vital to install security management systems. These software solutions and gadgets will create a secure foundation. The rest is up to you.
Robert Winter, 48, mobile data recovery manager, UK
1 - Go into the Settings menu and set up a passcode for your phone.
2 - While in Settings, Android users should turn off the Access from Unknown Sources option.
3 - Check the reputation of any publisher before you buy an app from it.
4 - When you install an app, check the permissions it asks for. Be very careful about granting any. No game app needs to know your contacts or location.
5 - Watch out for social media – hackers are now placing malicious links on your friends’ profiles that install malware on your device when you click them.
6 - Keep your phone updated with the latest security firmware to correct possible vulnerabilities.
7 - Block the installation of rogue software by using the Tools menu of your internet browser to disable Java.
8 - Don’t trust public Wi-Fi, especially for financial or other secure personal transactions. | <urn:uuid:f45dd3f5-4b68-45bf-89ca-d17611ddba6a> | CC-MAIN-2017-04 | http://www.gemalto.com/mobile/inspired/mobile-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00146-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927265 | 1,022 | 2.609375 | 3 |
Some time ago I was working on an IPv6 implementation, and in that period I wrote an article about NDP (you can read it here). After a while I received some comments that it was not written very well, so I reviewed a huge part of it. It seems my English was far worse two years ago than I was really aware of 🙂
In the reviewing process I realised that NDP's usage of Solicited-Node multicast addresses was not clearly explained. This is the follow-up article, which should explain how and why Solicited-Node multicast addresses are used in NDP. After all, this kind of multicast address exists to enable the IPv6 neighbor discovery function of NDP to work properly.
A solicited-node multicast address is an IPv6 multicast address used on the local L2 subnet by NDP (Neighbor Discovery Protocol). NDP uses this kind of multicast address to find out the link-layer (L2) addresses of other nodes present on that subnet.
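Concretely, the solicited-node address is formed by appending the low-order 24 bits of the unicast (or anycast) address to the prefix ff02::1:ff00:0/104. A short Python sketch of the derivation:

import ipaddress

def solicited_node(unicast):
    # Keep the low-order 24 bits and combine them with the ff02::1:ff00:0/104 prefix
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::2aa:ff:fe28:9c5a"))  # ff02::1:ff28:9c5a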
NDP replaces ARP
As we know, NDP in IPv6 networks replaced the ARP function from IPv4 networks. In the IPv4 world, ARP used broadcast to send this kind of discovery message and find out about neighbours' addresses on the subnet. With IPv6 and NDP, broadcast is not a good solution, so we instead use a special type of multicast group address, which all nodes join, to enable NDP communication.
The goal of a data warehouse is to integrate and conform the data from across the organization.
To conform means to bring into agreement, or to make similar. Conforming data entails having business units agree on terms that describe the important entities in the business. For example, a bank that originates and services mortgage loans, and then sells them on the secondary market will often have a discrete application for each function in the mortgage lending lifecycle.
In turn, the codes of the respective operational systems must be mapped to this agreed-to set of descriptions when the data moves into the data warehouse.
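As a hypothetical illustration (the systems and codes below are invented), the ETL layer might translate each source system's status codes into one conformed description:

# Source-specific codes mapped to a single conformed loan-status dimension
CONFORMED_STATUS = {
    ("origination", "APP"): "Application received",
    ("origination", "FND"): "Funded",
    ("servicing", "CUR"): "Current",
    ("servicing", "DLQ"): "Delinquent",
    ("secondary", "SLD"): "Sold",
}

def conform(source_system, code):
    # Translate a source code into the agreed-to description, flagging gaps for review
    return CONFORMED_STATUS.get((source_system, code), "Unmapped - review")

print(conform("servicing", "DLQ"))  # Delinquent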
In addition, the users of a data warehouse will often tell you that consistency means knowing exactly what they are getting from the data warehouse. If the users ask for revenue or cost totals, they want to know what the components of revenue and cost are, and they want those components to be the same for all queries.
The first thing your checkup must do is determine whether your data warehouse strategy is committed to the philosophy of making data consistent. Then you must examine how well your governance process for maintaining and growing the data warehouse functions in light of this commitment:
Simply put, conforming dimensions means bringing consistency to the textual attributes of an organization. These are the attributes by which users slice and dice information -- customer demographics, product categories, organizational hierarchies, time periods, and many more.
Conforming facts means bringing consistency to the measures of an organization -- the number of customers, total revenue, gross profit return on inventory, end-of-month balances, and a host of others.
Examine your commitment to and process for conforming data. Unless data is conformed across the organization, the full value of the data warehouse will never be achieved.
Data Warehouse Usage
The real purpose for a data warehouse is to enable the business users to make decisions. To find out how well the data warehouse is assisting the decision making process, ask the users.
Since the data warehouse has been functioning for some time, tackle the user interviews from three directions: the intended users, the neglected potential users, and the entrepreneurs.
The data warehouse was built with a set of users in mind, and it was designed to answer their questions. Ask the intended users who are currently using the data warehouse:
Ask the intended users who are not using the data warehouse what prevents them from using it, and what would make them change their minds.
Next, turn to acknowledged potential users of the data warehouse whose needs were not addressed in initial releases. These neglected information consumers will give you a better understanding of future wants: the key data elements to be migrated to the data warehouse, the data sources, the volume and frequency of migration, and the impact the information will have on making decisions.
This will help shape the conforming committees and the governance process by which new information is released to current and new consumers.
Finally, engage those select individuals who are using the data warehouse to do great things. Find out what hidden stories they unearth in the data. The entrepreneurs on the periphery are critical change agents, and they will help make your data warehouse a success.
If there are no entrepreneurs to be found, consider whether everyone is asleep, or whether your data warehouse is too cumbersome to use, contains no useful information, or both.
Data Model Composition
Now we turn to the underpinnings of the data warehouse.
The data model of a data warehouse should be the optimal model for delivering data to the end users. The model must be easy to understand and provide speedy query performance. Anything else is beside the point.
Dimensional modeling is the best way to model decision support data, and a well-designed dimensional data model is the prerequisite to a high-performing data warehouse.
The data model for your data warehouse should be checked by a data architect or data modeler versed in dimensional data modeling techniques.
This assessment looks at: data mart alignment with business functions; fact table granularity; and dimensional structure.
Data Mart Alignment with Business Functions: Each data mart or family of fact tables should focus on a business theme that's important to the business, possibly tracking that theme through various phases of the business lifecycle.
For example, managers and analysts who worry about quality in the order fulfillment process would be interested in knowing about:
The data modeler examines each family of fact tables or individual facts to ascertain how well they align with the business requirements.
Fact Table Granularity: The cause of much user heartburn and many complaints regarding data warehouse queries can often be traced to the granularity of the individual fact table record.
The design of a data mart begins with a declaration of the grain. The grain might be an end-of-day snapshot of individual account balances or every parking ticket issued by a parking garage.
If the fact table contains every segment of a telephone call as it is routed through the system, then each call segment is the grain. Every question tied to a business theme depends on the grain of the fact table.
Very often, not enough consideration has been given to visualizing and declaring the correct grain. The data modeler studies the impact the granularity of each fact table has on user queries. Changing the granularity can have a substantial impact on the choice of attributes in both dimension and fact tables.
Practice Tests, Self-Assessments and Skill Drills
As formal learning methods are increasingly applied to certification programs and preparation, aspiring candidates will find an ever-broader range of options to help them through the certification process. Formal learning methods today recognize three major milestones in the certification process, and it’s increasingly the case that candidates can (or must) work with tools at each step along the way:
- Assess: This preliminary step runs candidates through a systematic inventory of exam objectives, topics, concepts, skills and so forth. Competency and knowledge in any area indicates that area won’t require extensive preparation or review. Lack of competency or knowledge in any area indicates elements that must be included in learning and preparation plans and materials. The latter elements therefore define the skills and knowledge gap that candidates must close before they can test.
- Learn: From a general perspective, this step consists of mastering the terms, concepts, tools, techniques and skills that the certification seeks to verify. Formal learning methods narrow this step to simply closing the skills and knowledge gap that the assessment stage uncovers.
- Measure: From a general perspective, this step consists of testing candidates on their knowledge of concepts, tools, techniques and so forth, and of checking their level of skills and competency on the operational side of the certification.

From this perspective, practice tests, self-assessments and skill drills can all play a pivotal role throughout the first two steps of the process and in making sure candidates are good and ready to succeed in completing the third step.
Though not as common as either practice tests or skill drills, self-assessments are designed specifically to analyze skills and knowledge gaps in their takers. Even better, most self-assessments are also designed to create learning plans designed to remedy the very gaps they detect. Thus, it’s not unreasonable to re-use self-assessments as a kind of ongoing measure of exam readiness, though it’s probably more accurate to use multiple self-assessments at various stages of preparation to avoid assessments that may become inflated owing to repeated exposure to the same question banks.
Self-assessments are excellent tools to help certification candidates get started preparing and to measure exam readiness. That probably explains why certification sponsors and practice-test vendors are starting to get in on this act.
In April 2003, Microsoft partnered with practice-test vendor MeasureUp to offer “skills assessments” on the Microsoft Web site (www.microsoft.com/traincert/assessment/). In their first six months of availability, more than 150,000 of these assessments have been completed, according to numbers obtained from MeasureUp. This nearly matches the total number of Microsoft certifications earned during the same period, so clearly candidates are taking advantage of these tools to help them determine where to focus their efforts in preparing for exams.
I expect to see assessments becoming a more standard part of the certification preparation process in the near future, so that you’ll start finding more of them at sponsor Web sites, as part of learning systems, in study guides and other preparation aids and so forth. Because I’ve long advocated using practice exams plus a review of exam objectives and prep materials to perform manually what assessments do automatically (analyze results, perform gap analysis and construct a remedial learning plan), this offers great value for cert candidates, even if they must pay for assessments.
Although practice tests don’t always perform gap analysis and propose learning plans in response to test results as self-assessments do, best-of-breed offerings increasingly aim at aiding learning, as well as assessing exam readiness. That is, most of the better practice test vendors take time to explain both correct and incorrect answers to their questions and often point to additional resources and information as part of their answer discussions. This is a deliberate value-add designed to help candidates improve their skills and knowledge when they encounter questions that may indicate further learning is needed.
Using a practice test to assess exam readiness is reasonably easy, assuming that the practice test effectively models the real thing. In that case, candidates must aim for a score that’s at least as high as the passing or cut score on the real exam before they consider moving from the practice court to the testing center. Because the stress of taking a real exam can often depress exam results, no matter how well prepared a candidate might be, I usually recommend that candidates shoot for a score that’s 10 percent higher than the cut score to compensate for such effects.
Using a practice test to plan further learning may take extra effort, depending on how much information the practice-test vendor provides in its answers. At a minimum, candidates must map questions to specific exam objectives (or related topics or subtopics). At the top end of this scale, candidates should also map questions to related study aids, training guides and other technical background materials, hands-on labs or exercises or whatever’s needed to expand their understanding of and ability to deal with the subject matter in the exam situation. Those who fail certification exams, in fact, are advised to memorize as much of what they didn’t understand on the exam as possible for the very same reason—namely, to allow them to analyze their skills and knowledge gaps after they leave the testing center, and to plan the right kind of remediation to ensure a passing score on their next try.
Although practice tests try to model real exams as much as possible and include far more hands-on activity than certification exams did as recently as three years ago, skill drills take the crucial dimension of building experience one step further. That is, skill drills should take certification candidates through multiple passes over key tasks, activities, analyses and troubleshooting situations that they’re likely to see on exams. The guiding concept is to provide a safe, well-supported environment in which candidates can explore, learn and refine their understanding of and ability to interact with important tools, utilities, consoles and other items with which they must contend in the exam situation.
Practice exams do indeed require more hands-on interaction and more actual experience with systems, software and tools nowadays, but they don't always provide the ideal vehicle through which to learn about such things. Skill drills, on the other hand, would include elements of training and familiarization (if not downright desensitization, which is where the repetition and rote practice involved in drilling comes into play), as well as providing exposure to the kinds of situations and problems that test-takers are likely to encounter in the real thing. In fact, this is where labs and simulators often do what skill drills should, in that they're designed to provide safe, well-documented and well-supported interaction to help certification candidates get comfortable with hands-on exam content.
After reading this description, I imagine at least some of you are asking, "Where can I get a skill drill?" Alas, it's now time to bring reality crashing down: While many training companies and cert-prep vendors are working toward this model, there really aren't any stunning model demonstrations just yet. Today, the closest you can come is a good online lab or simulator backed up by a well-designed set of labs or hands-on exercises. But as certifications become more performance-oriented and hands-on skills
HTTPS is supposed to be secure, right? Of course, nothing on the internet is ever truly safe. This week, a new vulnerability in OpenSSL was uncovered, allowing hackers to access websites secured with SSLv2. Although this security protocol is out of date, over 11 million websites—1/3 of all HTTPS secured servers—are at risk.
Plenty of websites that store sensitive information like credit card details are vulnerable to DROWN, which is an acronym for Decrypting RSA with Obsolete and Weakened eNcryption. Websites can be hacked in just minutes using this attack vector.
SSLv2 and SSLv3 have since been superseded by TLS (currently TLS 1.2) due to known weaknesses, including the possibility of man-in-the-middle attacks. TLS doesn't allow SSLv2 connections, but if your website's security certificate is also used anywhere else on the internet that does support SSLv2, you are still at risk. That means SMTP, IMAP, and POP e-mail servers, which are all very common, or specific instances of older HTTPS that may be tied to an application.
Check your website on the DROWN test site. A patch is already available, so be sure to patch your servers ASAP. As news of this vulnerability has spread quickly, hackers will be on the hunt for any vulnerable servers while the opportunity is still ripe.
From a Linux computer that has OpenSSL libraries installed, you can also run the following command, which instructs OpenSSL to connect to a server using the SSLv2 protocol. If you get an error as shown below, SSLv2 is disabled. If you get the certificate returned, SSLv2 is still enabled.
$ openssl s_client -connect hostname:443 -ssl2
CONNECTED(00000003)
7668:error:1407F0E5:SSL routines:SSL2_WRITE:ssl handshake failure:s2_pkt.c:428:
OpenSSL users should upgrade to version 1.0.2g or 1.0.1s. If SSLv2 is still enabled on your server, you'll want to disable it. On Windows (IIS/SChannel), you can do so by following these instructions:
1) Open the registry for editing.
2) Open or create this path: Hkey_Local_Machine\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server
3) Add "Enabled" as a DWORD value set to zero and reboot.
On Apache servers, open the httpd.conf file (this may be the ssl.conf file depending on your configuration). Use PuTTY SSH to log in, then open the file in a text editor, for example (the exact path varies by distribution):

vi /etc/httpd/conf/httpd.conf
You can also use WinSCP to open the file path with a text editor.
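For instance, on a Red Hat-style system the edit might begin like this (the file path is an assumption and varies by distribution):
$ sudo vi /etc/httpd/conf/httpd.conf   # on some systems the SSL directives live in /etc/httpd/conf.d/ssl.conf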
Edit the following directives and then restart Apache:
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
Note that SSLv3 has known weaknesses of its own (it is the protocol broken by the POODLE attack), so many administrators omit +SSLv3 and allow only TLS versions.
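After saving the file, validate the configuration and restart Apache, then re-run the SSLv2 connection test above to confirm the protocol is now refused:
$ sudo apachectl configtest
$ sudo apachectl restart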
Information on other server software, the DROWN vulnerability, and full technical papers are available at https://drownattack.com/
For decades, fusion was considered the stuff of science fiction, but in early 2014, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory made headlines when it formally announced that record amounts of energy had been created through fusion. Using a laser 100 times more powerful than any other laser on Earth, NIF scientists can create conditions on Earth that once existed only in the center of stars.
Fusion is just one of many experiments taking place at NIF, which was built to sustain three missions: nuclear stockpile stewardship, energy science, and “basic” sciences such as experimental astronomy and astrophysics.
Massive quantities of data are essential to the success of NIF’s experiments, from experimental design through to execution and analysis. After the experiments comes the laborious task of sifting through the data to extract meaningful results and refine future experiments. NIF needed a way to provide scientists at NIF and around the world with rapid, uninterrupted access to critical data.
With a private cloud based on NetApp® flash technology and the NetApp clustered Data ONTAP® operating system, NIF has eliminated planned and unplanned downtime, reclaiming up to 60 hours of downtime per year for science and preventing costly delays. The secure multi-tenancy features of clustered Data ONTAP allow the NIF team to partition and protect sensitive data belonging to such users as the U.S. Department of Energy, the world’s most prestigious universities and laboratories, and scientists in the pursuit of the Nobel Prize. NIF combined clustered Data ONTAP with the strategic use of NetApp flash technologies to reduce latency by 97%. As a result, data can be available to scientists within 15 minutes after an experiment—sometimes faster.
With NetApp technology, NIF has been able to:
- Deliver nonstop, 24/7 availability for critical data.
- Cut planned downtime by 60 hours per year to maximize facility availability.
- Reduce latency by 97%.
- Increase the virtual storage footprint by 20% without performance degradation.
- Enable secure multi-tenancy to protect sensitive data.
A group of cybersecurity and computer science experts backed by the Defense Advanced Research Projects Agency began a long-term encryption research effort last month that could one day prevent hackers from being able to reverse engineer and steal software.
The goal of DARPA’s Safeware program is to develop technology that can cryptographically obscure software code, making it impossible for cybercriminals or competitors to reverse-engineer stolen software. The researchers are about one month into a four-year effort, but officials said the fundamental nature of the research being conducted means practical capabilities and products are likely 10 to 20 years away.
Kurt Rohloff, an encryption expert and professor at the New Jersey Institute of Technology, is currently leading the DARPA-funded team, which includes two MIT professors, a University of California San Diego professor, and defense contractor Raytheon BBN Technologies.
“This is right now pie-in-the-sky research,” Rohloff explained, during an interview with MeriTalk. “Up until now, most have used basic hacks to do obscuration. The problem with the initial capability was that it was very slow,” he said, “It was very, very hard to get the programs to run in any kind of reasonable time.”
But a mathematical breakthrough that occurred two years ago showed it is theoretically possible to perform cryptographic obfuscation of programs. And it’s this new capability that Rohloff and his team are exploring.
The immediate research focus is on lattice-based cryptography, a leading form of post-quantum encryption that is resistant even to the massive power of a quantum computing device. “A big part of my research is focused on building an open-source library to develop and provide lattice crypto technology. We’re getting pretty close to our first release,” Rohloff said.
It’s precisely because of the long-term nature of the research that DARPA is involved, according to Rohloff. “There’s historically a 20-year lag between mathematical breakthroughs and actual consumer use of encryption technology,” he said, pointing as an example to the development of public key encryption in the 1970s and its eventual use in the 1990s.
But there are other potential uses of the technology in the network security realm. Criminal hackers typically begin their attacks by simply looking for vulnerabilities and weaknesses. One example that Rohloff uses is the exploitation of printer drivers.
“Printer drivers are often written by folks whose main business is to build hardware and don’t have a lot of experience in cybersecurity,” he said. “And these printer drivers are often installed at the last minute and aren’t updated that often. And because they provide a network interface, they are used as vectors of attack for adversaries to get into a network.”
But Rohloff’s research may one day provide an additional layer of security that could help prevent those vulnerabilities from even being discovered. “One of the possibilities for encrypted obfuscation technology is if an adversary or cybercriminal were to get their hands on a printer driver – which is pretty easy to do – they wouldn’t be able to decompile the printer driver to look at the inner workings of it to see how it can be used to get into a network,” he said.
Still, the effort remains “fundamental research,” Rohloff cautioned. “Being able to do this in a real-time environment is a very, very long-term vision. One of the challenges that we’re facing is that obfuscation technology provides a relatively different compute model. So we’re still trying to figure out what are the optimal ways of designing the algorithms so they can be deployed efficiently,” he said.
“There are some things that we think we can do really well, like signature matching. But there are some things that we think will probably be quite challenging,” he said. “There’s a lot of gray space. A big part of the research right now is trying to figure out what we can actually get running.”
The NASA Jet Propulsion Laboratory uses software developed by Vint Cerf in an approach called disruption-tolerant networking to transmit dozens of space images to and from a NASA science spacecraft located about 20 million miles from Earth. The first deep-space communications network modeled on the Internet, DTN is expected to be used on a variety of upcoming space missions.
It took 10 years, but legendary Vint Cerf, who is widely credited as one of the scientific founders of the Internet, has helped reach yet another Internet milestone: the first deep-space communications network modeled on the Internet.
NASA announced Nov. 18 that engineers from NASA's JPL (Jet Propulsion Laboratory) used software that Cerf helped develop over 10 years called DTN (disruption-tolerant networking) to transmit dozens of space images to and from a NASA science spacecraft located about 20 million miles from Earth.
"This is the first step in creating a totally new space communications capability, an interplanetary Internet," Adrian Hooke, team lead and manager of space-networking architecture, technology and standards at NASA, said in a statement.
NASA began a month-long series of DTN demonstrations in October, using the EPOXI spacecraft, which is on a two-year mission to Comet Hartley 2, as a Mars data-relay orbiter. The tests are the first in a series of planned demonstrations of DTN. NASA eventually hopes to use DTN on a variety of upcoming space missions.
According to NASA, the Interplanetary Internet could allow many new types of space missions such as complex assignments involving multiple landed, mobile and orbiting spacecraft. DTN could also allow reliable communications for astronauts on the surface of the moon.
"In space today, an operations team has to manually schedule each link and generate all the commands to specify which data to send, when to send it and where to send it," said Leigh Torgerson, manager of the DTN Experiment Operations Center at JPL. "With standardized DTN, this can all be done automatically."
DTN sends information using a method that differs from the normal Internet's TCP/IP communication suite, which Cerf co-designed. Unlike TCP/IP on Earth, DTN does not assume a continuous end-to-end connection. In a typical TCP/IP design, if a destination path can't be found, the data packets are not delivered.
With DTN, on the other hand, each network node keeps custody of the information as long as necessary until it can safely communicate with another node. This store-and-forward method means that information does not get lost when no immediate path to the destination exists. Eventually, the information is delivered to the end user.
"There are 10 nodes on this early interplanetary network," said Scott Burleigh, JPL's lead software engineer for the demonstrations. "One is the EPOXI spacecraft itself and the other nine, which are on the ground at JPL, simulate Mars landers, orbiters and ground mission-operations centers."
For the next round of testing, a NASA-wide demonstration using new DTN software loaded on board the International Space Station is scheduled to begin in the summer of 2009.
Some 432 million internet users have been found to frequently copy content illegally.
Research carried out by NetNames has claimed that the piracy of digital content, including music, films and software, costs about $80bn a year, although it is not clear whether that figure reflects lost sales, legal fees, or even jobs lost to piracy.
The research said that 432 million internet users frequently copy content illegally, with 327 million unique Internet users across North America, Europe, and Asia-Pacific openly seeking infringing digital content.
About 23.8% of the overall bandwidth across the three regions was used for infringing digital content.
The report added that 98% of data transferred by means of peer-to-peer networks is copyrighted, while 42% of software in use globally has been downloaded illegally.
According to the report, online digital piracy directly affects the software, gaming, film, TV, music and eBook industries, in addition to other types of online media services that depend on paid subscriptions or download fees.
de Vere N. (National Botanic Garden of Wales), Rich T.C.G. (National Museum Wales), Ford C.R. (National Botanic Garden of Wales), Trinder S.A. (National Botanic Garden of Wales), and 10 more authors. PLoS ONE, 2012.
We present the first national DNA barcode resource that covers the native flowering plants and conifers for the nation of Wales (1143 species). Using the plant DNA barcode markers rbcL and matK, we have assembled 97.7% coverage for rbcL, 90.2% for matK, and a dual-locus barcode for 89.7% of the native Welsh flora. We have sampled multiple individuals for each species, resulting in 3304 rbcL and 2419 matK sequences. The majority of our samples (85%) are from DNA extracted from herbarium specimens. Recoverability of DNA barcodes is lower using herbarium specimens, compared to freshly collected material, mostly due to lower amplification success, but this is balanced by the increased efficiency of sampling species that have already been collected, identified, and verified by taxonomic experts. The effectiveness of the DNA barcodes for identification (level of discrimination) is assessed using four approaches: the presence of a barcode gap (using pairwise and multiple alignments), formation of monophyletic groups using Neighbour-Joining trees, and sequence similarity in BLASTn searches. These approaches yield similar results, providing relative discrimination levels of 69.4 to 74.9% of all species and 98.6 to 99.8% of genera using both markers. Species discrimination can be further improved using spatially explicit sampling. Mean species discrimination using barcode gap analysis (with a multiple alignment) is 81.6% within 10×10 km squares and 93.3% for 2×2 km squares. Our database of DNA barcodes for Welsh native flowering plants and conifers represents the most complete coverage of any national flora, and offers a valuable platform for a wide range of applications that require accurate species identification. © 2012 de Vere et al.
Walker K.J. (Botanical Society of the British Isles), Pinches C.E. (Natural England). Biological Conservation, 2011.
In England Pulsatilla vulgaris is a threatened herb that declined from 130 to 33 sites between 1750 and the 1960s due to ploughing-up of calcareous grassland. We examined the subsequent fate of these populations using documentary evidence and field survey. Demographic trends were related to changes in grassland composition, structure and management and responses to increased above-ground competition (caused by reduced grazing) were simulated in a 10-year shading experiment. Since 1968 P. vulgaris has been lost from 16 sites and gradually declined on four others. However, the total population size increased by 258% due to the reintroduction of winter grazing on three sites. This produced a significantly shorter, more herb-rich sward, with a lower cover of Bromopsis erecta (c. 10%), than on sites where populations remained stable or declined. Experimental shading had a significant negative effect on plant survival and flowering performance. These results confirmed that reduced grazing is now one of the major threats to species dependent on short swards, especially on isolated sites where livestock farming is no longer economically viable. That many of these declines took place on nature reserves highlights the difficulties of managing isolated grasslands, and the urgent need to re-instate grazing on reserves supporting populations of threatened species in otherwise intensively managed lowland landscapes. © 2011 Elsevier Ltd.
Rapacciuolo G. (Imperial College London; UK Center for Ecology and Hydrology), Roy D.B. (UK Center for Ecology and Hydrology), Gillings S. (British Trust for Ornithology), and 3 more authors. PLoS ONE, 2012.
Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy to predict where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to recent time - due to their accuracy to predict large areas retained by species - but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records - as assessed using widespread metrics - need not indicate a model's ability to predict the future. © 2012 Rapacciuolo et al.
Redhead J.W. (UK Center for Ecology and Hydrology), Sheail J. (UK Center for Ecology and Hydrology), Bullock J.M. (UK Center for Ecology and Hydrology), Ferreruela A. (Forestal Catalana), and 2 more authors. Applied Vegetation Science, 2014.
Questions: What is the time-scale for natural regeneration of calcareous grassland? Is this time-scale the same for individual plant species, plant community composition and functional traits? Location: Defence Training Estate Salisbury Plain, Wiltshire, UK. Methods: We investigated the rate of natural regeneration of species-rich calcareous grassland across a 20 000-ha landscape. We combined a large-scale botanical survey with historic land-use data (6-150 yrs before present) and examined differences between grassland age classes in the occurrence of individual plant species, plant community composition and plant community functional traits. Results: Many species showed a significant association with grasslands over 100 yrs old. These included the majority of those defined elsewhere as calcareous grassland indicators, although some appeared on grasslands <10 yrs old. Plant community composition showed increasing similarity to the oldest grasslands with increased grassland age, with the exception of very recently ex-agricultural grasslands. Most functional traits showed clear trends with grassland age, with dispersal ability differing most strongly between recent and older grasslands, whilst soil fertility and pH tolerance were more influential over longer time-scales. Conclusions: Even in a well connected landscape, natural regeneration to a community resembling ancient calcareous grassland in terms of functional traits and plant community composition takes over a century, although changes at the level of individual species may occur much earlier. These findings confirm the uniqueness of ancient calcareous grassland. They also suggest that the targets of re-establishment efforts should be adjusted to account for the likely time-scale of full community re-assembly. We examined natural regeneration in an extensive calcareous grassland landscape over a 150 yrs timescale. Results showed that natural regeneration takes over a century when measured by functional traits and plant community composition, despite comparatively rapid changes in the occurrence of individual species. These findings emphasize the value of existing ancient calcareous grasslands and the challenges facing restoration efforts. © 2013 International Association for Vegetation Science.
Roy H.E. (UK Center for Ecology and Hydrology), Preston C.D. (UK Center for Ecology and Hydrology), Harrower C.A. (UK Center for Ecology and Hydrology), Rorke S.L. (UK Center for Ecology and Hydrology), and 11 more authors. Biological Invasions, 2014.
Information on non-native species (NNS) is often scattered among a multitude of sources, such as regional and national databases, peer-reviewed and grey literature, unpublished research projects, institutional datasets and with taxonomic experts. Here we report on the development of a database designed for the collation of information in Britain. The project involved working with volunteer experts to populate a database of NNS (hereafter called "the species register"). Each species occupies a row within the database with information on aspects of the species' biology such as environment (marine, freshwater, terrestrial etc.), functional type (predator, parasite etc.), habitats occupied in the invaded range (using EUNIS classification), invasion pathways, establishment status in Britain and impacts. The information is delivered through the Great Britain Non-Native Species Information Portal hosted by the Non-Native Species Secretariat. By the end of 2011 there were 1958 established NNS in Britain. There has been a dramatic increase over time in the rate of NNS arriving in Britain and those becoming established. The majority of established NNS are higher plants (1,376 species). Insects are the next most numerous group (344 species) followed by non-insect invertebrates (158 species), vertebrates (50 species), algae (24 species) and lower plants (6 species). Inventories of NNS are seen as an essential tool in the management of biological invasions. The use of such lists is diverse and far-reaching. However, the increasing number of new arrivals highlights both the dynamic nature of invasions and the importance of updating NNS inventories. © 2014 Springer International Publishing Switzerland.
NASA is taking its final steps toward launching astronauts from American soil again, and its first steps toward sending humans into deep space and to Mars.
The space agency announced that Boeing Co. and SpaceX, both U.S. companies, have landed highly sought-after Commercial Crew Transportation contracts to build the spacecraft that will ferry astronauts from Cape Canaveral Air Force Station in Florida to the International Space Station and back.
The contract covers a minimum of two missions and can be extended to cover up to six.
NASA has a deadline of launching astronauts from American soil by 2017, giving the two companies only a few years to finish their designs, build, test and certify their spacecraft.
"This sets the stage for the most ambitious and exiting chapter of human space flight," said NASA Administrator Charlie Bolden during a Tuesday afternoon press conference at Kennedy Space Center, which leads the agency's Commercial Crew Program. "The greatest nation on Earth should not be dependent on anyone to get into space. Today we're one step closer to launching astronauts from U.S. soil on American spacecraft and end our reliance on Russia."
Since the U.S. retired its fleet of space shuttles in 2011, NASA has depended on Russia to ferry its astronauts back and forth to the International Space Station, paying the Russian space agency about $70 million per astronaut.
That arrangement has proved to be increasingly sticky given the increased tensions between the two countries since Russia's aggressive moves toward Ukraine.
Kathy Lueders, deputy program manager for NASA's Commercial Crew Program, pointed out during the news conference today that both SpaceX and Boeing must meet five certification milestones, including flight readiness, all under NASA oversight. The companies also must conduct a flight test to the space station, carrying cargo and one astronaut; the spacecraft will dock and then return the crew safely home.
Both SpaceX and Boeing have had considerable experience working with NASA.
SpaceX, one of two private companies ferrying supplies, food and scientific experiments to the International Space Station, wants to be the company ferrying humans, as well.
The commercial space company, which aims to populate Mars one day, is scheduled to launch a resupply mission to the space station on Sept. 20 from Cape Canaveral.
As for Chicago-based Boeing, the leading aerospace company is developing a Commercial Crew Vehicle that can be launched on a variety of space vehicles for NASA. The company has been working under a separate $18 million NASA project to develop the systems and key technologies, including life support, avionics and landing systems, needed for a capsule-based commercial crew transport system that can ferry astronauts to the space station.
Boeing appears to be getting some help from a well-known name - Jeff Bezos, the founder and chief executive of Amazon.com Inc.
Bezos has been quietly working to set up Blue Origin LLC, a company focused on developing technologies for private and commercial space flight.
The company has been working with both NASA and Boeing on developing commercial spacecraft. According to The Wall Street Journal, Blue Origin is working with Boeing for the NASA contract to carry astronauts to and from the space station.
This story, "NASA Gives Key Space Taxi Contracts to SpaceX, Boeing," was originally published by Computerworld.
Debora Plunkett, head of the NSA’s Information Assurance Directorate, has confirmed what many security experts suspected to be true: no computer network can be considered completely and utterly impenetrable – not even that of the NSA.
“There’s no such thing as ‘secure’ any more,” she said to the attendees of a cyber security forum sponsored by the Atlantic and Government Executive media organizations, and confirmed that the NSA works under the assumption that various parts of their systems have already been compromised, and is adjusting its actions accordingly.
To preserve the availability and integrity of the systems it has a duty to protect, the NSA has turned to standardization, constant auditing, and the development and use of sensors placed at specific points inside the network, in the hope of detecting threats as soon as intruders trigger them, reports Reuters.
The problem with cyber defense – especially when it comes to attacks backed by governments and intelligence organizations – is that attackers are usually highly motivated and often very well funded.
Organizations can think of a hundred things to do to secure a system, but the attackers have time, money and incentive to keep at it as long as it takes to identify that crack in the armor that will allow them to get in.
As far as I can see, the main aspect of cyber defense that everyone should concentrate on is real-time detection of intrusions, which would allow defenders to actively fight off the attackers – and the NSA is possibly on the right track if the sensors it plans to deploy will allow it to do that.
Nominet and Oxford Flood Network are using Internet of Things technologies to help prevent and mitigate flood damage in the UK.
The UK domain name registry has also launched an interactive, online map highlighting how technology can be used to improve flood defences anywhere in the world.
The project is currently being trialled in the Oxford area and uses IoT sensors to create a localised, early-warning system in flood-prone areas. More than 30 IoT devices are being employed to monitor water levels in the streams, groundwater and basins of both the Thames and Cherwell rivers. The data is then processed and combined with information collected by the Environment Agency before being presented in map form.
Ben Ward, director of the Flood Network, believes that more insights are likely to follow as the technology is expanded.
“This map will show the water situation at street level and help people to make better decisions as when a flood occurs, we can complement existing models with live observations on the ground,” he explained. “We’ve been working with great volunteers across the city to make the Flood Network happen, and we’re keen to get more on board to get an even clearer picture of Oxford’s water situation. As the network grows and connects more places, it gathers data which can be fed back to the authorities to improve flood models, leading to better defences and emergency responses.”
In order to ensure reliable communication between the IoT devices, some of which are located in hard-to-reach places, Nominet utilises its TV white space (TVWS) database to identify which frequencies can be used to transfer information. In addition, because the IoT sensors make use of existing Internet standards, like DNS, they represent an easily scalable solution. This is particularly important not only for enhancements in the Oxford area, but also for bringing the technology to other parts of the UK.
Just last week, the devastation in Cumbria from ‘Storm Desmond’ provided a timely reminder of the importance of reliable monitoring systems when it comes to limiting flood damage.
Using the advanced computational resources at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, researchers uncovered a link between Alzheimer’s disease and cancer that may pave the way for better treatment options and new medicines. The two afflictions share a pathway in gene transcription, a process essential for cell reproduction and growth. The team, led by Houston Methodist Research Institute (HMRI), published its findings in December 2013 in the open access journal Scientific Reports by the Nature Publishing Group.
The scientists used TACC’s Lonestar and Stampede supercomputers to analyze and compare data from thousands of genes, looking for common cell signaling pathways shared by the two diseases. The Lonestar and Stampede systems are part of the Extreme Science and Engineering Discovery Environment (XSEDE), a virtual science environment that supports the interactive sharing of compute resources, data and expertise. Funding for the research comes from the T.T. and W.F. Chao Foundation, and by grants from the National Institutes of Health (NIH).
Compared with normal brain tissues, the microarray profiles for brain tissues from Alzheimer’s disease and GBM patients show significantly reversed signaling activities, highlighted by Gene Ontology terms (nodes) enriched with genes down-regulated in AD (shown as blue in the node face, with darker colors indicating larger fold changes) and up-regulated in GBM (red in the node boundary). Credit: Image and caption used with permission of Stephen Wong.
According to lead investigator Stephen Wong, a medical researcher and bioengineer with HMRI, the study is the first to establish a link at the molecular level between Alzheimer’s disease, the most prevalent type of neurodegenerative disease, and glioblastoma multiforme (GBM), the most aggressive type of brain cancer.
Earlier studies in 2012 and 2013 found an inverse association between Alzheimer’s disease, which is characterized by nerve cell death and tissue loss in the brain, and cancer, which occurs when abnormal cells grow and spread very fast. The data pointed to a common genetic pathway, but the details weren’t there.
“No one understands why this link is there, in a biological sense,” Wong said. “And that’s the reason we did this study. I think we are among the first to study it this way.”
The first step in finding the common genes expressed in each disease is to use DNA microarray to reveal the active and inactive genes shared between the two diseases.
The active genes are then mapped to known pathways through a process called pathway analysis. The group began with a working list of potential common pathways and narrowed this down through validation tests performed with cell cultures and live mice.
Knowing this pathway will be a huge step forward in the search for new therapies for these debilitating and deadly diseases.
The results of this study show that the ERK/MAPK cell signal pathway is up-regulated in brain cancer, while the Angiopoietin Signaling pathway is up-regulated in Alzheimer’s disease. In Alzheimer’s cells from mice, tumor suppression is mediated by the ERK-AKT-p21-cell cycle pathway and anti-angiogenesis pathway.
“Although GBM and Alzheimer’s both affect nearly 50% for aged population between 65 and 85 years of age, the body itself has very fine regulation at a very detailed level within the individual signaling pathways to make these two diseases exclude each other,” said study co-author Hong Zhao with the HMRI. “Different kinds of cells, like Alzheimer’s disease cells or cancer cells, have very fine and elaborated regulations on the general molecular signaling pathways, which depend on the cells’ response to the microenvironments.”
The study relied on microarray data covering 524 AD and 1,091 GBM subjects. The analysis included gene annotation, pathway expansion, enrichment analysis, and other details, which was enabled by TACC’s powerful supercomputers.
From this data set, the scientists identified more than 2,000 significant genes with 15 gene ontology terms marked as significantly changed.
“TACC helped us in accomplishing data analysis. We’re using TACC’s Lonestar and Stampede supercomputing clusters to do all this number crunching,” Wong said.
While this study mainly looked at “fairly manageable” data sets of microarray data, the next stage will require that the team analyze much more fine-grained and computationally costly gene sequencing data.
“The gene sequencing data size would easily be 1000-fold larger than the microarray data in the reported study,” Wong said, “which means the need to use TACC’s Lonestar and Stampede supercomputing clusters for number crunching is even more eminent.”
Device Identity, Authentication, and Data Encryption
Digital certificates play a crucial role in establishing identity and maintaining data and device integrity. PKI uses digital certificates to enable device-to-device or device-to-server identity authentication. Certificates also protect the data exchanged between devices. Digital certificates are the foundation of a network's IoT security, protecting its data, authenticating its devices, and creating trust for everyone interacting with the network. With the IoT, networks are expanding and becoming more powerful; maintaining data integrity and privacy has therefore never been more important.
A PKI-based certificate solution does not require tokens or passwords. Instead, digital certificates are used to solve the authentication challenge. PKI tackles the challenge by using digital certificates in addition to security protocols to encrypt and secure communications within an IoT network.
PKI for Authentication, Encryption, and Integrity
As the Internet of Things continues to grow and weave its way into more and more industries, networks are expanding and becoming more powerful. Maintaining privacy and the integrity of data must be at the forefront of all IoT projects.
Certificates for Device Identity
The best security practices require strong security credentials in order to trust devices on networks and in online applications. Before secure communications can take place between connected devices (e.g., a device and a server, or a device and a mobile phone), proper authentication must take place.
Secure data exchange or a verified software update can only begin after both devices have been identified and authenticated. With PKI, certificates are installed (embedded) on connected devices and used to securely authenticate one device to another, ensure that only trusted devices are allowed to connect to a nearby server, and enable trusted communications between devices to take place.
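As a minimal sketch of how such a per-device identity can be created, the OpenSSL commands below generate a key pair and certificate signing request for a device and sign it with an organization's own issuing CA. The device name (device-001) and file names (device001.key, ca.crt, ca.key) are hypothetical placeholders, and a production IoT deployment would typically rely on a managed PKI service rather than a hand-operated CA:
$ openssl req -new -newkey rsa:2048 -nodes -keyout device001.key -out device001.csr -subj "/CN=device-001"
$ openssl x509 -req -in device001.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out device001.crt -days 365
The resulting device001.crt, embedded on the device alongside its private key, is the credential the device presents when it authenticates to a server or to another device.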
DigiCert's PKI solution customizes the certificate deployment process to fit the needs of each individual IoT project and can scale from a few thousand to millions of digital certificates to support IoT deployments of all sizes.
Certificates for Protecting Data in Transit
The Internet of Things is composed of interconnected networks of diverse systems allowing a variety of communications—even some that are unauthorized—thereby creating a privacy issue. Because these communications facilitate powerful services, secure communication capabilities become a critical matter.
IoT brings the benefit of being able to analyze information in real time, but that same benefit can expose systems to risks such as eavesdropping on sensitive messages and/or the sending of fraudulent messages.
DigiCert's PKI solution is ideal for safeguarding data exchange (sensitive and non-sensitive) in the IoT. DigiCert certificates guarantee that the highest level of encryption is being used to secure messages and ensure that exchanged messages are not intercepted, modified, or replaced with false messages.
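One way to see certificate-based authentication and encryption working together is a mutual-TLS handshake, which can be exercised from the command line. This is only an illustrative sketch; the host name and the certificate and key files are placeholders carried over from the earlier example:
$ openssl s_client -connect iot-gateway.example.com:8443 -cert device001.crt -key device001.key -CAfile ca.crt
If the server requires client certificates, the handshake completes only when the device presents a certificate issued by a CA the server trusts, after which all messages on the channel are encrypted.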
CertCentral®: Manage Thousands of Certificates Efficiently
To maintain IoT device and system security, simply issuing and embedding/installing digital certificates is not enough. Ongoing tracking of each digital certificate and timely renewal before it expires are critical to comprehensive IoT device and system security. DigiCert's CertCentral® platform is designed to simplify certificate management, providing admins with the tools to track their digital certificates, whether hundreds, thousands, or millions, throughout the certificate and device lifecycle, from issuance and reissuance to expiration.
Talk to an IoT PKI Expert
If you have specific questions about our PKI solution for securing IoT devices, please enter your information in the form below, and an IoT security expert will contact you for a personal consultation.
Sandia Develops Cognitive Machines
ALBUQUERQUE, N.M. -- A new "smart" machine that could fundamentally change how people interact with computers is being tested at the Department of Energy's Sandia National Laboratories.
For the past five years, a team led by Sandia cognitive psychologist Chris Forsythe has been developing cognitive machines that accurately infer user intent, remember experiences with users, and allow users to call on simulated experts to help them analyze situations and make decisions.
The initial goal of the work was to create a "synthetic human" -- a software program/computer that could think like a person.
"The benefits from this effort are expected to include augmenting human effectiveness and embedding these cognitive models into systems like robots and vehicles for better human-hardware interactions," said John Wagner, manager of Sandia's Computational Initiatives Department. "We expect to model, simulate and analyze humans and societies of humans for Department of Energy, military and national security applications."
Massive computers that could compute large amounts of data were available, said Forsythe. "But software that could realistically model how people think and make decisions was missing," he said.
There were two significant problems with previous modeling software. First, the software did not relate to how people actually make decisions -- it followed logical processes, which people don't necessarily do. People make decisions based, in part, on experiences and associative knowledge. Second, software models of human cognition did not take into account factors such as emotions, stress and fatigue.
In an early project, Forsythe developed the framework for a computer program that took both factors into account.
Follow-up projects developed methodologies that allowed the knowledge of a specific expert to be captured in computer models and provided synthetic humans with episodic memory -- memory of experiences -- so they might apply their knowledge of specific experiences to solving problems in a manner that closely parallels what people do.
"Systems using this technology are tailored to a specific user, including the user's unique knowledge and understanding of the task," said Forsythe.
Work on cognitive machines started in 2002 with a contract from the Defense Advanced Research Projects Agency (DARPA) to develop a machine that can infer an operator's cognitive processes. This capability provides the potential for systems that augment the cognitive capacities of an operator through "discrepancy detection."
In discrepancy detection, the machine uses an operator's cognitive model to monitor its own state, detecting discrepancies between the machine's state and the operator's behavior.
Early this year, work began on Sandia's Next Generation Intelligent Systems Grand Challenge project.
"The goal of this Grand Challenge is to significantly improve the human capability to understand and solve national security problems, given the exponential growth of information and very complex environments," said Larry Ellis, the principal investigator.
"It's entirely possible," said Sandia's Forsythe, "that these cognitive machines could be incorporated into most computer systems produced within 10 years." -- Sandia National Laboratories
IBM Delivers World's Most Powerful Linux Supercomputer
TOKYO -- Japan's largest national research organization announced at the end of July that it ordered an IBM eServer Linux supercomputer that when completed, will deliver more than 11 trillion calculations per second, making it the world's most powerful Linux-based supercomputer.
It is expected to be more powerful than the Linux cluster at Lawrence Livermore National Laboratory in Livermore, Calif., which is currently ranked the third most powerful supercomputer in the world, according to the independent TOP500 List of Supercomputers.
The plan is to integrate the supercomputer with other non-Linux systems to form a massive, distributed computing grid -- enabling collaboration between corporations, academia and government -- to support various research including grid technologies, life sciences, bioinformatics and nanotechnology.
The system -- with a total of 2,636 processors -- will include 1,058 eServer 325 systems. The powerful new supercomputer will help Japan's National Institute of Advanced Industrial Science and Technology (AIST), known worldwide for its leading research in grid technologies, to accelerate research using grid technology for a wide variety of projects.
These projects include the search for new materials to be used for superconductors and fuel cell batteries, and the search for new compounds that could be the basis for a cure for various malignant diseases.
Each new IBM eServer 325 system delivered to AIST contains two powerful AMD Opteron processors in a 1.75" rack mounted form factor. AIST will run SuSE Linux Enterprise Server 8 on the supercomputer. The grid will incorporate the Globus Toolkit 3.0 and the Open Grid Services Infrastructure.
The grid is also planned to link heterogeneous and geographically dispersed computing resources, including servers, storage and data, allowing researchers to collaborate. The eServer 325 systems are designed to run either Linux or Windows operating systems, and the 325 can run both 32-bit and 64-bit applications simultaneously. -- IBM
FDA Approves Stair-Climbing Wheelchair
WASHINGTON, D.C. -- The U.S. Food and Drug Administration approved a battery-powered wheelchair in August that relies on a computerized system of sensors, gyroscopes and electric motors, which allow indoor and outdoor use on stairs, and on level and uneven surfaces.
The FDA expedited review of the product -- the Independence iBOT 3000 Mobility System -- because it has the potential to benefit people with disabilities. An estimated 2 million people in the United States use wheelchairs.
FDA Commissioner Mark B. McClellan said, "It can help improve the quality of life of many people who use wheelchairs by enabling them to manage stairs, reach high shelves and hold eye-level conversations."
Four-wheel drive enables users to traverse rough terrain, travel over gravel or sand, go up slopes and climb 4-inch curbs. For use on stairs, two sets of drive wheels rotate up and over each other to climb up or down, one step at a time. Because of its unique balancing mechanism, the wheelchair remains stable and the seat stays level during all maneuvers.
The user can push a button to operate the wheelchair in several different ways.
To climb up stairs, the occupant backs up to the first step, holds onto the stair railing, shifts his weight over the rear wheels, which causes the chair to begin rotation of the front wheels over the rear wheels and then down to the first step. As the user shifts his weight backward and forward, the chair senses this and adjusts the wheel position to keep his center of gravity under the wheels. The chair ascends stairs backward and descends forward (the user always faces down the stairs).
To reach high shelves or hold eye-level conversations with people who are standing, the occupant shifts his weight over the back wheels, so the iBOT lifts one pair of wheels off the ground and balances on the remaining pair. The user then presses a button to lift the seat higher.
People must weigh no more than 250 pounds and must have the use of at least one arm to operate the chair. They also must have good judgment skills to discern which obstacles, slopes and stairs to avoid in order to prevent serious falls. Users must be capable of some exertion when climbing stairs in the wheelchair by themselves. However, for users who cannot tolerate such exertion, there is a feature that allows someone else to hold onto and tilt the chair's back to allow it to climb up or down stairs.
Physicians and other health professionals must undergo special training to prescribe the iBOT. The chair must be calibrated to the patient's weight; and patients must be trained in its use and pass physical, cognitive and perception tests to prove they can operate it safely.
The FDA approved the wheelchair based on a review of extensive bench testing of the product conducted by the manufacturer -- Independence Technology of Warren, N.J. -- and on a clinical study of its safety and effectiveness. Approval was also based on recommendation of the Orthopedic and Rehabilitation Devices Panel of the FDA's Medical Devices Advisory Committee.
The firm performed a wide range of tests on the chair, including mechanical, electrical, performance, environmental and software testing.
In the pivotal clinical study, 18 patients -- mostly people with spinal cord injury -- were trained to use the iBOT. They test-drove for two weeks to allow researchers to check maneuverability, falls and other problems compared to those encountered with their regular wheelchairs. They also tested it going up hills, over bumpy sidewalks, crossing curbs, reaching shelves and climbing stairs. Twelve patients could climb up and down stairs alone with the iBOT and the other six patients used an assistant. When these same 18 patients used their regular wheelchairs, one patient could "bump" down stairs, but no one could go up just one step.
During the pivotal study, three patients fell out of the iBOT and two fell out of their own wheelchairs. None of the falls occurred on stairs. Two patients experienced bruises while using the iBOT.
As a condition for approval, the manufacturer agreed to provide periodic reports to the FDA to document the chair's usage, functioning and any patient injuries. The manufacturer also said the iBOT will be available throughout the next few months in strategically located clinics across the country. -- The U.S. Food and Drug Administration
Using IR for night time surveillance is a popular choice. The added non visible light can improve image quality when ambient street or visible lighting is absent. However, adding the right amount of IR light can be challenging. Add too much and the scene is over exposed. Add too little and the scene is still too dark. Measuring IR illumination is a useful way to get it just right and to identify invisible problems. In this report, we explain how to do so.
The Typical Approach
Typically, when you want to choose IR illumination, you look at a manufacturer's specifications for maximum distance and beamwidth. The maximum distance indicates how far you can 'see' from the illuminator, while the maximum beamwidth signifies how wide a view you get (e.g., 10 degrees, 30 degrees, etc.)
The challenge is that those numbers, like camera manufacturers' minimum illumination ratings, are hard to use and compare. They are not standardized or vetted. The specified levels may not match actual performance, and comparing different manufacturers based on specs is unlikely to be accurate.
Day Time Approach
During the day or with conventional camera setups, if you wanted to measure light levels, you would use a lux meter (and you should use a lux meter - see our tutorial on using lux meters). Knowing the exact (visible) light levels can really help in determining how well a camera will work in a given scene.
Unfortunately, 'regular' lux meters are not designed to measure IR illumination. If you have an IR illuminator on in the dark and hold up your 'regular' lux meter to it, it will likely register 0 - an unhelpful and misleading result.
Specialized Meters to Measure IR
Specialized meters do exist that measure IR light. The big downside is that these options are much more expensive (thousands of dollars) and scientifically oriented.
We did find one that was relatively inexpensive (~$500) and field usable - the Coherent LaserCheck. We bought one and did a test. Here is a sample image of us demonstrating it.
Inside, we provide two video screencasts showing how to use this meter in action and how to best optimize your use of IR illumination.
When the old Washington Convention Center in downtown Washington, D.C., was demolished, researchers from the National Institute of Standards and Technology (NIST) conducted experiments aimed at improving emergency radio communications for first responders.
First responders who rely on radio communications often lose signals in environments such as basements or elevator shafts of buildings. It also is very difficult to detect radio signals through the dense rubble of a collapsed building.
NIST is using "laboratories" like the convention center for its experiments to investigate new tools for improving communications in disaster environments, such as methods for detecting weak radio signals and using improvised antennas made of metal found in debris to boost signals. -- The National Institute of Standards and Technology
A consortium of companies backing new storage technology created the HVD Alliance in early February to spread the word about holographic versatile discs (HVD).
Holographic recording technology records data on discs in the form of laser interference fringes, enabling discs the same size as today's DVDs to store more than one terabyte of data, 200 times the capacity of a single layer DVD, with a transfer rate of more than 1 Gbps, 40 times the speed of DVD. -- Optware
Can You Feel Me Calling?
Until recently, visual cues and ring tones were the only ways to interact with cell phones. Immersion Corp.'s TouchSense system will allow cell phone users to operate the devices with their sense of touch.
TouchSense controls a phone's vibration motor through software and custom actuator technology to deliver varying strengths and frequencies of vibrations.
A person in a quiet meeting could discern who is calling through distinct vibrations. A person could also use different vibrations assigned to particular functions of the phone to scroll through menus, select features or connect a call. -- Immersion
New E-Waste Bill
The National Computer Recycling Act, recently introduced in Congress, would direct the EPA to develop and implement a national electronic waste (e-waste) recycling program.
The average life span of a computer has shrunk from five years to two, and approximately 50 million computers are discarded every year. Without a national law, states are creating a patchwork of different laws from coast to coast, making it difficult and expensive for manufacturers and retailers to adhere to 50 different laws, according to the measure's sponsors.
The bill -- introduced by Reps. Mike Thompson, D-Calif., and Louise Slaughter, D-N.Y. -- directs the EPA to develop a grant program to encourage municipalities, individuals and organizations to start e-waste recycling programs; conduct a comprehensive e-waste study; and assess a fee of as much as $10 on new computers to fund the grant program. Manufacturers and retailers with existing recycling programs would be exempt from the fee. -- Rep. Mike Thompson
DARPA Goes AI
The U.S. military wants smarter soldiers. It wants smarter machines too. The Defense Advanced Research Projects Agency (DARPA) is funding research at Rensselaer Polytechnic Institute to investigate key issues associated with learning and reasoning, including developing algorithms and representations for machines with artificial intelligence.
The project is called "Poised-for-Learning." The Poised-for-Learning intelligent machine is in the design phase and will be based on Multi-Agent Reasoning and Mental Metalogic, a machine reasoning system based on Athena, a system developed by a Rensselaer scientist in previous work. -- The Rensselaer Polytechnic Institute
Government Open Source
Red Hat, one of the dominant providers of enterprise open source solutions, has formally hung out a shingle for its new government business unit.
The company isn't a stranger to the market -- its U.S. government customers include the National Oceanic and Atmospheric Administration, the U.S. General Services Administration, the Federal Aviation Administration, the Department of Homeland Security/Federal Emergency Management Agency, the U.S. Patent and Trademark Office and the Defense Department.
The next major release of Red Hat Enterprise Linux is set to contain the first enterprise-ready implementation of Security-Enhanced Linux (SELinux), developed through Red Hat's collaboration with the open source community and the National Security Agency. -- Red Hat
According to Evans Data Corp., 92 percent of Linux systems have never been infected by a virus.
More than half of U.S. consumers said they are not satisfied with their mobile telephone service and give the lowest ratings to cellular providers involved in large mergers, according to a Consumer Reports survey. Poor call quality, difficulty in comparing service plans and poor customer service were problems cited in the magazine's September survey of 39,000 people in 17 major cities. -- Reuters
Cell Phone Frenzy
The number of subscribers to cellular phone services in Russia increased by 8.3 million in December 2004, giving the country a total of 73.9 million subscribers at the start of January, reported AC&M Consulting, a market research company. The December 2004 increase was almost twice the figure for November 2004 and triple that for December 2003.
Performance: It’s All About Balance…
April 2, 2013
Storage For DBAs: Everyone wants their stuff to go faster. Whether it’s your laptop, tablet, phone, database or application… performance is one of the most desirable characteristics of any system. If your system isn’t fast enough, you start dreaming of more. Maybe you try and tune what you already have, or maybe you upgrade to something better: you buy a phone with a faster processor, or stick an SSD in your laptop… or uninstall Windows 🙂
When it comes to databases, I often find people considering the same set of options for boosting performance (usually in this order): half-heartedly tuning the database, adding more DRAM, *properly* tuning the database, adding or upgrading CPUs, then finally tuning the application. It amazes me how much time, money and effort are often spent trying to avoid getting the application developers to write their code properly, but that’s a subject for another blog.
The point of this blog is the following statement: to achieve the best performance on any system it is important that all of its resources are balanced.
Let’s think about the basic resources that comprise a computer system such as a database server:
- CPU – the processor, i.e. the thing that actually does the work. Every process pretty much exists to take some input, get on CPU, perform some calculations and produce some output. It’s no exaggeration to call this the heart of the system.
- Network – communications with the outside world, whether it be the users, the application servers or other databases.
- Memory – Dynamic Random Access Memory (DRAM) provides a store for data.
- Storage – for example disk or flash; provides a store for data.
You’ll notice I’ve been a bit disingenuous by describing Memory and Storage the same way, but I want to make a point: both Memory and Storage are there to store data. Why have two different resources for what is essentially the same purpose?
The answer, which you obviously already know, is that DRAM is volatile (i.e. continuous power is required to maintain the stored information, otherwise it is lost) while Storage is persistent (i.e. the stored information remains in place until it is actively changed or removed).
When you think about it like that, the Storage resource has a big advantage over the Memory resource, because the data you are storing is safe from unexpected power loss. So why do we have the DRAM? What does it bring to the party? And why do I keep asking you questions you already know the answer to?
Ok I’ll get to the point, which is this: DRAM is used to drive up CPU utilisation.
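As a rough illustration (a minimal Python sketch of my own, not anything from the original argument; the 10 ns DRAM and 5 ms disk figures are the round numbers used below), look at what happens to average access time as the DRAM hit ratio climbs:

```python
# Illustrative numbers only: the 10 ns DRAM and 5 ms disk latencies are the
# round figures used later in this post, not measurements from a real system.
DRAM_NS = 10
DISK_NS = 5_000_000  # 5 milliseconds

def effective_latency_ns(dram_hit_ratio):
    """Average access latency when this fraction of requests is served from
    DRAM and the remainder falls through to disk."""
    return dram_hit_ratio * DRAM_NS + (1.0 - dram_hit_ratio) * DISK_NS

for hit in (0.90, 0.99, 0.999):
    print(f"{hit:.1%} hit ratio -> {effective_latency_ns(hit):,.0f} ns average")
```

Even at a 99.9% hit ratio, the occasional trip to disk drags the average out to around 5,000 ns, five hundred times slower than DRAM itself. That is why we spend money keeping hot data in memory: it keeps the CPU busy.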
The Long Walk
The CPU is interacting with the Memory and Storage resources by sending or requesting data. Each request takes a certain amount of time – and that time can vary depending on factors such as the amount of data and whether the resource is busy. But let’s ignore all that for now and just consider the minimum possible time taken to send or receive that data: the latency. CPUs have clock cycles, which you can consider a metronome keeping the beat to which everything else must dance. That’s a gross simplification which may make some people wince (read here if you want to know why), but I’m going to stick with it for the sake of clarity.
Let’s consider a 2GHz processor – by no means the fastest available clock speed out there today. The 2GHz indicates that the clock cycle is oscillating 2 billion times per second. That means one oscillation every half a nanosecond, which is such a tiny amount of time that we can’t really comprehend it, so instead I’m going to translate it into the act of walking, where each single pace is a clock cycle. With each step taken, an instruction can be executed, so:
One CPU Cycle = Walking 1 Pace
The current generation of DRAM is DDR3 DRAM, which has latencies of around 10 nanoseconds. So now, while walking along, if you want to access data in DRAM you need to incur a penalty of 20 paces where you potentially cannot do anything else.
Accessing DRAM = Walking 20 Paces
Now let’s consider storage – and in particular, our old friend the disk drive. I frequently see horrible latency problems with disk arrays (I guess it goes with the job) but I’ll be kind here and choose a latency of 5 milliseconds, which on a relatively busy system wouldn’t be too bad. 5 milliseconds is of course 5 million nanoseconds, which in our analogy is 10 million steps. According to the American College of Sports Medicine there are an average of 2,000 steps in one mile. So now, walking along and making an I/O request to disk incurs a penalty of 10,000,000 steps or 5,000 miles. Or, to put it another way:
Accessing Disk = Walking from London to San Francisco
Take a minute to consider the impact. Previously you were able to execute an instruction every step, but now you need to walk a fifth of the way around the planet before you can continue working. That’s going to impact your ability to get stuff done.
Maybe you think 5 milliseconds is high for disk latency (or maybe you think anyone walking from London to San Francisco might face some ocean-based issues) but you can see that the numbers easily translate: every millisecond of latency is equivalent to walking one thousand miles.
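If you want to check the arithmetic yourself, here is the analogy reduced to a few lines of Python (same figures as above; nothing here is new data):

```python
# The walking analogy as arithmetic. The 2 GHz clock, 10 ns DRAM and 5 ms disk
# figures are from the text above; 2,000 steps per mile is the ACSM average.
CYCLE_NS = 0.5          # one pace: a 2 GHz clock ticks every 0.5 ns
STEPS_PER_MILE = 2_000

def paces(latency_ns):
    return latency_ns / CYCLE_NS

def miles(latency_ns):
    return paces(latency_ns) / STEPS_PER_MILE

for name, ns in (("DRAM (10 ns)", 10), ("disk (5 ms)", 5_000_000)):
    print(f"{name}: {paces(ns):,.0f} paces = {miles(ns):,.2f} miles")
```

Plug in your own storage array’s latency and see how far your CPU has to walk.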
Don’t forget what that means back in the real world: it translates to your processor sitting there not doing anything because it’s waiting on I/O. Increasing the speed of that processor only increases the amount of work it’s unable to do during that wait time. If you didn’t have DRAM as a “temporary” store for data, how would you ever manage to do any work? No wonder In-Memory technologies are so popular these days.
Moore’s Law Isn’t Helping
It’s often stated or implied that Moore’s Law brings us faster processors every couple of years, when in fact the original statement was about doubling the number of transistors on an integrated circuit. But the underlying point remains that processor performance is increasing all the time. Looking at the four resources we outlined above, you could say that DRAM technologies are progressing in a similar way, while network protocols are getting faster (10Gb Ethernet is commonplace, Infiniband is increasingly prevalent and 40Gb or 100Gb Ethernet is not far away).
On the other hand, disk performance has been stationary for years. According to this manual from Seagate, the performance of CPUs increased 2,000,000x between 1987 and 2004 yet the performance of hard disk drives only increased 11x. That’s hardly surprising – how many years ago did the 15k RPM disk drive come out? We’re still waiting for something faster but the manufacturers have hit the limits of physics. The idea of helium-filled drives has been floated (sorry, couldn’t resist) and indeed such drives could be on the shelves soon, but if you ask me the whole concept is so up-in-the-air (sorry, I really can’t help it) that I have serious doubts whether it will actually take off (ok I promise that’s the last one).
The consequence of Moore’s Law is that the imbalance between disk storage and the other resources such as CPU is getting worse all the time. If you have performance issues caused by this imbalance – and then move to a newer, faster server with more processing power… the imbalance will only get worse.
The Silicon Data Centre
Disk, as a consequence of its mechanical nature, cannot keep up with silicon as the number of transistors on a processor doubles every two years. Well as the saying goes, if you can’t beat them, join them. So why not put your persistent data store on silicon?
This is the basis of the argument for moving to flash memory: it’s silicon-based. The actual technology most vendors are using is NAND flash but that’s not massively important and technologies will come and go. The important point is to get storage onto the graph of Moore’s Law. Going back to the walking analogy above, an I/O to flash memory takes in the region of 200 microseconds, i.e. 200 thousand nanoseconds. This is more than an order of magnitude faster than disk but still represents walking 400,000 paces or 200 miles. But unlike disk, the performance is getting better. And by moving storage to silicon we also pick up many other benefits such as reduced power consumption, space and cooling requirements. And most importantly we restore some balance to your server infrastructure.
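As another hedged back-of-envelope, here is what those latencies mean for a single thread doing strictly serial I/O, one request at a time with no overlap (which real systems would of course exploit):

```python
# Back-of-envelope only: a single thread issuing one I/O at a time, waiting
# for each to complete before starting the next. Real systems overlap I/Os,
# so treat these figures as floors, not throughput limits.
latencies_s = {
    "disk (5 ms)": 5e-3,
    "flash (200 microseconds)": 200e-6,
    "DRAM (10 ns)": 10e-9,
}

for device, latency in latencies_s.items():
    print(f"{device}: ~{1 / latency:,.0f} strictly serial operations/second")
```

Which is the point: flash doesn’t just shave the walk from 5,000 miles to 200, it puts persistent storage back within shouting distance of the resources that kept improving.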
Think about it. You have to admit that, as an argument, it’s pretty well balanced.
Footnote: Yes I know that by representing CPU clock cycles as instructions I am contributing to the Megahertz Myth. Sorry about that. Also, I strongly advise reading this article in the NoCOUG journal which makes some great points about DRAM and CPU utilisation. My favourite quote is, “Idle processors do not speed up database processing!” which is so obvious and yet so often overlooked.
Attack In Progress
By Kim S. Nash | Posted 2006-04-06
In moments, hackers with bot code can break into vulnerable computers, turn them into zombies, steal information and spread the infection. While you scramble to secure your network--and the vital data on it--botmasters sell access to your hacked machines.
What happened at Auburn University serves as a good example of how bot and botnet attacks occur. In fact, according to FBI Supervisory Special Agent Kenneth McGuire, the Auburn incident had "all the earmarks of [Ancheta's] type of activity." The Auburn bots, which were based on code called Rbot/Rxbot, sought out the specific LSASS weakness in the Windows operating system. In addition, Auburn's records of the attack show malware coming from a Web site with the address resili3nt.superihost.com. Ancheta, according to the government, used the hacker name Resili3nt and several variations: resjames, resilient24, Resilient, ResilienT, ir Resilient.
Ancheta was never under suspicion for the Auburn attack—the university didn't report the attack and the FBI did not investigate. Anyone could have launched it. But no matter the source, like all bot attacks the raid against Auburn was swift.
It arrived through Internet Relay Chat, a worldwide network of online channels that lets people exchange text messages and meet in chat rooms, either publicly or privately. IRC is the forerunner of today's instant messaging applications and has been the source of other hack attacks.
Within seconds of penetrating the university, malicious code on the invaded PC contacted an IRC channel controlled by a hacker and downloaded a server that could receive software through the File Transfer Protocol, or FTP, which transfers data and software over the Internet. Among those files was a scanner—a software probe—to find other machines to infect.
On a command from IRC, the infected PC began scanning computers on Auburn's network, looking for other computers to infect through Microsoft's LSASS bug. It sent packets of data, requests to connect, over Port 445, which Microsoft reserves as a pathway in the operating system for networked Windows PCs to share files, printers and other resources—"like going down the street knocking on doors," Wilson says. He had already closed outside access to Port 445 on Auburn's firewall after an earlier attack on that port. But with the malicious code inside the network, the firewall was helpless to stop the scans. Within minutes, 47 PCs were infected.
Wilson was tipped to the attack by Auburn's open-source intrusion detection system, Snort, which picked up the flood of data traffic on Port 445 and sent an e-mail. By examining one infected PC, he could see the attack's pattern—the same malware (FTP server, remote administration software, scanner and a chat client) kept showing up in the same Windows directory on each PC. He and his team scrambled to get the Internet Protocol addresses of the infected machines, find the network switch they were connected to and disconnect them from the campus network. But the infection was spreading so quickly that they couldn't quarantine machines fast enough.
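Snort's actual rules and preprocessors are far richer than anything shown here, but the shape of the alert that tipped Wilson off can be sketched. The following Python is a hypothetical simplification (the window and threshold values are invented for illustration): flag any source address that connects to TCP port 445 on many distinct hosts within a short window.

```python
# Hypothetical, heavily simplified sketch, not Auburn's real Snort rules:
# flag any source that touches TCP port 445 on many distinct hosts within
# a short window -- the scanning pattern described above.
from collections import defaultdict

WINDOW_SECS = 60
DISTINCT_TARGET_THRESHOLD = 50   # assumed threshold, purely illustrative

def scan_suspects(flows):
    """flows: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples,
    assumed sorted by timestamp. Returns the set of suspected scanners."""
    recent = defaultdict(list)   # src_ip -> [(timestamp, dst_ip), ...]
    suspects = set()
    for ts, src, dst, port in flows:
        if port != 445:
            continue
        # Keep only this source's connections inside the sliding window.
        window = [(t, d) for t, d in recent[src] if ts - t <= WINDOW_SECS]
        window.append((ts, dst))
        recent[src] = window
        # Count distinct destinations; a scanner fans out to many hosts.
        if len({d for _, d in window}) >= DISTINCT_TARGET_THRESHOLD:
            suspects.add(src)
    return suspects
```

Fed per-connection records from the network, a heuristic like this would light up the scanning PCs within the first minute, which is essentially the behavior Wilson describes Snort catching.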
This attack also had a twist, Wilson noticed. He saw that the chat client commandeered the buddy list from the student's instant messaging program and invited those friends to click on the link, too. Whenever the code penetrated another PC, the cycle began again.
Reviewing Snort's archived logs of the attack, Wilson remembers feeling frightened. "This was about the worst attack I'd seen," he says. "This was different from a worm or a virus. It was a live channel of communication going back and forth."
As Auburn's PCs were taken over, they sent their Internet Protocol addresses back through IRC so the various botmasters running the attack would know how many and which machines they controlled.
Alerted by the IRC messages, bots belonging to other IRC channels immediately raced to add their own malware to those freshly infected PCs using FTP, Wilson says, as if playing some life-size computer game. Messages then flew over the chat system as individual hackers took credit for penetrating PCs at specific IP addresses.
Hackers swarmed and bragged "like a bunch of schoolkids on a playground," Wilson says. He stared as the university's PCs communicated with IRC channels all over the world—from Brazil to Greece and throughout the U.S.
Several hours and 7,000 messages later, the attack ended as suddenly as it began, when the last hacker typed, "#Exit."
The invasion was over. The network traffic had died down. But Wilson was left with a hostile army of bots that he now had to subdue.