text stringlengths 234 589k | id stringlengths 47 47 | dump stringclasses 62 values | url stringlengths 16 734 | date stringlengths 20 20 ⌀ | file_path stringlengths 109 155 | language stringclasses 1 value | language_score float64 0.65 1 | token_count int64 57 124k | score float64 2.52 4.91 | int_score int64 3 5 |
|---|---|---|---|---|---|---|---|---|---|---|
Definition: A heuristic that moves the target of a search to the root of the search tree so it is found faster next time.
Aggregate parent (I am a part of or used in ...)
See also move-to-front heuristic.
Note: This technique speeds up search performance only if the target item is likely to be searched for again soon.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 February 2004.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "move-to-root heuristic", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 February 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/movetoroot.html | <urn:uuid:5e9e1c30-4a58-4013-8060-8d7ef14200cf> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/movetoroot.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00283-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.832953 | 196 | 2.734375 | 3 |
Whether we’re ready for it or not, the Internet of Things is coming, and soon. You might see all sorts of connected devices on the market today that you would never think to connect to the Internet, but it’s our responsibility to inform you that these new devices could potentially put not just your business infrastructure at risk, but even your own life.
Security experts have long discussed the repercussions that the Internet of Things will have on the world of cyber security. It’s been predicted by Gartner that an average of 5.5 million “things” are added to the Internet of Things every day. This could include anything that connects to the Internet, but usually only refers to consumer goods that wouldn’t normally have any sort of wireless network connection built into them. By the end of this year, there will be approximately 6.4 billion IoT devices on the market.
The real problem here is that these numbers continue to increase by the day, and if the IoT’s growth is any indication, it’s not slowing down anytime soon. There were 3.8 billion in 2014, and 5 billion in 2015, so it’s not a stretch to suggest that the number of “things” connected to the Internet of Things will exceed 20 billion by the time 2020 runs around. Many researchers believe that the first major IoT data breach will happen sometime within the next few years.
At first glance, it might seem like many IoT devices are of little consequence and shouldn’t be worried about on a cyber security level. Appliances like blenders and toasters seemingly don’t hold much value to hackers. The problem, however, comes not from the devices themselves, but the networks that they’re connected to. If a hacker can bypass the security features of a smart device, they can potentially gain access to the network, and other devices connected to it.
Of course, the potential for damage extends far beyond the scope of just your own business. When you consider how computerized cars and physical infrastructure components, like dams and power plants, have become, you might realize that there is the potential for disaster, all thanks to the Internet of Things.
For example, what happens when a hacker disables a car’s brakes, or they decide to override a system setting on a dam and flood the surrounding landscape? As the potential for damage increases, so too does the potential for a hacker to grow interested in a target.
Why Vendors Aren’t Doing Anything About It
One of the major reasons why vendors are creating devices with security vulnerabilities is perhaps because of the lack of actual regulation and standards put into place to ensure quality of the device. In part, this is due to organizations refusing to spend money on devices that aren’t guaranteed to turn a profit. Thus, popular devices from different markets--not just consumer electronics, but also appliances and other industries--may wind up being manufactured with major security flaws that can be exploited by hackers.
Then there’s the problem with applying patches or updates to these IoT devices. When you think about it, there are two major ways to resolve a problem with your device; either download the patch, or replace it entirely. Considering how many of these IoT devices are both expensive and difficult to replace, the latter isn’t exactly feasible. Imagine purchasing a smart car with a security vulnerability that cannot be patched. You’d have to purchase a new one in order to keep yourself safe. That’s not just unreasonable--it’s also economically challenging, as this new technology is still quite expensive, and remains as such until demand or competition increases.
What You Can Do
Due to the Internet of Things’ incredible reach, it might seem like an intimidating notion to protect your business from the countless threats that could reach your infrastructure. You need to implement enterprise-level security solutions that can keep unapproved devices from connecting to your Internet connection, and you should always be conscious of how and where your data is shared outside of the office environment. Therefore, it becomes necessary to implement solutions with preventative security in mind, that keep threats from entering your network in the first place.
Nerds That Care can assist your organization with the solutions you need to secure your network. With comprehensive solutions like enterprise-level firewalls, antivirus, spam blocking, and content filtering, you can keep your in-house network locked down nice and tight. Furthermore, you need to implement a solid BYOD policy that helps you manage the devices that connect to your business’s network. This should include a mobile device management solution that allows you to limit device exposure to corporate data, whitelist and blacklist apps, and remotely wipe devices should they be lost or stolen. To learn more, reach out to us at 631-648-0026. | <urn:uuid:c6a076d2-6413-4609-a08f-c11976cbd4b4> | CC-MAIN-2017-04 | https://nerdsthatcare.com/newsletter-content/entry/why-a-major-internet-of-things-security-breach-is-inevitable | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00309-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95367 | 997 | 2.796875 | 3 |
NOTE: This is the third feature of the June 2014 issue of FutureStructure. We'll be posting the other features over the coming weeks but if you'd like to get a PDF of the entire issue you can DOWNLOAD IT NOW.
As a society, we’re becoming more conscious of what goes into the products we choose to consume before they get to us. Take our food, for instance. Media coverage, scientific research and a generally increased awareness have brought needed attention to food additives and chemical pesticides. The organic food movement is still booming. The farm-to-table movement has highlighted the virtues of local, healthful and sustainable food production. We aren’t only concerned with what goes into our bodies, but about the constitution of all of the products that come our way.
But what happens after we finish with these things?
We certainly don’t think about waste in a way that is interconnected with the other systems like water management, energy and beyond. Maybe we should.
Closing the loop on waste – and integrating it with other systems – may be more than a noble policy goal. In fact, it may make smart economic sense as well. Waste streams often still contain things of remarkable value - if they are extracted and used in the right way. Landfill mining advocates note that landfills have a higher concentration of aluminum than the metallic ore that is normally used as a raw material. The East Bay Municipal Utility District in California is using food and bio waste to save $3 million per year and generate more than enough electricity to meet its own needs. “Waste to energy” projects are cropping up in Mexico, Canada, Scotland and Norway. And as water rights become an increasingly difficult issue – especially in the American West and South – reusing water from the waste stream is a particularly encouraging prospect.
To make this happen, we need to start thinking of waste as a system.
Talking Trash: Why the Status Quo Isn’t Pretty
Trash talk is ugly and garbage isn’t glamorous. The expansion of cities and urbanization, coupled with increasing wealth worldwide and the fact that humans in general produce growing amounts of garbage, are creating rising global waste management problems. Worldwide, the volume of annual municipal solid waste is projected to double – from today’s 1.3 billion tons per year to 2.6 billion tons – by 2025. Add to that the fact that e-waste is astronomical, with 1.7 million tons in the U.S. sent to landfills or incinerated in 2010 alone.
There are some bright spots. The American Society of Civil Engineers (ASCE) gave solid waste a grade of a B- in its 2013 Report Card on America’s Infrastructure. For comparison’s sake, our energy infrastructure received a D+ and our drinking water a D. For one, the ASCE report cited large gains in recycling rates. Between 1980 and 2010 the percentage of municipal solid waste (MSW) disposed in landfills decreased by 35 percent and recycling diverted 85 million tons of MSW from landfills in 2010, compared with only 15 tons in 1980.
But despite these gains, the U.S. remains a top trash producer – so much so that our No. 1 export is garbage – and our go-to disposal method is the landfill. The Environmental Protection Agency (EPA) reports that while the number of landfills has declined over the years, the average size has increased. Fifty-five percent of America’s waste is dumped in these landfills and left to slowly decay. In comparison, Germany sends less than 1 percent of its trash to landfills, converting 38 percent of the rest to energy and recycling 62 percent.
In the U.S., local leaders aren’t convinced their waste infrastructures can get the job done. According to the survey by the Governing Institute, only 6 percent of respondents agreed that their community’s waste management infrastructure completely met their needs.
One of the largest challenges with waste and other utilities are that they are part of a “hidden infrastructure,” a complicated process that takes time and money, but that we don’t fully see and are unlikely to appreciate.
Perhaps one of the most beautifully written passages about waste in society was penned by Italo Calvino in his novel Invisible Cities. His description of the city of Leonia captures human nature’s attitude toward refuse.
“That fact is that street cleaners are welcomed like angels, and their task of removing the residue of yesterday’s existence is surrounded by a respectful silence, like a ritual that inspires devotion, perhaps only because once things have been cast off nobody wants to have to think about them further. Nobody wonders where, each day, they carry their load of refuse. Outside the city, surely; but each year the city expands, and the street cleaners have to fall farther back. The bulk of the outflow increases and the piles rise higher, become stratified, extend over a wider perimeter. Besides, the more Leonia’s talent for making new materials excels, the more the rubbish improves in quality, resists time, the elements, fermentations, combustions. A fortress of indestructible leftovers surrounds Leonia, dominating it on every side, like a chain of mountains.”
Like the residents of Leonia, most of us don’t wonder where the trucks carry our garbage. We prefer our waste to be invisible – buried in a landfill or shipped someplace else as long as it’s not in our backyard. But we also have an expectation that our waste will be taken care of. In the same way we expect the light to turn on when we flip the switch and the water to run as we turn the knob, trash retrieval seems as certain as death and taxes.
A second problem with waste management is an issue inherent in government agencies: siloed departments that can make laser-focused decisions. In 2013, Houston’s “One Bin for All” proposal was awarded $1 million by Bloomberg Philanthropies. The idea – to have residents discard all materials into a single bin and centrally process and sort them – was proposed largely because of Houston’s dismal recycling rates (14 percent). The city, like many others across America, was putting significant resources into recycling – each day, multiple trucks would go out to pick up waste from multiple bins on multiple routes.
It’s hard to argue that recycling isn’t a good policy, but what happens when we look at the carbon emissions increase from having multiple trucks on the road? Is the effort worth the effects? Perhaps not. A study at Washington State University found that test subjects asked to cut paper into strips to evaluate scissors used three times as much paper when they were told a recycling bin was in the room as opposed to when they were told a waste basket was in the room.
In looking at cities as systems, we can begin to see the possible consequences – positive or negative – of our policies, infrastructure investments and technological implementations. According to the Governing Institute survey, 67 percent of respondents thought it was important to integrate energy into waste management systems, 51 percent thought it was important to integrate water and 33 percent though it was important to integrate transportation.
Integrating and Innovating: Policies, Programs and People
If we look at waste management through a FutureStructure lens, we need to first consider our current policies, programs and people and what ideas could make a difference. One policy gaining prominence – despite being around for decades – are so called “pay as you throw” (PAYT) programs, which provide financial incentives to decrease waste, treating trash as we do other utilities like electricity and water. PAYT has been shown to change consumer behaviors, such as choosing products with less packaging or composting yard waste.
Zero waste policies – in which no discards are sent to landfills or designated for high-temperature destruction – are also increasing in popularity. San Francisco’s pledge to attain zero waste by 2020 advocates for citizens to reduce waste first, then reuse, and finally to recycle and compost. Seattle has been moving toward zero waste for over a decade, with a goal to divert 60 percent of trash from the landfill by 2015 (the city was at 55.7 percent in 2012) and 70 percent by 2022.
Sacramento’s “farm-to-fork-to-fuel” initiative is one of the best examples of how policies and programs can turn a supply chain mentality to that of a supply cycle. Nonprofit organizations and corporate entities are collaborating to divert organic waste from landfills and turn it into anaerobically digested renewable waste, use compressed natural gas to power public and private vehicles and create zero waste zones.
Dubuque, Iowa, Mayor Roy Buol said getting everyone involved – government leaders from different parts of the city as well as constituents – is the key to success with all sustainability measures. Buol launched “Sustainable Dubuque,” a bottom-up initiative that brings a coalition of local interests together and gives everyone in the city a chance to contribute ideas to move the city forward. Dubuque became an IBM Smarter City in 2010 and has since reduced water usage, electricity usage and optimized transportation resources.
“Citizen involvement – making them part of the process, no matter what project you are trying to develop – is the underlying key to success,” said Buol. “My mantra has always been ‘engaging citizens as partners.’”
In Edmonton, Alberta, a city that has a 60 percent landfill diversion rate and is aiming for 90 percent, leaders also advocate for citizen involvement. In working on its plan to hit 90 percent landfill diversion rates, Roy Neehall, general manager of Waste RE-Solutions Edmonton, said, "We did not dictate to residents. We listened, educated, listened." What the city found, was that its residents were "way ahead of politicians and administrators" on this issue.
Respondents to the Governing Institute’s research survey agree with Buol and Neehall. Sixty-seven percent said that public awareness was an important part of a successful waste management system.
Coupling Traditional Infrastructure with Technology
Boston provides a great example of how combining hard infrastructure and technology can turn around even the worst environmental conditions. Once known as the “dirtiest harbor in America,” Boston’s waterfront was plagued by sewage and other waste seeping into the Charles River since America’s founding. Sewage and other waste received very limited treatment before being dumped in the harbor, and the water was filthy, poisoned by the waste of the city and its surrounding areas. A federal court order mandated the city clean up this blight and the Massachusetts Water Resources Authority was born. Today, after a $4 billion investment in a state-of-the-art sewage treatment plant, the harbor is clean enough for children to swim in and the city to enjoy.
The Deer Island Wastewater Treatment Plant is the key to the harbor’s cleanliness. Each day, 350 million gallons of water travel underground through Boston’s pipes, arriving at the plant for processing. Through a multi-part process, the plant removes all raw sewage – including “floatables” and other large debris. What is left is “sludge,” a mixture of liquefied waste that once would have mixed with the water in the harbor. Now, large egg-shaped digesters that act like churning stomachs use bacteria to eat the sludge (the process known as anaerobic digestion), reducing it by one-third and producing methane gas as a byproduct. This methane is used to create steam and hot water for the facility. Remaining pathogens are killed by chlorine, the chlorine is killed by another chemical and what is released into the harbor is purified and pristine H20.
Gasification and Pyrolysis
Like anaerobic digestion, gasification can also create energy through a waste treatment and recovery process. It works like this: The waste is heated in a low-oxygen environment, which causes some of the waste to combust and the rest to decompose – that then turns into hydrogen, carbon monoxide and methane. These gases go to a boiler, which burns them cleanly and makes steam to run a turbine that produces electricity. Ash, the biggest byproduct of the process, is run through a magnet to capture iron for recycling.
The technique is used in Alexandria, Va., which is powering more than 20,000 homes with the electricity produced from 100 tons of municipal waste each day, as well as Indianapolis, where the steam helps power Lucas Oil Stadium and other buildings downtown.
The waste-to-energy concept has gained steam over the past several years, but proponents still say the United States is missing opportunities compared to Europe, where waste to energy has become the preferred method of disposal. The EU runs 420 waste-to-energy plants (compared to the 87 in the United States), which provide power to 20 million people. The practice is becoming so popular that Norway, which has the largest share of waste-to-energy production, is importing trash to feed its incinerators.
One of the reasons the U.S. has been slow to adopt waste to energy is the harmful gases produced by combustion emissions. But technology has provided a helping hand, say proponents, and emissions are 80 to 90 percent under limits set by the EPA in facilities like the one in Alexandria.
As with any other policy or process, waste to energy should be viewed as a component of a community, city or country’s system and considerations should be made to ensure the process is optimized within the greater whole.
Similar to gasification, pyrolysis decomposes waste in the absence of oxygen. Products of pyrolysis include oil, gas and char, or steam that can be used to generate electricity. In Ireland, discarded plastic is turned into fuel through the pyrolysis process – 20 tons of plastic is converted into 19,000 liters of synthetic fuel.
A recent study from the Illinois Sustainable Technology center, a division of the University of Illinois, found that fuel derived from non-recycled plastics from waste (such as shopping bags) through pyrolysis was easily compatible with fuels from bio-based and traditional fuel sources, had equally high energy content and was better performing in several other criteria.
Integrating Waste Management for the Future
There is no one-size-fits-all to waste management, but there is a movement toward trying different techniques, sometimes with smaller scale and more flexible technologies that can transform trash. Looking at waste management from all angles and thinking about how reducing, recycling and recovering can work together to promote environment and economic sustainability is a good place to start. In FutureStructure fashion, involving all stakeholders and hearing the point of view of the transportation department, utilities, environmentalists, finance and the collective voice of citizens is absolutely crucial to avoiding a siloed policy that lacks common sense. By doing this we can move firmly from a supply chain to a supply cycle.
Jeana Bruce Bigham is the Custom Content Specialist for e.Republic’s Custom Media department. She is passionate about simple, innovative technologies that improve the lives of citizens and help transform communities. She has held various positions within the Center for Digital Government and the Center for Digital Education, including Editor of Converge magazine, Director of Publications and Director of Custom Media. Bigham earned a degree in journalism from the University of Missouri, Columbia. She resides in St. Louis. | <urn:uuid:6ae81952-9cad-4a6b-8caa-332190d8b6c0> | CC-MAIN-2017-04 | http://www.govtech.com/fs/news/From-Trash-to-Treasure-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00245-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957183 | 3,227 | 2.90625 | 3 |
Any healthcare organization or facility can benefit from more streamlined processes and increased patient turnover rates. ZigBee is one way to achieve those goals.
What Is ZigBee?
ZigBee is a wireless protocol designed for short-range personal area networks based on the IEEE 802.15.4 standard. It differs considerably from competitors like Wi-Fi or Bluetooth. ZigBee is designed to be an inexpensive and simple way of transmitting relatively small amounts of information at regular intervals. | <urn:uuid:6cde2109-32b5-485a-b19d-49d9b8d98a8d> | CC-MAIN-2017-04 | https://www.infotech.com/research/healthcare-zigbee-can-cut-long-term-care-costs | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00182-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913283 | 94 | 2.875 | 3 |
Cell Phone Cancer Warning Labels Proposed in Maine, San Francisco
Spurred by reports of a possible link between cell phone radiation and cancer, a Maine legislator is seeking warning labels on cell phones while San Francisco Mayor Gavin Newsom is pushing a city law to force retailers to display a phone's absorption rate level in print at least as big as the price.Despite a paucity of hard scientific evidence, both the state of Maine and the city of San Francisco are considering legislation requiring cell phone makers to affix labels on their devices warning consumers of possible brain cancer risks due to electromagnetic radiation.
In Maine, State Rep. Andrea Boland has won approval for the state legislature to consider a bill requiring that warning labels be placed on the packaging and the cell phone itself. San Francisco Mayor Gavin Newsom would require retailers to display the absorption rate level next to each phone in print at least as big as the price.
The San Francisco initiative was prompted by an EWG (Environmental Working Group) report Sept. 9, 2009, stating that "recent studies find significantly higher risks for brain and salivary gland tumors among people using cell phones for 10 years or longer." The report added, "The state of the science is provocative and troubling, and much more research is essential."
The World Health Organization and National Cancer Institute, though, have said there is little clear evidence to prove the linkage. The Federal Communications Commission says cell phones are safe and maintains a standard for the specific absorption rate of radio-frequency energy, but doesn't require manufacturers to reveal radiation levels. Boland said she is convinced warning labels are needed based on "what she had read" about the possible linkage between cell phones and cancer.
"The main thing is that the warning labels get on there, and when people go to purchase something, they have a heads-up that they need to really think about it," Boland told the New York Times. "This is a big important industry, and it's a small modification to assure people that they should handle them properly." | <urn:uuid:71211749-15e8-4bc7-a6ae-af0d19f60f8f> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Mobile-and-Wireless/WARNING-Maine-SF-Consider-Cell-Phone-Cancer-Labels-720959 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00512-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949553 | 412 | 2.6875 | 3 |
It was recently reported that a group of Russian cybercriminals stole billions of credentials and pieces of personally identifiable information from hundreds of thousands of websites, putting the majority of Internet users at risk.
However, when individuals are asked about what they’re doing to protect themselves from this most recent breach, many say they won’t change anything. According to The Arizona Republic, experts believe that people are experiencing breach fatigue, or an assumption that the constant news of high-profile security breaches are hyperbolic and their data isn’t really in that much danger.
While breach fatigue may lead individuals and companies to think otherwise, data breaches are very real and are a dangerous threat to sensitive information. Everyone uses passwords to protect their data, but they have their limitations and can only keep information safe to a degree.
Depending on the type of password being used, you may actually be helping hackers steal your information. It’s very common for people to use the same login for multiple sites, meaning if one account is compromised they are all vulnerable.
According to SplashData, a password management company, among the list of the top 25 passwords “123456” and “password” ranked as the top two. Cybercriminals utilizing a brute-force attack, which can guess 1,000 passwords a minute, won’t be deterred in the least by such obvious answers.
Increase Protection With Two-Factor Authentication
Some companies do understand the importance of stronger cybersecurity defenses and have implemented two-factor authentication for company webmail systems and administrative tools. This is a great tool that requires multiple sets of identification to allow access to a system, but most enterprises overlook their use on external services like cloud platforms and social media accounts.
Dark Reading contributor Maxim Weinstein recently reported on a company called Code Space that was put out of business after a malicious actor took control of its Amazon Web Services account.
The hacker was able to obtain access to the account and delete the company’s servers and the entirety of the data stored on them. While this may seem like an extreme case, it is becoming easier every day for cybercriminals to access company domains and cloud services and cause major damage due to poorly secured passwords and lack of a secondary security system.
Some companies that offer authentication require information to be shared with a website or service that is then left vulnerable to exploitation. Best practices strongly advise businesses to implement two-factor authentication for all major applications and systems and have total control over the process. Identifiable data is kept within the enterprise and used only to serve as a credential, increasing security and protection. | <urn:uuid:cd1516c6-82c9-40fb-bce3-5ead32a4acc4> | CC-MAIN-2017-04 | https://www.entrust.com/passwords-arent-enough-implement-two-factor-authentication-increased-protection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00238-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951456 | 532 | 2.625 | 3 |
In order to protect against a threat, it must first be identified. The International City/County Management Association (ICMA), along with 18 endorsing organizations, created a list of the top eight emerging threats facing communities. The ICMA participated in a white paper released by the National Homeland Security Consortium in which members recommended that communities protect against imminent threats as well as long-term foreseeable threats.
The report highlighted the importance of coordination between local, state, federal and international agencies to address many of the threats outlined. According to the report, member organizations should also attempt to establish greater understanding, as these threats are not fully understood. “The public is best served by open, honest, and genuine debate amongst all those charged with the protection and service to that very public,” it concludes.
The top eight health, safety and security concerns identified are:
- Cyber hazards
- Climate change
- Demands on global resources
- Changing demographics
- Emerging technologies
- Violent extreme ideologies
- WMD proliferation
- Mega hazards and catastrophic cascading consequences
Access the full report, entitled "Protecting Americans in the 21st Century: Communicating Priorities for 2012 and Beyond," via ICMA's website. | <urn:uuid:e1450ce1-7a43-47a8-9713-46cbbb974d4d> | CC-MAIN-2017-04 | http://www.govtech.com/Top-8-Threats-Facing-the-Nation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00356-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949206 | 247 | 2.71875 | 3 |
Cybercriminals have discovered a new attack vector: Exploiting the trust that keys and certificates establish.
By using keys and certificates, hackers are able to go about their business on your network, authenticated, and with legitimate access. They are able to successfully steal your data while remaining undetected for months—sometimes years—at a time. Stuxnet and Duqu provided the blueprint, and now attacks on keys and certificates are commonplace. Common Trojans such as Zeus and SpyEye steal these trust assets.
Download this white paper now to learn how cybercriminals are taking advantage of keys and certificates to infiltrate your network. Understand what strategies you can implement to better mitigate against trust-based (key and certificate) attacks. | <urn:uuid:771a8427-9bc5-454b-8b29-c7c17472cf8e> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/white-papers/advanced-targeted-attacks-key/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00348-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952495 | 149 | 2.625 | 3 |
How Google Works: The Google File SystemBy David F. Carr | Posted 2006-07-06 Email Print
For all the razzle-dazzle surrounding Google, the company must still work through common business problems such as reporting revenue and tracking projects. But it sometimes addresses those needs in unconventional—yet highly efficient—ways. Other
The Google File System
In 2003, Google's research arm, Google Labs, published a paper on the Google File System (GFS), which appears to be a successor to the BigFiles system Page and Brin wrote about back at Stanford, as revamped by the systems engineers they hired after forming Google. The new document covered the requirements of Google's distributed file system in more detail, while also outlining other aspects of the company's systems such as the scheduling of batch processes and recovery from subsystem failures.
The idea is to "store data reliably even in the presence of unreliable machines," says Google Labs distinguished engineer Jeffrey Dean, who discussed the system in a 2004 presentation available by Webcast from the University of Washington.
For example, the GFS ensures that for every file, at least three copies are stored on different computers in a given server cluster. That means if a computer program tries to read a file from one of those computers, and it fails to respond within a few milliseconds, at least two others will be able to fulfill the request. Such redundancy is important because Google's search system regularly experiences "application bugs, operating system bugs, human errors, and the failures of disks, memory, connectors, networking and power supplies," according to the paper.
The files managed by the system typically range from 100 megabytes to several gigabytes. So, to manage disk space efficiently, the GFS organizes data into 64-megabyte "chunks," which are roughly analogous to the "blocks" on a conventional file system—the smallest unit of data the system is designed to support. For comparison, a typical Linux block size is 4,096 bytes. It's the difference between making each block big enough to store a few pages of text, versus several fat shelves full of books.
To store a 128-megabyte file, the GFS would use two chunks. On the other hand, a 1-megabyte file would use one 64-megabyte chunk, leaving most of it empty, because such "small" files are so rare in Google's world that they're not worth worrying about (files more commonly consume multiple 64-megabyte chunks).
A GFS cluster consists of a master server and hundreds or thousands of "chunkservers," the computers that actually store the data. The master server contains all the metadata, including file names, sizes and locations. When an application requests a given file, the master server provides the addresses of the relevant chunkservers. The master also listens for a "heartbeat" from the chunkservers it manages—if the heartbeat stops, the master assigns another server to pick up the slack.
In technical presentations, Google talks about running more than 50 GFS clusters, with thousands of servers per cluster, managing petabytes of data.
More recently, Google has enhanced its software infrastructure with BigTable, a super-sized database management system it developed, which Dean described in an October presentation at the University of Washington. Big Table stores structured data used by applications such as Google Maps, Google Earth and My Search History. Although Google does use standard relational databases, such as MySQL, the volume and variety of data Google manages drove it to create its own database engine. BigTable database tables are broken into smaller pieces called tablets that can be stored on different computers in a GFS cluster, allowing the system to manage tables that are too big to fit on a single server.
Also in this Feature: | <urn:uuid:579b5e24-1aeb-4054-a4bb-0fbdbbda3d6b> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Infrastructure/How-Google-Works-1/4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00008-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931283 | 769 | 3.078125 | 3 |
Getting Rid of Unwanted Cookies
Computer terminology is a strange thing. One popular computing term many of us are familiar with is “cookie.” The word cookie is derived from the Dutch word koekje or koekie, and refers to a small cake. But when computing is involved, the term “cookie” has a completely different meaning. A cookie is a file placed onto your computer’s hard drive by a Web site you visit, enabling whoever is on the other end to monitor your use of the site. Popular misconceptions and rumors about what cookies can and cannot do have frightened some users.
Persistent cookies use an extended expiration date and are stored on your disk until that date. A persistent cookie can be used to track your browsing habits by identifying you whenever you return to a site. Information about where you come from and what Web pages you visit already exists in a Web server’s log files and also could be used to track your browsing habits. Cookies just make the job of data collection easier.
While cookies are not dangerous in and of themselves, if a hacker were somehow to gain access to your computer, he or she might be able to gather personal information about you through these files. An ounce of prevention is worth a pound of cure—and one of the easiest steps you can take is to alter the security settings of your Web browser to either limit or block cookies. For example, to ensure that other sites are not collecting personal information about you without your knowledge or consent, choose to only allow cookies for the Web site you are visiting and block or limit cookies from third parties.
For cookie management, you need not spend a dime: Numerous freeware products abound. A favorite is CookieWall by AnalogX. This easy-to-configure Windows utility allows you to easily decide which cookies can stay on your system and which should be deleted. CookieWall can be set up a few different ways. Cookies can be deleted as soon as they arrive, or you can choose to be notified when new ones are placed on your hard drive. Another option is to have CoookieWall store them temporarily for viewing at a later date. It is important to note that CookieWall is currently only compatible with Microsoft Internet Explorer and similar derivatives. To download a copy, visit www.analogx.com/contents/download/network/cookie.htm.
Another freeware product is Cookie Monster by Alberto Martinez Perez, a student of computer engineering at the University of Oviedo in Spain. This handy tool, which can be downloaded at www.ampsoft.net, can help you manage and delete your browser cookies. It supports several different Web browsers, such as Internet Explorer, Netscape and Opera, as well as the increasingly popular Firefox.
Once loaded, Cookie Monster will quickly list all the cookies found on your hard drive and allow you to view the content of selected cookies. Armed with this information, you can then use the program to either delete the cookies or preserve them in case you are unsure whether the cookie is necessary to log on to certain Web sites.
Always keep in mind that everyone who uses the Internet is responsible for his or her own personal security and privacy. If you are using a public computer, such as those in an Internet café or a public library, you should make sure that cookies are disabled to prevent other people from accessing or using your personal information.
For additional information on how to do this, visit www.kcsoul.com/website/cookies.htm.
Douglas Schweitzer, A+, Network+, i-Net+, CIW, is an Internet security specialist and the author of “Securing the Network From Malicious Code” and “Incident Response: Computer Forensics Toolkit.” He can be reached at email@example.com. | <urn:uuid:fafd988e-1d28-4965-abf2-8def689de247> | CC-MAIN-2017-04 | http://certmag.com/getting-rid-of-unwanted-cookies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00310-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93956 | 786 | 3.25 | 3 |
Despite the volatile economic conditions in Europe and Japan, both areas continue to be a hotbed of supercomputer deployments. This week alone, four new university systems were announced, two installed, with the other two on order.
First up are a couple of systems that have been deployed at two German universities in the state of Rhineland-Palatinate. According to the press release, both are already up and running, one at Johannes Gutenberg University Mainz, and the other at the University of Kaiserslautern.
The supercomputers will only be available to researchers at the two universities, as part of a joint HPC facility, known as the Alliance for High-Performance Computing Rhineland-Palatinate (AHRP). Access to the machines will be provided via a 120 Gbps network pipe connecting the Mainz and Kaiserslautern.
The Mainz system is a 287-teraflop cluster, known as “Mogon” (the Roman name for Mainz), while the University of Kaiserslautern will be host to a smaller machine, known as “Elwetritsch” (named after a mythical creature of southwest Germany). Elwetritsch is said to be about half the size of its Mainz sibling (although no flops rating was provided), and is slated for be expansion in 2013. The new systems will be host to an array of science and engineering applications in physics, mathematics, biology, medicine, and the geosciences.
Mogon and Elwetritsch came with a price tag of €5.5 million ($6.9 million), an investment that was shared between the German federal government, the German Research Foundation, and the two universities. System vendors were not revealed.
Meanwhile in the UK, the University of Leicester announced plans to install a multi-million pound (pound sterling, not tonnage) supercomputer there sometime this summer. The system will be dedicated to astronomy apps, supporting research in areas like dark matter studies, star formation, and black hole physics.
Once again, the machine’s flops performance was not revealed, but the cost suggests something in the hundreds of teraflops range. HP will provide the system.
The fourth new supercomputer announced this week is a new Fujitsu PRIMEHPC FX10 machine for the University of Kobe, in Japan. The system will be used for “creating new fields of research and interdisciplinary areas utilizing supercomputer technology.”
The PRIMEHPC FX10 is the commercial implementation of Japan’s famous K computer, the current reigning champ of the TOP500. Although using the older generation SPARC64 VIIIfx CPU, the original K super delivers over 10 petaflops of performance. By contrast, the new SPARC64 IXfx-powered system to be installed at Kobe is a much smaller machine, and will deliver just 20 teraflops. It’s scheduled for boot-up in August. | <urn:uuid:e447159a-ef0d-4dbd-b358-69d9009b4ecf> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/06/07/a_quartet_of_new_supercomputer_installations_for_europe_and_asia/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00459-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928214 | 623 | 2.953125 | 3 |
Establishing a Common Taxonomy for Patient Safety Reporting
Overcoming Inconsistent Definitions of Errors and Unreliable Reporting
The various approaches used in healthcare to define and classify near misses, adverse events, and other patient safety concepts have generally been fragmented. The definition of an error or mistake is inconsistent, and the reliability of reporting is also a concern.
Having access to standardized data would make it easier to file patient safety reports and conduct root cause analyses in a consistent fashion. The Joint Commission on Accreditation of Health Care Organizations (JCAHO) developed a Patient Safety Event Taxonomy that was tested in this study.
Aggregating data into a standardized taxonomy was successful used by epidemiologists to detect nosocomial infections and also to establish patterns and trends in patient safety. Click "Download Whitepaper" to request the URL to this resource.
- How Does the Cybersecurity Information Sharing Act (CISA) Impact the Hospital and Healthcare Industry
EHR / EMR
- Presentation on Patient Safety: Achieving A New Standard for Care (Institute of Medicine Committee on Data Standards for Patient Safety November, 2003)
- The JCAHO Patient Safety Event - Taxonomy: A Standardized Terminology and Classification Schema for Near Misses and Adverse Events | <urn:uuid:ecb8d437-562d-4a39-a3b2-22f435afd30d> | CC-MAIN-2017-04 | https://www.givainc.com/healthcare/patient-safety-event-taxonomy-healthcare.cfm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00027-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906961 | 263 | 2.703125 | 3 |
Up To: Contents
See Also: Host Checks, Service Checks, Event Handlers, Notifications
The current state of monitored services and hosts is determined by two components:
There are two state types in Nagios - SOFT states and HARD states. These state types are a crucial part of the monitoring logic, as they are used to determine when event handlers are executed and when notifications are initially sent out.
This document describes the difference between SOFT and HARD states, how they occur, and what happens when they occur.
Service and Host Check Retries
In order to prevent false alarms from transient problems, Nagios allows you to define how many times a service or host should be (re)checked before it is considered to have a "real" problem. This is controlled by the max_check_attempts option in the host and service definitions. Understanding how hosts and services are (re)checked in order to determine if a real problem exists is important in understanding how state types work.
Soft states occur in the following situations...
The following things occur when hosts or services experience SOFT state changes:
SOFT states are only logged if you enabled the log_service_retries or log_host_retries options in your main configuration file.
The only important thing that really happens during a soft state is the execution of event handlers. Using event handlers can be particularly useful if you want to try and proactively fix a problem before it turns into a HARD state. The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of "SOFT" when event handlers are executed, which allows your event handler scripts to know when they should take corrective action. More information on event handlers can be found here.
Hard states occur for hosts and services in the following situations:
The following things occur when hosts or services experience HARD state changes:
The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of "HARD" when event handlers are executed, which allows your event handler scripts to know when they should take corrective action. More information on event handlers can be found here.
Here's an example of how state types are determined, when state changes occur, and when event handlers and notifications are sent out. The table below shows consecutive checks of a service over time. The service has a max_check_attempts value of 3.
|Time||Check #||State||State Type||State Change||Notes|
|0||1||OK||HARD||No||Initial state of the service|
|1||1||CRITICAL||SOFT||Yes||First detection of a non-OK state. Event handlers execute.|
|2||2||WARNING||SOFT||Yes||Service continues to be in a non-OK state. Event handlers execute.|
|3||3||CRITICAL||HARD||Yes||Max check attempts has been reached, so service goes into a HARD state. Event handlers execute and a problem notification is sent out. Check # is reset to 1 immediately after this happens.|
|4||1||WARNING||HARD||Yes||Service changes to a HARD WARNING state. Event handlers execute and a problem notification is sent out.|
|5||1||WARNING||HARD||No||Service stabilizes in a HARD problem state. Depending on what the notification interval for the service is, another notification might be sent out.|
|6||1||OK||HARD||Yes||Service experiences a HARD recovery. Event handlers execute and a recovery notification is sent out.|
|7||1||OK||HARD||No||Service is still OK.|
|8||1||UNKNOWN||SOFT||Yes||Service is detected as changing to a SOFT non-OK state. Event handlers execute.|
|9||2||OK||SOFT||Yes||Service experiences a SOFT recovery. Event handlers execute, but notification are not sent, as this wasn't a "real" problem. State type is set HARD and check # is reset to 1 immediately after this happens.|
|10||1||OK||HARD||No||Service stabilizes in an OK state.| | <urn:uuid:5a84d245-51d1-4570-ad34-09de85c12140> | CC-MAIN-2017-04 | https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/statetypes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00147-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.889598 | 901 | 2.53125 | 3 |
18.104.22.168 What is a PKI?
A public-key infrastructure (PKI) consists of protocols, services, and standards supporting applications of public-key cryptography. The term PKI, which is relatively recent, is defined variously in current literature. PKI sometimes refers simply to a trust hierarchy based on public-key certificates , and in other contexts embraces encryption and digital signature services provided to end-user applications as well [OG99]. A middle view is that a PKI includes services and protocols for managing public keys, often through the use of Certification Authority (CA) and Registration Authority (RA) components, but not necessarily for performing cryptographic operations with the keys.
Among the services likely to be found in a PKI are the following:
- Key registration: issuing a new certificate for a public key.
- Certificate revocation: canceling a previously issued certificate.
- Key selection: obtaining a party's public key.
- Trust evaluation: determining whether a certificate is valid and what operations it authorizes.
Key recovery has also been suggested as a possible aspect of a PKI.
There is no single pervasive public-key infrastructure today, though efforts to define a PKI generally presume there will eventually be one, or, increasingly, that multiple independent PKIs will evolve with varying degrees of coexistence and interoperability. In this sense, the PKI today can be viewed akin to local and wide-area networks in the 1980's, before there was widespread connectivity via the Internet. As a result of this view toward a global PKI, certificate formats and trust mechanisms are defined in an open and scaleable manner, but with usage profiles corresponding to trust and policy requirements of particular customer and application environments. For instance, it is usually accepted that there will be multiple ``root'' or ``top-level'' certificate authorities in a global PKI, not just one ``root,'' although in a local PKI there may be only one root. Accordingly, protocols are defined with provision for specifying which roots are trusted by a given application or user.
Efforts to define a PKI today are underway in several governments as well as standards organizations. The U.S. Department of the Treasury and NIST both have PKI programs [2,3], as do Canada and the United Kingdom . NIST has published an interoperability profile for PKI components [BDN97]; it specifies algorithms and certificate formats that certification authorities should support. Some standards bodies which have worked on PKI aspects have included the IETF's PKIX and SPKI working groups [6,7] and The Open Group .
Most PKI definitions are based on X.509 certificates, with the notable exception of the IETF's SPKI.
PKI - PC Webopedia Definitions and Links:
Government Information Technology Services, Federal Public key Infrastructure:
NIST Public key Infrastructure Program:
The Government of Canada Public key Infrastructure:
The Open Group Public key Infrastructure, Latest Proposals for an HMG PKI.
Public key Infrastructure (X.509) (pkix) working group:
Simple Public key Infrastructure (spki) working group:
The Open Group Public key Infrastructure: | <urn:uuid:c0361043-35d5-4549-9423-59a951d55248> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/pki.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00147-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924241 | 657 | 3.640625 | 4 |
By Rob High, IBM Fellow, Vice President & Chief Technology Officer of IBM Watson
As artificial intelligence (AI) begins to power more technology across industries, it’s been truly exciting to see what our community of developers can create with Watson. Developers are inspiring us to advance the technology that is transforming society, and they are the reason why such a wide variety of businesses are bringing cognitive solutions to market.
With AI becoming more ubiquitous in the technology we use every day, developers need to continue to sharpen their cognitive computing skills. They are seeking ways to gain a competitive edge in a workforce that increasingly needs professionals who understand how to build AI solutions.
It is for this reason that today at World of Watson in Las Vegas we announced with Udacity the introduction of a Nanodegree program that incorporates expertise from IBM Watson and covers the basics of artificial intelligence. The “AI Nanodegree” program will be helpful for those looking to establish a foundational understanding of artificial intelligence. IBM will also help aid graduates of this program with identifying job opportunities.
Nanodegree programs offer unique, hands-on learning opportunities for students to master critical skills in cutting-edge fields, while simultaneously building up project-based portfolios of work that demonstrate those skills for future employers.
The AI Nanodegree will guide students through courses and projects on different aspects of artificial intelligence, such as:
- Game playing/search
- Logic and planning
- Probabilistic inference
- Computer vision
- Cognitive systems
- Natural language processing
IBM experts have collaborated with Udacity on the curriculum for a number of these courses and provided supporting guidance on how Watson works and aligns with the Nanodegree objectives. The courses will include online videos of these experts leading exercises that cover the core concepts of Watson. The Nanodegree will also feature capstone projects for students to demonstrate their mastery of the skills and techniques taught.
The AI Nanodegree is comprised of two, 13-week terms, the first of which will open in early 2017. Upon completing the program, students will:
- Develop an understanding of the importance of artificial intelligence as an area of continued study
- Become grounded in the basic mathematical and technical competencies needed to participate meaningfully in the AI community
- Be able to write programs to solve computational problems important in AI
The benefits of the program are multiple: not only do students graduate with the Nanodegree itself – a credential that is recognized by technology companies looking for programmers, developers and other skilled workers – but over the course of their study, they create portfolios of useable projects that demonstrate the skills they have learned.
The launch of this Nanodegree continues on IBM’s commitment to equip developers with the educational resources they need. In addition to offering an ongoing variety of hackathons and academic partnerships, we recently announced the IBM Watson Application Developer Certification, which is designed to help developers all across the world build and validate their skills as well as connect with companies looking to leverage their unique talents. We also created IBM Learning Lab to help developers learn, build, and innovate with emerging technologies such as AI. IBM Learning Lab features 80+ curated courses from providers like Codeacademy, Coursera, Big Data University, and Udacity as well as real world use cases and inspiration on how to build with our Watson services.
There has never been a better time to build a foundation in artificial intelligence and enter the cognitive era. Apply today for the new AI Nanodegree program –and go here to learn more! | <urn:uuid:174bf1c2-036b-4be4-8216-38c64b014c89> | CC-MAIN-2017-04 | https://www.ibm.com/blogs/watson/2016/10/future-cognitive-workforce-part-1-announcing-ai-nanodegree-udacity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00449-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944851 | 730 | 2.796875 | 3 |
TORONTO--(Marketwire - Feb 26, 2013) - On March 8, countries and cultures around the world recognize women's accomplishments, opportunities and challenges with International Women's Day. Celebrated for over 100 years, the holiday is marked with activities and events that herald women and, often, focus on ways to continue reaching new goals. In honour of this special day, the travel experts at Cheapflights.ca, the online leader in finding and publishing travel deals, pay homage to women trailblazers and their contributions throughout history with a list of Top 10 Monuments to Women Leaders. These memorials and the tales of greatness behind them are tangible history lessons about leaders and leadership across times and cultures.
Below are four monuments to make our list that honour women leaders from Commonwealth countries across the globe.
- Victoria Memorial, London, England - A fitting tribute to the longest reigning British monarch (Queen Elizabeth II has more than two years to go to catch her), this bronze and marble memorial stands grandly in front of Buckingham Palace in the centre of the Queen's Gardens. Dedicated in 1911 and finished in 1924, the Victoria Memorial features layers upon layers of detail. Most notable is the gilt-coated, winged Victory figure that towers down from on high. The statue of Queen Victoria, a 13-foot likeness carved in stone, faces away from the Palace and down the Mall. She is joined by an angel of Truth on one side and Justice on the other with Charity (or Motherhood), a figure with three children, rounding out the circle at the centre of the memorial. Flowing out from there are fountains, a series of nautical details and groupings of bronze statues. This expansive display of honour for a beloved and historic Queen was, appropriately, the setting for the 2012 concert marking Queen Elizabeth's Diamond Jubilee, a celebration of her 60th year on the throne.
- Edith Dircksey Cowan Memorial, King's Park, Perth, Australia - An established champion of women's and children's issues, Edith Cowan campaigned hard in the effort to pass the 1920 legislation to open Parliament to women. When the legislation passed, she immediately put it to the test, running in the 1921 State Election to represent West Perth in the Legislative Assembly of Western Australia. Well known through her work for the Red Cross during World War I and her involvement with groups ranging from the National Council of Women and Children's Protection Society, she won her seat and became the first female in any of Australia's Parliaments. While in office, she continued her trailblazing ways, helping enact legislation that opened the legal profession to women. Her role in history is marked by an elegant clock tower, roughly 20 feet tall, at the entrance to King's Park. Built in 1934, the memorial itself is a trailblazer too as it is believed to be the first civic monument built to honour a woman in Australia.
- The Women are Persons! Monument, Parliament Hill, Ottawa, Canada - The route to political empowerment in Canada took its own interesting route. In 1927, Emily Murphy, Irene Marryat Parlby, Nellie Mooney McClung, Louise Crummy McKinney and Henrietta Muir Edwards, who history has dubbed the Famous or Valiant Five, petitioned the Supreme Court to clarify whether the word "persons" in the founding documents of Canada and its government included females. The Supreme Court ruled that it did not include women if the question meant could they be appointed to the Senate. This ruling, however, was overturned by the British Judicial Committee of the Privy Council. The Ottawa monument, and the version in Calgary, capital of their home province of Alberta, captures an imagined and larger than life celebration by the Famous Five, complete with a newspaper reading the headline of the day: We are Persons! Dedicated in 2000 atop Parliament Hill, the bronze sculpture includes an empty chair so passersby can join in the victory.
- Kate Sheppard Memorial, Christchurch, New Zealand - When counting New Zealand's many claims to fame, make sure to include it was the first country to introduce universal suffrage, granting women the right to vote. The 1893 petition for the right to vote had more than 31,000 signatures. The group spearheading the seven-year process of championing the cause and gathering those names was led by Kate Sheppard. A 10-foot tall bronze sculpture shows Sheppard and fellow suffragette leaders Helen Nicol, Ada Wells, Harriet Morison, Meri Te Tai Mangakahia and Amey Daldy bringing the petition to Parliament in a cart. Unveiled on 1993, this monument along the Avon River, holds a time capsule of documents capturing women's lives in 1993. The success of the right-to-vote movement in New Zealand made Sheppard and the other leaders role models for voting activists in countries around the world. Sheppard even returned to her native England in 1894 where she spent almost two years working with suffrage leaders and encouraging supporters with her speeches. In the early 1900s, she travelled again to England, the U.S. and Canada, meeting with women leaders as she went.
Rounding out this impressive list of memorials in honour of women across the globe are: Julia Tuttle Statue, Bayfront Park, Miami, Florida; Fremiet's Joan of Arc, Place des Pyramides, Paris, France; The "Swing Low" Harriet Tubman Statue, New York, New York; The Portrait Monument, United States Capitol, Washington, D.C.; Monument to Catherine the Great, St. Petersburg, Russia and Estatua de Policarpa Salavarrieta, Bogotá, Colombia. To read Cheapflights.ca's complete list of Top 10 Monuments to Women Leaders, visit www.cheapflights.ca/travel/top-10-monuments-to-women-leaders.
About momondo Group
momondo Group is an online travel media and technology company that is driven by the belief that an open world is a better world. The group now serves travel search and inspiration to over 13 million visitors a month -- plus 6 million travel newsletter subscribers -- via its Cheapflights (www.cheapflights.ca) and momondo (www.momondo.com) brands.
Skygate began the sourcing of complex air-travel data in 1992, while Cheapflights pioneered the online comparison of flight deals for users in 1996 and momondo launched meta-search in the Nordic countries in 2006.
The Group has offices in London, Copenhagen, Boston and Toronto, with a consumer base across 16 core international markets but users all over the world.
Follow us on Twitter: twitter.com/cheapflights
Follow us on Facebook: www.facebook.com/cheapflights | <urn:uuid:f5463cde-7eb5-4138-9e76-1a5a9c1aed29> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/cheapflightsca-pays-tribute-to-womens-leadership-1761542.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00201-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933008 | 1,399 | 2.703125 | 3 |
Defense Dept. Jumps On Climate Change ResearchDOD says poor intelligence on the negative effects of climate change poses a national security threat.
Slideshow: 50 Most Influential Government CIOs (click image for larger view and for slideshow)
The Department for Defense (DOD) is calling for a better system for collecting and analyzing data related to climate change in order to make more accurate forecasts about the world's changing weather patterns.
A report by the DOD's Defense Science Board task force calls for the creation of a "climate information system" that will gather intelligence from multiple agencies and experts both inside and outside the federal government, and allow that information to be used to forecast and mitigate the negative effects of climate change.
"The current collection of observational and model assets while important for conducting exploratory climate science do not constitute a robust, sustained, or comprehensive resource for generating actionable climate forecasts," according to the report.
Doing so requires collaboration and information sharing between multiple agencies, including the National Oceanic and Atmospheric Administration, NASA, the U.S. Geological Survey, the CIA, and the departments of Agriculture, Defense, Energy, and State, as well as private-sector climate researchers and experts.
Indeed, climate-change research is already for a priority for agencies like NASA and NOAA, which are using advanced technology such as satellites and supercomputers to study changes in Earth's weather and climate patterns.
[Supercomputers keep getting more powerful. Check out Top 10 Supercomputers: U.S. Still Dominates.]
The report lists attributes of the proposed climate information system, which the DOD would manage. Those characteristics include the collection of "reliable, sustained climate data" over decades, including observations and system models; minimal gaps in data collection and minimal service interruptions; a clearinghouse of data records that keep track of "essential climate variables"; and global data records.
Other attributes a system should have include decision support tools to "enable synthesis assessment and translation of climate data records" into metrics that have benefits to society; transparency and the ability to reproduce observational data, models, and decision support tools and analysis; and sustained support for ongoing climate research, according to the report.
The report identifies climate change as a very real problem that has broad socioeconomic implications across the globe. Therefore, finding a way to harness research being done to "manage the consequences" of climate change should be a priority for the U.S. government, according to the report.
"Changes in climate patterns and their impact on the physical environment can create profound effects on populations in parts of the world and present new challenges to global security and stability," DOD science panel co-chairs Gen. Larry Welch and Dr. William Howard wrote in a memo attached to the report. "Failure to anticipate and mitigate these changes increases the threat of more failed states with the instabilities and potential for conflict inherent in such failures." | <urn:uuid:668bdcda-40ed-48e8-a9ca-1066ca0d696a> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/-defense-dept-jumps-on-climate-change-research/d/d-id/1101410 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00019-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920425 | 591 | 2.609375 | 3 |
(17-Nov-2011) APIs and messaging protocols, including some that are standards, can let users build software-defined networks today. The key issue, though, is that not everyone implements the same ones or implements them the same way. Will OpenFlow get us all on the same path to SDN nirvana?
OpenFlow is an open source API defined to enable multivendor switches and routers to be programmable through software on a central control element -- hence, "software-defined networking." It's designed to manage and direct traffic among routers and switches from various vendors by separating the programming of routers and switches from underlying hardware in order to provide consistency in flow management and engineering.
OpenFlow proponents say the API and protocol, and SDNs in general, will open up networks to more innovation by providing a level of abstraction, or virtualization, between network control and the physical infrastructure. | <urn:uuid:bdcf03cd-c88e-4edc-936e-c042a44c412b> | CC-MAIN-2017-04 | https://www.infotech.com/research/it-computerworld-openflow-not-the-only-path-to-network-revolution | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00257-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907484 | 182 | 2.546875 | 3 |
A pilot flying over the edge of a wildfire in Southern California concentrates on keeping the helicopter over the advancing line of flames 50 feet below. In the passenger seat, an observer watches for hazards -- power lines, aerial tankers, other helicopters. In the back seat, a fire management officer with a pocket-sized computer/GPS system enters the locations of ground units, burning and threatened structures and logging roads. The input data appear on the display as color-coded symbols, along with the line of fire and the current position of the helicopter. As the craft reaches its starting point on the perimeter, the map line closes to form a polygon. With the perimeter completed, the crew, one of several tactical GIS teams, returns to the situation analysis mapping center (SAMC) where the data is loaded directly into a computer and printed out on maps.
At the SAMC, incident commanders go over the new map, assess the fires current speed and direction, the topography of the area it is moving into and the location of access roads. The map will assist them in making strategic decisions -- deployment of ground and air resources, structures to be saved and areas to be evacuated.
Scenarios similar to this took place during the Viejas wildfire that struck the mountainous region of San Diego County, Calif., in January 2001. Fanned by 65 mph winds, the fire burned 10,353 acres of the Cleveland National Forest, destroyed homes and forced the evacuation of hundreds of residents. It took 2,000 firefighters, 225 engine companies and several air tankers six days to control the fire.
GIS had a key role in suppressing the Viejas fire, according to Tom Patterson, fire management officer at Joshua Tree National Park in California. "The information we were providing to incident commanders in near realtime was critical for tactical decisions," said Patterson, who was a member of the tactical mapping team for the Viejas fire.
Maps are essential tools in fighting wildfires. They provide incident commanders with the information needed to deploy resources, track the speed, direction and perimeter of the fire, assure the safety of firefighters and plan evacuation routes. Given the rapidly changing conditions associated with this type of fire, the need for up-to-date topographic and planimetric information about an area is especially critical for resource deployment in the early stages. "Knowing where your resources are is the key to safety during any fire suppression activity," Patterson said.
In the past, just collecting the data to develop a map of a fire area could take several days. By then, a fire could have progressed beyond the area covered by the data. Today, GIS units turn out area maps in minutes. Tactical GIS teams collect fire perimeter data and transmit it to a SAMC in realtime, or have it in the hands of incident commanders within a couple of hours. For the past decade, standard equipment for the task has been laptops, separate GPS receivers and cell phones, along with cables, adapters and spare batteries. Managing all this while bouncing around in a helicopter has never been easy, particularly while observing and entering data about a fast-moving fire.
New Technologies Measure Up
In the Viejas fire, teams used ArcPad, an intuitive GIS, running on Compaqs iPAQ and Hewlett Packards Journado personal digital assistants. Each was cabled to a Garmin GPS III Plus receiver. According to Patterson, the combined package weighed one pound and fit into the pocket of a flight suit.
Patterson said Viejas was the first time the group had mapped a fire using PDAs. "The ArcPad software integrates both mapping and GPS functions so that position information is automatically displayed as moving crosshairs on an actual map, rather than as numerical coordinates. We could zoom in on the map and see our position very clearly and in realtime."
Bob Bower, a resource information specialist with the Bureau of Land Management, and member of the tactical mapping team on the Viejas fire, said the GIS software draws the perimeter without manual input. "We tell it to start a polygon file, then each time it receives a position, it advances the perimeter line. When we tell it to stop editing, it closes the polygon by zapping a line from our last position to our beginning point on the perimeter."
As the perimeter is being drawn, the observer is also entering fire area information -- lines and points in a shape file -- using either a keyboard or handwriting recognition software. Structures fully engulfed, threatened structures or those 10 miles from the fires front are each indicated on the display by different colored symbols. Bower said that since the GIS software runs on desktops and laptops as well as on PDAs, the information can be transferred directly to SAMC desktops.
Bower said his group did not use wireless modems in the Viejas fire because transmission between the helicopter and the tactical center requires an ArcIMS server site. "We dont have that yet. Also, were out in the boondocks and dont have wireless access yet to the Internet."
According to Patterson, ArcPad 6.0, which was due in July, will not only enable tactical GIS teams to transmit their position wirelessly, it will also allow them to receive data from other sources transmitting their positions. "The ideal situation is to have all division supervisors and operations staff hiking the fire line carrying wireless PDAs in their packs. Think of the tactical value -- being able to see where the fire is going, the resources coming to assist you and the locations of all your ground crews."
Patterson cautions that, despite the effectiveness of the system, ground truthing is still necessary and field observers are still needed to confirm the actual location of the fire. Still, GIS holds huge potential for future firefighting endeavors.
"GIS is in the forefront of fire fighting technology today," Patterson said. "Everyone Ive talked to says that in a few years it will be a tool as common as fire engines, hose nozzles and shovels."
Bill McGarigle is a writer specializing in communications and information technology. He is based in Santa Cruz, Calif. | <urn:uuid:a7d84d78-6b0b-407b-8583-a9c74782a155> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/GIS-Tracks-a-Moving-Target.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945437 | 1,254 | 3.265625 | 3 |
All of the application data used by BaanERP is stored in database tables in the underlying RDBMS. To keep the majority of the BaanERP processing independent of the RDBMS, BaanERP uses its own data dictionary. The data dictionary includes domain, schema, and referential integrity information that is stored in a database independent manner.
The BaanERP system provides a RDBMS interface, called 'database driver', to the major RDBMSs (Oracle, Informix, Sybase, DB2, and Microsoft SQL Server). The BaanERP database driver has a built-in mechanism for preserving referential integrity; it does not depend on the underlying RDBMS for maintaining referential integrity.
BaanERP Database Concepts
A relational database presents information to the user in the form of tables. In a table, data is organized in columns and rows. Each column (also referred to as a field) represents a category of data. Each row (also referred to as a record) represents a unique instance of data for the categories defined by the columns. A field always refers to a domain, which defines a set of values from which one or more fields can draw their actual values. For example, the "tcweek" domain is the set of all integers greater than zero and less than or equal to 53.
Every database table has a field, or a combination of fields, which uniquely identify each record in the table. This unique identifier is referred to as the primary key. Primary keys are fundamental to database operations, as they provide the only record-level addressing mechanism in the relational model. Primary keys act as references to the records in a table.
With a relational database, you can store data across multiple tables and you can define relationships between the tables. This means that individual tables can be kept small and data redundancy can be minimized. A relationship exists between two tables when they have one or more fields in common. So, for example, a Customer Detail table can be linked to an Order table by including a Customer ID field in both tables. In the Customer Detail table, the Customer ID field is the primary key. In the Order table, it is referred to as a foreign key. By linking the two tables in this way, there is no need for the Order table to include customer details such as name and address. Note that references from one table to another must always use the primary key.
Indexes facilitate speedy searching and sorting of database tables. An index is a special kind of file (or part of a file) in which each entry consists of two values, a data value and a pointer. The data value is a value for some field in the indexed table. The pointer identifies the record that contains this value in the particular field. This is analogous to a conventional book index, where the index consists of entries with pointers (the page numbers) that facilitate the retrieval of information from the body of the book. Note that it is also possible to construct an index based on the values of a combination of two or more fields. Every table must have at least one index, which is an index on the primary key field(s). This is referred to as the primary index. An index on any other field(s) is referred to as a secondary index.
With respect to database actions, a transaction is a sequence of related actions that are treated as a unit. The actions that make up a transaction are processed in their entirety, or not at all.
A transaction ends with the function commit.transaction() (all changes made during the transaction are stored in the database) or with the function abort.transaction() (no changes are stored in the database). A transaction starts either at the beginning of a process, with the function set.transaction.readonly(), with the function db.lock.table(), or after the preceding transaction has ended. A transaction is automatically rolled back (that is, it is undone) when a process is canceled and if a program ends without a commit.transaction() or abort.transaction() after the last database call. Undoing a transaction is only possible if the underlying database system supports this.
Certain database actions cannot be placed within a transaction, because they cannot be rolled back. These actions are: db.create.table(), db.drop.table(), and set.transaction.readonly()]. These functions can be called only at the start of a program or after the end of the preceding transaction.
You can set a retry point immediately before a transaction. In case of an error, the system returns to this point and re-executes the transaction from there.
A read-only transaction is a transaction in which you are permitted only to read records (without lock) from the database. You retain read consistency during the entire transaction. This means that during the transaction your view of the database does not change, even if other users update the records. A read-only transaction starts with the function set.transaction.readonly() (this must be called after ending the preceding transaction or at the beginning of the program) and ends with a commit.transaction() or abort.transaction(). A consistent view consumes a large amount of memory, so a read-only transaction must be as short as possible; user interaction during the transaction is not recommended.
Database inconsistencies can arise when two or more processes attempt to update or delete the same record or table. Read inconsistencies can arise when changes made during a transaction are visible to other processes before the transaction has been completed for example, the transaction might subsequently be abandoned.
To avoid such inconsistencies, BaanERP supports the following locking mechanisms: record/page locking, table locking, and application locking.
To ensure that only one process at a time can modify a record, the database driver locks the record when the first process attempts to modify it. Other processes cannot then update or delete the record until the lock has been released. However, they can still read the record. While one process is updating a table, it is important that other processes retain read consistency on the table. Read consistency means that a process does not see uncommitted changes. Updates become visible to other processes only when the transaction has been successfully committed. Some database systems do not support read consistency, and so a dirty read is possible. A dirty read occurs when one process updates a record and another process views the record before the modifications have been committed. If the modifications are rolled back, the information read by the second process becomes invalid. Some databases, such as SYBASE and Microsoft SQL Server 6.5, use page locking instead of record locking. That is, they lock an entire page in a table instead of an individual record. A page is a predefined block size (that is, number of bytes). The number of records locked partly depends on the record size.
Locking a record for longer than required can result in unnecessarily long waiting times. The use of delayed locks solves this problem to a great extent. A delayed lock is applied to a record immediately before changes are committed to the database and not earlier. When the record is initially read, it is temporarily stored. Immediately before updating the database, the system reads the value of the record again, this time placing a lock on it. If the record is already locked, the system goes back to the retry point and retries the transaction. If the record is not locked, the system compares the content of the record from the first read with the content from the second read. If changes have been made to the record by another process since the first read, the error ROWCHANGED is returned and the transaction is undone. If no changes have occurred, the update is committed to the database.
You place a delayed lock by adding the keyword FOR UPDATE to the SELECT statement. For example:
SELECT pctst999.* FROM pctst999 FOR UPDATE
pctst999.dsca = "...."
A retry point is a position in a program script to which the program returns if an error occurs within a transaction. The transaction is then retried. There are a number of situations where retry points are useful:
- During the time that a delayed lock is applied to a record/page, an error can occur that causes the system to execute an abort.transaction(). In such cases, all that BaanERP can do is inform the program that the transaction has been aborted. However, if retry points are used, the system can automatically retry the transaction without the user being aware of this.
- Some database systems generate an abort.transaction()] when a dirty record is read (that is, a record that has been changed but not yet committed). An abort.transaction() may also be generated when two or more processes simultaneously attempt to change, delete, or add the same record. In all these situations, BaanERP Tools can conceal the problem from the user by using retry points. It simply retries the transaction. If there is no retry point, the transaction is aborted and the session is terminated.
- In BaanERP, updates are buffered, so the success or failure of an update is not known until commit.transaction() is called. If an update fails, the commit of the transaction also fails, and the entire transaction must be repeated. If retry points are used, the system automatically retries the transaction.
- Retry points can also resolve potential deadlock problems. If, for example, the system is unable to lock a record, it rolls the transaction back and tries again.
It is vital that retry points are included in all update programs. The retry point for a transaction must be placed at the start of a transaction. The following example illustrates how you program retry points:
db.retry.point() | set retry point
if db.retry.hit() then
...... | code to execute when the system
| goes back to retry point
...... | initialization of retry point
The function db.retry.hit() returns 0 when the retry point is generated that is, the first time the code is executed. It returns a value unequal to 0 when the system returns to the retry point through the database layer.
When the system goes back to a retry point, it clears the internal stack of functions, local variables, and so on that were called during the transaction. The program continues from where the retry point was generated. The value of global variables is NOT reset.
When a commit fails, the database automatically returns to its state at the start of the transaction; the program is set back to the last retry point. It is vital, therefore, that the retry point is situated at the start of the transaction. The db.retry.hit() call must follow the db.retry.point() call. Do not place it in the SQL loop itself as this makes the code very nontransparent. When a retry point is placed within a transaction, the system produces a message and terminates the session.
BaanERP provides a table locking mechanism, which enables you to lock all the records in a specified table. A table lock prevents other processes from modifying or locking records in the table but not from reading them. This is useful when a particular transaction would otherwise require a large number of record locks. You use the db.lock.table() function to apply a table lock.
An application lock prevents other applications and users from reading and/or modifying an applications data during critical operations. It is not part of a transaction and so is not automatically removed when a transaction is committed. Instead, an application lock is removed when the application ends or when appl.delete() is called.
Microsoft SQL Server Database Driver
This section describes the RDBMS interface issues with respect to Microsoft SQL Server.
Because so many tables are needed, a convention is used for naming tables, columns within tables, and indexes to data within the tables. This chapter describes the data dictionary and the naming conventions used by the BaanERP database drivers to access data stored in the RDBMS. It also discusses how BaanERP data types are mapped to SQL Server data types.
The BaanERP data dictionary maps BaanERP data types, domains, schemas, and referential integrity information to the appropriate information in the RDBMS. When storing or retrieving data in the RDBMS, the database driver maps data dictionary information to database table definitions. BaanERP data dictionary information can be kept in shared memory where it will be available to all running BaanERP application servers. The data dictionary information is shared among all the sessions open within a single database driver.
The BaanERP data types cannot be used directly by the database driver to create SQL Server tables. This is because not all BaanERP data types exactly match SQL Server data types. To create valid SQL Server tables, the driver must perform some mapping or translation. When mapping the BaanERP data dictionary to tables in SQL Server, conventions are used for the table names, column names, and index names.
Table naming convention
The external name of a BaanERP table stored in SQL Server has the following format:
The components of the external table name are:
- A two-letter code referring to the BaanERP package the table belongs to. For example, a table defined by the Tools package has the package code tt.
- The data dictionary table name consists of a three-letter module identifier followed by a three-digit number. The module identifier refers to the module the table belongs to; the number is just a sequence number.
- Within BaanERP, three-digit numbers are used to identify different instances of the BaanERP application database, called 'companies'. Company number 000 denotes the meta-database containing various system data common to all companies (including currencies and languages used). In addition to company 000, there may be several other companies in a BaanERP system, each with its own set of tables for application data.
Column naming convention
Each column in the BaanERP data dictionary corresponds to one or more columns in a SQL Server table. The rules for column names are as follows:
- When a BaanERP column name is created in SQL Server, it is preceded by the string t_. For example, the BaanERP column with the name cpac is created in SQL Server with the name t_cpac. If a BaanERP column name contains a period, it is replaced by the underscore character.
- Long string columns
- BaanERP columns of type string can exceed the maximum length of character columns in SQL Server. The SQL Server data type CHAR has a limit of 254 characters. When a BaanERP string column exceeds this limit, the column is split into segments with up to 254 characters each. The first 254 characters are mapped to a column where the name of the column is extended with
_1. The next 254 characters are mapped to a column with a name extended by 2, and so on, until all the characters of the string are mapped to a column. For example, if a BaanERP string column called desc contains 300 characters, the following two SQL Server columns are created: t_desc_1 with size 254, and t_desc_2 with size 46.
Table 1: Mapping between BaanERP and MSQL data types.
|BaanERP data types
||MSQL data types
- Array columns
- In the BaanERP data dictionary, array columns can be defined. An array column is a column with multiple elements. The number of elements is called the depth. For example, a column containing a date can be defined as an array of three elements: a day, a month, and a year. In SQL Server, the three elements of the array column are placed in separate columns. The names of these columns include a suffix with the element number. For example, an array column called date will be transformed to: t_date_1 for element 1, t_date_2 for element 2, and t_date_3 for element 3.
Data type mapping
Table 1 shows the mapping between BaanERP data types and their SQL Server counterparts.
Note that the MSQL driver uses the SQL Server CHAR data type since ANSI-compliant behavior is expected for character data, such as with the BaanERP string type. Since BaanERP SQL expects ANSI-compliant string comparison semantics, the SQL Server CHAR data type is used instead of VARCHAR. This SQL Server data type is used because a BaanERP string data type has characteristics that conform to the ANSI specification for character data. When the CHAR data type is used, operations such as comparison and concatenation can be done in a predefined manner with predictable results.
- In addition to the above naming conventions and data types, the following rules apply when mapping BaanERP data to SQL Server data:
- Since the binary sort order is selected during the installation, SQL Server treats object names with case sensitivity.
- All columns created by the BaanERP database driver have the NOT NULL constraint. BaanERP does not support the NULL value concept of SQL.
- The date range supported by the BaanERP application server is not the same as the range for SQL Server (SQL Server is more restrictive), so some BaanERP dates are not valid when stored with the MSQL driver. The BaanERP date number 0 is mapped to the earliest possible date in SQL Server (01-Jan-1753). The earliest possible BaanERP date is then 02-Jan-1753 and the latest is 31-Dec-9999.
The ODBC interface
ODBC is an application programming interface (API) used to communicate with the database server. It is made up of a function library that can be called from an application program to execute SQL statements and communicate with the data source. The ODBC functions called by the MSQL database driver perform the following actions:
- Connect to Microsoft SQL Server (open session)
- Allocate a statement handle
- Parse a SQL statement
- Bind input variables
- Define result variables
- Execute a SQL statement
- Fetch the resulting rows
- Commit or abort a transaction
- Close, unbind and drop a cursor
- Disconnect from MSQL (close session) | <urn:uuid:8b1604f6-beb6-4d47-9bb6-a1a759237dce> | CC-MAIN-2017-04 | http://baanboard.com/node/45?s=35ad7fdbaf76ecc9d2f418e63de1eec9 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00285-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.873571 | 3,831 | 2.546875 | 3 |
The incident management life cycle begins long before an incident. Effective incident management begins with pre-incident planning, which includes mitigation and preparation.
By Hannah Snyder, xMatters
01 | Mitigate Risk
In order for a business to be resilient, it must be well prepared to respond to unexpected events. All Business Resiliency disciplines require proactive planning that focuses on mitigating risk. For incident management and crisis management, this means identifying and documenting all hazards that would present a threat to employee life safety and property. For business continuity planning, this means identifying and documenting all hazards that would negatively impact business operations. Once these risks are identified, they need to be understood, prioritized and plans must be developed that address how to respond to these events. Risk assessment is the identification and analysis of risk conducted in preparation for risk mitigation planning. Mitigation includes reducing the likelihood that a risk event will occur and/or reduction of the impact of a risk event if it does occur.
Risk mitigation strategies and specific action plans characterize the root causes of risks that have been identified and quantified in earlier phases of the risk management process. The plans evaluate risk interactions and common causes, identify alternative mitigation strategies, methods, and tools for each major risk, assess and prioritize mitigation alternatives, select and commit the resources required for specific risk mitigation alternatives and communicate planning results for implementation. Some risks, once identified, can be eliminated or reduced. However, most risks are more difficult to mitigate, particularly high-impact, low-probability risks. Therefore, risk mitigation and management must be “living” efforts, and being able to respond to various types of risk requires preparation, which must occur before an incident disrupts business operations, causes harm to employees or damages property and assets.
To continue reading, download the full article below. | <urn:uuid:cc87105b-4a55-4ce1-a4e7-8c75fba27a9e> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/the-stages-of-effective-incident-management-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00403-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942816 | 368 | 2.6875 | 3 |
Mobile routing protocol advances
- By Joab Jackson
- Mar 11, 2008
PHILADELPHIA'A network may be difficult to maintain if its routers come and go at random intervals. Most IP routing techniques rely on relatively static router configurations, a stability not likely to be encountered by mobile devices being used in combat or other highly dynamic scenarios. To this end, Naval Research Laboratory researchers are helping develop a set of routing protocols for setting up mobile ad-hoc networks (MANETs).
This week, the Internet Engineering Task Force's MANET working group has posted
three Internet Drafts, as well as published a new request for comments this week, all of which advance standards work in the area. The group met this week at IETF's 71st meeting, held in Philadelphia.
NRL researcher Joseph Macker, who started work on MANET almost two decades ago, co-chairs group, which includes engineers from Motorola, Juniper, Cisco, Johns Hopkins University's Applied Physics Laboratory and other research institutions.
According to the Working Group charter
, the MANET protocols will be used for lightweight mobile devices involved in mesh, wireless or other networks with dynamic topologies. It is tailored for devices with limited memory and computational power. It assumes that nodes will drop in and out of the network in a varying state.
According to Macker, the MANET architecture involves building a routing table in which each node not only knows its neighbors one hop away, but the nodes next to these neighbors as well. Together, all the nodes can work together to generate routing tables that can efficiently send packets through the topology.
A MANET network can work in reactive mode, in which the route is not determined until the network must convey a packet, or in proactive mode, in which the nodes confer with each other to build routing tables regardless of traffic. Proactive networks can convey traffic more quickly, though involve more processing and bandwidth overhead.
Of the three new Internet Drafts, one is on a neighborhood discovery protocol
that allows nodes to discover and work with nodes one and two hops away. A second one is about how to build a packet format
capable of carrying multiple messages. A third one, about the Management Information Base, describes
a set of tools for configuring and managing routers on a mobile network.
In addition, IETF approved "Jitter Considerations in Mobile ad-Hoc Networks" as a Request for Comment (RFC 5148
). This work suggests ways to randomly vary packet transmission times in order to avoid packet collision. Internet Drafts are submitted to IETF for consideration as standards. Once approved, they become RFCs.
Researchers at the Working Group meeting also unveiled some MANET test cases and prototype implementations. NRL has developed
a C++ library that will allow developers to package and unbundle MANET packets. The library works with standard networking interfaces such as sockets, timers and routing tables.
In addition, other researchers described implementations of MANET's Optimized Link State Routing, a protocol for building link tables for ad-hoc networks, including those built at France's
Laboratoire d'Informatique de l'Ecole Polytechnique and Japan's Niigata University.
Joab Jackson is the senior technology editor for Government Computer News. | <urn:uuid:2e1fb29b-4910-443a-a18e-a49fb5605a0e> | CC-MAIN-2017-04 | https://gcn.com/articles/2008/03/11/mobile-routing-protocol-advances.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00066-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922599 | 676 | 2.8125 | 3 |
You'd think after a few billion years of living next to each other, you'd know everything about your neighbors. But in recent days scientists have reported some unexpected and counterintuitive findings about Venus and Mars, the planets on either side of Earth in our solar system. Maybe not as unexpected as learning that the prim accountant next door is a cross-dresser, but still. According to the European Space Agency, Venus -- the second-closest planet to the Sun (after Mercury) but the hottest, with a mean surface temperature of 863ºF -- contains a "surprisingly cold region high in the planet’s atmosphere that may be frigid enough for carbon dioxide to freeze out as ice or snow." How cold? Try –175ºC, or -283ºF. To give you some perspective, the coldest recorded temperature on Earth was −128.6ºF in Antarctica in 1983. Break out the shorts and tank tops! Results of the data analysis will be published in the Journal of Geophysical Research. On to Mars, which has a mean temperature of –63ºC, or -81ºF, and goes as low as –143ºC, or -225ºF. NASA's Mars Rover Curiosity, equipped with weather data instrumentation, has detected afternoon temperatures as high as 6ºC, or 43ºF. NASA's Mars Science Laboratory scientists say temperatures have risen above freezing for more than half of the days since Curiosity has been monitoring environmental conditions in Gale Crater just south of the Martian equator. What's especially interesting about this is that it's late winter where Curiosity has been roaming since landing on Mars on Aug. 5. One would assume temperatures will climb higher during daytime in the spring and summer. It's a different story at night, however, thanks to the thin Martian atmosphere and its inability to retain heat. Curiosity has measured nighttime temperatures as low as -70ºC, or -94ºF. So if you're going out, bring a sweater.
Now read this: | <urn:uuid:5f988609-735d-41af-9a82-b57b6870a526> | CC-MAIN-2017-04 | http://www.itworld.com/article/2721865/enterprise-software/bizarre-stuff-you-never-knew-about-venus-and-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00460-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9497 | 411 | 3.5625 | 4 |
Generally the Computer Anti-Virus Reasearch Organization CARO issues virus names which are used as standard throughout the Anti-Virus
A name of a virus is generally derived from the code of the virus, or the execution of the virus.
If nothing else helps a name can be assigned to a virus randomly for the sole purpose of identifying that virus.
Before a new virus receives an official CARO name, various Anti-Virus organizations or software publishers
might give differing names to a new virus which will be used as aliases after the official naming of the concerned
Viruses are grouped into families, for example the Stoned family. The
virus families are grouped into variants and, if necessary, these variants are divided into groups of | <urn:uuid:96c75f5c-a05f-4eea-b05e-d363790deefb> | CC-MAIN-2017-04 | https://www.f-secure.com/v-descs/info/alias.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92168 | 153 | 3.296875 | 3 |
The Australian melaleuca tree was imported into South Florida in the early part of the century to dry out "worthless swampland" and transform it into lumber-producing forests. At the time, the tree was thought to have enormous water-absorbing qualities. Melaleucas were also brought in for their beauty and planted as ornamentals throughout South Florida, from Lake Okeechobee, south to the Gulf.
Although there is no scientific evidence that melaleucas draw more water than the native cypress, assumptions of early importers were not entirely incorrect. Large concentrations of the trees do dry up marshes. They also spread rapidly. Today the melaleuca is scattered over 2,000,000 acres of South Florida, with heavy concentrations in county, state and federal parks, including Big Cypress National Preserve (BCNP), a 720,000-acre watershed established by Congress in 1974 to protect Everglades National Park water. Fresh water from the preserve sustains the Everglades and acts as a natural barrier to the incursion of seawater from the gulf.
Dade County Biologist Sandra Wells described the potential impact of the tree on the county's 130-acre wetlands park, which is part of the historical Everglades. "What we have now is a grassy swamp, which we could lose. Leaf matter dropped from hundreds of melaleuca trees can raise the ground elevation, transforming marsh into forest and wiping out wildlife that live in the marsh."
Since marsh water percolates down through the earth to replenish natural underground acquifers, loss of wetlands also means loss of water resources. Amy Ferriter of the South Florida Water Management District (SFWMD) said the melaleuca is taking over water-conservation areas.
"Historically, Lake Okeechobee was largely marsh with many different plant species," she said. "There is a 100,000-acre marsh in that lake. In the 1940s, the Army Corps of Engineers planted melaleucas around the rim of the lake; now there are tree forests covering 12,000 acres of the marsh."
Tony Pernas, resource management specialist with the National Park Service (NPS) at BCNP, said several factors have contributed to the spread of melaleucas in the preserve -- offroad vehicles, ornamental plantings at home sites and hunting camps, fires, and a lack of natural enemies. "After a fire, a single melaleuca tree releases about 20 million seeds. The same fire kills off a lot of our native species, so the seeds have little or no competition for light and space. Native plants and trees have some sort of bugs that eat them and keep them in balance, but the melaleuca has no natural enemies here, no biological controls for checks and balances. The trees multiply and become so dense they eventually shade out a lot of the competition."
In 1984, NPS resource managers identified the melaleuca as a serious threat to the ecosystem of the preserve, and initiated an eradication program. Since the tree crosses many political boundaries, NPS efforts have been joined by other government and public agencies, and volunteer organizations. Eradication efforts involve systematic reconnaissance flights, aerial photography and ground crews. The driving technologies of the program are GIS and GPS, which are used to locate and provide map coverage of the trees and to maintain databases on treatment progress and follow-up programs.
According to SFWMD's Dan Thayer, the task of coordinating efforts and resources is the responsibility of an interagency steering committee made up of all the resource management people who have to deal with the melaleuca problem. "We function in a coordinated and systematic fashion," Thayer said, "so we know where everyone is working and which areas have been treated. We also share information, funding resources, and the research we all feel is needed."
Ferriter pointed out that the agencies involved in melaleuca eradication all use basically the same approach and share information on resources and procedures. "SFWMD, for example, flies periodic aerial surveys at 500 feet over pre-determined, east-west flight lines laid out in a grid across the state," she said. "An observer on each side of the plane has a Trimble Differential GPS (DGPS) receiver, with built-in datalogger. Every eight seconds the system beeps, and observers enter attribute codes relating to density and species seen in about a half-acre area. The data are given to the Forest Service, and they generate the estimated acreage for us."
Data from the aerial surveys are used to set up sectors to be treated. Ground crews go to the sectors by airboat or helicopter, depending on the distance. When crews spot a tree, they cut it down, treat the stump with herbicide, and pull up any seedlings in the area. A crew member enters the latitude and longitude of the stump into the GPS and fills out a report, noting treatment methods, number and size of trees treated and other related data.
Field data are entered into a GIS, which is used to manage information relating to tree location, size and concentration, type of treatment, date and scheduled follow-up. SFWMD Geographer Teresa Bennett first puts the GPS latitude and longitude coordinates into Florida State Plane Coordinates so they can be overlaid with databases on county lines, roads, water canals, trails and camps. "Then I make a plot that shows which sectors the field crews treated, what was done, when, and what eradication methods were used. For a basemap, we use digitized USGS 1:24,000 quad sheets. Sector treatment dates are indicated on the map by color dots. All of '94 might be in red; '93 in green; '96 in another color. We can display all the years on one map so we can tell what year a particular sector was treated."
Dade County GIS Specialist Cathy Black said, "I do a lot of the work in ArcInfo, then take it into ArcView because it makes map display and printing easier, and allows us to query management data in Paradox. For example, the data groups are assigned a plot ID number. From ArcView, I can pull up that ID number and find out the percentage of herbicide used, or the number of saplings killed -- whatever is in that database. I can also do complicated queries by taking the coverage back into ArcInfo, along with the Paradox table."
SFWMD project manager for the melaleuca program, Francois Laroche, said that after a sector has been treated, systematic follow-up treatments are scheduled over a period of several years, with field work eventually tapering off to inspections and monitoring. "For example, we have an area we've been working in for five years, and we've been back in there twice to treat those trees a year and a half apart. When we kill a tree, it's dead. But if it drops seeds, we have to go back periodically to pull up the seedlings and ensure there is no regrowth." Laroche expects that it will take five years or more of going back to sites and doing labor-intensive work. After that, it will be a matter of inspecting and monitoring scattered seedlings, which can be easily pulled.
Pernas said NPS has a similar follow-up program. "We started on the southern boundary to protect the Everglades first, then moved our way north. Every three years we go back to an area we've treated and re-treat it. We allow the seedlings to get up high enough so we can spot them and pull them up easily before they produce seeds. We've been going back to some sites for eight years, and we're still finding stuff at the original site, but each time it gets less and less. We hope eventually we will get all of it." | <urn:uuid:f7c54977-0f10-497e-a46e-5cb77a01984e> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/GIS-Aids-Eradication-of-Florida-Melaleuca.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948865 | 1,627 | 3.984375 | 4 |
Governments deliver some of society's most vital services. In an emergency, citizens look to public agencies to continue providing public assistance checks and public utilities, in addition to emergency services, including medical care or shelter. In today's technological age, governments must ensure vulnerabilities are found and mitigated before disaster strikes.
In April, Government Technology surveyed our readers and found that respondents were mostly confident in their ability to rebound after a catastrophe. Of the 121 readers who responded, about 60 percent were highly confident in their organizations' ability to maintain operations during a disaster. Eighty-one percent were at least moderately confident.
Although many of the responses showed a positive trend in government's preparedness, some experts warn that truly being prepared is more than ticking items off a checklist.
For example, 95 percent of respondents said they had identified key staff to be contacted in an emergency. But there often is confusion among those key staff members when an emergency happens, said Eric Holdeman, principal with ICF International's Emergency Management and Homeland Security Practice.
"You've identified staff," he said, "but do the staff know they've been identified? That happens much more than you'd ever believe."
Even when staff members are aware they've been identified, they don't always know what's required of them, added Holdeman. "They've never received any information about what that duty might be or received any training on what it would mean." In addition, planners must consider whether those key staff members will be available during an emergency. For instance, they may be the primary caretaker for a family member or are unable to leave their children because they have no alternate care, said Holdeman, who was director of the King County, Wash., Office of Emergency Management prior to joining ICF International.
"It's one thing to have a name on a roster; it's another thing to really have a plan in place and be prepared," Holdeman said, adding that names on the roster should belong to people who know they will be expected to perform some duty and be committed to doing so.
Practice Is Essential
David Taylor, CIO of the Florida Department of Health, said the survey results are encouraging and show improvement from a few years ago, though he believes the most important thing about having a disaster recovery plan is practicing with it.
"The risk in all of these plans is that folks create a plan and then put it on the shelf and don't pull it off until the first time they're in a crisis situation," Taylor said. "It is imperative that they practice the plan, the individuals who are involved in the plan know exactly what their role is and what the relationship of their role is to other people's roles in the plan. Unless that's practiced, people really don't understand how to implement their role in a disaster."
Tabletop exercises can be helpful, he said, but real-life simulations are the most valuable for identifying vulnerabilities. "Actually using the stuff in a closest to real-life simulation as possible - that's where the real learning takes place and the real value happens," Taylor said.
In Florida, agencies are required to test their plans at least once per year, he said. The Florida Department of Health tests its plan biannually.
Since Taylor's agency is responsible for coordinating emergency health and medical care in a disaster, the agency must be able to have necessary systems up and running in a very short time. "It's important for us to set up treatment areas regardless of the physical location - buildings, warehouses, campgrounds, etc. - so we practice that," he said. "We bring out the satellite systems, the 800 MHz radio and get all of that in place, and we test that twice a
For the most recent test, Taylor's staff went to a 4-H Club camp, where they simulated a mass evacuation due to falling satellite debris and in fewer than 30 minutes set up the technology equipment necessary to administer emergency medical care to evacuees. "We revise our processes and procedures each time we do it," he said.
Drew Leatherby, issues coordinator of the National Association of State Chief Information Officers (NASCIO), said the need for education throughout an organization is something NASCIO has tried to emphasize at the state level.
"You must have an education program in place not only to educate your critical staff, but so that even your rank-and-file staff should be aware that there is a disaster recovery plan and at least be marginally aware of what's going to be expected of them if there is a shutdown," said Leatherby, who has written several papers on the topic for NASCIO.
In addition to educating internal staff, state CIOs should work across organizations to coordinate across the state as a whole, he said.
"The state probably has its own disaster recovery plan. As an IT organization, the CIO's office should cross that boundary and make sure its [disaster recovery] plans are in sync with what's going on in the state as well," Leatherby said. "I think that may be where some of the CIO offices fall short: They just look at their critical staff when they're looking at [disaster response/business continuity] and education."
Understanding interdependencies when planning for continuity of government operations is an important component to preparation, said Holdeman. For example, even though most survey respondents (81 percent) said key staff members can access resources they need remotely, it's likely they won't be able to remotely access the network all at once. In some cases, this is due to infrastructure issues the organization cannot control. In a pandemic flu situation, if everyone signs on from home, providers may not be able to keep up, he said.
"If you're in a cable-based system, you're actually sharing bandwidth with everybody in your neighborhood," he said. "So it isn't just based on what you've put into place; it's the infrastructure that's within your community."
Working with other public and private organizations can help planners' understanding of these interdependencies, he said.
"Some would say, 'Well we're doing what we can within our realm of control,' which is appropriate, but then you have to understand - through these public-private partnerships - what the limit is," Holdeman said. "Don't just be thinking, 'We're going to save our bacon by having this in place.'"
Cross-organizational relationships are difficult, however, and require time and energy. According to survey results, close to 60 percent of respondents have relationships in place with other government agencies to assist with disasters, and the variance between state and local respondents was negligible. But when it comes to having similar relationships with private-sector entities, the response between state and local governments varied significantly: More than 50 percent of state respondents claimed those relationships existed, while only a third of local government respondents had relationships in place with private-sector entities to assist in a disaster.
Leatherby, who is also the author of two NASCIO reports on disaster recovery - IT Disaster Recovery and Business Continuity Tool-kit: Planning for the Next Disaster and Pandemic Planning and Response for State IT: Where's My Staff? - said communicating with private-sector partners before an incident is critical.
"One of the big recommendations we made in our report was to make sure that you have prepositioned contracts in place with your vendors and to make sure all your ducks are in order with your outside contractors and things of that nature," said Leatherby.
According to Holdeman, a lack of resources is one reason local governments may struggle more when working with the private sector.
"It's probably a degree of how many resources the local government has opposed to the state," he said, adding that often local government emergency management organizations consist of only one person, and in some cases, organizations devote less than a full-time equivalent (FTE) position to the job. "It's part of an FTE, and it's not his or her primary duty."
Reaching outside of the organization to find solutions also can be a cultural challenge for government, Holdeman added. Sometimes it's merely a factor of how much energy managers are willing to expend creating those relationships.
"It sounds simple, but it's really hard gaining and maintaining relationships across the board within and between governments. Then with the public-private sector, it just adds a whole new dimension."
Getting resources for those planning efforts also isn't easy, he said. "We compete against all these other daily needs for something that might happen at some point in the future."
Making the Financial Case
Judging by the survey results, disaster recovery planning in local governments is harder hit by the current budget crunch - only 12 percent of local government respondents said continuity of government projects are being maintained at previous funding levels. By contrast, nearly 30 percent of state government respondents said their continuity of government projects are maintained at the same level or exceed previous funding levels.
Holdeman said funding for disaster recovery and continuity is a difficult case to make, but planners should watch for opportunities.
"Be prepared for the windows of opportunity - even if it's not an opportunity you want - where disaster impacts or comes close to impacting your jurisdiction," said Holdeman. "If it is a televised event, those types of things call people to action, and for a short period of time you have constituencies asking their elected officials, 'What about us?'"
For example, he pointed to a situation when a major earthquake hit California and he was working for the Washington State Emergency Management Division.
"It happened on a holiday," Holdeman said. "I called the director and said, 'Hey, they just had an earthquake in California. We need to get into work and figure out what we want to ask the Legislature for because they're in session.' And sure enough, that afternoon they were calling and saying, 'What is it we should be doing?'" Holdeman said the agency requested a 24-hour duty officer and a new emergency operations center [EOC], and received the 24-hour duty officer. "It wasn't until the next big disaster in the state that we got funding for a new EOC."
NASCIO's Leatherby said part of the reason his organization produces information on continuity of government and disaster recovery planning is to help decision-makers understand the need for this type of planning.
"If all the IT functions for the state go down, they're going to be feeling the pinch from their constituents when people aren't receiving their welfare checks, when they're not able to access services and things like that," he said.
"I think you kind of have to scare the decision-makers, especially in light of all the budget problems you're having, into realizing that this is an essential line item," Leatherby continued. "It's not just a luxury." | <urn:uuid:f59be927-c0cc-4a2f-9b57-64f366587d82> | CC-MAIN-2017-04 | http://www.govtech.com/pcio/Business-Continuity-Survey-Gauges-Governments-Ability.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00514-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975116 | 2,227 | 2.515625 | 3 |
Although workers on NASA's Gravity Recovery and Interior Laboratory (GRAIL) mission may not be attending New Year's Eve parties this weekend, they aren't too disappointed.
A quarter of a million miles away, the mission's two small spacecraft will enter the moon's orbit to begin what promises to be one of the most detailed studies of its surface and gravity.
"Our team may not get to partake in a traditional New Year's celebration, but I expect seeing our two spacecraft safely in lunar orbit should give us all the excitement and feeling of euphoria anyone in this line of work would ever need," David Lehman, project manager for GRAIL at NASA's Jet Propulsion Laboratory in Pasadena, Calif., said in a statement.
The first of the two spacecraft is scheduled to arrive at the moon at 4:21 p.m. EST on Saturday, with the second arriving at 5:05 p.m. EST on Sunday. Both were launched in early September on the same United Launch Alliance Delta II rocket from Cape Canaveral, Fla.
By precisely measuring how the moon's gravity affects the distance between the two spacecraft as they orbit during the 82-day mission, researchers expect to better understand the origins of the moon, where humans may someday spend more than a few passing hours at a time. They also hope the $350 million mission will provide some insight into how the Earth and other rocky planets formed.
"I predict we are going to find something ... that is really, really going to surprise us and turn our understanding of how the Earth and other terrestrial planets formed on its ear," said Maria Zuber, the principal investigator with the mission, in an August news briefing. | <urn:uuid:a64e3cc4-98c6-4bd2-86f0-8146e7fb64c5> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/2011/12/grail-spacecraft-celebrating-new-years-from-the-moon/50381/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959162 | 341 | 3.265625 | 3 |
Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Nuclear Reactor Simulated on a Supercomputer
A New York Times article this week reports on the development of a new kind of nuclear reactor that uses depleted uranium for fuel, posing a much lower risk than traditional nuclear reactors. The so-called traveling wave nuclear reactor is emerging as a potential game-changer according to top science and energy officials.
The article explains the design:
This reactor (pdf) works something like a cigarette. A chain reaction is launched in one end of a closed cylinder of spent uranium fuel, creating a slow-moving “deflagration,” a wave of nuclear fission reactions that keeps breeding neutrons as it makes way through the container, keeping the self-sustaining reaction going.
Usually, these types of projects are publicly-funded, but, in this case, a private research firm, TerraPower LLC, is running the show. And although this is a private venture, the team gets support from MIT, DOE’s Argonne National Laboratory and other scientific centers.
According to the head of TerraPower, former Bechtel Corp. physicist John Gilleland, the reactor, once ignited, could continue to react for 100 years.
“We believe we’ve developed a new type of nuclear reactor that can represent a nearly infinite supply of low-cost energy, carbon-free energy for the world,” Gilleland said.
The project relies on supercomputing resources to simulate and verify the traveling wave concept. The supercomputers are also engaged in finding alloys for the reactor cylinders that can withstand the heavy damage caused by neutron impacts.
The story is replete with lots of “ifs” and “whens” and acknowledges that no one has actually created a working deflagration wave. However, the Massachusetts Institute of Technology’s Technology Review magazine selected the traveling wave reactor last year as one of 10 emerging technologies with the highest potential impact.
Gilleland said that we may see a commercial version of the reactor in 15 years, pending a working physical prototype.
NSF Award to Create Center Dedicated to Reducing Power Consumption
The National Science Foundation (NSF) has awarded $24.5 million to UC Berkeley researchers for the development of a multi-institutional center whose aim is to increase the energy-efficiency of electronics. The lofty goal? A million-fold reduction in the power consumption of electronics. The five-year NSF grant will be used to establish the Center for Energy Efficient Electronics Science, or E3S.
From the release:
To reduce the energy requirement of electronics, researchers will focus on the basic logic switch, the decision-maker in computer chips. The logic switch function is primarily performed by transistors, which demand about 1 volt to function well. There are more than 1 billion transistors in multi-core microprocessor systems.
Eli Yablonovitch, UC Berkeley professor of electrical engineering and computer sciences and the director of the Center for E3S, explains that the transistors in the microprocessor are what draw the most power in a computer, giving off heat in the process.
According to Moore’s Law, named after Intel co-founder Gordon E. Moore, the number of transistors on an integrated circuit double every two years. But Moore also predicted that the power consumption of electronic components will drop dramatically.
Researchers plan to design lower-voltage transistors, noting that the wires of an electronic circuit could operate on as little as a few millivolts. Power needs drop by the square of the voltage, so a thousand-fold reduction in voltage requirements adds up to a million-fold reduction in power consumption, says Yablonovitch.
With the increase in information processing needs skyrocketing, the importance of changing the underlying power requirements at the most basic level of our computational technology cannot be overstated. | <urn:uuid:a1e20a89-e211-4866-9ad7-27cce563cecb> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/02/25/the_week_in_review/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924461 | 823 | 2.9375 | 3 |
Common Science Questions with Answers
Here is a collection of some commonly asked science questions with answers for students and parents. Please share more of such questions in your comments.
What influence does the moon have on the Earth?
The main effect of the moon on Earth is its influence on tides. Ocean tides result because the side of the Earth that is facing the moon is affected by the moon's gravity more than the center or opposite side. This creates the effect of ocean water constantly being attracted to two bulges on opposite sides of the Earth, thus creating the tides.
How hot is the sun?
The surface of the sun is 5,778 K, or 9,941 degrees Fahrenheit. The core of the sun, however, is a staggering 15,700,000 K, or 28,259,540 degrees Fahrenheit. The Earth is just far enough away from the sun to not burn up and just close enough to not turn into a frozen wasteland. Other planets in our solar system are uninhabitable because of their proximity to the sun.
What is an endangered species?
An endangered species is defined as any species at risk of extinction. This can be caused by dwindling numbers or impending environmental changes. Although the term endangered species is used in a broad manner, there are other classifications within the larger category of threatened organisms, including vulnerable and critically endangered. Examples of endangered species include blue whales, snow leopards, tigers, and the albatross.
How many types of insects are there?
Six and 10 million unique species of insects are in existence on our planet. However, only around 1 million of these have been officially discovered. Among discovered insects, beetles are greatest in number, with around 360,000 unique species. A total of 170,000 species of butterflies have been recorded, while only 300 types of webspinners have been discovered.
What is global warming?
Global warming is a description of the increase in Earth's surface temperature. This phenomenon has been noticed since the mid-1900s, and is predicted to continue unless humans do something to curb or reverse it. Global warming is widely acknowledged to be a result of the greenhouse effect: as our atmosphere grows thicker with greenhouse gases, such as carbon dioxide, the sun's rays penetrate the atmosphere, reflect off of the Earth's surface, and are then unable to pass back through the atmosphere, heating up the earth.
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:7ba91ec4-0657-4cf2-ad78-5afe579d1db8> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-943.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00020-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94606 | 579 | 3.765625 | 4 |
I have two files. I need to compare date variables of these two files. When i declared two variables for two files and when compared the dates... the data in the first datastep is having value but when it comes to another datstep it is showing .(missing value).
The missing value in SAS (period) means that the numeric variable did not have a number read in the position(s) you specified. Please post the SAS code and the error messages in the SASLOG for more help.
It will always say that ... you've got a DATA step followed by a DATA step. Unless you link the two, the data read in the second step has no relationship to the data read in the first step. From your code, what I think you need to do is:
IF _N_ = 1 THEN SET PARM_CARD;
INFILE REINACDC EOF=TOTALS; | <urn:uuid:e622fd51-8a6d-44cf-91ba-304e38c865b2> | CC-MAIN-2017-04 | http://ibmmainframes.com/about33208.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901806 | 189 | 2.515625 | 3 |
Table of Contents
Windows 8 hides certain files so that you are not able to view them while exploring the files and folders on your computer. Windows has two types of files that Windows will classify as hidden and hide from the user. The first type are actually hidden files, which are ones that are given the +H attribute or specified as Hidden in a file or folder's properties. The second type of file are System files, which are files that are required for the proper operation of Windows 8 as thus are hidden so that they are not changed or deleted by accident.
There are times, though, that you need to see the files that are hidden on your computer. Whether it is because malware has created them and set them to be hidden or you need to repair a problem on your computer that requires you to view Hidden or System files. Due to this it can be beneficial at times to be able to see all files, including hidden ones, that may be on your computer. This tutorial will explain how to show all hidden files in Windows 8.
If you just need to see hidden files and you do not wish to see the files that are classified as Windows 8 System files, then please follow these steps. Please note that this is the recommended setting if you wish to see just hidden files.
You will now see hidden files and a filename's extension.
If you need to see system and hidden files in Windows 8, then please follow these steps:
You will now be able to see all Windows 8 system files and any files that have been marked as hidden on your computer. To reverse these changes, simply go back into the Folder Options screen as described above and change the settings to Don't show hidden files, folder or drives and uncheck Hide extensions for known file types and Hide protected operating system files (Recommended).
Without a doubt, being able to view any and all the files on your computer is an immensely useful tool when troubleshooting Windows problems. Using the instructions above you can enable the viewing of all hidden and system files so that you can properly troubleshoot your issues, and when finished, revert them back to Windows' default settings. If you have any questions about this process please feel free to post them in our Windows 8 forums.
The Windows 8 Metro Start screen contains small squares and rectangles, called tiles, that are used to represent various programs that you can access. The default tiles that are on your Start screen are not, though, the only programs that you can add. It is possible to add other programs by searching for them or using more advanced techniques to make them available. This guide will explain how to ...
Windows 7 hides certain files so that they are not able to be seen when you exploring the files on your computer. The files it hides are typically Windows 7 System files that if tampered with could cause problems with the proper operation of the computer. It is possible, though, for a user or piece of software to set make a file hidden by enabling the hidden attribute in a particular file or ...
By default Windows hides certain files from being seen with Windows Explorer or My Computer. This is done to protect these files, which are usually system files, from accidentally being modified or deleted by the user. Unfortunately viruses, spyware, and hijackers often hide there files in this way making it hard to find them and then delete them.
In the past when you wanted to uninstall an application in Windows, you would uninstall it from the Uninstall a Program control panel. Though this option still exists for installed programs, Metro Apps that are purchased from the Windows Store or that come with Windows 8 are not shown in this control panel. In order to uninstall these Apps, you will need to use a different procedure. To uninstall ...
The Windows 8 Metro Start screen is designed to make it so that you can easily resize and move tiles as well as make new tile groups. This allows you to organize the interface in a way that works best for you. The instructions below will explain how you can perform these tasks in the Windows 8 Start screen. | <urn:uuid:55de6c12-ae21-4b15-a6f4-a503e95a5890> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/show-hidden-files-in-windows-8/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940166 | 822 | 2.65625 | 3 |
The 3D smartphone will be blasted off with a cargo spacecraft to the ISS on Friday.
NASA is planning to take the smartphone with 3D sensing technology, being developed under Google’s Project Tango device, to space to use it on International Space Station.
The 3D sensing technology will be used in the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES.
Ball shaped SPHERES is being prepared to carry out the daily tasks of astronauts as well other high risk duties outside the space station.
This smartphone has customised hardware and software that has human like qualities of understanding of motion and space. It features motion-tracking camera and an infrared depth sensor which can create a 3D map to let the SPHERES easily navigate through modules.
In 2010 engineers at NASA’s Ames Research Center in Mountain View, California tried to make the SPHERES smarter by adding redesigned smartphones which included shatter proof display and extra batteries. It gave the satellites a visual capability but that wasn’t enough.
NASA’s Smart SPHERES project manager Chris Provencher told Reuters, "This type of capability is exactly what we need for a robot that’s going to do tasks anywhere inside the space station. It has to have a very robust navigation system."
"We wanted to add communication, a camera, increase the processing capability, accelerometers and other sensors. As we were scratching our heads thinking about what to do, we realized the answer was in our hands." | <urn:uuid:7fb6779f-d442-4a42-baed-596856b92203> | CC-MAIN-2017-04 | http://www.cbronline.com/news/mobility/devices/nasa-to-take-googles-3d-smartphone-to-space-4313214 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926746 | 316 | 2.984375 | 3 |
What is the right amount of water to drink daily?
This is an important information about the right amount of water one must drink daily. Please read every word of it and share with your friends as well.
A lot of people ask this question and they want to know the right amount of water to drink daily. Learn how much water you should drink every day. You need half of your body weight in ounces of water daily. This is the number most individuals recognize and aspire to for their daily water intake. A good rule for drinking the right amount of water is to drink one glass of water each hour. An average person should drink this amount of water daily. It is the recommended daily water intake amount to drink 12 glasses of water. If you drink alcohol, you should drink at least an equal amount of water. Drinking a good amount of water could lower your risks of a heart attack.
Water is such a critical part of our health and physiology that when a human body gets an insufficient amount of it on a regular basis, the negative health effects are virtually endless. The question remains, however, how much water is the right amount? Many health experts have made a general recommendation of 12 glasses of water every day. This is certainly an increase in water consumption for most people, and as such following this recommendation would be an improvement for many. However, it seems to go against common sense to suggest that there is any single amount of water that is right for all people under all circumstances. People are of vastly different sizes, live in very different climates and have very different levels of physical exertion, and 12 glasses a day for all of them defies any sort of logic.
How then do you decide how much water is right for you? Fortunately, there is a simple and, in most circumstances, foolproof way to easily check whether you are giving your body enough water. All that most people need to do to accurately gauge their water consumption needs is to note the color of their urine. Unless there is reason to suspect a health condition that would affect the color of urine, such as jaundice, then this is an excellent and sensitive way to see if you are giving your body enough water on a regular basis.
When your body is getting a sufficient supply of water, the urine will be very light in color. When urine is a dark yellow in color, this is a strong sign that your body is dehydrated and would benefit from a substantial increase in the amount of water you drink every day. Simply begin to increase the amount of water you drink slowly over a period of several days, and you will notice quickly the change in urine color. Once you have consistently reached the point where the color is very light, you will know that you are drinking the amount of water that is generally appropriate for your body and your current lifestyle.
- 75% of Americans are chronically dehydrated. (Likely applies to half the world population)
- In 37% of Americans, the thirst mechanism is so weak that it is mistaken for hunger.
- Even MILD dehydration will slow down one's metabolism as 3%.
- One glass of water will shut down midnight hunger pangs for almost 100% of the dieters according to a University of Washington study.
- Lack of water, the number 1 trigger of daytime fatigue.
- Preliminary research indicates that 8-10 glasses of water a day could significantly ease back and joint pain for up to 80% of sufferers.
- A mere 2% drop in body water can trigger fuzzy short term memory, trouble with basic math, and difficulty focusing on the computer screen or on a printed page.
- Drinking 5 glasses of water daily decreases the risk of colon cancer by 45%, plus it can slash the risk of breast cancer by 79% and one is 50% less likely to develop bladder cancer.
Most of us know we should be drinking more water, but how do you know how much water is the right amount for you? The same amount can't be right for everyone! Fortunately, the above mentioned tips are a simple and common sense way to accurately gauge your water consumption, under any circumstances.
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:e50307f2-cc6f-4c35-8e9a-bb4a616e71dc> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-880.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00010-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966955 | 933 | 2.53125 | 3 |
Last week, Mathematica inventor Stephen Wolfram announced that he would be launching a new kind of Internet search engine in May, with the not-so-modest name of Wolfram Alpha. Actually it isn’t a search engine at all — it’s more like a fact-finding engine. Wolfram himself calls it a “computational knowledge engine,” which is meant to convey the idea that the software will be computing results, rather than just indexing Web pages based on keywords.
So presumably you could ask Alpha a question like “When was the last time SGI’s stock was above $10 a share?” and Alpha would go fetch — sorry, compute — the answer. (By the way, the answer is Oct. 8, 2008.) Usually queries like that would only be possible with highly-specialized database applications. Alpha aims to generalize that kind of capability using only the unstructured data that exists on the Web. If Wolfram has really succeeded in doing this, it would introduce an entirely new kind of Web interaction.
Well, maybe not entirely new. Even Google lets you do simple math calculations in its search box and allows you to find definitions of words by using a “define:” prefix on a keyword. But the scope of Alpha is much larger.
Wolfram provided few details on how his new software would be implemented and made no mention if he had a supercomputer in his basement to do all this computational heavy lifting. The basic idea is that you would present a question to the Alpha box (which coincidentally looks a lot like Google’s search box). Alpha would then untangle the semantics of the question, map it to the desired operation and then go distill the answer from the available data.
The last step is the most mysterious part. Whereas some people have suggested that the Web’s data needs to be systematically tagged to make it semantically friendly, Wolfram has apparently taken a different tack. He says the engine will be based on Mathematica and the computational approach outline in his 2002 book, A New Kind of Science (NKS) to provide the engine’s intelligence. In a nutshell, NKS describes a way of applying a general set of methods to computational systems, the idea being that a great deal of complexity is able to arise from a very simple set of rules.
If that’s not obtuse enough for you, Wolfram says the Alpha engine will “explicitly implement methods and models, as algorithms, and explicitly curate all data so that it is immediately computable.” I’m not at all sure what he means by “curate all data” other than reorganizing the Web data on the fly so that it’s more digestible to the software. Any way you look at, Alpha’s going to need a lot of smarts to do even basic fact finding, especially considering that there’s no way to verify the accuracy of data encountered on the Web.
All of this might be passed off as worthless hype, except for the fact that Wolfram has plenty of street cred in the industry. As the founder and CEO of Wolfram Research, he has built a highly successful business based on a very useful piece of software. Before his success in business, Wolfram was an accomplished scientist in his own right, having received his Ph.D. in particle physics from Caltech at the age 20. His NKS book opened to mix reviews, but the ideas presented in the text show he can still plumb the depths of computational theory and mathematical modeling.
I’m anxious to see how Alpha performs in the wild. If it lives up to even a fraction of the hype it has generated, Alpha is destined to become a common Web tool like Google and Wikipedia. And maybe someday, when your son or daughter asks you why the sky is blue, you’ll say: “Let’s wolfram it.” | <urn:uuid:6433c4ae-dba0-4e9b-83c5-6dd329b58137> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/03/11/gird_your_loins_google_here_comes_wolfram_alpha/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00312-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956406 | 828 | 2.890625 | 3 |
You probably don't interact with your PC's BIOS (Basic Input/Output Operating System) much, but it occupies a unique and highly privileged position in your computer's architecture. Since the BIOS loads before the operating system--and before you enter your user credentials--malware surreptitiously introduced into the BIOS could activate itself long before any anti-malware software has an opportunity to detect it. A sophisticated and malicious program operating at such a low level could take control of your PC without providing a clue that it was there.
And fortunately, there have been very few confirmed cases of malware infections at the BIOS level. The most famous is 1998's Chernobyl virus, and the vulnerabilities that enabled that exploit are not present in new PCs. UEFI (Unified Extensible Firmware Interface) and the secure boot mechanism in Windows 8 will make this less of an issue, but that's a topic for another article. But it's always better to be safe than sorry. The first step in your safety plan is to protect your BIOS with an administrator password that must be entered before a BIOS update can occur. We'll show you how.
Step 1:Boot or reboot your PC. While it's starting up, repeatedly tap the 'DEL,' 'F1,' or whatever other special key is required to launch the BIOS. This information is typically displayed onscreen during the boot process, although it might not be immediately obvious. This text, for instance, appears verbatim at the bottom of the screen for just a few moments after we start our computer:
Step 2: Once your BIOS setup menu is loaded, look for the menu item that enables you to set up a password. There might be more than one. Our BIOS, for example, has provisions for setting up both a "supervisor" password and a "user" password. In our case, you must log in with the supervisor password to make changes to the BIOS. The user password only allows you to see the current BIOS values.
Step 3: Select the menu item for creating the password and enter a password (usually twice, to verify what you typed the first time). If you think you might have trouble remembering the password later, as you'll access your BIOS infrequently, store it in a password locker utility such as LastPass. Save your BIOS changes and your computer will reboot. From here on out you'll need to enter this password before any changes can be made to your BIOS, ensuring malware will have a harder time harming your PC.
This story, "How to secure your BIOS" was originally published by PCWorld. | <urn:uuid:28df3048-3f29-414e-9f6a-edb7ac111a54> | CC-MAIN-2017-04 | http://www.itworld.com/article/2728573/security/how-to-secure-your-bios.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00489-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936468 | 528 | 2.59375 | 3 |
Security Exam Study Strategies
Several study ingredients are key to security certification success, for security credentials at all levels. But because there are so many security certifications to choose from, we won’t describe a detailed plan of attack for some particular security certification exam. Rather, we’ll provide a set of general guidelines and approaches that should help you prepare for just about any security exam.
We’ll also identify a collection of key topics about which any competent security professional should be knowledgeable. Most programs cover these topics at some level of detail or another—the more senior or advanced certifications tend to dig more into the details, while the more junior or less advanced programs tend to concentrate more on concepts, terminology and basics.
- Master the basics of information security: Obtain and read a good general security book; use it to teach yourself the vocabulary, concepts, tools and techniques associated with good information security. Consider this a necessary orientation to the overall subject matter.
- Understand security policy: A security policy captures an organization’s posture to identify and protect key assets, including information, systems, facilities and more. Formulating security policy requires performing risk analyses and assessments. Maintaining proper security means revising and revisiting security policy in light of new threats or attacks, new tools and technologies to foil or avoid them and changes to organizational priorities and investments. In short, it’s a job that’s never done!
- Investigate the ins and outs of risk assessment: Ultimately, security rests on an understanding of what’s at risk and what protecting an organization from such risks is worth. It’s essential to understand how this exercise works, the steps involved in completing it, what kinds of tools exist to support the process and what kinds of documentation should be created to capture the results. Any decent security book will spend some time on this topic.
- Recognize, catalog and analyze threats and appropriate responses: A sense of history and a feel for the flow of events that define security countermeasures is essential to practicing security as a discipline. This means familiarizing yourself with at least the outlines of the catalog of known attacks and threats, recognizing well-known types and classes and understanding what kinds of responses are appropriate, both in the short and long terms. Keeping up with current events in the security field means keeping current on attacks and exploits on an ongoing basis.
- Understand the security regimen: New attacks happen regularly; new vulnerabilities are discovered in the platforms and applications in use in your organization; new security risks, countermeasures, tools and techniques come along all the time. Security is an ongoing process, not a “fix it and forget it” task. Learning about security practices, processes and procedures will reinforce this notion, but understanding the day-to-day work involved will be your single most intense and valuable learning experience in becoming a competent information security professional.
The key security topics about which every well-prepared certification candidate should be informed include the following:
- Cryptography and keys: As we’ve mentioned elsewhere in this study guide, modern information security rests firmly on a foundation of cryptography and related services and benefits. These include the notions of privacy, confidentiality, digital signature, symmetric and asymmetric keys and the public key infrastructure (PKI) and all the types of keys and cryptography algorithms that make these things both possible and important. Any competent security professional, no matter what level, needs to understand the terms and concepts that fall within this broad heading. The more senior the credential, the more the candidate needs to understand its inner workings, as well as related design, implementation, management and troubleshooting issues, practices and procedures.
- Securing communications: Given ubiquitous access to the glorious but dangerous Internet and the ease with which users come and go across organizational boundaries into what virus hunters call “the wild” (public networks in general), an understanding of communications is essential to establish competency as an information security professional. This means mastering the protocols and vulnerabilities involved, as well as key tools and technologies including virtual private networks, tunneling, address and port translation, firewalls, intrusion detection systems, filtering and proxying methods and other communications security tools and techniques on an “as needed” basis.
- Implementing security policy: Ultimately, security policy reflects an organization’s posture, practices, education, investment and dependence on information security. There is no single topic that ties the field together more thoroughly, nor is there any other topic that professionals must understand more completely, than this one. More junior candidates need to understand its contents, requirements and lifecycle. More senior candidates need to know how to design, implement and manage security policy and should understand its significance at the business level (finance, strategy, costs and benefits) as well.
- Understanding physical security: Without physical security, no other kind of security is possible. Creating good physical security involves evaluating and securing one’s premises, managing access to sensitive information, equipment and infrastructure and more. Above all, it involves educating users about the importance of security and the risks and consequences associated with security failures. Make it a hobby horse as you prepare for certification, and you’ll be repaid for your obsession on your exams, and in the workplace!
You’ll find plenty of places to turn for more information and education on the topic of information security in our resource guide, online at www.certmag.com/issues/feb02/sg/securityresources. But please remember that your best overall study strategy when preparing for any security certification exam is to be well-informed on all the relevant topics. Although we give you a great leg up in this article, you should make a thorough review and analysis of the actual exam’s objectives the linchpin of your preparation efforts. Build a laundry list of areas where you need more knowledge, understanding, skills and experience (or any combination of these factors), use it to drive your studies and bone up all you can on topics where you’re not completely comfortable, and it’s hard to go wrong on any exam. Good luck!
Ed Tittel is president of LANwrights Inc. and is contributing editor for Certification Magazine. Ed can be reached at firstname.lastname@example.org.
James Michael Stewart is a senior writer, project manager and instructor at LANwrights Inc. He can be reached at email@example.com. | <urn:uuid:baa0c487-8d52-4fdb-9b44-e58b6906c64f> | CC-MAIN-2017-04 | http://certmag.com/security-exam-study-strategies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926738 | 1,319 | 2.671875 | 3 |
Vishing and Toll Fraud
Vishing is quite similar to the term Phishing and it means collecting private information over the telephone system.
In the technical language the terminology of phishing is a recent addition. The main concept behind phishing is that –mail is sent to user by an attacker. The e-mail looks like a form of ethical business. The user is requested to confirm her/his info or data by entering that data on the web page, such as his/her “social security number”, even “bank or credit card account” number, “birth date”, or mother’s name. The attacker can then take this information provided by the user for unethical purposes. | <urn:uuid:4bb835f8-16b7-4ea3-aea8-0c90a38f16c1> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/vishing-and-toll-fraud | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00241-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934894 | 147 | 2.9375 | 3 |
Benn A.R.,UK National Oceanography Center |
Weaver P.P.,UK National Oceanography Center |
Billet D.S.M.,UK National Oceanography Center |
van den Hove S.,Median |
And 4 more authors.
PLoS ONE | Year: 2010
Background: Environmental impacts of human activities on the deep seafloor are of increasing concern. While activities within waters shallower than 200 m have been the focus of previous assessments of anthropogenic impacts, no study has quantified the extent of individual activities or determined the relative severity of each type of impact in the deep sea. Methodology: The OSPAR maritime area of the North East Atlantic was chosen for the study because it is considered to be one of the most heavily impacted by human activities. In addition, it was assumed data would be accessible and comprehensive. Using the available data we map and estimate the spatial extent of five major human activities in the North East Atlantic that impact the deep seafloor: submarine communication cables, marine scientific research, oil and gas industry, bottom trawling and the historical dumping of radioactive waste, munitions and chemical weapons. It was not possible to map military activities. The extent of each activity has been quantified for a single year, 2005. Principal Findings: Human activities on the deep seafloor of the OSPAR area of the North Atlantic are significant but their footprints vary. Some activities have an immediate impact after which seafloor communities could re-establish, while others can continue to make an impact for many years and the impact could extend far beyond the physical disturbance. The spatial extent of waste disposal, telecommunication cables, the hydrocarbon industry and marine research activities is relatively small. The extent of bottom trawling is very significant and, even on the lowest possible estimates, is an order of magnitude greater than the total extent of all the other activities. Conclusions/Significance: To meet future ecosystem-based management and governance objectives for the deep sea significant improvements are required in data collection and availability as well as a greater awareness of the relative impact of each human activity. © 2010 Benn et al. Source
Tinch R.,Median |
Balian E.,Median |
Carss D.,UK Center for Ecology and Hydrology |
de Blas D.E.,CIRAD - Agricultural Research for Development |
And 13 more authors.
Biodiversity and Conservation | Year: 2016
To address the pressing problems associated with biodiversity loss, changes in awareness and behaviour are required from decision makers in all sectors. Science-policy interfaces (SPIs) have the potential to play an important role, and to achieve this effectively, there is a need to understand better the ways in which existing SPIs strive for effective communication, learning and behavioural change. Using a series of test cases across the world, we assess a range of features influencing the effectiveness of SPIs through communication and argumentation processes, engagement of actors and other aspects that contribute to potential success. Our results demonstrate the importance of dynamic and iterative processes of interaction to support effective SPI work. We stress the importance of seeing SPIs as dynamic learning environments and we provide recommendations for how they can enhance success in meeting their targeted outcomes. In particular, we recommend building long-term trust, creating learning environments, fostering participation and ownership of the process and building capacity to combat silo thinking. Processes to enable these changes may include, for example, inviting and integrating feedback, extended peer review and attention to contextualising knowledge for different audiences, and time and sustained effort dedicated to trust-building and developing common languages. However there are no ‘one size fits all’ solutions, and methods must be adapted to context and participants. Creating and maintaining effective dynamic learning environments will both require and encourage changes in institutional and individual behaviours: a challenging agenda, but one with potential for positive feedbacks to maintain momentum. © 2016 Springer Science+Business Media Dordrecht Source
Young J.C.,UK Center for Ecology and Hydrology |
Waylen K.A.,James Hutton Institute |
Sarkki S.,University of Oulu |
Albon S.,James Hutton Institute |
And 14 more authors.
Biodiversity and Conservation | Year: 2014
A better, more effective dialogue is needed between biodiversity science and policy to underpin the sustainable use and conservation of biodiversity. Many initiatives exist to improve communication, but these largely conform to a 'linear' or technocratic model of communication in which scientific "facts" are transmitted directly to policy advisers to "solve problems". While this model can help start a dialogue, it is, on its own, insufficient, as decision taking is complex, iterative and often selective in the information used. Here, we draw on the literature, interviews and a workshop with individuals working at the interface between biodiversity science and government policy development to present practical recommendations aimed at individuals, teams, organisations and funders. Building on these recommendations, we stress the need to: (a) frame research and policy jointly; (b) promote inter- and trans-disciplinary research and "multi-domain" working groups that include both scientists and policy makers from various fields and sectors; (c) put in place structures and incentive schemes that support interactive dialogue in the long-term. These are changes that are needed in light of continuing loss of biodiversity and its consequences for societal dependence on and benefits from nature. © 2014 The Author(s). Source | <urn:uuid:74f00ae0-a2f0-453a-8eff-7c752906962c> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/median-958433/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91535 | 1,114 | 3.171875 | 3 |
In a January 30 address explaining his Computer Science For All initiative, President Barack Obama said, “We have to make sure all our kids are equipped for the jobs of the future, which means not just being able to work with computers, but developing the analytical and coding skills to power our innovation economy.”
The initiative, if approved by congress, earmarks $4 billion for states and another $100 million for districts to train teachers and purchase the tools for elementary, middle and high schools to provide opportunities to learn computer science to promote more Science, Technology, Engineering and Math (STEM) skills. The funding programs, which will appear in the president’s forthcoming budget proposal for 2017, are just the latest effort from the White House to bring more science and technology education to students.
With this said, adding more computer science and STEM instruction to K-12 teaching and learning in the next year is most likely inevitable. In order to prepare for adding more coding in the classroom, though, schools need to lay the groundwork by creating and implementing solid technology management and digital citizenship practices.
Technology management and STEM
In order for more coding and other hands-on STEM learning to become an integral part of every student’s education, school’s management of technology and digital devices will need to be well thought out and executed. Schools are already beginning to beef up technology infrastructure and Internet connectivity through initiatives like e-Rate funding. Additionally, schools are purchasing more and more digital devices like chromebooks and tablets so that students in all grades have access to online resources. But how are these devices managed?
The U.S. Department of Education Office of Educational Technology’s 2014 publication, Future Ready Schools: Building Technology Infrastructure for Learning, stresses the importance of planning and implementing procedures that employ system-level controls for device and application management. School district staff should be able to push out updates, security protocols and other critical functions from a central location (versus physically touching each device).
As more devices are added and more students use them, this will become increasingly important. Schools will need software that not only allows remote management of devices, but allows remote monitoring of how and when the devices are being used. This will prevent misuse by students while saving significant amounts of time for IT managers.
Digital citizenship and STEM
As more and more teaching and learning of coding is adding into K-12 education, it will be imperative to allow students and teachers to access the resources required to do so. Both private and corporate organizations, such as Cartoon Network and MIT Media Lab, are taking initiative to provide curriculum for coding. But in order for students access these resources, they have to be unblocked on school’s networks. Network management will need to shift from simply blocking and filtering websites and apps to more robust pairing of digital citizenship and monitoring of online activity.
To address this idea of monitoring and promoting digital citizenship, the Future Ready Schools publication also states, “Less ability to modify or change the device settings can make it easier for IT staff to maintain devices, but gives students less freedom to personalize devices for their needs. The decision to allow more control over a device may vary depending on the student. A multitiered model of permissions and restrictions gives students who demonstrate responsible behavior more privileges and restricts access for students who fail to show responsible behavior. As you consider these policies, remember that restricting a student’s access in one class will affect that student’s ability to participate in learning in subsequent classes as well.”
Having a technology management system in place that allows the remote monitoring of all devices, coupled with a multitiered model of monitoring online activity, will allow students to make choices and be responsible online. This will then allow instructors and schools to give students all the resources necessary to get the most from computer science and STEM education in the future.
Need a network, classroom, and e-safety management solution? Impero Education Pro software can help. For more information, call 877.883.4370 or email now. | <urn:uuid:2e5eb313-6983-4c71-b1b7-3d07d0bf28ca> | CC-MAIN-2017-04 | https://www.imperosoftware.com/adding-more-stem-education-in-k-12-starts-with-digital-citizenship-and-technology-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933119 | 824 | 3.265625 | 3 |
Productivity and speed are the two major criteria in supply chain and logistics. Cross-docking is just one strategy which can be implemented to help achieve a modest advantage. Implemented appropriately and in the right conditions, cross-docking can provide significant improvements in efficiency and handling times.
What is cross docking?
The name ‘cross docking’ explains the process of receiving products through an inbound dock and then transferring them across the dock to the outbound transportation dock.
Cross docking is a procedure in logistics industry where products from a supplier or manufacturing plant are distributed directly to a customer or wholesale chain with marginal to no handling or storage time. Cross docking takes place in a distribution docking terminal; usually consisting of trucks and dock doors on two (inbound and outbound) sides with minimal storage space.
In modest languages, inbound products arrive through road transportation such as trucks/trailers, and are assigned to a receiving dock on one side of the ‘cross dock’ terminal. Once the inbound transportation has been docked its products can be moved either directly or indirectly to the outbound destinations; then they can be unloaded, sorted and screened and other processes gets completed to identify their end destinations. After being sorted, products are moved to the other end of the ‘cross dock’ terminal via a MHE’s, conveyor belt, pallet truck or another means of transportation to their destined outbound dock. When the outbound transportation has been loaded, the products can then make their way to customers.
When is cross-docking used?
The process of cross docking will not outfit all warehouses needs. It is therefore important to make a conversant decision as to whether cross-docking will increase the productivity, costs and customer satisfaction for your specific business. Cross docking can improve the supply chain for a variety of specific products. For instance, unpreserved or temperature controlled items such as food which need to be transported as quickly as possible can be benefitted by this process. Additionally, already packaged and sorted products ready for transportation to a particular customer can become a faster and more efficient process through cross docking.
Some of the main reasons cross docking is executed is to:
- Provide a vital site for products to be sorted and similar products combined and delivered to multiple destinations in the most cost effective and fastest way. This process is also known as “hub and spoke”
- Conglomerate various smaller product loads into one mode of transport to save transportation costs. This process is defined as ‘consolidation processes’.
- Break down large product loads into smaller loads for transportation to create an easier delivery process to the customer. This process is defined as ‘deconsolidation processes’.
Designing the cross docking facility
Cross-dock facilities are generally designed in an "I" configuration for facilities with 150 doors or less. The goal in using this shape is to maximize the number of inbound and outbound doors that can be added to the facility while keeping the floor area inside the facility to a minimum.
For facilities with 150–200 doors, a "T" shape is more cost effective.
For facilities with 200 or more doors, the cost-minimizing shape is an "X".
Cross docking is the process which is followed to move the shipments without storing the cargos in logistics and distribution. HCL is focussed on building new solutions and propositions to meet increasing customer expectations in the 3PL industry. | <urn:uuid:826a1308-f969-4555-a69d-44a7c9ee6d1d> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/travel-transportation-hospitality-and-logistics/cross-docking-warehouse | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948579 | 716 | 2.796875 | 3 |
This post has nothing to do with IT, just happened to have been a curiosity conjured during my travels up North and back down South on various IT projects.
The Earth is not a perfect sphere, it is a spheroid that bulges out at the equator – the Earth's equatorial radius is greater than the Earth's polar radius. From high-school physics we know that Potential Energy = mass x gravity x height and so it follows that we might expect the potential energy of an object on the Earth's surface (sea-level) at the equator, to be greater than the potential energy of an object on the Earth's surface closer to the poles, since we can think of sea-level at the equator as being higher (further away from the Earth's core/center of mass) than sea-level close to the poles.
Application of the Hypothesis
If I travel from London to Glasgow achieving an MPG of 50 (Diesel), by how much would I expect the MPG to be affected on the drive back from Glasgow to London (because of the need to burn more fuel to acquire the additional potential energy)?
This application is based on a complete fantasy scenario where there are no traffic problems, the road is upon a perfectly flat spheroidal Earth (it could be argued that even with undulations in the carriageway, would still need to acquire more potential energy on the drive to London,) and I travel from sea-level in London to sea-level in Glasgow, and is really more of a mathematical exercise that attempts to calculate if there would be any noticeable difference. Apologies in advance for any flaws in the calculations!
An old copy of Maple 7 was used for the calculations, and some of the lines below in red represent the Maple Execution Group Inputs with some formulas in blue.
Latitudes in degrees North:
The Earth's equatorial radius a and polar radius b in metres:
Mass of the automobile in kg:
Calorific value of diesel in J/kg:
Density of petroleum diesel in kg/l:
Litres in a UK gallon:
Distance London to Glasgow in miles:
Radians as a function of Degrees:
Earth's gravity (in ms-2) as a function of Radians:
Radius (in metres) at a given geodetic Latitude as a function of Radians (or distance from the Earth's center to a point on the spheroid surface):
f:=phi->sqrt( ( (a^2*cos(phi))^2 + (b^2*sin(phi))^2 ) / ( (a*cos(phi))^2 + (b*sin(phi))^2 ) );
PotentialEnergy in Joules with mass (in kg) gravity (in ms-2) and height (in m):
GlasgowLatitudeRadians = 0.9751154533
LondonLatitudeRadians = 0.8991430162
GlasgowGravity = 9.818471842 ms-2
LondonGravity = 9.816854446 ms-2
*Notice that the gravity in Glasgow worked out as very slightly stronger!
GlasgowRadius = 6363522.841 m
LondonRadius = 6365075.641 m
And Potential Energy for the 1000kg automobile:
GlasgowPotentialEnergy = 62480069830 J
LondonPotentialEnergy = 62485021110 J
And the potential energy difference for LondonPE minus GlasgowPE:
PEDifference = 62485021110 - 62480069830 = 4951280 J
Kilos of diesel required:
KilosOfDiesel = 4951280 / 45300000 = 0.1092997793
Litres of diesel required:
LitresOfDiesel = KilosOfDiesel / 0.832 = 0.1313699271
Gallons of diesel required:
GallonsOfDiesel = LitresOfDiesel / 4.54609188 = 0.02889733216
A journey from London to Glasgow of 405.1 miles at 50 MPG uses:
GallonsToGlasgow = 405.1/50 = 8.102
To get back to London requires an additional 0.02889733216 gallons of diesel making the MPG:
MPGtoLondon = 405.1/(8.102+0.02889733216)
The difference would be barely noticeable! | <urn:uuid:fa3d26f0-e587-4b97-afda-93bae37516ff> | CC-MAIN-2017-04 | http://www.cosonok.com/2012/05/theory-into-why-it-should-be-cheaper.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00351-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.856075 | 947 | 2.875 | 3 |
Researchers at IBM have developed software that uses optical character recognition and screen scraping to identify and cover up confidential data.
MAGEN works at the screen level by ‘catching’ the information before it hits the screen, analyzing the screen content, and then masking those details that need to be hidden from the person logged in. The major novelty lies in architecting a single system that handles a wide range of scenarios in a centralized and unified manner, IBM stated.
The IBM system treats the screen of information as a picture and uses optical character recognition to identify the pieces that were defined as confidential. It then places a data 'mask' over the details that need to remain hidden—without ever copying, changing, or processing the data, IBM said.
MAGEN does not change the software program or the data -- it filters the information before it ever reaches the PC screen -- and does not force companies to create modified copies of electronic records where information is masked, scrambled, or eliminated, IBM stated.
IBM cites an example of a MEGEN application a healthcare firm that outsources customer service and claims processing functions to a third-party. Although private medical information in the patient records can’t be shared with the contractors, customer service representatives need access to patient records. In these kinds of cases, MAGEN can hide private information so that it never appears on the agents’ screens, IBM stated. Or, it can partially hide data, such as for the screens of call center customer service representatives, who only need enough identifying data to access, confirm or update an account.
IBM researchers have been on a security roll of late. Big Blue last week said one of its researchers made it possible for computer systems to perform calculations on encrypted data without decrypting it. IBM said the technology would let computer services, such as Google or others storing the confidential, electronic data of others will be able to fully analyze data on their clients' behalf without expensive interaction with the client and without actually seeing any of the private data.
The idea is a user could search for information using encrypted search words, and get encrypted results they could then decrypt on their own. Other potential applications include enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records. The breakthrough might also one day enable computer users to retrieve information from a search engine with more confidentiality, IBM said.
And last year IBM researchers came up with a small device they called "security on a stick" for use in online banking so customers plugging into any computer can protect transactions and find out if Trojan malware is trying to steal funds.
Created in IBM's Zurich Research Lab, the "security on a stick" is still a prototype and being tested in a few trials in Europe, says Michael Baentsch, a senior researcher there. IBM, which unveiled the device today, officially calls it the "Zone Trusted Information Channel" because the little USB-based device works to set up a secure channel to an online banking site supporting it.
Layer 8 in a box
Check out these other hot stories: | <urn:uuid:b033a124-0b08-4c55-983f-e7027f43491a> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2236399/security/ibm-researchers-build-security-software-to-mask-confidential-info.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935007 | 626 | 2.640625 | 3 |
While H.264 is often considered a single 'thing', many different 'types' of H.264 exist. These types have performance tradeoffs. For the past few years, most IP camera manufacturers only supported the most basic type - baseline profile. Now, increasingly, manufacturers are adding support for more 'advanced' types include main and high profile. In this report, we share our test findings of baseline vs main profile measuring differences in bandwidth consumption and CPU utilitzation.
Background on H.264
IP camera manufacturers has largely standardized on H.264 as the codec of choice for surveillance streaming. Since basically all surveillance video is compressed, codecs are required. In the past, MJPEG and MPEG-4 were most commonly used. Now it is predominantly H.264. In 2009-2010, a heated debate existed about using MJPEG or H.264. As our extensive test results of MJPEG vs H.264 showed, H.264 offered clear and compelling bandwidth savings.
H.264 Baseline vs Main Profile
Of the numerous H.264 profiles, the two most common considered for surveillance are baseline and main. Baseline is typically considered the least efficient of the H.264 profiles but also the least demanding of computing resources. By contrast, main profile is considered to be more bandwidth efficient but also more demanding.
Increasingly, new IP cameras are using main profile by default while the previous generation from 2-3 years ago were more likely to use baseline.
Questions for Our Test
We performed a test in 3 different scenes - daytime simple, nighttime and complex / high motion - to measure differences in bandwidth consumption and CPU utilization for H.264 baseline and main profiles.
The questions we addressed were:
- How much of a bandwidth savings, if any, does main profile deliver over baseline?
- How much does bandwidth savings vary by type of scene?
- How much does CPU utilization increase, if any, when using main rather than baseline?
- Should you prefer main profile cameras over baseline ones? | <urn:uuid:9c82c7bc-472f-406a-b579-0ab21da1d222> | CC-MAIN-2017-04 | https://ipvm.com/reports/h264-codec-shootout | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922363 | 417 | 2.625 | 3 |
AGL Energy , a publicly–listed Australian company, provides energy products and services to the Australian economy. The company is involved in both the generation and distribution of electricity for residential and commercial use.AGL Energy generates electricity from power stations that use thermal power, natural gas, wind power, hydroelectricity, and coal seam gas sources. The company began operating in Australia in 1837 as The Australian Gas Light Company and claimed in 2014 that it had more than 3.8 million residential and business customer accounts across New South Wales, Victoria, South Australia and Queensland. It has large investments in the supply of gas and electricity, and is Australia's largest private owner, operator and developer of renewable energy assets. Wikipedia.
News Article | April 22, 2016
A ground-breaking study involving leading utilities and the Australian Renewable Energy Agency has suggested that Australia’s largest battery storage array could be installed at a South Australian wind farm. The study – Energy Storage for Commercial Renewable Integration in South Australia (ESCRI-SA) – looks at a range of possibilities for non-hydro storage in South Australia and concludes that a 10MW, 20MWh lithium-ion battery storage facility next to the 91MW Wattle Valley wind farm on the Yorke Peninsula is the best option. It is not yet clear that the project will go ahead in that form – questions about financing, the economics of the project and the ability of ARENA to maintain grant funding have yet to be resolved – but it seems certain that the project will go ahead in some form, possibly as a reconfigured 30MW, 8MWh facility. South Australia finds itself at the cutting edge of the world’s shift to renewable energy, with its wind farms and rooftop solar expected to account for around half of total demand by the end of the year. While the Australian Energy Market Operator says this should pose no problems for the local grid – even after the closure of the state’s last coal-fired power station within a few weeks – eventually battery storage will have to be integrated into the grid to ensure stability. “There is no better place to demonstrate this than in South Australia, which has world leading levels of intermittent wind and solar PV generation relative to demand,” the study says. Within a decade, rooftop solar may account for all demand on some days, and there is another 3,000MW of wind projects in the pipeline. The 368-page ESCRI study – partnered by ElectraNet, AGL Energy and Worley Parsons – says that while there are no immediate problems, there is a sense of urgency because battery storage is emerging quickly and the market is simply not prepared. “It is hard to see a long-term future which does not involve energy storage in some form,” it notes, adding that the issues arising in South Australia are likely to emerge in other states as renewable energy penetration increases; meaning reliance on traditional inter-connector network solutions may become less effective. ARENA admits that the report’s conclusions around the economics of the project were disappointing, because it found that it would need grant funding of around $14 million, or nearly two-thirds the cost of the project. But it, and the consortium members, expect this to turn around soon. For one, the discussions with the battery storage industry found that the market is still very immature, and battery storage is a complex business. In other words, the battery storage industry is still learning how to configure its gear to suit the network and its major players. Secondly, the costs of battery storage are expected to fall quickly, with nearly all of the battery storage providers indicating that prices would fall by half in the next few years. Thirdly, and perhaps most significantly, is that the market for services that battery storage can provide to the network is also immature. These services include balancing the output of wind and solar farms, keeping the lights on in a blackout, reducing transmission losses, and providing frequency services to keep the grid stable. Once these services are better understood, and better valued – and this might need adjustments to regulations and market signals – then the economics of battery storage are likely to be clear. Indeed, the report notes that frequency control – and the ability to keep the lights on in the event that the state’s interconnector to Victoria goes out – could be critical. It says that one project is not enough to do this job, but if enough energy storage devices were installed, then this could reduce market fuel costs (from gas generators, for instance) and avoid the loss of all supply to grid-connected consumers. This is particularly important, in light of the state’s recent black-out and the problems created by fossil fuel generators in the attempted re-start. Certainly, the consortium members are keen for the project to go ahead, and say that without it, Australia might be left behind just when it should be seizing the opportunity of leading the pack. “Unlike Australia, other countries have particular policy drivers which are leading to storage take-up, with motives likely to include the lowering of integration costs of renewables, the gaining of experience with a likely disruptive technology and the driving of a local energy storage industry,” the study notes. It cites the Californian Independent System Operator (ISO), which operates in one of the most active energy storage markets driven by its policy mandate for more than 2GW of battery storage, designed specifically to ensure it keeps on top of renewable integration. “In the absence of such policy drivers and any current roadmap, Australia must make prudent investments to keep pace,” the ESCRI report notes. “This also supports the case to continue exploration of the storage product but provides more incentive to maximise the business case – that is, leverage the most from that investment.” Wattle Point and the nearby Dalrymple sub-station was chosen because it is a kind of microcosm of the state’s grid. It’s at the end of the network, it has large penetration of renewables, and there is a possibility of it being “islanded” – meaning that it will rely on local resources, including battery storage, to keep the lights on. It also offers advantages to both ElectraNet, which runs the main transmission line, and AGL. Assets owned by distribution network SA Power Networks were not considered, even though areas such as Kangaroo Island and Victor Harbour could also be suitable. Expressions of interest from 42 international parties were received, and 17 formal proposals, including technologies such as lithium-ion, sodium-sulphur and advanced lead acid batteries; molten salt heat storage; hydrogen generation and storage; and a number of different flow batteries. Project sizes ranged from 10-20MW and 20MWh to 200MWh. In the end, the consortium crunched the numbers in a detailed study of a 10MW- 20MWh lithium-ion project at Dalrymple. As for the next stage, the project partners are keen not to reduce the scale of the project too much, otherwise it will limit its impact, and may not allow the parallel services of market and network value to be realised effectively at the same time. It may not even choose lithium-ion, but rather a hybrid of energy storage technologies through a single interface, if available. “The consortium remains agnostic to which energy storage technology is used and will pursue that which delivers the optimum business case, although the project is really more about application than technology.” Drive an electric car? Complete one of our short surveys for our next electric car report. Keep up to date with all the hottest cleantech news by subscribing to our (free) cleantech newsletter, or keep an eye on sector-specific news by getting our (also free) solar energy newsletter, electric vehicle newsletter, or wind energy newsletter.
News Article | August 22, 2016
AGL Energy Ltd. plans to announce a program within a few months to roll out about 1,000 energy storage systems for Australian homes with rooftop solar panels amid forecasts that falling prices will stimulate demand.
News Article | September 2, 2016
Some utilities may think that it will be up to a decade before there is a mass market uptake of battery storage, and the chair of the Australian Energy Market Operator may even try to convince themselves that the technology won’t be commercial for another two decades, but they might be kidding themselves: New research suggests that the cross-over point between the value of solar and storage and grid prices for Australian households may occur within one year. That, at least, is the conclusion of research from Curtin University’s Jemma Green and Peter Newman, which suggests that the A1 tariff – the standard tariff offered to households by state owned retailer Synergy in West Australia – will become more expensive than the combined value of rooftop solar and battery storage some time in 2017. The graph was presented on Tuesday by David Martin – Green’s fellow executive in the solar trading start-up Power Ledger, which is using blockchain technology (the software behind Bitcoin) to trial solar sharing business models in Perth. “That price crossover – the point where the A1 tariff is equal to the value of energy from solar and tariffs happens next year … next year,” Martin told the Energy Disruption conference in Sydney co-hosted by RenewEconomy. He said that did not meant that people were going to “leap off the grid” in big numbers straight away. That’s because when that point is reached there are “intangible benefits” of being connected to the network, and it would cost a lot more to install enough batteries to deal with the consumer’s demand peaks, or days of cloudy weather. “But as soon as these lines diverge by a significant amount – and overtake the benefits of being connected to the network, then what happens?” The answer, he pointed out in another graph, is a big problem for the utilities that make their money from supplying power to households, because a lot of that demand will now disappear from view, and go “behind the meter.” Martin says a home with a 4kW array might still use the grid for most of the time – meaning that only 45 per cent of the load is “hidden” from the network behind the meter. But with battery storage, the rate of “load defection” – as opposed to grid defection – was likely to increase to the high 90 per cent levels in some instances (see graph above). Those households will only be tapping into network for a small amount of their energy needs. This, of course, has major implications for network business models – particularly their revenue source – and for other consumers. Networks, Martin says, will have to face losing $100 million in revenue in West Australia for instance, or load 20 per cent more grid costs on to other consumers to protect their revenues. Hence, Martin says, the need for completely new ways of thinking about network use, and of sharing solar energy and battery storage. That’s what Power Ledger intends to do with its shared solar model – it allows those with solar and storage to share their power with those who maybe don’t have it – and allows better utilisation of the grid. It also requires, he says, a completely new way of thinking about regulations. The rules governing the electricity industry had been framed without any consideration for sharing energy, for storing energy, or for the kind of technology that his company proposes. Martin was not the only person talking of an imminent tipping point in the economics of battery storage. Stefan Jarnason, the founder and head of Solar Analytics, a monitoring company partly owned by AGL Energy, says he believed that even some of the more bullish forecasts for battery storage were too conservative. These included predictions – from the likes of Bloomberg New Energy Finance above – that some six million households will have energy storage by 2040. Jarnason says that this shows that massive uptake is inevitable, but it is the speed that counts. He notes there there are already 1.6 million homes with rooftop solar, and around one million of these would soon be paid “visually” nothing for the vast majority of their rooftop solar production that is exported back to the grid. Most premium tariffs end at the end of the year in NSW, Victoria and South Australia. “We talk to those customers and they are not very happy about that. They love the fact that they have solar, they feel a bit green, a bit financially savvy, even a bit smug, but they already have got their money back on solar and they are now looking to do something extra.” That estimate is backed up by experience from one of the many battery storage providers moving into the Australian market. Enphase Energy, which is launching its first battery storage product in Australia, says more than half of the 72,000 units of its 1.2kWh battery has come from NSW, where generous feed in tariffs come to an end at the end of the year. “The energy storage revolution is going to come much faster than a lot of people imagine and a lot of people are prepared,” Jarnason says. “Residential solar plus storage is going to eat the energy world.” Drive an electric car? Complete one of our short surveys for our next electric car report. Keep up to date with all the hottest cleantech news by subscribing to our (free) cleantech newsletter, or keep an eye on sector-specific news by getting our (also free) solar energy newsletter, electric vehicle newsletter, or wind energy newsletter.
News Article | March 29, 2016
Dominant gas pipeline owner APA Group has stepped up the pace of its acquisitions again with a $151 million deal to buy AGL Energy out of their jointly owned Diamantina power plant in Queensland, a transaction foreshadowed by Street Talk earlier this month. The transaction, which comes as AGL Energy sheds non-core assets and increases its focus on retailing and customer-centric businesses such as solar power, has enabled APA to upgrade its earnings guidance for the full-year.
News Article | March 17, 2016
The 46,000MW of black and brown coal fired generation currently in service in Germany will be worthless in little more than a decade if the country adopts the targets embraced at the Paris climate change conference, a new analysis from Barclays says. The analysis, from leading energy analyst Mark Lewis, says coal fired power generation would have to be almost completely eliminated by 2030 in a scenario that would require a substantial carbon price (€45/t) and the end to the current energy market design. The conclusions of the report should not be a surprise, but are important because the fossil fuel industry appears to remain in complete denial, hoping that the Paris climate agreements amount to a “fell-good” gathering that will have no follow through. But the latest data on soaring global temperatures, and the biggest jump in greenhouse gas emissions on record, suggests this hope is misplaced. Or at least should be. The analysis has implications too, for Australia, which faces a similar transition to Germany, which a growing level of renewables on top of a huge surplus in coal generation, and no effective carbon price to influence energy choices. Even the most ambitious fossil fuel generators in Australia, such as AGL Energy, say their coal assets, particularly their brown coal assets, will continue generating as late as 2048. The Barclays scenario shows that this would be impossible. Indeed, The Climate Institute says all coal fired generation must cease by 2035 at the latest. But back to the Barclays report. It suggests that coal will have to be displaced to meet greenhouse gas targets embraced by the EU and implied by the Paris agreement. In Germany, under current policies, total generation will reduce by 15 per cent under energy efficiency measures and its target for 50 per cent renewables. To meet the emissions goals, however, that remaining fossil fuel generation will have to come nearly exclusively from gas, meaning a carbon price is required to upturn the “merit order”. Currently, the lack of a carbon price and the presence of cheap coal means that gas fired generation is marginalised. Lewis says by 2030, Germany will have dumped its energy system- where a plant is longer dispatched on the basis of its relative cost for the next half hour – and replace it with one where renewable generation is backed up by energy storage and sophisticated demand-side responses facilitated by smart-grid technologies. “It will still be some time before we reach that world and before the last half hour of competitively priced power is dispatched,” he notes. But he says this will occur, which is why the biggest utilities in Germany, E.ON and RWE, have decided to split their assets into new and old companies, jettisoning their old “centralised” generation assets to focus on a new business centered around solar, storage, smart grids and electric vehicles. “The Energiewende requires a complete strategic reprioritization away from conventional generation and towards renewable energy so that the companies can prepare for the power system of the future,” Lewis says. He believes that only a handful of coal plants would still be operating by 2030 – the Datteln 4 (1.1GW) and Maasvlakte 3 (1.1GW), and the Westfalen E (765MW) hard coal plants, all of which have very high efficiency rates of 46 per ent. The only lignite plant still running would be the BoA 2 & 3 units at Neurath (2.1GW), which have a very efficiency rate for lignite plant of 43 per cent. The value of the brown coal generators for both E.ON and RWE would be wiped out completely, and considerably reduced for hard coal. The value of gas-fired generation, though, would increase. Barclay’s base-case scenario assumes current targets, an average baseload power price over 2020-30 of €28/MWh in real terms (constant 2019 €), and an average EUA price in real terms of only €5/MWh. Its 2°C scenario would require an average EUA price of at least €45/t in real terms (constant 2019 €) and hence average baseload prices of €55-60/MWh (again in constant 2019 €) over 2021-30. By 2030, total coal and lignite output in the entire German market is only 50TWh (versus 190TWh in our base case) while total German gas-fired output in 2030 is doubled from the base case to 150TWh. “The implications of our analysis … of the German power sector – especially when taken together with Germany’s own 2030 targets for energy efficiency and renewable energy – are that very little coal and lignite could run by 2030,” Lewis writes. Gas displaces coal and lignite over the decade while coal and lignite plants see lower average utilization rates and shorter average operating lives, with what little plant is running by the end of the next decade pushed to the margin. Reprinted with permission. Get CleanTechnica’s 1st (completely free) electric car report → “Electric Cars: What Early Adopters & First Followers Want.” Come attend CleanTechnica’s 1st “Cleantech Revolution Tour” event → in Berlin, Germany, April 9–10. Keep up to date with all the hottest cleantech news by subscribing to our (free) cleantech newsletter, or keep an eye on sector-specific news by getting our (also free) solar energy newsletter, electric vehicle newsletter, or wind energy newsletter. | <urn:uuid:8501c97a-69d9-4cfc-9cc0-4a6a70657ec9> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/agl-energy-550614/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959682 | 4,051 | 2.5625 | 3 |
The widespread adoption of digital communication in everyday life has vastly increased the temptation to multitask. Many people, particularly millennials, feel they need to continuously monitor news feeds, social media, text messages and other sources of digital information. Unfortunately, heavy media multitasking reduces personal effectiveness at work. Serious multitaskers may find it hard to believe, but research from Stanford University reports that people don’t multitask well. Consuming multiple electronic feeds simultaneously reduces attention, memory control and the speed with which an individual can switch among tasks.
The Stanford researchers compared people who regularly multitasked on electronic media against those who engaged in little media multitasking. They concluded that heavy multitaskers have difficulty focusing on important information, since they are easily distracted by irrelevant data. The researchers are convinced that chronic multitaskers’ minds do not work at maximum effectiveness, since they cannot help thinking about all the things they are currently not doing. The researchers are trying to determine if information overload harms cognitive ability or if regular multitaskers have always had minimal ability to concentrate.
Worse, chronic multitaskers often miss an important idea during the short time they have switched their focus. Most business conversations are short, direct and to the point. Even a few lost words can significantly inhibit the multitasker’s ability to make a meaningful contribution to the discussion.
If you have heavy media multitaskers in your organization, encourage them to do the following:
- Understand that heavy multitasking decreases rather than increases productivity. As the Stanford study concluded, it is more efficient to focus on one task at a time. The study also observed that most individuals accomplish more by doing less.
Multitaskers who are not convinced should try the following exercise. Make three columns on a page. Have someone time you as you write the numbers one through 23 from top to bottom in the first column, the letters A through W in the second column, and the Roman numerals one through 23 in the third column. On a new page, re-create the same table by row (1, A, I is the first row) instead of by column. Compare the times. Most people find that the context switching required to create the second table increases their time by 15% to 20%.
- Turn off their phone during meetings. While timely responses to digital communications are appreciated (and expected by most millennials), instantaneous responses are rarely required. Barring a true emergency, most people resent calls or electronic messages that interrupt meetings. And while they may not like it, even customers know that all organizations, including IT, serve many others.
Even if multitasking were equally efficient, management dislikes seeing subordinates shift their focus to check their digital media. Most managers, particularly those that are overcommitted, believe their time is valuable and resent having to wait even the short time it takes for an employee to read and respond to a message. Correspondingly, few executives would expect a subordinate to wait while they read and responded to their messages during a meeting. (If a true emergency arises, the meeting will be interrupted and rescheduled in order to address the crisis.) Moreover, when the highest-ranking person in a meeting looks at his/her phone or tablet, other participants assume they have permission to do the same, and will promptly begin checking their own digital media. Set a good example!
- Be attentive in meetings. Everyone has competing priorities. Working on another project while attending a meeting rarely works well. Other participants don’t mind some keyboarding during a meeting if the individual is taking notes or researching meeting issues. However, habitually failing to engage in meeting discussions due to multitasking is frowned upon.
Most executives are very good at focusing on the task at hand while blocking other thoughts and activities. They believe this is a critical skill and expect others to be similarly focused. To encourage this behavior, some executives even assign fines to employees who answer a phone call or respond to a message during a meeting.
- Practice mindfulness or meditation. With practice, these disciplines help increase concentration and ignore distractions. One basic exercise encourages choosing something to focus on, such as your breathing, an image or a sound. Keep your mind focused there for a set period of time, returning to the focal point whenever your mind wanders. Start with just 30 seconds and increase the focus time as you become more practiced.
Serious media multitasking is beginning to be recognized as a neural addiction. Multitasking increases production of the hormones cortisol and adrenaline. Increased amounts of these hormones overstimulate the brain, causing fuzzy thinking. In addition, the prefrontal cortex prefers external stimulation and rewards reading every post, Internet search or message with a burst of endogenous opioids. Essentially, this feedback loop rewards the brain for losing focus.
Encourage heavy multitaskers to reduce their consumption of electronic media. It is bad for their professional effectiveness, their career advancement and ultimately their mental health.
And though it can seem that just about everyone is multitasking, no one does it well.
OK, gotta run. Gotta check Facebook, Twitter and Instagram. How’re my stocks doing? Has my daughter responded to my last text? Should “multitasker” have a hyphen? What will the weather be tomorrow? Oh, excuse me, what did you say? What layoff?
Bart Perkins is managing partner at Louisville, Ky.-based Leverage Partners Inc., which helps organizations invest well in IT. Contact him at BartPerkins@LeveragePartners.com.
This story, "Out of focus: The multitasking dilemma" was originally published by Computerworld. | <urn:uuid:0b4bcec2-37f8-46c6-b997-2f4730d49558> | CC-MAIN-2017-04 | http://www.itnews.com/article/2989987/it-management/out-of-focus-the-multitasking-dilemma.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947866 | 1,150 | 2.6875 | 3 |
Komalamisra N.,Mahidol University |
Srisawat R.,Mahidol University |
Phanbhuwong T.,Mahidol University |
Oatwaree A.S.,Bangkok Metropolitan Administration
Southeast Asian Journal of Tropical Medicine and Public Health | Year: 2011
Mosquito larvae were collected from the houses of dengue infected patients in Bangkok, Thailand from 55 sites (36 out of the 50 districts of Metropolitan Bangkok). Aedes aegypti larvae were tested against temephos using WHO bioassay techniques. Adult mosquitoes were tested for susceptibility to permethrin, deltamethrin, cyfluthrin, malathion and DDT using WHO diagnostic doses. Most of the larvae tested were susceptible to temephos. Only few specimens were resistant to temephos. Most adult mosquitoes were highly susceptible to malathion. Deltamethrin resistance was seen in 6 districts of Bangkok. Variable levels of susceptibility were seen with cyfluthrin. Most of the specimens showed resistance to permethrin and all specimens were resistant to DDT. Source
File photo of revelers using water guns as they participate in a water fight during Songkran Festival celebrations at Silom road in Bangkok April 13, 2015. Thailand is facing its worst water shortage in two decades, with 14 out of 76 provinces hit and large swathes of agricultural land at risk. Thailand has entered its annual dry season, which typically runs from March to May, meaning the drought is likely to get worse. The Bangkok Metropolitan Administration's solution? Be a wet blanket by cutting festival days down from four to three and imposing a curfew. "This is partly symbolic, but we hope to save water too because our lakes have become deserts," said deputy Bangkok governor Amorn Kijchawengjul. "We don't want city folk splashing water around carelessly while farmers struggle." The Songkran festival, which marks Thai New Year, is often referred to as the world's biggest water fight - a time when revelers splashing water on each other and everyone, young and old, is fair game. But this year all splashing will have to stop at 9 p.m. sharp. "We'll just shut down the party," said Amorn.
Prybylski D.,Thailand MOPH U.S. CDC Collaboration |
Prybylski D.,Centers for Disease Control and Prevention |
Manopaiboon C.,Thailand MOPH U.S. CDC Collaboration |
Visavakum P.,Thailand MOPH U.S. CDC Collaboration |
And 10 more authors.
Drug and Alcohol Dependence | Year: 2015
Background: Thailand's long-standing HIV sero-sentinel surveillance system for people who inject drugs (PWID) is confined to those in methadone-based drug treatment clinics and representative data are scarce, especially outside of Bangkok. Methods: We conducted probability-based respondent-driven sampling (RDS) surveys in Bangkok (n = 738) and Chiang Mai (n = 309) to increase understanding of local HIV epidemics and to better inform the planning of evidence-based interventions. Results: PWID had different epidemiological profiles in these two cities. Overall HIV prevalence was higher in Bangkok (23.6% vs. 10.9%, p < 0.001) but PWID in Bangkok are older and appear to have long-standing HIV infections. In Chiang Mai, HIV infections appear to be more recently acquired and PWID were younger and had higher levels of recent injecting and sexual risk behaviors with lower levels of intervention exposure. Methamphetamine was the predominant drug injected in both sites and polydrug use was common although levels and patterns of the specific drugs injected varied significantly between the sites. In multivariate analysis, recent midazolam injection was significantly associated with HIV infection in Chiang Mai (adjusted odds ratio = 8.1; 95% confidence interval: 1.2-54.5) whereas in Bangkok HIV status was not associated with recent risk behaviors as infections had likely been acquired in the past. Conclusion: PWID epidemics in Thailand are heterogeneous and driven by local factors. There is a need to customize intervention strategies for PWID in different settings and to integrate population-based survey methods such as RDS into routine surveillance to monitor the national response. © 2015. Source
Choopanya K.,Bangkok Tenofovir Study Group |
Martin M.,Health-U |
Martin M.,Centers for Disease Control and Prevention |
Suntharasamai P.,Bangkok Tenofovir Study Group |
And 16 more authors.
The Lancet | Year: 2013
Background Antiretroviral pre-exposure prophylaxis reduces sexual transmission of HIV. We assessed whether daily oral use of tenofovir disoproxil fumarate (tenofovir), an antiretroviral, can reduce HIV transmission in injecting drug users. Methods In this randomised, double-blind, placebo-controlled trial, we enrolled volunteers from 17 drug-treatment clinics in Bangkok, Thailand. Participants were eligible if they were aged 20-60 years, were HIV-negative, and reported injecting drugs during the previous year. We randomly assigned participants (1:1; blocks of four) to either tenofovir or placebo using a computer-generated randomisation sequence. Participants chose either daily directly observed treatment or monthly visits and could switch at monthly visits. Participants received monthly HIV testing and individualised risk-reduction and adherence counselling, blood safety assessments every 3 months, and were off ered condoms and methadone treatment. The primary effi cacy endpoint was HIV infection, analysed by modifi ed intention-to-treat analysis. This trial is registered with ClinicalTrials.gov, number NCT00119106. Findings Between June 9, 2005, and July 22, 2010, we enrolled 2413 participants, assigning 1204 to tenofovir and 1209 to placebo. Two participants had HIV at enrolment and 50 became infected during follow-up: 17 in the tenofovir group (an incidence of 0.35 per 100 person-years) and 33 in the placebo group (0.68 per 100 person-years), indicating a 48.9% reduction in HIV incidence (95% CI 9.6-72.2; p=0.01). The occurrence of serious adverse events was much the same between the two groups (p=0.35). Nausea was more common in participants in the tenofovir group than in the placebo group (p=0.002). Interpretation In this study, daily oral tenofovir reduced the risk of HIV infection in people who inject drugs. Preexposure prophylaxis with tenofovir can now be considered for use as part of an HIV prevention package for people who inject drugs. Copyright © 2013 Elsevier B.V. Source
Martin M.,Centers for Disease Control and Prevention |
Vanichseni S.,Bangkok Vaccine Evaluation Group |
Suntharasamai P.,Mahidol University |
Mock P.A.,Centers for Disease Control and Prevention |
And 8 more authors.
International Journal of Drug Policy | Year: 2010
Background: HIV spread rapidly amongst injecting drug users (IDUs) in Bangkok in the late 1980s. In recent years, changes in the drugs injected by IDUs have been observed. We examined data from an HIV vaccine trial conducted amongst IDUs in Bangkok during 1999-2003 to describe drug injection practices, drugs injected, and determine if drug use choices altered the risk of incident HIV infection. Methods: The AIDSVAX B/E HIV vaccine trial was a randomized, double-blind, placebo-controlled trial. At enrolment and every 6 months thereafter, HIV status and risk behaviour were assessed. A proportional hazards model was used to evaluate demographic characteristics, incarceration, drug injection practices, sexual activity, and drugs injected during follow-up as independent predictors of HIV infection. Results: The proportion of participants injecting drugs, sharing needles, and injecting daily declined from baseline to month 36. Amongst participants who injected, the proportion injecting heroin declined (98.6-91.9%), whilst the proportions injecting methamphetamine (16.2-19.6%) and midazolam (9.9-31.9%) increased. HIV incidence was highest amongst participants injecting methamphetamine, 7.1 (95% CI, 5.4-9.2) per 100 person years. Injecting heroin and injecting methamphetamine were independently associated with incident HIV infection. Conclusions: Amongst AIDSVAX B/E vaccine trial participants who injected drugs during follow-up, the proportion injecting heroin declined whilst the proportion injecting methamphetamine, midazolam, or combinations of these drugs increased. Controlling for heroin use and other risk factors, participants injecting methamphetamine were more likely to become HIV-infected than participants not injecting methamphetamine. Additional HIV prevention tools are urgently needed including tools that address methamphetamine use. © 2010. Source | <urn:uuid:5ae86078-0075-4aa4-90cb-b80009b956c3> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/bangkok-metropolitan-administration-769057/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00030-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934286 | 1,879 | 2.71875 | 3 |
Facebook Slashes Data Center Power Consumption
Facebook is developing a new traffic management technology called Autoscale to optimize energy consumption by 10-15% at its data centers.
Facebook currently uses a traditional round-robin approach to load balancing, but found that was less than optimal, because servers running low-level loads use power more inefficiently than idle servers or servers running at moderate or greater loads, writes Qiang Wu, Facebook infrastructure software engineer, on the Facebook Code Engineering Blog.
Autoscale is designed to optimize workloads so that servers are either idling, or running at medium capacity. It tries to avoid assigning workloads in a way that results in servers running at low capacity, Wu writes.
An idle server consumes about 60 watts. It takes a big power hit, to 130 watts, when it jumps to low-level CPU utilization, for a small number of requests per second. But it only takes a small power hit, to 150 watts, when it goes from low-level to medium-level CPU utilization, Wu writes.
Therefore, from a power-efficiency perspective, we should try to avoid running a server at low RPS and instead try to run at medium RPS.
To tackle this problem and utilize power more efficiently, we changed the way that load is distributed to the different web servers in a cluster. The basic idea of Autoscale is that instead of a purely round-robin approach, the load balancer will concentrate workload to a server until it has at least a medium-level workload. If the overall workload is low (like at around midnight), the load balancer will use only a subset of servers. Other servers can be left running idle or be used for batch-processing workloads.
Though the idea sounds simple, it is a challenging task to implement effectively and robustly for a large-scale system.
Autoscale dynamically adjusts the size of the server pool in use, so that each active server will get at least a medium-level CPU load. Servers not in the active pool don't receive traffic.
Optimizing both performance and power consumption was key in developing decision logic for traffic management: "On one hand, we want to maximize the energy-saving opportunity. On the other, we don't want to over-concentrate the traffic in a way that could affect site performance."
Results have been promising:
Autoscale led to a 27% power savings around midnight (and, as expected, the power saving was 0% around peak hours). The average power saving over a 24-hour cycle is about 10-15% for different web clusters.
Facebook is driving open source data center hardware design with its own Open Compute project. The project is self-serving -- Facebook runs among the most massive data centers in the world, and data center cost savings improves Facebook's bottom line. Facebook says it has saved $1.2 billion over three years using the Open Compute hardware designs it champions. (See Open Compute Project Takes on Networking.)
Earlier this week, Facebook bought PrivateCore, a security software company, to beef up its server security. (See Facebook Buys PrivateCore for Server Security.) | <urn:uuid:1d967211-52b8-427f-a7ed-c5e501ef5046> | CC-MAIN-2017-04 | http://www.lightreading.com/facebook-slashes-data-center-power-consumption/d/d-id/710297?_mc=RSS_LR_EDT | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00242-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91651 | 652 | 2.53125 | 3 |
In recent years, not only has the number of network and computer attacks been on the rise, but also the level of complexity and sophistication with which they strike. The most common and perhaps most damaging of these attacks are called worms. Worms are malicious programs written to exploit vulnerabilities within an operating system or an application environment and to then automatically seek out and find other vulnerable hosts to exploit and infect with the worm code. The worms travel rapidly affecting all neighboring systems of the initially infected host. This exponential propagation induces a large amount of network traffic that overwhelms bandwidth and system resources making applications and network services slow or even unavailable. Some worms also contain payloads including additional code to further exploit the host such as data modification (a web page) or thief of information.
Network worms and viruses have existed for well over 20 years. One of the first and famous worm programs to impact the Internet was the Morris Worm in November of 1988. This worm exploited vulnerabilities in the finger and sendmail programs. At that time the Internet consisted of approximately 60,000 hosts. This worm infected approximately 10% of the hosts and caused significant outages and slowdowns of mail servers across the net. In July of 2001 a new worm infection appeared that would significantly raise awareness of the threat posed by these malicious software programs along with the dramatic landscape change of the Internet.
An estimated 650 million hosts are today connected to the Internet hence a fundamental shift in the potential number of participants to propagate a worm. CodeRed spread quickly and became the most widespread and damaging worm to hit the Internet since the Morris Worm. An estimated total of 360,000 hosts were infected within a period of 14 hours. Two months after CodeRed another large-scale worm named NIMDA (ADMIN spelled backwards) impacted the Internet. More recently, the Internet saw the appearance of a new type of worm that infected the Internet at such a high rate that it was classified as a flash worm. The fast scanning rate of SQL Slammer in January 2003 was achieved because of its small size (single packet of 376 byte) as well as the fact that the worm was not TCP but UDP based (connectionless). SQL Slammer reached its full scanning rate of 55 million scans/sec within 3 minutes of the start of the infection and infected the majority of vulnerable hosts on the Internet within 10 minutes of the start of the infection with an estimated 250,000 – 300,000 infected hosts overall. Summer 2003 witnessed the infamous Blaster and January 2004 was the turn of MyDoom to impact Internet users.
While the underlying exploits used to achieve access to the target hosts varied between these worms the methods and technologies used to mitigate and contain the infection remained the same. In order to protect the network from these threats, the security system must be able to protect and react against both known and unknown attacks. This calls for an integrated security solution that is both flexible and pervasive, providing tighter collaboration between network services, security services, hosts, applications, management and business processes. As worms typically invade an environment in a multi-phased approach, this layered structure is an effective way to protect networks from these threats.
There are six steps involved in a worm mitigation methodology, in order: preparation, identification, classification, trace back, reaction, and post-mortem. The reaction phase can broken down into containment, inoculation, quarantine, and treatment. Worm mitigation requires coordination between system administration, network engineering, and security operations personnel. This is critical in responding effectively to a worm incident. The containment phase involves the limiting of the spread of a worm infection to those areas of the network already affected. With the worm infection contained, or at the least, significantly slowed down, the inoculation process further deprives the worm of any available targets.
The mobile environment prevalent on networks today poses significant challenges since laptops are routinely taken out of the “secure” environment and connected to potentially “insecure” environments such as home networks. A laptop can be infected with a worm or virus and then bring it back into the “secure” environment where it can infect other systems. The quarantine phase involves tracking down and identifying infected machines within the contained areas and disconnecting, blocking, or removing the infected machines. This isolates these systems appropriately for the final phase. During the treatment phase actively infected systems are disinfected of the worm. This can involve simply terminating the worm process and removing any modified files or system settings that the worm introduced, and patching for the vulnerability the worm used to exploit the system. In other cases a complete re-install of the system may be warranted in order to confidently ensure that the worm and its byproducts are removed. | <urn:uuid:dec5b49b-500c-48ad-967e-900162eba49d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2004/05/10/combating-internet-worms/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00324-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949113 | 941 | 3.53125 | 4 |
New designs make microchips an almost two-for-one deal.
View the PDF -- Turn off pop-up blockers!What are they? The next real design change in the engines that drive every computer, from the lowliest desktop to the highest-end Unix server. They're the same kind of chips we've been using, but instead of one area on the chip primarily responsible for crunching data, these chips have two or more.
What's the advantage? More power from almost the same space. Dual-core chips are about 70% to 80% faster than a single-core chip that has the same number of transistors, says Jonathan Eunice, research director at Illuminata, a consultancy that specializes in high-end systems. They are limited, however, by having only one system busthe path through which all data passes on the way to the processing core. That creates a bottleneck because there's only one channel to feed data to two processing cores. But since two separate single-core chips deliver only about 85% more power than a single chip, dual-core models are a credible alternative to single-chip systems. They are also cheaper, mainly because it costs more to build the components necessary to support two chips than it is to get one chip to do twice the work. We won't know how much cheaper until Intel and AMD ship their dual-core chips late next year.
Why change to two cores? Complexity. Within two years, chipmakers will be putting a billion transistors on each chip. That's a lot of potential, but designing an effective use for 50 million transistors is hard enough; engineering the layout and connections for a billion is almost incomprehensibly difficult. Instead, chipmakers are using proven designs, subdividing each chip into several processing areas.
Who's doing it? IBM was first, but Hewlett-Packard and Sun also put them in Unix servers. Intel and AMD have promised to deliver dual-core 64-bit chips in late 2005.
Doesn't Intel already do something like this? Kind of. Hyper-Threading is a technique in which a chip basically fools software into thinking there are two chips in a machine instead of one. It takes advantage of the downtime a processor often has in the middle of a job while it waits for data to be delivered from various memory locations. Hyper-Threading schedules another job into those idle periods, delivering an extra 25% to 35% of oomph in the process, according to Eunice.
What's the downside to dual-core? Dual-core chips make it awfully hard to decide what, exactly, a processor is. Software makers often charge according to the number of processors in a machine. Having two processing cores complicates the equation. Should a license for a dual-core system cost the same as for a two-processor system?
What's not in doubt is that system makers are moving quickly toward multi-core chips to save money and design effort. In the process, they're putting into desktop machines almost the same power as in the dual-processor servers for which they charge a premium. And that may change what customers are willing to pay for "server-class" machines. | <urn:uuid:14f5f3f1-e0db-4c72-b032-2a7577c72c0d> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Tools-Primers-hold/Primer-DualCore-Chips | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957126 | 659 | 3.484375 | 3 |
Definition: A numeric function that maintains the order of input keys while changing their spacing.
Formal Definition: A hash function f for keys in S such that k1, k2 ∈ S ∧ k1 > k2 → f(k1) > f(k2).
Also known as order-preserving hash.
Generalization (I am a kind of ...)
Specialization (... is a kind of me.)
order-preserving minimal perfect hashing.
Aggregate parent (I am a part of or used in ...)
grid file, hash heap.
See also linear hashing.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 4 February 2009.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "linear hash", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 4 February 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/linearhash.html | <urn:uuid:09863de0-2148-40d4-9373-b840ea3491cb> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/linearhash.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.833279 | 242 | 3.03125 | 3 |
EMV has been deployed in over 80 countries. Over and over, it has been shown to reduce counterfeit card fraud.
How does EMV prevent fraud? The chip is tamper-proof and nearly impossible to clone, making counterfeit card fraud extremely difficult. In addition, EMV cards generate a unique numeric code for every transaction, which means a would-be fraudster cannot use stolen account data to make fraudulent transactions at any merchant that requires an EMV card.
Countries that have implemented EMV have seen dramatic reductions in fraud from counterfeit cards and stolen cards, leading to overall fraud reduction, too.
For a few dollars per year, an EMV card produces safe face-to-face transactions *
England saw overall card fraud reduced by a third after the implementation EMV in 2004.
In Canada debit card fraud card losses fell dramatically after the implementation of EMV.
Debit losses fell from a high of $142 million in 2009 to $38.5 million in 2012 – a 73% drop.
When France migrated to EMV in 2005, counterfeit card theft fraud nearly dissappeared.
Counterfeit card fraud dropped by 91% while fraud from card theft fell by 98%.
*Source: Face-to-face domestic fraud rate in France, 2012 Banque de France, Face-to-face domestic fraud rate in UK, 2012 UK Cards Association | <urn:uuid:3b48522c-ec2a-4bf3-b3fd-2aed5fa63c9f> | CC-MAIN-2017-04 | http://www.gemalto.com/emv/fraud | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940255 | 281 | 2.78125 | 3 |
Bicego G.T.,Centers for Disease Control and Prevention |
Nkambule R.,Ministry of Health |
Peterson I.,Columbia International University |
Reed J.,CDC Atlanta |
And 9 more authors.
PLoS ONE | Year: 2013
Background:The 2011 Swaziland HIV Incidence Measurement Survey (SHIMS) was conducted as part of a national study to evaluate the scale up of key HIV prevention programs.Methods:From a randomly selected sample of all Swazi households, all women and men aged 18-49 were considered eligible, and all consenting adults were enrolled and received HIV testing and counseling. In this analysis, population-based measures of HIV prevalence were produced and compared against similarly measured HIV prevalence estimates from the 2006-7 Swaziland Demographic and Health. Also, measures of HIV service utilization in both HIV infected and uninfected populations were documented and discussed.Results:HIV prevalence among adults aged 18-49 has remained unchanged between 2006-2011 at 31-32%, with substantial differences in current prevalence between women (39%) and men (24%). In both men and women, between since 2006-7 and 2011, prevalence has fallen in the young age groups and risen in the older age groups. Over a third (38%) of the HIV-infected population was unaware of their infection status, and this differed markedly between men (50%) and women (31%). Of those aware of their HIV-positive status, a higher percentage of men (63%) than women (49%) reported ART use.Conclusions:While overall HIV prevalence remains roughly constant, age-specific changes strongly suggest both improved survival of the HIV-infected and a reduction in new HIV infections. Awareness of HIV status and entry into ART services has improved in recent years but remains too low. This study identifies opportunities to improve both HIV preventive and care services in Swaziland. Source | <urn:uuid:0b745b1e-9849-4f78-9d53-d140ccf34f12> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/cdc-atlanta-1444417/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959043 | 392 | 2.78125 | 3 |
Diminishing resources and a growing world population underscore the absolute necessity of protecting our environment for the benefit of future generations. This will require us to rethink our existing patterns of resource use to globally establish a shared frame of reference and take concerted action. We have no other choice than to act collectively to achieve
Major forces at work would appear to support a general movement toward a sustainable society. A sustainable society is one that can fulfill its needs without experiencing catastrophic setbacks in the foreseeable future. The
principles of a sustainable society are interrelated and mutually supporting, giving equal consideration to
social development, economic development and
preservation of the environment. The very concept of sustainability itself is defined as the confluence of these three constituent parts.
eGovernment is among the leading figures of this global movement. It is driven by a real awareness of the need to redirect a significant portion of the benefits of growth toward better human development. Digital modernization of processes used by public bodies and private enterprise will play an active role in striking a new balance between current and future needs.
Lesson 1: The myth of return on investment
Return on investment can’t be reasonably expected from a movement that seeks to transform society over several generations. Although eGovernment programs have a major role to play in the emergence of a sustainable society, significant short-term budgetary gains are unrealistic. In fact, the lack of quantifiable gains from sustainable development often leads to disappointing results or even the failure of publicly funded programs. This is because performance targets are either not feasible or set without a relevant analytical frame of reference.
eGovernment initiatives modernize a state by endowing it with the technical infrastructure necessary to enable future government processes. France has already made a successful foray into eGovernment with online income tax filing, paperless medical expense claims and online VAT filing and payment. While measurable cost-benefit analysis has yet to be carried out, tremendous progress has been made through increased administrative productivity, lowered processing costs and an overall reduction in the country’s carbon footprint.
Lesson 2: The myth of progress from green IT
Green IT can act as a powerful catalyst to close the social divide and promote human development. Even so, transformation is a process brought about by human endeavor, not technological achievement. Technology can actually widen the gap between the most affluent and the most disadvantaged due to unequal access to technological means, social differences and inequalities, and uneven penetration of usage. That is why bringing the Internet into villages for example is not enough to transform them.
Drishtee project is a social enterprise focused on information and communication technologies. It provides a kiosk-based platform to deliver IT training and micro-financing and enable eCommerce in over 4,000 villages in rural India. Facilitating access to technology was not enough to achieve the social and economic transformations initially hoped for, however. Although it gave many agricultural communities access to expertise, the Drishtee project did not prove sustainable because deployment was not well rooted locally.
Lesson 3: Green IT is really about people and their roots
Green IT is particularly effective when the right approach is used to introduce it into people’s everyday lives. Change is easier to implement when modernization does not alter frames of reference and meaning. As one example, the eco-development of sugar cane in the Philippines demonstrates that green IT can be a powerful way to leverage sustainable development policy.
As the 11th largest producer of sugar cane worldwide, the Philippines enacted a law on biofuels in 2006, opening the way for the production of ethanol for fuel. Commercial production of sugar cane as the main source of raw material is aimed at helping the country to diversify its energy use and ensure energy security. Innovative new technologies were required to make Philippine sugar and biofuels more competitive, particularly on the world market. Convinced of its effectiveness, farmers eagerly adopted drip irrigation, which raised production yield considerably while ensuring sustainable use of water resources.
Lesson 4: Healthcare, a hotbed for green IT development
Telemedicine is one promising area where green IT will have a fundamental impact on personal and social well-being. A top priority for any emerging sustainable society is the ability to provide a better standard of healthcare to an ever-growing number of patients. Using telecommunication and information technologies, telemedicine makes it possible to provide remote assistance to medically dependent persons and perform detailed diagnosis in more isolated regions that specialists cannot visit.
Telemedicine enables people to overcome the geographical and socio-economic barriers that isolate medically underequipped rural regions by providing them with access to healthcare services through multimedia technologies. Advanced satellite technologies also make it possible to project the spread of major diseases such as malaria, dengue fever and cholera, as well as other diseases responsible for millions of deaths worldwide. These epidemics can now be tracked by satellite sensors and monitored through tele-epidemiology.
Lesson 5: Green IT, the key to sustainable traceability
The society of the 21st century will be one that is fundamentally mobile and traceable. For world health authorities, broad traceability can help determine the causes of food contamination, reducing the potential danger to the public at large. For private enterprise, traceability can help companies comply with standards and promote development to increase competitiveness and improve quality management.
Traceability can also contribute to an efficient digital world for public authorities and citizens.
Electronic identity, signature, time stamping and archiving are essential to ensuring legal protection and redress. In this regard, eID forms the key link in the chain of trust. When requested, the governments of
Portugal are required to disclose any electronic records maintained on individual citizens. Belgian citizens can also contest whether their government has the right or legal obligation to maintain certain records. In all instances, allowing traceability to serve citizens’ interests fosters civic behavior and self-regulation.
Lesson 6: The right to be different in a sustainable society
Based on the harmonious co-existence of diversity, the sustainable society is a social model that places people, human development and personal well-being at the center of how society is to be structured. Naturally, such a society believes that inspiring people to fulfill their potential is essential. This in turn means encouraging individuals to stand out from the crowd. With the advent of new technology, eID takes on special significance.
By managing the relationship between individual identity and all secondary identities without risk, eID contributes to the emergence of new, multi-layered identities. Rooted in the citizen’s social, hereditary and professional titles, eID enables management of secondary identities for each circle of trust to which the citizen belongs. Over and above national and regional borders, what really defines individuals are the circles of trust they belong to, in which they can freely express a multi-layered identity in all its richness and complexity.
Lesson 7: There can be no green IT without green spirit
Behind efforts to fight climate change rages a debate on how best to create a more sustainable, balanced society and close the very divides that current development trends threaten to widen. Social innovation and new business-minded ideas are key to boosting productivity, both in the public sector and in philanthropic endeavors. Green technology already enables thousands of local projects to work more efficiently and on a grander scale.
New behaviors and methods—particularly those involving emerging technologies or tools—can only take root in society when used by locally based intermediaries, however. These individuals are well placed to ensure that transformations catch on and achieve results. This green spirit is embodied by the United Kingdom’s Big Society, which fosters social action, community engagement and public sector reform through a framework of policies and strategies. The initiative’s expansion of cooperative social activity aims to benefit individuals, communities and society.
Lesson 8: Green IT, a powerful tool for sustainable governance
To effect lasting change, sustainable governance must transform uncertainty into opportunity, creating an exponential capacity for innovation and new initiatives. National governments realize that a sustainable society model for the post-industrial era still needs to be created. A new governmental organization is required at all levels to reconcile central mandates with local objectives. Green IT’s ease in facilitating point-to-point exchanges will prove instrumental to achieving cooperation at a local level rather than strictly conforming to policy set forth by central authorities.
Support from public authorities provides the critical resources needed to create communities of interest and local systems of government enabled by green IT. Like the UK’s Big Society, the Obama administration’s Office of Social Innovation and Civic Participation uses public policy as a catalyst of local civic engagement. The recently created agency promotes the use of new communications technology to identify and fund innovative community solutions with demonstrated results.
Lesson 9: Building a sustainable society means doing one’s part
The sustainable society is not merely a concept. It is rooted in ideal social values that determine the highest priorities for the enablement of human development. Both the 2010 Deutsche Post DHL study on green business trends and the third annual National Geographic/GlobeScan Consumer Greendex demonstrate that the general public is more than ready to take voluntary action to enter the era of sustainable development.
Consumers would appear to have regained control of the social value system by taking action to do their part to promote sustainability. Two-thirds of consumers surveyed say they would make purchasing choices based on the sustainable development policy of a company or brand. Respondents also said they expect greener alternatives to be available at the same price as conventional services in the near future. For their part, 56% of businesses surveyed believe consumers prefer greener solutions to cheaper ones.
Lesson 10: From window dressing to genuine green spirit
Public authorities and businesses are coming to realize that unsubstantiated claims are just window dressing for an increasingly sophisticated public and clientele. Today’s citizen has high expectations of concrete action to achieve sustainability goals. To effect the cultural change necessary to achieve a sustainable society, organizations must move beyond empty claims to produce products and services that can herald the society of the future.
For public authorities, eID programs and eGovernment services are vital because they go to the very heart of the bond between citizens and public services. These increasingly widespread services enable citizens to exercise their rights and become more involved in administrative procedures. eID is destined to become the standard-bearer and symbol for bridge-building between citizens and public services. Indeed, it represents renewed stock given to local needs, the hope for a brighter future, and the emergence of a new society over the long term. | <urn:uuid:635697e2-b267-40cd-a831-caaa7df02772> | CC-MAIN-2017-04 | http://www.gemalto.com/govt/inspired/green-it-egov-slide | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00288-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926684 | 2,154 | 3.25 | 3 |
Five Steps to Resolving Workplace ConflictBy Larry and Meagan Johnson | Posted 2010-12-21 Email Print
Conflicts often arise in a multigenerational environment, so it’s important for managers to understand the differences among age-groups.
For the first time in history, five generations are working side by side. Since conflicts often arise in a multigenerational environment, it’s important for managers to understand the differences among the generations.
Traditionals (born before 1945): “The Depression Babies” are influenced by the Great Depression and World War II. They are loyal and respectful of authority; stubbornly independent; dependable with a great work ethic; experienced with a lot to offer; high commitment to quality; great communication and interpersonal skills; able and willing to learn.
Baby Boomers (born between 1946 and 1964): “The Woodstock Generation” is influenced by the Sixties, the Vietnam War and postwar social change. They are interested in spirituality and making a difference; pioneers of antidiscrimination policies; well-educated and culturally literate; questioners of authority; good at teamwork, cooperation and politics; seekers of financial prosperity; not in a rush to retire early.
Generation X (born between 1965 and 1980): “The Latchkey Generation” is influenced by pop culture and may be children of divorce. They are highly independent workers who prefer to fly solo; responsible, family-focused; little patience for bureaucracy and what they consider nonsensical policies; constantly preparing for potential next job; hardworking and wanting to contribute; expect to be valued and rewarded; thrive on adrenaline-charged assignments.
Generation Y (born between 1981 and 1995): “The Entitled Generation” is influenced by technology and doting parents. They are into friends and socializing; at ease with technology and multitasking; used to hovering, involved authorities; value social responsibility; expect praise and notice; need constructive feedback routinely; want work-life balance; will stay put if their loyalty is earned.
Linksters (born after 1995) “The Facebook Crowd” is influenced by a chaotic, media-saturated world. They are still living at home; used to taking instruction; best friends with their parents; live and breathe technology; tuned in to pop music and TV culture; tolerant of alternative life styles; involved in green causes and social activism; loathe dress codes.
Resolving Intergenerational Conflicts
Here are five tips for dealing with intergenerational friction:
1. Look at the generational factor. There is almost always a generational component to conflict: Recognizing this offers new ways to resolve it. For example, Traditionals and Baby Boomers don’t like to be micromanaged, while Gen Y employees and Linksters crave specific, detailed instructions about how to do things and are used to hovering authorities. Baby Boomers value teamwork, cooperation and buy-in, while Gen X individuals prefer to make unilateral decisions and move on—preferably solo.
2. Air different generations’ perceptions. When employees of two or more generations are involved in a workplace conflict, invite them to share their perceptions. For instance, a Traditional employee may find a Gen Y worker’s lack of formality and manners offensive, while a Gen Y staffer may feel “dissed” when an older employee fails to respect his or her opinions and input.
3. Find a generationally appropriate fix. Work with the set of workplace attitudes and expectations that come from everyone’s generational experience. For instance, if you have a knowledgeable Boomer who is frustrated by a Gen Y employee’s lack of experience and sense of entitlement, turn the Boomer into a mentor. Or if you have a Gen X individual who is slacking off, give him or her a super-challenging assignment linked to a tangible reward.
4. Find commonality. Shared and complementary characteristics can be exploited when dealing with intergenerational conflict. For instance, Traditionals and Gen Y employees both tend to value security and stability. Traditionals and Boomers tend to resist change—but crave training and development. Gen X and Gen Y employees place a high value on workplace flexibility and work-life balance. Boomers and Linksters are most comfortable with diversity and alternative life styles. Gen Y employees and Linksters are technologically adept and committed to socially responsible policies.
5. Learn from each other. Traditionals and Boomers have a wealth of knowledge that younger workers need. Gen X employees are known for their fairness and mediation abilities. Gen Y workers are technology wizards. And Linksters hold clues to future workplace, marketing and business trends.
Organizations that make an effort to reconcile the differences and emphasize the similarities among the various generations will be rewarded with intergenerational harmony and increased productivity.
Larry and Meagan Johnson, a father-daughter team, are partners in the Johnson Training Group. They are experts on managing multigenerational workplaces, and are co-authors of Generations, Inc.: From Boomers to Linksters—Managing the Friction Between Generations at Work. | <urn:uuid:6fc0c0be-5ab5-4752-8ddc-9d277cb9d530> | CC-MAIN-2017-04 | http://www.baselinemag.com/careers/Five-Steps-to-Resolving-Workplace-Conflict | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00342-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946285 | 1,054 | 2.765625 | 3 |
Gort's Gouda announced as source of E. Coli outbreak in Canada
Wednesday, Sep 25th 2013
Cheese, much like wine, is considered a treat for many connoisseurs who partake in the wide variety of products. However, due to certain properties in some cheese products, there are possibilities for the outbreak of foodborne illness for items that are raw or improperly stored. Gort's Gouda Cheese Farm based in Salmon Arm, British Columbia, was found to have caused an E. coli outbreak that killed one person and sickened 20 others, CBC News reported.
After the outbreak, many were left to ask what had happened and why it had taken over a month to issue a warning after the death of one person. The Canadian Food Inspection Agency finally released an advisory, noting that the recall was part of an ongoing investigation. It listed several of the cheese provider's products as potentially being the cause for E. coli. The main reason for contamination was the raw milk used to make the cheese. Due to the lack of packaging identifiers, the products may have been sold as a typical item to an unsuspecting consumer.
Once infected with E. coli, many victims don't seek treatment unless it is a severe case. Most healthy adults recover within a week, however, children and older adults should consult a doctor as they can be more affected by the bacteria, according to Mayo Clinic. For the person who was killed by the E. coli infection, the family continued to eat the cheese without knowing that it was the cause, which could have led to further complications. In many foodborne illness cases, it takes months to find the source, if it is found at all. This requires officials to conduct interviews with those who report being sick and display clearly identifiable symptoms. They must also be able to narrow down victims' diets to a single commonality.
Safely consuming dairy products - particularly cheese - requires individuals to adhere to numerous storage best practices. Here are a few things to observe when storing cheese in the refrigerator:
- Keep it cool
For the staple of American cheese as well as other pre-sliced packaged products, there won't be as much work in keeping it fresh for consumption. When using a temperature sensor for more exotic varieties that come wrapped in paper, the environment should be kept between 35 and 45 degrees Fahrenheit with a high humidity level, according to the American Cheese Society. Freezing the products is generally discouraged as natural cheeses could lose their texture and flavor. Storing the cheese in the freezer should only be done when the product will be used strictly for cooking as the cheese will begin to crumble once it thaws.
- Wrap well
While typical sandwich cheese is already prepackaged, fresh products require storage in other materials. Nora Singley of The Kitchn urges not to use plastic wrap as it will suffocate the flavor. Instead, using cheese paper is the best way to go. Putting it in waxed paper then loosely in plastic wrap will also work if the cheese paper isn't available. The cheese should be labeled and dated to distinguish it from any other cheese products, this way the expiration date will be better noted and consumers will be able to use it.
- Cook appropriately
Many people like adding fresh grated cheese to their meals, however, it must be done properly for the full effect. Adding at the end will allow the cheese to stay cool, and grating it when it's cold is easier, according to the American Cheese Society. Cooking on the stovetop should be kept on low to medium heat for the best result.
The E. coli outbreak is a reminder how proper labeling, prompt warnings and appropriate storage is crucial to maintaining products and keeping consumers healthy. Observing best practices for cheese storage will provide more enjoyment and deter any foodborne illnesses. | <urn:uuid:9f686b0c-e2b7-4c2f-a449-d4c0bd2e4db5> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/cold-storage/gorts-gouda-announced-as-source-of-e.-coli-outbreak-in-canada-513182 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00371-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969335 | 774 | 2.78125 | 3 |
At present, SQL Server 2008 continues to support two modes for validating connections and authenticating access to database resources: (Windows Authentication Mode) and (SQL Server and Windows Authentication Mode) also known as "Mixed Mode". Both of these authentication methods provide access to SQL Server 2008 and its resources. Lets first examine the differences between the two authentication modes.
Windows Authentication Mode
Windows Authentication Mode is the default and recommended authentication mode. It tactfully leverages Active Directory user accounts or groups when granting access to SQL Server. In this mode, Database Administrators are given the opportunity to grant domain or local server users access to the database server without creating and managing a separate SQL Server account. Also worth mentioning, when using Windows Authentication mode, user accounts are subject to enterprise wide policies enforced by the Active Directory domain such as complex passwords, password history, account lockouts, minimum password length, maximum password length and the Kerberos protocol. These enhanced and well defined policies are always a plus to have in place.
SQL Server and Windows Authentication (Mixed) Mode
SQL Server and Windows Authentication Mode uses either Active Directory user accounts or SQL Server accounts when validating access to SQL Server. SQL Server 2005 introduced a means to enforce password and lockout policies for SQL Server login accounts when using SQL Server Authentication. SQL Server 2008 continues to do so. The SQL Server polices that can be enforced include password complexity, password expiration, and account lockouts. This functionality was not available in SQL Server 2000 and was a major security concern for most organizations and Database Administrators. Essentially, this security concern played a role in helping define Windows Authentication as the recommended practice for managing authentication in the past. Today, SQL Server and Windows Authentication Mode may be able to successfully compete with Windows Authentication mode.
Which Mode should be Used to Harden Authentication?
Once the Database Administers are aware of the authentication methods, the next step is choosing one to manage SQL Server security. Although, SQL Server 2008 now has the ability to enforce policies, Windows Authentication Mode is still the recommended alternative for controlling access to SQL Server because this mode carries added advantages; Active Directory provides an additional level of protection with the Kerberos protocol. As a result, the authentication mechanism is more mature, robust and administration can be reduced by leveraging Active Directory groups for role based access to SQL Server.
Nonetheless, this mode is not practical for everything out there. Mixed Authentication is still required if there is a need to support legacy applications or clients coming in from a platform other than windows and there exist a need for separation of duties. To summarize it is common to find organizations where the SQL Server and Windows team do not trust one another. Therefore, a clear separation of duties are required as SQL Server accounts are not managed via Active Directory.
Using Windows authentication is a more secure choice, however, if Mixed Mode authentication is required then make sure to leverage complex passwords and the SQL Server 2008 password and lockout policies to further bolster security. | <urn:uuid:06d41bd3-e1db-4665-9184-62dfeb7fcebf> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2350774/microsoft-subnet/which-sql-server-2008-authentication-mechanism-should-i-choose-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00187-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.876523 | 598 | 2.53125 | 3 |
Kaspersky Lab, a leading developer of Internet threat management solutions that protect against all forms of malicious software including viruses, spyware, hackers and spam, today reveals that the number of new malicious programs detected in 2009 was virtually the same as 2008, but warns of increasingly sophisticated malware.
Kaspersky Lab reports that the number of new malicious programs detected in 2009 was virtually the same as in 2008 – approximately 15 million and each day a further 30,000 new threats are being detected. Currently Kaspersky Lab holds 33.9 million unique malicious files.
Alexander Gostev, Director of the Global Research and Analysis Team at Kaspersky Lab cites one of the most significant incidents in the IT underworld during 2009 was a slowdown in the growth of newly emerging threats. However, he is quick to point out that malware is becoming more sophisticated, along with a growth in the number of global epidemics, infected web resources (one in 150 websites is currently spreading infections), a variety of scams, and the development of malware for alternative platforms and devices, are all major sources of digital pollution.
During 2009, there were eight malware programs that affected more than one million computers, most notably the polymorphic worm Kido (otherwise known as Conficker) that reached over seven million infections and is expected to remain an active global epidemic throughout 2010. Gostev notes one positive outcome in the creation of the Conficker Working Group, which was the first example of broad international cooperation to deal with such a widespread threat. Although attracting less notoriety the Gumblar self-spreading software botnet came in waves during 2009 and affected tens of thousands of computers by re-directing Internet users from legal websites to illegal malicious servers, or redirecting to infected but legal websites.
Gostev also notes a boom in Internet-based fraud and specifically fake anti virus software, with figures from the Internet Crime Complaint Center estimating revenues from fake anti virus reaching $150 million in 2009.
Kaspersky Lab also reports the evolution of threats targeting social networking sites such as Facebook and Twitter, as another major trend throughout 2009. Stefan Tanase, Kaspersky Lab's Senior Security Researcher, EEMEA, explains that at the current time there is a rise in these threats to a new level, involving automated targeted attacks against users.
Looking towards the rest of 2010 Kaspersky Lab forecasts a significant increase in attacks through P2P networks, the emergence of more 'grey' schemes in the botnet services market, as well as a rise in the number of attacks via Google Wave. It is also anticipating a rise in the number of mobile device threats, exploiting the popularity of Android and the iPhone.
Magnus Kalkuhl one of Kaspersky Lab's Senior Virus Analysts explains that IT security industry has made a quantum leap in signature-based detection and proactive defence technology in recent years and today vendors are very well placed to defend against the evolution of the threat landscape.
To find out more about computer threats visit: http://www.kaspersky.co.uk/threats
To read the latest security news please visit: http://threatpost.com | <urn:uuid:6613b8b5-9d19-4b2a-9c5f-c38079bb7bd0> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2010/Kaspersky_Lab_Overview_of_the_Cyberthreat_Landscape_During_2009 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945567 | 640 | 2.53125 | 3 |
2.4.7 What are the most important attacks on stream ciphers?
The most typical use of a stream cipher for encryption is to generate a keystream in a way that depends on the secret key and then to combine this (typically using bitwise XOR) with the message being encrypted.
It is imperative the keystream "looks" random; that is, after seeing increasing amounts of the keystream, an adversary should have no additional advantage in being able to predict any of the subsequent bits of the sequence. While there are some attempts to guarantee this property in a provable way, most stream ciphers rely on ad hoc analysis. A necessary condition for a secure stream cipher is that it pass a battery of statistical tests which assess (among other things) the frequencies with which individual bits or consecutive patterns of bits of different sizes occur. Such tests might also check for correlation between bits of the sequence occurring at some time instant and those at other points in the sequence. Clearly the amount of statistical testing will depend on the thoroughness of the designer. It is a very rare and very poor stream cipher that does not pass most suites of statistical tests.
A keystream might potentially have structural weaknesses that allow an adversary to deduce some of the keystream. Most obviously, if the period of a keystream, that is, the number of bits in the keystream before it begins to repeat again, is too short, the adversary can apply discovered parts of the keystream to help in the decryption of other parts of the ciphertext. A stream cipher design should be accompanied by a guarantee of the minimum period for the keystreams that might be generated or alternatively, good theoretical evidence for the value of the lower bound to such a period. Without this, the user of the cryptosystem cannot be assured that a given keystream will not repeat far sooner than might be required for cryptographic safety.
A more involved set of structural weaknesses might offer the opportunity of finding alternative ways to generate part or even the whole of the keystream. Chief among these approaches might be using a linear feedback shift register to replicate part of the sequence. The motivation to use a linear feedback shift register is due to an algorithm of Berlekamp and Massey that takes as input a finite sequence of bits and generates as output the details of a linear feedback shift register that could be used to generate that sequence. This gives rise to the measure of security known as the linear complexity of a sequence; for a given sequence, the linear complexity is the size of the linear feedback shift register that needs to be used to replicate the sequence. Clearly a necessary condition for the security of a stream cipher is that the sequences it produces have a high linear complexity. RSA Laboratories Technical Report TR-801 [Koç95] describes in more detail some of these issues and also some of the other alternative measures of complexity that might be of interest to the cryptographer and cryptanalyst.
Other attacks attempt to recover part of the secret key that was used. Apart from the most obvious attack of searching for the key by brute force, a powerful class of attacks can be described by the term divide and conquer. During off-line analysis the cryptanalyst identifies some part of the key that has a direct and immediate effect on some aspect or component of the generated keystream. By performing a brute-force search over this smaller part of the secret key and observing how well the sequences generated match the real keystream, the cryptanalyst can potentially deduce the correct value for this smaller fraction of the secret key [Koç95]. This correlation between the keystream produced after making some guess to part of the key and the intercepted keystream gives rise to what are termed correlation attacks and later the more efficient fast correlation attacks.
Finally there are some implementation considerations. A synchronous stream cipher allows an adversary to change bits in the plaintext without any error-propagation to the rest of the message. If authentication of the message being encrypted is required, the use of a cryptographic MAC might be advisable. As a separate implementation issue synchronization between sender and receiver might sometimes be lost with a stream cipher and some method is required is ensure the keystreams can be put back into step. One typical way of doing this is for the sender of the message to intersperse synchronization markers into the transmission so only that part of the transmission which lies between synchronization markers might be lost. This process however does carry some security implications. | <urn:uuid:0b75abd3-c95c-458a-91de-63c91e69ce3c> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/important-attacks-on-stream-ciphers.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931661 | 904 | 3.6875 | 4 |
Can cell phones prevent distracted driving?
- By Kevin McCaney
- Dec 14, 2011
The National Transportation Safety Board is going up against a cultural wave of mobile computing in its call for all 50 states to enact laws banning drivers from using cell phones and other electronic devices even if their devices are hands-free.
And it’s asking makers of mobile devices to help by adding features that would prevent drivers from talking, texting or otherwise using their devices.
NTSB’s proposal includes a pitch to CTIA and the Consumer Electronics Association to “encourage the development of technology features that disable the functions of portable electronic devices within reach of the driver when a vehicle is in motion.”
Highway safety chief: Car not a ‘mobile device’
Death (and close calls) by texting
The proposal also recommends that features allow use during emergencies and are capable of identifying where someone is sitting in a car, in order to allow device use by passengers.
Can device-makers deliver?
There already are apps available that can do this to varying degrees. The question could be whether device-makers would install them by default, whether they can be made to work automatically, and whether they can still allow for emergency and passenger use.
Many of the current products are aimed at companies and other organizations for managing their fleets of commercial vehicles, or for parents managing their teenagers’ use while driving.
CellControl, for example, is designed to be a fleet management tool, linking a phone to a specific vehicle (being installed on both) and blocking the ability to text while the vehicle is in motion. It can connect with several phones in a vehicle but wouldn’t necessarily prevent passengers from using their phones. It also wouldn’t prevent the driver from using a second phone.
ZoomSafer’s software uses a phone’s Global Positioning System receiver to detect when it’s in motion (faster than 5 or 10 mph) and can be preset, on a Web page account, to block texts, e-mails, Web browsing or phone calls.
But like other apps that rely on GPS signals to activate, ZoomSafer doesn’t differentiate between drivers and passengers, unless you go to your account page ahead of time and set up a time for making calls while in motion, so it can shut off when you’re riding in a car, bus, train or plane.
An app such as tXtBlocker allows passengers to unblock the phones by solving a puzzle, although it would seem a driver could do that, too. In catering to the parent/boss appeal, tXtBlocker also allows the administrator to track the cell phone.
Another app, iZup, turns off essentially all functions of a phone when it’s in motion and for several minutes after it has stopped. Otherwise it allows only calls to 911 and a couple other preset numbers. Only the administrator can unblock the phone.
At the moment, the available apps can significantly reduce distractions from cell phones, although they all have limitations — and they all require the consent of the user in one way or another.
Even if device-makers comply, NTSB still faces the task of changing users' attitudes, which, in some cases, haven’t changed much even in states that have fairly strict laws.
Most states have some form of regulations on using cell phones while driving, although several limit restrictions to novice drivers and school bus drivers. Thirty-five states, plus the District of Columbia and Guam, ban texting while driving, but only nine ban handheld calling use for all drivers. No state bans all cell phone use, including hands-free, for all drivers, which is what NTSB is proposing.
The Governors Highway Safety Association offers a chart of current laws on its website.
NTSB marshaled plenty of statistics in making its proposal, citing various studies that concluded: Distracted driving played a part in an estimated 3,092 highway deaths in 2010; drivers using cell phones fail to see up to 50 percent of the information in front of them; and someone using a cell phone when driving is four times more likely to have a crash that will result in going to the hospital.
It also presented statistics that show how common distracted driving is, which could also underscore the difficulty of changing motorists' habits, even if bans and blocking mechanisms are in place.
A study by AAA Mid-Atlantic, for instance, found that more than half of the approximately 210,000 individuals who drive on the Capital Beltway around Washington. D.C., every day do so while distracted by cell phones.
Another study found that, in a typical daytime moment in 2010, 5 percent of drivers (in 660,000 vehicles) were using a handheld cell phone.
And a national survey by AAA Foundation for Traffic Safety found that 69 percent of drivers reported having talked on their cell phones while driving in the past 30 days, and 24 percent admitted to texting or e-mailing while driving.
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:47ee60bb-8f07-485f-95d8-6ed831c8e1cc> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/12/14/ntsb-cell-phone-ban-driving-apps.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00123-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951082 | 1,049 | 2.53125 | 3 |
Kaspersky Lab, a leading developer of secure content management solutions, has successfully patented a groundbreaking piece of technology in the USA that allows the potential scale of malware epidemics to be accurately predicted in order to prevent them from spreading.
Today's malware has the capacity to spread like wildfire, with millions of computers infected in an instant as an epidemic sweeps across the Internet. This can take down huge swathes of infrastructure, bringing information highways to a standstill and leaving systems vulnerable to data leakage which in turn opens the door to large scale fraud. Detecting malware on every computer that is infected during an epidemic has little or no effect. What is needed is a reliable method for estimating the potential scale and direction of an epidemic, an early warning system, and that is exactly what the new technology developed by Kaspersky Lab's Yury Mashevsky, Yury Namestnikov, Nikolay Denishchenko and Pavel Zelensky, is capable of doing. The technology was granted Patent No. 7743419 by the US Patent and Trademark Office on 22 June, 2010.
The patented new technology works by analyzing statistical data about threats received from a global monitoring network. The network tracks malware downloads, hacker attacks and other similar security incidents, recording the times that they occur, their source and geographical location etc. Emerging epidemics can then be identified by the number of incidents occurring during a specific period in one location or another. This method makes it easy to pinpoint the source of an epidemic and forecast its likely propagation pattern.
Protective measures can then be developed and implemented by those countries in the path of the epidemic, slowing the proliferation rate considerably and providing effective damage limitation. The monitoring, detection and analysis of data is performed in real time, making the patented technology especially effective against malware epidemics that spread rapidly.
"The new system has a number of advantages over other similar solutions. This technology contains a subsystem for tracing the source of the threat, a module that generates protective measures and a subsystem that simulates the spread of an epidemic," noted Nadia Kashchenko, Chief Intellectual Property Counsel at Kaspersky Lab.
Kaspersky Lab currently has more than 50 patent applications pending in the USA, Russia, China and Europe. These relate to a range of unique information security technologies developed by the Company's personnel. | <urn:uuid:5322648a-c32e-4a42-bd1e-651e38c31129> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/business/2010/Kaspersky_Lab_patents_advanced_technology_for_combating_malware_epidemics | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934554 | 469 | 2.90625 | 3 |
Top 6 vulnerabilities found via penetration tests
- By Andrew Whitaker
- May 22, 2014
The basement-dwelling teenager poring over lines of scrolling code as he rips through the security of a government or corporate server is a popular trope in Hollywood movies. Although this widespread image of the hacker isn’t accurate, the threat of cyberattacks against government networks is very much a real world concern.
In order to be more prepared for cybersecurity breaches, agencies should consider a comprehensive penetration test – ethical hacking with the goal of attacking or bypassing the established security mechanisms of an agency’s systems, and using the same tactics as a malicious intruder.
Penetration testing can be conducted by way of a cyberattack or by exploiting a physical vulnerability of an organization.
After gaining access to a system, the penetration testers will report back with detailed information about what vulnerabilities were exploited, how they were able to breach the system, what level of data was accessed and how to prevent future exploitation. The following is a compilation of the six most common vulnerabilities found during penetration tests:
Pass-the-hash. Hashing is the process of taking data of an arbitrary length and manipulating it into a predetermined length. Most password challenge and response systems use hashing to convert a plaintext password into a string of letters and numbers that would appear meaningless and random to the common user. A malicious intruder would develop a program to intercept the hashed data as it is being relayed and could then use that hashed data to fake authentication and gain access to an otherwise secure system.
Password reuse. Anyone who reuses passwords across multiple platforms can fall victim to further attacks when a password that was compromised in a data-loss incident is used to gain access to different, otherwise secure platforms that use the same password.
Patch management. Cyber criminals commonly exploit known weaknesses for which patches have already been released. IT managers who have not kept their patches up to date, particularly with the updating of third-party applications like Java and Adobe, have opened themselves up to this kind of attack.
Unsupported legacy software. Closely related to improper patch management, using unsupported software opens the agency to a world of vulnerability. With Microsoft’s recent withdrawal of support for Windows XP, the company will stop issuing patches to fix vulnerabilities found in the operating system, leaving XP a prime target for attack.
Insecure in-house developed applications. Internally developed applications are not generally as rigorously tested as popular third-party programs. One major category of vulnerability is the input validation flaw, where an outside or client-facing input overrides the legitimate functioning of a subsystem. These include cross–site scripting for websites and SQL injection for applications.
User awareness. One of the simplest methods for cyber criminals to exploit is the phishing scheme, whereby an attacker tricks the user into revealing personal information. One of the more basic approaches is to pose as a system’s administrator and then demand a user’s password for “validation.”
A more advanced method is to fraudulently copy the interface and layout of a targeted website or application and trick the user into entering his username and password into the fake website. This will often be accomplished by providing the target with a misleading URL address or by actually interfering with the display functions in the address bar, so that the user sees a trusted URL when visiting a fake website.
The majority of cyber attackers are not the Hollywood variety. Cyber criminals most often rely on exploiting known vulnerabilities and improper security practices; they prey on the non-technical and the misinformed. Conscientiously keeping up-to-date with security updates and patches and adhering to the basic common-sense practices of cybersecurity will help keep an agency’s systems and its users protected from the majority of attempted cyberattacks.
But because agency IT departments, “don’t know what they don’t know,” they should consider penetration testing, enabling an ethical hacker to identify and remediate weaknesses before the “bad actors” steal the show.
Andrew Whitaker is the director of the cyber attack penetration division of Knowledge Consulting Group. He can be reached at email@example.com. | <urn:uuid:a163c36e-71c2-4a56-905f-cdfeed13832d> | CC-MAIN-2017-04 | https://gcn.com/articles/2014/05/22/pen-testing.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939056 | 864 | 2.875 | 3 |
110 blocks are one type of punch blocks used to connect sets of wires in a structured cabling system. The “110” designation is also used to describe a type of insulation-displacement connector used to terminate twisted pair cables which uses a similar punch-down tool as the older 66 block. People are preffered to 110 blocks rather than 66 blocks in high-speed networks because they introduce less crosstalk and allow much higher density terminations, and meet higher bandwidth specifications. Many 110 blocks are certified for use in Category 5 and Category 6 wiring systems, even Cat6a. The 110 block provides an interconnection between patch panels and work area outlets.
Modern homes usually have phone service entering the house to a single 110 block, when it is distributed by on-premises wiring to outlet boxes throughout the home in series or star topology. At the outlet box, cables are punched down to standard RJ-11 sockets, which fit in special faceplates. The 110 block is often used at both ends of Category 5 cable runs through buildings. In switch rooms, 110 blocks are often built into the back of patch panels to terminate cable runs. At the other end, 110 connections may be used with keystone modules that are attached wall plates. In patch panels, the 110 blocks are built directly onto the back where they are terminated. Category 6 – 110 wiring blocks are designed to support Category 6 cabling applications as specified in TIA/EIA-568-B.2-1 with unique spacing that provides superior NEXT performance.
Both 66 and 110 blocks are insulation displacement connection (IDC) devices, which are key to reliable data connections. 66-clip blocks have been the standard for voice connections for many years. 110 blocks are newer and are preferable for computer work, for one thing, they make it easier to preserve the twist in each pair right up to the point of connection.
1. Although 66-clip blocks historically have been used for data, they are not an acceptable connection for Category 5 or higher cabling. The 110-type connection, on the other hand, offers: higher density (more wiring in a smaller space) and better control (less movement of the wires at the connection). Since more and more homes and businesses call for both voice and data connections, it is easy to see why it makes sense to install 110-type devices in most situations. Most cat5 jacks also use type 110 terminals for connecting to the wire.
2. The 110 block is a back-to-back connection whereas the 66 block is a side-by-side connection. The 110 block is a smaller unit featuring a two-piece construction of a wire block and a connecting block. Wires are fed into the block from the front, as opposed to the side entry on the 66 block. This helps to reduce the space requirements of the 110 block and reduce overall cost. The 110 block’s construction also provides a quiet front, meaning there is insulation both above and around the contacts. Since the quiet front is lacking on the 66 blocks, a cover is often recommended.
3. 110 blocks have a far superior labeling system that not only snaps into place but is erasable. This is particularly important for post-installation testing and maintenance procedures.
110 Connecting Blocks enable you to quickly organize and interconnect phone lines and communication cable, preserve the twists in each pair right up to the connection point. Plus, most networking cable equipment also use 110 type terminals for cable connections. | <urn:uuid:b500dcf2-45c2-4c77-9cca-9fac663f64a8> | CC-MAIN-2017-04 | http://www.fs.com/blog/modern-110-connecting-blocks-for-data-networking.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93617 | 713 | 2.796875 | 3 |
Black Box Explains Visual Inspection Probes
One method of testing fiber optic cable is a visual inspection for continuity. A visual inspection probe, an essential tool for people working with optical components or systems, is a portable video microscope used to inspect fiber optic terminations for cleanliness or damage. It’s used to make sure the fibers are not broken by tracing a fiber from one end to the other.
This tool is like an oversized pen-type flashlight with a lightbulb or LED source. It’s very simple to use. Just attach the tracer to the cable and then check the other end of the fiber to see if the light is transmitted. If there’s no light, there is a bad connection or section of cable.
It can also be used to check hard-to-reach connectors that are installed on the back of patch panels or inside hardware devices, saving you the need to access the back side of the panels or to disassemble hardware devices before inspecting fiber optic terminations. The probe is inserted through the bulkhead adapters.
Visual inspection probes also enable you to inspect connector endfaces for debris or damage, the leading causes of transmission failure, prior to mating them.
For example, the 400x power probe and 3.5-inch display of the Visual Inspection Probe make it easy to view small particles that may exist on connectors. Using adapter tips, you can inspect patch cords, pigtails, and cable assemblies as well.
A visual inspection probe provides a fast and effective way to install, troubleshoot, or maintain fiber optic patch panels. | <urn:uuid:b141c5a1-4773-4648-b546-e6913d3d1622> | CC-MAIN-2017-04 | https://www.blackbox.com/en-au/products/black-box-explains/visual-inspection-probes | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00243-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863233 | 325 | 2.640625 | 3 |
Returns the geomagnetic field for a location at the specified date.
int wmm_get_geomagnetic_field(const wmm_location_t *loc, const struct tm *date, wmm_geomagnetic_field_t *field)
The geographic location to be used in the calculation of the magnetic field.
The date to be used in the calculation of the magnetic field.
The geomagnetic field for the given location and date.
Library:libwmm (For the qcc command, use the -l wmm option to link against this library)
The geomagnetic field is returned in field.
If the latitude_deg or longitude_deg values in loc exceed their ranges, they will be changed to fit into their respective range.
0 if successful, -1 if an error occurred, 1 if loc was altered to fit into the magnetic model range.
Last modified: 2014-05-14 | <urn:uuid:7c7de9f2-94e9-4aad-a1af-52aef4f12e7b> | CC-MAIN-2017-04 | http://developer.blackberry.com/native/reference/core/com.qnx.doc.wmm.lib_ref/topic/wmm_get_geomagnetic_field.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.731924 | 199 | 2.6875 | 3 |
Computational failures take a steep toll in the HPC sciences. Events such as broken node electronics, software bugs, insufficient hardware resources, and communication faults stymy work on expensive machines and bedevil computer scientists. An article at Deixis Magazine chronicles the work of a Pacific Northwest National Laboratory researcher who is developing load balancing techniques to keep calculations running as smoothly as possible even in the wake of unforeseen mishaps.
Addressing fault tolerance grows more urgent as core counts proliferate and machines cross over from petaflop to exaflop-class territory. Sriram Krishnamoorthy and his research team have developed a technique called selective recovery that aims to minimize the negative impact of faults.
“The basic idea of dynamic load balancing is you can react to things like faults online,” Krishnamoorthy reports. “When a fault happens, we showed you could actually find what went bad due to the fault, recover that portion and only that portion and then re-execute it” while everything else continues to execute.
Under the current paradigm, when a system failure occurs, the process rolls back to the last checkpoint and the tasks are re-executed. It’s a tried-and-true method, but it is time-consuming and resource-intensive.
“When one process goes bad and you take a million of them back to the last good checkpoint, it’s costly,” Krishnamoorthy says. “We showed that the cost of a failure is not proportional to the scale at which it runs.”
Krishnamoorthy and his colleagues proposed a new framework, called Task Scheduling Library (TASCEL) for Load Balancing and Fault Tolerance, which they described at the June International Supercomputing meeting. In the event of a failure, only the problematic section is rerun while the computer continues without interruption. The method employs a system of checks that ignores duplications and synchronizes results for a given task. The overall job is tracked via data structures that are globally accessible, instead of being stored in local memory, which reduces communication costs.
The framework was initially developed to enable computational chemistry codes to make the jump from smaller cluster to highly-parallel many-core machines. In a recent success story, the research team ran a computationally-intensive code on 210,000 processor cores of Titan at Oak Ridge’s Leadership Computing Facility, achieving more than 80 percent parallel efficiency.
Now the team is working towards broadening the framework so it can apply to any algorithm with load-imbalance issues. The goal of exascale is certainly a big motivator and it means that Krishnamoorthy must work with one foot in the present and the other in the future. To establish exascale machines within the next 8 to ten years, load imbalance and fault tolerance will require this kind of dedicated attention.
Judging from the community support that his research has garnered, it appears Krishnamoorthy is on the right track. The computer scientist was awarded a DOE Early Career Research Program award, which provides $2.5 million over five years to explore exascale computing strategies. And recently, Krishnamoorthy was also recognized with PNNL’s 2013 Ronald L. Brodzinski Award for Early Career Exceptional Achievement. | <urn:uuid:8ae49d6f-245e-4d65-8106-7980d529c61c> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/11/04/reining-restarts-selective-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940579 | 682 | 2.796875 | 3 |
Definition: A priority queue implemented with a variant of a binary tree. The root points to its children, as in a binary tree. Every other node points back to its parent and down to its leftmost (if it is a right child) or rightmost (if it is a left child) descendant leaf. The basic operation is merge or meld, which maintains the heap property. An element is inserted by merging it as a singleton. The root is removed by merging its right and left children. Merging is bottom-up, merging the leftmost edge of one with the rightmost edge of the other.
Generalization (I am a kind of ...)
Aggregate child (... is a part of or used in me.)
binary tree, heap property, meld.
J. Francon, G. Viennot, and J. Vuillemin, Description and analysis of an efficient priority queue representation, Proc. 19th Annual Symp. on Foundations of Computer Science. IEEE, 1978, pages 1-7.
R. Nix, An Evaluation of Pagodas, Res. Rep. 164, Dept. of Computer Science, Yale Univ. 1988?
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 16 November 2009.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "pagoda", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/pagoda.html | <urn:uuid:69faa164-8dd5-4c07-aae3-bbff42bd1668> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/pagoda.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00261-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.865116 | 360 | 3.203125 | 3 |
Cost accounting refers to a set of activities that include collection and analysis of data that is related to production/service delivery process of an organization. The purpose of these activities is to identify various fixed & variable costs, and notice which costs can be minimized or removed to achieve a better profitability rate.
There are several benefits of a cost accounting system. First of all, a good system minimizes the time and efforts that are usually employed in the cost accounting process. It also brings consistency in the operations and ensures that the information captured is also stored so that it can be referred to in future.
Cost accounting also makes it easier to track the hidden costs that go unrecorded, and thereby unnoticed. Over a period of time, these hidden costs can cause substantial loss to an organization. The system brings into notice these hidden costs and allows the management to take a decision, accordingly.
Cost accounting is different from financial accounting. While the main purpose of financial accounting is to present the financial position of an organization, cost accounting brings the costs involved in the production/service delivery to the management. Results obtained from the former accounting system can be made available to the general public and stakeholders. Results obtained from the latter system, however, are meant for internal use and by specific individuals or departments.
In a healthcare facility, the importance of an efficient cost accounting system cannot be undermined. The healthcare services market needs to be timely and consistent, in which cost accounting plays a significant role. By eliminating the unwanted expenses and processes, healthcare facilities can drastically bring down the healthcare costs.
In the Asian region, the cost accounting system market is witnessing growth on account of improved care quality and clinical outcomes, high returns on investment on the systems implemented in a facility, and an increasing need to integrate the healthcare systems.
This market is segmented on the basis of companies, components, deployments, end-users, and macro indicators.
The Asian cost accounting system market report is based on the information collected through extensive primary and secondary research. Data and facts have been collected and presented in a logical manner to illustrate the current and future trends of this market. The report analyzes the market shares of leading companies and the strategies being implemented by them to enhance their market share and presence. These strategies include mergers & acquisitions, partnerships, new product launches, capacity expansions, investments in R&D, and others.
Please fill in the form below to receive a free copy of the Summary of this Report
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
North American Non-Clinical Information System Market
North America is the largest market for non-clinical information systems globally, and is expected to grow at a CAGR of 8.1% from 2013 to 2018, to reach a value of $8,905.5 million in 2018. This market is segmented into sub-segments, components, deployments, end users, applications, and geographies.
European Non-Clinical Information Systems Market
The European non-clinical information systems (NCIS) market has been segmented by types, deployment, components, end users, applications, and geographies. Globally, this is the second-largest NCIS market, and is expected to grow at a CAGR of 6.3% from 2014 to 2019.
Asian Non-Clinical Information Systems Market
Asia is the fastest-growing market for non-clinical information systems, and was valued at $1,336.4 million in 2013. It is expected to grow at a CAGR of 7.2%, from 2013 to 2018, to reach a value of $1,892.2 million in 2018. This market can be segmented by companies, deployments, components, end users, and macro indicators. | <urn:uuid:3d9e22e7-b711-464f-9122-86c4ce6a9fb8> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market/asia-cost-accounting-system-5637520024.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00261-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945152 | 777 | 2.859375 | 3 |
Lightweight Directory Access Protocol
The purpose of LDAP introduction was to supply a directory service offering protocol. That protocol is active on a layer, resting on top of the TCP/IP stack. Its mechanism is available for the connection, searching, and modification of the Internet directories. Actually, Lightweight Directory Access Protocol (a directory service) is consisted on the client-server representation. The main job of that protocol is to the facilitate right of entry to an accessible directory.
In other words, it is quite appropriate for the directory administration. Moreover, it is suitable for browser applications with no primary directory service uphold. LDAP internet protocol is used by the email and some other programs in order to search out for the information on or after a server. An email program can have an individual address book. However, certain questions may come in front of you like using what way; you can be able to find an address of a person who has never emailed you. And in which way a group can maintain one updated centralized phone book for everybody right to use. Due to these reasons major software companies started to hold up LDAP standard and which became a reason of its popularity too.
Anyway, no more than a LDAP-aware client program can pronounce, for searching the entries, to LDAP server. But LDAP protocol is not restricted to just getting in touch with the information but its other utilizations are included the searching encryption certificates and other specific services over the networks etc. In fact, this protocol is right for those similar to directory information that requires fast lookups but not as much of frequent updates.
Application protocol “Lightweight Directory Access Protocol” can be used to access and maintain the dispersed directory related information services within an IP network. Actually, LDAP is accessible as ASN.1 and can be transmitted with the help of BER.
General Idea of Protocol
A client program can initiate an LDAP session after making a connection with an LDAP server which is also known as the DSA (directory system agent) when using the TCP-port 389. In this process, the client will first send a request for the intended procedure of communication to the server, and in reply the server is required to respond. But in certain cases, client is not needed to hang around for such replies and can send its next request without waiting anymore. That means the server can throw responses anyway.
Following are the some possible operations that a client may ask for: StartTLS, Search for directory entries, fetching directory entries, comparison for the sake of testing a named entry that is contained a given attribute entry, adding up a fresh entry, deleting entries, modifying entries, extended operation and unbind or close the connection operation.
Standard LDAP Error Messages
Following error messages can be observed in RFC 4511 (Section: 4.1.9):
Error Name: LDAP_SUCCESS (Number: 0 (x’00)), Error Name: LDAP_OPERATIONS_ERROR (Number: 1 (x’01)), Error Name: LDAP_TIMELIMIT_EXCEEDED (Number: 3 (x’03)), Error Name: LDAP_STRONG_AUTH_NOT_SUPPORTED (Number: 7 (x’07) and Error Name: LDAP_REFERRAL (Number: 10 (x’0A)) | <urn:uuid:04b98694-64cc-4d15-803f-2b16ad643d36> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2012/ldap-lightweight-directory-access-protocol | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00169-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912541 | 693 | 2.625 | 3 |
NIST Research Could Boost Mobile Device SecurityAn electron spinning technique could pave the way for a new generation of wireless device signals that are difficult for enemies to intercept, according to researchers at the National Institutes of Standards and Technology.
Particle physics could be the key to creating a new generation of wireless technology that would be more secure and resistant to interference than current methods, according to the National Institutes of Standards and Technology (NIST).
The research could pave the way for federal agencies like the U.S. military to create wireless devices with signals that would be difficult for enemies to intercept or scramble. If NIST research and analysis is correct, it may be possible to create an oscillator that could leverage the spin of electrons to generate microwaves for use in mobile devices.
The effect of this process could be used to create a cell-phone oscillator that enables the frequency of the devices to be changed very quickly. This would make the signals from the devices very hard for enemies to intercept or jam, making them optimal for use by the military or other defense or intelligence agencies, according to NIST.
Electron spin is a property that also can be applied to electronic circuits. The technique proposed by NIST researchers for cell phones could develop a type of wave called a "soliton," a shape-preserving wave that is already used in a variety of media, including optical fiber communications.
In theory, a soliton would be created in a layer of what NIST describes as a "multilayered magnetic sandwich." One of the sandwich layers must be magnetized perpendicular to the plane of the layers. To generate a soliton, an electric current then must be forced through a small channel in the sandwich.
Once the soliton is generated, the magnetic orientation oscillates at more than a billion times a second, which is the frequency of microwaves, according to NIST.
According to NIST, the oscillator, as predicted by researchers, would maintain a constant frequency even with variations in wave current. The result would be a steady, strong output signal that also would reduce unwanted noise.
While only mathematical research has been done so far to prove the theory, NIST researchers believe they can realize the effect in devices. They are currently seeking experimental evidence to support the theory, according to NIST. | <urn:uuid:d1937267-f56a-4bd4-9335-184cb5f912aa> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/nist-research-could-boost-mobile-device-security/d/d-id/1092607 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00077-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941881 | 470 | 3.078125 | 3 |
Logitech's first ever mouse - the P4 mouse was developed in 1982.
The humble computer mouse celebrates its 40th birthday next week on 9 December. It was first invented and demonstrated by Douglas Engelbart and his group of researchers at Stanford University in 1968.
Now forty years on, Logitech the Swiss firm that was founded in a farmhouse in Apple, has announced that it has sold its one billionth mouse, since it designed its first, the P4 in 1982.
Today Logitech produces 376,000 mice a day, 7.8 million per month and sells the mouse in 100 countries worldwide. | <urn:uuid:d7bcb87f-999e-4dc2-a856-9b9df23d9c18> | CC-MAIN-2017-04 | http://www.computerweekly.com/photostory/2240107249/Photos-Logitech-notches-up-one-billion-mouse-sales/1/Logitech-notches-up-one-billion-mouse-sales | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97712 | 125 | 2.671875 | 3 |
Most people think computers, being electronic devices, don't require any mechanical maintenance, but this is not so. Many computer faults are caused by components overheating due to poor airflow in the case because of a buildup of dirt and dust over time. It's worthwhile cleaning your computer annually or even more often if it is in a particularly dusty environment, on carpet or in a household with pets. This tutorial is designed to help you safely clean the interior of your tower or desktop PC so as to maximize its lifespan. No computer knowledge is assumed other than familiarity with component names. Cleaning the computer is not rocket-science and does not require any special skills or tools but you do need to know how to avoid possible damage to some of the more sensitive parts. That's what we will demonstrate here. Although the same principle applies to laptops and notebook PCs, because of the difficulty dismantling them these instructions do not apply to those types of computer.
back to contents
Inside the computer is completely safe with one exception - the power supply or PSU. The PSU is in its own metal box usually at the top rear of a tower (at the rear of a desktop) and you should NEVER attempt to open this box or stick anything metallic into it. There may be an on/off switch at the back of the PSU and there may be a (red) voltage selection switch. Do NOT change the voltage selection switch. Older computers have power at the on/off switch at the front of the case, identified by a thick electrical cable linking the switch to the PSU. Do not attempt to disconnect this cable from the switch. back to contents
The greatest danger inside the tower is of you "electrocuting" the computer through discharge of static electricity that builds up on your body or clothing. Static is especially a problem during dry weather and if you have synthetic carpets or clothing. For example a synthetic pullover (sweater) would be a bad choice of garment for this job, a short sleeved cotton shirt would be a much better choice. The best way to combat static while cleaning your computer is to wear a static strap attached to the chassis and worn on your wrist during the whole process. Disposable static straps are available for a few dollars, professional versions may cost $30-40. Alternatively if you can maintain good contact between yourself and the metal chassis for most of the cleaning process and try not to move around too much then that will be adequate without a strap.
Computers make pretty good dust collectors and if yours is normally placed on or near the floor (especially carpeted floor) or if you have pets, are a smoker, or the computer is situated in a high pollution area there could be a lot of dirt trapped in the system. When you blow this out with the compressed air it will be spread through the room. You should work with good ventilation and if you suffer from allergies you should consider wearing a dust mask. back to contents
Shutdown the computer and disconnect all the cables plugged into it (you may want to mark the cables and the ports they came from with coloured stickers to help you when putting your computer back together again). You may need the flat-bladed screwdriver to undo some of the connector screws. Put newspaper down on your work surface so it doesn't get scratched. Locate your worksurface near a power outlet (power point) and plug in the computer power cord (you don't need to switch it on). Put the computer on your work surface and connect the power cord to the computer but do not turn it on. Set out your tools and materials so you do not need to move around much to reach them during cleaning. Starting about two inches (50mm) from the blunt end of the pencil fasten insulating tape down the length of the pencil to the blunt end and cut the tape 2"(50mm) beyond the end of the pencil. Smooth the tape around the pencil then fold the excess length over the blunt end and up the other side. Press the tape down so it is firmly stuck to the length of the pencil. back to contents
Opening the Case
The standard tower case usually has either a single metal cover covering the top and both sides, held in place by three or four screws or has removable side panels each held in place by two screws.
Use the Philips screwdriver to remove the three or four screws holding on the cover(s) and put them aside where they will not be lost. Remove the cover(s) and put them to one side but within reach. If you are using a static strap put it on your wrist and attach it to a metal part of the chasis, if you do not have a static strap touch the metal of the chasis with both hands. Then remove the power cord from the back of the computer. back to contents
Floppy drives can collect a lot of dust which could prevent them from working properly. Push the nozzle of the compressed-air can a little way into the drive opening so that the flap is held open, or use Cotton-tips to hold the flap open wide, then use the compressed air to blow out the dust. There are special floppy cleaning disks available which are used to clean the floppy drive read/write heads but these are often more expensive than replacing the drive and are only needed if the drive is old or gets very heavy usage.
The CDROM drives or DVD drives are unlikely to be clogged by dust but they may collect dirt on the optical lens which can cause errors. Use the CD lens cleaning disk following the manufacturer's instructions to clean the lenses on these drives - this has to be done when the PC is operating.
Hard Drives are sealed units and require no cleaning, but to maximise the air-flow around them use the compressed air to blow away any dust from the drive's upper surfaces. back to contents
Connect your PC power cable again and switch on the PC, while it is open, for just long enough to see that all the fans you identified above are spinning. Fans which do not spin turn into miniature heaters which makes the situation worse than without a fan. If you find a fan which is not working then, after turning off the PC, note what kind of fan it is, where it is and, if possible, unplug it. You can probably order a replacement online or they may have stock in your local computer store. If the CPU fan is not working then you should not run the computer for more than a few minutes until it is replaced. If the PC has started to boot while you were inspecting the fans and is reluctant to turn off, just hold the power button in for about 5 seconds and the PC will switch off.
back to contents
Make sure nothing has been left inside the case and nothing is likely to get caught in the fans. Any cables that were moved to get access to other items should be put back in place. Inspect the cables going to the optical drives, floppy drive and hard drive(s) to check none have been dislodged. Put the cover(s) back on the system and do up the screws to hold them in place. Unplug the power cable and return your PC to its normal location. Connect up all the cables that were originally present (following the colour code if ygu used it) and reconnect the power cable. Plug into the power outlet and switch on. Make sure your monitor is switched on and check the computer boots up normally. Now you can use the CD lens cleaner if required. back to contents
We hope this tutorial has shown you that, with a little knowledge and a few basic tools, cleaning the inside of your computer is a simple, hazard free process. Following the above steps should have enabled you to successfully clean your computer so that it can continue to run as efficiently as it was originally designed to do.
This tutorial is intended to explain what RAM is and give some background on different memory technologies in order to help you identify the RAM in your PC. It will also discuss RAM speed and timing parameters to help you understand the specifications often quoted on vendors' websites. Its final aim is to assist you in upgrading your system by suggesting some tools and strategies to help you ...
In order to use a hard drive, or a portion of a hard drive, in Windows you need to first partition it and then format it. This process will then assign a drive letter to the partition allowing you to access it in order to use it to store and retrieve data.
I am sure many of you have been told in the past to defrag your hard drives when you have noticed a slow down on your computer. You may have followed the advice and defragged your hard drive, and actually noticed a difference. Have you ever wondered why defragging helps though? This tutorial will discuss what Disk Fragmentation is and how you can optimize your hard drive's partitions by ...
Windows Vista Business, Ultimate, and Enterprise come with a more advanced backup and restore utility called Windows Complete PC Backup and Restore. This program allows you to create an entire backup of your computer that can be used to restore your computer in the case of system-wide failure. Unlike the standard backup and restore feature that comes with all the versions of Windows Vista, Windows ...
Almost everyone uses a computer daily, but many don't know how a computer works or all the different individual pieces that make it up. In fact, many people erroneously look at a computer and call it a CPU or a hard drive, when in fact these are just two parts of a computer. When these individual components are connected together they create a complete and working device with an all ... | <urn:uuid:6687739c-6178-4d92-87a2-8a48f60a7784> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/cleaning-the-inside-of-your-pc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952656 | 1,955 | 2.671875 | 3 |
Network security is a broad topic that can be addressed at the data link, or media, level (where packet snooping and encryption problems can occur), at the network, or protocol, layer (the point at which Internet Protocol (IP) packets and routing updates are controlled), and at the application layer (where, for example, host-level bugs become issues).
As more users access the Internet and as companies expand their networks, the challenge to provide security for internal networks becomes increasingly difficult. Companies must determine which areas of their internal networks they must protect, learn how to restrict user access to these areas, and determine which types of network services they should filter to prevent potential security breaches.
Cisco Systems provides several network, or protocol, layer features to increase security on IP networks. These features include controls to restrict access to routers and communication servers by way of console port, Telnet, Simple Network Management Protocol (SNMP), Terminal Access Controller Access Control System (TACACS), vendor token cards, and access lists. Firewall architecture setup is also discussed.
When most people talk about security, they mean ensuring that users can only perform tasks they are authorized to do, can only obtain information they are authorized to have, and cannot cause damage to the data, applications, or operating environment of a system.
The word security connotes protection against malicious attack by outsiders. Security also involves controlling the effects of errors and equipment failures. Anything that can protect against a deliberate, intelligent, calculated attack will probably prevent random misfortune as well.
Security measures keep people honest in the same way that locks do. This case study provides specific actions you can take to improve the security of your network. Before going into specifics, however, it will help if you understand the following basic concepts that are essential to any security system:
It is important to control access to your Cisco routers. You can control access to the router using the following methods:
You can secure the first three of these methods by employing features within the router software. For each method, you can permit nonprivileged access and privileged access for a user (or group of users). Nonprivileged access allows users to monitor the router, but not to configure the router. Privileged access allows the user to fully configure the router.
For console port and Telnet access, you can set up two types of passwords. The first type of password, the login password, allows the user nonprivileged access to the router. After accessing the router, the user can enter privileged mode by entering the enable command and the proper password. Privileged mode provides the user with full configuration capabilities.
SNMP access allows you to set up different SNMP community strings for both nonprivileged and privileged access. Nonprivileged access allows users on a host to send the router SNMP get-request and SNMP get-next-request messages. These messages are used for gathering statistics from the router. Privileged access allows users on a host to send the router SNMP set-request messages in order to make changes to the router's configurations and operational state.
A console is a terminal attached directly to the router via the console port. Security is applied to the console by asking users to authenticate themselves via passwords. By default, there are no passwords associated with console access.
You configure a password for nonprivileged mode by entering the following commands in the router's configuration file. Passwords are case-sensitive. In this example, the password is "1forAll."
line console 0 login password 1forAll
When you log in to the router, the router login prompt is as follows:
User Access Verification Password:
You must enter the password "1forAll" to gain nonprivileged access to the router. The router response is as follows:
Nonprivileged mode is signified on the router by the
>prompt. At this point, you can enter a variety of commands to view statistics on the router, but you cannot change the configuration of the router. Never use "cisco," or other obvious derivatives, such as "pancho," for a Cisco router password. These will be the first passwords intruders will try if they recognize the Cisco login prompt.
Configure a password for privileged mode by entering the following commands in the router's configuration file. In this example, the password is "san-fran."
To access privileged mode, enter the following command:
router> enable Password:
Enter the password "san-fran" to gain privileged access to the router. The router responds as follows:
Privileged mode is signified by the
# prompt. In privileged mode, you can enter all commands to view statistics and configure the router.
Setting the login and enable passwords may not provide enough security in some cases. The timeout for an unattended console (by default 10 minutes) provides an additional security measure. If the console is left unattended in privileged mode, any user can modify the router's configuration. You can change the login timeout via the command exec-timeout mm ss where mm is minutes and ss is seconds.
line console 0 exec-timeout 1 30
All passwords on the router are visible via the write terminal and show configuration privileged mode commands. If you have access to privileged mode on the router, you can view all passwords in cleartext by default.
There is a way to hide cleartext passwords. The command service password-encryption stores passwords in an encrypted manner so that anyone performing a write terminal and show configuration will not be able to determine the cleartext password. However, if you forget the password, regaining access to the router requires you to have physical access to the router.
You can access both nonprivileged and privileged mode on the router via Telnet. As with the console port, Telnet security is provided when users are prompted by the router to authenticate themselves via passwords. In fact, many of the same concepts described in the "Console Access" section earlier in this chapter apply to Telnet access. You must enter a password to go from nonprivileged mode to privileged mode, and you can encrypt passwords and specify timeouts for each Telnet session.
Each Telnet port on the router is known as a virtual terminal. There are a maximum of five virtual terminal (VTY) ports on the router, allowing five concurrent Telnet sessions. (The communication server provides more VTY ports.) On the router, the virtual terminal ports are numbered from 0 through 4. You can set up nonprivileged passwords for Telnet access via the virtual terminal ports with the following configuration commands. In this example, virtual terminal ports 0 through 4 use the password "marin":
line vty 0 4 login password marin
When a user telnets to a router IP address, the router provides a prompt similar to the following:
% telnet router Trying ... Connected to router. Escape character is '^]'. User Access Verification Password:
If the user enters the correct nonprivileged password, the following prompt appears:
The user now has nonprivileged access to the router and can enter privileged mode by entering the enable command as described in the "Privileged Mode Password" section earlier in this chapter.
If you want to allow only certain IP addresses to use Telnet to access the router, you must use the access-class command. The command access-class nn in defines an access list (from 1 through 99) that allows access to the virtual terminal lines on the router. The following configuration commands allow incoming Telnet access to the router only from hosts on network 126.96.36.199:
access-list 12 permit 188.8.131.52 0.0.0.255 line vty 0 4 access-class 12 in
It is possible to access Cisco products via Telnet to specified TCP ports. The type of Telnet access varies, depending upon the following Cisco software releases:
For Software Release 9.1 (11.4) and earlier and Software Release 9.21 (3.1) and earlier, it is possible, by default, to establish TCP connections to Cisco products via the TCP ports listed in Table 3-1.
|TCP Port Number||Access Method|
Telnet (to virtual terminal VTY ports in rotary fashion)
SNMP over TCP
2001 through 2999
Telnet to auxiliary (AUX) port, terminal (TTY) ports, and virtual terminal (VTY) ports
3001 through 3999
Telnet to rotary ports (access via these ports is only possible if the rotaries have been explicitly configured first with the rotary command)
4001 through 4999
Telnet (stream mode) mirror of 2000 range
5001 through 5999
Telnet (stream mode) mirror of 3000 range (access via these ports is possible only if the rotaries have been explicitly configured first)
6001 through 6999
Telnet (binary mode) mirror of 2000 range
7001 through 7999
Telnet (binary mode) mirror of 3000 range (access via these ports is possible only if the rotaries have been explicitly configured first)
8001 through 8999
Xremote (communication servers only)
9001 through 9999
Reverse Xremote (communication servers only)
10001 through 19999
Reverse Xremote rotary (communication servers only; access via these ports is possible only if the ports have been explicitly configured first)
|Caution Because Cisco routers have no TTY lines, configuring access (on communication servers) to terminal ports 2002, 2003, 2004, and greater could potentially provide access (on routers) to virtual terminal lines 2002, 2003, 2004, and greater. To provide access only to TTY ports, you can create access lists to prevent access to VTYs.|
When configuring rotary groups, keep in mind that access through any available port in the rotary group is possible (unless access lists are defined). Cisco recommends that if you are using firewalls that allow in-bound TCP connection to high-number ports, remember to apply appropriate in-bound access lists to Cisco products.
The following is an example illustrating an access list denying all in-bound Telnet access to the auxiliary port and allowing Telnet access to the router only from IP address 184.108.40.206:
access-class 51 deny 0.0.0.0 255.255.255.255 access-class 52 permit 220.127.116.11 line aux 0 access-class 51 in line vty 0 4 access-class 52 in
To disable connections to the echo and discard ports, you must disable these services completely with the no service tcp-small-servers command.
|Caution If the ip alias command is enabled on Cisco products, TCP connections to any destination port are considered valid connections. You may want to disable the ip alias command.|
You might want to create access lists to prevent access to Cisco products via these TCP ports. For information on how to create access lists for routers, see the "Configuring the Firewall Router" section later in this chapter. For information on how to create access lists for communication servers, see the "Configuring the Firewall Communication Server" section later in this chapter.
With Software Release 9.1 (11.5), 9.21 (3.2), and any version of Software Release 10, the following enhancements have been implemented:
For later releases, a Cisco router accepts TCP connections on the ports listed in Table 3-2 by default.
|TCP Port Number||Access Method|
SNMP over TCP
Auxiliary (AUX) port
Auxiliary (AUX) port (stream)
Auxiliary (AUX) port (binary)
Access via port 23 can be restricted by creating an access list and assigning it to virtual terminal lines. Access via port 79 can be disabled with the no service finger command. Access via port 1993 can be controlled with SNMP access lists. Access via ports 2001, 4001, and 6001 can be controlled with an access list placed on the auxiliary port.
Nonprivileged and privileged mode passwords are global and apply to every user accessing the router from either the console port or from a Telnet session. As an alternative, the Terminal Access Controller Access Control System (TACACS) provides a way to validate every user on an individual basis before they can gain access to the router or communication server. TACACS was derived from the United States Department of Defense and is described in Request For Comments (RFC) 1492. TACACS is used by Cisco to allow finer control over who can access the router in nonprivileged and privileged mode.
With TACACS enabled, the router prompts the user for a username and a password. Then, the router queries a TACACS server to determine whether the user provided the correct password. A TACACS server typically runs on a UNIX workstation. Public domain TACACS servers can be obtained via anonymous ftp to ftp.cisco.com in the /pub directory. Use the /pub/README file to find the filename. A fully supported TACACS server is bundled with CiscoWorks Version 3.
The configuration command tacacs-server host specifies the UNIX host running a TACACS server that will validate requests sent by the router. You can enter the tacacs-server host command several times to specify multiple TACACS server hosts for a router.
If all servers are unavailable, you may be locked out of the router. In that event, the configuration command tacacs-server last-resort [password | succeed] allows you to determine whether to allow a user to log in to the router with no password (succeed keyword) or to force the user to supply the standard login password (password keyword).
The following commands specify a TACACS server and allow a login to succeed if the server is down or unreachable:
tacacs-server host 18.104.22.168 tacacs-server last-resort succeed
To force users who access the router via Telnet to authenticate themselves using TACACS, enter the following configuration commands:
line vty 0 4 login tacacs
This method of password checking can also be applied to the privileged mode password with the enable use-tacacs command. If all servers are unavailable, you may be locked out of the router. In that event, the configuration command enable last-resort [succeed | password] allows you to determine whether to allow a user to log in to the router with no password (succeed keyword) or to force the user to supply the enable password (password keyword). There are significant risks to using the succeed keyword. If you use the enable use-tacacs command, you must also specify the tacacs-server authenticate enable command.
The tacacs-server extended command enables a Cisco device to run in extended TACACS mode. The UNIX system must be running the extended TACACS daemon, which can be obtained via anonymous ftp to ftp.cisco.com. The filename is xtacacsd.shar. This daemon allows communication servers and other equipment to talk to the UNIX system and update an audit trail with information on port usage, accounting data, or any other information the device can send.
The command username <user> password [0 | 7] <password> allows you to store and maintain a list of users and their passwords on a Cisco device instead of on a TACACS server. The number 0 stores the password in cleartext in the configuration file. The number 7 stores the password in an encrypted format. If you do not have a TACACS server and still want to authenticate users on an individual basis, you can set up users with the following configuration commands:
username steve password 7 steve-pass username allan password 7 allan-pass
The two users, Steve and Allan, will be authenticated via passwords that are stored in encrypted format.
Using TACACS service on routers and communications servers, support for physical card key devices, or token cards, can also be added. The TACACS server code can be modified to provide support for this without requiring changes in the setup and configuration of the routers and communication servers. This modified code is not directly available from Cisco.
The token card system relies on a physical card that must be in your possession in order to provide authentication. By using the appropriate hooks in the TACACS server code, third-party companies can offer these enhanced TACACS servers to customers. One such product is the Enigma Logic SafeWord security software system. Other card-key systems, such as Security Dynamics SmartCard, can be added to TACACS as well.
SNMP is another method you can use to access your routers. With SNMP, you can gather statistics or configure the router. Gather statistics with get-request and get-next-request messages, and configure routers with set-request messages. Each of these SNMP messages has a community string that is a cleartext password sent in every packet between a management station and the router (which contains an SNMP agent). The SNMP community string is used to authenticate messages sent between the manager and agent. Only when the manager sends a message with the correct community string will the agent respond.
The SNMP agent on the router allows you to configure different community strings for nonprivileged and privileged access. You configure community strings on the router via the configuration command snmp-server community <string> [RO | RW] [access-list]. The following sections explore the various ways to use this command.
Unfortunately, SNMP community strings are sent on the network in cleartext ASCII. Thus, anyone who has the ability to capture a packet on the network can discover the community string. This may allow unauthorized users to query or modify routers via SNMP. For this reason, using the no snmp-server trap-authentication command may prevent intruders from using trap messages (sent between SNMP managers and agents) to discover community strings.
The Internet community, recognizing this problem, greatly enhanced the security of SNMP version 2 (SNMPv2) as described in RFC 1446. SNMPv2 uses an algorithm called MD5 to authenticate communications between an SNMP server and agent. MD5 verifies the integrity of the communications, authenticates the origin, and checks for timeliness. Further, SNMPv2 can use the data encryption standard (DES) for encrypting information.
Use the RO keyword of the snmp-server community command to provide nonprivileged access to your routers via SNMP. The following configuration command sets the agent in the router to allow only SNMP get-request and get-next-request messages that are sent with the community string "public":
snmp-server community public RO 1
You can also specify a list of IP addresses that are allowed to send messages to the router using the access-list option with the snmp-server community command. In the following configuration example, only hosts 22.214.171.124 and 126.96.36.199 are allowed nonprivileged mode SNMP access to the router:
access-list 1 permit 188.8.131.52 access-list 1 permit 184.108.40.206 snmp-server community public RO 1
Use the RW keyword of the snmp-server community command to provide privileged access to your routers via SNMP. The following configuration command sets the agent in the router to allow only SNMP set-request messages sent with the community string "private":
snmp-server community private RW 1
You can also specify a list of IP addresses that are allowed to send messages to the router by using the access-list option of the snmp-server community command. In the following configuration example, only hosts 220.127.116.11 and 18.104.22.168 are allowed privileged mode SNMP access to the router:
access-list 1 permit 22.214.171.124 access-list 1 permit 126.96.36.199 snmp-server community private RW 1
If a router regularly downloads configuration files from a Trivial File Transfer Protocol (TFTP) or Maintenance Operations Protocol (MOP) server, anyone who can access the server can modify the router configuration files stored on the server.
Communication servers can be configured to accept incoming local area transport (LAT) connections. Protocol translators and their translating router brethren can accept X.29 connections. These different types of access should be considered when creating a firewall architecture.
A firewall architecture is a structure that exists between you and the outside world to protect you from intruders. In most circumstances, intruders are represented by the global Internet and the thousands of remote networks it interconnects. Typically, a network firewall consists of several different machines as shown in Figure 3-1.
In this architecture, the router that is connected to the Internet (exterior router) forces all incoming traffic to go to the application gateway. The router that is connected to the internal network (interior router) accepts packets only from the application gateway.
The application gateway institutes per-application and per-user policies. In effect, the gateway controls the delivery of network-based services both into and from the internal network. For example, only certain users might be allowed to communicate with the Internet, or only certain applications are permitted to establish connections between an interior and exterior host.
The route and packet filters should be set up to reflect the same policies. If the only application that is permitted is mail, only mail packets should be allowed through the router. This protects the application gateway and avoids overwhelming it with packets that it would otherwise discard.
This section uses the scenario illustrated in Figure 3-2 to describe the use of access lists to restrict traffic to and from a firewall router and a firewall communication server.
In this case study, the firewall router allows incoming new connections to one or more communication servers or hosts. Having a designated router act as a firewall is desirable because it clearly identifies the router's purpose as the external gateway and avoids encumbering other routers with this task. In the event that the internal network needs to isolate itself, the firewall router provides the point of isolation so that the rest of the internal network structure is not affected.
Connections to the hosts are restricted to incoming file transfer protocol (FTP) requests and email services as described in the "Configuring the Firewall Router" section later in this chapter. The incoming Telnet, or modem, connections to the communication server are screened by the communication server running TACACS username authentication, as described in the "Configuring the Firewall Communication Server" section later in this chapter.
In the firewall router configuration that follows, subnet 13 of the Class B network is the firewall subnet, whereas subnet 14 provides the connection to the worldwide Internet via a service provider:
interface ethernet 0 ip address B.B.13.1 255.255.255.0 interface serial 0 ip address B.B.14.1 255.255.255.0 router igrp network B.B.0.0
This simple configuration provides no security and allows all traffic from the outside world onto all parts of the network. To provide security on the firewall router, use access lists and access groups as described in the next section.
Access lists define the actual traffic that will be permitted or denied, whereas an access group applies an access list definition to an interface. Access lists can be used to deny connections that are known to be a security risk and then permit all other connections, or to permit those connections that are considered acceptable and deny all the rest. For firewall implementation, the latter is the more secure method.
In this case study, incoming email and news are permitted for a few hosts, but FTP, Telnet, and rlogin services are permitted only to hosts on the firewall subnet. IP extended access lists (range 100 to 199) and transmission control protocol (TCP) or user datagram protocol (UDP) port numbers are used to filter traffic. When a connection is to be established for email, Telnet, FTP, and so forth, the connection will attempt to open a service on a specified port number. You can, therefore, filter out selected types of connections by denying packets that are attempting to use that service. For a list of well-known services and ports, see the "Filtering TCP and UDP Services" section later in this chapter.
An access list is invoked after a routing decision has been made but before the packet is sent out on an interface. The best place to define an access list is on a preferred host using your favorite text editor. You can create a file that contains the access-list commands, place the file (marked readable) in the default TFTP directory, and then network load the file onto the router.
The network server storing the file must be running a TFTP daemon and have TCP network access to the firewall router. Before network loading the access control definition, any previous definition of this access list is removed by using the following command:
no access-list 101
The access-list command can now be used to permit any packets returning to machines from already established connections. With the established keyword, a match occurs if the TCP datagram has the acknowledgment (ACK) or reset (RST) bits set.
access-list 101 permit tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 established
If any firewall routers share a common network with an outside provider, you may want to allow access from those hosts to your network. In this case study, the outside provider has a serial port that uses the firewall router Class B address (B.B.14.2) as a source address as follows:
access-list 101 permit ip B.B.14.2 0.0.0.0 0.0.0.0 255.255.255.255
The following example illustrates how to deny traffic from a user attempting to spoof any of your internal addresses from the outside world (without using 9.21 input access lists):
access-list 101 deny ip B.B.0.0 0.0.255.255 0.0.0.0 255.255.255.255
The following commands allow domain name system (DNS) and network time protocol (NTP) requests and replies:
access-list 101 permit udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 53 access-list 101 permit udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 123
The following command denies the network file server (NFS) user datagram protocol (UDP) port:
access-list 101 deny udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 2049
The following commands deny OpenWindows on ports 2001 and 2002 and deny X11 on ports 6001 and 6002. This protects the first two screens on any host. If you have any machine that uses more than the first two screens, be sure to block the appropriate ports.
access-list 101 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6001 access-list 101 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6002 access-list 101 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 2001 access-list 101 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 2002
The following command permits Telnet access to the communication server (B.B.13.2):
access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.13.2 0.0.0.0 eq 23
The following commands permit FTP access to the host on subnet 13:
access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.13.100 0.0.0.0 eq 21 access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.13.100 0.0.0.0 eq 20
For the following examples, network B.B.1.0 is on the internal network. Figure 3-2The following commands permit TCP and UDP connections for port numbers greater than 1023 to a very limited set of hosts. Make sure no communication servers or protocol translators are in this list.
access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.13.100 0.0.0.0 gt 1023 access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.1.100 0.0.0.0 gt 1023 access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.1.101 0.0.0.0 gt 1023 access-list 101 permit udp 0.0.0.0 255.255.255.255 B.B.13.100 0.0.0.0 gt 1023 access-list 101 permit udp 0.0.0.0 255.255.255.255 B.B.1.100 0.0.0.0 gt 1023 access-list 101 permit udp 0.0.0.0 255.255.255.255 B.B.1.101 0.0.0.0 gt 1023
The following commands permit DNS access to the DNS server(s) listed by the Network Information Center (NIC):
access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.13.100 0.0.0.0 eq 53 access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.1.100 0.0.0.0 eq 53
The following commands permit incoming simple mail transfer protocol (SMTP) email to only a few machines:
access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.13.100 0.0.0.0 eq 25 access-list 101 permit tcp 0.0.0.0 255.255.255.255 B.B.1.100 0.0.0.0 eq 25
The following commands allow internal network news transfer protocol (NNTP) servers to receive NNTP connections from a list of authorized peers:
access-list 101 permit tcp 188.8.131.52 0.0.0.1 B.B.1.100 0.0.0.0 eq 119 access-list 101 permit tcp 184.108.40.206 0.0.0.0 B.B.1.100 0.0.0.0 eq 119
The following command permits Internet control message protocol (ICMP) for error message feedback:
access-list 101 permit icmp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
Every access list has an implicit "deny everything else" statement at the end of the list to ensure that attributes that are not expressly permitted are in fact denied.
Many sites today choose to block incoming TCP sessions originated from the outside world while allowing outgoing connections. The trouble with this is that blocking incoming connections kills traditional FTP client programs because these programs use the "PORT" command to tell the server where to connect to send the file. The client opens a "control" connection to the server, but the server then opens a "data" connection to an effectively arbitrarily chosen (> 1023) port number on the client.
Fortunately, there is an alternative to this behavior that allows the client to open the "data" socket and allows you to have the firewall and FTP too. The client sends a PASV command to the server, receives back a port number for the data socket, opens the data socket to the indicated port, and finally sends the transfer.
In order to implement this method, the standard FTP client program must be replaced with a modified one that supports the PASV command. Most recent implementations of the FTP server already support the PASV command. The only trouble with this idea is that it breaks down when the server site has also blocked arbitrary incoming connections.
Source files for a modified FTP program that works through a firewall are now available via anonymous FTP at ftp.cisco.com. The file is /pub/passive-ftp.tar.Z. This is a version of BSD 4.3 FTP with the PASV patches. It works through a firewall router that allows only incoming established connections.
|Caution Care should be taken in providing anonymous FTP service on the host system. Anonymous FTP service allows anyone to access the hosts, without requiring an account on the host system. Many implementations of the FTP server have severe bugs in this area. Also, take care in the implementation and setup of the anonymous FTP service to prevent any obvious access violations. For most sites, anonymous FTP service is disabled.|
After this access list has been loaded onto the router and stored into nonvolatile random-access memory (NVRAM), assign it to the appropriate interface. In this case study, traffic coming from the outside world via serial 0 is filtered before it is placed on subnet 13 (ethernet 0). Therefore, the access-group command, which assigns an access list to filter incoming connections, must be assigned to Ethernet 0 as follows:
interface ethernet 0 ip access-group 101
To control outgoing access to the Internet from the network, define an access list and apply it to the outgoing packets on serial 0 of the firewall router. To do this, returning packets from hosts using Telnet or FTP must be allowed to access the firewall subnetwork B.B.13.0.
Some well-known TCP and UDP port numbers include the services listed in Table 3-3.
|Service||Port Type||Port Number|
File Transfer Protocol (FTP)---Data
Simple Mail Transfer Protocol (SMTP)---Email
Terminal Access Controller Access Control System (TACACS)
Domain Name Server (DNS)
TCP and UDP
Trivial File Transfer Protocol (TFTP)
SUN Remote Procedure Call (RPC)
Network News Transfer Protocol (NNTP)
Network Time Protocol (NTP)
TCP and UDP
Simple Management Network Protocol (SNMP)
Border Gateway Protocol (BGP)
TCP and UDP
TCP and UDP
TCP and UDP
Network File System (NFS)
TCP and UDP
The Computer Emergency Response Team (CERT) recommends filtering the services listed in Table 3-4.
|Service||Port Type||Port Number|
DNS zone transfers
TFTP daemon (tftpd)
link---commonly used by intruders
TCP and UDP
BSD UNIX r commands (rsh, rlogin, and so forth)
512 through 514
line printer daemon (lpd)
UNIX-to-UNIX copy program daemon (uucpd)
TCP and UDP
TCP and UDP
|1Port 111 is only a directory service. If you can guess the ports on which the actual data services are provided, you can access them. Most RPC services do not have fixed port numbers. You should find the ports on which these services can be found and block them. Unfortunately, because ports can be bound anywhere, Cisco recommends blocking all UDP ports except DNS where practical.|
In Software Release 9.21, Cisco introduces the ability to assign input access lists to an interface. This allows a network administrator to filter packets before they enter the router, instead of as they leave the router. In most cases, input access lists and output access lists accomplish the same functionality; however, input access lists are more intuitive to some people and can be used to prevent some types of IP address "spoofing" where output access lists will not provide sufficient security.
Figure 3-3 illustrates a host that is "spoofing," or illegally claiming to be an address that it is not. Someone in the outside world is claiming to originate traffic from network 220.127.116.11. Although the address is spoofed, the router interface to the outside world assumes that the packet is coming from 18.104.22.168. If the input access list on the router allows traffic coming from 22.214.171.124, it will accept the illegal packet. To avoid this spoofing situation, an input access list should be applied to the router interface to the outside world. This access list would not allow any packets with addresses that are from the internal networks of which the router is aware (17.0 and 18.0).
If you have several internal networks connected to the firewall router and the router is using output filters, traffic between internal networks will see a reduction in performance created by the access list filters. If input filters are used only on the interface going from the router to the outside world, internal networks will not see any reduction in performance.
In this case study, the firewall communication server has a single inbound modem on line 2:
interface Ethernet0 ip address B.B.13.2 255.255.255.0 ! access-list 10 deny B.B.14.0 0.0.0.255 access-list 10 permit B.B.0.0 0.0.255.255 ! access-list 11 deny B.B.13.2 0.0.0.0 access-list 11 permit B.B.0.0 0.0.255.255 ! line 2 login tacacs location FireWallCS#2 ! access-class 10 in access-class 11 out ! modem answer-timeout 60 modem InOut telnet transparent terminal-type dialup flowcontrol hardware stopbits 1 rxspeed 38400 txspeed 38400 ! tacacs-server host B.B.1.100 tacacs-server host B.B.1.101 tacacs-server extended ! line vty 0 15 login tacacs
In this example, the network number is used to permit or deny access; therefore, standard IP access list numbers (range 1 through 99) are used. For incoming connections to modem lines, only packets from hosts on the internal Class B network and packets from those hosts on the firewall subnetwork are permitted:
access-list 10 deny B.B.14.0 0.0.0.255 access-list 10 permit B.B.0.0 0.0.255.255
Outgoing connections are allowed only to internal network hosts and to the communication server. This prevents a modem line in the outside world from calling out on a second modem line:
access-list 11 deny B.B.13.2 0.0.0.0 access-list 11 permit B.B.0.0 0.0.255.255
Apply an access list to an asynchronous line with the access-class command. In this case study, the restrictions from access list 10 are applied to incoming connections on line 2. The restrictions from access list 11 are applied to outgoing connections on line 2.
access-class 10 in access-class 11 out
It is also wise to use the banner exec global configuration command to provide messages and unauthorized use notifications, which will be displayed on all new connections. For example, on the communication server, you can enter the following message:
banner exec ^C If you have problems with the dial-in lines, please send mail to helpdesk@Corporation X.com. If you get the message "% Your account is expiring", please send mail with name and voicemail box to helpdesk@CorporationX.com, and someone will contact you to renew your account. Unauthorized use of these resources is prohibited.
There are a number of nonstandard services available from the Internet that provide value-added services when connecting to the outside world. In the case of a connection to the Internet, these services can be very elaborate and complex. Examples of these services are World Wide Web (WWW), Wide Area Information Service (WAIS), gopher, and Mosaic. Most of these systems are concerned with providing a wealth of information to the user in some organized fashion and allowing structured browsing and searching.
Most of these systems have their own defined protocol. Some, such as Mosaic, use several different protocols to obtain the information in question. Use caution when designing access lists applicable to each of these services. In many cases, the access lists will become interrelated as these services become interrelated.
Although this case study illustrates how to use Cisco network layer features to increase network security on IP networks, in order to have comprehensive security, you must address all systems and layers.
This section contains a list of publications that provide internetwork security information.
Cheswick, B. and Bellovin, S. Firewalls and Internet Security. Addison-Wesley.
Comer, D.E and Stevens, D.L., Internetworking with TCP/IP. Volumes I-III. Englewood Cliffs, New Jersey: Prentice Hall; 1991-1993.
Curry, D. UNIX System Security---A Guide for Users and System Administrators.
Garfinkel and Spafford. Practical UNIX Security. O'Reilly & Associates.
Quarterman, J. and Carl-Mitchell, S. The Internet Connection, Reading, Massachusetts: Addison-Wesley Publishing Company; 1994.
Ranum, M. J. Thinking about Firewalls, Trusted Information Systems, Inc.
Stoll, C. The Cuckoo's Egg. Doubleday.
Treese, G. W. and Wolman, A. X through the Firewall and Other Application Relays.
RFC 1118. "The Hitchhiker's Guide to the Internet." September 1989.
RFC 1175. "A Bibliography of Internetworking Information." August 1990.
RFC1244. "Site Security Handbook." July 1991.
RFC 1340. "Assigned Numbers." July 1992.
RFC 1446. "Security Protocols for SNMPv2." April 1993.
RFC 1463. "FYI on Introducing the Internet---A Short Bibliography of Introductory Internetworking Readings for the Network Novice." May 1993.
RFC 1492. "An Access Control Protocol, Sometimes Called TACACS." July 1993.
Documents at gopher.nist.gov.
The "Computer Underground Digest" in the /pub/cud directory at ftp.eff.org.
Documents in the /dist/internet_security directory at research.att.com.
Posted: Thu Oct 28 16:48:30 PDT 1999
Copyright 1989-1999©Cisco Systems Inc. | <urn:uuid:7b0ac2b5-bccd-4e35-ba73-08c27b3a67d7> | CC-MAIN-2017-04 | http://www.cisco.com/cpress/cc/td/cpress/ccie/ndcs798/nd2016.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00041-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.851273 | 9,095 | 3.140625 | 3 |
Sharpen your pencils: It's time for Web Application Security 101.
A traditional firewall is commonly employed to restrict Web site access to Ports 80 and 443, used for HTTP and Secure Sockets Layer communications, respectively. However, such a device does very little to deter attacks that come over these connections. URL query string manipulations including SQL injection, modification of cookie values, tampering of form field data, malformed requests and a variety of other nasty tricks are often given free passage on allowed, legitimate traffic.
A Web application firewall, such as those reviewed in this issue (see review) might help address security holes in Web servers and Web applications, but there is certainly a great deal that network security professional could and should do before and after employing such measures.
So sharpen your pencils: It's time for Web Application Security 101.
Tip 1: Don't trust, authenticate.
If you are in charge of designing or administrating a public Web site, you need to embrace the fact that you cannot trust your users. If you are particularly paranoid, you might extend this concept to an extranet or even an internal site. But the point is that unless the users authenticate themselves with the site somehow, you have no idea who they are and what their intentions might be.
Not to suggest that a hacker hides behind every IP address accessing your site, but can you easily separate legitimate traffic from non-legitimate traffic? Are those excessive 404 errors in your server log simple mistakes or someone probing your defenses? You should always err on the side of caution, and the tips that follow embrace this spirit.
Tip 2: Keep a low profile.
The first step for a potential intruder is to gather information about your Web server and any hosted application. Don't expose anything your end users don't need to know and consider the following simple anti-reconnaissance tactics:
• Remove personal information from your WHOIS records that might be useful in a social engineering attack and employ a role account instead.
• Make sure your machine is not named something that indicates its operating system or version.
• Remove the server header from your Web server's response.
• Remap file extensions of dynamic pages, for example .jsp to .shtm.
• Add custom error pages that suppress useful information about the server or associated development platform.
• Do not expose sensitive file or directory names in robots.txt file.
You can go deeper with anti-reconnaissance by tweaking your network firewall and server connection settings to fool tools such as NMAP (www.insecure.org) that will try to identify your server via its TCP stack responses. At the HTTP level, you might consider changing your Web server's responses to alter header order, mask session cookie names, and remove other items in the response. A tool such as ServerMask for Internet Information Systems can help you perform many of these masking tricks.
Obviously, the competent Web administrator does not solely embrace security by obscurity. True protection is required. However, inviting attack to test your site's "armor" is foolish; the aim is only to keep potential attackers from easily sizing up defenses and attacking more successfully by giving the site and server the equivalent of camouflage.
Tip 3: Use misdirection and misinformation beyond reducing information exposure.
You should consider using misinformation and misdirection in what you do reveal. Looking like another type of server, pretending to use a different technology or giving contradictory information can trip an attacker into making the wrong types of attacks and clearly signaling his intention. For example, you might add fake "off- limits" directories or file names in a site's robots.txt, comments or error pages so that users or tools with bad intent reveal themselves for monitoring or blocking. Other examples of misdirection include:
• Randomized network and HTTP server signatures found in the response packets.
• False administrator names in page comments or network records that are known internally when used to be indicative of a social engineering attack in progress.
• Decoy servers or honeypots (www.honeypots.org) to confuse intruders.
• Send varying error responses or make your site "play dead" by sending obvious intruder "500 Server Error" responses for all their requests.
There is a great deal of room to expand on the idea of misdirection. Creating a forest of decoy devices and sites that rotate their signatures could make finding your site a great pain for a potential intruder. A service such as Netbait suggest such thinking is not so wild.
Yet be careful - camouflage will not protect problems, and misdirection might anger an enemy inviting attack. In many cases the tactics will be useless against the "stupid" attack from a robot, worm or script kiddies following a canned script. These folks don't care what they are hitting and hit Apache boxes with IIS attacks and vice versa, so make sure you can handle what they throw at you.
Tip 4: Forcefully deny bad requests.
A user's request just might not be safe to execute. Simple attacks focus on trying to modify the HTTP request to cause something bad to happen. You can use an application firewall or server filter to eliminate bad HTTP requests such as very long URIs, funny characters, unsupported methods and headers, and any other obviously malformed requests.
You should be aware of the types of data and programs in your site. If you know what is allowed, anything else should be disallowed - the so-called positive model. For example, requests for Active Server Pages files in a site built in PHP are problematic. Make sure to purge all unused files, particularly backup files (.bak). Turn off your server's directory browsing option. And remove any unused extensions from your server's configuration.
Tip 5: Sanitize user requests and inputs
Hidden form fields and cookies also serve as inputs that you should be careful to monitor. Avoid putting sensitive data in, and consider adding a checksum to verify they have not been tampered with. Be particularly careful in the case of session cookies. If the form is too predictable, your application might be open to a cookie hijacking attack.
When application flow is important, make sure you check referring URLs and deny any page requests out of sequence. To signal problems, you can add extra, encrypted cookie information to indicate entry point and last page visited.
Tip 6: Monitor and test continuously.
If you are examining logs only when things go wrong, you aren't doing enough. Many times it's already too late and logs provide only forensics to help you try to reconstruct the crime or help patch the hole. Fortunately, spotting a problem more quickly isn't hard because application attacks are clearly recorded in your server access log, and unless the compromise gives the attacker server-level access, they won't be able to cover their tracks easily. However, as a precaution, you might consider multiple logging hosts and using on and off network monitoring of your site and applications.
While application attacks are often more difficult than network intrusions for an intruder to cover up, sorting the bad requests from the good can be hard. To narrow a log down, try filtering on unknown user agents, unresolvable IP addresses and very fast requests from one source. Pay attention to your server's error log and look at 404 requests: They are often not simple mistakes but failed exploits or probes.
Make sure you test your site using the various vulnerability tools such as NStealth (www.nstalker.com) to find and plug obvious holes, but embrace the fact that "zero-day" attacks will continue and an as of yet indefensible attack might occur.
Tip 7: Prepare for the worst.
Despite your best efforts, someone might compromise your Web server or application. Rather than ignoring that possibility, you should come up with a plan to address a variety of compromises, including:
• Server compromise.• Site defacement.• Application-level denial of service (DoS).• Sensitive data exposure.
In the case of server compromise, rolling back to a former state, going off-line and trying to plug holes are really your only choices. Similarly, when faced with site defacement you want to be able to roll back the site quickly or put a standby page in place. Dealing with defacement isn't hard, but how can you detect it rapidly? A blatant home page modification by an intruder is obvious, but without page checksums detecting minor data modifications might be difficult. Imagine the damage done by the alteration of a financial press release on a corporate site?
DoS at the network level is a known attack and can be dealt with by many devices, but application-level DoS is more difficult to deal with. With the potential for a robot attack using apparently legitimate HTTP traffic from open proxies all over the Internet, it might be very difficult to determine the good users from the bad. Work still needs to be done in this area, but actively monitoring site traffic is an important first step.
Sensitive data exposure - such as the revelation of customer data including credit card numbers, for example - can be difficult to catch. Security software and devices such as the Teros offering (see story) can monitor pages for sensitive data patterns and block the data from being revealed. However, active monitoring is really the best bet because what is sensitive might not always be as obvious as a Social Security or credit card number.
Tip 8: Cross the developer-administrator chasm.
The greatest challenge in Web application security is that often the person who has built the application is not in charge of securing the application. Without intimate knowledge of the workings of a Web site, it might be difficult for an administrator to secure it adequately. On the flip side, developers are likely unaware of the types of attacks that occur and, therefore, don't write their code to address them. Getting the two groups together to share knowledge is truly the ultimate weapon against Web application security problems. | <urn:uuid:95d3bbff-746a-4ac2-b40f-f3f0f67023a5> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2333098/lan-wan/quick-tips-for-web-application-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00159-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92553 | 2,029 | 2.6875 | 3 |
An identification, user detection or, simply, web-tracking, all that means a computation and an installation of a special identificator for each browser visiting a certain site. By and large, initially, it was not designed as a ‘ global evil’ and, as everything else has another ‘ side of a coin’, in other words it was made up to provide a benefit, for example, to allow website owners to distinguish real users from bots, or to give them a possibility to save user’s preferences and use them during the further visits. However, at the same time this option catch promo’s fancy. As you know, cookies are the most popular way to detect users. And they have been being used in advertising since 90s.
Many changes have taken place ever since, in the sphere of technologies a huge step forward have been made and today we can use not only cookies but many other ways for tracking users. The most obvious method is to install an indificator similar to a cookie. Another way is to use the information from the user’s computer, that we, actually, can get from the sent requests of HTTP-headers: address, OS type, time and so on. And, finally, we can distinguish a user upon his habits and behaviour ( the way he moves the cursor, favourite site sections and so on).
This approach is quite obvious, all that we need to do is to place some long-lasting identificator on the user’s side, that we can request during the further visits. Modern browsers allow to make it in a transparent manner for the user. First and foremost, we can use good old cookies. Then, it could be specific features of certain plag-ins, that have similar to cookies software, like, Local Shared Objects in Flash or Isolated Storage in Silverlight. Also we can find some storage mechanisms in HTML5, including localStorage, File and IndexedDB API. Besides, we can save unique markers in cache-resources on local machines or in the cache metadata (Last-Modified, ETag). Furthermore, we can detect a user by his fingerprints that we can get from Origin Bound certificates, generated by the browser for SSL-connections, or, by the data contained in SDCH-dictionaries, or by the information in those dictionaries. In short, there are plenty of possibilities.
Unlike other mechanism that we are going to discuss later, the usage of cookies is transparent for the end user. Moreover, it is not necessary to store an identification in a separate cookie to ‘mark’ a user, it could be simply collected from other cookies or be stored in metadata, like Expiration Time. That is why it is quite difficult to understand whether some certain cookie have been used for tracking or not.
Local Shared Objects
To store data on the client’s side by means of Adobe flash we use LSO mechanism. It is cookie’s analogue in HTTP, however, unlike the last ones it can store not only short fragments of text data which, in its turn, complicates the analysis and check-up of such objects. Until the release of the 10.3 version the flash-cookies’ behaviour has been set up separately from the browser settings, likewise, you needed to go to the Flash settings manager, situated on the
macromedia.com site ( by the way, it is still available on the further link (bit.ly/1nieRVb)). Today you can do it right from the control panel. Furthermore, most nowadays browsers provide quite good integration with flash-player, so, during the deleting of cookies and other sites’ datas the LSO will be deleted as well. On the other hand, the interaction is not that good, so the setting up the policy about outside cookies will not always consider the flash’s ones ( here you can find how to turn them off manually (adobe.ly/1svWIot)).
Isolated storage Silverlight
The software platform Silverlight is quite similar to Adobe Flash. So, for example, the mechanism
Isolated Storage is the analogue to
Local Shared Objects in Adobe. However, unlike Adobe the privacy settings in here are not connected to browser that is why even after deleting all the caches and cookies from a browser, all the data saved in
Isolated Storage still will be there. But what is more interesting, the storage is common for all the tabs of a browser ( except those in ‘incognito’ mode), as long as for all the profiles, installed on the one machine. Just like in LSO, from the technical point of view, there is no obstacles to store session identifications. Nonetheless, regarding the fact that you can not influence the mechanism through browser settings, it has not got such an expansion in term of unique identificators storage.
HTML5 and the data storage on the client’s side
HTML5 is a mechanism that allows to store structured information on the client side. Among them we have localStorage, File API and IndexedDB. Despite the differences their purpose is to store random amount of binary data connected to a certain resource. Moreover, unlike HTTP and Flash cookies there is no particular restrictions regarding the size of stored files. In modern browsers the HTML5 storage is situated among other site data. Nevertheless, it is quite difficult to figure out how to control the storage through a browser. Like, for example, to delete the information from the Firefox
localStorage the user have to choose “offline website data” or “site preferences” and set up the time interval on “everything”. Another offbeat feature contained in IE is that the data are existing only while the tabs opened at the moment of their saving are alive. Beside everything we have mentioned above we should say that the restrictions applicable to HTTP cookies does not really work with the mechanisms. For example, you can write and read from
localStorage through cross domain frames even when the side cookies are turned off.
The randomised objects
ETag and Last-Modified
A server should inform somehow a browser that the new version of the document is available in order the randomising works properly. That is why HTTP/1.1 offers two ways to deal with this problem. The first one is based on the date of the last modification, while the other one on the abstract identification known as ETag.
Using ETag, first, a server returns a so called version tag in a header of the reply with the document itself. With further requests to set up URL a client will send through the header If-None-Match this value associated with its local copy to the server. If the version in the header is up-to-date then the server will send the HTTP-code 304 (‘ Not Modified’) and a client will continue to use the randomised version. Otherwise the server will send a new version of the document with a new Etag. This approach are quite similar to the HTTP-cookies – like, the server stores random value on a client to be able to read it later. The other way is to use the Last-Modified header that allow to store at least 32 bits of information in the data string, that further will be sent by a client to the server in the If-Modified-Since header. What is interesting, that most browsers don’t request the correct date format in the date string. The situation here is the same as with the identification through randomised objects, the deleting of cookies and site data does not influenceETag andLast-Modifie
d, you can delete them only by cleaning the caches.
Application Cache allows to set up which part of a site supposed to be stored and be available even if a user is offline. The mechanism is controlled by manifests which set up the regulations of storing and extracting of the cache elements. Just like traditional randomising mechanism the AppCache also allows to store unique information that depends on user as inside the manifest itself so inside resources that exist for an indefinite amount of time ( in contrast to an ordinary cache which resources are deleted after some time). AppCache occupy an intermediate value between the mechanism of data storing in HTML5 and the common browser’s cache. In some browsers it is cleaned due to deleting of cookies and site data, while in the others only after the deleting of browsing history and all the randomised documents.
SDCH – dictionnaires
Other storage mechanisms
Besides mechanisms connected to randomising, JS and other plug-ins usage, the modern browsers also have another particular features, that allows to keep and take out the unique identificators.
- Origin Bound Certificates (aka ChannelID) – are the persistent self-signed certificates that identifies a client to server. A separate certificate is created for each new domain, that is being used for connections initiating in future. Also sites could use single external signal to track users without any actions along with, that a client could notice. The cryptographic hash of a certificate could be used as a unique identification as well, given by a client as a part of legitimate SSL-‘handshake’.
- There are two mechanisms in the TLS as well – session identifiers and session tickets that allow clients to resume link-downs connections without ‘full-handshake’. It is possible to do using randomised data. The two mechanisms allow servers to identify requests sent by clients within quite small amount of time.
- Almost all modern browsers use their own inner cache to accelerate the name resolution process ( moreover, in particular cases it allows to cut the risk of DNS rebinding attacks). Such cache could be easily used to store small amount of information. Like, for example, if you have about 16 available IP addresses, it would be enough to have about 8-9 randomised names, to identify any computer in web. However, such approach are restricted by the size of the inner browser’s DNS-cache and, potentially, could provoke conflicts with name resolution regarding DNS provider.
All the methods that we have considered are supposed the installation a unique identification that would be send to the server during further requests. However, there is another way to track users based on requests or characteristic changes in terms of a client machine. Separately each received characteristic is just a several bits of information, but if we combine some of them, we can identify any computer in web. Beside the fact that such tracking is far more difficult to recognise as long as to prevent, the technique will allow to identify a user that uses different browsers or private mode.
The simplest approach in terms of tracking is to build an identification by combining different available parameters in the browser’s environment, that, actually, does not have any value separately, however, together they create a remarkable features for each machine:
- User-agent. Hand out a browser’s version, an OS version and some installed add-ons. In cases when there is no User-agent or you would like to check its ‘truthfulness’ we can determine the browser’s version by checking certain implemented or changed features between releases.
- The display resolution and the window size of a browser ( including the parameters of the second display in case of multidisplay system).
- The list of installed fonts that have been downloaded, for example, with getComputedStyle API.
- The list of all installed plug-ins, ActiveX-controllers, Browser Helper Objects, including their versions. We can get them using navigator.plugins ( certain plat-ins could be tracked in HTTP-headers).
- The information about the installed extensions and other software. The extensions, like, advertisement blockers, implement some changes in browsable pages, you can determine these extensions and their settings due to these changes.
Web – fingerprints
There is another row of features in the architecture of the local net and the net-protocols’ settings. These features will be common for all browsers, installed on the client’s machine, they can not be hidden even with privacy settings or certain security utilities. Here the list of them:
- The external IP-address. This vector is especially interesting for IPv6, because last octets could be gotten from device’s MAC-address in certain cases and that is why they are stored even during the connection to different networks.
- Port numbers for outgoing TCP/IP-connections ( for most OS they are usually choose sequentially).
- Local IP-address for users on NAT or HTTP-proxy. Along with an external IP allows to identify the most of clients
- The information about proxy servers that a client is using can be found in HTTP headers (
X-Forwarded-For). In combination with the real client address that we can get using several ways by passing proxy, also allows to identify a user.
Behavior analysis and habits
Another way is to check characteristics that are connected not to PC, but, more likely, to a final user, such as local settings and behaviour. This method also allows to identify clients among different browser’s sessions, profiles and in case of private mode. So, we can draw conclusions basing on further parameters, that are always available for explorations:
- The cache data of a client and his browsing history. The cache elements could be found using time attacks, a tracking can find a long-lasting cache elements relative to popular resources, simply by measuring the time of downloading ( and just notice the transition if the time overpass the time of downloading from the local cache. Also we can get URL files from the browsing history, however, this attack is urge for an interaction with a user in terms of modern browsers.
- Mouse gestures, the frequency and duration of keystrokes, the accelerometer data – all these parameters are unique for each user.
- Any changes in terms of standard site fonts and their sizes, zoom level or usage of special possibilities, like, text colour or size.
- The condition of certain browser features, setting up by a client, like, the block of external cookies, DNS – prefetching, pop-up blocking, flash security adjustments and so on ( the irony of it is that the users that change their default settings as a matter of fact make it far more recognisable in terms of identification).
By and large, these are only the obvious variants that are not hard to plumb. If we ‘dig’ a little further – we can find out more.
As you can see, practically, there are great amount of different ways to track users. Some of them are the result of the implementation defects or gaps and, theoretically, can be fixed, the other ones are quite impossible to prevent without a full changing of the work principles of the computer networks, web applications, browsers. Generally, we can counter work against some techniques, like, to clean caches, cookies and other places where identificators can be stored. However, others work absolutely imperceptible for a user, and it is impossible to protect yourself from them. That is why, the most important thing to remember is that when you are ‘travelling’ in the web, all your shuffles could be tracked. | <urn:uuid:ac59e577-7ef7-4a45-a137-23a9e78d3197> | CC-MAIN-2017-04 | https://hackmag.com/security/the-bourne-identity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00372-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927461 | 3,140 | 2.984375 | 3 |
One significant benefit of cloud hosting services is the reduced energy consumption. As a recent post from Pike Research points out, outsourcing data centers leads to savings on manpower, money, and energy. Clouds are less expensive to operate, consume less energy, and have higher utilization rates than traditional data centers.
This post summarizes a recent report from Pike Research, a market research and consulting firm that provides in-depth analysis of global clean technology markets. The report, “Cloud Computing Energy Efficiency," gives an in-depth analysis of the energy efficiency benefits of cloud computing.
According to the report, growth in the cloud computing market will have a substantial impact on energy consumption and greenhouse gas emissions. In fact, Pike Research forecasts that continued adoption of cloud computing will lead to a reduction of data center energy consumption of 31 percent from 2010 to 2020.
The post goes on to state that the transition to the cloud will continue to accelerate:
"Furthermore, it’s important to note that the spread of cloud computing services is helping to create a virtuous circle, wherein suppliers of servers, network equipment, disk drives, and cooling and power equipment increasingly design their products to suit the needs of large cloud operators, leading to improved operating margins through better use of electricity, and in turn to more adoption."
At Green House Data, we take the energy efficiency of cloud hosting one step further. Our 10,000 square foot green data center is powered entirely through renewable wind energy.
Our high-availability and secure facility operates at a 40% lower energy utilization per square foot than comparable data centers. Contact us to discover how this energy savings can translate into a cost savings for you. | <urn:uuid:8ae1ef2d-8376-41c3-a690-da6640f67bca> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/cloud-hosting-saves-energy-and-money | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00400-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931984 | 334 | 2.59375 | 3 |
Table Of Contents
Creating Core Dumps
When a router crashes, it is sometimes useful to obtain a full copy of the memory image (called a core dump) to identify the cause of the crash. Core dumps are generally very useful to your technical support representative. Not all crash types will produce a core dump. The different crash types are discussed in more details in Appendix B, "Memory Maps."
Caution Use the commands discussed in this appendix only under the direction of a technical support representative. Creating a core dump while the router is functioning in a network can disrupt network operation.
Four basic ways exist for setting up the router to generate a core dump:
•Using Trivial File Transfer Protocol (TFTP)
•Using File Transfer Protocol (FTP)
•Using remote copy protocol (rcp)
•Using a Flash disk
If TFTP is used to dump the core file to the TFTP server, the router will dump only the first 16 MB of the core file. This is a limitation of most TFTP applications. Therefore, if your router's main memory is more than 16 MB, do not use TFTP.
The following is the router configuration needed for getting a core dump using TFTP:
exception dump a.b.c.d
Here, a.b.c.d is the IP address of the TFTP server.
The core dump is written to a file named hostname-core on the TFTP server, where hostname is the name of the router. You can change the name of the core file by adding the exception core-file filename configuration command.
Depending on the TFTP server application used, it may be necessary to create on the TFTP server the empty target file to which the router can write the core. Also make sure that you have enough memory on your TFTP server to hold the complete core dump.
To configure the router for core dump using FTP, use the following configuration commands:
ip ftp usename username
ip ftp password password
exception protocol ftp
exception dump a.b.c.d
Here, a.b.c.d is the IP address of the FTP server. If the username and password are not configured, the router will attempt anonymous FTP.
Remote copy protocol (rcp) can also be used to capture a core dump. Enabling rcp on a router will not be covered in this appendix. Refer to the Cisco IOS Software Configuration document for configuring rcp.
After rcp is enabled on the router, the following commands must be added to capture the core dump using rcp:
exception protocol rcp
exception dump a.b.c.d
Here, a.b.c.d is the IP address of the host enabled for rcp.
Using a Flash Disk
Some router platforms support the Flash disk as an alternative to the linear Flash memory or PCMCIA Flash card. The large storage capacity of these Flash disks makes them good candidates for another means of capturing core dump. For information on the router platforms and IOS versions that support the Flash disk, refer to the Cisco IOS Release Notes.
The following is the router configuration command needed to set up a core dump using a Flash disk:
exception flash <procmem|iomem|all> <device_name[:partition_number]> <erase | no_erase>
The show flash all command will give you a list of devices that you can use for the exception flash command.
The configuration commands in this section may be used in addition to those described in the "Basic Setup" section.
During the debugging process, you can cause the router to create a core dump and reboot when certain memory size parameters are violated. The following exception memory commands are used to trigger a core dump:
exception memory minimum size
The previous code is used to define the minimum free memory pool size.
exception memory fragment size
The previous code is used to define the minimum size of contiguous block of memory in the free pool.
The value of size is in bytes and is checked every 60 seconds. If you enter a size that is greater than the free memory, and if the exception dump command has been configured, a core dump and router reload is generated after 60 seconds. If the exception dump command is not configured, the router reloads without generating a core dump.
In some cases, the technical support representative will request that debug sanity be enabled when setting up the core dump. This is a hidden command in most IOS releases, but it sometimes is necessary to debug memory corruption. With debug sanity, every buffer that is used in the system is sanity-checked when it is allocated and when it is freed.
The debug sanity command must be issued in privileged exec mode (enable mode) and involves some CPU utilization. However, it will not significantly affect the router's functionality.
Not all types of crash require debug sanity to be enabled. Use this command only when your technical support representative requires it.
To disable debug sanity, use the privileged exec command undebug sanity.
Testing the Core Dump Setup
When the router is configured for core dump, it may be useful to test whether the setup works.
The IOS provides a special command to test or trigger a core dump:
Use this command in privileged exec mode (enable mode). This command will cause a crash, and the content of the memory will be dumped accordingly. If it no core dump is generated, the whole setup and config must be reviewed.
Caution The write core command will have an impact on a production network. It will cause the router to crash and will prevent it from coming up before dumping the content of its memory. This might take some time, depending on the amount of DRAM present on the router. Use the command with utmost caution. | <urn:uuid:708f76aa-b065-45c9-84eb-ed151529f4f5> | CC-MAIN-2017-04 | http://www.cisco.com/en/US/docs/internetworking/troubleshooting/guide/tr19aa.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.844491 | 1,208 | 2.546875 | 3 |
Researchers in Colorado are investigating the potential for the state’s irrigation canals to be used as a source of renewable hydropower.
Engineering firm Applegate Group Inc. and Colorado State University have received a $50,000 grant from the Colorado Department of Agriculture to look into generating hydropower from the 3 million acres of irrigated land in the state. The grant is part of the Advancing Colorado's Renewable Energy Program to promote energy-related projects beneficial to Colorado's agriculture industry.
Hydropower is created by running water through a hydraulic turbine that spins and drives a generator shaft to create electricity. Most small hydro projects, also called micro-hydro, divert a portion of a river or creek’s flow, or are constructed on established channels, such as irrigation ditches.
Currently about 10 percent of U.S. electricity comes from hydropower, according to the Colorado Renewable Energy Society. Compared to other renewable energy sources, hydropower is known for being consistent and durable.
Recent technological advancements in small hydro — the development of hydroelectric power on a scale serving a small community — have made Colorado irrigation canals a likely possibility for hydro development, said Colorado State University professor Daniel Zimmerle, who received the grant.
“In the small hydro area, [Colorado has] a good chance of being a leader because we have a lot of state and local support for the idea,” Zimmerle said. “It helps to be in mountains.”
Zimmerle and researchers will study how efficient and plausible low-head hydropower, which uses river current and tidal flows to produce energy without the use of a dam, is in hundreds of statewide irrigation ditches with drops between five feet and 30 feet.
The costs and environmental impacts of constructing a dam make traditional hydroelectric projects difficult. However, small hydro costs are similar to other renewable energy sources, Zimmerle said.
In terms of wiring the hydro facility to be managed remotely and connected to the electric grid, all conversion inverters and communications equipment that have been used for other applications will be reapplied to small hydro, he said.
“Once you get those big rocks in place, there are actually quite a few questions about economic viability and how you implement these systems,” said Zimmerle. “That’s where we are going to go after this research project is over.”
Until recently, “tortuous” government permitting processes, along with the large initial cost of the systems, have been the biggest barriers to implementing hydro technology, said Zimmerle. Plus, water resources can be particularly troublesome in terms of water rights issues and flow rates, he added.
However, as barriers to renewable energy sources have been knocked down over the past decade, more opportunity for micro-hydro has also become available, he said, and legislators have been green energy promoters.
Small hydro technology may also prove to be a revenue source for irrigation companies, researchers said. | <urn:uuid:80f1f945-c8a1-452c-bf1c-3ce233b0d7f7> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Colorado-Hydropower-Irrigation-Ditches-030111.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958731 | 615 | 3.390625 | 3 |
This paper introduces the concept of Web Services.
"Web Services" is the name of a set of standards and mechanisms enabling software components to be invoked across the Web. The components themselves are called Web services. An application using a Web service (a client application) can invoke it and pass data to and from it very easily, because all communication between them is in the form of XML files sent using a standard protocol such as HTTP. This means the client application has no need to know details of how the component is deployed - for example, whether it is a COM object or an EJB, what language it is in, and so on.
At the time of writing, Web Services is a relatively recent innovation. In theory, deploying a component as a Web service means it can be invoked across the Web by anyone in the world. For example, a credit card company might provide a Web service to be called by retailers to validate card details. However, at the present time they are often used across companies' intranets, for use in internal applications and as a means of integrating disparate internal applications.
Web Services is built around three standards defining the format of the XML files needed to link clients to services:
For a detailed discussion of Web services and their benefits, see the white paper Web Services Concepts. For the latest specifications of WSDL, SOAP, and UDDI, see http://www.w3.org.
The toolkits for creating and accessing Web services that are available today do not natively support COBOL. However, there are several ways you can use Net Express with these toolkits to deploy COBOL programs as Web services:
This approach is documented in the white paper COBOL Web Services with the Microsoft SOAP Toolkit.You can download the SOAP Toolkit from the Microsoft Web site.
This approach is documented in the white paper COBOL Web Services with Cape Clear. For ordering and pricing information for the Cape Clear software, please contact Cape Clear. For contact details, see the Cape Clear Web site.
No documentation for this approach is provided here. For COBOL/Java interoperability, see your Distributed Computing manual.
Which mechanism you use largely depends on which products you use in your organization. The Microsoft SOAP Toolkit can only be used to create Web services that are hosted under Microsoft Internet Information Server (IIS). If you use another Web server or are using an application server to host your Web services, you may want to take a look at Java-based toolkits. Of the three mechanisms mentioned, only the Net Express/Cape Clear integration has been designed specifically for creating COBOL Web services. | <urn:uuid:5a5fe2f9-8677-4325-af11-65aa76a3283b> | CC-MAIN-2017-04 | https://supportline.microfocus.com/documentation/books/nx31sp1/wsintr.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921172 | 550 | 3.265625 | 3 |
How to deter and mitigate an Attack: Types and techniques
There are many attacks which can affect the computer performance of the user. But many of them are now known and there are some methods too which can be used to make sure those attacks do not affect the system that bad. Hence one should know about those preventive measures and should try his best to get rid of those attacks threats which can result into some better computer security. Here are some ways through which one can get those attacks ignored;
Monitoring system logs;
If someone can monitor the logs, he will know what the usual happenings are. Hence he can know if something special happens somehow and he would be able to take some preventive measures for that. Logs motoring are something very important since it can help someone monitor the logs for the sensitive data too and hence can know if there is any breach which has been done within the data. There are some software's too which are available for this purpose. Like one should know that the log monitors is a type of software's which can monitor the files and the logs. Many of the networks, security devices, servers etc. generate the log files. Also, the errors and the problems which happen again and again also generate the logs which can be used for some analysis. So for detecting the problems in automatic way, the administrations normally set up some monitors for the logs. Those mentors can scan the files which contain the log and can search for the text pattern which are known and the rules which can indicate the events which are important/ once an event has been detected, then an alter would be sent to the software or the hard ware system or some person so that he can take some actions on it. Here are some types of the system logs which are available to one;
Event logs: event logs are generated through event monitoring. When some event takes place like some software's has run some program or the data analysis has taken place, then the logs are generated. Event monitoring is he process in which collected, analysis and some other events are occurred like the processes done by the OS. These events may originate form open sources which are arbitrator as well like the hardware's and the software's as well.
Audit logs: Audit log is basically the historic account of all the events which have happened to the computer and they are related to some certain object. Normally, people just keep the logs of the rich target that are managed by some promising server. But there are some problems too, which are related to the audit log monitoring. Like, it is difficult to maintain such logs. One can maintain that log at the target but the management agent talks with the server and hence it can also keep the log as well. So basically question that pops up here is that how can one maintain the log? What types of events should it include and what is event itself? The audit log should be reusable and it should be simple so people can also review it easily and can put up some queries on it. There are some of the cases which can be defined, like the store event. They can be the new log into the audit log. There can be the get events, which mean the queries as the subset of events. There can be the merge events as well which means the new events are merged with some existing events. Hence all these logs are generated and one cane easily keep same track of them to get their hands on some good quality log monitoring of audit.
Security logs: the security log is the log which is used for keeping track of what is happening in the system and all those events relate to the security and the information is then saved and is checked later. It should be easy and readable as well. Hence people will be able to put up some questions and would be able to comment on it as well. There are many logs which are generated on some weekly bases as well. They can be the logs generated through windows security, the security log of the internet connection and the firewall, etc. Stephen Accession said that many of the UNIX installations do not really run any of the forms of security lagging software's, actually. The reason is that the logging files related to security facilities are pretty much excessive and they can cost a lot when it comes to the processing of time, disk damages and hence the costs which are associated to them while analysing the trail of audit. can be done through some software or by manually. In any case, the logs generated can help estimating the damage and its extent.
Access logs: As the name suggests, one should know, that the access log means the list of all the requests for some files which have been requested by the people through the HTML files and whatever is embedded into them. They can vet the associated files, graphics images etc. and they are the files which are transferred. This access log is sometimes also known as the raw data. This data can be analysed and then can summarized by some other person. Normally, this access log can be sued for analysing and telling someone about the fact that how any visits have been paid to some home page and how many of them were the first timers. Also, the visitors' location and the origin are also mentioned in terms of the Doman their server has. Like, there can be the eddo, .com, .up etc. at the bed of the website hence I t can be recorded. Also, it can be there that how many of the requests have been generated for each of the pages and hence which can be sued for presenting with pages with some most requested lists first. Also, the pattern of usage is also logged like the time of the day, the day of the week and etc. Another important thing is, that the those software's who have been keeping logs and have been putting up some analysis for that, they can be found as the shareware located on the website or they might even come with the web keeper.
When someone knows that there is some problem and the systems to be guarded well, then he would start putting some restrictions. It is a very good step since it can ensure that the data is not being stolen frequently and hence h can help protecting his precious data that he has on the files. Here are some of the ways through which one can make sure that the security settings are hardened and no one can get access into the computer' files;
Disabling unnecessary services: as it has been mentioned that there can be many applications too which can become the source of the attacks, then first thing that should be done, is making sure that the services which are not necessary are disabled first. This can help someone better since if the UN necessary application is running, one would have to monitor their activities as well and it would take too much time and so many opened things can divert one's attention easily. So closing them down through task manager or any other way can help one getting hold of the system.
Protecting management interfaces and applications: One must be able to protect the interfaces and the applications. The disabling of the applications is very important since they may bring the attack and can be the weak piutn if there is some developing error residing within them. The interface services can also be disabled. There are some global services which can be pretty insecure and they are not so necessary. If someone feels that these services are not so secure, he can disable them and hence can disable them on the router's interface as well. There are some basics which one should know about them first, like there is some caution which one would have to have. Like, there are the loopback interfaces and some null interfaces which are the physical interfaces and are location on the router. So while disabling, they should be disabled as well since it is better to be the safe one instead of being sorry at the end of the day. Any of the interface which is insecure, is basically the one which isn't connected to the internet network that one has. They are the ones which are actually connected to some public network like the internet is connected to it. Also, that one can be connected to some private network, Also; there can be the connection with the private LAN and the remote office.
Password protection: Well, there isn't anything to say too much about this thing, since it is a common understanding that passwords are the gateway to one's accounts and it is something that one has to protect. So one should set some strong asked passwords with some characters in it and if there is any default password, one should simple disable it and set a new one.
Disabling unnecessary accounts: If one imagines, that what will happen if he walks out of the room where computer is, and someone else sits and gets access to the computer through some guest account or some account which is idle. Then one would know how important it is to manage the accounts as well.
Network securityFollowing are the ways through which one can ensure network security;
MAC limiting and filtering: this term means that there can be the access control methods used for disabling the address which are assigned to some network. They are a great tool since these addresses can be used for the taking some access to the described network.
802.1 xs: this standard is used basically for getting access to the network so it should be made sure that this standard is secure so one can stay away from the risk of being exposed to any cyber-attack.
Disabling unused interfaces and unused application service ports: The interfaces which are unnecessary should be disabled since it is clear that they might harm the computer by bringing in the cirrus. Also, there are some ports on the routers which can also be cleared and closed up for secure system.
Rogue machine detection: There can be some rouge machines which can be detected through many methods and hence one cans are guard his interest through getting away from this.
Here are some security postured which one should have to protect himself;
Initial baseline configuration: The configuration of the baseline should be done initially so that risks can be minimized.
Continuous security monitoring: Setting up security isn't the only thing, in mist measure and monitor it too to check if it's effective.
Remediation: if there is some problem which occurs, then some steps should be taken like instating some programs to defend the computer.
Here are the reporting methods which can be used;
Alarms: Alarms can alert someone easily so they should be paid notice to.
Alerts: Alters shouldn't be ignored since they carry important messages.
Trends: Trends should be followed to see which cirrus is in these days.
Detection controls vs. prevention controls
There are not only detection controls there are prevention ones as well and they should be done as follows;
IDS vs. IPS: The IPS makes a report when there is some intrusion while IDS doesn't just reports but takes action too so enable the IDS.
Camera vs. guard: The cameras are effective then goads since goads can doze off too but cameras can catch everything in them if kept at secured position.
Hence one can know that one can stay safe from the attacks and all he needs to do is simply keep him safe. He can follow all the steps mentioned above. They will not only keep him secured but will act as a good defence as well. | <urn:uuid:d8fe2fe7-1ac3-43a5-901b-bff36b768a83> | CC-MAIN-2017-04 | https://www.examcollection.com/certification-training/security-plus-how-to-deter-and-mitigate-an-attack-types-and-techniques.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00024-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.977214 | 2,294 | 3.078125 | 3 |
Written by: Håkan Granbohm and Joakim Wiklund
Download PDF file
By adding GPRS to the GSM network, operators can offer efficient wireless access to external IP-based networks, such as the Internet and corporate intranets. What is more, operators can profit from the rapid pace of service development in the Internet world, offering their own IP-based services using the GPRS IP bearer, thereby moving up the Internet value chain and increasing profitability. End-users can remain connected indefinitely to the external network and enjoy instantaneous transfer rates of up to 115 kbit/s. Users who are not actually sending or receiving packets occupy only a negligible amount of the network’s critical resources. Thus, new charging schemes are expected to reflect network usage instead of connection time.
Ericsson’s implementation of GPRS enables rapid deployment while keeping entry costs low—the two new nodes that are added to the network can be combined and deployed at a central point in the network. The rest of the GSM network solely requires a software upgrade, apart from the BSC, which requires new hardware. The authors describe Ericsson’s implementation of GPRS. In particular, they explain the role of the two new GPRS support nodes and needed changes to Ericsson products in the PLMN.
[First published in Ericsson Review no. 02, 1999] | <urn:uuid:08ded84b-dbc0-4f5e-9624-4762ea249896> | CC-MAIN-2017-04 | https://www.ericsson.com/ericsson/corpinfo/publications/review/1999_02/51.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00510-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904166 | 286 | 2.59375 | 3 |
Credit: Dartmouth College Library
BASIC creators John Kemeny and Thomas Kurtz.
The mainframe isn't the only technology hitting the ripe old age of 50 this year. On May 1st, the BASIC programming language, first developed by Dartmouth College Professors Thomas Kurtz and John Kemeny, celebrates 50 years.
At the time, computers were highly serial. You loaded punch cards and waited your turn to run the application. That was known as batch processing. As computers matured from vacuum tubes to silicon semiconductors, they became more powerful and gained the ability to run multiple programs at once.
Kemeny wanted a language that would allow people to write their own programs and execute at the same time. Kemeny and a programming student both ran a program at the same time written in Beginner's All-purpose Symbolic Instruction Code, and both got their responses back. BASIC was born.
BASIC lived up to its name and was fairly straightforward, making it much easier to program than writing in assembler language or punch cards. It would start on minicomputers like the DEC PDP line. It would be released on the growing number of personal computers in the 1970s.
When the Altair 8800 came out, there were actually two BASIC compilers for it, both inspired by the minicomputer version of the language: Tiny BASIC, a simple version of the language, and Altair BASIC, written by a company called Micro-Soft. You may have heard of them.
Radio Shack's TRS-80, Apple Computer's Apple II, and Commodore's PET 2001 all came with BASIC built into the firmware, and IBM would release a BASIC interpreter for its Personal Computer as well. BASIC would eventually be overshadowed in significance with developers by C and later C++, but it remained a popular first language for many programmers to grasp the concepts of programming.
Microsoft would return to its roots, breathing new life into BASIC in 1991 with the release of Visual Basic, which helped developers write Windows-based BASIC apps that were actually compiled, not just interpreted. Thanks to the power of the VB compiler, it found favor as more than just a teaching tool, and commercial apps were soon being developed with VB. Granted, many if not most were freeware/shareware, but it was more than anyone expected out of BASIC.
BASIC is still alive and kicking. Wikipedia lists 33 different compilers, plus there is True BASIC, the direct successor to Dartmouth BASIC from a company co-owned by Kurtz. There are even a few in the iOS App Store. It doesn't look a thing like the AppleSoft BASIC I was learning 30 years ago, but that's why it survives; BASIC adapted and grew.
Dartmouth will be holding a series of events to mark the anniversary on the campus, but they will also be broadcast on the Internet. | <urn:uuid:7fe70cfa-1f76-4be9-bd78-184c719ad0d8> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226711/microsoft-subnet/50-years-of-basic--celebrating-the-programming-language-s-long--eventful-life.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00050-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.977868 | 608 | 3.234375 | 3 |
Gurley E.S.,b formerly known as International Center for Diarrheal Diseases Research |
Homaira N.,b formerly known as International Center for Diarrheal Diseases Research |
Haque R.,b formerly known as International Center for Diarrheal Diseases Research |
Petri W.,University of Virginia |
And 5 more authors.
Indoor Air | Year: 2013
Approximately half of all children under two years of age in Bangladesh suffer from an acute lower respiratory infection (ALRI) each year. Exposure to indoor biomass smoke has been consistently associated with an increased risk of ALRI in young children. Our aim was to estimate the effect of indoor exposure to particulate matter (PM2.5) on the incidence of ALRI among children in a low-income, urban community in Bangladesh. We followed 257 children through two years of age to determine their frequency of ALRI and measured the PM2.5 concentrations in their sleeping space. Poisson regression was used to estimate the association between ALRI and the number of hours per day that PM2.5 concentrations exceeded 100 μg/m3, adjusting for known confounders. Each hour that PM2.5 concentrations exceeded 100 μg/m3 was associated with a 7% increase in incidence of ALRI among children aged 0-11 months (adjusted incidence rate ratio (IRR) 1.07, 95% CI 1.01-1.14), but not in children 12-23 months old (adjusted IRR 1.00, 95% CI 0.92-1.09). Results from this study suggest that reducing indoor PM2.5 exposure could decrease the frequency of ALRI among infants, the children at highest risk of death from these infections. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd. Source | <urn:uuid:8294c9d2-5fec-47d2-ba45-79b4510ceb17> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/b-formerly-known-as-international-center-for-diarrheal-diseases-research-2676440/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00472-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939704 | 376 | 2.578125 | 3 |
Create and open the next available file on the camera roll for writing.
camera_error_t camera_roll_open_photo(camera_handle_t handle, int *fd, char *filename, int namelen, camera_roll_photo_fmt_t fmt)
The handle returned by a call to the camera_open() function.
A pointer to the file descriptor. The pointer that is returned pointers an open photo file on the camera roll.
A pointer to returned name of the file on the camera roll. Ensure that the array pointed to by filename is at least of size CAMERA_ROLL_NAMELEN.
The size of the buffer provided by the caller as the filename. The maximum size is indicated by the value of CAMERA_ROLL_NAMELEN.
The image file format to create.
The camera roll is a directory on the device where the camera application saves files. The camera service manages unique filenames on behalf of the user. Use this function to retrieve the next available file from the camera roll. You require CAMERA_MODE_ROLL access mode when you call the camera_open() function to open the camera.
After you successfully call this function, a file is created and opened for writing. To close the file, you must call the camera_roll_close_photo() function.
CAMERA_OK when the function successfully completes, otherwise another camera_error_t value that provides the reason that the call failed. | <urn:uuid:36476c73-d4bc-45bc-83f0-ed709105bed8> | CC-MAIN-2017-04 | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.camera.lib_ref/topic/camera_roll_open_photo.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00528-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.809751 | 313 | 2.6875 | 3 |
Technology news outlets are abuzz this morning with a report from IBM that its scientists have made great strides toward to bringing the theories of quantum computing to reality.
From a Computerworld story on our site:
Scientists at IBM Research today said they have achieved a major advance in quantum computing that will allow engineers to begin work on creating a full-scale quantum computer.
The breakthrough allowed scientists to reduce data error rates in elementary computations while maintaining the integrity of quantum mechanical properties in quantum bits of data, known as qubits.
The creation of a quantum computer would mean data processing power would be exponentially increased over what is possible with today's conventional CPUs, according to Mark Ketchen, the manager of physics of information at the IBM's TJ Watson Research Center in Yorktown Heights, N.Y.
In this video, the IBM researchers discuss what they're doing in layman's language:
And in this one, Ketchen explains how the researchers conduct experiments in an extreme low temperature lab, "which is really they key to this work."
Welcome regulars and passersby. Here are a few more recent buzzblog items. And, if you’d like to receive Buzzblog via e-mail newsletter, here’s where to sign up. You can follow me on Twitter here and on Google+ here.
- 2012’s 25 Geekiest 25th Anniversaries.
- No, Wikipedia has not forgiven GoDaddy for backing SOPA.
- Another survey that shows vendors think you’re stupid.
- Dying Jobs kept letter from Gates at his bedside.
- Supreme Court backs privacy over police in GPS case.
- Who’s flying drones in U.S.? … FAA won’t say.
- Slashdotters rain on “cloud computing”
- “The Joy of Books” tap dances all over your Kindle.
- Who’s lying? The iPad owner or the border guard?
- “LAN-party house” guy spills important details.
- Follow the Mythbusters' bouncing cannonball.
- Steve Jobs and his gadgets … in LEGO. | <urn:uuid:c6eead85-dd4e-43c8-9580-5255780caeb4> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2221784/data-center/ibm-scientists-discuss-quantum-computing-breakthrough.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.89339 | 441 | 2.5625 | 3 |
126.96.36.199 How do I find someone else's public key?
Suppose Alice wants to find Bob's public key. There are several possible ways of doing this. She could call him up and ask him to send his public key via e-mail. She could request it via e-mail, exchange it in person, as well as many other ways. Since the public key is public knowledge, there is no need to encrypt it while transferring it, though one should verify the authenticity of a public key. A mischievous third party could intercept the transmission, replace Bob's key with his or her own and thereby be able intercept and decrypt messages that are sent from Alice to Bob and encrypted using the ``fake'' public key. For this reason one should personally verify the key (for example, this can be done by computing a hash of the key and verifying it with Bob over the phone) or rely on certifying authorities (see Question 188.8.131.52 for more information on certifying authorities). Certifying authorities may provide directory services; if Bob works for company Z, Alice could look in the directory kept by Z's certifying authority.
Today, full-fledged directories are emerging, serving as on-line white or yellow pages. Along with ITU-T X.509 standards (see Question 5.3.2), most directories contain certificates as well as public keys; the presence of certificates lower the directories' security needs. | <urn:uuid:4d9d9121-2541-40f1-87ae-97e6ca1627ce> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/someone-else-public-key.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960632 | 298 | 2.734375 | 3 |
New York is experimenting with differential GPS, which offers greater accuracy and translates to better GIS data for user agencies.
LATHAM, N.Y. - The latest step in the evolution of global positioning system technology is being tested by a New York state department which maps and analyzes endangered species habitat. The Fish & Wildlife Division of New York's Environmental Conservation Department is conducting a pilot program to determine the feasibility of using a differential global positioning system (DGPS) to support a broad range of geographic information-based applications in ecosystem management.
One of the main potential applications for this technology is mapping New York's 2.5 million acres of wetlands. Other applications include mapping forest areas, hiking trails, rare plant and animal communities and monitoring toxic-substance areas.
The pilot program - directed by Senior Wildlife Biologist John Ozard and Senior Fish and Wildlife Ecologist Scott Crocoll - was prompted by the need to ensure accuracy and currency in the division's geographic information system (GIS) and other biodiversity databases. The division was also looking for a faster, more accurate and economical method of mapping and data collection.
PROBLEMS WITH EXISTING METHODS
According to the project's leaders, existing methods for mapping wetlands and locating species habitats are slow, error prone, and costly in manpower. "Where GPS is going to come into wetlands mapping," explained GIS Project Coordinator Wayne Richter, "is in the data gathering and data updating stage."
"For instance, freshwater wetlands have traditionally been mapped by air photography or ground surveys," Richter said. "Boundaries were then transferred to a quad map by estimating locations. Surveys conducted with GPS will be used as a basis to amend maps already in the GIS, [but] with much greater accuracy."
The differential in DGPS is a referencing technique used to overcome natural and artificial errors in a GPS. Natural errors result from atmospheric conditions, timing differences, and minor perturbations in satellite orbits. Another source of error, called selective availability (SA), is randomly activated by the Defense Department to degrade the accuracy of GPS for civilian use. The Pentagon's SA is ostensibly motivated by national security concerns.
Raw GPS with SA activated produces accuracies of 100 meters; without SA, predictable accuracies for civilian use are between 20 meters and 30 meters. DGPS, however, can produce accuracies in the millimeter range, depending on the type of measurement used by the receiver.
Differentially-corrected static files obtained with pseudorange measurement produce accuracies of between 2 and 5 meters. The higher-price carrier-phase measurement, an option with the Magellan ProMark V, produces differentially-corrected accuracies in the sub-meter range.
The differential principle has been used in electronic navigation systems for years. In DGPS, a computer measures the differences between the known geographic location of a GPS base station and its satellite-reported positions, and generates time-stamped corrections, which can be transmitted as real-time corrections to remote receivers within a 300-mile radius, or downloaded to a hard disk for post processing (applying differential corrections after remote files have been collected).
POST PROCESSING - FROM GPS TO GIS
The base station, located at the agency's Wildlife Resources Center in Albany, takes a fix each second, 10 hours a day, seven days a week, and stores the data on an internal file. Every hour, a computer automatically downloads the file to its hard disk, then clears the receiver's memory. The base station's recording rate and hours of operation are user selectable and can be changed locally, or remotely by PC, modem and telephone.
GPS files collected in the field can be post-processed at the Resources Center or from remote locations. Files from the remote receiver and the base station are first converted to RINEX, a receiver-independent exchange format that enables post-processing software to work with files from different brand GPS receivers. The software then time-matches data from the two receivers and applies corrections to the remote files. Corrected files imported to the GIS are first converted by a utility program in the software to a language understood by AutoCAD, Arc/Info, MapInfo and other databases.
Currently in the developmental research phase of the pilot program, DGPS is being evaluated with a broad range of applications for the bureaus of Wildlife, Fisheries, and Environmental Protection. The project team, for example, mapped Tern and Piping Plover communities on Long Island. Both populations are declining along the Atlantic Coast and the species have been put on the state's endangered list. Other endangered or threatened species habitats being mapped by the team include the Massasauga Rattlesnake, found in wetlands, and the Eastern Timber Rattlesnake, which is found mainly in mountainous areas.
Crocoll explained that wetlands are plotted on New York state planimetric maps using 1:24,000 scale, with thick lines indicating the approximate outer edge of the wetland. "Based on the scale of the map, that line could be 50-feet wide," he said. But "GPS is going to give us an accuracy of 2 [meters] to 5 meters."
Differentially-corrected static files taken in open terrain are producing this kind of accuracy with occupation times of nearly five minutes. Forested areas with heavy canopy require longer occupation times.
Data collected can be checked by a crew still in the field using a portable PC, Ozard said. "The flexibility is especially useful because it lets us check the accuracy of our product before we return to the home office. We can also differentially correct our data from a remote office and produce an output map before returning home."
BASE STATION RANGES
Part of the DGPS test includes verifying the accuracy of known horizontal control stations. When a National Geodetic Survey marker is in the area, the team will take one or two static files. If those correct to better than 5-meter accuracy, they are fairly confident that the data will be in the same accuracy range. Most of the tests thus far have shown accuracies within 5 meters, Ozard said.
Another test is to see how far from a base station DGPS measurements can be taken while maintaining accuracy within 5 meters. Points taken from as far away as 311 miles from a station have produced errors of less than 3 meters. "The accuracies are better than those required for wildlife and land management applications," Ozard said. "It looks like one base station will adequately cover the entire state."
Although it will take another year to complete the evaluation, Ozard believes DGPS will be a very useful tool in determining locations of rare and endangered species, particularly in areas without landmarks. "It should also assist us in determining the movements of animals, their home range, and how far they travel in various time periods. Another practical benefit will be using mobile files to map locations of smaller communities and habitats that cannot be determined from aerial photographs."
Following training and product evaluation, the New York project leaders chose a Ranger, 12-channel, all-in-view Base Station from Ashtech, and a package of three Magellan NAV 5000 PRO GPS Receivers, Hewlett-Packard HP-95LX Dataloggers, and Magellan's mission-planning, data-collection and post-processing software. The NAV 5000 units were subsequently upgraded to the more powerful ProMark V's. Their expanded data-storage capacity eliminates the need for external data recorders. Important factors in the selection of remote receivers were portability, weight and compactness. "We were looking for receivers that could easily be carried through rugged terrain and brush," Crocoll explained. | <urn:uuid:cfe2dc9e-4148-454b-b280-d974ec842e39> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/WILDLIFE-HABITATS-LOCATED-WITH-GPS.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00189-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937881 | 1,577 | 2.890625 | 3 |
The increasing connectedness of the world's people and political economies is grounds for optimism for the future, despite the growing risks associated with climate change, organised crime and terrorism, say UN agencies.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
The State of the Future report published this month by the Millennium Project of the World Federation of UN Associations (WFUNA), showed that more than a billion people (16% of the world poulation) are now connected to the internet. It showed the digital gap between developed and developing economies continues to close.
"Most people in the world may be connected to the internet within 15 years, making cyberspace an unprecedented medium for civilisation," the authors said. "This new distribution of the means of production in the knowledge economy is cutting through old hierarchical controls in politics, economics and finance. It is becoming a self-organising mechanism that could lead to dramatic increases in humanity's ability to invent its future.
"As the integration of cellphones, video, and the internet grows, prices will fall, accelerating globalisation and allowing swarms of people to quickly form and disband, co-ordinate actions, and share information ranging from stock market tips to bold new contagious ideas (meme epidemics)."
The report's authors noted that developing countries generate more than half of the world's £31-trillion economy. "This is helping to democratise the coming knowledge economy with tele-nearly-everything and providing self-organising mechanisms for emerging collective computer-human intelligence and management systems.
"A worldwide race to connect everything not yet connected is just beginning, and great wealth will be generated by completing the links among systems by which civilisations function and flourish."
The authors were upbeat about the application of the artefacts of digital technology to biology. "Just as lines of code were written to create software to do amazing things, genetic code may be written to create life to do even more amazing things, such as producing hydrogen fuel instead of oxygen from photosynthesis. Artificial organs may be constructed by depositing living cells, layer by layer, using dot-matrix printers in a manner similar to 3-D prototyping.
"Future synergies among nanotechnology, biotechnology, information technology and cognitive science can dramatically improve the human condition by increasing the availability of food, energy and water, and by connecting people and information everywhere. The effect will be to increase collective intelligence and to create value and efficiency while lowering costs. The factors accelerating all these changes are themselves accelerating, which will make the past 25 years seem slow compared with the next 25," they said.
Not everything is rosy, however. The authors said that although the number of electoral democracies is increasing, press freedoms are decreasing. "According to Freedom House, only 17% of the world's population has access to free media," they said. In addition, "trivial entertainment flooding our minds with unethical behaviour and the increasing proliferation of media and information makes it difficult to separate the noise from the signal of what is important to know about our global situation in order to make good decisions."
Against this, "E-government is taking hold around the world and it will become more effective as increasing numbers of citizens have access to the needed technologies," they said.
The World Federation of UN Associations is an independent, non-governmental organisation with Category One Consultative Status at the Economic and Social Council (ECOSOC) and consultative or liaison links with many other UN organisations and agencies. The Millennium Project of WFUNA is a global participatory futures research think tank of futurists, scholars, business planners and policy makers who work for international organisations, governments, corporations, NGOs and universities. The Millennium Project manages a coherent and cumulative process that collects and assesses judgements from its several hundred participants to produce the annual "State of the Future", "Futures Research Methodology" series, and special studies such as the State of the Future Index, Future Scenarios for Africa, Lessons of History, Environmental Security, Applications of Futures Research to Policy, and an annotated scenarios bibliography. | <urn:uuid:70f98eb8-0de3-487b-948e-78edd29cbdd2> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240083079/Millennium-Project-praises-increasing-global-connectedness | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936054 | 852 | 2.75 | 3 |
DELL EMC Glossary
Cloud infrastructure encompasses the computers, storage, network, related components, and facilities required for cloud computing and IT-as-a-Service.
Why Should I Consider Cloud Infrastructure?
Organizations leverage cloud infrastructure to build hybrid and private clouds that deliver cloud computing services. To fully effectively deliver the benefits of cloud computing, organizations are implementing cloud-enabled infrastructure as part of their data center modernization.
How Does Cloud Infrastructure Work?
Cloud computing infrastructure includes the following components:
- Servers - physical servers provide "host" machines for multiple virtual machines (VMs) or "guests'
- Virtualization - virtualization technologies abstract physical elements and location. IT resources – servers, applications, desktops, storage, and networking – are uncoupled from physical devices and presented as logical resources.
- Storage - SAN, network attached storage (NAS), and unified systems provide storage for primary block and file data, data archiving, backup, and business continuance.
- Network - switches interconnect physical servers and storage.
- Management - cloud infrastructure management includes server, network, and storage orchestration, configuration management, performance monitoring, storage resource management, and usage metering
- Security - components ensure information security and data integrity, fulfill compliance and confidentiality needs, manage risk, and provide governance.
- Backup & recovery - virtual servers, NAS, and virtual desktops are backed up automatically.
- Infrastructure systems - pre-integrated software and hardware, such as complete backup systems with de-duplication and pre-racked platforms containing servers, hypervisor, network, and storage, streamline cloud infrastructure deployment and further reduce complexity.
What Are The Benefits of Cloud Infrastructure?
Infrastructure built for cloud computing provides numerous benefits:
- Flexible and efficient utilization of infrastructure investments
- Faster deployment of physical and virtual resources
- Higher application service levels
- Less administrative overhead
- Lower infrastructure, energy, and facility costs
- Increased security | <urn:uuid:cafcdf83-8a4d-4df0-a266-b564b84e9583> | CC-MAIN-2017-04 | https://www.emc.com/corporate/glossary/cloud-infrastructure.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00519-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.852632 | 407 | 2.96875 | 3 |
The issues with biometric systems
There are two basic types of recognition errors: the false accept rate (FAR) and the false reject rate (FRR). A False Accept is when a nonmatching pair of biometric data is wrongly accepted as a match by the system. A False Reject is when a matching pair of biometric data is wrongly rejected by the system. The two errors are complementary: When you try to lower one of the errors by varying the threshold, the other error rate automatically increases. There is therefore a balance to be found, with a decision threshold that can be specified to either reduce the risk of FAR, or to reduce the risk of FRR.
In a biometric authentication system, the relative false accept and false reject rates can be set by choosing a particular operating point (i.e., a detection threshold). Very low (close to zero) error rates for both errors (FAR and FRR) at the same time are not possible. By setting a high threshold, the FAR error can be close to zero, and similarly by setting a significantly low threshold, the FRR rate can be close to zero. A meaningful operating point for the threshold is decided based on the application requirements, and the FAR versus FRR error rates at that operating point may be quite different. To provide high security, biometric systems operate at a low FAR instead of the commonly recommended equal error rate (EER) operating point where FAR = FRR.
Compromised biometric data
Paradoxically, the greatest strength of biometrics is at the same time its greatest liability. It is the fact that an individual's biometric data does not change over time: the pattern in your iris, retina or palm vein remain the same throughout your life. Unfortunately, this means that should a set of biometric data be compromised, it is compromised forever. The user only has a limited number of biometric features (one face, two hands, ten fingers, two eyes). For authentication systems based on physical tokens such as keys and badges, a compromised token can be easily canceled and the user can be assigned a new token. Similarly, user IDs and passwords can be changed as often as required. But if the biometric data are compromised, the user may quickly run out of biometric features to be used for authentication.
Vulnerable points of a biometric system
The first stage involves scanning the user to acquire his/her unique biometric data. This process is called enrollment. During enrollment, an invariant template is stored in a database that represents the particular individual.
To authenticate the user against a given ID, this template is retrieved from the database and matched against the new template derived from a newly acquired input signal.
This is similar to a password: You first have to create a password for a new user, then when the user tries to access the system, he/she will be prompted to enter his/her password. If the password entered via the keyboard matches the password previously stored, access will be granted.
There are seven main areas where attacks may occur in a biometric system:
- Presenting fake biometrics or a copy at the sensor, for instance a fake finger or a face mask. It is also possible to try and resubmitting previously stored digitized biometrics signals such as a copy of a fingerprint image or a voice recording.
- Producing feature sets preselected by the intruder by overriding the feature extraction process.
- Tampering with the biometric feature representation: The features extracted from the input signal are replaced with a fraudulent feature set.
- Attacking the channel between the stored templates and the matcher: The stored templates are sent to the matcher through a communication channel. The data traveling through this channel could be intercepted and modified - There is a real danger if the biometric feature set is transmitted over the Internet.
- Corrupting the matcher: The matcher is attacked and corrupted so that it produces pre-selected match scores.
- Tampering with stored templates, either locally or remotely.
- Overriding the match result. | <urn:uuid:a45c74a6-858b-40fd-a137-4ea7e8808494> | CC-MAIN-2017-04 | http://www.biometricnewsportal.com/biometrics_issues.asp | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00427-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911884 | 841 | 2.75 | 3 |
Using Penetration testing and vulnerability scanning
There are some systems which can help one understand about the testing's which are related to the penetration. One can carry out those tests and can be assure of the fact that he is safe. Also one can give with the option of some vulnerability scanning which can determine that how many chances are there that a system can get affected from some attack or the loophole. Here are the watts through which one can learn about all those things and can increase his knowledge level;
This test has some short form too and is also known as the pen test. One must know that it is the attack which is done on the computer with the thinking of having some security weakness which can be found there and hence one might get access to that. So the data and the functionality of that security can be compromised greatly. The process here involves some identification of the target system den then the goals which are set here. Then the information which is available is reviewed. Hence the means which are available for the goal attaining are reviewed as well and the penetration test's target can be the white box. It can also b the black and the grey box too. The penetration can be most likely to be surveying some rabbit proofs fences. That is built up so that rabbits can stay out. Into hat survey the defences and the penetration testers might identify some holes which might be large so that the rabbit can get into them. Hence when the defence has even passed, then there are some further reviews which are occurred with the movement of the tester. Then comes the next security control and it would mean that there are going to be some really big holes and there would be many of them. All of those holds actually exist on the front line of that defence. Hence the tester would find the first one easily since it is the first successful exploit available. So that's the difference which exists between the penetration test and the assessment for the vulnerability.
Verify a threat exists: The first thing which is carried out in it is the verification that there is some threat which actually exists since without that, the whole idea of that test is rubbish.
Bypass security controls: For the tests, one would have to get passed through the security controls so he can go deep into the system and dig something out there.
Actively test security controls: the security controls which are responsible for the whole security systems, should also be tested accurately and actively to know they are updated all the time.
Exploiting vulnerabilities: if there are some vulnerability which are found there, they should be exploited so that the system can become much safer place for the data.
Instead of having the penetration testing, one can make use of this vulnerability testing's as well which can help a lot own knowing about the weaknesses each system contains. There are some methods for the vulnerability tastings;
Passively testing security controls: The security controls which are supposed to keep the back sectors out should be tested variously and regularly so their security can be ensured.
Identify vulnerability: as the name suggests, this test is all about the testing for the vulnerabilities so it should be ensured that there are some of them found here so that they can be taken care of easily.
Identify lack of security controls: if the security controls which are supposed to be there are less, then it should be pointed out too, hence one can take some measures to strengthen the current ones or they can also add new ones so that there are no further threats there.
Identify common misconfigurations: if there are some configurations which are done badly, then it should be identified too so that one can take measures for them and can correct them. Most of them which are found are normal in every system and they are common.
Intrusive vs. non-intrusive: one must check whether the problem is intrusive or not so that it can be depth with according to the type.
Credentialed vs. non-credentialed:
If the problem is that it can steal away some credential data, then it might be corrected so that one can stay safe and can keep the entire data safe with his identity.
False positive: there are some attacks too which are false positive. So they should be dealt with care and more time since they might create some problems in the future and one would surely like to not have them faced.
The black box is not the black box which we hear about that planes have, it is about the type of penetration testing to check whether there is some intrusion or not. The black box is the method for the software testing which can help examine the function of some application. That can be done with without erring with this internal structure it has and the working are not touched either. This kind of method can be used for applying some test of the virtual level which can be of any level. It normally consists of some higher level testing's. It might also be having some of the unit testing's as well. The procedure for the test is the simple. One must know about some specific knowledge about the code of applications and the internal structures. Hence the knowledge about the programming is not required here. But if it is, it might be a bad thing. But there is one thing that the tester should be aware of some things like how the software can work and how it reacts when being installed. Also he should know that what input would result into what and the knowledge of how the output would be preceded by the software is also something so important to learn about. There can be some specifications and requirements too which would be built around an application. Like they would help suggesting that what things should that application be doing. These cases are normally getting some descriptions which can be the external ones and they should include some design parameters, the requirements and the specifications. These tests are also the primary unction's and they have the nature of it. There can be some non-function tests too which can be applied there. The designer of the test can select the both invalid and the valid outputs and can determine some right output even though he might not have some knowledge about the internal structure of that object. The test contains many of the techniques and they can be the all of pair testing's, the country values testing's, error guessing, decision table test etc.
This testing is also known as the clear/ glass or transparent box testing. Some people might also refer it to as the structural tastings. It is the method in which the software gets tested for the internal structures or some working which are done by the applications. They are different from the functionalities. This test also contains some of the internal perspective about the system. There might be some programming skills which can be required and they can be sued for designing the cases of tests. Also, the tester can help oneself choosing the paths which are external and he can do that through some code and hence can also determine some of the outputs which are appropriate. Also, this analogue can be used for testing u the notes in some circuit. This white box test can also be carried out on some system levels, the units and the integration systems of the software testing procedure and Even though the testing type which is external, can be done here. It can be used for some system testing and the integration so frequently. This method for the testing can also uncover many of the problems and errors which are there in the software and hence it has some potential too for missing the parts of the software specification which are unimplemented. Or the required, if they are missing as well. There are some techniques which are involved while one is applying this white box testing techniques and some of them are the control flow, data flow, decision coverage, statement coverage etc. there are many advantages which are associate to this testing. Like, this testing is the biggest method which is being used these days by many people and there are some advantages which help it get this position. Like there are the safe effects which are related to knowing about the source code but they become beneficial to the testing process. Also, the codes are optimized through the errors which are hidden are revealed and hence there are the abilities which can make possible to remove the defects which appear here. These tests are pretty easy to automate hence they are preferred by many people. Also, these tests give some pretty clear tests results which is engineering based and they can be used for stopping the testing too.
This testing is actually the mixing up of the white and the black boss testing's. There is some objective which is related to it like the search for any defectors if there are any of the improper structure and the usage of the application. Hence it is made sure that the application which is being used is done in some good manner. The thing is that a white box tester would know about the internal structure of that application which the black box tester would not know about that internal structure. Hence the Gray box tester would know something about the structure and he won't be aware of something about it. So he would be partial and he would know about the documentations which is related to that internet structure of data and he might not know the algorithm usage and vice versa. He can even know about both. The Gray box testers are supposed to be having some really high level and the documents which are pretty detailed about the applications. They collect them al do that they can be used for defining the test cases. The gray box testing's can be used in since it is pretty beneficial when it comes to the straight techniques which are there in some black box testing's. It also combines that code with some tests in the white box testing's. This type of testing is basically catered on the requirements tests generation since it can help presenting all the conditions which are tested before by program through some assertion methods. There is some specification language too which is the requirement and it can make it become easy to understand the requirement which one should be having and can also verify the correctness. Also, there are some of the positive and the negative effects which one can find with these tastings. Like this system can offer some really communication of benefits like the advantages which arise from both the white and the black box testing's. Also this test is not intrusive one. Which means there it is based on some specification which is functional and there the architectural views are not on some source clues and the binaries which can make it become invasive. There the testing authority is pretty intelligent one; hence there are some communication protocol and the data types handling which are involved there. The testing which is done under this system is pretty un biased and one can go easy with it since the result won't be effected by any of the problem which one can think of would affect the testing system.
So there are many penetrations testing technique which can be used by one. They can help one determine whether the software's are good enough or not. So one should know about these testing's so he can make some better choices about the penetration testing's and can know which one should be applied to the system he is using. | <urn:uuid:da8c734c-8c8d-436c-a06f-6fcdd93c369e> | CC-MAIN-2017-04 | https://www.examcollection.com/certification-training/security-plus-using-penetration-testing-and-vulnerability-scanning.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974834 | 2,238 | 2.6875 | 3 |
The tragedy that unfolded in Minneapolis on Wednesday, August 1, 2007 is still being assessed, and rescuers continue to search for more of the missing. The horrifying bridge collapse, however, is yet another recent example of the power and limitations of wireless devices and networks during a disaster situation.
Emergency responders rely largely on wireless communications to coordinate operations at the scene, find those who are injured and rescue them from the wreckage. And ordinary citizens use mobile devices to alert loved ones to their status or whereabouts.
But what also usually happens near accident scenes like the one in Minneapolis is a disruption in cellular service because there's too much radio and network congestion. Many news outlets reported that cell phone service in the greater Minneapolis area went down, citing the fact that cell phone towers and antennae were overloaded by the sheer number of users trying to place calls.
One story reported that "Jay Reeves, 39, was one of the first people on the scene after the collapse. He tried calling 911, but all the lines were jammed."
A separate first-person account of Wednesday's events in Minneapolis detailed the cellular disruptions and related frustrations of many. "While I was out, I got a dozen or so SMSes but only one or two calls. [Only] every tenth call I tried to make went through, and half of the successful ones had problems like not hearing the other end, dropping or unusable quality," wrote Charlie Demerjian, a contributor to The Inquirer website who lives just five blocks from the bridge.
"Why? Simple overload. The infrastructure could not handle it," he continued. "Friends as far away as 10-15 miles could not place calls, the entire network was teetering on going down."
This tragic incident isn't the first time such a wireless infrastructure failure has occurred--and it likely won't be the last. Recent events such as Hurricane Katrina, the London subway bombings, the 2003 electrical blackout in the Northeast United States, and the 9/11 attacks on New York City and Washington, D.C., all led to mobile phone-service outages.
So why does this happen?
According to David Crowe, a 20-year veteran of the cell phone industry and founder of Cellular Networking Perspectives consultancy, the cause is a combination of simple overload on the cellular systems and the randomness of the events. "The problem with major tragedies is that they are completely unpredictable and often occur where large amounts of cell phone capacity are not normally required," he says.
A fascinating article on cell phone service overload can be found on Crowe's website.
In the article, Crowe points out that wireless systems are assigned limited frequencies, and "the amount of frequency that can be used in one place is also constrained by neighboring cells needing to use part of the same block and the amount of radio equipment installed in the cellsite. With analog and TDMA/GSM cellular systems, the number of users who can be supported in a single cellsite is a simple function of the number of transceivers installed, but with CDMA it is more complex."
"No matter what the technology," he writes, "cellsite capacity is carefully engineered to ensure that the undesirable tone known as 'fast busy' (two beeps per second versus one beep for normal busy) is rarely heard, even during the busiest times of the day. This tone indicates a lack of resources for the call, usually radio capacity, although sometimes it reflects the lack of a connection back to the main switch or other network overload or failure situations."
An event such as the Minneapolis bridge collapse can generate as much as 10 times more cellular traffic than normal levels, Crowe says. For example, Crowe, who is based in Canada, notes that the Dawson College shootings in Montreal last September led to 11 times the amount of normal traffic. "It is impossible for cellular carriers to have this much extra capacity in place," he writes. "It is not just that this would increase the cost of cell phone communications several-fold, something that consumers would not tolerate, but there simply is not enough frequency available in many locations, particularly in urban areas."
What's interesting to note is just how much companies rely on wireless communication services for their disaster recovery and business continuity plans. "I would suggest that [CIOs] ensure that their wireless communication is split between at least two carriers," Crowe advises. "[CIOs] should all have long-distance cards as well, so that if their cell phone isn't working perhaps they can find another phone."
Crowe points out that the Minneapolis bridge tragedy should remind businesses and CIOs that it's wise for disaster-recovery teams to play out all kinds of different scenarios. For example, he says, what would they do if a bridge took out their sites and their cellular communications functionality? What would be their most critical communications needs, and how could they be accomplished?
"But in many situations," Crowe says, "there is no choice but cellular communications." | <urn:uuid:1cd497bd-e9a8-4db4-8fea-c627671ba4a3> | CC-MAIN-2017-04 | http://www.cio.com/article/2438307/mobile/minneapolis-bridge-collapse--why-cellular-service-goes-down-during-disasters.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00143-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970503 | 1,023 | 2.640625 | 3 |
A single white trailer sits near runway 4-22 at the Las Cruces airport, the antenna on top chatting with the 21.5-foot-long Aerostar A drone parked on the tarmac outside.
This is where New Mexico State University's Unmanned Aircraft Systems Flight Test Center, once the lone such federally approved center in the nation, conducts most flight tests and evaluates procedures for unmanned aircraft as drones increasingly become part of everyday life.
Though drones are most often associated with military strikes on suspected terrorists abroad, scientists and CEOs are turning to unmanned aircraft to expand their research and businesses. Amazon founder Jeffrey P. Bezos said he hopes to use small drones to deliver packages. Some photographers use drones to capture weddings. A tourist recently crashed his camera-equipped drone into Yellowstone National Park's iconic Grand Prismatic Spring, and other parks have reported problems with drones buzzing loudly overhead or crashing into scenic landmarks as tourists try to capture unique photos.
The private use of unmanned aircraft outdoors violates the FAA's current regulations, which restrict most flights to law enforcement and researchers. But as the devices become increasingly popular and cheaper, the federal agency has recognized it is in dire need of new rules.
"The technology is moving so quick that the FAA as a regulator and, really, the public cannot keep up with the speed," NMSU Unmanned Aircraft Systems Flight Test Center Deputy Director Dennis "Zak" Zaklan said. "... It's such a big field. The important thing is to understand that drones are not a negative entity. Really, the enhancements they can bring to all aspects of society are tremendous and unlimited."
The FAA has regulations for manned aircraft, from rules on tires to brakes to airspace, but federal officials are struggling to create guidelines for drones, which can be as small as a fingernail or as large as a Boeing 737, Zaklan said.
"The UAS manufacturers really had no legal place to go and test their aircraft," Zaklan said.
Though they may be seen as a cheap toy or fancy camera, flying a drone comes with risks as well, especially if pilots are untrained. Drones have crashed into crowds and buildings, injuring spectators.
So a few years back, FAA officials decided they needed to establish a testing site, and NMSU's UAS Flight Test Center was born.
Zaklan started at the Flight Test Center in 2003, setting up an operations team in 2004. Initially, the group did mostly demonstrations, said Zaklan, a retired Navy master chief and cryptologist brought in for his intelligence background and experience managing projects on ships, aircraft and land.
FAA officials chose southern New Mexico as the test site for the region's limited air traffic, good weather and low population density, Zaklan said. As he recalled someone saying: "It's not at the end of the world, but you can see it from there."
The crew of about seven people, mostly with military and aviation experience, began running test flights and collecting data for the FAA to use in developing standards and procedures for unmanned aircraft.
"We'd sit down and say, 'OK, how do we do this safely? How do we do that with manned aircraft and how do we do that with unmanned aircraft?'" Zaklan said.
Drone manufacturers also approach the Flight Test Center, which will test and assess the companies' unmanned aircraft and flight protocol, sending that data on to the FAA as well.
The center also contracts with researchers and Sandia and Los Alamos national laboratories, Zaklan said. NMSU flies the aircraft and collects data on the flight and how the systems work together, while researchers collect the data they need. After the FAA contract ends this month, the center will increasingly rely on these other contracts to sustain itself.
The Flight Test Center crew also run their own test flights, testing unmanned aircraft and the attached sensors to develop procedures on how to incorporate drones into national airspace with manned aircraft, Zaklan said.
The crew have conducted about one flight a week since 2005. The Flight Test Center usually has seven to 10 drones in its care, including aircraft companies loan the center.
Some of the procedures NMSU developed have already been adopted and put in place. The Department of Homeland Security is using some of NMSU's procedures out of its Corpus Christi location, Zaklan said, and FAA officials adopted NMSU practices for chase planes, the planes that follow behind a drone to monitor its flight.
Meanwhile, the FAA has opened five test sites across the country this year, in Nevada, Alaska, North Dakota, Texas and New York. A sixth center will open in Virginia.
Zaklan said he expects drones to be increasingly widespread in the coming years, helping farmers determine how and where to spray pesticides and monitoring mines or volcanoes for scientists. Sensors hooked up to the drones could detect changes beyond the ability of the human eye, Zaklan said.
"Anything from mining, plants, search, basically anything that's dull, dirty or dangerous, it's better to use an unmanned aircraft than aircraft, because that way you're not putting a life in danger while you do the research that you're trying to do," he said.
The Aerostar A was $500,000 back when the Flight Test Center purchased it in 2005. The camera it carries was $500,000, too, Zaklan said. The crew spent three weeks in Israel to train on the devices.
Monday's flight is what Zaklan calls a "currency flight — checking out systems and making sure its good."
Before the launch, external pilot Joe Millette runs his fingers over each of the Aerostar's bolts, making sure they're secure. He will control the aircraft during takeoff and landing. Inside the trailer, internal pilot Tim Lower and payload operator Clifford Tyree check their computer systems, testing the aircraft's wing flaps and camera. Lower will control the aircraft in flight.
A crew member hops onto the all-terrain vehicle pulling the Aerostar and slowly guides it down the runway. At the other end, two men stretch a rope across the runway, held down by two weights. When the drone lands, a wire hook on the bottom of the craft will catch the rope, slowing the aircraft down.
Millette stands at the ready on the runway's edge as the drone's engine buzzes. A pick-up truck is parked nearby, makeshift shelter Millette can dive behind if he suddenly loses control of the aircraft.
With the sun rising behind the clouds left over from weekend rains, the drone's engines whir as it lifts off into the morning light.
©2014 the Las Cruces Sun-News (Las Cruces, N.M.) | <urn:uuid:289e7604-1e73-4971-8883-b92c664d034d> | CC-MAIN-2017-04 | http://www.govtech.com/products/New-Mexio-State-University-FAA-Pursue-Sought-After-Drone-Research.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00051-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959195 | 1,372 | 2.6875 | 3 |
Rutgers professors Vinod Ganapathy and Liviu Iftode presented their group’s findings today at the International Workshop on Mobile Computing Systems and Applications (HotMobile 2010) in Annapolis, Md. The group, comprising the two professors and three students, was able to install a rootkit on a smart phone operating system, providing them with the capability to eavesdrop on calls made from the devices.
In addition, the complex malware installed on the smart phone permitted the team to call up the phone’s location by tapping into its GPS application; they were also able to run software on the phone that rapidly drained device’s battery.
The two Rutgers researchers told Infosecurity that smart phones could be infected by a rootkit via the same methods used to compromise other traditional desktop and laptop systems. They said this is because many smart phones are nothing more than portable mini-computers, and these devices are becoming ever-more sophisticated.
“More complex means more vulnerabilities,” said Ganapathy.
The study did not discover flaws in a smart phone operating system, but it did provide proof that rootkits could be deployed on these devices. “We didn’t exploit any flaw in the operating system”, Ganapathy told Infosecurity. “We simply installed the rootkit on the operating system.” However, the researcher does believe that a rootkit could be installed on a smart phone much the same way as on a traditional computer, whether it is via a browser exploit or by visiting sites that load malicious code.
Both Ganpathy and Iftode stressed that vulnerabilities of different smart phone operating systems were not compared in this study. In lieu of commercial smart phones, the group employed devices primarily intended for use by software developers.
“Our intention is to make the [security] community aware of these threats”, said Ganapathy, adding that his group’s future objective will be to research potential defenses to these smart phone security threats, along with the ability to detect them. | <urn:uuid:17c27ffd-4235-4b51-b197-9ce80dd49228> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/rutgers-team-demonstrates-new-smart-phone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00051-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95025 | 424 | 2.65625 | 3 |
Definition: A series of moves of a chess knight that visits all squares on the board exactly once.
See also Hamiltonian cycle.
Note: The associated problem is to find such a series of moves. The problem can be generalized to an n × m rectangular chess board. Solutions may be found using backtracking.
Dan Thomasson's Knight's Tour page, with application to prime numbers, tour on a cube's face, etc.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 6 June 2005.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "knight's tour", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 6 June 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/knightstour.html | <urn:uuid:7e345ab6-09aa-4309-af05-1a6897892e90> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/knightstour.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00291-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.879094 | 208 | 2.859375 | 3 |
Computer terminology can be one of the hardest things to wrap your head around. It’s can all seem like mumbo jumbo, with made-up or adopted words. Some of the more confusing terminology comes from programs and software with harmful intent. Is the program infecting your computer a Trojan horse, worm or malware though? It can be hard to differentiate them sometimes.
Here’s an overview of the most commonly used terms for malicious software.
Malware – Malware is a portmanteau of malicious and software. When we, or any other IT professional, talks about malware, we are generally speaking about any software that is designed to steal information, disrupt operations or gain access to a computer or network. In tech, and indeed many news articles, malware is used as a general term. It can also be referred to in legal circles as a ‘computer contaminant’.
Virus – A virus is a malicious code that is spread from one computer to another. Computer viruses are usually introduced to a system by a user downloading and opening an infected file. They can also be spread by any removable media including CDs, DVDs, USB drives, SD cards, etc. If an infected file is put onto say a USB drive, which is then plugged into a new computer and the infected file is opened, the virus will be introduced into the system. For malicious software to be labeled as a virus, it has to be spread through human action, usually in the form of the user unknowingly opening an infected file.
Trojan horse – A Trojan horse takes its name from the Greek story where a wooden horse was used to hide Greek soldiers who secretly entered Troy. In a similar way this computer virus is a program that is disguised as a useful program that when installed will do damage to your system. The severity of a Trojan horse varies from annoying to completely destructive, and while they are malicious, they will not replicate or transfer to other computers. Many modern Trojan horse programs also contain a backdoor (more on that below).
Worm – Worms are similar to a virus. In fact, many experts consider a worm to be a subclass of virus. Worms, like viruses, spread from computer to computer; the major difference being that worms can spread themselves. Computer worms also have the ability to replicate on a host system and send these copies to other users. The most common way of transmission is through email, or via a company’s network, often causing computers to run slowly while using a ton of bandwidth, ultimately leading to a system crash.
Spyware – Spyware is a malware program that captures user activity and information without the user’s knowledge or consent. Some can even go so far as to capture every single keystroke a user makes – this is commonly known as a keylogger. Spyware infects computers either through user deception (i.e., “You’ve won 1,000,000,000 dollars” ads) or through exploits in programs. Some spyware has been known to redirect users to websites or even change computer settings.
Adware – The main purpose of adware is to show ads and gain the hacker ad revenue. These ads can be pop-ups, extra banners added to web browsers, or ads shown during the installation of third party software. While generally not a form of malicious software on its own, it can, and often does, come with spyware.
Rootkit – Rootkits are all about stealth. When installed they hide themselves from detection while allowing an unauthorized user to access and control your computer. Nine times out of ten, the unauthorized user will have full administrative access, which means that if they were malicious enough, they could really do some damage.
Backdoor – Backdoors are similar to Rootkits, in that they allow an unauthorized user to access your computer. Many Trojan horses install a backdoor for the hacker to access and remotely control your system.
Bug – Some users think that a bug in software is a form of malware, placed there by the developer to ruin the program or a system. In fact, bugs aren’t malware, they are an error or fault in the software’s code. It’s true that hackers have exploited bugs to infect systems, but the bug was the way in, not the malicious software itself.
In the early days of the Internet, viruses were often installed separately from Trojans and worms. With the rising complexity and effectiveness of malware prevention software, hackers have started to blend their attacks together, often using a combination of one or more types of malicious software to infect systems. These combination malware infections are normally complex, but have been incredibly effective.
While malware is usually malicious towards single users, a new form of warfare that utilizes malware has arisen. Cyberwarfare is rumored to have been used by governments and companies to steal information or completely disrupt a countries information networks. While most Cyberwarfare is conducted at the country or conglomerate organization level, it is only a matter of time before small to medium companies are targeted.
Tools like Microsoft’s Enhanced Mitigation Experience Toolkit (EMET), which is meant to fix bugs in Internet explorer, as well as strong anti-virus measures, timely virus scans and an efficient Internet use policy will go a long way toward preventing malware from infecting your computers. If you’re worried about the security of your computers and network, please give us a shout, we may have a solution for you. | <urn:uuid:78c09dc0-7bbe-4c3f-9555-1ffc75eb34b6> | CC-MAIN-2017-04 | https://www.apex.com/worm-virus-whats-difference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952561 | 1,123 | 3.546875 | 4 |
A new 60 teraflops supercomputer and 1 petabyte high speed storage system recently installed on the campus of the University of California at Santa Cruz will give astrophysicists at the college the computational and storage headroom they need to model the heavens like never before.
“Hyades,” as the new supercomputer system is known, is composed of 376 Intel Sandy Bridge processors, eight Nvidia K20 GPUs, three Intel Xeon Phi 5110P accelerators, and 13 terabytes of memory. The Dell computer is 10 times as powerful as “Pleiades,” the system it replaces, yet it occupies the same space in the UC High-Performance AstroComputing Center (UC-HiPACC) and uses the same amount of electricity, according to this Phys.org story.
The system, which is named after a cluster of stars that makes up the head of the bull in the constellation Taurus, will be used to model space events, such as exploding stars, black holes, magnetic fields, planet formation, the evolution of galaxies, and what occurred after the big bang.
Perhaps more impressive than Hyades is the high performance, 1,000 terabyte storage system it’s paired with. Based on Huawei’s Universal Distributed Storage (UDS) system, the ARM-based array is similar to a system Huawei installed at CERN to store data from the Large Hadron Collider, and is expected to become one of the largest repositories of astrophysical data outside of national facilities.
The massive storage array is needed because supercomputer simulations generate so much data that it must be analyzed after the fact, Joel Primack, Professor of Physics at UCSC and Director of UC-HiPACC, told Phys.org.
“The Huawei system will be used to store our astrophysics results, not only from Hyades but also from simulations that we run at the big national supercomputing facilities, such as at NASA Ames or Oak Ridge National Laboratory,” Primack said. “Those facilities can only store the results for a limited time, and they also restrict access to them. Now, with the Huawei storage system, we can put our results on a local server.”
Hyades cost $1.5 million, and was funded by a combination of a grant from the National Science Foundation (NSF) and contributions collected on campus. Dell and Intel also chipped in with discounts on hardware. The supercomputer will be used by the Theoretical Astrophysics at Santa Cruz (TASC) group, which includes about 20 faculty and more than 50 postdoctoral researchers and graduate students in four departments, including Applied Math and Statistics, Astronomy and Astrophysics, Earth and Planetary Sciences, and Physics.
The Huawei UDS storage system is on loan to the Center for Research in Storage Systems (CRSS) at the UCSC’s Baskin School of Engineering. The CRSS, which is a joint academic/industry research center supported by the NSF, will be studying the performance of the storage system. | <urn:uuid:c0989c2f-d1a7-4ef7-92a9-228bffa7b0fe> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/08/07/ucsc_ramps_up_astrophysics_work_with_new_hpc_gear/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92901 | 632 | 2.84375 | 3 |
Suicide is now the second leading cause of death for adolescents 15 to 19 years old; having passed up homicide.
One out of every 53 high school students has reported having made a suicide attempt that was serious enough to be treated by a doctor or a nurse (CDC, 2010a).
For each suicide death among young people, there may be as many as 100–200 suicide attempts (McIntosh, 2010)
The hard truth of these statistics is that suicide is preventable. There are warning signs of suicidal behavior. By connecting, caring, and communicating, potential student suicides can be prevented. Using technology can be a tool that supports this endeavor.
Creating community through technology involves providing a means of connectivity for students, teachers and staff. Promoting student blogging can allow students to share their feelings and thoughts with others in a non-confrontational way. Utilizing software such as Edmodo and Schoology can provide a means of two way communication, that are similar to that of social media messaging. Sharing out information on suicide via these apps, and giving students the ability to comment can open up dialog in a less threatening way than face to face conversation.
Connecting with students bereaved by suicide or who have lived experience of suicidal behavior is key to suicide prevention. Being there for young people who have become disconnected and guiding them to the supports they need can be a lifesaving activity.
Over 70% of teens utilize social media and instant messaging to communicate with their peers, according to Pew Research. This quite clearly indicates/ obviously means that young people are likely to be communicating online while in school.
Listening is a key ingredient to communication. Listening to students through online monitoring can detect, address, and prevent instances of self-harm and suicide. Conversations around bullying, abuse, depression, racial and religious hatred, and LGBT persecution can all signs of at risk situations.
Using online monitoring software, such as Impero Education Pro, schools can listen to student online activity to detect signs of risk. Monitoring software contains an expansive library of terms and phrases and uses algorithms to detect and flag keywords with concerning content. The software alerts safeguarding adults to this content, through a report of terms with definitions, captured screenshots, and short videos, along with severity levels to provide context of the situation. Then school staff can analyze activity to determine if it is a true threat. Issues can then be sent on to the appropriate next step, according to the school’s safety procedures.
The CDC suggests that removing social or material barriers to help-seeking, those contemplating or planning suicide may be more easily identified and treated and therefore less likely to engage in life-threatening behaviors.
Utilizing the chat features of a digital classroom management software can allow students to communicate with their teacher without anyone else hearing them. This can be helpful for students who want to reach out but are afraid of peers hearing them. Additionally, anonymous reporting tools, such as Impero’s Confide function, allow students to report concerns to a safeguarding adult assigned by the school, without fear of anyone else knowing.
All the technology intervention in the world will have no effect in helping prevent suicide in our youth without the final ingredient, care. As educators we need to continuously look out for students who may be struggling, and let them tell their story in their own way and at their own pace. Those who have been affected by suicide have much to teach us in this regard.
Suicide prevention through technology saves lives
Impero Education Pro monitoring software has been in place in classrooms across the UK for several years, and was recently updated for US terminology. Recently, a UK district reported that because of keyword monitoring, three suicides were prevented in one year. That is three lives of students that were saved by monitoring online activity on school devices. Think of how many lives could be saved across the country and around the world by connecting, caring, and communicating suicide prevention through technology.
IF YOU OR SOMEONE YOU KNOW IS HAVING SUICIDAL THOUGHTS, PLEASE CALL 1 (800) 273 TALK, OR GO TO THIS WEBSITE FOR CONFIDENTIAL HELP!
For more about detecting and addressing suicide through Impero Education Pro online monitoring software, read this recent blog post.
For a comprehensive explanation of behavior management software, Internet safety monitoring, and Impero Education Pro, you can download a whitepaper here.
Find out more about how Impero education network management software can help your school in the early prevention of student suicide by requesting free demos and trials on our website. Talk to our team of education experts by calling 877.883.4370, or by emailing Impero now to arrange a call back. | <urn:uuid:daaaa23f-8048-46d6-a7d1-251b9c3e14dd> | CC-MAIN-2017-04 | https://www.imperosoftware.com/suicide-prevention-through-technology-connect-communicate-care/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00282-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952961 | 963 | 3.875 | 4 |
New data capture and analytics techniques thrown up by the Internet of Things (IoT) are having a profound effect upon the way back office IT shop engineering is developing — the datacenter as we know it today is changing fast.
Innumerable streams of data
The growth of the IoT has, very obviously, helped continue to drive the need for datacenter construction and development. The innumerable streams of data produced by the proliferating variety of devices has to reside somewhere and most typically it has to reside ‘in the cloud’, as the expression goes.
Of course, there is no REAL cloud at all… it’s simply our way of describing the service-based delivery of application and data storage via an abstracted layer of computing power over an electronic connection that we usually call the Internet.
But that cloud is the cloud datacenter and the cloud datacenter houses the server racks that hold the IoT. So… after that brief history of IoT cloud, what is happening?
Close to the ‘edge’ computing
Cloud datacenters do a great job if looking after IoT data, but one of their limitations is that they are not completely virtual i.e. they have to physically exist somewhere on planet Earth and this it turns out is their Achilles heel. Not just for reasons of compliance and governance, very often we want data to reside ‘on the edge’ i.e. close to its original source. This is precisely why we talk about so-called Edge Computing.
So wouldn’t it be great if we could bring the datacenter closer to the sensors, devices and networks all producing IoT data?
It’s a small world after all
Micro datacenters (or microdatacenters, if you prefer) are compact prefabricated pieces of hardware and management software intelligence designed to live in a single unit. In terms of form and function, micro datacenters boast many of the components found inside a full blown datacentre but scaled down and in some areas rationalized for size.
Inside the micro datacenters ‘box’ we will find processing power, memory, Input/Output (I/O) intelligence, cooling systems, Uninterruptible Power Supply (UPS) hardware, security software and telecommunications power.
Manufacturers producing micro datacenters include Huawei, Zellabox, Dell, Canovate Group and the company that used to be Silicon Graphics but is now SGI.
Speaking to Internet of Business in reaction to this story, Dale Green, Digital Realty’s marketing director points out that all IoT initiatives require a tailored datacenter strategy that balances current needs with future growth and potential applications.
“As the Internet of Things localizes data streams down to the smallest wearable device, many organisations are growing increasingly concerned with latency and the emerging need to maintain datacenters within physical proximity to their customers. In situations such as this, data and computing infrastructure will need to reside in close proximity to users and the devices, while still being able to connect directly to trading partners and the digital supply chain,” he said.
Nutanix: web-scale 2.0 is the answer
Paul Phillips, regional senior director for Nutanix thinks that the explosion of consumer devices requiring cloud based services presents an insurmountable challenge to the traditional three-tier architecture that the datacenter has relied upon for the past two decades.
“Google, Facebook and the other new wave of cloud based companies recognized this early on and chose a different path with what is often referred to as web-scale technology,” said Phillips.
The Nutanix man explains that the ability to scale in small increments, seamlessly and with 100% uptime as the business demands can only be achieved using the same web scale techniques these companies pioneered.
“However, this technology needs to be consumable, easy to manage and simple to implement if it is to be pushed to the edge, and yet intelligent enough to be managed centrally as a single platform right out of the box. Nutanix’ Enterprise Cloud platform provides that simplicity and incremental scalability coupled with an embedded management plane purpose designed to stretch from the edge to the centre”
There are a whole plethora of firms with opinions to share on this subject. TIBCO has gone to pains to remind Internet of Business that its history has been defined by high performance, real time systems running at the core of the business.
Now, says the firm, as real time becomes the new norm, it’s the edge of an enterprise that is becoming the centre of attention. This is an agile space through which to reach new audiences, exemplified by the creation Project Flogo, the first design bot created specifically for IoT edge application development.
TIBCO CTO for EMEA Maurizio Canton states the following, “Going forward, big data, cloud, IoT and mobile technologies demand alternative integration beyond the confines of the traditional enterprise as we know today. Our Flogo bot enables us to connect more intelligently, bringing a surge of process power to even the smallest connected smart devices. This allows us to build more agile applications that are processed locally in to the smart devices addressing the need to be connected all the time, all of which accelerates the evolution of digital business.”
Follow Adrian Bridgwater on Twitter | <urn:uuid:55cc6d51-92fa-4a29-a3e9-6e2d027b0f8f> | CC-MAIN-2017-04 | https://internetofbusiness.com/iot-drives-trend-micro-data-centers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00190-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92788 | 1,108 | 2.59375 | 3 |
The US Government's decision to adopt the Advanced Encryption Standard (AES) for securing sensitive information will trigger a move from the current, ageing Data Encryption Standard (DES) in the private sector, according to users and analysts.
But it will not happen overnight. Technology standards bodies representing industries such as financial services and banking need to approve AES as well, and that will take time. Products such as wireless devices and virtual private networks that incorporate AES have also yet to be developed.
Companies using Triple DES technologies, which offer much stronger forms of encryption than DES, will have to wait until low-cost AES implementations become available before a migration to the new standard makes sense from a price perspective.
"AES will likely not replace more than 30% of DES operations before 2004," said John Pescatore, an analyst at Gartner.
US secretary of commerce Don Evans announced the approval of AES as the new Federal Information Processing Standard on 4 December. The formal approval makes it compulsory for all US Government agencies to use AES for encrypting information from 26 May.
AES is a 128-bit encryption algorithm based on a mathematical formula called Rijndael (pronounced "rhine doll") that was developed by cryptographers Joan Daemen at Proton World International and Vincent Rijmen at Katholieke Universiteit Leuven, both in Belgium.
Experts claim that the algorithm is small and fast, and that it would take 149 trillion years to crack a single 128-bit AES key using today's computers.
AES offers a more secure standard than the 56-bit DES algorithm, which was developed in the 1970s and has already been cracked. AES is considered even better than Triple DES, which is compatible with DES but uses a 112-bit encryption algorithm that is considered unbreakable using today's techniques.
In software, AES runs about six times as fast as Triple DES and is less chip-intensive.
The advantages of AES make it inevitable that private companies will start using it for encryption, said Paul Lamb, chief technology officer at Oil-Law Records, which provides regulatory and legal information to oil and gas companies. "[Companies will adopt AES] because of the perceived problems with DES and the greater sense of security with AES," he added.
"I would expect the adoption curve to be pretty steep," said Steve Lindstrom, an analyst at Hurwitz Group. Any concerns companies had about AES not being widely adopted have been put to rest with the Government's decision, he added.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. | <urn:uuid:c0309e1b-e385-4430-9016-0efe355d4730> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240043602/US-companies-to-embrace-encryption-standard | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00098-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960614 | 541 | 2.546875 | 3 |
During the winter, the canyon is full of skiers and snowboarders heading to the slopes, while cyclists, hikers, and campers keep it busy during the summer months. However, with only a single tower on the side of the canyon, wireless service was unreliable—making it difficult for travelers to coordinate planning, receive up-to-date weather and safety reports, or access 911 services. It also meant that emergency responders often didn’t have access to wireless communications while in the canyon. This was more than an inconvenience; it was a safety hazard. We planned, coordinated, and deployed a small cell solutions (SCS) network that enabled reliable voice and data services to the canyon and improved safety. | <urn:uuid:d32cf8a9-040d-43b7-b979-6d8c5313598c> | CC-MAIN-2017-04 | http://crowncastle.com/public-safety/public-safety-projects.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00218-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970885 | 144 | 2.5625 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.