Riot Games is putting artificial intelligence to work to improve the sportsmanship of millions of gamers
In this article...
- League of Legends creator Riot Games has been experimenting with AI and predictive analytics to find and stop online trolls and increase sportsmanship
- Since implementing their AI-assisted program, verbal abuse in games has dropped 40 percent
Millions of young online gamers today are accustomed to battling bad guys. But their biggest foes are often their fellow players. Many online gaming sites are rife with creepy bigotry, harassment and even death threats. It's a common issue for many online communities, too, including Twitter, YouTube and Facebook.
So how do you root out the rotten apples? Over the past several years, Riot Games, which produces the immensely popular League of Legends, has been experimenting with artificial intelligence (AI) and predictive analytics tools to find the online trolls and make its games more sportsmanlike. League players are helping spot toxic players and, as a community, deciding on appropriate responses. Their judgments are also analyzed by an in-house AI program that will eventually—largely on its own—identify, educate, reform and discipline players. The research Riot Games is doing into how large and diverse online communities can self-regulate could inform everything from building more collaborative teams based on personality types to understanding how our online identities reflect our real-world identities.
“We used to think that online gaming and toxic behavior went hand in hand.”
“We used to think that online gaming and toxic behavior went hand in hand," explains Jeffrey Lin, lead game designer of social systems at Riot Games. “But we now know that the vast majority of gamers find toxic behavior disgusting. We want to create a culture of sportsmanship that shows what good gaming looks like."
Achieving that goal presents big challenges. Riot Games has always maintained rules of conduct for players—forbidding use of racial slurs and cultural epithets, sexist comments, homophobia and deliberate teasing—but in the case of League, the volume of daily activity has made it all but impossible to enforce the rules through conventional tools and human efforts. More than 27 million people play at least one League game per day, with over 7.5 million online during peak hours.
That's one reason why Riot Games is putting serious brainpower behind the initiative. Lin, who holds a doctorate in cognitive neuroscience, works with two other Riot doctors—data science chief Renjie Li (Ph.D. in brain and cognitive sciences) and research manager Davin Pavlas (Ph.D. in human factors psychology)—to drive the program forward. Creating the tech foundation for this effort wasn't easy either. A giant data pipeline was needed to turn petabytes of anonymous user data into useful insights on how players behave. Lin's team also collaborated with artists and designers to make sure their work didn't interfere with the look or flow of the game.
Phase 1: The Tribunal
In the first phase of the program—the Tribunal, which launched in 2011—players would report fellow gamers they felt had broken the rules. Reports were fed into a public case log where other players (called “summoners") were assigned incidents to review. A case often included chat logs, game statistics and other details to help the reviewer decide whether the accused should be punished or pardoned. Lin says that most negative interactions come from otherwise well-behaved players who are simply having a bad day and take it out online.
The players use the context of the remark to vote on the degree of punishment for the case, which could range from a modest email “behavior alert" reminding the offender of a rules infraction and, hopefully, pointing them toward positive play, to a lengthy ban. After tens of millions of votes were cast, Riot put the Tribunal “in recess" in 2014 and began the pivot toward a new system that could be managed more on its own through AI.
“There are two ways to deal with any type of problem at this scale, and we support both in tandem," says Erik Roberts, head of communications at Riot. “First, put the tools in the hands of the community and second, build machine learning systems that leverage the scale of data—contributed from the community through reports—to combat the problem."
Phase 2: AI
Last year, Riot kicked off testing of its new “player reform" system, one that provides faster feedback and automates parts of the process. It specifically targets verbal harassment, with the system capable of emailing players “reform cards" that mark evidence of negative behavior. Lin's team hand-reviewed the first few thousand cases to make sure everything was going well and the results were astounding: Verbal abuse has dropped 40 percent since the Tribunal and the new AI-assisted evaluation program took over.
“Since implementing their AI-assisted program, verbal abuse in games has dropped 40%.”
Lin believes that more game developers will follow this model—linking cognitive research to better game play and hiring cross-discipline teams dedicated to that purpose. “By showing toxic players peer feedback and promoting a discussion among the community, players reformed," Lin says. “We showed that with the right tools we could change the culture."
To learn how Big Data, automation and artificial intelligence will shape the future, download the HPE white paper “Big Data in 2016.”
A group of researchers from the Institute of Telecommunications at the Warsaw University of Technology has devised a relatively simple way of hiding information within VoIP packets exchanged during a phone conversation.
They called the method TranSteg, and they have proved its effectiveness by creating a proof-of-concept implementation that allowed them to send 2.2MB (in each direction) during a 9-minute call.
IP telephony allows users to make phone calls through data networks that use an IP protocol. The actual conversation consists of two audio streams, and the Real-Time Transport Protocol (RTP) is used to transport the voice data required for the communication to succeed.
But, RTP can transport different kinds of data, and the TranSteg method takes advantage of this fact.
“Typically, in steganographic communication it is advised for covert data to be compressed in order to limit its size. In TranSteg it is the overt data that is compressed to make space for the steganogram,” explain the researchers. “The main innovation of TranSteg is to, for a chosen voice stream, find a codec that will result in a similar voice quality but smaller voice payload size than the originally selected.”
In fact, this same approach can – in theory – be successfully used with video streaming and other services where it is possible to compress the overt data without noticeably degrading its quality.
To send data undetected through VoIP communication, both the sending and the receiving machine must be configured in advance to know that packets marked as carrying payload encoded with one codec are actually carrying data encoded with another codec, one that compresses the voice data more efficiently and leaves space for the steganographic message.
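The mechanics can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the "codec" here simply drops every other byte to simulate transcoding a 160-byte G.711 payload down to a smaller one, and both endpoints are assumed to have agreed on the substitute codec in advance.

```python
PAYLOAD_SIZE = 160  # bytes of G.711 audio per 20 ms RTP packet (typical)

def transcode(voice: bytes) -> bytes:
    """Stand-in for a real low-bitrate codec: keep every second byte."""
    return voice[::2]

def embed(voice: bytes, covert: bytes) -> bytes:
    """Re-encode the overt voice data, then hide covert bytes in the freed space."""
    compressed = transcode(voice)
    free = PAYLOAD_SIZE - len(compressed)
    if len(covert) > free:
        raise ValueError("covert message does not fit in the freed space")
    # Pad so the stego payload keeps exactly the original payload size; the RTP
    # header still advertises the original codec, so packet sizes look unchanged.
    return compressed + covert + bytes(free - len(covert))

def extract(stego: bytes, covert_len: int) -> bytes:
    """The receiver knows the substitute codec, so it knows where covert data starts."""
    compressed_len = PAYLOAD_SIZE // 2
    return stego[compressed_len:compressed_len + covert_len]
```

In the real TranSteg scheme the substitute is an actual voice codec chosen for similar perceived quality at a lower bitrate, and the receiver also decodes the compressed voice stream so the call continues normally.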
The method is efficient in sending and receiving the data, but in order to be considered good enough to use, it must be undetectable by outside observers.
According to the paper, the former can be accomplished whether VoIP phones or intermediate network nodes are used by one or both participants in the conversation, but the latter only if two VoIP phones are the sending and receiving nodes, since the format of the voice payloads does not change as they traverse the network.
CDC Issues Pandemic Systems Plan
By Doug Bartholomew | Posted 2008-03-11
In the face of pushback from hospitals and physicians, the CDC has revamped its ambitious BioSense network, designed to provide early warning of a potential flu outbreak. Now the agency is offering grants to promote the sharing of data among state health departments, while building new systems to alert physicians in the event of a pandemic.
By Doug Bartholomew and Chris Gonsalves
Are the nation’s systems ready to support an all-out fight against a pandemic influenza such as the avian flu?
The just-released Influenza Pandemic Plan from the Centers for Disease Control and Prevention (CDC) lists 13 different systems for identifying, alerting, responding to and tracking a potential pandemic flu outbreak. Most of these systems—some are completed, while others are under development—are designed to perform specific tasks, but the CDC continues to struggle with its centerpiece, the fledgling $100 million BioSense network.
If avian flu were to suddenly make the biological leap to a form that could be transmitted among humans,
Nor is the avian flu, known as H5N1, the only potential virus that could develop into a pandemic. During the 20th century, there were three influenza pandemics. The so-called Spanish influenza pandemic of 1918 killed more than half a million people in the United States alone.
The possibility of an influenza pandemic—defined by the CDC as a novel virus that is readily transmissible and causes disease in humans—is so serious that Uncle Sam is setting up a host of systems to detect and support a nationwide response should an outbreak occur. Since 2002, the CDC has spent more than $5 billion to improve public health preparedness and response.
Early Warning System
The CDC’s primary nationwide hospital network-based defensive system, the BioSense Network (see Baseline, March 2006), was designed to provide early warning of a potential avian flu or other massive influenza outbreak. Originally conceived as a nationwide alarm system that would enlist thousands of hospitals on a network that would automatically “sniff out” every flu-like diagnosis, BioSense was to provide CDC with what, in effect, would have been a national stethoscope capable of gauging the health of the population. Urban, statewide, regional and even national outbreaks of any illness displaying flu-like symptoms would have been detected very early, enabling local, state and federal health agencies to respond quickly.
BioSense was designed to sit atop participating hospitals’ existing systems, gathering and analyzing their data in real time. The network was intended to provide a constantly refreshed flow—updated every 15 minutes—of patient information from the field. Using this information, CDC epidemiologists would be able to immediately detect the early signs of an outbreak of any flu-like disease. The hope was that by providing early detection, BioSense would help the CDC focus its resources on controlling an outbreak.
Unfortunately, in 2006 and 2007, BioSense encountered significant resistance from wary physicians and hospital administrators around the country. “Our approach turned out to be much more difficult than anticipated,” says Dr. Leslie Lenert, director of the CDC’s
Part of the problem was that BioSense required health care facilities to recode patient and medical records so the CDC’s custom-built software could monitor the hospital’s network and identify relevant data about patients. Many hospitals were unwilling to take on such a complicated technology project, given their limited IT staffs and resources.
“We found that the process was very human-dependent,” Dr. Lenert explains. “Getting a public health technician inside a hospital requires close collaboration with the hospital’s
In addition to the technological hurdle of having to standardize patient data, many physicians and state health department epidemiologists expressed concern over the federal government’s move to usurp what they saw as their role in protecting the public’s health. Part of the problem, according to Dr. Lenert, resulted from the federal government ignoring state, regional and local health care agencies. In addition, some hospital administrators were wary of transmitting local health care data electronically to the federal government.
“BioSense was very controversial, and there was a question about its usefulness,” says Dr. Erica Pan, director of the Bioterrorism and Infectious Disease Emergencies Unit at the San Francisco Department of Public Health. The medical community was skeptical of the need for such a system, believing that it is their responsibility, along with community and state health department officials, to identify and act on a disease outbreak. “You really need people on the local level to ask the right epidemiological questions,” she explains.
When I was a young programmer, there existed a group of people known as Operators.... They were responsible for keeping the OS updated, monitoring the system for errors, printing
Various examples of using SQL on IBM system i - DB2 tables. This section includes both stand alone examples and embedded within SQLRPGLE & CLLE using QSHELL.
Many examples/tips to make programming on the IBM system i much more enjoyable. These tips written in RPGLE, SQLRPGLE, CLLE are free to use and modify as you see fit.
Data queues are a type of system object (type *DTAQ) that you can create and maintain using OS/400 commands and APIs. They provide a means of fast asynchronous
The integrated file system is a part of OS/400© that lets you support stream input/output and storage management similar to personal computer and UNIX© operating systems
This section provides introductory, conceptual, and guidance information about how to use OS/400 application programming interfaces (APIs) with your application programs.
Subfiles are specified in DDS for a display file to allow you to handle multiple records of the same type on the display. A subfile is a group of records read/written to a display.
Qshell is a command environment based on POSIX and X/Open standards made up of the shell interpreter (or qsh) and QSHELL utilities (or commands)
This example program is a very basic project tracking application. I think the most useful part of this example is
We have been collecting your city and zipcodes and are now ready to start posting the compiled lists. Let us
The Sort (QLGSORT) API provides a generalized sort function that can be directly called by any application program. This API
One of the more powerful tools for iSeries programmers is the user space. Many IBM list APIs use a user
Add or subtract a duration in years, months, days, hours, minutes, seconds, or microseconds Determine the duration between two dates,
Simple example of a program displaying a window showing job information of the person locking the record the program is
Criminals, by their very nature, can’t be trusted. It may seem like a bargain to be able to get pirated software cheap, or even free, but when you acquire software illegally you also open yourself up to other risks and security concerns. The cybercriminals that distribute pirated software aren’t just acting as Robin Hood-esque philanthropists. There are ulterior, insidious motives as well, and that’s why malware in pirated software is costing the world billions of dollars.
Microsoft worked with IDC and the National University of Singapore to investigate the prevalence of malicious code in pirated software, and to explore the link between that malware and organized cybercrime. The study was conducted on 203 computers, spanning 11 nations (Brazil, China, India, Indonesia, Mexico, Russia, South Korea, Thailand, Turkey, Ukraine, and the United States). The study also includes results of a survey of 951 consumers and 450 IT professionals across 15 nations, and a survey of 302 government officials from six countries.
Researchers determined that there is a 33 percent chance of encountering malware when installing pirated software or purchasing a PC that includes pre-installed pirated software. The forensic analysis of the 203 computers in this study by the National University of Singapore found that 61 percent of the machines that had pirated software installed were also infected by malware.
David Finn, Associate General Counsel and Executive Director of the Microsoft Cybercrime Center, stressed in a blog post that these statistics should not come as a shock. “After all, cybercriminals aim to profit from any security lapse they can find. And through pirated software, they’ve found another way to introduce malware into computer networks – breaking in so they can grab whatever they want: your identity, your passwords and your money.”
IDC estimates that consumers will spend a combined $25 billion, and waste 1.2 billion hours dealing with security issues resulting from malware on pirated software just in 2014. 60 percent of the consumers surveyed listed loss of data or personal information in the top three biggest fears, followed by 51 percent concerned with unauthorized access or online fraud. In spite of these concerns, 43 percent of the consumers surveyed don’t routinely install security updates to keep their PCs protected.
For enterprises, that figure jumps to almost half a trillion dollars. IDC estimates malware in pirated software will cost enterprises $127 billion to deal with security issues, and an additional $364 billion addressing data breaches. That’s half a trillion dollars that could be put to much better use if the risks associated with malware in pirated software could be minimized or completely eradicated.
The Microsoft Digital Crimes Unit (DCU) is spotlighting the risks associated with pirated software as a part of its annual Play It Safe Day. To help you recognize and avoid pirated software, Microsoft provides tips and resources on the HowToTell.com website.
South Africa got its first solar electricity-producing power plant in 2012. It has set ambitious renewable-energy targets, not just for electrical power generation but also to meet growing energy demand. The country’s installed solar power capacity is expected to reach 8.4 GW by 2030, up from 922 MW in 2014. South Africa added 800 MW in 2014 alone (versus over 122 MW in 2013), entering the top 10 in added-capacity rankings. The country is also expected to have installed over four million solar panels, with plans to set up 1.6 million more.
Annual global solar power production is estimated to reach 500 GW by 2020, up from 40.134 GW in 2014, making solar one of the fastest-growing power markets, with South Africa at the forefront. The South African solar photovoltaic market is estimated to reach $XX billion in 2020.
With fossil fuel prices fluctuating continuously and disasters like Fukushima and Chernobyl raising serious questions about nuclear power, renewable sources of energy are the answer to the world’s growing need for power. Hydro power raises environmental concerns of its own, so apart from water, the other renewable energy source available in abundance is solar. The Earth continuously receives about 174 petawatts of incoming solar radiation, making the sun the largest energy source available to us. Whereas resources like oil, gas, water and coal require many steps and much effort to produce electricity, solar farms can be established relatively easily, and the electricity they generate is fed directly into the grid.
Falling costs; government policies and private partnerships; downstream innovation and expansion; availability of excellent solar irradiation levels, and various incentive schemes for the use of renewable energy for power generation are driving the solar power market at an exponential rate.
On the flip side, high initial investment, an intermittent energy source, and stringent requirements are restraining market growth.
In recent years, a lot of research has gone into making solar panel production easier and cheaper, and into making panels smaller, sleeker and more customer friendly. Much effort is also being put into increasing the efficiency of solar panels, which has historically been meager. Techniques like nanocrystalline solar cells, thin-film processing, metamorphic multijunction solar cells, polymer processing and many more will help shape the future of this industry.
This report comprehensively analyzes the South Africa solar photovoltaic market by segmenting it based on materials (crystalline silicon, thin film, multijunction cell, adaptive cell, nanocrystalline, and others). Estimates in each segment are provided for the next five years. Key drivers and restraints affecting the growth of this market, along with recent trends and developments and government policies and regulations, are discussed in detail. Value chain analysis and Porter's five forces analysis are also covered. The study also elucidates the competitive landscape and key market players.
Up To: Contents
See Also: Configuration Overview, Object Definitions
What Are Objects?
Objects are all the elements that are involved in the monitoring and notification logic. Types of objects include:
More information on what objects are and how they relate to each other can be found below.
Where Are Objects Defined?
Objects can be defined in one or more configuration files and/or directories that you specify using the cfg_file and/or cfg_dir directives in the main configuration file.
Tip: When you follow the quickstart installation guide, several sample object configuration files are placed in /usr/local/nagios/etc/objects/. You can use these sample files to see how object inheritance works and learn how to define your own object definitions.
How Are Objects Defined?
Objects are defined in a flexible template format, which can make it much easier to manage your Nagios configuration in the long term. Basic information on how to define objects in your configuration files can be found here.
Once you get familiar with the basics of how to define objects, you should read up on object inheritance, as it will make your configuration more robust for the future. Seasoned users can exploit some advanced features of object definitions as described in the documentation on object tricks.
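As a hedged illustration of the template format, the sketch below defines a reusable host template and a host that inherits from it; the template name, host name and address are hypothetical, and the directives shown are a small subset of what a real definition would carry:

```
# Template: holds shared defaults, but is never monitored itself
define host{
        name                    generic-host    ; template name (hypothetical)
        check_interval          5
        notification_period     24x7
        register                0               ; 0 = template only, do not register
        }

define host{
        use                     generic-host    ; inherit the defaults above
        host_name               web-server-01   ; hypothetical host
        address                 192.168.1.10
        }
```

Changing a shared default then requires editing only the template, not every host that uses it.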
Some of the main object types are explained in greater detail below...
Hosts are one of the central objects in the monitoring logic. Important attributes of hosts are as follows:
Host Groups are groups of one or more hosts. Host groups can make it easier to (1) view the status of related hosts in the Nagios web interface and (2) simplify your configuration through the use of object tricks.
Services are one of the central objects in the monitoring logic. Services are associated with hosts and can be:
Service Groups are groups of one or more services. Service groups can make it easier to (1) view the status of related services in the Nagios web interface and (2) simplify your configuration through the use of object tricks.
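One hedged sketch of how groups can simplify configuration (the member, template and command names below are hypothetical) is to attach a single service definition to an entire host group instead of repeating it for each host:

```
define hostgroup{
        hostgroup_name  web-servers
        alias           Web Servers
        members         web-server-01,web-server-02
        }

define service{
        use                     generic-service   ; hypothetical service template
        hostgroup_name          web-servers       ; applied to every member host
        service_description     HTTP
        check_command           check_http
        }
```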
Contacts are people involved in the notification process:
Contact Groups are groups of one or more contacts. Contact groups can make it easier to define all the people who get notified when certain host or service problems occur.
Timeperiods are used to control:
Information on how timeperiods work can be found here.
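As an illustrative sketch (the period name is made up), a timeperiod definition lists the weekdays and time ranges during which checks or notifications are allowed:

```
define timeperiod{
        timeperiod_name workhours
        alias           Standard Work Hours
        monday          09:00-17:00
        tuesday         09:00-17:00
        wednesday       09:00-17:00
        thursday        09:00-17:00
        friday          09:00-17:00
        }
```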
Commands are used to tell Nagios what programs, scripts, etc. it should execute to perform:
Are Macs More Secure?
In October, Apple announced a small percentage of recently shipped iPods contained a virus that would affect computers running the Microsoft Windows operating system. Apple deflected blame, saying the virus originated from a factory computer running Windows and that it was a Windows-based virus.
In an announcement on its Web site, Apple stated, “As you might imagine, we are upset at Windows for not being more hardy against such viruses and even more upset with ourselves for not catching it.”
Of course, numerous tech pundits and computer enthusiasts immediately cried foul, and not much in the way of understanding was engendered on either end of the operating system spectrum. But the incident illuminated a debate that’s existed since the dawn of the personal computer age: Are Macs really more secure than Windows?
Apple users often claim their computers are absolutely secure, resistant to all forms of viruses and malware, while PCs running Windows are extremely susceptible to corruption. Apple’s own marketing even makes this assertion.
In one TV ad from its popular “Get a Mac” marketing campaign, starring author and comedian John Hodgman as a PC and actor Justin Long as a Mac, “PC” behaves as if he’s sick, indicating that he’s caught a virus. “Mac” offers sympathy as PC warns him to stay away because “last year there were over 114,000 known viruses for PCs.”
Mac calmly responds, “For PCs, but not Macs.” PC then falls over — he crashed.
The ad is more or less in line with conventional wisdom and perhaps even factual accuracy, considering that more than 114,000 viruses for PCs were widely reported as identified by the end of 2005. But it’s misleading to suggest that the disproportionate number of viruses and malware attacking Windows versus Macs can be entirely attributed to the integrity of the respective operating systems.
The difference is more one of proportion. The Web site www.netapplications.com offers statistics on Microsoft’s and Apple’s market shares. From August 2005 to July 2006, on average, Apple had between 3 percent and 4 percent of that share, while Microsoft had about 95 percent. Some might place the split closer to 90 percent Microsoft, 10 percent Apple, but the difference is clear: If you were a cybercriminal, which operating system would you pick as a target — the one with 90 percent of the market or the one with 10 percent? In this case, viruses and malware become the sincerest form of flattery.
Meanwhile, stories of Macs being corrupted or found to be vulnerable are becoming more and more frequent. In November, the U.S. Computer Emergency Response Team (US-CERT) reported various security failures on the part of Apple’s Mac OS X.
According to the team, OS X “does not properly clean the environment when executing commands, which allows local users to gain privileges via unspecified vectors; does not properly search certificate revocation lists (CRL), which allows remote attackers to access systems by using revoked certificates; allows remote attackers to execute arbitrary code via unspecified vectors; (and) does not authenticate the user before installing certain software requiring system privileges,” among other problems.
In the wake of this vulnerability summary, Apple released a security update to address these problems. Meanwhile, antivirus researchers have begun to spot adware and spyware programs capable of launching browser windows on OS X. It seems that the Mac’s reputation of impenetrability is beginning to crack, perhaps owing to the continuing success of Apple itself.
Domain registration is the process by which a company or individual can secure a website domain, such as www.yoursite.com. Once you have completed domain registration, the domain becomes yours for the period of the contract, usually one year. Before registration expires it must be renewed, or the domain reverts back to being available to the general public.
The Internet Corporation for Assigned Names and Numbers (ICANN) manages the international Domain Name Server (DNS) database. ICANN ensures that all registered names are unique and map properly to a unique Internet Protocol (IP) address. The IP address is the numerical address of the website that tells other computers on the Internet where to find the server host and domain.
Before a domain registration can be approved, the new name must be checked against existing names in the DNS database. If the name is not already taken, it is available for domain registration.
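The check-then-register flow can be modeled with a toy registry lookup. This sketch is purely illustrative: the registry here is just an in-memory set, and the domain names are made up, but the logic mirrors the uniqueness check described above.

```python
# Stand-in for the registry database of already-registered names (hypothetical)
registered = {"example.com", "yoursite.com"}

def is_available(domain: str) -> bool:
    """A requested name must be unique before registration can be approved."""
    return domain.lower() not in registered

def register(domain: str) -> bool:
    """Register the domain if it is free; return False if the name is taken."""
    if not is_available(domain):
        return False
    registered.add(domain.lower())
    return True
```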
Domain Name Registration & Renewal:
- New Domain Name Registration
- Transfer Your Domain Registration
Once your domain name is registered and your web content is ready, a web server must host the domain.
Whether we live in a so-called “developed” or an “emerging” nation, when the lights go out, we all revert to the Dark Ages. Recent blackouts in India and parts of the United States are fresh reminders that grid infrastructure revitalization and improvement are truly global challenges.
How will we meet those challenges? According to market analysts at Rockville, Maryland-based research publisher SBI Energy, responses to grid instability will include smart grid technologies as cost-effective alternatives to additional power plants, or to transmission and distribution (T&D) infrastructure expansion and retrofits.
Voltage disruptions, blackouts and brownouts are perennial problems because contemporary grid systems remain inherently disjointed. In India, energy demand is outpacing available generation. Transmission line congestion and regional bottlenecks in America’s patchwork grid have been implicated in grid disruptions; some major blackouts have been sourced to the malfunction or loss of one substation or switchyard.
Smart grid strategies, say the analysts, do not address grid instability through system redundancies or improvements to the physical integrity of a legacy grid. Rather, they enable the dynamic deployment of system resources—for example, load shaving, additional generation (power plants, storage) and voltage regulation—and provide added system flexibility through real-time, two-way communications.
A case in point: The loss of a transmission line or the malfunction of a transmission-to-distribution substation could be addressed by a microgrid that effectively isolates or "islands" a distribution network. A microgrid can manage its own generators and consumption loads, independent of an unstable or downed centralized grid.
"This ability [of microgrids] to improve [the] energy security and reliability [of] the centralized grid has caught the attention of the market," notes SBI Energy analyst Bernie Galing. Galing appraises the market for microgrid projects at 5 percent of the total smart grid sector.
What’s more, via smart meters, individual ratepayers can monitor their real-time electricity usage and current rates. Smart meters also provide utilities with tremendous volumes of data for analytics that can later be used for programs that incentivize lower electricity usage during peak loads, as well as for demand response (DR) programs that use two-way communications to directly cut or reduce individual loads in a household, business or factory.
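As an illustration only (the load names and capacity figure below are invented), the curtailment decision a DR program makes during a peak might look like this sketch:

```python
def loads_to_shed(loads: dict[str, float], capacity_kw: float) -> list[str]:
    """Return controllable loads to curtail, largest first, until
    total demand fits under the available capacity."""
    total = sum(loads.values())
    shed = []
    for name, kw in sorted(loads.items(), key=lambda item: item[1], reverse=True):
        if total <= capacity_kw:
            break
        shed.append(name)
        total -= kw
    return shed

# Hypothetical household loads in kilowatts:
household = {"hvac": 3.5, "water_heater": 4.5, "ev_charger": 7.2, "lighting": 0.4}
print(loads_to_shed(household, capacity_kw=9.0))  # ['ev_charger']
```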
Deployment of smart meters in India has been motivated by a desire to reduce electricity theft, but these smart grid components also form the foundation for the long-term development of DR programs that could prevent blackouts during exceptional peak periods.
In the end, smart grid technologies represent a value proposition. In the United States alone, the cost of service interruptions is estimated to reach $71 billion by 2020 (American Society of Civil Engineers, 2012). The U.S. smart grid market by that year will represent less than 10 percent of that cost and, the analysts believe, an excellent investment over more costly grid infrastructure replacements or expansions.
The SBI Energy report, "World Smart Grid, 2nd Edition," presents an in-depth analysis of the development, applications, products, manufacturers, and trends in the worldwide development of the smart grid.
Edited by Rachel Ramsey | <urn:uuid:e0e4bd48-d2f4-4014-9b6e-4ddf89dfbe6a> | CC-MAIN-2017-04 | http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/08/08/302526-smart-grids-come-light-as-solution-blackouts.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923117 | 800 | 2.609375 | 3 |
A robocall is defined by the FCC as a call made either with an automatic telephone dialing system (“autodialer”) or with a prerecorded or artificial voice. Robocalls are often associated with product marketing or political campaigns, and unwanted calls have been a source of consumer frustration.
To help consumers block unwanted robocalls, telephone service providers (including cable operators) have formed a Robocall Strike Force to develop comprehensive solutions to prevent, detect, and filter unwanted robocalls. Cable operators on the strike force include Comcast, Charter and Cox, which are each providing their customers with specific guidance and tools about how to block unwanted robocalls.
The Federal Trade Commission (FTC) offers Americans helpful resources to deal with robocalls. To prevent unwanted calls to both landline and wireless phones, consumers can register their numbers with the FTC’s Do Not Call Registry.
[Infographic: cable digital voice subscribers; robocalls made July–September 2016; active registrants on the National Do Not Call Registry]
Anybody who drives one of Ford’s recent vehicles spends a little less money on gasoline thanks to HPC work the carmaker undertook with Oak Ridge National Laboratory, where more than one million processor hours were spent getting a handle on the complex fluid dynamics governing airflow under the hood.
Carmakers around the world are spending billions of dollars to find ways to comply with new fuel efficiency mandates of the U.S. government. The manufacturers are turning over every rock to find any performance gains, from low rolling resistance tires to hybrid drivetrains.
One area of exploration that may slip by the public’s eye is the flow of air through the front grill of a car into the engine bay, which has a significant impact on the car’s fuel consumption and overall performance. However, understanding how to build for maximum cooling efficiency while simultaneously minimizing front-end drag is a very difficult task because each of the many components within the compartment can alter the airflow.
“Any change in the size and position of just one component can have a significant impact on the computational model as a whole,” said Burkhard Hupertz, the thermal and aerosystems computer-aided engineering (CAE) supervisor at Ford of Europe, in a recent story on the Oak Ridge Leadership Computing Facility (OLCF) website. “Making one more efficient could result in the loss of cooling or increased drag for another.”
In the past, carmakers would spend a large amount of time using the trial and error method to come up with a suitable design. Several years ago, Ford decided to speed the design process and build a prototype model of airflow that could be applied to many cars across its lineup. However, this approach would require running thousands of simulations to find the optimal design parameters. This is what led the team, led by Hupertz and senior HPC technical specialist and lead investigator Alex Akkerman, to ORNL and the Jaguar supercomputer.
The first step in the process was porting Ford's computational fluid dynamics code, called Underhood 3D (UH3D), to Jaguar. After scaling UH3D to run on Jaguar (which has since been transformed into Titan), the Ford team used approximately 1 million processor hours to test 11 geometric and non-geometric parameters (such as cooling fan speed) against four different operating conditions, for a total of 1,600 simulation cases, according to the OLCF story.
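The precise parameters and levels Ford varied aren't listed here, so the sketch below is hypothetical: it simply shows how a design sweep like this enumerates simulation cases as the Cartesian product of parameter levels and operating conditions.

```python
from itertools import product

# Hypothetical design parameters; the real study varied 11 such
# parameters, but three are enough to show the enumeration.
parameters = {
    "grille_opening_pct": [40, 70, 100],
    "fan_speed_rpm": [1500, 2500],
    "shutter_angle_deg": [0, 45],
}
operating_conditions = ["idle", "city", "highway", "towing"]

cases = [
    dict(zip(parameters, values), condition=cond)
    for values in product(*parameters.values())
    for cond in operating_conditions
]
print(len(cases))  # 3 * 2 * 2 * 4 = 48 simulation cases
```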
The work on Jaguar has enabled Ford to find a design that strikes a happy medium between maximizing cooling airflow and minimizing front-end drag. “Access to Jaguar enabled us to develop a new methodology that allowed Ford, for the first time, to conduct engine bay analysis with the required number of design variables and operating conditions for a true design optimization,” Akkerman told OLCF.
The results are evident in the vehicles that Ford has put on the road the last few years. While Ford has come under fire recently for overstating the mileage of some of its vehicles–specifically the C-MAX Hybrid, for which Ford this month lowered mileage estimates–the carmaker has delivered notable fuel efficiency gains across the breadth of its lineup. At least some of those gains can be attributed to the research done on Jaguar. | <urn:uuid:0e09a5de-7511-4a2d-8003-f2959f17565f> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/08/19/ford_taps_ornl_to_boost_vehicle_airflow_fuel_efficiency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949182 | 664 | 3.015625 | 3 |
PHP is a server-side scripting language often used by Web programmers to dynamically create HTML pages.
Vulnerabilities in code used to upload files to the server from a user's PC via a Web page could allow a hacker to take temporary control of a web server running PHP.
It could also interrupt normal operations of the Web server, warned the US-based Computer Emergency Response Team Coordination Center (CERT/CC).
PHP can be installed on Web servers such as Apache, IIS, Caudium, iPlanet and OmniHTTPd, CERT said.
According to a warning posted by Stefan Esser of the German Web-design and security company e-matters, there are several flaws in the "php_mime_split" function used by PHP to handle multipart/form-data POST requests.
CERT has recommended that users upgrade to the newest version, PHP 4.1.2, or apply patches to older versions.
PHP is an open source project maintained by the PHP Group. Patches are available from the PHP support site, www.php.net.
PHP is included with many distributions of the Linux open-source operating system. Linux developers Red Hat and MandrakeSoft have been made aware of the holes in PHP and are working to eradicate the problems as well as offer patches for their customers, CERT said.
Full details are contained in the CERT-CC advisory at www.cert.org/advisories/CA-2002-05.html. | <urn:uuid:c13f8227-7acf-463e-be40-f280b107f148> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240044450/CERT-warns-of-PHP-security-holes | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00404-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926427 | 340 | 2.734375 | 3 |
Last week marked an exciting new first – ransomware on a Mac, disclosed by Palo Alto Networks, which had seen it at a client. What’s simultaneously encouraging and disappointing is that it could have been prevented, without relying on detection, using a highly recommended best practice – application whitelisting (yes, it is possible to do whitelisting today – we do it with lots of our clients, as do other providers).
Just 10 years ago, malware attacks came straight at us as scripts or executables sent on USB devices, through email or linked on websites. We compensated by securing removable devices, filtering email attachments and content and securing web browsers as much as we could without entirely disrupting productivity.
Today, attackers resort to more sophisticated tactics to penetrate applications that are exposed to content coming through the internet. Meta-analysis of today’s malware threats reveals that attacks tend to occur in multiple stages. The first stage often attacks in-memory vulnerabilities of web browsers, browser plugins and the applications that open business documents. The intent of the first stage is often to download and execute a secondary payload – typically a script or executable. This attack pattern is necessary because it’s increasingly difficult to “stay in memory” while performing sophisticated lateral attacks or data theft.
So we’re right back at where we were 10 years ago – stopping malware scripts or binaries from running!
Why do we continue to fail at this, when it can be stopped with an effective application whitelisting implementation? Whitelisting has been notoriously difficult to operationalize in the enterprise due to the vast range of software in use and continual change. How do we know which applications to trust, and which to stop?
To address this issue of trust, the concept of code-signing certificates was introduced. Microsoft Authenticode is the most prevalent, and Apple has their own. In theory, code signing is intended to prove that the application you’re installing or running came from a provider verified as a real entity accountable for the product. Current cryptographic signatures are computationally infeasible to forge.
The provider has a “secret” (a very long, complex password, AKA a 256-bit cryptographic key) used to sign the application using algorithms everyone knows, and anyone can check the result using a verification algorithm. It’s OK to know the algorithms because it’s the secret that makes it secure. This is much like pressing your royal seal into the wax seal of an envelope in medieval times. It was out of reach to recreate the royal seal then, and today it’s not practical to figure out the key used to sign software. That’s why people steal the code signing certificates instead. If attackers steal a code signing certificate, they can make their malware appear as though a valid software vendor produced it. Trusting any signed software would be a mistake. Trusting any software signed by our known vendors could also be a mistake.
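As a simplified illustration of “a known algorithm plus a secret key”: the sketch below uses HMAC, a symmetric construction, purely to keep the example short. Real code signing such as Authenticode uses asymmetric (public/private key) signatures, so verifiers never hold the signing secret.

```python
import hashlib
import hmac

# Illustrative only: in real code signing, only the publisher holds the
# private key, and verification uses the matching public key.
SIGNING_KEY = b"a-very-long-secret-held-only-by-the-publisher"

def sign(binary: bytes) -> str:
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).hexdigest()

def verify(binary: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(binary), signature)

app = b"\x7fELF...application bytes..."
sig = sign(app)
print(verify(app, sig))               # True: untampered binary
print(verify(app + b"malware", sig))  # False: modified binary fails
```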
The KeRanger ransomware relied on a stolen certificate to bypass Apple’s Gatekeeper protections. Once discovered, Apple revoked the certificate, which meant it was no longer trusted and the malware would not be allowed to run on Apple OS X. Good thing this malware wasn’t designed to hide in the background and quietly steal your corporate data – it might not have been discovered so easily! Of course, our malware-writing friends can steal another code signing certificate and continue on with the same style of attack.
The problem is that we’re verifying applications based on a single secret. In the security industry, we’re actively looking to replace passwords (something that can be stolen) as the sole means of authenticating users because it’s too easily circumvented. Why are we not thinking the same way about the applications that inherit the privileges of users once they log on?
I love that Microsoft is making strides in security with Windows 10’s Device Guard feature (and others). However, Device Guard is fundamentally based on the premise that code signing certificates and the chain of trust remain secure. I don’t have to circumvent Windows security to bypass Device Guard; I just need to steal someone’s valid code signing certificate and use it. If you’re able to operationalize the full capabilities of Device Guard, it’s unlikely you’ll fall victim. I’ll let you know when I see a client with a fully realized Device Guard implementation – great tech, big fan here, though I’m not seeing a practical way to use it in the enterprise just yet. To their credit, Microsoft has a great whitepaper on securing code signing certificates and using them securely, as do CA, Thawte and others.
What’s my point? Application authentication needs a reboot – let’s add another measure of authenticity beyond code signing, since it’s single-factor authentication. We should authenticate the vendor (holder of the certificate) and the application. I’d been thinking maybe I should start a company and do this, but it occurred to me we have authorities on such matters, including the US National Institute of Standards and Technology (NIST), which maintains the National Software Reference Library (NSRL). It’s exactly what it states… a reference library of authentic software that’s used in legal matters and wherever you have to be really sure what application you’ve got at hand. Whether or not the application is code-signed, you can verify it’s exactly what it claims to be by comparing its hash (a unique condensed representation of the application binary) to the known hash for the application.
Another potential approach is for the certificate issuing authority to verify both the certificate validity and that the application hash was registered by the certificate owner (strong authentication of the cert owner to the CA using separate secrets). I saw that Verisign offers code signing services now. This is a step toward being an authority on publisher identity (the certificate) and the artifacts knowingly signed by the publisher. This approach also greatly reduces the risk of a software provider being compromised and losing control of its certificates. It also has the implication of consolidating all application authentication into a single mechanism. I prefer the idea of independent verification via different channels.
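A sketch of that two-factor idea (certificate validity plus an independent hash lookup) might look like the following. The reference set here is a hard-coded stand-in; a real implementation would query NIST's published NSRL data over a separate channel.

```python
import hashlib

# Hypothetical reference set, built locally for the sketch; a real
# deployment would consult NIST's published NSRL hash data instead.
known_good = b"known-good application bytes"
NSRL_REFERENCE_HASHES = {hashlib.sha256(known_good).hexdigest()}

def application_is_trusted(binary: bytes, signature_valid: bool) -> bool:
    """Two-factor check: a valid publisher signature alone is not enough;
    the binary's hash must also match an independent reference library."""
    digest = hashlib.sha256(binary).hexdigest()
    return signature_valid and digest in NSRL_REFERENCE_HASHES

print(application_is_trusted(known_good, signature_valid=True))   # True
print(application_is_trusted(known_good, signature_valid=False))  # False
print(application_is_trusted(b"trojan", signature_valid=True))    # False: signed with a stolen cert, but hash unknown
```

The third call is the KeRanger scenario: a stolen certificate makes the signature check pass, but the unknown hash still blocks the binary.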
If Google can organize the world’s information and make it universally accessible and useful, it should be possible to organize the world’s software and verify it’s legitimate.
Ransomware is the most profitable malware in history but, by taking a proactive approach, you can stop it in its tracks and prevent your business being hit.
This report covers what ransomware is, how it works, the damage it causes, predictions for the future and, crucially, how to prevent it with a simple yet effective security strategy. | <urn:uuid:69e411f9-cfda-4105-b460-232b3d420365> | CC-MAIN-2017-04 | https://blog.avecto.com/2016/03/application-authentication-needs-a-reboot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00128-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936294 | 1,379 | 2.578125 | 3 |
Datarecovery.com engineers often use disk images to protect media or to rebuild large multi-drive systems efficiently. This guide explains what disk images are, how they’re used, and why they’re often important in the data recovery process.
If you need to recover files from a disk image, a hard drive, or another damaged device, call us at 1.800.237.4200 to set up a free evaluation.
What is a RAW Disk Image?
A disk image file is an exact bit-for-bit copy of an entire hard drive, solid-state drive, or optical disk. The image contains a complete copy of all of the data stored on the source drive – not just files and folders. A typical image will include the disk’s boot sector, file allocation tables or MFT (if applicable), volume attributes, directory forks, free space, and slack space.
A disk image is not a collection of files or folders. It is an exact duplicate of the raw data of the original disk, sector by sector, in the form of a single file. Disk images can be created for any hard drive, solid-state drive, or other storage device, and some operating systems have built-in mounting capabilities for virtual disks (for example, Linux and Unix).
Since disk images contain raw disk data, it is possible to create an image of a disk even if it is written in an unknown format or with an uncommon operating system. Disk images are extremely useful when backing up mission-critical systems, since the resulting image will retain the OS, files, settings, and various other data. Various programs can be used to create disk image files, including Stefan Fleischmann’s WinHex, a popular file utility.
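A minimal sketch of the imaging step itself: read the source block by block, write an identical image file, and keep a hash so the copy can be verified. Professional imaging tools also handle read errors and unstable media, which this sketch omits.

```python
import hashlib

def image_disk(source_path: str, image_path: str, block_size: int = 1 << 20) -> str:
    """Copy a drive (or file) into an image, block by block, and return
    a SHA-256 digest so the image can be verified against the source."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        while True:
            block = src.read(block_size)
            if not block:
                break
            img.write(block)
            digest.update(block)
    return digest.hexdigest()

# On Linux, source_path might be a raw device node such as "/dev/sdb"
# (requires elevated privileges); any ordinary file works the same way.
```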
How are Disk Images Used in Data Recovery?
We create disk images or clones of every functional hard drive that we recover, and in some cases, we work from mounted virtual disks instead of working directly with the original media. This limits our clients’ risks and substantially improves the chances of a full recovery if the original hard drive is not physically damaged or if it has been temporarily repaired as part of our data recovery process.
Logical disk images are especially helpful during RAID recovery. By creating images from RAID members, we can reconstruct an array logically without swapping out physical media. We can also use software to rebuild some types of RAID arrays without their controller cards. Finally, disk image files can allow our engineers to correct problems with data slippage, a problematic occurrence in which RAID data does not line up across multiple disks due to filesystem damage.
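To see why images make logical reconstruction possible, consider the simplest case, RAID 0: given the member images, the stripe size, and the disk order (both assumed known in this sketch; finding them is part of the real recovery work), the volume can be reassembled entirely in software.

```python
def destripe_raid0(member_images: list[bytes], stripe_size: int) -> bytes:
    """Reassemble a RAID 0 volume from per-disk image files by
    interleaving fixed-size stripes across members in order."""
    out = bytearray()
    offset = 0
    while offset < max(len(m) for m in member_images):
        for member in member_images:
            out += member[offset:offset + stripe_size]
        offset += stripe_size
    return bytes(out)

disk0 = b"AAAABBBB"  # holds stripes 0 and 2 of the volume
disk1 = b"aaaabbbb"  # holds stripes 1 and 3
print(destripe_raid0([disk0, disk1], stripe_size=4))  # b'AAAAaaaaBBBBbbbb'
```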
Disk images are extraordinarily beneficial during the data recovery process, and we use precise tools to obtain accurate images from client media. To learn more or to set up a case, call us today at 1.800.237.4200. | <urn:uuid:012f1e59-ac0e-4a10-9560-9c1a0b8280d5> | CC-MAIN-2017-04 | https://datarecovery.com/rd/disk-images-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896132 | 579 | 2.921875 | 3 |
It was 25 years ago today (well, this month - March 12, 1989) that Tim Berners-Lee introduced the idea of the World Wide Web in a proposal for an information system, although until 1995 there was limited practical commercial access. (Sorry Al Gore, you didn't invent the Web or the Internet). What became the Internet was developed in the late 1960s by the US Department of Defense as ARPANET. Since the mid-1990s, the Internet has had a profound impact on commerce and culture, including the Web, the advent of cloud computing, and near-instant and instant communication with email, instant messaging, VOIP telephone calls, and interactive video calls.
The Web has also had a tremendous impact in Microsoft, who first released Internet Explorer 1 as part of the OEM release for Windows 95 and Windows 95 Plus! package. Internet Explorer was actually a reworked version of Mosaic, licensed from Spyglass, Inc.
However, the first versions of Internet Explorer were more of an afterthought, as Microsoft didn't get the significance of the Internet at that time. Once they did, the company ended up "turning on a dime," integrating some type of Internet capability into just about every one of their products within three months.
If you're old enough to remember, Internet Explorer ended up being a key part of the US Department of Justice antitrust case against Microsoft, initiated in 1998, where it was argued that Microsoft's bundling of the browser restricted the market for competing web browsers. However, it wasn't only antitrust issues where the Internet has had an impact on Microsoft. Internet Explorer went from being the most widely used browser to now having about 19.6% of market share, with Chrome leading at 36.5%.
Fast forward from the late 1990s to today, where in the past few years, Microsoft has put a significant amount of effort into retooling their product lines to run in the Internet cloud and trying to leapfrog the competition, in particular Google and Amazon. This has definitely applied to their management tool line of System Center, where "the cloud" is the phrase of the day, with "on prem" used for a more private cloud implementation. Looking at the consumer market, Microsoft is dealing with the impact of Apple Inc with their iPhones and iPads and Google with their 'droids. And now we have new CEO Satya Nadella, who was previously the executive vice president of Microsoft's Cloud and Enterprise group.
So yes, the Web has been changing the world and Microsoft. This is not to forget - which most people do - potential shortcomings of the Internet and cloud computing. How does it function in rural areas without high-speed Internet connectivity? Or when you have no Internet connectivity? How will the Internet continue to evolve? Will the rush to the cloud address the low/no connectivity challenges? And will Microsoft continue to evolve - or turn on a dime - to address those changes? | <urn:uuid:07cd87fd-b418-4cc0-bae7-02b8fee18804> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226583/microsoft-subnet/twenty-five-years-ago-today---.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969895 | 598 | 3.09375 | 3 |
Definition: A computable set of steps to achieve a desired result.
Specialization (... is a kind of me.)
probabilistic algorithm, randomized algorithm, deterministic algorithm, nondeterministic algorithm, on-line algorithm, off-line algorithm, oblivious algorithm, external memory algorithm, heuristic.
Note: The word comes from the Persian author Abu Ja'far Mohammed ibn Mûsâ al-Khowârizmî who wrote a book with arithmetic rules dating from about 825 A.D.
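Euclid's greatest-common-divisor procedure is a standard illustration of the definition: a finite, well-defined sequence of computable steps.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```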
The Analysis of Algorithms web site.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 10 January 2007.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "algorithm", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 10 January 2007. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/algorithm.html | <urn:uuid:46a02fab-03d7-42d5-874c-735bb0fc3256> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/algorithm.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.795033 | 232 | 2.546875 | 3 |
I know last week I spoke briefly about contrast from a visual sense and how that affects an image and draws a viewer in. In the course of doing that, I "discovered" there are different TYPES of color blindness. Under these types, there are underlying categories "based on the number of primary hues needed to match a given sample in the visible spectrum." So depending on the color blindness and its subcategory, it will completely affect how an image appears. The subcategories are monochromacy, dichromacy, and anomalous trichromacy. I'm not going to go into the details on each, but rather the visual effects some of the subcategories have on visuals - but if you're interested in more details, Google it.
This kind of blew my mind, because all that I've ever heard of is red-green and blue-yellow. So if you take a rainbow and how a normal person would see it, it would look like this:
If a person has no perception of red whatsoever, which would be a type of dichromacy, red appears dark, almost like a dark brown:
If a person has no perception of green, it will also affect the ability to tell the difference between red and green hues, which means the yellow and blue pigments are the primary colors seen:
AND THEN if there is a total absence of blue pigment receptors, the rainbow will look like this:
Fascinating stuff, isn't it? A bit mind-boggling at the same time. But why is this important to consider as a designer, as a developer? Because we put stuff on the web to be consumed, and we want it to be consumed by everyone and be accessible to everyone, and we can do that with contrast, "the perceived difference in colors that are in close proximity to each other"!
There are different ways to approach this. Some designers will start a design in black, white, and shades of gray. Others would use tools like graybit.com or Color Scheme Designer 3 to make sure the color scheme would work. The latter sucked me in for a bit, as I played with it - good times.
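Tools like these automate checks you can also compute directly. WCAG defines a contrast ratio from the relative luminance of two colors, with 4.5:1 as the AA threshold for body text:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance of an sRGB color."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0, the maximum
# #767676 is a gray that just clears the 4.5:1 AA threshold on white:
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```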
I also tested some of our current feature graphics with Fujitsu ColorDoctor to see how we've done with our contrast in the grayscale conversion, and I've noticed that we need to up the contrast on the background color for some of them.
We've got a bit of work to do, but learning new (to me) stuff is fascinating!
*Image samples of the rainbows are from the wikipedia.com entry on color-blindness | <urn:uuid:626d88c9-4a84-48f6-bdc2-bbab7c114268> | CC-MAIN-2017-04 | https://www.ibm.com/developerworks/community/blogs/8a10b5cd-53d2-417a-bde2-2d8bca03d4c2/entry/learn_new_things_everyday_especially_about_contrast3?lang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953657 | 566 | 2.65625 | 3 |
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today.
That could be made possible through recent advancements made with Raspberry Pi computers. These Linux-based computers are as small as a credit card and cost only about $25 to $35. Last year, the University of Southampton's Simon Cox coupled 64 of these little systems together to form his own miniature supercomputer.
“As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer,” said Cox on how he created a small but powerful supercomputer on Raspberry Pi’s. “We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image and we have published a guide so you can build your own supercomputer.”
As seen in the diagram below, the diminutive computers are held together physically by Legos, while MPI (Message Passing Interface) provides the software layer that links them into a single parallel machine.
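An MPI job on such a cluster splits its work by rank; the arithmetic each node uses to find its share is simple enough to sketch without a cluster (or mpi4py) at hand:

```python
def rank_slice(n_items: int, rank: int, n_ranks: int) -> range:
    """Index range a given MPI rank processes when n_items of work are
    divided as evenly as possible across n_ranks nodes."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return range(start, stop)

# 1,000 work items across a 64-node cluster like the Lego machine:
slices = [rank_slice(1000, r, 64) for r in range(64)]
assert sum(len(s) for s in slices) == 1000  # nothing lost, nothing duplicated
print(len(slices[0]), len(slices[63]))      # 16 15
```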
Joshua Kiepert, PhD candidate at Boise State, also got involved in building supercomputers out of the Raspberry Pi machines, completing a 32-node cluster that came in at a cost of just under $2000.
“In order to keep the cluster size to a minimum while maintaining ease of access,” Kiepert said in explaining how the computers were clustered, “the RPis (Raspberry Pis) were stacked in groups of eight using PCB-to-PCB standoffs with enough room in between them for a reasonable amount of air flow and component clearance.”
The advantage, according to Kiepert, to building his own system is the ability to outfit and customize based solely on his requirements. Per Kiepert, “by building my own cluster I could outfit it with anything I might need directly.”
Such customization included side-stepping the Pi’s micro-USB power port for a 5-volt pin that attached to the machines’ I/O ports. Further, Kiepert overclocked the Pis when he needed more processing power to run his simulations. With that said, the performance did still leave a little to be desired, according to Kiepert.
This story of fun, creative, and cost-effective ways to build supercomputers could represent a significant tool for today's young learners, who will grow up with computers, and in some cases supercomputers, all around them. Understanding the dynamics and logistics of high-performance computing early is important in maintaining interest in the field later in life.
“We want to see this low-cost system as a starting point to inspire and enable students to apply high-performance computing and data handling to tackle complex engineering and scientific challenges as part of our ongoing outreach activities,” Cox concluded. | <urn:uuid:197a9925-77d9-4ada-87ea-89ea39a5ec40> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/05/22/building_supercomputers_with_raspberries/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00486-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962042 | 602 | 3.515625 | 4 |
How tech tools available today put a modern touch on the labor of love
In this article...
- Advances in technology undoubtedly aid today’s digital-first generation of parents, but also trigger new challenges and vulnerabilities
Raising kids has always meant taking charge, but today, it’s also about taking chargers… and USB cables, and Bluetooth speakers—all the stuff from your work bag—and migrating it to the diaper bag.
The smartphone you jokingly called a “toy”? Well, load it up with Elmo apps because now it’s an actual toy. And while Wi-Fi enabled devices aren’t exactly a choking hazard, they do present new challenges for this digital-first generation of parents.
The moments captured here are part of our collective new normal.
“Toddlers will use the first generation of apps for kids that teach more than touch typing.”
Touchscreen devices are designed to be useful for adults, but children are just as transfixed—if not more so. As toddlers enter school, they’ll be using the first generation of apps and games aimed at kids that teach more than touch typing. In fact, apps aimed at the youngest users may not presume literacy at all. App developers who can think intuitively and creatively and build experiences that appeal to the youngest audiences could find themselves gaining lifelong loyal customers.
Tapping towards destruction
Devices can carry thousands of games, much to the delight of parents—and their kids. Downloading new weapons or magic potions is fun for them, but even more fun for hackers who can sneak malware into digital goods, ringtones or even streaming video. Developers must leverage user data and find new ways to bake security into the low levels of their apps so that preventing malware doesn’t mean that magic potion downloads any slower. (There are monsters to kill, after all.) The operating systems that run these apps and the stores that sell them are starting to do their part to prevent vulnerabilities by taking data-centric approaches to security.
A lot of families find themselves on an annual quest for the land before screen time. But in fact, technology has become a key element to making the most of vacations. Finding your way to a campsite? Use a navigation app. In a foreign country? Learn basic language skills via a podcast. Planning your trip? Ebooks make better tourism guides than bulky hardcovers. Travelling to remote locales? You can rely on anytime, anywhere connectivity. The travel industry is no stranger to the digital world, and they rely on technology partners like HPE to keep up with the needs of modern parents who are ready for some R&R.
The parent-teacher (tele)conference
Thanks to the cloud, schools can offer new resources for parents to access report cards, review assignments and track their children’s progress online. So while there shouldn’t be any surprises at the next parent-teacher (tele)conference, kids always find a way.
At school during flu season, the nurse’s office is often even busier than the computer lab. Digital health records can help regional health officials predict when a cold virus might reach a tipping point in your community, giving ample warning to students who are at risk of complications. It’s no longer a foregone conclusion that everybody has to get what’s going around, which means fewer missed classes and better grades.
Two years ago, Timothy Croll felt, as he describes it, "a blinding light" erupt in his mind. Touring Lawrence Livermore National Laboratory (LLNL) with other local government officials, he saw a powerful technology for mapping the spread of dangerous materials in the atmosphere. If a terrorist attack or an accident released toxins back home in Seattle, he realized, this suite of tools could save lives.
Croll is community services director at Seattle Public Utilities, the agency that runs Seattle's water, sewer, drainage and solid waste services. As a member of the Environment Task Force of Public Technology Inc., he was visiting LLNL to learn about a broad spectrum of technologies available there. Croll and his colleagues were particularly impressed with the lab's National Atmospheric Release Advisory Center (NARAC). The technologies there help emergency responders determine where a hazardous plume will spread given the local terrain and current weather conditions, and how best to protect people in that area.
Croll had used modeling software to design responses to accidental chlorine leaks from the city's water treatment facilities, but he'd never seen capabilities like NARAC's. "I felt like I had stepped from a tricycle to a Ferrari," he said.
Soon, safety officials in Seattle and other cities will get to take a spin in that Ferrari. Today, NARAC helps federal facilities and emergency workers plan responses to radiological, chemical and biological releases. A new demonstration program, called Local Integration of the NARAC with Cities, or LINC, will put NARAC's power into the hands of local agencies.
Seattle is the first pilot site for LINC, which is funded by the U.S. Department of Energy's Chemical and Biological National Security Program (CBNSP), working in partnership with PTI and LLNL. Officially, LINC focuses on chemical or biological materials released in terrorist attacks, but NARAC's system could help local agencies respond to accidents as well, Croll pointed out.
The NARAC system has two components: a local software package called iClient (Internet Client), and a central system at the lab in Livermore, Calif. Local responders use iClient to enter basic information about an incident, such as the material involved and the location. The software immediately maps the plume and returns advice on how to respond.
"They have the capability to run a quick, simple model of what the downwind hazard areas might be," explained John Nasstrom, a deputy program administrator at LLNL. "At the same time, they can reach back to our more powerful computers in Livermore. They can do more detailed, three-dimensional atmospheric transport, including terrain effects. Those are returned in about five to 10 minutes."
NARAC's system does more than predict how a plume will spread. "It also talks about the impact," Croll explained. "There would be a map that would say that in these neighborhoods, with these bounding streets, the odds are you're dead already. If you're in these other bounding streets, stay inside and shut your windows."
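NARAC's three-dimensional transport models are far more sophisticated than anything shown here, but the textbook Gaussian plume equation gives a feel for what plume-mapping tools compute: given a release rate, wind speed, and dispersion coefficients, it estimates contaminant concentration at any point downwind. Every parameter value below (release rate, wind speed, dispersion growth rates) is an illustrative assumption, not a NARAC input.

```python
import math

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Concentration (g/m^3) of a continuous point release.

    q : emission rate (g/s)        u : wind speed (m/s)
    x : downwind distance (m)      y : crosswind distance (m)
    z : receptor height (m)        h : effective release height (m)
    a, b : crude dispersion growth rates (illustrative assumptions)
    """
    sigma_y = a * x          # plume spread grows with downwind distance
    sigma_z = b * x
    coeff = q / (2 * math.pi * u * sigma_y * sigma_z)
    cross = math.exp(-y**2 / (2 * sigma_y**2))
    vert = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
            + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return coeff * cross * vert

# Concentration falls off as you move crosswind from the plume centerline,
# which is the kind of gradient behind "these streets: evacuate, those
# streets: shelter in place" guidance.
centerline = gaussian_plume(q=100, u=3, x=500, y=0, z=1.5, h=10)
offside    = gaussian_plume(q=100, u=3, x=500, y=200, z=1.5, h=10)
```

A real advisory system layers terrain, live meteorology, and health-effect thresholds on top of transport calculations like this one.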
With real-time meteorological data in the mix, city officials who planned to evacuate people to another part of the city would know for sure that the plume wasn't heading toward that area.
"Within about five minutes, a local responder has information about whether you ought to shelter in place, whether you ought to start evacuation processes," said Ronda Mosely-Rovi, director of environmental programs at PTI.
Like responders in many other cities, the HAZMAT team at the Seattle Fire Department currently uses software called CAMEO, which stands for Computer-Aided Management of Emergency Operations, to help it respond to chemical accidents. Developed by the National Oceanic and Atmospheric Administration and the Environmental Protection Agency, CAMEO includes an extensive database of chemicals and their properties, along with tools for modeling and mapping their dispersion.
The group that developed CAMEO is based in Seattle, and officials from NARAC have been talking with them about cooperating on the LINC pilot. "We're starting a dialog with them on how we can make best use of both systems, since the CAMEO system is used by a lot of HAZMAT teams," Nasstrom said. CAMEO's chemical database would complement NARAC's other strengths.
NARAC's system already includes links to meteorological stations across the United States, as well as local map data for the entire country. But to configure it for use by cities and counties, the system needs to integrate more detailed geographic data from local agencies.
"In the case of Seattle, we're working with their GIS group to import all the city map data they routinely use for emergency management," Nasstrom said.
Additionally, NARAC is incorporating feeds from more Seattle-area meteorological stations, as well as local databases that pinpoint where chemicals are stored in certain buildings.
NARAC is also working with firefighters in Seattle's HAZMAT unit to make sure the software interface is easy to use, Croll said.
Once the partners configure a version of the system for Seattle, they will run drills to show how NARAC responds when users enter data on hypothetical emergencies, Mosely-Rovi said.
The central system at NARAC can provide automated feedback. If need be, live operators can also assist local responders. Seattle will test both scenarios and might also simulate an off-hours emergency, Croll said. In the latter case, NARAC's operators would be paged and asked to rush to the lab to help emergency workers in the field.
The LINC program has received $750,000 from CBNSP. None of this funding goes directly to Seattle. The city receives in-kind support, such as training for its emergency workers. It is also supporting some of the program costs on its own, including hosting meetings and sending its emergency workers to Livermore for additional training.
PTI and LLNL are applying for further funds; they would like to extend the program for three years and bring it to other cities, Mosely-Rovi said. The partners hope to conduct some pilots in cities that are smaller than Seattle and less technically advanced.
"The next city will probably be medium-sized, and then we'll do a small city. The learning curve will be different in each one," she said.
In every case, the prospect for local agencies and their partners is exciting, Mosely-Rovi said. "I'm absolutely convinced that we're onto something really big here and it's going to save lives." | <urn:uuid:090d146c-7d2e-4351-84a5-deafba3ca280> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/New-LINCs-to-Safety.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962212 | 1,354 | 2.625 | 3 |
The DROUND() function performs double-precision rounding on a value.
expression1 is the value to round.
Double-precision rounding uses 2 words to store a number.
If a precision is not specified as the second expression, DROUND() uses whatever precision is set at the time of the calculation. If no PRECISION statement has been issued, the default precision of 4 is used.

The precision in effect (set either with the PRECISION statement or with the optional precision parameter) must be greater than or equal to the rounding digit; otherwise the calculation truncates the value to whatever precision is active at the time of the calculation.
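jBASE's actual arithmetic is governed by the active PRECISION setting, but as a rough Python analogue of the behavior described above (the names and the half-up rounding mode here are illustrative assumptions, and the truncation interaction with an insufficient active precision is not modeled):

```python
from decimal import Decimal, ROUND_HALF_UP

DEFAULT_PRECISION = 4  # the default when no PRECISION statement was issued

def dround(value, precision=None):
    """Round value to `precision` decimal places, falling back to the
    "active" precision when none is given -- a sketch of DROUND()."""
    if precision is None:
        precision = DEFAULT_PRECISION
    quantum = Decimal(1).scaleb(-precision)   # e.g. 0.0001 for precision 4
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

dround(3.14159)      # default precision of 4 -> Decimal('3.1416')
dround(3.14159, 2)   # explicit precision     -> Decimal('3.14')
```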
A distributed Denial of Service (DDoS) attack is a simple variation of a Denial of Service attack in which the attacker initiates the assault from multiple machines to mount a more powerful, coordinated attack.
A DDoS attack is an amplified Denial of Service attack. In DDoS attacks, multiple hosts simultaneously attack the victim server, resulting in a powerful, coordinated, Denial of Service attack. This type of attack can even take down large sites such as Yahoo, Amazon and CNN, which are designed to handle millions of requests in a short amount of time.
A DDoS attack is executed as follows: an attacker locates vulnerable machines, gains access to them, and installs an attack program. These machines are often referred to as "zombies". When the attacker decides to strike, the attacker commands all the "zombies" to start flooding the victim target. The owners of the "zombies" have no clue that their computers are being used to attack remote systems, and it is more difficult to locate the attacker because the attack program is not running from the attacker's computer. Recently, web servers have also been used to execute DDOS attacks. Web servers provide a more muscular attack platform with higher bandwidth and processing power—one server is the equivalent of 3,000 infected PCs.
The concept of DDoS can also be used to achieve other goals, such as stealth scanning (just a few packets from each zombie) and distributed password cracking (using the aggregate processing power).
The impact of a DDoS attack includes:
- Application outages
- Brand damage
- Financial loss due to the inability to process financial transactions | <urn:uuid:1c22182d-1258-4aeb-b51c-6737963ce651> | CC-MAIN-2017-04 | https://www.imperva.com/Resources/Glossary?term=distributed_denial_of_service_ddos | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00019-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946409 | 331 | 3.359375 | 3 |
Geocast : Applications in Mobile GeoGames and Public Safety
Scalable Ad Hoc Wireless Geocast is a network protocol for addressing messages to receivers in a geographic area, like "180 Snyder Ave." or "Dodge Park". Geocast is designed for wireless, GPS-enabled devices, like the iPhone, but could run on almost any PDA-scale device that can provide its geographic position.
Geocast has two important characteristics. First, Geocast is an ad hoc network protocol. Communication between Geocast-enabled devices is peer-to-peer and requires no fixed infrastructure, like towers, routers, or access points. This permits Geocast-enabled devices to communicate instantaneously in challenging environments, like remote locations where fixed infrastructure may be unavailable or unreliable.
Second, Geocast is highly scalable. It can handle high traffic between tens to thousands of devices, even in urban settings. Plus Geocast can use fixed communication tiers, like a cellular network, for efficient transport of messages over long distances.
Geocast was invented in Research to enable a novel military training technology for the U.S. Army known as OneTESS. Two key requirements of OneTESS were that the wireless network protocol be ad hoc and highly scalable. Geocast was studied extensively in simulation and was successfully tested in a field trial at the Army National Training Center at Fort Irwin in the Mojave Desert.
Our prototype of Geocast on the iPhone includes a novel GeoGame application, public safety applications, and a GeoTexting application, all built upon an implementation of the Geocast network framework.
In emergencies, first responders need to locate and alert people with instructions appropriate to their location and situation.
Suppose there is a disaster at a school (green dot in the green circle). Using a command console, like a laptop (at the blue dot), authorized personnel can issue a GeoQuery to the area. The GeoQuery causes contacted phones (red dots) to automatically send a Geocast back to the command center, which can plot the position of each receiver.
When searching in collapsed structures, knowing the lat/long location of a phone is not always enough, due to sensor imprecision. In these cases, the commander can send a GeoAlarm message, which causes devices in the defined Geocast region to sound a loud alarm that can help rescuers find trapped individuals.
The commander can also issue GeoAlert messages to devices in the Geocast region, which produce a pop-up dialog. Messages can be tailored to specific areas within the emergency response area.
Because Geocast is a peer-to-peer, ad hoc protocol, it works even if infrastructure is damaged or absent, and it can achieve better coverage in challenging situations than a broadcast-based system can. Its scalability means it works even with thousands of units in the affected area.
GeoGames are a novel class of augmented-reality video games that require strenuous physical activity. iTESS is a hide-and-go-seek GeoGame played with virtual munitions (or water balloons) in a large outdoor area like a forest or park. The object of iTESS is to avoid being "hit" by virtual munitions shot by other gamers.
The map shows a game situation. The blue dot and uncertainty halo show the gamer's approximate position. Numbers in the upper-right show available munitions and the number of permitted simultaneous shots. Shots take significant time to travel from the "mortar battery" to the target (e.g., 10-15 seconds), just like in reality, because the game is played outdoors and gamers need sufficient time to run.
The gamer taps the screen to direct a virtual UAV (unmanned aerial vehicle, green circle at right) to move and search for other gamers. Gamers found in the "look-down circle" beneath the UAV are shown as red dots. The UAV is implemented by sending GeoQueries to the look-down circle.
Here, the UAV has located a target (red dot), but the gamer is already targeted by an incoming round (red shaded circle containing gamer's blue dot).
The gamer then has 10 seconds to run out of the circle before the shell “lands”.
Simultaneously, the gamer can point the gray crosshair at gamers in UAV range and launch munitions.
Gamers that do not run out of red target circle in time are hit and their game is over, while other "live" gamers can continue to play.
iTESS can augment existing simulation games, like paintball or laser tag, with new kinds of weapons and can simulate other types of interactions, like medical engagements.
Within one wireless channel, the originator transmits a message, labeled with a unique serial number, to all nodes within the originator's range. Any participant that receives the message enqueues it for re-transmission.
Each time a copy of the same Geocast message is received, the receiver records statistics, including the copy count, the location of the sender, the distance from the target Geocast region, etc.
When a re-transmission reaches the head of the queue, the sender makes a heuristic decision whether to re-transmit or drop the message. The technical paper entitled "A Tiered Geocast Protocol for Long Range Mobile Ad Hoc Networking," listed below, gives the details of the heuristic re-transmission algorithm.
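The paper gives the full decision rules; the simplified sketch below captures their shape: suppress a re-transmission once enough copies of a message have been heard, unless every copy came from nearby transmitters, in which case this node can still extend coverage by re-transmitting. The thresholds and class names here are illustrative assumptions, not the protocol's actual parameters.

```python
import math

class GeocastNode:
    """Tracks per-message statistics and makes the retransmit/drop decision."""

    def __init__(self, position, min_copies=2, coverage_radius=250.0):
        self.position = position                 # (x, y) of this node
        self.min_copies = min_copies             # copies heard before suppressing
        self.coverage_radius = coverage_radius   # meters; illustrative
        self.stats = {}                          # serial -> sender positions heard

    def hear(self, serial, sender_position):
        """Record one received copy of a message (identified by serial)."""
        self.stats.setdefault(serial, []).append(sender_position)

    def should_retransmit(self, serial):
        senders = self.stats.get(serial, [])
        if len(senders) < self.min_copies:
            return True    # too few copies heard: retransmit to ensure delivery
        # Every copy came from a nearby sender? Then our transmission would
        # still extend coverage outward, so retransmit; otherwise drop.
        nearest = min(math.dist(self.position, s) for s in senders)
        return nearest > self.coverage_radius

node = GeocastNode(position=(0.0, 0.0))
node.hear(serial=42, sender_position=(50.0, 0.0))
node.hear(serial=42, sender_position=(80.0, 60.0))
node.should_retransmit(42)   # False: enough copies heard, all from nearby
```

Keying suppression on both copy count and sender distance is what lets flooding stay scalable: dense areas suppress aggressively, while nodes at the edge of coverage keep relaying.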
The Army wants to train its troops as realistically as possible without using live fire weapons. The aim of OneTESS is to replace live fire with electronic simulations in which weapons fire "electronic bullets" and engagements are mediated by a wireless data network.
Small PDA-scale devices and sensors, like GPS and compasses, are carried by soldiers and mounted on weapons and vehicles to sense and respond to action in realistic battlefield environments.
Sensor data, such as trigger pull, tube angle, and unit positions, are sent via the wireless data network to appropriate computational devices, simulation results are computed, and then engagement results are transmitted out to units and soldiers, who take simulated damage.
After a battle scenario is enacted with OneTESS units, all locally-collected situational data are transmitted to a central collector where it can be presented for after-action review and further learning. | <urn:uuid:13ac18bd-2fa7-4072-b9f3-508dd27c41f4> | CC-MAIN-2017-04 | http://www.research.att.com/projects/Geocast/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00139-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931285 | 1,289 | 2.6875 | 3 |
Do You Hear What I Hear? Part V: Integrated Services
Continuing with our series on VoIP Quality of Service (QoS), our previous installments have looked at some of the key factors surrounding the quality of the voice connection:
Part I: Defining QoS
Part II: Key Transmission Impairments
Part III: Dealing with Latency
Part IV: Measuring "Toll Quality"
Having thus looked at some of the QoS challenges, in this, and in the next few tutorials, we will look at some of the solutions that the IP networking industry has proposed. It should be noted, however, that most vendors have a preferred solution, and may go to great lengths to convince you that their solution is the solution. Knowledge is a good thing, and arming yourself with information regarding all the alternatives will broaden the choices that you have available, and hopefully guide you to a system that is the best match for your enterprise. We will begin with the oldest solution, known as Integrated Services, or intserv.
Integrated Services was a project of the Internet Engineering Task Force (IETF), with the intserv working group chartered for the purpose of "defining a minimal set of global requirements which transition the Internet into a robust integrated-service communications infrastructure." This work began in the early 1990s, which speaks highly of the IETF's foresight in acknowledging that real-time streams of audio and video would need to coexist with more traditional data traffic such as file transfers. The working group focused on three problems:
- Defining the services to be provided with an enhanced Internet
- Defining the interfaces for end-to-end, routing, and subnet technologies
- Defining any additional router requirements to enable the Internet to support the new service model
With those objectives in mind, the intserv working group published three key RFC documents:
- RFC 1633: Integrated Services in the Internet Architecture: an Overview (June 1994)
- RFC 2212: Specification of Guaranteed Quality of Service (September 1997)
- RFC 2215: General Characterization Parameters for Integrated Service Network Elements (September 1997)
RFC 1633 is the foundation document that details the integrated services model, which requires that network bandwidth must be managed in order to meet application requirements. Those requirements result from user activity, and generate flows, which are distinguishable streams of related datagrams. For example, a single voice or video stream would constitute one flow, which would then be distinguishable from other applications and their respective streams of information. In other words, these datagrams would have consistent source and destination addresses, protocols employed, and so on. Note that a flow is a simplex (uni-directional) path of information, from a single source to one or more destinations.
The two building blocks of this service are resource reservations, which reserve network bandwidth on behalf of an application, and admission controls, which determine if additional flows can be allowed within the network without impacting earlier network commitments. And if you are defining such a flow-based architecture, you must also determine mechanisms to keep track of these flows within the routers or other network elements. Those mechanisms are embedded within the four components of the framework of this architecture: the packet scheduler, the admission control routine, the classifier, and the reservation setup protocol.
The packet scheduler manages the forwarding of packet streams using mechanisms such as queues and timers. The classifier maps packets into a particular class, based upon criteria such as the packet header contents or other distinguishing factor. For example, all of the video packets may belong to the same class. The admission control element runs a decision algorithm that enables a router or host to determine if a new flow can be added to the network without impacting the existing flows. Finally, the reservation setup protocol creates and maintains the flow state information along the path of the flow. For intserv, the protocol defined is the Resource Reservation Protocol, or RSVP, which will be the subject of our next tutorial.
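In intserv terms, a flow is identified by consistent header fields, and the classifier and admission-control components might be sketched as below. This is a toy model to make the roles concrete; real routers implement these in the forwarding path, and all names and numbers here are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """The header fields that make datagrams 'distinguishable' as one flow."""
    src_addr: str
    dst_addr: str
    protocol: str
    src_port: int
    dst_port: int

class IntservRouter:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.reservations = {}               # FlowKey -> reserved kbps

    def classify(self, packet):
        """Classifier: map a packet (here a dict of header fields) to its flow."""
        return FlowKey(packet["src"], packet["dst"], packet["proto"],
                       packet["sport"], packet["dport"])

    def admit(self, flow, kbps):
        """Admission control: accept a new flow only if existing
        commitments are unaffected."""
        in_use = sum(self.reservations.values())
        if in_use + kbps > self.capacity_kbps:
            return False                     # would impact earlier commitments
        self.reservations[flow] = kbps       # reservation setup records state
        return True

router = IntservRouter(capacity_kbps=1000)
voice = FlowKey("10.0.0.5", "10.0.9.9", "UDP", 49170, 5004)
router.admit(voice, 64)    # a 64 kbps voice flow fits and is admitted
```

The packet scheduler (not shown) would then consult `reservations` when deciding which queue each classified packet joins.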
Copyright Acknowledgement: © 2005 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons. | <urn:uuid:f5973b49-1d81-47c7-921e-d5019a241f21> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/unified_communications/Do-You-Hear-What-I-Hear151Part-V-Integrated-Services-3522931.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00037-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921076 | 893 | 2.90625 | 3 |
PC Mag recently published an infographic that visualizes a study by Commtouch about “The State of Hacked Accounts.” Commtouch collected data from email users who have had their email accounts hacked to draw conclusions about email security and the motives of email hackers.
The study found that two-thirds of hacked email accounts are used to send spam or scams to email addresses listed in the account’s address book, full of family and friends. Many of these messages are focused on obtaining money from the recipients. They utilize angles such as “stuck in a foreign country, please send money,” and recipients see that someone close to them is asking for financial help.
Traditionally, email spam has been focused on marketing (generally unwanted) products through huge email blasts. Email and security providers quickly caught on, however, and now automatic spam folders work their magic on a regular basis and botnets can now be taken down instantly. What does this mean for spam?
A Changing Landscape:
The spam landscape has changed. Hackers have realized that, with the onset of spam filters and the decline of botnets, they have to switch tactics. They have been finding success in compromising existing email accounts for spam and scams because (1) these accounts exist within whitelisted IP address ranges like Hotmail, Yahoo and Gmail, thus bypassing spam filters, and (2) recipients are more likely to open emails from familiar addresses than from unknown senders, and are therefore more likely to follow through in providing personal information.
eWeek’s Fahmida Rashid wrote an article describing the modern inner workings of the hacker community: “Hackers are often perceived as isolated, alienated individuals, working alone or in small groups. In reality, hackers are quite social, frequenting online forums and chat rooms to brag about their exploits, exchange tips and share knowledge, according to a recent analysis of hacker activity.”
So what does this mean? We can likely expect an increase of such personalized scams, in email as well as social media outlets. To combat these intelligent, organized and widespread hacker communities, we have to do our best to predict next moves and be a step ahead. Then again, that’s why the U.S. government is hiring hackers left and right, but that’s for another blog post.
In the meantime, be smart. See the prevention tips at the bottom of the infographic, and check out identity protection tips from our consumer identity theft expert, John Sileo, in earlier blog posts.
Posted 10 October 2008 - 07:43 PM
Line-in is an input (normally stereo) that accepts a signal level as would be supplied by an audio component such as a CD/MP3 player or tape deck, or a music keyboard, much higher than microphone level.
Whether it's available may depend on how your sound is configured - for example the mic and line in can share a single socket and you set which one is operative in the sound card's control applet. Other cards have an input socket for each and mic and line input can operate concurrently.
Steve Lionel, commonly known as "Doctor Fortran," made a convincing argument this week for why the 54-year-old language is still relevant—and why it just doesn't get the respect it deserves.
To counter the myth that Fortran is the Latin of the programming world, Lionel points to a few new applications that have been written in Fortran, including hurricane weather forecasting applications like the Weather Research and Forecasting Model (WRF) which is written mostly in the venerable language. He also points to PAM-CRASH, an auto crash simulator as a prime example that stands out, claiming that in HPC, there are many valid, fresh uses for Fortran.
He admits that there are not a large number of applications written in Fortran, and that 20 years ago there were far more uses for it. Still, he says, it isn't fading completely, even though there is "a lot of C and C++ that is more appropriate for certain things than Fortran is, like string processing."
That aside, he says, “if you’re doing number crunching, working with a lot of floating-point data, or doing parallel processing, it’s an excellent choice. Its strengths in array operations — its wide variety of routines — make it attractive, and there is a huge library of freely available high-performance routines written over 40 years that still work together.”
Lionel looks to the strengths of Fortran in comparison to other languages, noting that Fortran 2008 has built-in parallel programming capabilities. He says, "Other languages have parallel programming features, but Fortran incorporated modern parallel programming in ways that none of the other languages have." He also points to "an incredible body of well-written and well-debugged routines" in Fortran that are still open for reuse.
According to Lionel, just because the language is venerable, it doesn’t mean that it hasn’t changed over time. He points to a series of updates, including one just last year, claiming that new capabilities are being added constantly in response to the desires of programmers looking for vendor extensions and other features that became popular in other languages. | <urn:uuid:63d40a3d-1c54-4ddc-9d8f-e380d99ed255> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/09/20/why_fortran_still_matters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00551-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966873 | 468 | 2.59375 | 3 |
GCN at 25: March of the microprocessors
There is a lot more to a computer processor than clock speed, especially today. But as a quick point of reference for Moore's Law, here's a look at the first 35 years of Intel microprocessors.
1971: Intel introduces the first microprocessor, the 108 kHz Intel 4004.
1974: The Intel 8080 ups the ante to 2 MHz.

1979: The 8088 uses a 16-bit architecture in 4.77 MHz and 8 MHz versions.
1982: The Intel 286, also known as the P2, is available in 6 MHz, 10 MHz and 12 MHz versions.
1985: The 386, aka the P3, makes a considerable leap forward with a 32-bit architecture and clock speeds ranging from 16 MHz to 33 MHz.
1989: The 486, or P4, is released with clock speeds of 25 MHz, 33 MHz and 50 MHz, with a built-in memory cache and an architecture that triples the speed of the 386.
1993: After a few specialized versions of the 486, Intel delivers the Pentium, starting at 66 MHz and with expanded address and data buses. Two generations later, the Pentium would reach 200 MHz.
1997: The Pentium II arrives at 233 MHz, in an easily removable cartridge casing.
1999: The Pentium III arrives at 500 MHz.

2001: The first Pentium 4 is released at 1.3 GHz.
2004: The latest Pentium 4, nicknamed Prescott, is released, pushing clock speeds toward 4 GHz.
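Taking the timeline's endpoints at face value, a couple of lines of arithmetic show why this progression is so often cited alongside Moore's Law (which, strictly speaking, concerns transistor counts rather than clock speed): clock rates doubled roughly every two years.

```python
import math

start_hz, start_year = 108e3, 1971   # Intel 4004, introduced 1971
end_hz, end_year = 4e9, 2004         # Prescott-era Pentium 4

doublings = math.log2(end_hz / start_hz)                # about 15 doublings
doubling_time = (end_year - start_year) / doublings     # years per doubling
print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} years")
```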
Opinion: Carnegie Mellon University's Alice is helping girls (and boys) learn to program using 3-D objects.
Girls, want to learn to program? Go ask Alice.
I don't mean to make any more reference than necessary to the Jefferson Airplane song "White Rabbit," which made famous the line "Go ask Alice," as it's a sad tale of drug use. The Alice I'm talking about is the Alice software for teaching students to program.
Alice is a Java-based, interactive program that enables users to create 3-D computer animations without the need for high-level programming skills.
Indeed, by simply working with a computer mouse, users can select characters, including dinosaurs, penguins, bugs, monkeys or faeries, and place them in an environment they choose, like an amusement park, a kitchen or even a country. With these pre-existing elements, the user can construct stories, outlining the actions of characters or objects using simple commands, like move, turn, or resize, rather than technical, sometimes confusing terms common in other graphics programs, like translate, rotate or scale.
Randy Pausch, professor of computer science and human computer interaction, as well as director of the Stage 3 Research lab at Carnegie Mellon University, began work on what was to become Alice in the early 1990s while at the University of Virginia. Pausch said Alice began as an easy-to-use scripting tool for building virtual worlds, a way of making computer graphics more accessible.
Yet, over time, Alice was transformed into a tool that used computer graphics to make computer programming more accessible as well, adopting a drag-and-drop interface that allowed users to construct stories from pre-existing graphic elements. And Professor Stephen Cooper of St. Joseph's University and Professor Wanda Dann of Ithaca College helped out by developing educational materials to back up Alice.
To read more about the current state of Computer Science, click here.
Along came Caitlin Kelleher, then a graduate student at CMU who had Pausch as her thesis adviser.
"As my thesis work, I created and evaluated a programming system for middle school girls called Storytelling Alice that presents programming as a means to the end of storytelling," Kelleher, now a post doctoral computer scientist at CMU, said in a description of her work.
Storytelling Alice includes high-level animations that enable users to program social interactions, a gallery of characters and scenery designed to spark story ideas, and a story-based tutorial, Kelleher said.
"To evaluate the impact of storytelling support on girls motivation and learning, I compared girls experiences using Storytelling Alice and a version of Alice without storytelling support (Generic Alice)," Kelleher wrote. "Results of the study suggest that girls are more motivated to learn programming using Storytelling Alice; study participants who used Storytelling Alice spent 42 percent more time programming and were more than three times as likely to sneak extra time to work on their programs as users of Generic Alice16 percent of Generic Alice users and 51 percent of Storytelling Alice users snuck extra time."
In a separate paper on lessons learned during her efforts, Kelleher said, "Women are currently underrepresented in computer science. Studies have shown that middle school is a critical age, during which many girls turn away from scientific and mathematical pursuits, including computer science. By giving middle school girls a positive first programming experience, we may be able to increase girls' participation in computer science."
CMU officials said Alice has been successful in teaching students at more than 60 colleges and universities, as well as a growing number of middle and high schools, to program. And not only has Alice been effective in getting girls interested in programming, it also has attracted minorities, as the program has been introduced into inner-city schools in Washington, D.C., and other cities, CMU officials said.
I'm huge on efforts like Alice and other projects to bring novices and newbies into the ranks of programming. Just as I think everybody has a book in them, I believe everybody has at least one good program in them. I'm not saying I'd be interested in reading all those books (or caring about all those programs), but I think about the boost in literacy that bringing more folks to reading and writing would bring. The same could be said for bringing more folks to programming.
Grady Booch, chief scientist for IBM's Rational division, mentioned Alice recently after he attended the Association for Computing Machinery's (ACM) Special Interest Group on Computer Science Education (SIGCSE) 2007 symposium in Covington, Ky. Booch said Alice "really seems to be hitting its stride this year."
Check out Alice here.
Alice is open source.
Meanwhile, in a similar vein, Microsoft is targeting beginners with all sorts of opportunities, including a new Web site aimed at absolute beginners, as well as the software giant's Express versions of its Visual Studio tools.
But at its recent MVP Summit, one Microsoft Most Valuable Professional indicated that Microsoft started targeting newbies a long time ago. Sources said the MVP got up and asked Microsoft Chairman Bill Gates if he would sign a copy of the documentation for the Altair BASIC interpreter that Microsoft delivered as its first product. The MVP said he was nine years old when his father taught him to program using the Altair and BASIC, and it launched him into a career in the field. Some say Bill seemed to get a little choked up.
The funny part is that the documentation included a line that said something like: "If you have any problems with this software, call Bill Gates, Paul Allen" or a third person who helped with the software (probably Monte Davidoff, who wrote the floating-point arithmetic for the interpreter). And it gave an Albuquerque, N.M., number to call.
I love that story.
Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.
18 Sep 2003
Kaspersky Lab, a leading information security expert, announces the detection of the network worm, I-Worm.Swen. This malicious program spreads via email, the Kazaa file sharing network and IRC channels.
Infected messages appear to have been sent from various Microsoft services, including MS Technical Assistance, Microsoft Internet Security Section, etc. Message text advises users to install a "special patch" from Microsoft. The "patch" is included as an attachment.
Swen exploits the same Internet Explorer vulnerability, detected in March 2001, that was used by many other well-known worms, such as Klez. Thus, once Swen breaks into an undefended machine, it executes itself without any action from the owner.
The new malware program is written in Microsoft Visual C++ and is about 107 KB in size. The worm is activated in one of two ways: when the infected file is executed, or automatically, if the email program contains the IFrame.FileDownload vulnerability. The worm then installs itself into the system and initiates its propagation procedures.
When the attachment is opened for the first time, a window named Microsoft Internet Update Pack appears on the screen and imitates the installation of a patch. At the same time, the malicious code blocks all firewalls and anti-virus software. Then Swen scans the file system of the infected computer and extracts all email addresses, using them to mail itself to all available addresses via a direct connection to an SMTP server. The infected letters are in HTML and include an attachment containing Swen. In some cases, the worm sends copies of itself in .zip or .rar form.
Swen propagates via the Kazaa file-sharing network by copying itself under random names into the file-exchange directory of Kazaa Lite. It also creates a subdirectory in the Windows Temp folder and places several copies of itself there, again under randomly generated names. This directory is then registered in the Windows system registry as a source for the file-sharing system; as a result, the new files created by Swen become available to other Kazaa network users.
Finally, to spread via IRC, the worm scans for installed mIRC clients. If one is detected, Swen modifies the script.ini file by adding its propagation procedures, whereupon the script.ini file sends infected files from the Windows directory to all users who connect to the now-infected IRC channel.
Kaspersky Lab experts currently attribute over 30,000 computer infections worldwide to I-Worm.Swen. The number of infections continues to rise.
The defence against I-Worm.Swen has already been added to the Kaspersky Lab anti-virus database.
Visit the Kaspersky Virus Encyclopedia to view the I-Worm.Swen description.
The Internet of Things
Data from Embedded Systems Will Account for 10% of the Digital Universe by 2020
There have been three major growth spurts for the digital universe in modern memory. The first was when digital camera technology replaced film; the second, when analog telephony went digital; and the third, when TV went digital.
Now comes a fourth growth spurt – the migration of analog functions monitoring and managing the physical world to digital functions involving communications and software telemetry.
Call it the advent of the Internet of Things (IoT). Fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the Internet of Things will drive new consumer and business behavior that will demand increasingly intelligent industry solutions, which, in turn, will drive trillions of dollars in opportunity for IT vendors and even more for the companies that take advantage of the IoT.
Why the IoT heralds a new era of computing is a matter of math. All earlier eras involved the computerization of enterprises or people, of which there are a finite number on the planet. This era involves the computerization of things, adding software and intelligence to objects as varied as cars and toys, airplanes and dishwashers, turbines and dog collars.
Yes, there is a finite number of things – at least big things – that might be computerized. But, by IDC’s count, that number is already approaching 200 billion. And the number of sensors (e.g., the accelerometer in your smart phone) that track, monitor, or feed data to those things is already more than 50 billion, with scientists talking about trillion-sensor networks within 10 years.
Of course, not all of those 200 billion things are actually wired and communicating on the Internet, but some 20 billion are. And, by 2020, this number will grow by 50% to 30 billion connected devices.
IDC describes the IoT as a network connecting devices (things), wired or wireless, that is characterized by automatic provisioning, management, and monitoring. It is innately analytical and integrated, and includes not just intelligent systems and devices, but also connectivity enablement; platforms for device, network, and application enablement; analytics and social business; and applications and vertical industry solutions. It is more than traditional machine-to-machine communication. Indeed, it is more than the traditional Information and Communications Technology (ICT) industry itself.
The IoT will, in fact, subsume the ICT industry over time – and to good effect. The compound annual growth rate for spending on traditional ICT from 2013 to 2020 is just under 4%. Vendor revenues tied to the part of IoT that are not already in traditional ICT spending will grow at three times that rate. And that’s just the revenue to the supply side. To the buyers and users of IoT technology and services, the payoff should be at least twice that – perhaps many more times.
The IoT, however, comes with its own challenges, including a lack of standards, the ability to scale globally, security concerns, and an immature ecosystem. For vendors, there is no homogeneous IoT market – each industry and application is different. For users, especially IT organizations, there can be issues of managing operational systems in an organization that might be culturally designed as a support organization, as well as dealing with the real-time demands of many IoT applications.
EMC and IDC see the IoT creating new opportunities for business in five main ways by enabling:
New business models
The IoT will help companies create new value streams for customers, speed time to market, and respond more rapidly to customer needs.
Real-time information on mission-critical systems
Enterprises can capture more data about processes and products more quickly and radically improve market agility.
Diversification of revenue streams
The IoT can help companies monetize additional services on top of traditional lines of business.
Global visibility
The IoT will make it easier for enterprises to see inside the business, including tracking from one end of the supply chain to the other, which can lower the cost of doing business in far-flung locales.
Efficient, intelligent operations
Access to information from autonomous endpoints will allow organizations to make on-the-fly decisions on pricing, logistics, and sales and support deployment.
The impact of the IoT is already visible in the digital universe. Data just from embedded systems – the sensors and systems that monitor the physical universe – already accounts for 2% of the digital universe. By 2020 that will rise to 10%.
There is one final way to look at the importance of embedded systems – the load they will put on the IoT in terms of management. Computers don’t just have to manage megabytes, they also have to manage the containers, or software-based digital “files” that the megabytes come in. Some containers are big, like a digital camera image or a 30-minute loop on a surveillance camera. But some are small. RFID tags and sensor “containers” may contain as few as 32 bytes. Because of this small signal size, the number of containers that must be managed from these embedded systems will dominate the digital universe in 2020. We’re talking 99% of all “files” in the digital universe.
Because of the growth of embedded systems data in the digital universe, the number of "containers" is growing faster than the number of petabytes, rising from 28 quadrillion in 2010 to 4,200 quadrillion in 2020.
Finally, a good portion of the digital universe will be generated by mobile devices and people – from 17% in 2013 to 27% in 2020 – but the percentage of mobile “things” in the IoT will be more than 75% by 2020.
Every few years in this industry we witness the emergence of a new “next big thing.” The IoT is surely the next big thing in 2014. | <urn:uuid:91525ac2-d88b-4968-8163-372836605b37> | CC-MAIN-2017-04 | https://www.emc.com/leadership/digital-universe/2014iview/internet-of-things.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00423-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934017 | 1,218 | 2.75 | 3 |
New NAND flash manufacturing technology being introduced by IM Flash Technologies could lead to solid-state hard drives for notebooks.
IM Flash Technologies is boosting the capacity of its NAND flash memory chips, pushing the chips another step closer toward being able to take a starring role in PCs.
The Intel-Micron Technology joint venture,
based in Lehi, Utah, announced July 25 that it has begun sampling 4-gigabit NAND flash memory chips manufactured using a new, 50-nanometer process.
The chips, which will be available in larger quantities in coming months, can store about twice as much data as those produced a year ago in 2005. Of the two types of flash memory, NAND and NOR, NAND is used most often to store large amounts of data.
Higher data storage capacity or density, largely determined by the number of memory cells present in a standard-sized flash memory chip, allows manufacturers to make music players that can store more songs or cell phones that can take more pictures. But it's also the key to creating reasonably affordable solid-state hard drives for use in notebook PCs, memory makers say.
"The value today of this announcement is really the capability of this process being ready to go," said Brian Shirley, vice president of memory at Micron, in Boise, Idaho. "This 50-nanometer process will allow us, here, fairly shortly to do a lot of other very advanced densities
Things like 8-gigabits."
Click here to read more about how flash memory can be used in PCs.
Finer manufacturing processes (IM Flash Technologies is moving from 72-nanometer to 50-nanometer manufacturing) shrink the size of the features inside the chips, allowing memory makers to reduce the size of each flash memory cell.
That, in turn, increases the number of memory cells a standard-sized chip can fit, allowing it to pack in more data. Given that memory chip prices tend to stay in the same ballpark, the overall cost for storing a megabyte of data falls as capacities increase.
The 50-nanometer manufacturing, which is scheduled to reach high volumes in 2007, will help to accelerate increases in NAND chip densities, speeding the arrival of chips with capacities as high as 16G bits. Previously, IMFT had been limited to producing 8G-bit chips. A 16G-bit chip can hold about 2GB of data, while an 8G-bit chip holds about 1GB, Shirley said.
Later, using even finer manufacturing technologies for a 35-nanometer process will again double NAND flash densities to 32G bits, allowing a single chip to hold 4GB of data. "What we're excited about are the opportunities available in things like replacing hard drives with these kinds of densities," Shirley said. "I'm a firm believer it'll happen sooner [rather] than later."
Read more here about IMFT's relationship with nanotechnology startup Nanosys.
Stringing together several NAND chips would deliver reasonably sized notebook hard drives that offered reductions in power consumption and increases in performance and reliability, given that solid-state drives do not contain moving parts, he said.
Indeed, at 16G-bit densities, it would take 20 NAND chips to equal a 40GB drive. But only 10 32G-bit chips would do the same job. The progression will help cut the costs of building solid-state drives, making them more attractive for use in notebooks, for example.
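The chip-count arithmetic above is easy to sanity-check (a simplified sketch that ignores drive formatting overhead and spare blocks):

```python
# Back-of-envelope check of the chip counts quoted above.
# NAND densities are quoted in gigabits; divide by 8 to get gigabytes per chip.

def chips_needed(drive_gb, chip_gbits):
    gb_per_chip = chip_gbits // 8
    return drive_gb // gb_per_chip

# A 40GB drive needs 20 chips at 16G-bit density but only 10 at 32G-bit,
# matching the figures in the article.
```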
Thus, "To some extent, what the world is waiting for is the capability to buy these NAND devices at a high volume," Shirley said.
Micron has hinted that it may have plans to produce its own solid-state drives, following its June 2006 acquisition of Lexar Media. Lexar Media is known for making memory devices.
But Micron isn't the only chip maker pursuing solid-state drives. Samsung, for one, has already announced plans to produce solid-state hard drives.
Meanwhile, NAND flash is finding its way into PCs in other ways, namely inside hybrid hard drives, which incorporate smaller amounts of flash memory.
Hybrid hard drives, which incorporate 64MB or 128MB of flash as a buffer for storing files, are expected to be produced by numerous drive makers and used widely in notebooks in 2007. The onboard memory is said to speed boot times and increase battery life by allowing a notebook to turn off its hard drive for long periods of time while operating on battery power.
Intel, too, has devised a flash booster for notebooks, dubbed Robson Technology. Robson places a flash memory module on a notebook's motherboard. It will be a feature of Santa Rosa, a new notebook chip platform due from Intel in the first half of 2007.
Still, solid-state hard drives aren't a shoo-in. Traditional hard drives will also continue to increase in density, particularly in smaller sizes. Where notebooks with 2.5-inch and 3.5-inch drives dominated in the past, companies are now producing 1.8-inch, 1-inch and smaller drives for a variety of devices.
Flash memory also faces challenges in manufacturing. Technical issues must be solved to push past the 35-nanometer manufacturing mark, Shirley said.
However, he said, "I tend to not be too worried about those questions. I've watched the DRAM [dynamic RAM] industry for the last 15 years. The cliff edge of doom was always right around the corner and somehow we managed to find the answer, sometimes just in time, but the answers were out there."
IMFT is already working on some new technology. On June 29, it expanded its collaboration with Nanosys, a Palo Alto, Calif., nanotechnology startup thats designing microscopic nanowires that could be used to boost the capacities of NAND flash memory chips.
Shirley declined to give specifics on IMFT's new 4G-bit chips, including their prices, citing confidential agreements with customers.
Check out eWEEK.com for the latest news in desktop and notebook computing.
Everything You Want to Know about the Cryptography behind SSL Encryption
SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser; or a mail server and a mail client (e.g., Outlook). It allows sensitive information such as credit card numbers, social security numbers, and login credentials to be transmitted securely. To establish this secure connection, the browser and the server need an SSL Certificate.
But how is this accomplished? How is data encrypted so that no one—including the world's biggest supercomputers—can crack it?
This article explains the technology at work behind the scenes of SSL encryption. It covers asymmetric and symmetric keys and how they work together to create an SSL-encrypted connection. It also covers different types of algorithms that are used to create these keys—including the mathematical equations that make them virtually impossible to crack.
Not sure you understand the basics of SSL Certificates and technology? Learn about SSL Certificates >>
Asymmetric encryption (or public-key cryptography) uses a separate key for encryption and decryption. Anyone can use the encryption key (public key) to encrypt a message. However, decryption keys (private keys) are secret. This way only the intended receiver can decrypt the message. The most common asymmetric encryption algorithm is RSA; however, we will discuss algorithms later in this article.
Asymmetric keys are typically 1024 or 2048 bits. However, keys smaller than 2048 bits are no longer considered safe to use. 2048-bit keys have enough unique encryption codes that we won’t write out the number here (it’s 617 digits). Though larger keys can be created, the increased computational burden is so significant that keys larger than 2048 bits are rarely used. To put it into perspective, it would take an average computer more than 14 billion years to crack a 2048-bit certificate. Learn more >>
Symmetric encryption (or pre-shared key encryption) uses a single key to both encrypt and decrypt data. Both the sender and the receiver need the same key to communicate.
Symmetric key sizes are typically 128 or 256 bits—the larger the key size, the harder the key is to crack. For example, a 128-bit key has 340,282,366,920,938,463,463,374,607,431,768,211,456 encryption code possibilities. As you can imagine, a ‘brute force’ attack (in which an attacker tries every possible key until they find the right one) would take quite a bit of time to break a 128-bit key.
Whether a 128-bit or 256-bit key is used depends on the encryption capabilities of both the server and the client software. SSL Certificates do not dictate what key size is used.
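The size of these key spaces is easy to check. The sketch below (pure Python; the trillion-guesses-per-second rate is an arbitrary assumption for illustration, not a measured figure) reproduces the 128-bit count quoted above and estimates the expected brute-force time:

```python
def key_space(bits):
    # Number of possible keys of the given length.
    return 2 ** bits

def brute_force_years(bits, guesses_per_second=1e12):
    # Expected years to find a key by trying half the key space,
    # at an assumed (very generous) guessing rate.
    seconds = key_space(bits) / 2 / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)
```

key_space(128) reproduces the 39-digit figure in the text, and even at a trillion guesses per second the expected search time for a 128-bit key is on the order of 10^18 years.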
Which Is Stronger?
Since asymmetric keys are bigger than symmetric keys, data that is encrypted asymmetrically is tougher to crack than data that is symmetrically encrypted. However, this does not mean that asymmetric keys are better. Rather than being compared by their size, these keys should be compared by the following properties: computational burden and ease of distribution.
Symmetric keys are smaller than asymmetric, so they require less computational burden. However, symmetric keys also have a major disadvantage—especially if you use them for securing data transfers. Because the same key is used for symmetric encryption and decryption, both you and the recipient need the key. If you can walk over and tell your recipient the key, this isn’t a huge deal. However, if you have to send the key to a user halfway around the world (a more likely scenario) you need to worry about data security.
Asymmetric encryption doesn’t have this problem. As long as you keep your private key secret, no one can decrypt your messages. You can distribute the corresponding public key without worrying who gets it. Anyone who has the public key can encrypt data, but only the person with the private key can decrypt it.
How SSL Uses both Asymmetric and Symmetric Encryption
Public Key Infrastructure (PKI) is the set of hardware, software, people, policies, and procedures that are needed to create, manage, distribute, use, store, and revoke digital certificates. PKI is also what binds keys with user identities by means of a Certificate Authority (CA). PKI uses a hybrid cryptosystem and benefits from using both types of encryption. For example, in SSL communications, the server’s SSL Certificate contains an asymmetric public and private key pair. The session key that the server and the browser create during the SSL Handshake is symmetric. This is explained further in the diagram below.
- Server sends a copy of its asymmetric public key.
- Browser creates a symmetric session key and encrypts it with the server's asymmetric public key. Then sends it to the server.
- Server decrypts the encrypted session key using its asymmetric private key to get the symmetric session key.
- Server and Browser now encrypt and decrypt all transmitted data with the symmetric session key. This allows for a secure channel because only the browser and the server know the symmetric session key, and the session key is only used for that session. If the browser was to connect to the same server the next day, a new session key would be created.
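Once both sides hold the same session key, step 4 is ordinary symmetric encryption. The toy sketch below shows only that idea: it derives a keystream from the shared key with SHA-256 (a stand-in for a real cipher such as AES, not something to use in practice) and XORs it with the data, so the same function both encrypts and decrypts:

```python
import hashlib

def keystream(key, length):
    # Stretch the shared session key into `length` pseudo-random bytes by
    # hashing the key with a counter (toy construction, NOT a real cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def sym_crypt(key, data):
    # XOR the data with the keystream; applying the function twice with the
    # same key restores the original, so it both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

session_key = b"0123456789abcdef"  # in real SSL this comes from the handshake
ciphertext = sym_crypt(session_key, b"card number 4111...")
plaintext = sym_crypt(session_key, ciphertext)
```

Only a party holding the same session key recovers the plaintext; anyone decrypting with a different key gets garbage, which is what makes the channel private for that session.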
Public-Key Encryption Algorithms
Public-key cryptography (asymmetric) uses encryption algorithms like RSA and Elliptic Curve Cryptography (ECC) to create the public and private keys. These algorithms are based on the intractability* of certain mathematical problems.
With asymmetric encryption it is computationally easy to generate public and private keys, encrypt messages with the public key, and decrypt messages with the private key. However, it is extremely difficult (or impossible) for anyone to derive the private key based only on the public key.
RSA is based on the presumed difficulty of factoring large integers (integer factorization). Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that no efficient algorithm exists for integer factorization.
A user of RSA creates and then publishes the product of two large prime numbers, along with an auxiliary value, as their public key. The prime factors must be kept secret. Anyone can use the public key to encrypt a message, but only someone with knowledge of the prime factors can feasibly decode the message.
RSA stands for Ron Rivest, Adi Shamir, and Leonard Adleman, the men who first publicly described the algorithm in 1977.
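A toy version of that key-generation and encrypt/decrypt cycle makes the mechanics concrete. The primes below are absurdly small (real RSA moduli run to hundreds of digits, and real implementations add padding schemes such as OAEP, which this sketch omits); the modular-inverse call requires Python 3.8+:

```python
# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n (3120)
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: (e * d) % phi == 1

def encrypt(m):
    # Anyone holding the public key (n, e) can encrypt.
    return pow(m, e, n)

def decrypt(c):
    # Only the holder of the private exponent d can decrypt.
    return pow(c, d, n)
```

Encrypting m = 65 gives 2790, and decrypting 2790 recovers 65. The security of the scheme rests entirely on an attacker being unable to factor n back into p and q, which is trivial here but infeasible at real key sizes.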
Elliptic curve cryptography (ECC) relies on the algebraic structure of elliptic curves over finite fields. It is assumed that discovering the discrete logarithm of a random elliptic curve element in connection to a publicly known base point is impractical.
The use of elliptic curves in cryptography was suggested by both Neal Koblitz and Victor S. Miller independently in 1985; ECC algorithms entered common use in 2004.
The advantage of the ECC algorithm over RSA is that the key can be smaller, resulting in improved speed and security. The disadvantage lies in the fact that not all services and applications are interoperable with ECC-based SSL Certificates.
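The group arithmetic behind ECC can be shown on a toy curve. The sketch below uses the textbook curve y² = x³ + 2x + 2 over the 17-element prime field (real ECC uses fields of roughly 256 bits); a public key is k·G for a secret scalar k, and recovering k from k·G is the discrete-logarithm problem, which is trivial at this size but believed infeasible at cryptographic sizes:

```python
# Toy elliptic-curve arithmetic on y^2 = x^3 + 2x + 2 over GF(17).
P, A = 17, 2   # field prime and the curve coefficient a
O = None       # the point at infinity, the group's identity element

def ec_add(p1, p2):
    # Standard chord-and-tangent addition of two curve points.
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                        # vertical line: inverse points
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mult(k, pt):
    # Double-and-add scalar multiplication: computes k * pt.
    result = O
    while k:
        if k & 1:
            result = ec_add(result, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return result

G = (5, 1)  # a base point on the curve
```

On this curve G generates a group of 19 points, so ec_mult(19, G) returns the identity, and every multiple of G lands back on the curve.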
Pre-Shared Key Encryption Algorithms
Pre-shared key encryption (symmetric) uses algorithms like Twofish, AES, or Blowfish, to create keys—AES currently being the most popular. All of these encryption algorithms fall into two types: stream ciphers and block ciphers. Stream ciphers apply a cryptographic key and algorithm to each binary digit in a data stream, one bit at a time. Block ciphers apply a cryptographic key and algorithm to a block of data (for example, 64 sequential bits) as a group. Block ciphers are currently the most common symmetric encryption algorithm.
*Note: Problems that can be solved in theory (e.g., given infinite time), but which in practice take too long for their solutions to be useful are known as intractable problems. | <urn:uuid:375e0cff-5426-4ed3-895f-7f7ff1dcefc4> | CC-MAIN-2017-04 | https://www.digicert.com/ssl-cryptography.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00204-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923013 | 1,695 | 4.09375 | 4 |
Storing data in SQLite databases
SQLite is a relational database management system. With a code footprint of roughly 300 KB, SQLite is well suited to small devices such as smartphones. In addition, SQLite makes efficient use of memory, disk space, and disk bandwidth; and SQLite databases require no maintenance from a database administrator.
To create and use SQLite databases in your Java app, you must use the Database API, which is the net.rim.device.api.database package.
The Database API implements SQLite databases in a way that might be slightly different from what you're used to. To improve efficiency, SQLite runs as a service on the BlackBerry smartphone. Database operations use a runtime bridge to transfer data between Java and native code. For more information, see Performance of SQLite databases.
BlackBerry Device Software 7 uses SQLite 3.7.2. For more information about SQLite, see www.sqlite.org.
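The net.rim.device.api.database package is BlackBerry-specific, but the SQL it executes is ordinary SQLite, so the create/insert/query cycle can be tried on a desktop with Python's built-in sqlite3 module (the contacts table here is an invented example, not part of the BlackBerry API):

```python
import sqlite3

# In-memory database for the demo; on a device this would be a file,
# for example on the media card.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
conn.execute("INSERT INTO contacts VALUES (?, ?)", ("Ada", "555-0100"))
conn.commit()

rows = conn.execute("SELECT name, phone FROM contacts").fetchall()
conn.close()
```

The same CREATE/INSERT/SELECT statements run unchanged through the BlackBerry Database API's Statement objects.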
Viewing SQLite databases
SQLite database viewers are available from third-party vendors. These viewers can be useful aids to your database development process. Database viewers are especially useful for viewing changes to a database. When you run an SQL statement, you can see the result in the database viewer immediately.
An SQLite database viewer runs on your computer, not on the smartphone. To use the viewer, configure the BlackBerry Smartphone Simulator to emulate a microSD card. When you run your application, your database viewer reads the database from a folder on your computer.
SQLite database viewers cannot work on encrypted databases. You can encrypt the database after you complete your SQLite application.
Simulate a media card
To view SQLite databases in a database viewer, you might have to configure the BlackBerry Smartphone Simulator to emulate a media card. By default, database files are stored on a media card.
1. Create a folder on your computer to store emulation files for the media card.
2. In the BlackBerry Smartphone Simulator, on the Simulate menu, click Change SD Card.
3. Click Add Directory.
4. Navigate to and click the folder you created.
Google Earth Engine will provide satellite images to help scientists see how forests are changing over time. The idea is to stunt deforestation in developing countries.
Google Dec. 2 rolled out a new cloud-based computing platform that puts past
and present satellite imagery online to gauge changes in Earth's environment.
Introduced at the International Climate Change Conference in Cancun,
is intended to help scientists detect how forests are changing
over time using trillions of images collected by U.S.
and French satellites over the last 25 years.
With the data, scientists may build applications for detecting deforestation
and mapping land use trends in developing nations such as Brazil,
central Africa and the Amazon.
In turn, the data could help these nations better allocate resources for
disaster response or water resource mapping, Google Earth Engine Engineering
Manager Rebecca Moore said in a blog post
Google Earth Engine leverages Google's parallel cloud of servers "to
cope with the massive scale of satellite imagery archives, and the
computational resources required for their analysis."
The company's Google.org philanthropic division will donate 10 million CPU
hours a year over the next two years on the Google Earth Engine platform to
help world nations track the state of their forests.
This, Google believes, will help nations prepare for the Reducing Emissions
from Deforestation and Forest Degradation in Developing Countries framework
proposed by the United Nations to provide financial incentives for protecting
forests all over the world.
Protecting forests, whose trees provide the oxygen support system humans
rely on, is crucial. Deforestation accounts for 12 to 18 percent of annual
greenhouse gas emissions, and the world loses 32 million acres of tropical
forests every year.
Google is encouraging scientists to use Google's Earth Engine API
to bring their applications online for deforestation, disease mitigation,
disaster response and water resource mapping, among other climate-related
The Earth Engine API is currently
available to a small group of partners but will be available more broadly later. | <urn:uuid:79401517-bc5a-4d21-ab8f-7209afbf6a06> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Cloud-Computing/Google-Earth-Engine-Launches-as-Cloud-Climate-Platform-193059 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00020-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863835 | 416 | 3.359375 | 3 |
This fee is a monthly charge to customers or other telephone companies by a local telephone company for the use of its local network.
What are access rates?
When you make a long distance call, your long distance carrier must pay the local exchange carrier (CenturyLink) for starting, or originating, the call. In addition, your long distance carrier must pay the telephone company that provides local service to the person you are calling to complete the call. The charges that the long distance provider pays to CenturyLink to originate the call and the other local exchange company to complete the call are referred to as "access," as in the access the long distance provider must get to the local network.
Why is this charge on my bill even if I don't make long distance calls?
Local exchange companies incur significant costs to provide service to their customers. The access revenues that local exchange companies receive from long distance companies help offset some of those costs to keep the cost of local service affordable. As the costs are associated with local service, the FCC determined that it was appropriate to allow local exchange carriers to recover a portion of the lost access revenues from their customers.
What reductions to access rates are being made?
In an Order released on November 18, 2011, the FCC required local exchange carriers to reduce the rates they charge to long distance companies to complete or terminate long distance calls. The current system that is used by long distance companies to compensate local companies for use of the local network was first established in 1984, after the AT&T divestiture, and at a time when there was no competition for local service. Much has changed since 1984 and the FCC recognized that this system did not work well when there is competition for local service. In addition, the FCC believes the current system may make it difficult to develop and use new technologies, such as internet protocol networks.
So the long distance carriers are getting reductions in their costs, but my costs are increasing because of the ARC? Why is this fair?
The access revenues that local exchange carriers receive from the long distance carriers help offset the cost of providing local service. The FCC determined that the customer chooses to place a long distance call and the long distance carrier that is used; therefore the customer should bear more of the cost.
Why aren't cable companies implementing an ARC?
The FCC's order provided that only incumbent local exchange carriers may recover a portion of the lost access revenues from an ARC charge. While cable companies will also see access reductions, no provision was made to permit cable companies to implement an ARC charge. | <urn:uuid:eccf6389-e635-4154-9656-9d2e01745681> | CC-MAIN-2017-04 | http://www.centurylink.com/home/help/billing/overview-of-taxes-and-fees/access-recovery-charge-arc-explained.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953001 | 515 | 2.546875 | 3 |
Technology these days changes so fast that it's hard to remember what life was like back when this technology didn't exist. Luckily, we have YouTube, to remind us what the ancient times looked like and how "exciting" some new concepts were.
Like in this ancient 1981 clip, a San Francisco TV news report (KRON) about how newspapers like the San Francisco Chronicle were developing "electronic newspapers" for customers to read. Like all reporting on new technology, note the tone of the reporters when they talk about this exciting new development:
My favorite parts of this clip include the following:
* The intro: "Imagine if you will sitting down to your morning coffee, turning on your home computer to read the day's newspaper. Well, it's not as far-fetched as it may seem." Sure, in 1981 people were experiencing the home computer, but it's the tone of the newscaster that I love - I keep hoping that some day I'll turn on the news and the newsreader says, "Imagine if you will a car that lets you travel into the future or into the past. It's not as far-fetched as it may seem…"
* 0:25 - The blood-red rotary dial telephone being used to connect.
* 0;48 - "With the exception of the pictures, ads and the comics. Well, at least they were able to fix that quickly.
* 1:04 - "We're not in it to make money" - well, maybe that's the attitude that eventually killed the newspaper in this century.
* 1:25 - "The newspaper isn't as spiffy as the ads imply". Hee. It wasn't very spiffy in the ad.
* 1:33 - The title identifying Richard Halloran - "Owns Home Computer". Priceless.
* 1:54 - "Engineers now predict the day will come when we get all our newspapers and magazines by home computer." Interestingly, with tablets, even that statement is now ancient.
* 2:01 - "For the moment at least, this fellow isn't worried about being out of a job." Well, at least for another 25 years or so.
Read more of Keith Shaw's ITworld.TV blog and follow the latest IT news at ITworld. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:776924c1-93dc-4757-a9c3-50ec3c1ed39f> | CC-MAIN-2017-04 | http://www.itworld.com/article/2729766/networking/time-machine--newspapers-on-a-computer-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96528 | 507 | 2.71875 | 3 |
The magnetic read/write heads are the workhorses of the hard drive. These components work harder than any other part of the hard drive and can be the fastest to wear out. They can also be damaged by physical trauma. If your hard drive’s read/write heads have failed and your hard drive is clicking, our heads failure data recovery services can reunite you with your data.
What Are the Read/Write Heads?
An extreme close-up of a magnetic read/write head resting on the mirrored surface of a platter (source)
Every read and write operation your hard drive performs gets carried out by its read/write heads. The heads are tiny electrically-charged coils of copper wire. They are capable of picking up and altering the electrical signals produced by the billions of magnetic sectors on your hard drive’s platters. These electrical signals are the data that lives on your drive.
The components of your hard drive are constantly in motion. The spindle motor spins your drive’s platters at thousands of revolutions per minute. While the platters spin below them, the heads are constantly darting across the radius of the platters.
Hard drives today have many platters stacked on top of each other. The read/write heads are also arranged in a stack. Each head reads data from one surface of the platters. Some hard drives have more platters than they have heads. Often, the only physical difference between a 500-gigabyte Western Digital hard drive and a 750-gigabyte Western Digital hard drive will be that one has less heads than the other.
The heads hover over the platters. A thin cushion of air produced by the platters’ rotational speeds helps keep them aloft. The read/write heads maintain a distance a scant few nanometers from the platters. With every read and write command, the arms which hold the heads over the platters shift to position them over the correct sector. This is tiring work, and the read/write heads are very fragile. Over time, they begin to wear down. In the early stages of failure, heads can mess up their read or write operations. Users will start to hear the hard drive click of death occasionally. The hard drive is clicking because it is failing to detect the magnetic tracks that encapsulate the binary data. Attempting to write new data with clicking heads can result in file corruption. If a set of read/write heads writes the wrong data to a firmware sector, it can turn your hard drive into a brick.
Read/write heads are also a common point of failure in hard drives. An excessive physical trauma or sudden loss of power can cause the heads to impact the platters. This will often destroy the heads. The platters, too, are somewhat fragile, and can sustain damage this way. Clicking hard drives that are having consistent i/o errors should be shut down if the data on the drive is critical and not backed up.
Failed heads don’t just keep you from accessing the data on your hard drive. Damaging the platters can lead to irretrievable data loss. If your hard drive begins to make a steady ticking noise or grinding noise, stop running it immediately. This could be a sign that its heads are damaging the platters. Our heads failure data recovery technicians have the tools and experience to salvage your data from your crashed hard drive.
How Does Heads Failure Data Recovery Work?
The complete headstack from a Western Digital WD2500JS-00MHB0 hard drive (source)
Replacing read/write heads is a tall order. Read/write heads are delicate, easy to destroy, and hard to replace. It takes an engineer with steady hands and keen eyes to remove one set of heads and replace it with another without damaging something.
Every model of hard drive has read/write heads that are just a bit different from each other. At Gillware, we keep a massive library of donor hard drives on hand. Just about every model of hard drive is somewhere in this library. And yet compatibility issues still arise. Hard drives are so sophisticated nowadays that even drives of the same model will be subtly different from each other. The calibration parameters will differ even on two drives that came off of the assembly line within seconds of each other. And so even with ample donors, finding a good set of heads can require trial and error.
It may take a few tries, and we may have to manipulate with the hard drive firmware a bit, but eventually, our engineers will find a set of heads that works. They may not work optimally, but even a stopgap solution is better than nothing. Our proprietary data recovery tool HOMBRE can manipulate the performance of a hard drive in a way that normal data recovery software can’t. This gives us a high degree of flexibility. Even if a newly-repaired hard drive’s performance leaves much to be desired, our engineers can still salvage data from it.
In many heads failure data recovery scenarios, the heads will need to be replaced several times. With each set of read/write heads, our engineers read more and more of the hard drive. Our focus is on reading as much of the used area as possible. Eventually, the condition of the hard drive may degrade to the point where no more data can be read off of its platters. When a hard drive fails, it is not long for this world. Our engineers always go after the user’s most critical files first for this reason.
Why Choose Gillware for My Heads Failure Data Recovery Needs?
At Gillware, we understand that hard drives often fail without warning. Few people plan for the expense of data recovery. And in general, people don’t like spending money and getting nothing in return. This is why Gillware’s data recovery evaluations are completely free. In fact, we even offer to cover the cost of inbound shipping. We can create prepaid UPS Ground labels for any clients in the continental US.
Our engineers assess the hard drive’s condition in our Class-100 cleanroom workstations. We inspect the read/write heads and data storage platters for damage. We determine whether the platters need burnishing and how difficult it will be to replace the read/write heads. We use all this data to formulate a price quote and probability of success.
The price quote is not a bill. You have a chance to review the cost and possibility of success before signing off on any recovery work. If you approve the quote, our heads failure data recovery engineers put in the work to salvage your data. We only charge you for our efforts after we’ve successfully recovered your critical data. If we can’t recover everything from your drive, we show you a list of results so that you can help us determine how successful our efforts have been. If we do not recover your most important data, you owe us nothing.
Our read/write head data recovery engineers are highly-skilled and world-class. We have data recovery tools no other data recovery lab in the world has. We will do everything in our power to recover your data. When you send your failed hard drive to Gillware, you can rest assured that your data is in good hands.
Ready for Gillware to Assist You with Your Heads Failure Data Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network. | <urn:uuid:75f9eb64-25ce-4c18-9863-dba4e5d665b2> | CC-MAIN-2017-04 | https://www.gillware.com/readwrite-heads-failure-data-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93319 | 2,072 | 3.375 | 3 |
Milbrath M.O.,Michigan State University |
van Tran T.,Michigan State University |
van Tran T.,Bee Research and Development Center |
Huang W.-F.,University of Illinois at Urbana - Champaign |
And 4 more authors.
Journal of Invertebrate Pathology | Year: 2015
Honey bees (Apis mellifera) are infected by two species of microsporidia: Nosema apis and Nosema ceranae. Epidemiological evidence indicates that N. ceranae may be replacing N. apis globally in A. mellifera populations, suggesting a potential competitive advantage of N. ceranae. Mixed infections of the two species occur, and little is known about the interactions among the host and the two pathogens that have allowed N. ceranae to become dominant in most geographical areas. We demonstrated that mixed Nosema species infections negatively affected honey bee survival (median survival= 15-17 days) more than single species infections (median survival = 21 days and 20 days for N. apis and N. ceranae, respectively), with median survival of control bees of 27 days. We found similar rates of infection (percentage of bees with active infections after inoculation) for both species in mixed infections, with N. apis having a slightly higher rate (91% compared to 86% for N. ceranae). We observed slightly higher spore counts in bees infected with N. ceranae than in bees infected with N. apis in single microsporidia infections, especially at the midpoint of infection (day 10). Bees with mixed infections of both species had higher spore counts than bees with single infections, but spore counts in mixed infections were highly variable. We did not see a competitive advantage for N. ceranae in mixed infections; N. apis spore counts were either higher or counts were similar for both species and more N. apis spores were produced in 62% of bees inoculated with equal dosages of the two microsporidian species. N. ceranae does not, therefore, appear to have a strong within-host advantage for either infectivity or spore growth, suggesting that direct competition in these worker bee mid-guts is not responsible for its apparent replacement of N. apis. © 2014 Elsevier Inc. Source
Beaurepaire A.L.,Martin Luther University of Halle Wittenberg |
Truong T.A.,Bee Research and Development Center |
Fajardo A.C.,University of the Philippines at Los Banos |
Dinh T.Q.,Bee Research and Development Center |
And 3 more authors.
PLoS ONE | Year: 2015
The ectoparasitic mite Varroa destructor is a major global threat to the Western honeybee Apis mellifera. This mite was originally a parasite of A. cerana in Asia but managed to spill over into colonies of A. mellifera which had been introduced to this continent for honey production. To date, only two almost clonal types of V. destructor from Korea and Japan have been detected in A. mellifera colonies. However, since both A. mellifera and A. cerana colonies are kept in close proximity throughout Asia, not only new spill overs but also spill backs of highly virulent types may be possible, with unpredictable consequences for both honeybee species. We studied the dispersal and hybridisation potential of Varroa from sympatric colonies of the two hosts in Northern Vietnam and the Philippines using mitochondrial and microsatellite DNA markers. We found a very distinct mtDNA haplotype equally invading both A. mellifera and A. cerana in the Philippines. In contrast, we observed a complete reproductive isolation of various Vietnamese Varroa populations in A. mellifera and A. cerana colonies even if kept in the same apiaries. In light of this variance in host specificity, the adaptation of the mite to its hosts seems to have generated much more genetic diversity than previously recognised and the Varroa species complex may include substantial cryptic speciation. Copyright: © 2015 Beaurepaire et al. Source
Forsgren E.,Swedish University of Agricultural Sciences |
Wei S.,Chinese Academy of Agricultural Sciences |
Guiling D.,Chinese Academy of Agricultural Sciences |
Zhiguang L.,Chinese Academy of Agricultural Sciences |
And 5 more authors.
Apidologie | Year: 2015
Populations of Apis mellifera and Apis cerana in China and Vietnam were surveyed in order to study possible pathogen spill-over from European to Asian honeybees. This is the first survey of the prevalence of honeybee pathogens in apiaries in Vietnam, including pathogen prevalence in wild A. cerana colonies never in contact with A. mellifera. The bee samples were assayed for eight honeybee viruses: deformed wing virus (DWV); black queen cell virus (BQCV); sac brood virus (SBV); acute bee paralysis virus (ABPV); Kashmir bee virus (KBV); Israeli acute paralysis virus (IAPV); chronic bee paralysis virus (CBPV); and slow bee paralysis virus (SBPV), for two gut parasites (Nosema ssp.) and for the causative agent for European foulbrood (Melissococcus plutonius). The Vietnamese samples were assayed for Acarapis woodi infestation. No clear evidence of unique inter-specific transmission of virus infections between the two honeybee species was found. However, in wild A. cerana colonies, the only virus infection detected was DWV. With findings of IAPV infections in Chinese samples of A. cerana colonies in contact with A. mellifera, inter-specific transmission of IAPV cannot be ruled out. BQCV was the most prevalent virus in managed colonies irrespective of bee species. We did not detect the causative agent of European foulbrood, M. plutonius in wild or isolated colonies of A. cerana in Vietnam or China; however, low incidence of this pathogen was found in the Asian host species when in contact with its European sister species. No evidence for the presence of A. woodi was found in the Vietnamese samples. © 2014, The Author(s). Source | <urn:uuid:bdd15ba8-ddd1-4f3a-be59-c77673d503f3> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/bee-research-and-development-center-641396/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916543 | 1,313 | 2.6875 | 3 |
Indira is the administrator for a single-server network at Stationery Supply, Inc. Indira can't rely on replication for fault tolerance, because her environment has only one server. The Backup Tool functionality provides a simple solution for Indira to back up and restore eDirectory. It's server-centric and it's fast.
On eDirectory 8.7.3 or later, Indira sets up unattended backups for her server using batch files that run the Backup Tool.
Indira wants to do a full backup of eDirectory every Sunday night, and an incremental backup every weeknight. She sets the unattended backups to run shortly before her full and incremental file system backups each night, so her tape backups contain the eDirectory backup files as well as the file system data. She has contracted with a remote data storage company to send the tape backups offsite.
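The Sunday-full / weeknight-incremental schedule can be sketched as a small wrapper script run nightly from cron (or, as Indira does, from scheduled batch files on the server). Everything here is illustrative: the paths echo the scenario, and BACKUP_CMD is a stand-in for the real Backup Tool invocation, which varies by eDirectory version.

```shell
#!/bin/sh
# Sketch of an unattended eDirectory backup scheduler.
# BACKUP_CMD is a placeholder -- substitute your actual Backup Tool
# (eMBox/batch file) command; the paths are the ones from the scenario.

BACKUP_DIR=${BACKUP_DIR:-/adminfiles/backup}
BACKUP_CMD=${BACKUP_CMD:-"echo backup"}   # placeholder: prints instead of backing up

# Full backup on Sunday night, incremental every other night.
backup_type() {
    case "$1" in
        Sun) echo full ;;
        *)   echo incremental ;;
    esac
}

run_backup() {
    if [ "$(backup_type "$(date +%a)")" = full ]; then
        $BACKUP_CMD "$BACKUP_DIR/backupfull.bk"
    else
        $BACKUP_CMD "$BACKUP_DIR/backupincr.bk"
    fi
}

# Schedule this shortly before the nightly tape job, e.g. from cron:
#   45 23 * * * /usr/local/bin/edir-backup.sh
run_backup
```

Running the script shortly before the file system backup means each night's tape picks up the fresh eDirectory backup file along with the rest of the data.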
Every Monday morning, Indira checks the backup log to make sure the full backup was successful. She also checks the logs occasionally during the week to make sure the incremental backups were successful.
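That routine log check can be semi-automated with a quick scan. The "completed" success marker below is an assumption about the log format, not the documented one — match it to whatever your Backup Tool version actually writes.

```shell
#!/bin/sh
# Sketch: flag a failed unattended backup from its log.
# The "completed" marker is an assumed log format, not a documented one.

backup_succeeded() {
    grep -qi "completed" "$1"
}

report() {
    if backup_succeeded "$1"; then
        echo "last backup OK"
    else
        echo "last backup FAILED -- investigate before the next cycle"
    fi
}
```

Run it against the log the batch file writes, for example `report /adminfiles/backup/backup.log`.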
Indira decides not to turn on roll-forward logs for the following reasons:
She does not have a separate storage device on her server, so turning on roll-forward logs would not provide any additional backup of eDirectory. If there were a storage device failure, the logs would be lost along with eDirectory, so there is no point in creating them.
The tree does not change very much, and she is satisfied with being able to restore only up to last night's backup. She doesn’t need to be able to restore eDirectory to the moment before a failure.
Because the server does not participate in a replica ring with other servers, roll-forward logs are not required for the restore verification process to be successful.
Stationery Supply, Inc. decides to reorganize the staff, so Indira does a manual backup before and after making significant changes to the tree. Her strategy is to make a new backup of changes during the middle of a weekday when necessary, instead of running roll-forward logs all the time.
To make sure her backup strategy is ready to go when she needs it, Indira tests it occasionally. She doesn’t have the budget to purchase a second server for testing, so she makes arrangements with a test lab in her town. Using a server like hers in the test lab, she installs her operating system and tries to approximate the environment of her eDirectory database. She restores her backups and checks to make sure eDirectory is restored as she expects.
One Wednesday morning, the hard drive containing eDirectory on the server has a failure. Indira obtains a new hard drive and the backup files from the full backup on Sunday evening, the incremental backup on Monday evening, and the incremental backup on Tuesday evening. She installs the new hard drive and installs eDirectory on it. Then she restores the full and incremental backups. Any changes to the tree that were made on Wednesday morning before the hard drive failure are lost because Indira was not running roll-forward logs on the server. But Indira is satisfied with restoring only to last night's backup. She doesn’t feel that running roll-forward logs would be worth the administrative overhead.
Jorge at Outdoor Recreation, Inc. has 10 servers running eDirectory. He does full backups every Sunday night and incremental backups nightly, running the eDirectory backup shortly before the file system backup to tape.
All of the servers participate in a replica ring. Jorge uses roll-forward logging for all the servers. On each of his servers, he has placed the roll-forward logs on a different storage device than eDirectory. He monitors the free space and rights on those storage devices to make sure the roll-forward logs don't fill up the storage device. Occasionally he backs up the roll-forward logs to tape and removes all except the one in use by eDirectory, to free up space.
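Jorge's housekeeping — watch free space on the log device and, after a tape backup, remove every closed roll-forward log while keeping the one eDirectory is still writing — can be sketched like this. The directory path and the 1 GB threshold are assumptions, and "newest file = log in use" is a simplification; confirm which log is active before deleting anything on a real server.

```shell
#!/bin/sh
# Sketch of roll-forward log housekeeping (hypothetical path and threshold).

RFL_DIR=${RFL_DIR:-/adminfiles/rfl}
MIN_FREE_KB=${MIN_FREE_KB:-1048576}    # warn below ~1 GB free

# Warn when the device holding the logs runs low on space.
check_free_space() {
    free_kb=$(df -Pk "$RFL_DIR" | awk 'NR==2 {print $4}')
    if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
        echo "WARNING: roll-forward log device is low on space"
    fi
}

# After the logs are safely on tape: delete all but the newest file,
# which is assumed here to be the log eDirectory still has open.
prune_logs() {
    ls -1t "$RFL_DIR" | tail -n +2 | while read -r f; do
        rm -f "$RFL_DIR/$f"
    done
}
```

Scheduling `check_free_space` alongside the nightly backup gives early warning before the log device fills, which would stop eDirectory from logging transactions.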
The administrative overhead of turning on continuous roll-forward logging is worth it to Jorge, because it gives him the up-to-the-moment backup required for servers that participate in a replica ring. This way, if he needs to restore a server, the restored server will match the synchronization state that other servers in the replica ring expect.
In his test lab, Jorge periodically tests his backup files to make sure his backup strategy will meet his goals.
One Thursday at 2:00 p.m., the Linux server named Inventory_DB1 has a hard drive failure on the drive containing eDirectory.
Jorge needs to gather the last full backup and the incremental backups since then, which will restore the database up to the point of last night's incremental backup at 1:00 a.m. The roll-forward logs have been recording the changes to the database since last night's backup, so Jorge will include them in the restore to bring the database back to the state it was in just before the hard drive failure.
Jorge takes the following steps:
He gets a replacement hard drive for the server.
He gets the tape of the full backup for the server from the previous Sunday night.
The batch file he uses to run full backups every Sunday night places the backup file in /adminfiles/backup/backupfull.bk.
He had specified a file size limit of 200 MB in the backup configuration settings, so the full backup spans two files.
He also gets the tapes containing the incremental backups for Monday, Tuesday, and Wednesday nights.
The batch file he uses to run incremental backups every weeknight places the backup file in /adminfiles/backup/backupincr.bk.
Because he runs the same batch file every weeknight for the incremental backups of eDirectory, they all have the same filename. He needs to give them new names when he copies them back onto the server, because they all must be placed in the same directory during the restore.
Jorge installs the replacement hard drive.
In this case, the Linux operating system for the server was not on the hard drive that failed, so he does not need to install Linux.
Jorge restores the file system from tape backup for the disk partitions that were affected.
Jorge reinstalls eDirectory, putting the server into a new temporary tree (the restore puts it back into the original tree again later).
Jorge creates an /adminfiles/restore directory on the server, to hold the files to be restored.
He copies the full backup (the set of two files) into that directory.
He copies the incremental backups for Monday, Tuesday, and Wednesday nights into the directory.
Each of them is named backupincr.bk, so when he copies them into the directory he changes each filename to a unique name (for example, backupincr.mon.bk, backupincr.tue.bk, and backupincr.wed.bk).
NOTE:Full and incremental backups aren't required to be in the same directory together, but all the incremental backups must be in the same directory.
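The copy-and-rename staging in the steps above can be scripted. The tape mount points and the backupincr.<night>.bk naming scheme are illustrative — any scheme works, as long as every file in the restore directory has a unique name and you give the wizard the names you chose.

```shell
#!/bin/sh
# Sketch: stage same-named nightly incremental backups into one restore
# directory under unique names (paths and naming scheme are assumptions).

RESTORE_DIR=${RESTORE_DIR:-/adminfiles/restore}

# stage_incremental <source-file> <label>  ->  backupincr.<label>.bk
stage_incremental() {
    cp "$1" "$RESTORE_DIR/backupincr.$2.bk"
}

# Example: one call per night's tape (mount points are hypothetical)
# stage_incremental /mnt/tape_mon/backupincr.bk mon
# stage_incremental /mnt/tape_tue/backupincr.bk tue
# stage_incremental /mnt/tape_wed/backupincr.bk wed
```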
He uses iManager to restore eDirectory:
He goes into iManager and clicks eDirectory Maintenance > Restore.
He logs in to the server, using the context of the new temporary tree.
In the Restore Wizard - File Configuration screen, he does the following:
Enters /adminfiles/restore for the location where he placed the backup files.
Enters /adminfiles/restore/restore.log for the location where the restore log should be created.
In the Restore Wizard - Optional screen, he does the following:
Enters the location of the roll-forward logs.
(This is the separate location that he created specifically to hold the roll-forward logs. Because he placed them on a different hard drive than eDirectory, the hard drive failure did not affect them and they are still available.)
Specifies that eDirectory should open if the restore verification is successful.
He starts the restore and enters the filenames of the incremental backup files when prompted.
The restore verification is successful, so the database opens, back in its original tree.
The restore verification was successful because roll-forward logs were running on the server when the hard drive failed, and Jorge included the logs in the restore.
Jorge re-creates the roll-forward logs configuration on the server after the restore is complete, then he creates a new full backup.
The settings are reset to the default during a restore, which means roll-forward logging is turned off, so he has to turn it back on. The new full backup is necessary so that he is prepared for any failures that might occur before the next unattended full backup is scheduled to take place.
Jorge checks the way the server is running, and it appears to be normal.
Bob is the administrator for 15 servers at GK Designs Company. He does full backups every Saturday night and incremental backups nightly, running the eDirectory backup shortly before the file system backup to tape.
All of the servers participate in replica rings. Bob uses roll-forward logging for all the servers.
An electrical fire destroys one of the servers in a branch across town. Fortunately, all but one of the partitions held by this server are also replicated on other servers. Bob had turned on roll-forward logs on that server, but they were lost along with all the other server data, so he can't restore the eDirectory database on that server to the state it was in just before the server went down.
However, he is able to re-create the server's eDirectory identity by restoring from the existing backup files. Because Bob can't include the roll-forward logs in the restore, the server does not match the synchronization state that the other servers expect (see Transitive Vectors and the Restore Verification Process), so the restore verification process is not successful. This means that by default the eDirectory database is not opened after the restore.
Bob addresses the situation by removing this server from the replica ring, using DSRepair to change all the outdated replica information on the server to external references, and then re-adding a new copy of each partition to this server using replication from the other servers that hold the up-to-date replicas. These steps are described in Section 17.7, Recovering the Database If Restore Verification Fails.
The one partition on this server that Bob had not replicated was a container that held network printing objects for the branch office location, such as a fax/printer and a wide-format color printer. This partition information can't be recovered by the method noted above because no other server has a replica. Bob must re-create the objects in that partition, and this time he chooses to replicate them on other servers for better fault tolerance in the future.
Bob also re-creates the roll-forward log configuration after the server is back on line (because the restore turns it off and resets the settings to the default), and creates a new full backup as a baseline.
Joe administers 20 servers across three locations. At one location, a pipe bursts and water destroys 5 out of 8 servers.
Joe has eDirectory backups for all the servers. However, all the servers participate in replica rings, and he is concerned about bringing them back into the tree without the roll-forward logs, which were also lost. He is not sure which servers to restore eDirectory on first or how to address inconsistencies between replicas. Because of the complex issues involved, he calls NetIQ Support for help in deciding how to restore.
Delores and her team at Human Resources Consulting, Inc. administer 50 servers at one location.
For fault tolerance during normal business circumstances, they have created three replicas of each partition of their tree, so that if one server is down, the objects in the partitions it holds are still available from another server. They have also planned for recovery of individual servers by backing up all their servers regularly with the Backup Tool, turning on roll-forward logging, and storing the backup tapes at a remote location.
For disaster recovery planning, Delores and her team have also designated two of their servers as DSMASTER servers. They use two servers because their tree is large enough that more than one DSMASTER server is needed to hold a replica of every partition. Every partition in the tree is replicated on one of the two DSMASTER servers. Neither of the two DSMASTER servers holds replicas of the same partition, so there is no overlap between them. This design is an important part of their disaster recovery plan.
In their test lab, Delores and her team periodically test the backups to make sure their backup strategy will meet their goals.
One night the Human Resources Consulting, Inc. building is damaged by a hurricane, and all the servers in the data center are destroyed.
After this disaster, Delores and her team first restore the two DSMASTER servers, which hold replicas of every partition. They use the last full backup and the subsequent incremental backups, but can't include roll-forward logs in the restore because the logs were lost when the servers were destroyed. Because the two DSMASTER servers were planned so that they do not share replicas, the restore verification process is successful for both servers even without the roll-forward logs. After the DSMASTER servers are restored, all the objects in the tree for Human Resources Consulting, Inc. are available again.
The DSMASTER servers are important because Delores and her team can use them to re-create the tree without inconsistencies after a disaster.
They were using roll-forward logs so they could restore a server to the state it was in at the moment before it went down, bringing it back to the synchronization state expected by other servers in the replica ring. This allows the server to resume communication where it left off, and receive any updates it needs from the other replicas to keep the whole replica ring in sync.
However, in this disaster situation, Delores and her team do not have the roll-forward logs. Without the roll-forward logs, only one server in a replica ring can be restored without errors—the first one they restore. For the rest of the servers, the restore verification process will fail because the synchronization states don't match what the other servers expect (see Transitive Vectors and the Restore Verification Process). If the restore verification fails, the restore process will not activate the restored eDirectory database.
Delores and her team anticipated this, and they have planned for it. They use the two DSMASTER servers as a starting point, which gives them only one replica of each partition. Those servers can be restored without verification errors, and then the replicas they hold can be used as masters to be copied onto all the other servers.
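The reasoning above can be made concrete with a toy model (this is deliberately simplified and is not eDirectory's actual transitive-vector comparison): a server restored without roll-forward logs passes verification only if no already-restored server holds a replica of one of its partitions, which is exactly why the non-overlapping DSMASTER design lets both servers restore cleanly.

```python
def can_restore_cleanly(server, restored_servers, partition_map):
    """True if no previously restored server holds a replica of any
    partition this server holds (no peer exists whose expected sync
    state could conflict with the restored one)."""
    for other in restored_servers:
        if partition_map[server] & partition_map[other]:
            return False
    return True

# Hypothetical layout: the two DSMASTERs are disjoint; SRV3 overlaps both.
partitions = {
    "DSMASTER1": {"P1", "P2"},
    "DSMASTER2": {"P3", "P4"},
    "SRV3": {"P1", "P3"},
}
```

With this layout, both DSMASTER servers restore cleanly in either order, while any other server (such as SRV3) fails verification and must go through the external-reference recovery steps below.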
After restoring the DSMASTER servers, restoring the rest of the servers requires some extra steps. Delores and her team must restore each of the remaining servers by doing the following:
Making sure that the replicas on the DSMASTER servers are designated as master replicas.
Removing all the servers except the DSMASTER servers from the replica ring.
Restoring the full and incremental backups for each of the other servers.
Delores and her team know that the restore verification process will fail for the rest of the servers, because they could not use roll-forward logs in the restore for any of the servers. This leaves them with a restored database that is not activated.
Activating the restored database, but keeping it locked, using advanced restore options.
Using DSRepair to change all the replica information to external references.
Unlocking the restored database.
At this point the server has the same identity it did before but it will not try to synchronize replica information. Instead, it is prepared to receive a new copy of the replicas it held before.
Adding the replicas back on to each server by replicating them from the copy on the DSMASTER server.
Delores and her team have a pretty good idea which replicas were held by each server, but they can read the header of the backup files for each server to see a list of the replicas that were on the server at the time of the last backup.
Re-creating the roll-forward log configuration after the servers are back on line (since the restore turns it off and resets the settings to the default), and creating a new full backup as a baseline to prepare for any other failures that might happen before the next unattended full backup is scheduled.
(These steps are explained in more detail in Section 17.7, Recovering the Database If Restore Verification Fails.)
Delores and her team have a lot of work to do, but they can get the tree itself up relatively quickly, and they can expect to recover the eDirectory identity for all of their servers. | <urn:uuid:830fe24e-0475-4737-9197-09adc2304dce> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/edir88/edir88/data/ahpvs92.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947021 | 3,451 | 2.6875 | 3 |
Fiber Optic Cable has brought a revolution to the data transmission system. The earlier electrical wire system was difficult to manage and sometimes hazardous to life. With the emergence of Fiber Optic Cable, data transmission is no longer an irksome job; it is now simpler and more convenient than ever imagined.
Following Are The Reasons For Choosing Optical Cables For Network Cabling:
Safe To Use: Fiber Cable is far better than copper cable from a safety point of view. Copper and Aluminum Wire are good conductors of electricity and carry electric current, but when their outer insulating coating gets damaged, one can experience an electric shock that can be dangerous to life. In this regard, Fiber Cables are safer to use because they do not transmit current but rather light waves.
Withstand Rough Conditions: Fiber Cable is capable of resisting tough conditions that co-axial and other such cables cannot. The reason is that other cables are usually made of one metal or another and are prone to corrosion, while Fiber Cable is covered with a protective plastic coating with glass inside and transmits light impulses instead of electric current, which makes it resistant to corrosion.
Long Distance Data Transmission: There is no comparison between Fiber Optic Cable and Copper Cable in terms of data-carrying capacity. Fiber Cable can transmit signals about 50 times farther than Copper Cable.
Moreover, the signal loss rate of Fiber Optic Cable is very low, so it does not need repeaters to keep signals moving at the same pace. Fiber Cable also has higher bandwidth (that is, more data communication capacity available), which is how Fiber Cable can transmit data over longer distances.
Easy Installation: Fiber Cable is long and thin, with the fibers protected inside. It is also light in weight, which makes its installation almost anywhere easier compared to other wires.
No Electrical Interference: Fiber Optic Cable neither carries electric current nor needs earthing. Therefore, it is not affected by electrical interference. Fiber Cable is immune to moisture and lightning, which makes it ideal to be fitted inside the soil or in areas with high Electromagnetic Interference (EMI).
Durable and Long Lasting: Fiber Optical Cable is durable and lasts longer than any other cable such as Co-Axial Cable, Copper Cable, etc. It is perfect for network cabling.
Data Security: Fiber Optic Cable provides extra security because it cannot be tapped easily, so data transmitted through it remains secure. With Copper Cable there is no such assurance of data security, and lost data cannot be recovered.
There are various types of optical fiber cables available on the market, including 250um Bare Fiber, 900um Tight Buffer Fiber, Large Core Glass Fiber, Simplex Fiber Cable, Duplex Fiber Optic Cable, OM4 OM3 10G Fiber Cable, Indoor Distribution Cable, Indoor & Outdoor Cable, Outdoor Loose Tube Cable, Fiber Breakout Cable, Ribbon Fiber Cable, LSZH Fiber Optic Cable, Armored Fiber Optic Cable, FTTH Fiber Optic Cable, Figure 8 Aerial Cable, Plastic Optical Fiber, PM fiber & Special Fiber, etc. They are used for different applications, so one must do thorough research before buying fiber cables for network cabling.
Password management is a common IT support issue that creates problems for many organizations. It can be divided into two categories. The first is regular user accounts, used by ordinary users for day-to-day activities. The second is privileged accounts, such as background applications and server administration accounts, managed only by IT administrators.
Regular user accounts usually cause a lot of routine work for the IT help desk: users tend to forget passwords and accidentally lock their accounts, and the resulting support calls can use up a lot of time. Privileged accounts are managed by experienced IT staff, but they can also cause a lot of worries and headaches because of their shared nature. One privileged account is usually managed by multiple admins, and each admin usually manages multiple accounts, creating complex relationships to keep in mind.
Privileged accounts fall into two types: service and administrative. Privileged service accounts run services and other background applications. Privileged administrative accounts are used for managing servers. A local Administrator account is an example of a privileged server management account.
Every IT team needs many privileged passwords for managing servers and applications. It is common for a group of servers to be managed by several different administrators, and proper account maintenance requires close cooperation between them. Password changes may cause unexpected effects, such as account lockouts, if not properly communicated to all team members.
In the following example (see picture), Joe and Bill manage two servers each, one of them shared. One day Bill comes to work and decides to change the service password because it is going to expire that day. He changes it and updates his two managed services. He is happy! And guess what happens next? Yes, he has just broken the Exchange server managed by Joe, and, worse, he has locked out the shared service account, because Exchange is still running with the old credentials.
Another example is a local Administrator account. What if Joe, getting back at Bill, resets the local admin password for SQL1 and says nothing to Bill? How can Bill now access this shared server? That's quite a revenge.
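Bill's mistake can be modeled in a few lines. The inventory below is hypothetical and is not Netwrix's discovery mechanism; it just shows why a safe password change must first enumerate every service that logs on with the account:

```python
# Toy model of the shared-account problem: a password change is only
# safe when every service using the account is updated together.

# Hypothetical inventory: service name -> (server, logon account).
services = {
    "SQL Agent":      ("SQL1",  "svc_app"),
    "Task Scheduler": ("WEB1",  "svc_app"),
    "Exchange":       ("EXCH1", "svc_app"),
    "Backup Job":     ("SQL2",  "svc_backup"),
}

def stale_after_change(account, updated_services):
    """Services still holding the old password after a change, i.e.
    the ones that will fail logons and eventually lock the account."""
    return [name for name, (_server, acct) in services.items()
            if acct == account and name not in updated_services]
```

Bill updating only his two services leaves Exchange on the stale list, which is exactly the lockout scenario described above.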
To address privileged password management issues, Netwrix designed a product called Netwrix Privileged Account Manager. Once the product is deployed, all privileged password management takes place from a central server, accessible from a Web browser. Administrators never update passwords directly, but rather go through a management console, which ensures the proper workflow. All you have to do is specify a list of managed servers once for each account. Then, when someone from your team changes a password, the product goes through all of your servers and updates automatically discovered services. You may even remove administrative permissions from your normal accounts to prevent inadvertent changes and let Netwrix Privileged Account Manager take care of your service accounts.
Note: if you are looking for password management of regular user accounts, please visit Netwrix Password Manager home page. | <urn:uuid:610ab0b9-d31b-4607-867f-d4ef51e87035> | CC-MAIN-2017-04 | https://www.netwrix.com/privileged_password_management.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00378-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940948 | 589 | 2.828125 | 3 |
It was November 1989, just 24 years ago, when the first public dial-up Internet Service Provider (ISP), The World, was introduced. The available speed was 0.1Kbps. It could take between 10 minutes to several hours to download a 3.5 MB song and it would have taken days to download a 700 MB movie file. Equally antiquated were the browsers. There was no HTML, no apps, and no CSS. In fact, there was no way of even showing images in line with corresponding text.
For some of us, this time is barely a memory. The evolution of web browsers and the drastic improvement in broadband speeds over the last quarter-century tell fascinating stories – stories that run parallel. Innovations in web browsing are the result of advancements in broadband connections. And their shared story is also the story of building and sharing the largest, most important communications network in human history. | <urn:uuid:86c2ef9a-2097-464e-87fc-ca4c8d29e3a4> | CC-MAIN-2017-04 | https://www.ncta.com/platform/broadband-internet/a-shared-history-of-web-browsers-and-broadband-speed-slideshow/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00102-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969485 | 185 | 3.03125 | 3 |
“What is your name?” crowed the Bridge Keeper of the abyss in Monty Python and the Holy Grail. Invariably, the next question was “What is your quest?” Like the user name and domain name in typical username/password authentication systems, these should be easy, although some days I’ve typed my username or domain incorrectly, and sometimes people aren’t so clear on their quest. But that’s another story.
It is the next question and its answer which are possibly unique per person. In the aforementioned story, getting that wrong is simply disastrous, whether it be ambivalence of affinity to a frequency in the electromagnetic spectrum or ignorance of ornithological species differentiation in terms of unladen velocity. In the wonderful world of computing, the proper authentication is the critical precursor to specific authorization and accounting.
In terms of Windows authentication, the correct username and passphrase combination or smart card and PIN validation yield a user’s security access token (SAT). This token contains a user’s security identifier (SID), the SIDs of the groups they are a member of, some user rights (privileges), and other associated information. In recent versions of Windows (Vista, Server 2008, Seven, and Server 2008 R2) there could be a split token for administrative and non-administrative identities. Through this token obtained via authentication, all Windows authorization and accounting is derived.
That’s the prologue. Many services hosted on Windows have management and end user roles associated with them, along with assignments of abilities through permissions to different users and groups. Operating systems and devices classically offer discretionary access control (DAC) to resources. For example, the Windows registry, NTFS file system, and Active Directory (AD DS and AD LDS) all utilize security descriptors which include ownership, auditing, and permissions. The permissions for each value, file, or object area specified in thee security descriptors’ discretionary access control list (DACL).
Such permissions lists are "discretionary" in the sense that owners of resources, and other users who have been delegated "take ownership" or "change permissions" permissions, could potentially modify the initial security controls, including the permissions and auditing, which had been established for those resources. Therefore, the security of the system is at their discretion. This discretion is a violation of the prime directive of another sort of access control called mandatory access control (MAC). As the name implies, the security controls in such a system are indeed mandatory. Everyone, including owners and delegates of resources, must follow the security rules established by the central security authority, without discretionary exception. Because many systems administrators, and of course resource owners and delegated administrators, want flexibility, DAC is far more commonly implemented than MAC. Again, Windows is primarily a DAC-based system.
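The DAC/MAC distinction can be illustrated with a toy model (this is deliberately simplified and is not the Windows security-descriptor implementation):

```python
# Toy illustration: under DAC the owner can rewrite the ACL at will;
# under MAC only the central authority can, with no exceptions.

class DacResource:
    """Discretionary: the owner (or a delegate with the
    'change_permissions' permission) may rewrite the ACL."""
    def __init__(self, owner, acl):
        self.owner, self.acl = owner, dict(acl)

    def set_permission(self, requester, user, perm):
        is_delegate = "change_permissions" in self.acl.get(requester, set())
        if requester != self.owner and not is_delegate:
            raise PermissionError("not owner or delegate")
        self.acl.setdefault(user, set()).add(perm)

class MacResource:
    """Mandatory: only the central security authority can change rules,
    even for the resource's owner."""
    AUTHORITY = "central_policy"

    def __init__(self, acl):
        self.acl = dict(acl)

    def set_permission(self, requester, user, perm):
        if requester != self.AUTHORITY:
            raise PermissionError("policy is mandatory")
        self.acl.setdefault(user, set()).add(perm)
```

In the DAC case the security of the resource really is at the owner's discretion; in the MAC case even the owner is refused, which is the bridgekeeper situation described next.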
In the story of the aforementioned bridgekeeper, when King Arthur turned the questions back on him, the bridgekeeper was shown to be subject to the same controls as everyone else; therefore, that is an example of MAC, not DAC – just in case you were wondering.
In the next part, we will actually focus on RBAC and Hyper-V. | <urn:uuid:fa39207a-8f49-4e55-99a4-577af6d7c427> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/04/20/hyper-v-server-authorization-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00314-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931796 | 663 | 2.796875 | 3 |
Yeung P.M.Y., Diocesan Girls School
Yeung P.M.Y., Hong Kong Academy of Gifted Education
Li W., Achievers
Ip C.H., Wah Yan College
And 10 more authors.
Progress in Nutrition, 2011
Tangerine peel is a common traditional Chinese medicine and an herb for Chinese food preparation. In order to study the influence of oxidative stress, ozonated water, gaseous ozone and UV radiation were applied to potted tangerine plants and the antioxidant levels in the fruit pericarp were compared. After being watered daily with 500 mL ozonated water (0.84 mg/L dissolved ozone) for two weeks, a 19.5% increase in pericarp antioxidant level was detected. After plant samples were exposed to 6 ppm ozone, 2.88 kJ/m² UV or a combination of both for 30, 60 and 120 min daily for the same period, decreases in antioxidant level were found, generally in proportion to the time course of treatment. The results indicated that the oxidants depleted the antioxidants in the pericarp of mature tangerine. Ozonated water applied to soil, however, increased the antioxidant level in the pericarp as a systemic response. Source
10G, 40G, and 100G Ethernet standards have been set up over time to meet the increasing demands of packet-based networks. Gigabit Ethernet is no stranger to us now; it is very popular for its high speed, low implementation cost, high reliability, and relative simplicity of installation and maintenance. This article looks at 10 Gigabit Ethernet.
10 Gigabit Ethernet transmits data at a rate of 10 gigabits per second. The standard, IEEE 802.3ae, extends the IEEE 802.3 protocols and expands the Ethernet application space to include WAN-compatible links.
10 Gigabit Ethernet retains the Layer 1 and Layer 2 protocols. At the physical layer (Layer 1), an Ethernet physical layer device connects the optical or copper media to the MAC (media access control) layer. The Ethernet architecture further divides the physical layer into three sublayers: physical medium dependent (PMD), physical medium attachment (PMA), and physical coding sublayer (PCS). The PMD, such as an optical transceiver, provides the physical connection and signaling to the medium. The PCS consists of coding and a multiplexer. The IEEE 802.3ae standard defines two PHY types: the LAN (local area network) PHY and the WAN (wide area network) PHY.
Figure 1. 10 Gigabit Ethernet protocol
Ethernet technology is now the most widely deployed technology for high-performance LAN environments. Compared with Gigabit Ethernet, a 10 Gigabit Ethernet LAN can reach longer distances and support more bandwidth, so 10 Gigabit Ethernet is a natural choice for expanding, extending, and upgrading existing Ethernet networks.
The 10 Gigabit Ethernet standard not only increases the speed of Ethernet to 10Gbps, but also extends its interconnectivity and its operating distance up to 80 km. 10 Gigabit Ethernet supports both single-mode and multimode fiber connection. The single-mode fiber connection can extend the transmission distance to 80 km in 10 Gigabit Ethernet network. This advantage of long distance transmission allows customers who manage their own LAN environments to extend their data center to a more cost-effective location up to 80 km away from their campuses. This also allows them to support multiple campus locations within 40km distances.
With the development of 10 Gigabit Ethernet based technology, the cost of 10Gbps communications has dropped significantly, and this falling cost has become a main factor in the growing popularity of 10 Gigabit Ethernet. The following sections discuss its applications in three areas.
As mentioned before, 10 Gigabit Ethernet technology has been widely deployed for LAN environments. The supported network links can reach up to 80 km, so the data center and server farms can be located up to 80 km away from campuses. In the data center, 10 Gigabit Ethernet backbones can be deployed with cost-effective, short-haul, multimode fiber media.
With 10 Gigabit Ethernet backbones, network congestion can be reduced, which enables bandwidth-intensive applications such as streaming video, medical imaging and high-end graphics. It also enables other applications, including distance learning, telecommuting, and digital video conferencing.
Figure 2. 10 Gigabit Ethernet use in expanded LAN
10 Gigabit Ethernet has already been applied as a backbone technology for dark fiber metropolitan networks. With appropriate optical transceivers and fiber cables, network and Internet service providers can build links reaching 80 km or more, connecting the whole metropolitan areas.
10 Gigabit Ethernet enables a low-cost, high-speed infrastructure for both network attached storage (NAS) and storage area networks (SAN). It offers superior data-carrying capacity at latencies comparable to Fibre Channel. 10 Gigabit Ethernet is applied in many areas, such as remote backup, storage on demand and streaming media.
10 Gigabit Ethernet greatly reduces the cost of creating high-speed links from co-located, carrier class switches and routers to the optical equipment directly attached to the SONET/SDH cloud. 10 Gigabit Ethernet also allows WANs to connect dispersed LANs between campuses or points of presence over existing SONET/SDH network. The links between a service provider’s switch and a DWDM device may be very short (less than 300 meters).
Figure 3. 10 Gigabit Ethernet use in WAN
The IEEE 802.3ae standard provides a physical layer that supports specific link distances by different media. The following table shows 10 Gigabit Ethernet interfaces and supported distances over fiber and copper.
| Media | Interface | Wavelength | Signaling | Distance | Cable Type |
|---|---|---|---|---|---|
| Fiber | 10GBASE-SR | 850 nm | Serial | 300 m | MMF |
| Fiber | 10GBASE-LRM | 1310 nm | Serial | 220 m | MMF |
| Fiber | 10GBASE-LX4 | 1310 nm | WDM | 300 m/10 km | MMF/SMF |
| Fiber | 10GBASE-LR | 1310 nm | Serial | 10 km | SMF |
| Fiber | 10GBASE-ER | 1550 nm | Serial | 40 km | SMF |
| Fiber | 10GBASE-ZR | 1550 nm | Serial | 80 km | SMF |
| Copper | 10GBASE-CX4 | - | 4 lanes | 15 m | Twin Axial |
| Copper | 10GBASE-T | - | Twisted pair | 100 m | UTP |
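The interface table above can double as a small lookup for link planning. The sketch below encodes each interface's rated reach from the table (for 10GBASE-LX4 the longer single-mode figure is used) and answers which PHYs can cover a given distance:

```python
# Rated reach in meters, taken from the interface table above.
REACH_M = {
    "10GBASE-SR":  300,
    "10GBASE-LRM": 220,
    "10GBASE-LX4": 10_000,
    "10GBASE-LR":  10_000,
    "10GBASE-ER":  40_000,
    "10GBASE-ZR":  80_000,
    "10GBASE-CX4": 15,
    "10GBASE-T":   100,
}

def phys_for_link(distance_m):
    """All interfaces whose rated reach covers the given distance."""
    return sorted(p for p, reach in REACH_M.items() if reach >= distance_m)
```

For example, a 50 km metro link leaves only 10GBASE-ZR as an option, while anything under 100 m can also run over copper.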
10 Gigabit Ethernet has come into our daily life step by step. With the development of Ethernet technology, we can achieve higher-speed Ethernet networks of 40G, 100G, or even more. Besides the promise of advanced technology, we also need to choose a vendor with the most cost-effective fiber optic products to build a network. And Fiberstore (FS.COM) is no doubt the best choice. For more services, please contact via email@example.com.
Crowdsourced mapping for flood tracking, prediction
- By Suzette Lohmeyer
- Dec 02, 2016
The latest tool in flood prediction for low-lying coastal areas in Hampton Roads, Va., is crowdsourced drone footage posted on YouTube.
While regulations and no-fly zones in the Norfolk area (home of the Norfolk Naval Base) should prevent drones from being flown, Derek Loftis and his team at the Virginia Institute of Marine Science realized that the regulations don’t seem to stop drone hobbyists from flying.
“After Hurricane Matthew hit, people were out there recording with their drones,” Loftis said. “Some of them even attached phones to produce live streaming video.”
Loftis realized he could use the drone video as a cost-free way to check the accuracy of his primary flood prediction model, StormSense.
StormSense, which uses street-level hydrodynamic modeling to determine types of flooding as well as the areas at highest risk, is dependent on ultrasonic sensors. At around $5,000 a pop, Loftis said, many towns can’t afford to put them in every spot where there might be flooding. And while Loftis just won a $75,000 grant from the National Institute of Standards and Technology to purchase more sensors, it still won’t be enough to cover every area he wants.
With the video from the drones, Loftis can see “the maximum line of flooding, and we can check if it is the same as we predicted,” he said. “We can figure out how off we are. Are we 20 feet or are we five feet off.”
If using YouTube drone video sounds less than scientific, Loftis agreed. “That's true,” he said. “But if you can get a hold of the raw footage, you can stitch it into usable data using Esri's Drone2Map tool that analyzes drone images and converts them into 2-D and 3-D maps.”
Using an app from Esri is a strategic part of Loftis' long-term plan to make his flood prediction methodology usable anywhere by anyone. “A lot of cities have contracts or site licenses for the Esri GIS program and are filled with people certified to use it.”
Another way to track flooding is with the Sea Level Rise app, which, like the drone footage, crowdsources native knowledge. Created by the non-profit Wetlands Watch, the app allows local citizens to map flooding during and after an event.
“I watched all the spontaneous social networking spring up during Hurricane Sandy,” Wetlands Watch Executive Director Skip Stiles said. “I thought, well wait a minute, could you use social networking and crowdsourcing to get the knowledge to people who need it?”
The organization partnered with Concursive to create the Sea Level Rise App. Anyone can view the app, but only those who have gone through a 10-minute training are authorized to add data to the map.
Users go out during a flooding event and walk around the edge of the water. Every five steps they use the app to drop GIS pins. The data they collect is exported as an Excel file so it can be turned into a shape file and overlaid on an emergency management grid.
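The collection-and-export workflow described above can be sketched as follows. This is a hypothetical illustration, not the Sea Level Rise app's actual code: each pin carries a position and a timestamp, and the export produces CSV rows that a GIS tool can later turn into a shapefile layer.

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class Pin:
    """One GIS pin dropped at the edge of the water."""
    lat: float
    lon: float
    timestamp: str

def export_pins(pins):
    """Serialize pins to CSV, a lowest-common-denominator format for
    building a point layer to overlay on an emergency management grid."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["lat", "lon", "timestamp"])
    for p in pins:
        writer.writerow([p.lat, p.lon, p.timestamp])
    return buf.getvalue()
```

A walker dropping a pin every five steps produces a dense line of such rows, which is what lets researchers trace the maximum extent of a flood.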
This allows researchers like Derek Loftis, who gets data from Wetlands Watch, to reduce the margin of error on their prediction models, said Stiles, who is hoping to get the margin of error down to 10 feet.
This not only helps emergency crews know where to go but also has a very practical application for areas like Hampton Roads that have school flood days instead of snow days. “My phone might buzz and say, ‘High Water Advisory for the city from 12-4,’ and I don't know when or where exactly,” Stiles said. “With this app you can get enough data” so that at 8 a.m. parents will know if after-school activities might be cancelled because of flooding.
While the app is currently free, Stiles and his team have created a business model to make it self-sustaining.
“Eventually we want to give people franchise areas in which they can map, say, for example, the City of Virginia Beach. The backend data support would be done by Concursive, and the City of Virginia Beach would pay a few bucks and then they could then have all the data,” he said. “Collect a few pennies from a lot of people and it becomes self-supporting.”
Stiles also is looking into the idea of selling the data to insurance companies. “Do you know how many cars we lose due to flooding?” Stiles asked. “If an insurance company paid the cost of just one SUV, we could sustain the Sea Level Rise project for a year.”
Suzette Lohmeyer is a freelance writer based in Arlington, Va. | <urn:uuid:42323081-9955-444c-9f3f-2564360e5416> | CC-MAIN-2017-04 | https://gcn.com/articles/2016/12/02/crowdsourced-flood-tracking.aspx?admgarea=TC_Mobile | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00524-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960584 | 1,042 | 2.546875 | 3 |
Google is to include scanned documents in its search results for the first time.
"In the past, scanned documents were rarely included in search results as we could not be sure of their content. Today, that changes. We are now able to perform Optical Character Recognition (OCR) on any scanned documents that we find stored in Adobe's PDF format."
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
This Optical Character Recognition technology lets Google convert a picture of a document into the words contained in it.
Whilst Google has indexed documents saved as PDFs for some time, scanned documents are a lot more difficult for a computer to read.
Scanning is the reverse of printing. Printing turns digital words into text on paper, whilst scanning makes a digital picture of the physical paper (and text) so you can store and view it on a computer.
The scanned picture of the text, however, is not quite the same as the original digital words, said Google. "Often you can see tell-tale signs: the ring of a coffee cup, ink smudges, or even fold creases in the pages.
"To people reading these documents, the distinction between words and pictures of words makes little difference, but for a computer the picture is almost unintelligible." | <urn:uuid:a8f8ac8f-bad6-4667-88b5-f558274d12e2> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240087331/Google-to-include-scanned-documents-in-search-results-for-first-time | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00066-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955255 | 279 | 2.765625 | 3 |
How smart are social bots?
- By William Jackson
- Nov 28, 2011
In 1950, computer pioneer Alan Turing proposed the Imitation Game in which a person would question two unseen subjects, one a machine and the other a human, in an effort to distinguish them. This has become known as the Turing Test, as he wrote in his paper, "Computing Machinery and Intelligence."
“I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
It now is 61 years after Turing’s prediction. He set the bar pretty low, giving the machine only three chances in 10 of winning. How are we doing?
New cyber threats put government in the cross hairs
'Socialbot' fools Facebook, and users give up their info
With the creation of the Internet, our increasing use of remote transactions and interactions, and especially with the advent of social networking, the question has taken on more than academic importance. It is at the heart of the problem of how we know who or what we are dealing with online and how to ensure our security and our privacy.
In an interesting experiment, researchers at the University of British Columbia recently were able to infiltrate Facebook with a herd of automated “social bots” that went largely undetected by the network’s defenses for eight weeks, friending thousands of users and harvesting their personal information. Over a six-week period, the bots sent out 3,517 friend requests to human Facebook users, 2,079 of which — 59 percent — were accepted. At first glance, this looks as if the social bots won the Imitation Game and passed the Turing Test with flying colors.
We need to take those results with a grain of salt, however. The bots were good at defeating defenses such as CAPTCHA codes used to identify and block spamming bots and at gathering and posting appropriate information to create the impression that there were real people behind the accounts. But the bots were not really speaking with the other Facebook users. The bots didn’t pass the Turing Test because the Facebook users never really questioned them.
It turns out that the automated software bots really aren’t that smart but that the Facebook users were acting dumb. When it comes to social networking, we are our own worst enemies.
Social networking creates online communities that can be used for socializing and for collaboration and increasingly it is being used in the workplace. This has led to a lot of questions about the security and privacy controls of the systems. But the first question we need to ask about these networks is how we are behaving on them. Are “friends” being collected indiscriminately as status symbols, and is personal information being posted inappropriately? It is very difficult — if not impossible — to protect a person who is determined to be his own worst enemy by palling around with semi-sentient social bots.
Alan Turing might be disappointed in the performance of our 21st-century computers in the Imitation Game if he were around today, but he might be even more disappointed in the performance of the people in the game. Artificial intelligence won’t be very impressive if it is measured against people who have lowered themselves to the level of machines.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:ee010895-93b6-4878-a33e-feb6f5700845> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/12/05/cybereye-how-smart-are-social-bots.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00460-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968947 | 703 | 2.96875 | 3 |
Written by Michael Frenn, Administrator of the Solano County Emergency Medical Services Agency, a bureau in the Public Health Department.
The Solano County, California, Public Health and Social Services Department takes the health and quality of life of its community seriously and is preparing ahead of time to safeguard them from any emerging health threats such as a flu pandemic. They organized a large preparedness drill in the Fall, just prior to the typical flu season, and utilized digital pen and paper technology to streamline the process of treating thousands of people quickly.
In the event of an actual emerging flu pandemic, the Solano County Public Health and Social Services knew it would need to be able to treat as many of its inhabitants as possible in as short a time as possible. In order to achieve this, they used digital pen and paper technology, invented by Anoto Group AB, to assist in a practice drill involving volunteer participants from the public to receive real doses of the year's seasonal flu vaccine.
The ultimate goal of the exercise was to achieve a vaccination rate established by the Centers for Disease Control and Prevention (CDC). This rate was possible based on the process rate from the county's exercise in which the digital pen and paper system was tested. The general public, a total of 267 patients, filled out their personal health surveys with these special pens, a new technology being increasingly used in hospitals and nursing homes around the country. The details from each survey determined whether the patient should be seen by a nurse for further screening prior to receiving the vaccine, and the pen was able to capture their information, process it immediately, and alert a nurse.
Any size pandemic has the potential of changing our lives, which is why Solano County Public Health and Social Services provided the public with free flu shots, thereby realistically dispensing mass numbers of doses of vaccine in a simulated pandemic situation. Using the innovative digital pen and paper technology, they achieved the speed, easy of use, and efficiency needed to safely process patients.
"Solano County Public Health is constantly working to prepare for public health emergencies, including the very real possibility of a pandemic flu," said Robin Cox, Health Education Manager for Solano County Public Health. "This technology gave the county healthcare professionals the critical patient information in real time to ensure that they were able to administer the flu vaccine to patients who would not have an adverse reaction. They were also able to assess the geographic locations that need additional communications of the US. Dept. of Health's message," Cox continued.
Setting up the system
The Solano County team spent weeks preparing for this flu vaccination event, which utilized the services of 101 county employees, including doctors, nurses, health educators, the Public Information Officer, and numerous county line staff. Arthur James, president of IS2BE who developed digital pen and paper technology for the program, provided approximately 100 pens in addition to the development of the survey templates. In the event 267 members of the public participated and were screened, with 265 receiving vaccinations. Two patients were deemed potentially allergic to the vaccine and therefore were not administered with it.
The data collected by the system was critical to the screening process by helping non-medical staff determine whether the patient would need to be seen in a second screening process by a nurse. In this landmark trial, speed was a paramount component to achieving Solano County's objective of vaccination. The contributing factor of success was the digital pen and paper system ability to immediately collect and process the health history and contact details of each patient in real time.
Solano County Public Health's key objective was threefold: promotion of receiving a flu vaccine, promotion of attendance to the health community event, and promotion of safety while offering and administering free vaccines to the county's inhabitants. As part of their safety goal, a survey was formulated to capture a patient's medical history and contact information as well as guide the patient in a workflow question and answer process. In the end the Public Health Department was able to evaluate how well it marketed the event while also determining how well it's targeted marketing messages reached the community.
The event was held at the Solano County fair grounds in order to provide enough room for its elaborate operations which involved a lot of planning, technology and processes.
A proper screening protocol was established that entailed two screening processes. The first involved non-medical screening personnel who would distribute to the public approximately 100 digital pens and health surveys printed on digitally encoded paper. The survey was developed in a way that would guide the patient through a series of comprehensive workflow questions and answers. These questions need to be answered and documented prior to anyone being given the vaccine. Thanks to the digital pen and paper, the participants themselves could supply important details of their own medical history such as allergies as well their addresses and contact details. The staff noted that fewer data errors occurred when participants filled out their own surveys.
Additionally, the use of digital pen and paper reduced the need for data entry and hastened the speed at which survey data could be processed. The patient's data could be immediately downloaded from the digital pen and utilized by non-medical screeners in order to decide whether to send the patient for further screening by a nurse. Nurses conducted the second screening process because they are better qualified to determine whether or not a patient could have an adverse reaction to the vaccine.
The final process entailed another check in with the participant before the actual administering of the vaccine by the nurses.
The goal to see 250 patients an hour was exceeded using the digital pen and paper. The use of digital pen and paper technology allowed for the easy collection of data that could be used and processed immediately. | <urn:uuid:cbc6ea52-3ea8-45b9-9024-73464fa7f1ea> | CC-MAIN-2017-04 | http://www.govtech.com/health/Preparation-for-Epidemic-Requires-a-Speedy-Vaccination_Process.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00368-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97023 | 1,139 | 2.953125 | 3 |
"Lean is the never-ending process of eliminating waste; finding every activity that does not create value for the customer and eliminating it."
The Five Principles of Lean
- Specify the value: Define value from a customer’s perspective and express it in terms of a specific product
- Map the value stream: Map all the steps - value added and non-value added - that bring a product or service to the customer
- Establish the flow: The continuous movement of products, services, and information throughout the manufacturing process
- Implement pull: Nothing is done by the upstream process until the downstream customer signals the need
- Work to perfection: Complete elimination of waste so that all activities create lean manufacturing value for the customer
Benefits of Lean
- Lean reduces costs, defects, lead times, inventory, space, and waste
- Lean improves productivity, customer satisfaction, profit, customer responsiveness, capacity, quality, cash flow, on time delivery
- Value stream mapping - analyzing and streamlining the flow of materials and information
- Kaizen – a philosophy on continuous improvement
- Muda - eliminating waste and promoting efficiency
- Just-in-time - reducing in-process inventory and the associated costs
- Theory of Constraints - using throughput, operating expense, and inventory as a measure to improve operating efficiency and achieve business objectives
- Heijunka – a technique to reduce waste by sequencing and production leveling
- Jidoka - stopping at every abnormality; human intelligence built into machines
- 5S – workplace organization methodology - especially a shared workplace
- Kanban - a simple, visual system for signaling customer demand
- Poka-Yoke - a mechanism in lean manufacturing which helps an operator or system to avoid mistakes through "mistake-proofing"
Lean - The HCL Way
1. Lean Assessment
The first step in the Lean manufacturing journey, this assessment allows the weighting of the nine key areas of manufacturing. We measure the Lean Index and the benchmark processes. The assessment is followed with a report, indicating:
- The current "leanness" of the organization
- The elements of strength that the organization can build on
- A complete action plan to fully integrate Lean principles into the organization
2. Value Stream Mapping (VSM)
VSM is a method of visually mapping the flow of materials and information as a product makes its way through the value stream. The basic idea is to first map your processes and then map the information flow that enables the processes to occur. VSMs are the blueprints for lean transformation and serve as a starting point to help in recognizing waste and identifying its causes.
3. Value Stream Engineering
While most organizations are able to produce "current-state" maps, many struggle with the process of creating "future-state" maps and the corresponding implementation plans. In this phase, we create a future state value stream map that identifies and quantifies the opportunities. This phase not only involves the re-engineering of current processes to improve performance through the deployment of lean manufacturing systems, but also monitoring and stabilizing the processes and transitioning them to process owners. | <urn:uuid:d409a948-d99e-4c52-a78c-eb0434f79734> | CC-MAIN-2017-04 | https://www.hcltech.com/industrial-manufacturing/lean-manufacturing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00396-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910604 | 641 | 2.859375 | 3 |
As the saying goes, “There’s more than one way to skin a cat.” And this proverb exactly describes the situation with passwords. I’ve already discussed the inherent problems with passwords in a previous blog post, and I listed the possible alternatives to passwords in my latest piece in TechCrunch. In this post, I’ll describe some of tools and tricks hackers use to either steal your password or bypass it.
Brute force attacks
Brute force is the most primitive and simple type of attack against passwords, and it involves guessing passwords by trying different possible combinations. Hackers usually have a dictionary of commonly used passwords and their different variations, which they use in brute force attacks. That’s why you’ll also hear the name “dictionary attack.”
Since trying millions of different possibilities is beyond human capacity and would take thousands of years, brute force attacks are usually staged through an automated program. More resourceful hackers use botnets (an army of zombie infected computers and connected devices that are at the command of a remote “herder”) to split the task, speed things up, and thwart the victim’s attempts to block out a single node that is trying to brute force its way in.
Online services usually warn users when their account is being brute-forced or is trying to be accessed from an unknown location, so watch out and pay attention to the warnings your service provider gives you. For plain vanilla passwords, you can reduce the risks by choosing very strong and unpredictable passwords. Sadly, this is a guideline that many users do not take seriously, and strings such as “password” and “123456” continue to remain among the most popular passwords.
The more advanced types of multi-factor authentication methods can prevent such attacks because they rely on the user knowing the password or PIN number, and having something, such as a fob or a mobile device, which the attacker doesn’t.
Man in the middle attacks
This type of attack is also known as “bucket brigade.” As the name implies, in MitM, a malicious user intercepts communications between you and the online server you’re logging into, and steals your username and password when you submit them. This usually happens without you ever finding out, which makes it even more dangerous, because the attacker will start using your account and spying on you without leaving a trace.
Staging MitM attacks on unencrypted communications is a piece of cake for hackers. That’s why sites that exchange data with users usually use encryption protocols such as SSL and TLS to secure their communication channels. However, although these protocols increase the difficulty of MitM attacks by several orders of magnitude, they do come with their own vulnerabilities and have been broken in the past.
The key to eliminating the threat of MitM attacks is to use procedures that avoid exchanging secrets between servers and clients. Authentication protocols that rely on “zero-knowledge proof” or use signature-based handshakes are immune to such attacks.
Keyloggers are malware which, once installed on your computer, will monitor every keystroke you make on your keyboard and exfiltrate it to some clandestine location where it will be stored for later perusal.
Computers fall victim to keyloggers when visiting infected websites or opening an attachment that comes with an infected email. After that, the next time the user logs into an account, the credentials are captured sent to the server that controls the malware.
There are many ways to deter keylogger attacks, including constantly installing fresh updates for your system and antivirus program. Also, using a password manager tool can help, but there have been cases of password managers themselves being breached by hackers.
Two factor authentication and authentication methods that do not involve passwords fix the problem at the root, making spying on your keystrokes irrelevant.
In phishing attacks, hackers target thousands or millions of users by sending cleverly crafted emails that seem to come from reliable and authentic sources, and try to trick the recipient into log into their account by clicking on a link that is contained in the message. They usually use messages and warnings to urge the user to take action at once and perform a one-time procedure to either activate a new feature on their account or to prevent it from being closed down.
Once users click on the link, they’re redirected to a website that resembles the real service, say Facebook or PayPal, but is in fact a fake version of the same site. When they enter their username and password, the information is sent to the malicious actor that is in control of the counterfeit site.
Phishing attacks can usually be detected by checking the url of the site you’re logging into (because hackers can’t spoof domains that belong to others), by using updated anti-malware solutions that are constantly updated with list of malicious websites, and by simply not being naïve. Don’t believe anything you read in an email that comes from someone you don’t know, and certainly don’t click on links in emails from unknown senders.
Then again, replacing passwords with more advanced methods of authentication can eliminate the threat of phishing attacks altogether.
No matter how strong a password you choose, it won’t help you if your online service provider becomes the target of a data breach, because hackers will be able to pick of the server’s database without a hitch. And this happens a lot. If you’re lucky enough, your service provider would have encrypted your password. But in many cases, online services using outdated hashing algorithms such as MD5 and SHA1, which can be broken or are easily reversible through large hash dumps that are available in the internet. In others cases, strong symmetric encryption is used to protect passwords, but hackers who gain administrative access to the servers also find access to the keys to decrypt the passwords.
What’s worse, many users share passwords across accounts, so when a hacker gains access to one of your passwords, they have better chances of hacking other accounts you own, including critical email, bank and credit card accounts.
The solution to this problem is to first use strong hashing algorithms such as SHA-2 and 3 to protect passwords, and second, to salt passwords for good measure. But the ultimate way out of smash-and-grab attacks is to use methods of authentication that do not use shared secrets and avoid storing critical information on the server. There are already several good technologies that are based on this concept.
I barely scratched the surface of the many ways your identity and credentials can be hijacked by malicious users. For one thing, I didn’t even mention social engineering, which happens to be one of the most fatal type of attacks. Just ask CIA director John Brennan, Director of National Intelligence Jim Clapper, and Wired Magazine’s Mat Honan on how their accounts were breached. The point is, we have to stay vigilant, take threat seriously and look for new ways to protect our identities. | <urn:uuid:423919cf-fe61-49f3-9850-a47ef49ad7b4> | CC-MAIN-2017-04 | https://bdtechtalks.com/2016/02/12/the-many-ways-your-password-can-be-stolen-or-bypassed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946349 | 1,449 | 2.890625 | 3 |
The self driving car uses several sensors incorporated into it and navigates through the streets using map data in real time.
Google has demonstrated its self-driving cars in California as the company is planning to introduce them for public use in next three years.
During 30-minute demonstration at Computer History Museum in Mountain View, US, it showcased the self driving car’s ability to automatically drive through the city streets, navigating through the cyclists, pedestrians and traffic signs.
The self driving car uses several sensors incorporated into it and navigates through the streets using map data in real time, deciding on its own without human intervention.
During the demonstration it drove through the streets stopping at all red lights while automatically slowed for cross-walks.
The demonstration gave a sneak peek into the autonomous cars since Google conceived the project five years ago through its Google X division.
Google said it is talking to auto manufactures to how to bring its self-driving-car technology to market.
Project director Chris Urmson was quoted by Wall Street Journal as saying that a self-driving car is probably still six years away.
"We are thinking now about how to bring this car to market," Urmson added.
Urmson added that the company has not decided yet whether to design a car on its own or develop the software and operating system to be used by automakers.
"We’re trying to figure that out now."
Currently Google uses Lexus cars for the test, which are mounted with cameras and radar systems, laser as well as other technologies developed by the company, which helps in navigating through the roads. | <urn:uuid:c2e21fa4-b820-4aa2-8125-b4d62a5e98db> | CC-MAIN-2017-04 | http://www.cbronline.com/news/mobility/devices/google-shows-off-driverless-car-4267236 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972319 | 327 | 2.875 | 3 |
The Know-It-All: Dealing with Difficult Students
Whether they’re overly enthusiastic or just plain hyper, difficult students can easily derail a class by disrupting the learning process. By dominating the discussion or distracting others, these problem pupils can waste hours of valuable class time if the instructor doesn’t deal with them directly.
To keep their courses on track and maintain a positive learning environment, trainers need to create an atmosphere of empowerment, acceptance and internal control.
Promoting these values in the classroom helps instructors cultivate a psychology of success in their students, said John Shindler, associate professor at California State University, Los Angeles, and author of “Transformative Classroom Management.”
Further, he said developing this psychology improves the classroom environment by addressing the fear of failure that causes many students to be disruptive.
“When you have a failure psychology, you’re easily threatened, you quit easily and you’re looking for ways to displace the responsibility onto other people,” Shindler explained. “And when you feel that way, you become disruptive because it’s a lot easier to become disruptive than it is to feel like a failure.”
These students can act out in many ways, not all of which are rude or abrasive — the failure psychology can manifest itself in seemingly benign actions such as dominating classroom conversations or asking too many questions, Shindler said. Because these students are not causing trouble but genuinely trying to learn, they can pose unique challenges.
Some prime examples of these difficult students include the know-it-all, the question hog and the daily debater.
The know-it-all student tends to dominate classroom discussions with long-winded answers and comments that demonstrate his or her extensive knowledge of a particular subject. Although trainers love engaged students, these individuals can monopolize the discussion to the point where other students tune out as soon as they open their mouths.
To prevent this situation before it starts, instructors should make a statement early in the class about the importance of hearing everyone’s ideas. If they make it known at the beginning that they might have to stop some speakers to make sure that everyone gets a chance to talk, know-it-all students probably won’t feel singled out or hurt when they are cut off, Shindler explained.
If the problem persists, however, the trainer should pull that student aside after class to discuss the problem.
“Validate the fact that they have a lot to contribute and make them feel like they’re a special student, but let them know right upfront that there needs to be some kind of proportionality to the amount that people talk,” Shindler said.
The Question Hog
The question hog is generally a student who feels very confused or anxious about his or her ability to understand the concepts being discussed in class. Although it’s important for an instructor to answer this student’s questions and allay his or her fears, after class is probably the best time to discuss personalized, in-depth questions, Shindler said.
It’s also beneficial to make students wait until the end of class to ask their questions so they are forced to spend some time trying to find the answer for themselves — finding the solution on their own will give them much more confidence than having the explanation given to them.
“It’s conditioning —if they ask something, and you give them what they want, you condition them to continue that behavior,” Shindler explained. “So, when they say, ‘I want the answer,’ you say, ‘I’m going to give you the answer later.’”
The Daily Debater
The daily debater will latch on to any po | <urn:uuid:45a79a65-8b8b-4c27-a207-dbc9e99f6937> | CC-MAIN-2017-04 | http://certmag.com/the-know-it-all-question-hog-and-daily-debater-dealing-with-difficult-students/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00240-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952825 | 788 | 2.90625 | 3 |
Manufacturing Breakthrough Blog
Friday September 4, 2015
In my last posting we discussed the basics of the Future Reality Tree (FRT), which is constructed using simple if-then, or sufficiency-based, logic. I also explained that the FRT is designed to predict how changes we intend to make to our current reality would play out in the future. As with all of these logic diagrams, I told you that the FRT can be used in isolation or as part of a full systems-thinking analysis using all of the logic trees.
I explained that the FRT is intended to provide a framework to help you design and refine the changes you intend to make to improve your future reality. In other words, you will be able to plot your proposed changes as a chain of cause-and-effect logic and use it to predict your future reality.
What Does an FRT Look Like?
In similar fashion to how we produced our Current Reality Tree, the Future Reality Tree (FRT) is constructed using sufficiency-based logic (i.e., if-then statements).
The FRT provides a framework to help you design and refine the changes you intend to make. It combines the current reality with new ideas (injections) so that you can create new, expected future outcomes.
In my last posting I explained that the FRT is built upward from injections to desired effects. That is, while the CRT builds upward to identify Undesirable Effects (UDEs), the FRT does the opposite: in effect, the FRT creates the polar opposite of the CRT. You can build an FRT from scratch, or you can build one in conjunction with the CRT and/or the CRD; the latter is easier. We will be building our FRT from the entities contained in the CRT. The FRT I will be presenting will be a simple example using one of the key undesirable effects (UDEs) contained in the CRT. The simple way to think of the FRT is that if you have an undesirable effect in the CRT, then that same effect in the FRT will be worded as a desirable effect. So let's create our FRT.
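To make the sufficiency-based if-then logic concrete, here is a minimal sketch of an FRT modeled as a mapping from each effect to the causes that are jointly sufficient for it. This is my own illustration, not part of Dettmer's toolset; the entity names and the single injection are hypothetical. Evaluation simply propagates the if-then rules upward from the injections.

```python
# Illustrative sketch only: an FRT modeled as sufficiency-based if-then logic.
# Each effect maps to the causes that must ALL hold (AND logic) to produce it.
# Entity names and the injection below are hypothetical examples.
frt = {
    "quality system in place": ["quality system injected"],
    "very few defects are seen": ["quality system in place"],
    "very little rework required": ["very few defects are seen"],
    "throughput rates are high": ["very little rework required"],
}

def realized_effects(injections, tree):
    """Propagate if-then logic upward: an effect holds once all its causes hold."""
    known = set(injections)
    changed = True
    while changed:                    # repeat until no new effects fire
        changed = False
        for effect, causes in tree.items():
            if effect not in known and all(c in known for c in causes):
                known.add(effect)     # if all causes hold, then the effect holds
                changed = True
    return known - set(injections)    # the future reality the injections create

effects = realized_effects(["quality system injected"], frt)
```

Reading the tree bottom-up, a single injection ("quality system injected") is sufficient to realize the whole chain of desired effects up to higher throughput, which is exactly the kind of prediction an FRT is meant to make explicit.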
Building a Future Reality Tree
As mentioned, while it is possible to build an FRT from scratch, it’s not the easiest way to do so. Dettmer suggests that you start your Thinking Process analysis with a Goal Tree (a.k.a. Intermediate Objectives Map), but I am choosing to demonstrate how the TP tools were originally designed to be used. The first step is to make a list, from the CRT, of the undesirable effects (UDEs) and core problems you might have identified. For reference, the figure below is our original CRT.
Undesirable Effects and Core Problems from CRT
- Performance metric efficiency is used
- There is no Quality System in place
- Excessive amounts of WIP inventory
- Extended product cycle times cause late customer deliveries
- Throughput rates are too low, causing late deliveries
- Repetitive defects are causing excessive repairs & OE
- Excessive repairs drive up operating expenses & delay shipments
- Excessive rework causes higher operating expenses
- Operating expenses are too high
Dettmer suggests beginning to build your FRT by writing, on Post-it Notes, statements of the desired effects you’re trying to achieve. In our example, the following are the UDEs transformed into desirable effects (DEs):
| Undesirable Effect (UDE, from CRT) | Desired Effect (DE, for FRT) |
| --- | --- |
| Performance metric efficiency used at all steps | Efficiency used only at the constraint |
| No quality system in place | Quality system in place and functioning |
| Excessive amounts of WIP inventory | Minimal WIP inventory |
| Extended product cycle times | Short product cycle time |
| Throughput rates are too low | Throughput rates are high |
| Repetitive defects | Very few defects are seen |
| Excessive repairs | Very few repairs are required |
| Excessive rework | Very little rework required |
| Operating expenses are too high | OE are at optimum levels |
When transforming UDEs into DEs, it’s very important that your wording expresses the desired effect as truly positive, not merely neutral. In other words, when someone reads your DEs, they should come across as being truly good. The other thing to consider is tense: make sure the DEs are stated in the present tense, as though they were already in place.
The next step is to arrange the desired effects you have selected horizontally across the top of your paper or whiteboard. These are the important DEs that you are aiming to achieve – the end point, if you will. For example, one of the high-level desired effects is that throughput rates are high; in order to achieve this end point, what must you put in place? Clearly you need short product cycle times, so this would be a lower-level DE. And what might you need in order to have short product cycle times? The quick answer is another DE, minimal WIP inventory. The point is, we want to create a logic tree (i.e., an FRT) where DEs build on each other from bottom to top, culminating in the most desired DEs.
It should be clear to you that there are more FRT entities than just the listed desired effects. When constructing the FRT you will need to add other ideas (i.e. injections) in order to be able to realize the final desired effect. Clearly, in order to achieve your desired effect and ultimately your objective, you must change how you are doing things in your current reality.
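To make the sufficiency logic concrete, an FRT fragment can be sketched as a small bottom-up graph in which an effect holds only if all of the lower-level entities feeding it hold. This is purely my own illustration – the `Entity` class and the injection wording are invented, not Dettmer's notation – using the DE chain from the text: minimal WIP, then short cycle time, then high throughput.

```python
# Conceptual sketch of FRT sufficiency logic (if ALL causes hold, then effect).
# Entity names and the injection wording are illustrative only.

class Entity:
    """A node in the tree: an injection, an intermediate effect, or a DE."""
    def __init__(self, statement, causes=None):
        self.statement = statement
        self.causes = causes or []  # sufficiency: ALL causes must hold

    def holds(self, injections):
        """True if this entity is an injected change, or if every
        lower-level cause feeding it holds (if-then logic, bottom-up)."""
        if self.statement in injections:
            return True
        return bool(self.causes) and all(c.holds(injections) for c in self.causes)

# Build the chain from the text, bottom-up:
injection = Entity("Scheduling is focused on the constraint")  # hypothetical injection
low_wip   = Entity("Minimal WIP inventory", [injection])
short_ct  = Entity("Short product cycle time", [low_wip])
high_tp   = Entity("Throughput rates are high", [short_ct])

# With the injection in place, the top desired effect is realized...
print(high_tp.holds({"Scheduling is focused on the constraint"}))  # True
# ...and without it, the chain of desired effects collapses:
print(high_tp.holds(set()))  # False
```

Reading the tree from the bottom up gives exactly the "if injection, then DE, then higher-level DE" chain the FRT is meant to make explicit.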
Based upon what’s been presented thus far, I want you to try and construct our FRT using the desired effects and other injections that you come up with yourself.
In my next posting we’ll complete our FRT and move on to the final two TP Tools, the Prerequisite Tree and the Transition Tree. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time.
The Logical Thinking Process – H. William Dettmer, Quality Press, 2007 | <urn:uuid:70f7f759-cca0-4e33-b8e1-82892e683c8e> | CC-MAIN-2017-04 | http://manufacturing.ecisolutions.com/blog/posts/2015/september/the-thinking-processes-part-10.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933389 | 1,336 | 2.9375 | 3 |
User Experience (UX) is all about the emotions and responses a user experiences or anticipates from his/her interaction with a system or a service. This includes emotional experience, perceptions, behaviors, physical and psychological responses, and much more. Simply presenting the information or service to a user these days is not only ineffective but unacceptable. Your website or service needs to engage and interest the user. You would also want your user to feel comfortable and secure while consuming your service, which will in turn provide him/her with a positive experience.
JS is an interesting language. For much of its history it was misunderstood and undervalued and labeled a toy language, predominantly used for performing small tasks in HTML pages. But in reality, JS is a robust and powerful language. In this article I will touch upon how this “toy” language has evolved, and what is on the horizon for this little scripting language that has become one of the most trending technologies in the world!
Up until 10 years ago
With JS, we enhanced the web interface. It added behavior to the web page, where the web page responds to a user instantly without loading a new page to process a request. We did not have to fill an entire form and submit it only to be told that we had made a typo in a particular field and needed to re-enter data all over again. The search box gave you suggested results while you typed, information changed constantly without user intervention (stock market tickers), we obtained content on-demand, and facilitated animations to gain user attention to a specific portion of a page, or to make the page easier to use. None of these would have been possible without JS.
The past 5 ‘Golden’ years (purely client)
The brave new world of hyper-data and complex interactions has harnessed the power of traditional JS through model–view–controller (MVC) JS frameworks on the client side. The first such framework to gain successful adoption was Backbone.js. Today, AngularJS’s popularity and acceptance outshine those of any other framework, although Ember.js has a vibrant community of contributors and users. Interestingly, ReactJS, developed at Facebook, is now taking the lead from AngularJS.
Modern JS frameworks like these ease code management by providing clean and distinct application architectures (often using the MVC pattern) that can simplify development to a great extent. So, by employing one of these frameworks, we not only get highly responsive, desktop-like user interfaces (Single Page Applications), but also ensure a well-structured and easily maintainable code base, which can save effort and money in the long run. A good UX is all about the end user’s comfort. Users comfortably use a web app that acts much like a desktop application, updating only the relevant views as they interact with it. Also, independent of real performance, the user experiences a considerably faster website than a traditional-style website, resulting in far better UX and user retention.
All these frameworks share equal success in making truly complex user-facing applications a reality, and are radically changing the way we think about development.
Quick Fact- The Guardian (a giant news publication) and YouTube on PS3 shifted to Angular, clearly indicating that Angular is strong enough to scale to the most demanding of commercial environments.
Going real cross-platform
A web browser exists in almost every consumer-based computing device in one form or another, making JS apps highly versatile. Building a web app for a variety of platforms like Android, iOS, Windows, BlackBerry, and even the desktop becomes no different than developing native apps. Until recently, it was extremely expensive, if not impossible, to build a truly cross-platform app. Each platform had a proprietary technology that called for a separate code repo for every single device, driving up the cost and effort of cross-platform development in every possible way.
With cross-platform mobile frameworks like PhoneGap and Cordova, a native app for every single device can now be built using the same HTML5 and JS technology. Such apps offer an equally immersive UX, if not better, on all devices – all using standard web APIs to bridge the gap between web applications and mobile devices.
Quick Fact- Take a look at the official Wikipedia mobile application, which provides a native experience with web technologies, and is built using Phonegap.
Evolution of Server-Side JavaScript
In the past couple of years, the web has witnessed many data-intensive, rich internet, and highly real-time applications such as multiplayer online games, virtual meeting tools, and instant messaging engines. The server needs a responder to manipulate and process such requests and update the client. Java, .NET, and other mature technologies provide a variety of frameworks for the server to work as those responders. But for real-time and highly concurrent applications, choosing efficient server software and hardware is essential. Hiring for a different skill set like Java/.NET to work on this portion is not only costly, but it also leaves the client-side (JS) developers underutilized.
One such hugely successful server-side JavaScript (SSJS) framework is Node.js, an extremely simple and lightweight framework that has already been adopted by giants like Walmart, Yahoo, and PayPal in their critical user-facing business applications.
Quick Fact- Walmart witnessed ~50% of all its online traffic going through its Node.js servers during the recent Black Friday sales.
Over the past three decades, RDBMS remained dominant for database management, and it is still useful and relevant for many traditional applications. But the need to meet changing business demands such as high scalability, increased velocity, big data analytics, and social interaction has driven the adoption of unstructured databases (NoSQL) on low-cost commodity hardware.
Quick Fact- The Weather Channel turned to MongoDB to get killer features out to users quickly. Changes that used to take weeks can now be pushed out in hours.
The “things” we have today are not going to be dumb anymore. They can talk to each other with REST services in the cloud, with internet browsers, with mobile phones, and with a whole lot of other devices. JS is the only language that can be leveraged to write the software for all of them, with JSON being the interchange format.
The possibilities are just endless.
JS started off as scripting to stitch HTML blocks and grew to be a multi-paradigm language to develop real cross-platform apps, full stack web apps, and drones. It has entered an exciting cycle of evolution and innovation - stepping into every technology environment, elevating and evolving UX possibilities.
Everyone knows how rapidly technology evolves, but we in UxD are envisioning and celebrating the curve. We are not only prepared, but also exult in these good times, believing that there is plentiful innovation ahead of us.
Rumors are circulating that Ford may build the Google autonomous car.
Just a few weeks ago, Tesla made headlines when they released a software patch that lets you go hands-free and feet-free in a Model S sedan. While it’s a milestone in engineering and the first time a car has been able to drive at highway speeds for long periods of time unassisted, only Google seems to have the engineering chops to make a car drive on its own in downtown traffic, controlling speed, steering, and braking without human intervention. (Volvo certainly comes close.)
The reason it makes so much sense for Ford to partner with Google is that the two companies have been working together for years. You can already send Google Maps directions to the navigation system in your Ford Escape, for example, or use Android Auto. Way back in 2011, Ford announced they were working with Google on predictive analytics in the car. It was a way to find out if it was better for fuel economy to take a different route based on traffic conditions, among other things.
As FoxNews.com reported, it’s a bit of a no-brainer. Former Ford CEO Alan Mulally already sits on the Google board, and Ford opened an innovation lab in Silicon Valley earlier this year. My guess is that the lab was partly a way to be “closer to the action” and make some inroads (no pun intended) at Google, develop partnerships and just get their fingers in the pie.
But next year? Is that even possible? My view is that Tesla paved the way (literally, and also as an insurance and legal precedent) for the fully autonomous car, one that can understand traffic conditions. Self-driving car technology is all about the algorithms; it's basically a math problem. There are thousands of calculations related to speed, steering, and even the angle of the car on the road. And there are thousands of scenarios. As the truck ahead of you swerves to avoid a pylon in the road, should your car move to the left or to the right? Should it speed up or slow down? If the car brakes, will that be more dangerous or less dangerous?
The good news is that, despite what other automakers like Toyota say about the human driver being in control at all times, a computer is much better at using on-board sensors to scan in all directions at once and avoid problems. I’ve driven just about every modern car equipped with sensors to help with automated driving, and it feels a bit like having a shield that protects you.
Granted, no computer could ever assess every situation perfectly. But then again, humans can’t do that, either. I’d rather have the car avoid an accident I don’t see.
The one major detriment to making this a reality next year is called YouTube. The videos of Tesla drivers not paying attention as they drive are downright frightening. There’s a lot of excitement about the tech, but all it will take is one video showing a Model S crashing when the driver isn’t paying attention to push this tech back into the dark ages. Let's just hope that doesn't happen before the Ford-Google car exists.
This article is published as part of the IDG Contributor Network.
Mapping the country's changing biomass
- By Patrick Marshall
- Feb 23, 2009
There are a lot of reasons to be concerned about how much biomass the country has. For starters, land cover — whether it’s grass or old-growth forest — consumes carbon dioxide, one of the biggest contributors to the greenhouse effect.
Unfortunately, we haven’t had a good idea of how much biomass the country has, and we have even less insight into how much it is changing.
Researchers at the Woods Hole Research Center — an environmental science, education and policy institute with a Cape Cod, Mass., location — are rectifying that situation. Through the National Biomass and Carbon Dataset project, they are using data gathered on space shuttle missions and satellites to produce a map of the country’s biomass.
The project’s baseline is the National Elevation Dataset compiled by the U.S. Geological Survey, which records ground elevations for the continental United States, Alaska and Hawaii. The team at Woods Hole overlaid that information with data gathered during a space shuttle flight in 2000, which used radar to record the topography of the continental United States.
“It was the first radar mission and used two antennae, one inside the shuttle bay and one deployed 60 meters outside the shuttle bay on a boom,” said Josef Kellndorfer, the team’s leader and an associate scientist at Woods Hole. “From that mission, we got an interferometric signal that would allow us to retrieve topographical information.”
Kellndorfer said creating the National Biomass and Carbon Dataset was simply a matter of lining up the other two datasets and subtracting one from the other. “If you take ground elevation, which is available from the [U.S.] Geological Survey in the National Elevation Dataset, and subtract it from the elevation reading of the vegetation from the space shuttle, you get a reading of vegetation height,” he said.
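As a toy illustration of that height step only (the 3x3 grids below are made-up numbers, not actual NED or shuttle-radar values), the per-cell arithmetic is just a grid subtraction:

```python
# Toy illustration of the NBCD height calculation: vegetation height =
# radar surface elevation (shuttle mission) - bare-ground elevation (NED).
# All elevation values here are invented for illustration.

srtm_surface = [  # elevation of the top of the canopy, meters
    [120.0, 118.5, 117.0],
    [121.0, 135.0, 116.5],
    [119.5, 117.0, 115.0],
]
ned_ground = [    # bare-earth elevation, meters
    [119.0, 118.0, 116.5],
    [120.0, 110.0, 116.0],
    [119.0, 116.5, 114.5],
]

def vegetation_height(surface, ground):
    """Per-cell difference of the two elevation grids (meters)."""
    return [
        [s - g for s, g in zip(surface_row, ground_row)]
        for surface_row, ground_row in zip(surface, ground)
    ]

heights = vegetation_height(srtm_surface, ned_ground)
print(heights[1][1])  # 25.0 -- a tall forest cell in this toy grid
```

The real datasets of course involve millions of cells spanning the continental United States, which is why the team needed dedicated tools just to manage and analyze the imagery.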
However, to determine biomass from measurements of vegetation height, you need to know the density and type of vegetation in each map zone, Kellndorfer said. So the team turned to LandFire, a project funded by the Forest Service and the Interior Department, and the Forest Service’s Forest Inventory and Analysis project.
The team then used an open-source tool from the R Project for Statistical Computing to merge and model the data from the various sources and used PCI Geomatics to manage and analyze the immense amount of imagery. Finally, researchers relied on a number of applications from ESRI to perform the final integration of the data and deliver it to users via the ArcGIS Server 9.2 map service.
The result is that the team can derive estimates of biomass from satellite readings with about a 10 percent margin of error, Kellndorfer said.
“What’s nice is that at the end of the day, it’s going to be a fairly high-resolution dataset for the entire nation, except Alaska and Hawaii,” Kellndorfer said. “Whereas the Forest Service dataset is a statistical dataset, we’re actually mapping it out. It will be the first spatially explicit assessment of biomass for the country.”
However, the map’s full potential won’t be realized until the team goes through the process again. “That will allow a comparison over time,” Kellndorfer said. “We can hopefully detect regrowth, where carbon is accumulating. We can compare and get an idea of the carbon flux in the northeastern forests.”
Kellndorfer said the Agriculture Department and NASA have provided funding so the team can take a follow-up set of readings for the northeastern part of the country.
Patrick Marshall is a freelance technology writer for GCN. | <urn:uuid:be9ab75a-106d-4120-844a-7d5b99c1d645> | CC-MAIN-2017-04 | https://gcn.com/articles/2009/02/23/mapping-the-nations-changing-biomass.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00205-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916976 | 803 | 3.171875 | 3 |
The Vikram Sarabhai Space Center (VSSC) is acting as a proving ground for the future of GPUs and heterogeneous computing in India. According to an article that explored the center’s use and subsequent power, space and performance improvements following implementation of a GPU and CPU “hybrid” environment, there are significant benefits to moving from a CPU-only approach to supercomputing.
According to Vishal Dhupar, who manages Nvidia’s South Asian presence, VSSC wanted “equipment in a single room delivering 220 teraflops,” but to reach the 200+ teraflop range running a homegrown x86-tailored CFD application called PARAS, it would have needed 5,000 CPUs. Dhupar says that Nvidia “offered them the same architecture, [ability to] use the same room and offer a quantum jump in performance with a hybrid architecture of CPUs and GPUs.” He said that by adding 400 GPUs to the existing 400 CPUs, they got to their 220 teraflop goal.
In comparison, another Indian supercomputing center, Tata CRL, has a 170 teraflops system with 3600 CPUs built at a cost of $30 million. VSSC achieved 220 teraflops with an investment of $3-3.5 million.
Dhupar says that “Only the code that was more parallelized had to be tweaked and this gave them a 40x performance boost on one account and a 60x boost on the other.”
As a further point of comparison, Prashant L. Rao writes that “There’s a substantial energy efficiency advantage from using GPUs. VSSC consumes 150 kWh for generating 220 teraflops. Tata CRL, on the other hand, is using 2.5 mWh for 170 teraflops.”
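Taking the quoted figures at face value – and reading the quoted "kWh"/"mWh" as steady-state draws of 150 kW and 2.5 MW, which the context suggests – the efficiency gap works out to roughly 20x:

```python
# Back-of-envelope efficiency comparison from the figures quoted above.
# Assumes the quoted "150 kWh" and "2.5 mWh" mean steady-state draws of
# 150 kW and 2.5 MW (2,500 kW).

vssc_tflops, vssc_kw = 220, 150    # hybrid CPU+GPU system
tata_tflops, tata_kw = 170, 2500   # CPU-only system

vssc_gflops_per_watt = vssc_tflops * 1000 / (vssc_kw * 1000)
tata_gflops_per_watt = tata_tflops * 1000 / (tata_kw * 1000)

print(round(vssc_gflops_per_watt, 2))  # 1.47 GFLOPS per watt
print(round(tata_gflops_per_watt, 3))  # 0.068 GFLOPS per watt
print(round(vssc_gflops_per_watt / tata_gflops_per_watt))  # ~22x more efficient
```

Under those assumptions, the hybrid system delivers on the order of twenty times more floating-point throughput per watt than the CPU-only cluster.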
Rao also pointed to other differences between CPU-only and heterogeneous systems, noting “Cost being a perennial problem, Nvidia hopes to convince scientists that they should move their data centers onto GPUs. At the same time, it wants to boost the acceptance of CUDA. They have been looking at Message Passing Interface (MPI) for parallel computing. MPI is a subset of the CUDA framework. So, there’s no relearning. The framework has SDKs, debuggers, libraries, compilers etc. Whether you use Fortran, C or C++, it’s all supported,”
Vishal Dhupar summed up the focus on GPUs in the rapidly-growing Indian market (IDC estimates claim the HPC market in India is worth $200 million and is growing at a 10% annual rate), pointing to the price, performance and efficiency changes that hybrid computing could bring. He claims “with 2 teraflops available for $10,000, it changes the equation. We want every scientist or researcher to have this.”
This statement makes no bones about the fact that Nvidia is setting its sights on the Indian academic sector. The company hopes to provide these researchers with 2-8 teraflops on personal supercomputers and make it simple to mesh these together to form clusters or grid computing environments. | <urn:uuid:b375e640-7a58-4543-87ab-550d5ac2af89> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/10/17/indias_gpu_proving_ground/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939293 | 674 | 2.828125 | 3 |
Megharaj M. (University of South Australia; Cooperative Research Center for Contamination), Ramakrishnan B. (University of South Australia; Indian Agricultural Research Institute), and 7 more authors
Environment International | Year: 2011
Due to human activities to a greater extent and natural processes to some extent, a large number of organic chemical substances such as petroleum hydrocarbons, halogenated and nitroaromatic compounds, phthalate esters, solvents and pesticides pollute the soil and aquatic environments. Remediation of these polluted sites following the conventional engineering approaches based on physicochemical methods is both technically and economically challenging. Bioremediation that involves the capabilities of microorganisms in the removal of pollutants is the most promising, relatively efficient and cost-effective technology. However, the current bioremediation approaches suffer from a number of limitations which include the poor capabilities of microbial communities in the field, lesser bioavailability of contaminants on spatial and temporal scales, and absence of bench-mark values for efficacy testing of bioremediation for their widespread application in the field. The restoration of all natural functions of some polluted soils remains impractical and, hence, the application of the principle of function-directed remediation may be sufficient to minimize the risks of persistence and spreading of pollutants. This review selectively examines and provides a critical view on the knowledge gaps and limitations in field application strategies, approaches such as composting, electrobioremediation and microbe-assisted phytoremediation, and the use of probes and assays for monitoring and testing the efficacy of bioremediation of polluted sites. © 2011 Elsevier Ltd.
In the storage world, snapshots are a point-in-time copy of data. They have been around for some time and are increasingly being used by IT to protect stored data. A snapshot copies the metadata (that is, the index) of the data instead of copying the data itself. This means taking a snapshot is almost always instantaneous. This is one of the primary advantages of storage-based snapshots—they eliminate backup windows. In traditional backup deployments, applications either have to be taken off-line or suffer from degraded performance during backups (which is why traditional backups typically happen during off-peak hours). This means snapshot-based backups can be taken more frequently, improving recovery point objectives.
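The index-versus-data distinction can be sketched with a toy copy-on-write volume (a conceptual model only, not any vendor's actual on-disk format):

```python
# Conceptual sketch of a metadata-based snapshot: the snapshot copies the
# block index (pointers), not the blocks themselves. Writes after the
# snapshot go to new blocks (copy-on-write), so the snapshot still sees
# the old data. Not any vendor's real implementation.

class Volume:
    def __init__(self):
        self.blocks = {}   # block_id -> data
        self.index = {}    # logical address -> block_id
        self._next = 0

    def write(self, addr, data):
        """Copy-on-write: every write lands in a fresh block."""
        block_id = self._next
        self._next += 1
        self.blocks[block_id] = data
        self.index[addr] = block_id

    def snapshot(self):
        """Instant: duplicates only the (small) index, never the data."""
        return dict(self.index)

    def read(self, addr, index=None):
        idx = self.index if index is None else index
        return self.blocks[idx[addr]]

vol = Volume()
vol.write("/db/page0", "v1")
snap = vol.snapshot()          # point-in-time copy of metadata only
vol.write("/db/page0", "v2")   # a later write goes to a new block

print(vol.read("/db/page0"))        # v2  (live volume sees the new data)
print(vol.read("/db/page0", snap))  # v1  (snapshot view is unchanged)
```

Because `snapshot()` duplicates only the small index, its cost scales with the metadata rather than the data, which is why snapshots are effectively instantaneous and applications don't need to pause for a backup window.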
If you've ever been in a data center or computer room, you know it's not a fun place. Fans screech so loud you have to shout to have a conversation, and the computer room is kept at a frigid air temperature. That's air cooling for you.
An alternative is slowly, very slowly, creeping into the market. Market research firm IHS says in its new "Data Center Cooling Report—2014" that the market for liquid cooling infrastructure in data centers will grow by 40% this year to $2.3 billion. At that rate, this market will double in size in just two years.
Air cooling is more popular because it's easier to do and if it fails, the worst that happens is the computer shuts down. If liquid cooling fails, you have a mess on your hands. The drawbacks to air cooling are many; you need to build air conditioning into the whole data center with things like a raised floor for piping and HVAC systems. The whole room has to be an ice box so the intake fans can suck in cold air.
If you've never seen liquid cooling, it's pretty much what the name implies. Instead of a heat sink bolted into place over the CPU, a copper-based heat transfer element is bolted down and two tubes are attached. One brings cool liquid in to the heat absorber while the other sucks the hot liquid away from the CPU. The liquid keeps the copper element – copper is used because it transfers heat very efficiently – cool, which in turn keeps the CPU cool. That's known as direct to chip cooling.
In the more extreme cases, the entire electronics are submerged in a coolant. Obviously this is not a pure water coolant but a special formula that won't fry the electronics.
The real benefit of liquid cooling is that it can cut cooling expense by up to half. Liquid cooling occurs at the chip/server level, which means it is as physically close as possible to the heat source. The closer the heat dissipation occurs to a heat source, the more efficient the heat removal is, IHS says. No other heat dispersion method can get this close.
Liquid cooling isn't a total panacea. Water cooling must be contained in a closed environment so no contaminants get into the water or coolant. The up-front costs are much higher for water cooling than for air cooling as well, but it pays off over time if you have the right scenario.
Liquid cooling is popular with hobbyists and overclockers who like to jack their CPU to 5-6GHz, but in a standard PC, the stock heat sink is usually enough. More and more space is being dedicated to liquid cooling at retail outlets like Fry's Electronics and Micro Center and mail order outfits like NewEgg.com.
But those are for consumers. There's a whole separate breed of enterprise-class liquid immersion cooling firms, including Green Revolution Cooling, Iceotope and LiquidCool. A few companies that specialize in direct to chip cooling include Asetek, Chilldyne, and CoolIT.
Because liquid cooling can cost as much as five times that of air cooling, rack densities have to exceed the 15 kilowatt+ range before you will see any return on investment from liquid cooling, and servers aren't quite that dense. Because of this, IHS sees liquid cooling only accounting for 2% of total data center cooling revenues. It will be largely a niche practice used in high performance computing and ultra-dense systems.
Still, many technologies that started as a niche have become mainstream, so maybe this will catch on as well.
Wireless Security and Compliance
Wireless Intrusion Prevention
The explosive growth in wireless networks has been matched by an explosive growth in attempts to attack, hack and otherwise compromise these networks. While basic wireless network security standards have improved, there are many environments (including financial institutions, defence, healthcare and education) where a breach of security could be catastrophic to the organisation under attack.
The primary purpose of a wireless intrusion prevention system (WIPS) is to prevent unauthorised network access by wireless devices. Some advanced wireless infrastructure products include limited integrated WIPS capabilities.
Large organisations are particularly vulnerable to security breaches caused by rogue access points and unauthorised wireless devices attaching to the network. If an employee (trusted entity) in a location brings in an easily available wireless router, the entire network can be exposed to anyone within range of the signals.
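At its core, the rogue-AP detection a WIPS performs boils down to comparing what its sensors observe over the air against the list of access points IT has authorized. A highly simplified sketch of that classification step (all BSSIDs and SSIDs below are invented):

```python
# Simplified core of a rogue-AP check: any transmitting access point
# whose BSSID is not on the authorized list is flagged.
# All BSSIDs/SSIDs here are made up for illustration.

authorized = {
    "00:1a:2b:3c:4d:01",   # lobby AP
    "00:1a:2b:3c:4d:02",   # finance floor AP
}

observed = [  # what the wireless sensors see over the air
    {"bssid": "00:1a:2b:3c:4d:01", "ssid": "corp-wifi"},
    {"bssid": "de:ad:be:ef:00:01", "ssid": "corp-wifi"},  # spoofing the corporate SSID
    {"bssid": "aa:bb:cc:dd:ee:ff", "ssid": "linksys"},    # an employee's own router
]

def find_rogues(observed, authorized):
    """Flag every transmitting AP whose BSSID is not authorized."""
    return [ap for ap in observed if ap["bssid"] not in authorized]

rogues = find_rogues(observed, authorized)
print([ap["bssid"] for ap in rogues])
# ['de:ad:be:ef:00:01', 'aa:bb:cc:dd:ee:ff']
```

A production WIPS does far more than this – correlating signal strength, wired-side presence and client associations, and actively containing rogues – but the classification step above is the heart of it.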
PCI DSS Compliance
The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard for organisations that handle cardholder information for the major debit, credit, prepaid, e-purse, ATM, and POS cards. Defined by the Payment Card Industry Security Standards Council, the standard was created to increase controls around cardholder data and so reduce credit card fraud arising from its exposure.
In July 2009, the PCI Security Standards Council published wireless guidelines for PCI DSS recommending the use of WIPS to automate wireless scanning for large organizations.
Leading security vendors have developed cost-effective, cloud-based solutions which manage large numbers of sites (e.g. retail or fast food chains) spread across large geographical areas from a central location.
Research from the University of California San Diego and the University of Washington – which concludes that modern cars are susceptible to wireless hacking – is the result of security issues being ignored at the car electronics software design stage, say Fortify Software.
With the latest cars now coming with as many as 50 or more interconnected computer systems – controlling everything from the brakes to the door locks and ignition system – now that the vehicles are becoming wirelessly-enabled, they are a lot easier to electronically hack into.
“It’s interesting to see that the researchers have identified that most cars built since the late 1990s have a computer diagnostic port, since this port needs direct physical access to operate and therefore hack. But now these systems are being wirelessly enabled and held together with several tens of megabytes of code, it’s a relatively small step to modify the code and allow hackers an easy – and wireless – back door into a car’s computer system,” said Barmak Meftah, CPO at Fortify Software.
This was no theoretical exercise, as the researchers were able to load new firmware onto their own circuit board and, by plugging the board into the car’s internal network, translate the data flowing between the vehicle and a laptop.
This reverse engineering process allowed the researchers to develop a customized vehicle network interface and effectively take control of the car’s electronic nervous system.
So far, so normal, but the killer hack was when the researchers were able to generate network commands wirelessly from another car.
“In theory this will eventually allow a wireless drive-by attack on the firmware of a car, to the point where its central locking and ignition protection systems can be disabled. A professional thief can then saunter up, open the car and simply drive off,” Meftah explained.
Car manufacturers should have foreseen the development of hacking attacks on their vehicle computer systems and built security safeguards into the firmware to stop this type of electronic hacking.
“It’s all very well saying that the manufacturers should enhance the security of their car computer networks and the protocols used, but this potential fiasco could have been avoided if car developers had built security in from the ground up on a vehicle’s electronics systems. That way, if someone were to hack into the electronics, the car’s central nervous system would realize it was under attack and take appropriate action, such as immobilising the vehicle,” he said.
When you consider the high standard of IT defenses that a typical office server has built in, it seems strange that something like a car – which costs ten times the price of a server, and then some – does not have similar levels of protection.
This is an addition problem. y must be either a 1 or a 2 in the answer (yxxxz) when you add the three numbers (x + y + z). In order for z to appear at the end of the answer (yxxxz), x + y must add up to ten. If x + y must add up to 10, then y has to be 1 (because 10 + z would not be greater than 19). If y is 1, then x must be a 9. z has to be one less than x, because you carry the one to the second column to make 9.
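The puzzle statement itself isn't quoted in this thread. Assuming it is the classic layout xxxx + yyyy + zzzz = yxxxz (an assumption consistent with every step of the reasoning above), the logic can be checked by brute force:

```shell
# Brute-force check of the digits, assuming the puzzle reads
# xxxx + yyyy + zzzz = yxxxz (the problem statement is not quoted above).
solution=""
for x in 1 2 3 4 5 6 7 8 9; do
  for y in 1 2 3 4 5 6 7 8 9; do
    for z in 1 2 3 4 5 6 7 8 9; do
      sum=$(( 1111*x + 1111*y + 1111*z ))   # value of xxxx + yyyy + zzzz
      target=$(( 10000*y + 1110*x + z ))    # value of the digit string yxxxz
      if [ "$sum" -eq "$target" ]; then
        solution="x=$x y=$y z=$z"
        echo "$solution"
      fi
    done
  done
done
```

The only solution is x=9, y=1, z=8 (9999 + 1111 + 8888 = 19998), which matches the logic above: x + y = 10, y = 1, and z = x − 1.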
The Internet of Things is making children and schools smarter, according to a survey of more than 600 IT leaders from higher education organisations in the US.
Nearly half of the professionals who took part in the survey (46 percent) said they believe that IoT will have a major impact on kids’ learning in schools over the next two years.
In the US, a smart school is a concept where schools use IoT technology to enhance the learning experience of students in different subjects. Tech could be smart boards, VR, robots and wearables.
Many in the dark
The survey, conducted by Extreme Networks, revealed that 9 percent of the education professionals have begun adopting the framework. Another 3 percent said they plan to implement it in the near future.
These adoption statistics pale in comparison to the number of people who aren’t even aware of smart schools. 29 percent said the idea of a smart school is completely new to them, while 36 percent are only slightly aware.
This could come down to the fact that the requirements of the concept are relatively demanding. Schools need to have Wi-Fi running throughout their campuses and are expected to purchase expensive IoT devices.
Rich benefits from the Internet of Things
That said, smart schools offer a heap of benefits. For starters, children – from kindergarten right up to college age – are likely to become more engaged with their learning because they’re familiar with digital formats. IoT also allows for an interactive, personalised learning experience.
Jon Silvera, managing director of FUZE Technologies, believes that while IoT will impact younger generations the most, schools are ill-equipped in terms of resources and skills to teach how such tech works.
He told Internet of Business: “The Internet of Things is here to stay and yet it seems as though most people don’t know what it is. The people it will affect most is our youngest generations which means there is an unfortunate mismatch here.
“The education channel is ill-equipped in both skills and resources to teach the programming skills required to control such devices. And as such, as has been painfully proven by the software industry, our young people are destined to be users, not creators, of this incredibly important new technology.”
Why Is Healthcare Data Attracting More Cyber Criminals?
Human error, bugs in hardware, application software and operating systems add to the complexity of securing healthcare data—and data breaches are not uncommon.
U.S. federal law requires any organization dealing with private health information to be HIPAA compliant, which means that hospitals and healthcare organizations must adhere to HIPAA's strict security guidelines. In a radical change from just a few years ago, many healthcare organizations utilize the public cloud, allowing at least some PHI or other personal data to be accessible through the Internet. Unfortunately, human error, bugs in hardware, application software and operating systems add to the complexity of securing healthcare data—and data breaches are not uncommon.
Data breaches to a healthcare organization can be a crippling threat for multiple reasons. A security breach decreases patient trust, and organizations may incur liability to reimburse patients for costs or damages from the breach. Healthcare organizations will also have to spend time and money, often by hiring third parties, to help patients monitor their credit reports for fraud and other issues. In addition, organizations are often compelled to terminate the employees responsible for the lax security that caused the data breach. Avoid these potentially costly issues by considering the four suggestions discussed here. Ultimately, they can help healthcare organizations decrease costs and increase the security of PHI.
In order to increase the security of healthcare data, there are four key areas that need to be addressed.
- Providing healthcare services requires many actors to have access to PHI across geographies. A national personal eHealthcare Record (eHR) would increase data security.
- The healthcare industry should mimic the financial services industry and adopt best practices.
- Mitigate size and scale issues of smaller healthcare organizations by using HIPAA-compliant cloud-managed services providers and applications.
- Implement a national system of two-tiered eHR records where the top tier provides a higher level of security for more sensitive personal diagnoses.
Some say that the best way to learn how to do something is by watching others. That can be a little tricky with routing and switching, but fortunately Cisco has published their own internal standards for IP addressing.
Cisco’s global network engineers have learned to make IP address planning a required part of planning any new service in order to manage the limited and sometimes inflexible resource of available IP address blocks. Public address space is conserved as much as possible, and private addresses are used in a variety of services whenever it makes sense.
Their best-practices document (available online as a PDF) describes the Cisco IT standards for IP addressing, including how, where, and why private and public addressing is used. It also describes some of the priorities and policies related to allocating address blocks within a network and managing address resources within the enterprise.
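Cisco's actual allocations live only in that PDF, but the flavor of the practice — carving per-site blocks out of a single reserved private range so the whole plan stays summarizable — can be sketched. The 10.1.0.0/16 parent block and the per-site /24 size below are illustrative assumptions, not values from the Cisco document:

```shell
# Hypothetical plan: each branch site receives one /24 subnet out of a
# reserved 10.1.0.0/16 block of RFC 1918 private address space.
parent="10.1"
for site in 0 1 2 3; do
  printf 'branch-%d: %s.%d.0/24\n' "$site" "$parent" "$site"
done
```

Deriving every site's block from one parent range is what later makes a single summary advertisement (10.1.0.0/16) possible, rather than one route per site.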
The correct answer is 1.
An IPv6 anycast address is a new address type in IPv6. An anycast address is assigned to multiple devices providing the same service and is used for one-to-nearest communication. The router will send the traffic to the nearest (as defined by the routing protocol) device with the anycast address. Anycast addresses are allocated from the global unicast address space.
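As a hedged illustration (not part of the quoted answer), configuring the same anycast address on each router that offers a service might look like the following on Cisco IOS. The documentation-range prefix 2001:db8::53 and the loopback interface are assumptions:

```
! On each router that provides the service (assumed IOS syntax)
interface Loopback0
 ipv6 address 2001:db8::53/128 anycast
```

Each router then advertises the /128 into the routing protocol, and clients reach whichever instance is topologically nearest.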
For more questions like these, try our CCNA Cert Check
We have been in the Age of Information. What comes next? More exactly, what will we call what comes next?
We already know the answer to the substantial part of that question. The dominant technology after information processors pretty clearly is the Internet. So, the smart money is on the era being called the Age of the Internet.
There are, of course, lots of ways that that bet could go wrong. Perhaps the name "Internet" will go away. Maybe it will be the Age of the Web. Or the Age of Social Media. Or the Google Age. Or the Age of Some Net Service Not Yet Invented. Or, our age could instantly be renamed the Age of Nuclear Desolation or the Age of Infection or even just Queen Oprah’s Age. History’s funny that way. It likes to make fools of those who predict it. (One of my favorite jokes is an old Jewish one: Want to make God laugh? Tell him your plans.)
But, I’m not actually all that interested in the question of what name we’ll apply to the coming age. I’m more interested in what you would call the Internet Age if you had to name it after its most important attribute.
For example, you might call it the Age of Connection or the Age of Connectedness. Doing so would say that you think the single most important factor about the Internet is that it lets us communicate and lets us draw relationships. Rather than fixating on objects—things with boundaries—the Net has drawn our attention to how objects overcome their isolation. Or, perhaps this would say that the Internet hasn’t created new connections, but has shown the unnaturalness of our old idea that the world consists primarily of isolated objects; the boundaries were smudged all along, but we just couldn’t admit it. The Age of Connectedness also has the advantage as a phrase of implying that the exact mechanism of the connections—by voice or by typing, etc.—isn’t as important as the fact of the connection itself.
Or perhaps we might call it the Age of the Hyperlink. That singles out one particular feature of one particular Internet application—the World Wide Web—but it is a remarkably important feature, especially in contrast to the previous eras. Paper has dominated our culture for over a thousand years because it’s how we preserved our ideas and thus built a culture upon them. But, paper is a disconnected medium. It’s very difficult to go from a reference to the thing referred to. This has a profound effect on the shape of knowledge and, even more so, on the nature of authority. Since it is so difficult to see how a written idea came to be, we accept authorities as stopping points for inquiry. Hyperlinks let us embed the sources that support, amplify or contradict what we’ve just written. That changes how we write, how we read and how we come to belief.
Or, we could call it the Digital Age. This refers to the underlying "material" of the new age, a traditional way of naming epochs. I put "material" in quotes because the strength of the digital is exactly that it isn’t material: The same digital information can be expressed in electrical voltage levels in silicon chips or in holes in paper punch cards. But I’d vote against this particular label. First, it insufficiently distinguishes it from the Information Age, which was also bit-based. Second, I think you can explain the new age as a reaction against the digitizing of our world, even while we are digitizing more and more of it. (History’s funny that way.) The Web runs on bits, but the Web uses bits to enable people to connect socially in ways that overflow expectations, whereas the primary use of bits in the Information Age was to reduce what we know to what was manageable by computers. That explanation is itself a vast over-simplification, of course, but it leads me to be suspicious of characterizing this new age in terms of bits and digitization.
Or, we could call this the Age of Abundance. Obviously, there are still scarcities in the world, many of which severely limit people’s lives. But there is an unthinkable abundance of ideas, creative works and connections on the Web. This abundance wrecks many economic models, and it subverts the authority of traditional institutions. Much of the Net’s effect can be understood by starting from the abundance it enables. (Meanwhile, we’d better get cracking on the material scarcities that are killing people.)
But, I’d like to suggest one other possibility: The Age of Difference. I don’t seriously think that that will be our name for this era, but I do think it captures much of what’s distinctive of it. Our previous ages have managed the problem of scaling knowledge and creative works by imposing scarcity. For example, for the past couple of thousand years, there’s already been too much to know by any one individual, so we’ve set up systems that limit what we have set before us. We have authorities who filter information and educational systems that test us on set curricula; we establish canons of great works; we create information processors that work so long as we pre-process the world into rows and columns of data. Most of all, we have extended the concept of there being a single right answer from axiomatic systems such as math to just about every area of life. We thus squeeze out differences.
We hire experts whom we expect to give us clear guidance, and we often perceive as weak political leaders who acknowledge complexity and nuance.
But the Web manifests all the differences we had managed to ignore. No one agrees on anything, and now all that disagreement is put right before our eyes. We can see the differences in the links, and, at best, in conversation. In one view, the Internet is causing people to huddle with those with whom they agree. A more hopeful view says that while there is certainly some huddling, there is also an inevitable awareness that we live in a diverse world ... and that there is value in that diversity. That is a hope, and it is a goal worth struggling for.
If we indeed come to appreciate the strength, vigor and wisdom of difference, then that would be a change worthy of titling an age.
Over the past 10 years, agencies have made minimal use of their special government employees (SGE) who do not serve on Federal boards, according to a recent Government Accountability Office (GAO) report. However, the report also states that not all of these SGEs’ assignments have been reliably documented.
An SGE is an officer hired by a Federal agency who is appointed to a certain task. SGEs can work on a full-time or intermittent basis, but they cannot work for an agency for more than 130 days of the year. For example, the Department of Health and Human Services’ (HHS) SGEs work as specialized medical professionals for the National Disaster Medical System. The Department of State’s (DS) SGEs are foreign affairs officers or embassy inspectors.
According to the report, the Office of Government Ethics (OGE) found that several agencies failed to report accurate data on their SGEs who did not serve on Federal boards. Agencies are required to report whether their SGEs serve on Federal boards. They are also required to identify SGEs based on an individual’s personnel action. For this study, GAO reviewed HHS, DS, the Department of Justice (DOJ), the National Science Foundation (NSF), and the Nuclear Regulatory Commission (NRC).
GAO reported that three of the five agencies did not report reliable data. HHS did not identify which SGEs served on Federal boards; the agency offered no explanation for this discrepancy. NRC and DS misidentified SGEs serving on Federal boards, but subsequently offered correct data. The report says that weak internal communication and misunderstanding contributed to these lapses in correct information.
Most SGEs serve on Federal boards. The report states that, as of December 2014, only 3 percent of SGEs did not serve on Federal boards. The unreliable data GAO uncovered suggests that employers could potentially have SGEs who served on Federal boards longer than their allotted time permitted. Because some of their data was inaccurate, managers would not know about the violation.
“Stronger data would better position agencies to report on SGEs and provide the required ethics training,” the GAO stated in its report. “Moreover, accurate and complete data are important to allow OGE and Congress to provide informed oversight of agencies.”
Three of the five agencies in this study appoint their SGEs through expert and consulting hiring authorities. The other two agencies use their own agency specific authorities. In four of the five agencies, supervisors track SGEs’ days of service. The other agency allows SGEs to track their own days.
The report stated that GAO recommends HHS improve the reliability of its data on SGEs who do not serve on Federal boards. GAO also recommends that OGE continue to identify whether other agencies are experiencing data challenges similar to those at DS, HHS, and NRC.
I have to thank Frank Scavo for making me think harder about what context means. I and several people I know use the term liberally, and perhaps not very consistently.
Here is my hypothesis –
Answer to every question has a core (which has great precision) and a context (less precise, but without it the core cannot be meaningfully interpreted).
1. Additional questions maybe needed to get context
If all I ask you on the phone is “should I turn right or left to reach your office”, you probably will ask me something in return, like “are you coming from north or south”. Without this additional information, you cannot help me. Right or left is a precise answer, but what is on my right might be on your left or right. Without extra information, you cannot help me with a precise answer.

2. You can infer all or part of the context from historical information.

Maybe you know from your morning commute that I could never be driving from the south side on that street, given that side of the road is blocked for construction. So you can give me a precise left or right answer without asking me anything further.
3. Context can change with time
Perhaps turning right will be the shortest distance to your office, yet you might ask me to turn left since you know the rush-hour traffic going on now will slow me down. If I had asked you two hours later, you could have given me the exact opposite answer and still be correct.
4. Multiple things together might be needed to provide context
It is very seldom that one extra bit of information is all you need to make a determination. When I called you during rush hour, if it was raining, you might have asked me to take a left turn so that I would get covered parking and a shuttle to ride to your office. On a sunny day, you could have pointed me to an open lot from where I could have walked a short distance to reach you.
5. Context is progressively determined
As the number of influencing factors increases, you have to determine trade-offs progressively to arrive at a useful context. You might know exactly all the right questions to ask to give me the best answer, but if you were pressed for time, you could have told me an answer without considering the entire context. It would have been precise, but probably of limited use to me.
6. Context is user dependent
If I reached your assistant instead of you, she probably would need a whole different context to be provided before she could tell me which way to turn. She might have never taken the route you take to work, and hence might not have seen that southbound traffic is closed off. She might not have realized it is raining outside, given she was in meetings all day.

If I am your vendor and you know I am coming there to make a pitch that you have limited interest in, you probably won’t think through all the contextual information. If I am your customer, maybe you will go out of your way to tell me not just to turn right, but also that the particular turn comes 100 yards after the big grocery store I will find on my right.
7. More information does not always lead to better context
If I overloaded you with information, you probably could not have figured out all the trade-offs in the few seconds you have before responding. Your best answer might not be optimal. And if you take very long to respond, I might pass the place to make the turn and then have to track back, making it needlessly harder for both of us.
8. Context may be more useful than precision

Instead of giving me a precise left or right answer, you might tell me to park in front of the big train station and wait for your company shuttle to pick me up. That was not the precise answer to my question, but it still was more useful to me.

This was just a simple question with only two possibilities as precise answers. Think of a question in a business scenario. “How are our top customers doing?” is a common question that you can hear at a company. However, you can’t answer that question in any meaningful way without plenty of context.

The eventual precise answer is “good” or “bad”. What makes the question difficult is that it could mean a lot of different things.

1. What is a top customer? Most volume? Most sales? Most profit? Longest history with the company? Most visible in the industry? Most market cap?

2. Who is asking? The CMO and CFO might not have the same idea of what makes a top customer.

3. How many should you consider as top customers amongst all your customers?
And so on ..
Information systems in the majority of companies do not have the ability to collect the context of a question. Hence they may or may not give useful answers without a human user doing most of the thinking and combining various “precise” answers to find a “useful” answer.

That is a long-winded way of saying “context is what makes precision useful”.

OK, I am done – let me know if this makes any sense at all, and more importantly whether it resonates with your idea of what context means.
English is a perplexing language. For example, consider the words moon and good. To the uninitiated, the words should rhyme, but the former is pronounced /mun/ (according to the International Pronunciation Alphabet), while the latter is spoken /good/. Seemingly, the only rule in English is exception.
UNIX shells are equally perplexing. For instance, in the Bourne shell (and most common UNIX shells), the phrases '$var', "$var", and `$var` look alike but yield substantially different results.
(Each CLI in the shell examples presented in this article is prefaced with the name of the active shell and the command number.)
bash-1) # Demonstrate the differences between single-, double-, and back quotes
bash-2) var=ls
bash-3) echo '$var'
$var
bash-4) echo "$var"
ls
bash-5) echo `$var`
Rakefile app bin components config db doc lib log patches public script src test tmp vendor
In the sequence above, the variable var is set to the two-letter string ls. In the initial echo command, the apostrophes prevent interpretation of the variable, instead yielding a verbatim copy of the quoted text, the four-letter string $var. Next, in command 4, the double quotation marks do interpret the variable, so the result is the string ls. Finally, the back quotes both interpret the variable and run the intermediate result as a subshell. Thus, `$var` yields the intermediate string ls, which runs as a shell command to produce the contents of the local directory.
Certainly, all three operators—the single, double, and back quotation marks—serve a valid purpose, but like exceptions in English, memorizing and mastering the variations can be maddening. Here's more proof: What's the difference between the phrases "$var" and $var? (Assume $var contains whitespace.)
bash-1) # Create three files and try to remove two
bash-2) touch three two one
bash-3) var="one two"
bash-4) rm "$var"
rm: one two: No such file or directory
bash-5) rm $var
bash-6) ls
three
If a variable contains whitespace, the double quotation marks keep the variable expansion intact as a single argument. Otherwise, any whitespace in the variable is interpreted as an argument delimiter.
Yep. Shell syntax can be maddening. And that's unfortunate, because it makes the CLI—one of the most powerful features of UNIX—more difficult to learn. Indeed, inconsistencies like those described above fluster hardened veterans, too.
fish—the Friendly Interactive Shell—swims upstream against the tide of obfuscation, providing streamlined syntax and a much-improved user experience. Like other shells, fish provides redirection, shortcuts, globbing (that is, expansion of wildcards), subshells, tab completion, and variables. Unlike alternatives, however, fish also provides color-coded CLIs, an extensive command-line editor, and rich documentation.
Additionally (and intelligently), fish provides only one way to do anything. If a UNIX utility achieves a particular task, fish does not repeat the feature as a built-in command. For example, fish uses the system-wide application /bin/kill to terminate processes. (By comparison, the Bourne shell implements its own version of kill as a built-in application. You can type /bin/kill at the Bourne shell prompt to use the application, instead.) Whenever possible, fish prefers simplicity over flexibility, making it far more predictable to use.
Here, let's install fish, reel it in, and try just some of its many features.
Fish is an open source project created by Axel Liljencrantz and licensed under the GNU General Public License, version 2. As of this writing, the latest version of fish is 1.23.0, released 13 January 2008.
If you use UNIX or a UNIX-like system, such as Linux® or Mac OS X, fish should build from source code readily and easily on your system. Here are the steps, as shown in Listing 1:
- Download the most recent source tarball of the program.
- Unpack it.
- Change to the source directory.
- Configure the build.
- Build the shell with make.
- Install it with sudo make install.
Listing 1. Build fish from source
bash-1) wget http://www.fishshell.org/files/1.23.0/fish-1.23.0.tar.gz
bash-2) tar xzvf fish-1.23.0.tar.gz
bash-3) cd fish-1.23.0
bash-4) ./configure --without-xsel
checking if autoconf needs to be run... no
checking if autoheader needs to be run... no
checking for /usr/pkg/include include directory... no
...
bash-5) make
gcc -c -o function.o function.c
...
bash-6) sudo make install
...
To use fish as your login shell:
* add the line '/usr/bin/fish' to the file '/etc/shells'.
* use the command 'chsh -s /usr/bin/fish'.
If you're using a UNIX-like system, the additional flags to configure aren't strictly needed. However, to minimize dependencies and to keep fish in the same directory structure as common shells, you can add flags such as --without-xsel, as shown in Listing 1. (If you use Mac OS X version 10.5 Leopard, also add the parameter LDFLAGS=-liconv. If you omit the latter option on Mac OS X, the attendant fish utilities fail to build.)
Optionally, if you use a popular UNIX variant, you can likely find pre-built binaries ready to install on your distribution. For example, if you use Debian Linux, you can install fish in an instant with the command sudo apt-get install fish. Check the fish home page for availability for your system.
School is in
Before diving into more complex topics, let's look at how common shell tasks are accomplished in fish.
- To redirect standard input and standard output, use the operators < and >, respectively. To redirect standard error, use the caret (^), as shown in Figure 1. To append standard error to a file, use ^^.
Figure 1. Redirect standard error with the caret operator
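The screenshot for Figure 1 does not survive in this text version. Judging from the discussion below (commands 3 and 4 and a file named errors), the session was along these lines; the file name missing is an assumption:

```
fish-1) # Try to remove a file that does not exist
fish-2) rm missing
rm: missing: No such file or directory
fish-3) rm missing ^errors
fish-4) cat errors
rm: missing: No such file or directory
```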
In command 3, the error messages produced by rm are captured in the file named errors. Command 4 shows the contents of the file. The fish shell has rich support for redirection, such as combining descriptors into one stream and closing descriptors.
By the way, the colored and underlined text shown isn't editorial. The shell highlights text in the CLI as you type. In the lines, green indicates that the command name is valid; an invalid command name is colored red. The underline is a hint that the named file exists. (A section below covers shell feedback in more detail.)
- To run a subshell, use the parentheses (()), as shown in Figure 2. Text enclosed within parentheses is interpreted as a list of commands and replaced by the result.
Figure 2. Use parentheses to run a subshell
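Figure 2's screenshot is also missing from this version. A representative fish subshell session might be the following; the log-file example is an assumption, and the date shown depends on when the command runs:

```
fish-1) # Name a log file after today's date
fish-2) touch (date +%F).log
fish-3) ls *.log
2008-01-13.log
```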
- To create an alias, or shortcut, create a function. A function can contain one or more commands, and the special variable $argv automatically expands to the list of arguments passed on the original command line.
You can list all defined functions with the command functions. To erase a function, use functions --erase name, as in functions --erase ll.
You can also instantly save any function you write at the command line. When your code is complete, type funcsave name, such as funcsave ll. The function is immediately available to all your currently running shells and all future shells, as well. The command funced name interactively edits an existing function. The funced command has full syntax highlighting, tab completion, and automatic indenting; funced makes it easy to customize your shell.
- To set a variable, type set variable_name value. As with the functions built-in, type set --erase variable_name to "unset," or erase, a variable. To retrieve the value stored in the variable, type a dollar sign ($) followed by the variable's name, as shown in Figure 3.
Figure 3. Test for the existence of a variable
Use the --query option to test whether a variable is defined. If the variable is set, set --query returns a status code of 0, indicating that no error occurred; otherwise, it returns 1. Statement 6 chains two commands with the or operator: the second command (echo) executes only if the first command fails.
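Putting these pieces together (the variable name here is arbitrary):

```fish
set os (uname)            # set a variable
echo $os                  # retrieve its value
set --erase os            # erase it
set --query os; or echo "os is not set"
```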
So, how does fish handle the dreaded $var? True to form, it follows a few simple rules:
- If a variable contains whitespace, the whitespace is always preserved, and the variable always evaluates to a single argument (see Figure 4).
Figure 4. Fish keeps strings with embedded whitespace intact
- If the double quotation mark is the outermost quote, all variables are expanded.
- If the single quotation mark is the outermost quote, no variables are expanded.
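A brief sketch of the three rules (the file name is hypothetical):

```fish
set name "new file"   # the value contains whitespace
touch $name           # creates exactly one file, "new file"
echo "$name"          # double quotes: the variable is expanded
echo '$name'          # single quotes: prints the literal text $name
```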
Let's look at how these rules work in practice. Command 1 creates four files, where the last file has whitespace in its name. Commands 3 and 4 delete the file named by the variable. Commands 6 and 7 delete the two files named in the variable. Look closely at command 6: Because the value is not placed in quotation marks (either single or double), whitespace is not protected. Hence, command 7 expands the variable into two arguments and deletes both files. Commands 9 and 10 reiterate the scenario in commands 6 and 7.
Commands 11 and 12 demonstrate the whitespace rule. Even though the variable is not surrounded by double quotation marks in command 12, fish maintains the whitespace set in command 11. Very nice.
Commands 14 through 16 exhibit fish's nested quoting rules.
Now, glance again at commands 11, 15, and 16. The shell uses color codes to display balanced quotation marks and reinforce proper syntax. Look, too, at commands 9 and 11. The latter command underlined the file name, indicating that it exists. The missing underline in command 9 is a big hint that you did something wrong. Such friendliness is, after all, the source of fish's first name.
A great feature for landlubbers
Speaking of friendly features, fish's tab completion feature is another novelty that new UNIX users—and experts—find extremely useful. To see completion in action, type along in the example that follows, pressing the Tab key at the end of each line.
If you're unsure of a command name, you can press Tab after typing a few letters to view a list of possible completions, as Figure 5 shows. (The list of completions on your system may differ from the list shown here. The list depends both on your PATH environment variable and the contents of your UNIX system.)
Figure 5. Press Tab to complete a command name
Notice the red text in the CLI. If fish doesn't recognize the name of a command, it is shown in red. Pressing Tab reveals all the application names—with a brief description—that begin with what you've typed so far. You can also press Tab at an empty prompt to see all applications in your PATH.
If you'd like to know what options are available for a command, press Tab after a single hyphen (-) or double hyphen (--), as shown in Figure 6.
Figure 6. You can also press Tab to complete an option
fish tells you which options are available. The
shell maintains a large index of common commands and options, and chances are, you
can get the help you need. However, custom or more esoteric utilities may lack such
data. You can read the
fish documentation to learn more
about writing your own completions.
You can also press Tab after typing a few letters of the option, as Figure 7 shows. The shell displays all the possible matches.
Figure 7. You can type part of an option, too
If you don't know what kind of operand a command manipulates, fish can help—in many cases, but not all. For example, if you type set (the fish variable editor), a space, and then press Tab, fish presents a list of available variables. The shell knows that the first argument of set is the name of a variable. Similarly, if you type type, a space, and then press Tab, fish displays a list of built-ins and functions that extend the utilities available to you on the file system.
In general, all the built-ins and functions included with fish have context-sensitive operand completion. Try cd, for example, as shown in Figure 8.
Figure 8. Many commands are context sensitive and present suitable arguments
The cd command is a fish function and is aware that its operand must be an existing directory. When you press Tab after typing cd, fish presents all the existing directories contained in every directory in your CDPATH.
Another smart completion is associated with ssh. Type ssh followed by a space, and then press Tab to see a list of known hostnames taken from your Secure Shell known hosts file (typically found in ~/.ssh/known_hosts):
fish-1) ssh
login.example.com (Hostname)    host1.example.com (Hostname)
The fish shell also completes file names and directory names. Again, it highlights correct elements as you type path names.
One significant difference between fish and other shells is the lack of history shorthand, such as !! and !$.
Casting for more fish
If you like fish and want to adopt it as your login shell, add the path to fish to the official list of shells, /etc/shells, and then run chsh:
bash-1) type fish
fish is /usr/bin/fish
bash-2) sudo vi /etc/shells
(Add the line /usr/bin/fish to the file if it's missing, and save the file.)
bash-3) cat /etc/shells
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
/usr/bin/fish
bash-4) chsh -s /usr/bin/fish
Changing shell for strike
Password: ********
bash-5) login strike
Password: ********
Last login: Wed Oct 8 15:02:21 on ttys000
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
fish-1) echo $SHELL
/usr/bin/fish
There is a lot to discover and like in fish. Dare I say it? There are plenty more fish in the "C."
You can tweak the colors in syntax highlighting. You can customize your startup by editing ~/.config/fish/config.fish. You can share variables across shell instances using universal variables and fishd. The shell also has a great history search feature, an interactive variable editor, and more.
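For example, a universal variable set in one shell is immediately visible in every other running fish and persists across sessions (a minimal sketch; the variable and value are arbitrary):

```fish
set --universal EDITOR vim   # shared by all fish instances via fishd
```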
Best of all, there is a tremendous amount of documentation available from fish itself. If you need help, just type help at any command prompt.
The doctors are right:
fish is good for you.
- Speaking UNIX: Check out other parts in this series.
- Learn more about the design of fish.
- See more screen shots of fish.
- Learn more about UNIX shells.
News Article | November 20, 2015
In this weekly column, science writer Carrie Poppy puts together the most striking and telling science images from the past week's news for your viewing pleasure. Scroll down to find phenomenal images and fascinating facts about the science behind them.

This week, researchers concluded that bats have developed the uncanny ability to land upside down via their heavy and powerful wings. This photo from the National Park Service was posted in celebration of National Bat Week. Eleven bat species live in Olympic National Park.

On the subject of frequent fliers, Tech Times reported on a fossil revealing the remains of an ancient wasp. The 53-million-year-old insect is called Ypresiosirex orthosemos and was almost 3 inches long.

This week, the Supreme Court of Hawai'i halted construction of a 30-meter telescope because of safety concerns, and because it is being built on ground considered sacred by Native Hawai'ians.

On Wednesday, Brazilian researchers announced the discovery of a daddy longlegs with no eyes, which lives in caves in Brazil. They named the creature Iandumoema smeagol, after the Lord of the Rings character Smeagol.

Olympic National Park in Washington was partly closed due to storms this week. Check out the incredible difference just a few days makes in these photos taken Sunday and Thursday by Olympic National Park staff.

NASA released these photos this week of young students posing with their newly autographed photos of astronaut Reid Wiseman during his visit to the Maryland Science Center in his hometown of Baltimore, Md. Wiseman explained what it was like to live on the International Space Station for 6 months, and inspired these future astronauts.

The Max Planck Institute for Ornithology's bird lab released video this week of songbirds performing amorous songs and "tap dances" for their love interests. A single step of the blue-capped cordon-bleu, a species of waxbill, takes a fifth of a second.
The European Southern Observatory in Chile just released this incredible telescope image of a horde of secret, massive galaxies that existed when our universe was in its infancy.

Our final image is a chart, but it's a doozy. In this study released by the University of Zurich, researchers determined that mice have incredibly long sperm, whereas elephants' sperm is short but plentiful. Here is their once-in-a-lifetime chart of the length of elephant versus mouse sperm, based on body size and weight. Wonders never cease.
Uz Schollaert S. (The Interdisciplinary Center), Ackerman W. (Maryland Science Center), O'Leary J. (Maryland Science Center), Culbertson B. (National Oceanic and Atmospheric Administration), and 2 more authors.
Journal of Geoscience Education | Year: 2014
Engaging the general public on climate topics and deepening their understanding of key discoveries by the Earth science community requires a collaborative approach between scientists, developers, and museum educators to converge on the most effective format. Large Science On a Sphere (SOS) displays of Earth attract attention to global data at museums worldwide, yet just looking at raw data does not generally lead to new insights by the public. Working closely with the Maryland Science Center, the EarthNow project realized the time limitations of the museum staff and audience and began creating short, narrated videos for SOS. The videos introduce recent climate science findings on a variety of topics and can be used as part of live, facilitated programs or played while SOS is in its autorun mode. To measure the effectiveness of the delivery method, we developed a survey and tested several groups that saw a video within a live show compared to groups that saw it in autorun without a live program. We also wondered whether adding a hands-on activity would enhance learning and how hearing the information while doing an activity would compare to watching and hearing the SOS show, so we tested two large groups using the activity with and without seeing Science On a Sphere. Overall survey results demonstrate the groups who saw an SOS show gained certain concepts better than the group that only heard the information while doing the activity. The live shows conferred a slight but not substantial advantage over the autorun shows. Playing short, narrated videos on SOS that include global Earth processes, such as atmospheric and oceanic circulation, seems to enhance understanding of certain concepts more than hearing the information while doing an activity.
Ongoing communication with museums and their visitors is critical for ensuring that these stories are as effective as possible and make best use of the strengths of the Science On a Sphere exhibit to enhance the public's climate literacy. © 2014 National Association of Geoscience Teachers.
Agency: NSF | Branch: Continuing grant | Program: | Phase: AISL | Award Amount: 3.00M | Year: 2010
Filmmakers Collaborative, Principal Large-Format LTD, and SK Films, Inc. are requesting funds to produce a large format, 3-D film and multi-component educational materials and activities on the annual migration of monarch butterflies, their life cycle and the web of life at select sites where they land. Project goals are to 1) raise audience understanding of the nature of scientific investigation and the open-ended nature of the scientific process, 2) enhance and extend citizen science programs to new audiences, and 3) create better awareness of monarch biology, insect ecology and the importance of habitat.
INNOVATION/STRATEGIC IMPACT: The film will be simultaneously released in both a 3-D and 2-D 15/70 format. RMC Research Corp. will conduct evaluation of the project, including a study of the comparable strengths of the 2-D and 3-D versions of the film, and assess the effectiveness of 3-D in enhancing the learning experience. RMC will also conduct a long-term evaluation of the project's citizen science programs.
COLLABORATION: This project promises a highly collaborative model of partnerships between the project team and The Smithsonian, Project Learning Tree, Monarchs in the Classroom, Monarch Watch, 4-H through the University of Kentucky Extension and the University of Florida WINGS programs, The Society for the Advancement of Chicanos and Native Americans (SACNAS), Online NewsHour, and Earth & Sky. RMC will conduct formative and summative evaluations to assess the success of project materials in communicating science and achieving the project's learning goals.
Least privilege was first put forward as a design principle by Jerry Saltzer and Mike Schroeder 40 years ago [1]. Avecto, along with many others, has championed least privilege, and our 5 reasons to love least privilege shows that it is key to mitigating attacks.
Microsoft have acknowledged this approach with User Account Control (UAC) in Windows, attempting to provide a least privilege solution with an 'all or nothing' approach using out-of-the-box rudimentary messages. A large number of Windows features for customization and basic system maintenance are locked away behind UAC, with the standard user receiving a UAC message when they try to perform these actions.
The majority of UAC messages occur when a user starts a program that requires administrative rights to perform a task. The program runs with elevated privileges for its lifetime, despite only needing them for a small set of actions, such as purging application event logs in Event Viewer.
However, some programs have been designed to separate administrative actions, running each as an independent activity that accomplishes a specific and limited function requiring elevated privileges, such as changing the system date and time. This is known as COM elevation.
Normally when a user clicks on an assigned COM elevation action, it will trigger a UAC message which requires access to an administrator account to proceed. Every COM class has a unique identifier, called a CLSID, used to launch the action. This can be seen in the details of the UAC message.
This all sounds fine, as COM elevation actions are isolated to perform a specific and limited function. However, not all features that use COM elevation are as limited as they could be! Let's take a look at the Windows Firewall.
You will notice that the Windows Firewall has a number of functions locked behind the UAC shield indicating their need for elevated privileges to perform an action. For example if you select Advanced settings you will see a UAC message for Windows Firewall with a COM elevation CLSID in the message details. After entering the credentials of an administrator account, the Windows Firewall with Advanced Security window appears.
Now remember that the purpose of this window is to perform a specific and limited function that requires elevated privileges. On the right-hand pane there is an Export Policy… item, which shows the File Save As dialog. All looks normal so far?
If you change the Save as Type to All files (*.*), you can now browse around the System32 folder and see all the Windows system files, including applications like cmd.exe. If you select Open from the right mouse button (RMB) menu, an administrator command prompt window is launched. You can now perform broad and unlimited actions on the system!
With Defendpoint, you can target specific COM elevation CLSIDs and assign privileges to the task without granting full admin rights to the user. COM-based UAC messages can also be targeted and replaced with custom messaging, where COM classes can be whitelisted and/or audited. This unique capability is reinforced by a number of granted international patents.
As an additional security measure Defendpoint will force the File Open and Save dialogs to run with the user’s standard rights, which prevents the user from tampering with protected system files or running programs with elevated privileges.
The most important step for most organisations is to remove administrator rights for standard users. Although developments in the Windows operating system have made it possible for users to run with minimal rights, the lack of granular control introduces its own set of security challenges.
[1] Saltzer, J.H. and Schroeder, M.D., 'The Protection of Information in Computer Systems,' Proceedings of the IEEE, vol. 63, no. 9 (Sept. 1975), pp. 1278-1308.
It is often necessary to transform the light output from an optical fiber into a free-space collimated beam. In principle, a simple collimation lens is sufficient for that purpose. However, the fiber end has to be firmly fixed at a distance from the lens approximately equal to the focal length. In practice, it is often convenient to do this with a fiber collimator (fiber-optic collimator). Collimators can be single-mode or multimode. Their diameters can be as small as the fiber itself, for example 125 µm, or as large as tens or hundreds of millimeters. Their basic structure, however, consists of a lens and an optical fiber. A lens can collimate the output from a fiber, or launch a collimated beam into the fiber.
Types of Lenses
Different kinds of lenses can be used in collimators: fiber lenses, ball lenses, aspherical lenses, spherical singlets and doublets, GRIN (GRaded INdex) lenses, microscope objectives, and cylindrical lenses. For standard telecom fibers, and in fact many others, one mostly uses GRIN lenses, as these are relatively cheap and small. However, they are less suitable for larger beam diameters, e.g. of more than a few millimeters. In such cases, one tends to use conventional singlet or doublet lenses, which may be of spherical or sometimes aspheric type. This is needed, for example, when a collimated beam needs to be transmitted over a large distance, such as in free-space optical communications, where a long Rayleigh length is required.
Size of the Collimated Beam
The beam radius of the obtained collimated beam depends on the circumstances. In some cases, the beam diameter is as small as the fiber diameter, e.g. 125 µm; the Rayleigh length can then be less than 1 cm. In other cases, one needs beam diameters of several millimeters or even more. For calculations, the simpler case is that of a single-mode fiber. Here, the beam radius can be calculated with fairly good accuracy using the following equation:

w_out ≈ f · θ_fiber = f · λ / (π · w_fiber)

where f is the focal length of the lens, λ is the wavelength, and w_fiber is the mode-field radius of the fiber.

This assumes that the beam profile of the fiber mode has an approximately Gaussian shape, so that we can apply the corresponding formula for the beam divergence half-angle θ_fiber. It is also assumed that the distance between the fiber end and the lens is close to the focal length f of the lens. If the distance is too small, the beam will diverge; for too large distances, it converges to a focus at some distance. It can be useful to get slightly into that latter regime, where a beam focus (with a beam diameter slightly below that at the collimator) is reached at a suitable working distance. The longer the focal length, the less critical the longitudinal positioning.

A smaller fiber mode size often leads to a larger collimated beam! Note that a smaller mode size of the fiber implies a larger beam divergence and thus a larger collimated beam for a given focal length. This also means that a shorter wavelength, which usually leads to a smaller mode size, leads to a larger output beam. This holds even more if the fiber gets into the multimode regime for sufficiently short wavelengths. For such reasons, a visible pilot beam for an infrared beam, for example, may not accurately show the size of the infrared beam. Also, the correct fiber positioning for collimation may depend on the wavelength, particularly if no achromatic lens is used.

For multimode fibers, the beam divergence at the output (and thus the collimated beam size) depends on the launch conditions, and possibly even on the condition (e.g. bending) of the fiber. Generally, the beam divergence angle will be larger than the estimate for a single-mode fiber, possibly much larger.

While GRIN lenses are perfect for small telecom devices, they are not suitable for generating large optical beams such as those used in free-space optic (FSO) communication applications, where beam size can vary from a few millimeters to tens of millimeters.
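As a quick numeric sketch of the beam-radius estimate above, here is the calculation for a standard telecom fiber; the mode-field radius and focal length are illustrative values, not vendor specifications:

```python
import math

def collimated_beam_radius(wavelength_um: float,
                           mode_field_radius_um: float,
                           focal_length_mm: float) -> float:
    """w_out ~ f * theta, with Gaussian divergence half-angle theta = lambda / (pi * w_fiber)."""
    theta = wavelength_um / (math.pi * mode_field_radius_um)  # half-angle in radians
    return focal_length_mm * theta  # collimated beam radius in mm

# e.g. 1550 nm light, mode-field radius ~ 5.2 um (MFD ~ 10.4 um), f = 11 mm
print(f"{collimated_beam_radius(1.55, 5.2, 11.0):.2f} mm")  # -> 1.04 mm
```

Note how halving the mode-field radius doubles the divergence and therefore doubles the collimated beam radius, in line with the text's observation.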
For beam size of 1 mm to 5 mm, aspherical lenses are ideal largely due to their excellent ability to correct spherical aberration. For beam sizes larger than a few millimeters, a spherical singlet or doublet may be a better choice since they are readily available and low cost.
Theoretical Approximation of the Divergence Angle
Angled fiber ends are often used to suppress back-reflections from the fiber end face into the core, i.e., to maximize the return loss. Unfortunately, the angle leads to some deflection of the output beam. Single-mode fibers are often polished at an 8-degree angle to reduce back reflection (increase return loss). The price to pay is that the beam is slightly off-center. It is possible, however, to correct this with specially designed fiber ferrules and alignment fixtures. Almost all multimode fibers are polished at a 0-degree angle because system return loss requirements are much lower.

The divergence angle is easy to approximate theoretically, using the formula below, as long as the light emerging from the fiber has a Gaussian intensity profile. This works well for single-mode fibers but will underestimate the divergence angle for multimode fibers, where the light emerging from the fiber has a non-Gaussian intensity profile.

The divergence angle (in degrees) is θ ≈ (D/f)(180/π), where D and f must be in the same units. Here θ is the divergence angle, D is the mode-field diameter (MFD), and f is the focal length of the collimator.

Example calculation: When the SF220SMA-A collimator is used to collimate 515 nm light emerging from a 460HP fiber with a mode-field diameter (D) of 3.5 µm and a focal length (f) of approximately 11.0 mm (not exact, since the design wavelength is 543 nm), the divergence angle is approximately θ ≈ (0.0035 mm / 11.0 mm) × (180/π) ≈ 0.018°. When the beam divergence angle was measured for the SF220SMA-A collimator, a 460HP fiber was used with 543 nm light. The result was a divergence angle of 0.018°.
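The worked example above can be checked with a few lines of code implementing the small-angle formula from the text:

```python
import math

def divergence_deg(mfd: float, focal_length: float) -> float:
    """theta ~ (D/f) * (180/pi); D and f must be in the same units."""
    return (mfd / focal_length) * (180.0 / math.pi)

# 460HP fiber: MFD = 3.5 um = 0.0035 mm; SF220SMA-A focal length ~ 11.0 mm
print(round(divergence_deg(0.0035, 11.0), 3))  # -> 0.018
```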
We offer four gradient index (GRIN) fiber collimators that are aligned for either 1310 nm or 1550 nm and have either FC connectorized or unterminated fibers. Our GRIN collimators feature a Ø1.8 mm clear aperture, are AR-coated to ensure low back reflection into the fiber, and are coupled to standard Corning SMF-28 single mode fibers. These graded-index (GRIN) lenses are AR coated for applications at 1300 or 1560 nm that require light to propagate through one fiber, then through a free-space optical system, and finally back into another fiber. They are also useful for coupling light from laser diodes into fibers, coupling the output of a fiber into a detector, or collimating laser light.
Our pigtailed ferrules have broadband AR coatings centered at either 1310 nm or 1550 nm and are available with either a 0 or 8 degree angled face. These pigtailed ferrules include 1.5 meters of SMF-28e fiber. | <urn:uuid:941cfd23-1d96-46e2-ba78-d1c446b92985> | CC-MAIN-2017-04 | http://www.fs.com/blog/passive-fiber-optics-fiber-optic-collimator.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932262 | 1,530 | 3.703125 | 4 |
FTTH fiber optic cable technology is the delivery of information through pulses of light over a fiber optic network directly to the end-user. A FTTH network is easier to maintain and delivers 100 times more bandwidth than coaxial, wireless, or copper networks. It has virtually unlimited capacity to bring telephone, Internet, digital television and other great services to subscribers — with high quality and reliability!
FTTH fiber cable is able to deliver a multitude of data, voice, and video services to the home more efficiently, and more securely, than traditional copper transport media. Fiber optic cables are made up of hundreds or thousands of optical fibers, which are long, thin strands of pure glass about the diameter of a human hair, and can carry data at speeds exceeding 2.5 gigabits per second (Gbps).
FTTH fiber cable replaces the standard copper wire of traditional telco and cable systems. It offers a 100% fiber optic connection, while companies using partial fiber systems still rely on copper wire to deliver signals over extended distances, leading to poor signal quality. Only a FTTH network can carry high-bandwidth signals over long distances using light, which has no interference issues and will deliver superior products.
Typical copper telephone wires carry analog signals generated by phone equipment, including fax machines. Analog technology is, by nature, a less precise signaling technology than digital technology. Though multiplexing has allowed digital signals to be transmitted across multiple channels over copper lines, FTTH fiber optic cable is superior for relaying these signals and allows for faster transfer rates and virtually limitless bandwidth. This opens the door to higher Internet speeds, streaming video, and other demanding applications.
Copper cabling is an efficient means of delivering information over very short distances. However, networks that rely on partial fiber-to-copper infrastructure are subjected to extreme bottlenecking of information due to a limited amount of available bandwidth. They are susceptible to radio frequency (RF) interference and must continuously "refresh," or strengthen, the signal to deliver it to the consumer's home. FTTH networks have virtually limitless bandwidth, which allows information to flow freely at the speed of light. Signals over fiber can travel greater distances without having to be refreshed and are not subject to RF interference.
The Internet utilizes a backbone of fiber optic cables capable of delivering incredible bandwidth. This inherent ability makes it a prime source for advancing network technologies that can be brought to the home or business. Most subscribers, however, log on to this network through copper lines with limited capacity. This creates a bottleneck for advancing technologies that increasingly require greater bandwidth. FTTH bridges this gap.
FTTH services commonly offer a fleet of plans with differing speeds that are price dependent. Fiber optic cables for FTTH help define successful communities, just as good water, power, transportation, public safety, and schools have done for decades. People can work from home, increasing personal productivity while decreasing commute times and air pollution.
The Changing Landscape - The Impact to Patients’ Privacy
Both President Bush and President Obama agree that every American should have an electronic health record by 2014. Congress agrees too and has poured $27 billion into digitizing the healthcare system. Using data instead of paper records, technology tools can analyze mountains of health information to understand what treatments work best for each of us, improve quality, facilitate research, and lower costs. Strong support for electronic health records systems and health data exchanges is bipartisan.
But the systems being funded have major, potentially fatal design flaws which are NOT being addressed by either party:
- Patients have no control over who sees or sells sensitive personal health information.
- Comprehensive, effective data security measures are not in use; 80% of health data is not even encrypted.
- Health data is held in hundreds or thousands of places we have never heard of because of hidden data flows.
- Hundreds of thousands of employees of corporations, third parties inside and outside the healthcare system, researchers, and government agencies can easily obtain and use our personal health information, from prescription records to DNA to diagnoses.
- There is no "chain of custody" for our electronic health data.
The consequences of the lack of meaningful and comprehensive privacy and security protections for sensitive health data are alarming. Over 20 million patients have been victims of health data breaches; these numbers will only increase. Millions of patients each year are victims of medical ID theft, which is much harder to discover and much more costly than ordinary ID theft. Such easy access to health data by thousands of third parties is causing an explosion of healthcare fraud (see FBI press release on $100M Armenian-American Fraud ring: http://www.fbi.gov/newyork/press-releases/2010/nyfo101310.htm). Equally alarming, this lack of privacy can cause bad health outcomes: millions of people every year avoid treatment because they know their health data is not private:
- HHS estimated that 586,000 Americans did not seek earlier cancer treatment due to privacy concerns. 65 Fed. Reg. at 82,779
- HHS estimated that 2,000,000 Americans did not seek treatment for mental illness due to privacy concerns. 65 Fed. Reg. at 82,777
- Millions of young Americans suffering from sexually transmitted diseases do not seek treatment due to privacy concerns. 65 Fed. Reg. at 82,778
- The Rand Corporation found that 150,000 soldiers suffering from PTSD do not seek treatment because of privacy concerns. "Invisible Wounds of War", The RAND Corp., p.436 (2008). Lack of privacy contributes to the highest rate of suicide among active duty soldiers in 30 years.
Public distrust in electronic health systems and the government will only deepen unless these major design flaws are addressed.
The President's Consumer Privacy Bill of Rights shows he knows that trust in the Internet and electronic systems must be assured. The same principles that will ensure online trust must also be built into the healthcare system --- starting with Principle #1:
"Consumers have a right to exercise control over what personal data companies collect from them and how they use it."
The chemistry behind the Flint Water Crisis: Corrosion of pipes, erosion of trust
By George Lane
When Flint, Michigan changed its water supply in 2014, it initiated a cascade of chemical reactions inside decades-old water pipes that caused Lead to leach into its drinking water, triggering a major public health crisis. When Flint used its own river as a water supply, drinking water contained a staggering 13,200 parts per billion (ppb) Lead, almost 900 times higher than the 15 ppb regulatory limit set by the Environmental Protection Agency (EPA). Some water samples exceeded the EPA criteria for Lead concentration in hazardous waste, 5,000 ppb.
Although Lead pipes have been used for water distribution for over two thousand years beginning with the Romans, the use of Lead pipes carrying water in the United States on a major scale began in the late 1800s, particularly in larger urban cities. By 1900, more than 70% of cities with populations greater than 30,000 used Lead-lined pipes for drinking water.
The use of Lead pipes to carry drinking water was recognized as a cause of Lead poisoning by the late 1800s in the United States. In 1890 the Massachusetts State Board of Health advised the state’s cities and towns to avoid using Lead pipes to transport drinking water. By the 1920s, many cities and towns were prohibiting or restricting their use. To combat this trend, the Lead industry carried out an effective campaign to promote the use of Lead pipes, affecting public health and delaying the replacement of Lead water pipes.
Normally water managers add chemicals to water, such as orthophosphates, to prevent corrosion. Orthophosphates bond with Lead in pipes, creating a protective coating between Lead and water. When that shield is intact, corrosive chemicals like Dissolved Oxygen (DO) can’t interact with the Lead; however, orthophosphates have to be added continually or the barrier breaks down. If the barrier does break down, DO combines with Lead atoms, oxidizing them. Oxygen takes electrons from Lead, grabs its Hydrogen protons, turning into water, and allows Lead to leach into drinking water. Once oxidized, Lead dissolves into the water instead of sticking to the pipe.
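The corrosion pathway described above can be summarized as a pair of half-reactions (a simplified sketch of the dominant reaction; real pipe-scale chemistry also involves carbonates, chlorides and other species):

```latex
\begin{align*}
\text{Oxidation:} \quad & \mathrm{Pb \longrightarrow Pb^{2+} + 2\,e^{-}} \\
\text{Reduction:} \quad & \mathrm{O_2 + 4\,H^{+} + 4\,e^{-} \longrightarrow 2\,H_2O} \\
\text{Overall:} \quad & \mathrm{2\,Pb + O_2 + 4\,H^{+} \longrightarrow 2\,Pb^{2+} + 2\,H_2O}
\end{align*}
```

The soluble Pb²⁺ ions are what end up in tap water; an intact orthophosphate coating blocks the electron transfer in the first half-reaction.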
Flint’s water treatment plant did not add orthophosphates, allowing the pipes to corrode, and Lead quickly contaminated the drinking water. Additionally, Flint River water had high levels of chlorides, which accelerate corrosion. There were two other sources of chloride: Ferric chloride used in Chlorine disinfection of water and road salt applied during tough Michigan winters. Switching from Detroit’s Lake Huron to Flint River water created a perfect storm for Lead leaching into Flint drinking water.
A complex brew of acids, salts, Chlorine and many other chemicals was involved in oxidizing Flint's metal pipes and releasing Lead. High levels of Lead in Flint drinking water weren't reported to the public for 18 months; however, the corrosion happened quickly, especially in the warmer summer months. Without effective treatment to control corrosion, Flint's water leached high levels of Lead from the city's pipes into the drinking water. Following the switch, E. coli bacteria were also found in the water.
To combat the E. coli, extra Chlorine was added as a disinfectant. Ferric chloride was also added as a coagulant to remove organic matter from the water, initiating a domino effect of chemical causes and effects. Flint's water quality problems were also caused by corrosion in both the Lead and Iron pipes that distribute water. When the city began using the Flint River as its water source, the water's ability to corrode those pipes wasn't adequately controlled. This led to high Lead levels, rust-colored tap water, and the growth of pathogenic microbes.
When Flint changed its water supply, the city didn’t adequately control corrosion. Flint isn’t the only city susceptible to these problems. The pipes in its old distribution system had seen the same water for decades, similar to many other cities. Switching water supplies changed the chemistry of the water flowing through those pipes.
When a switch like this happens, the chemistry in the water system moves toward a new equilibrium. In Flint the change was catastrophic. Flint was getting its water from the Detroit Water & Sewerage Department, which would draw water from Lake Huron and then treat it before sending it to Flint.
To lower the city’s water costs, in 2013 Flint officials decided to take water from another source which was building its own pipeline from the lake. Shortly after that, Detroit told Flint it would terminate their original long-term water agreement within a year and offered to negotiate a new, short-term agreement. Flint declined the offer. While waiting for the new pipeline to be finished, Flint began taking water from the Flint River and treating it at the city plant.
Problems with the city's tap water started the summer after the switch in 2014. First, residents noticed foul-tasting, reddish water coming out of their taps. In August, the city issued alerts about E. coli contamination and told people to boil the water before using it. A General Motors plant in Flint stopped using the water because it was corroding steel parts.
In early 2015, elevated Lead levels were detected at Flint's University of Michigan campus. Researchers sampled water from 252 Flint homes and reported the results (www.flintwaterstudy.org). Hurley Children's Hospital in Flint released data showing that since the water change, the number of Flint children with elevated levels of Lead in their blood had increased from 2.4% to 4.9%.
Lead is neurotoxic, causing behavioral problems and decreased intelligence. The blood-brain barrier limits the passage of ions, but because it has not fully developed in children, they can absorb from 40% to 50% of water-soluble Lead, compared with 3% to 10% for adults.
Manufacturing Breakthrough Blog
Thursday July 9, 2015
At the end of my last post, I asked you to think about the following scenario. I explained that we want to improve the quality of our product that we manufacture. I also explained that our current defect rate was at nine percent and that we wanted to get it below five percent. My question to you was: which type of logic (i.e. necessity or sufficiency) would you use to develop the plan?
If you answered by saying both types of logic, you would be partially right. However, as you will see in future posts, using sufficiency-based logic involves putting together a series of if/then statements to determine the core problem that is negatively impacting your organization and then developing a solution to that problem. That is a good pathway to improvement.
In my opinion, necessity-based logic is the better pathway because you will identify all of the "things" you will need to develop your actual improvement plan. You will use the syntax, "in order to have A we must have B" and, as you will see, there will be multiple "Bs." This will become more apparent when we discuss the Goal Tree in a future post.
The Thinking Process Diagrams
In my last post, I identified six logic diagrams that make up the Thinking Processes. The six, along with the type of logic each one uses, are:
- Current Reality Tree—Sufficiency Logic
- Evaporating Cloud (EC)—Necessity Logic
- Future Reality Tree (FRT)—Sufficiency Logic
- Prerequisite Tree (PRT)—Necessity Logic
- Transition Tree—Sufficiency Logic
- Goal Tree—Necessity Logic
In today’s post we will discuss the intent of these diagrams, or what we hope to achieve by using them, and the “rules” for using them. These logic diagrams are intended to answer the three critical questions I have discussed in previous posts:
- What to change?
- What to change to?
- How to cause the change to happen?
You may be wondering which of these logic diagrams answers which question(s). The following table summarizes when to use each one:
| Question | Sufficiency logic ("If A then B") | Necessity logic ("In order to have A we must have B") |
| --- | --- | --- |
| What to change? | Current Reality Tree | |
| What to change to? | Future Reality Tree | Evaporating Cloud |
| How to cause the change to happen? | Transition Tree | Prerequisite Tree |
The Rules of Logic
For sufficiency logic, there are eight "rules" that must be applied to each of the three trees that use sufficiency-based logic. These rules are referred to as the Categories of Legitimate Reservation (CLR) and are intended to test, and either validate or invalidate, the cause-and-effect connections that we develop. These eight rules and a brief explanation of each one are as follows:
- Clarity reservation—The complete understanding of a word, idea or causal connection. In other words, requesting additional explanation because you don’t fully understand the cause and effect relationship or the individual entities as stated.
- Entity existence reservation—The verifiability of the existence of a stated fact or statement. In other words, you are questioning the actual existence of the cause or effect entity.
- Causality existence reservation—The direct connection between a proposed cause and the effect. Questioning the existence of the causal link between the cause and the effect.
- Cause sufficiency reservation—The complete and unequivocal accountability for all stated causes that supposedly produce the effect. In other words, questioning whether the stated cause, by itself, is sufficient to produce the observed effect.
- Additional cause reservation—The existence of a completely separate and independent cause of a particular effect. Explaining that an additional non-trivial cause must exist to explain the observed effect. In other words, neither cause by itself can account for the effect.
- Cause/effect reversal reservation—The misalignment of cause and effect. That is, the effect is actually the cause.
- Predicted effect existence reservation—An additional expected and verifiable effect coming from a specific cause. Using another effect to show that the hypothesized cause does not result in the initial effect.
- Tautology—Circular logic where the existence of an effect is offered as rationale for a stated cause. Being redundant in stating the cause and effect relationship.
The Categories of Legitimate Reservation are essentially proof-reading tools to use when constructing the logical cause and effect trees. Remember these CLRs are only used on the trees using sufficiency-based logic.
Cause & Effect Relationships
Undesirable Effects and Core Problems
Over the years, one thing that I typically see when I go into a manufacturing company is firefighting. And when I question the leadership on whether this is a new problem or not, they typically tell me that they always seem to be solving the same problem over and over again. This is symptomatic of a company that is treating the symptoms of a problem rather than identifying the underlying core issues or drivers of the problem. These same companies tend to repair or improve the symptoms of the problem they haven’t identified.
What these companies desperately need is a systematic way to go below the symptoms to discover the actual core problem. This is exactly the purpose of the Current Reality Tree (CRT). So the starting point for this logic tree is to list the symptoms we see in our current reality. We’re not looking for petty gripes or things that we observed years ago, we’re looking for symptoms that we’re seeing today. The fact of the matter is, most of the symptoms we are seeing are coming directly from a single core problem or a core conflict. If we identify and remove the core problem (or core conflict), we may very well be able to remove most of the symptoms as well. In TOC terminology, these negative symptoms we are seeing are referred to as undesirable effects (UDEs) and they are linked through cause and effect.
The Current Reality Tree
A current reality tree is a graphical representation of an underlying core problem and the symptoms (UDEs) that arise from it. It maps out the cause and effect sequence from the core problem to the UDEs. The great thing about the CRT is that if you are able to successfully eliminate the core problem, then most if not all of the UDEs will disappear. Operationally we work backwards from the apparent undesirable effects or symptoms to uncover the underlying core cause.
In the CRT below there are eight negative symptoms, or UDEs, with a single core problem identified. Each of the UDEs is linked through cause and effect and is read as: "If the core problem exists, then UDEs 4, 5, 6 and 7 will exist." If UDEs 5 and 7 exist, then UDE 8 will be present. The ellipse is the symbol for the logical "and," which in this case means that both UDEs 5 and 7 must exist in order for UDE 8 to be present. If UDE 8 is present, then UDE 3 will also be present. Likewise, if UDE 6 is present then UDE 1 will occur, and if UDE 4 is present then UDE 2 will be observed. So as you can see, all of the UDEs are linked to a single core problem, and if it is removed, then all of the UDEs will go away.
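The cause-and-effect structure described above is easy to model: treat each entity as a node whose presence follows from its causes, with the ellipse acting as a logical AND. A toy sketch (the entity names mirror the UDE numbering used here; this is an illustration, not a TOC software tool):

```python
# Toy Current Reality Tree: each effect lists its cause-groups, and a
# group (the "ellipse") requires ALL of its members to be present.
causes = {
    "UDE4": [["core"]], "UDE5": [["core"]], "UDE6": [["core"]], "UDE7": [["core"]],
    "UDE8": [["UDE5", "UDE7"]],   # ellipse: both UDE5 AND UDE7 required
    "UDE3": [["UDE8"]],
    "UDE1": [["UDE6"]],
    "UDE2": [["UDE4"]],
}

def present(entity, facts):
    """An effect is present if any one of its cause-groups is fully present."""
    if entity in facts:
        return True
    groups = causes.get(entity, [])
    return any(all(present(c, facts) for c in g) for g in groups)

# With the core problem present, every UDE follows:
print(all(present(u, {"core"}) for u in causes))   # → True
# Remove the core problem and the whole tree of symptoms collapses:
print(any(present(u, set()) for u in causes))      # → False
```

Removing the single root node makes every downstream symptom evaluate to absent, which is exactly the payoff the CRT promises.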
A Question to Ponder
In my next post we’ll look at a real example of a Current Reality Tree and discuss how to construct it in more detail. As always, if you have any questions or comments about any of my posts, leave a message and I will respond.
Until next time.
Theoretically, 11g provides a data rate of 54 Megabits (Mbps), nearly five times the rate of the popular 11b standard. But, as they say on TV, "actual results may vary." In many cases, 11g is no faster than using 11b. So what is the difference between the different wireless standards and how can you maximize the actual results?
b Before a
Three of the 802.11 specifications relate to transmission frequencies and data rates: 802.11a, 802.11b and 802.11g. The IEEE approved both the 11a and 11b standards in September 1999, but 11b devices were easier to build so these beat 11a to market by about two years. As a result, 11b is far more prevalent.
If you are in a hotel, airport or Starbucks with wireless access, it will be 11b. Most enterprises also use 11b, though some later deployments have started using 11a. Both these standards have their advantages and disadvantages. 11b utilizes the 2.4 gigahertz (GHz) band and has a top transmission rate of 11 Mbps. It has three non-overlapping channels.
11a offers several advantages over 11b. To begin with, it operates at 5GHz, which makes it less susceptible to interference. It also utilizes a different transmission method, orthogonal frequency division multiplexing (OFDM), which passes data simultaneously along multiple sub-channels. Doing this results in a higher potential throughput rate of 54Mbps. To top it off, 11a has eight different non-overlapping channels to choose from, rather than just three.
Debunking 6 Myths about SSL VPN Security
Think VPN makes for secure data access and transfer? Think again.
Even though the OpenSSL Toolkit has been hacked 44 times since 2002, VPN solutions continue to rely heavily on this technology. In fact, some of the most high-profile security breaches have involved SSL VPNs. This raises the question: are users not using the technology correctly, or is SSL simply not as good as all the marketing hype makes it out to be?
This year alone, several incidents have surfaced that call into question the security of SSL platforms, such as Comodo issuing nine fraudulent certificates affecting several domains and the recent breach at Dutch digital certificate authority DigiNotar, among others. Clearly, confusion exists about the security capabilities of SSL. Ultimately, this misinformation undermines the technology and lessens its appeal in scenarios where SSL is an ideal solution. This article puts the most persistent SSL myths to rest and clarifies the technology’s capabilities -- and its limitations.
Myth #1: Using trusted certificates from a certificate authority (CA) is airtight.
It's this assumption that got DigiNotar and Comodo into some hot water this year. In reality, certificates -- even those from a CA -- are certainly not airtight. Certificates used to authenticate an SSL connection allow for the certain identification of each party and for the negotiation of an encrypted channel for communication. The certificates themselves are files whose alteration can be easily detected and whose origin is verified by a trusted certificate authority (such as Comodo or VeriSign).
Web application developers use this trusted-certificate model extensively when building their applications. The problem is that the CA can be spoofed. Electronic Frontier Foundation staff technologist Peter Eckersley recently published an in-depth analysis demonstrating that Iranian hackers acquired fraudulent SSL certificates for Google, Yahoo, Mozilla, and others by spoofing Comodo. CAs sell digitally signed certificates that browsers use to verify their network connections, but with these spoofed certificates, the hackers could undetectably impersonate Yahoo and Google -- allowing them to read e-mail even if it was being read over a secure connection. The Mozilla certificate would allow them to slip malicious spyware onto the computer of anyone installing a Firefox plug-in.
HTTPS and other SSL-using protocols (secure SMTP, POP, IMAP, Jabber, and many others that build on SSL) still offer protection against casual snoopers. They’ll protect against the use of Firesheep in a hipster café just fine. However, the trust and security promises implicit in the use of SSL -- and which many people depend on -- are promises that CAs simply cannot keep.
Myth #2: Sensitive information transfer via SSL sessions is secure.
Companies often use SSL to secure sensitive information transfers from customers or partners, but vulnerabilities in this approach are frequently exposed. For example, a recent attack targeted CitiGroup's 21 million customers and achieved a 1 percent success rate. This might seem low, but remember that it translates to 210,000 compromised users.
Even worse, the CitiGroup breach wasn't an isolated case. Swiss researchers recently published a memo describing a way to gather information about the data transmitted over an SSL channel by exploiting a vulnerability in implementations of block ciphers such as AES. It's worth noting that AES was standardized by the U.S. National Institute of Standards and Technology (NIST) and is widely accepted as one of the strongest forms of encryption available. The memo, however, pointed out that in certain circumstances, it's possible to decrypt some of the data in the messages, including encrypted passwords. This vulnerability is linked to the way error handling is implemented in applications that use the cipher-block chaining (CBC) mode, such as AES in SSL. One of the best ways to avoid this pitfall is to never use the same key stream to encrypt two different documents.
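The danger of key-stream reuse is easy to demonstrate: XOR-ing two ciphertexts produced with the same key stream cancels the key entirely, leaking the XOR of the two plaintexts. A toy sketch (the random bytes stand in for a stream/CTR cipher's key-stream output):

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two byte strings, truncating to the shorter length
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)               # reused key stream (the mistake)
msg1 = b"Transfer $100 to Alice."
msg2 = b"Transfer $999 to Mallory"
ct1 = xor_bytes(msg1, keystream)
ct2 = xor_bytes(msg2, keystream)

# An eavesdropper who captures both ciphertexts recovers msg1 XOR msg2:
leaked = xor_bytes(ct1, ct2)
assert leaked == xor_bytes(msg1, msg2)   # key stream has cancelled out

# If one plaintext is known or guessed, the other falls out immediately:
recovered = xor_bytes(leaked, msg1)
print(recovered)  # → b'Transfer $999 to Mallor' (up to the shorter length)
```

No cipher is broken here; the scheme fails purely because the same key stream encrypted two messages.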
The cipher-block chaining also exhibits well-known weaknesses that can be exploited to break SSL communication. Just how easy it is to crack SSL/TLS was demonstrated recently by two researchers, Thai Duong and Juliano Rizzo. They demoed a straightforward attack against SSL/TLS using a Java Applet to decrypt -- or even take over -- a SSL/TLS secured session. The man-in-the-middle (MITM) access required for this attack is typically obtained either from network insiders (and the majority of network attacks are attributed to insiders) or by intruders breaking into weakly secured wireless LANs, for example, from a parking lot or company lobby. The attack only pertains to TLS 1.0 which is still the most prevalent version in all common browsers -- a fact that, again, outlines the importance of effectively controlling the client-side of VPN communications (for example, by deploying managed VPN clients).
Of course, there are numerous ways an attacker can mount a successful attack against the Web browser -- too many to name in this article. If you’re interested in more details, the Open Web Application Security Project (OWASP) is a good resource.
Myth #3: HTTPS is a secure pipe.
In the CitiGroup breach, the security of thousands of customers was compromised because of the faulty assumption that they were using a secure pipe. People assume if a pipe is connected directly to the Internet through the Internet service provider’s (ISP) network, it must be safe -- especially in conjunction with the Windows firewall and anti-virus software typically used to connect (or pipe) to the Internet.
However, IPv6 allocates addresses that are theoretically all routable through the Internet. This might expose someone to a threat, particularly if the firewall isn’t set up correctly. It is also worth noting that IPv6 tunneling allows a path through IPv4 firewalls. With this, attackers might be able to penetrate a network if the IPv6 firewall is not set up properly. Of course, this particular nuance of IPv6 tunneling is not obvious unless you have an advanced network engineering background.
Myth #4: One-way certificate authentication of a SOA Web service is secure because it uses HTTPS.
SOA's simplicity lies in its use of descriptor-based definitions of application transactions that can be articulated directly from a business process into a service. Because SOA uses Web-based technology, it is convenient to use SSL as the mechanism to secure user sessions. Yet SSL can be used to tunnel any application-level protocol that would otherwise run on top of TCP in the communications protocol stack. The most common use nowadays is to secure HTTP communication as HTTPS, in which case the user's browser is not authenticated; only the server side is authenticated by SSL. This is known as one-way SSL authentication. Yet there are known man-in-the-middle (MITM) attacks that have been successful against this authentication scheme for at least 10 years.
Myth #5: Two-way certificate exchange between a SOA Web service and a client can always be trusted.
SSL achieves its security by using certificates to authenticate each side of a connection made between two parties: the Web server and the client (usually a Web browser), both of which are based on public key cryptography. The SSL protocol assumes that if a public key can be used to decrypt a piece of information, it's all but certain that the information was originally encrypted with the corresponding private key.
When initiating a two-way SSL session, the client will check that the SOA Web service certificate is valid and signed by a trusted entity. The server running the Web service publishes a certificate -- a little chunk of data that includes a company name, a public key, and some other bits and pieces -- and when the client connects to the server, the client sends the server some information encrypted using the public key from the certificate. The server then decrypts this using its private key. Once the connection is established, all information during that session is encrypted with this information.
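The public/private key relationship behind this exchange can be illustrated with textbook RSA on toy numbers (illustrative only; real keys are 2048 bits or more, real systems add padding, and `pow(e, -1, phi)` needs Python 3.8+):

```python
# Toy RSA: whatever one key encrypts, only the matching key recovers.
p, q = 61, 53
n = p * q                     # modulus 3233, part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent 2753 (modular inverse of e)

msg = 65
cipher = pow(msg, e, n)       # anyone can encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg   # only the private key (d, n) decrypts it

# The reverse direction is the basis of signatures: sign with d, verify with e.
sig = pow(msg, d, n)
assert pow(sig, e, n) == msg
print("round trips OK")
```

The client's handshake check relies on exactly this asymmetry: only the holder of the private key can recover what was encrypted under the certificate's public key.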
Since only the server knows the private key -- and hence, only the server can decrypt the information encrypted with the public key -- this allows the client to prove that it is communicating with the rightful owner of the certificate. Herein also lies the flaw.
To defeat this setup, the MITM just has to do a bit more work and create its own certificate with a private / public key pair and sit between the client and server. Thus, it would be acting as a server to the client and a client to the server -- and listening in on everything sent between the two.
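In practice, a server opts into the two-way model by demanding a client certificate. A minimal server-side sketch with Python's ssl module (the file paths are illustrative placeholders, so the key-loading calls are shown commented out to keep the fragment self-contained):

```python
import ssl

# Two-way (mutual) TLS: the server refuses clients that cannot present
# a certificate signed by a CA it trusts.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # avoid TLS 1.0 (BEAST-era) issues
ctx.verify_mode = ssl.CERT_REQUIRED           # demand and verify a client cert

# In a real deployment you would also load key material, e.g.:
# ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
# ctx.load_verify_locations(cafile="trusted_client_ca.pem")

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```

Note that requiring client certificates raises the bar for a MITM, but it does not help if the attacker can obtain a certificate the server's trust store accepts, which is why the CA weaknesses discussed under Myth #1 still matter here.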
Bringing Clarity to SSL
Ultimately, these myths have contributed to staggering vulnerabilities that have undermined people’s trust and confidence in all network security protocols. The only way forward is to untangle SSL’s capabilities from its limitations -- without the marketing haze and distortions.
This article is based on a whitepaper, Debunking the Myths of SSL VPN Security, written by Rainer Enders and Clint Stewart. You can contact Rainer Enders at firstname.lastname@example.org
When users access your web site through a web server, they do not see the same folders and files that you would see in Explorer. Instead, they see "virtual" folders with names like "/Scripts" or "/Docs" that you map to folders on your hard disk when you set up your web server.
Virtual folders (also called virtual roots, virtual paths, or virtual directories) are the names of folders that you decide to publish on your site. For example, if your site is www.sample.com and you decide to publish c:\website\docs on your hard disk as "/Docs", then users accessing that folder would go to http://www.sample.com/Docs. The local path is c:\website\docs, and the virtual path is /Docs.
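Conceptually, the web server keeps a table from virtual paths to local folders and consults it on every request. A small illustrative sketch (not IIS code; the paths mirror the /Docs example above):

```python
from pathlib import PureWindowsPath

# Virtual-directory table: published virtual path → local folder on disk
virtual_dirs = {
    "/Docs": PureWindowsPath(r"c:\website\docs"),
    "/Scripts": PureWindowsPath(r"c:\website\scripts"),
}

def resolve(url_path: str) -> PureWindowsPath:
    """Translate a requested virtual path into the local file-system path."""
    for vroot, local in virtual_dirs.items():
        if url_path == vroot or url_path.startswith(vroot + "/"):
            rest = url_path[len(vroot):].lstrip("/")
            return local / rest if rest else local
    raise FileNotFoundError(url_path)

print(resolve("/Docs/manual.html"))  # → c:\website\docs\manual.html
```

A request for http://www.sample.com/Docs/manual.html would thus be served from c:\website\docs\manual.html, while the local folder layout stays hidden from visitors.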
To set up a virtual folder using Internet Information Server, open Internet Service Manager and click the right mouse button on the web site entry. (It will usually be called "Default Web Site" unless you have renamed it.) Select New > Virtual Directory and enter the name and location of the virtual directory that you want to create.
Note: After you have created a new virtual directory, run the dtSearch Web Setup program again so that dtSearch Web will know about the new directories. (You do not have to do anything in dtSearch Web Setup -- just open the program and then close it again.)
When it comes to combining biometrics and smart devices, the future is now. Here is a look at three technological "smart" leaps that happened in 2012.
Smart gaming touchscreen recognizes your touch
Even blindfolded, a lover would recognize you by the feel of your touch, but what do you think about a touchscreen that knows you by the feel of your touch? Some touchscreens are shared among different users, but can't distinguish one person's finger from another. The Disney Research Institute in Pittsburgh has changed that. The researchers built a system that uses "capacitive fingerprinting" and weak electrical currents to distinguish between users. Eventually, the Disney researchers believe one tablet's touchscreen could be used by multiple people at the same time, like for gaming, and could also be capable of differentiating between the gamers.
NewScientist reported, "The system works by sending multiple frequencies of a weak electrical current through a user's finger when they first touch a device. Different frequencies take different paths to the ground through the human body, and the team's prototype measures the path each frequency takes, building up an electrical profile that is unique to the user (see video). Each user's interaction with the touchscreen is then assigned to their profile. The system builds on Disney's Touche system, which lets everyday objects detect touch gestures."
A 'spy' underfoot via smart 'magic' carpet
Help me I've fallen and I can't get up? You probably haven't ever stopped to think about your carpet spying on you, or on anyone else for that matter, but now there's 'smart' carpet thanks to researchers at the University of Manchester's Photon Science Institute. This 'magic carpet' has optical fibers in the underlay that distort when someone walks over it; sensors at the carpet's edge send signals to a computer. Footsteps can be analyzed and people can even be identified by their walking patterns.
The 'smart carpet' could not only detect a stumble or fall and sound an alarm, but it could analyze footsteps over time for any subtle changes in gait that might indicate mobility problems. It could be used by physical therapists to monitor patients. The researchers say it could even be used to as an intruder alert system. If it detected walking patterns of unknown individuals, and didn't recognize the footsteps, then the carpet could call the cops for help and be ready to identify the burglars' shoe types. Those sensors in the carpet could also "provide early warning of chemical spills" or fires.
Musical Heart listens and learns
Music and exercise lovers, the University of Virginia has invented something just for you: a system that will "select tunes that push your pulse into the optimum range for the kind of workout you want." Music can pump you up or calm you down, and a new app can learn what works best for you and then choose the right exercise music for your workout. Researchers "embedded a microphone into a set of headphones that listens to the throb of arteries in your ear. That data, as well as activity levels gathered using an accelerometer, is sent over the internet to a recommendation engine which chooses the next song based on the user's current and desired heart rate."
Researchers said the Musical Heart system may cost about $20. It listens and learns. "An app selects tunes that optimize the heart rate of an individual user based on a given activity, whether running, walking or relaxing – playing fast-paced music for hard workouts, and slowing the beat for cool-downs. An algorithm refines the music selection process of the system by storing heart rate data and calculating the effects of selected music on the rate. Over time, it improves music selections to optimize the user’s heart rate."
If you don't have a housekeeper to help out, how about smart socks?
Now, to the downright bizarre innovation for the person who has everything. How smart is too smart when it comes to socks? BlackSocks.com created "the smartest socks in the world." These 'smart' socks have RFID and NFC tags and come with a scanner and an iPhone app so you can sort your socks. You can tell how often you wash them and which sock should be paired with which. VentureBeat reported, "The socks themselves cost $189 for a 10-pack and the scanner, and you can order them online. The app, which you can also use to determine the blackness of your black socks, is free from Apple's app store."
Expect to see more innovation like this in the future as the Internet of Things picks up steam.
Compact Flash: A common type of flash memory card that incorporates a controller and is about the size of a matchbook. There are two types of cards, Type I and Type II. They vary only in their thickness, with Type I being slightly thinner. A CompactFlash memory card can contain either flash memory or a miniature hard drive. The flash memory type is more prevalent. These cards are most commonly used in digital cameras and PDAs.
Memory Stick: Sony's proprietary technology for capturing, transferring, and sharing digital images, digital photos, and PC files. Used in products such as digital cameras and PDAs. (See Memory Stick Information below or visit www.memorystick.com for more information.)
MultiMediaCard: This flash memory card is about the size of a postage stamp. Initially used in the mobile phone and pager markets, today they are commonly used in digital cameras and MP3 players. Rapidly being replaced by Secure Digital.
NAND Flash (Flash Memory Cards): Solid-state removable memory that does not require power to retain information. These cards are small and reliable. Cards read and write in high-speed serial mode, handling data in small block sizes.
NOR Flash (Embedded Memory): Used in devices like cell phones and PDAs, NOR Flash provides high-speed random-access capabilities, like being able to read and write data in specific locations in the memory. Can retrieve data as small as a byte.
Secure Digital: Introduced as a second-generation derivative of the MultiMediaCard standard (offers backward-compatibility with MultiMediaCards). Secure Digital includes the addition of cryptographic security protection for copyrighted data/music and a fourfold increase in data transfer rates. These cards are most commonly used in digital cameras and PDAs.
SmartMedia: The cards usually incorporate a single flash chip with no controller. Thus, they depend on the host controller to manage all memory reads and writes.
USB Flash Drive: (Also known as: a pen drive, jump drive, thumb drive or key drive) A USB Flash Drive can be used with any computer that has a USB port. Within a few seconds of plugging it in to the USB port, a new drive appears on your desktop. They are very simple and easy to use. They have solid state storage, good transfer speeds, durability, portability, and expected data retention of ten years. With all of these features they can replace the floppy, the Zip disk, and the CD-R/RW all in one small product.
MEMORY STICK INFORMATION

Memory Stick: An IC recording medium that uses flash memory. Since it records various types of digital data, it can be used for a broad range of applications.

Memory Stick PRO: An advanced media format that incorporates various capabilities and expandability, such as high capacity, high-speed technology and data protection technology. These advanced technologies, working in concert with optimized Memory Stick PRO format-compatible devices, enable various applications including real-time recording and playback of high-resolution moving images. The combination of this media format and compatible devices brings a new level of high-quality digital playback and capture, enabling users to view, listen to and enjoy incredibly rich entertainment. The Memory Stick PRO format offers Memory Stick PRO media and Memory Stick PRO Duo media.

Memory Stick Duo: A smaller-sized Memory Stick media designed for ultra-compact and wearable devices. By using the media with a dedicated adapter, Memory Stick Duo can be used in the standard-size media slots of other Memory Stick compatible products.

Memory Stick PRO Duo: A compact media designed for mobile applications in the broadband era. Provides the expandability and new capabilities to meet the needs of next-generation applications. By attaching an adapter, it can also be used in products compatible with Memory Stick PRO media.
IBM taps the power of its World Community Grid to help fight tuberculosis by accessing surplus compute cycles of users around the world.
IBM's World Community Grid has launched the "Help Stop TB" project, an effort to stamp out tuberculosis, led by the University of Nottingham in the United Kingdom.

The aim of the project is to help science better understand the TB bacterium, so that more effective treatments can be developed. Tuberculosis is caused by infection with a bacterium known as Mycobacterium tuberculosis (M. tb).
Started in 2004, World Community Grid is a philanthropic initiative of IBM Corporate Citizenship, the corporate social responsibility and philanthropy division of IBM.
The World Community Grid enables anyone with a computer, Android smartphone or tablet to donate their unused computing power to advance scientific research on topics related to health, poverty and sustainability. Nearly 750,000 individuals and 470 organizations from 80 countries have helped World Community Grid on 26 research projects to date, including trying to find more effective treatments for cancer, HIV/AIDS and neglected tropical diseases.
The grid is hosted on IBM's SoftLayer cloud technology, and by tapping into the surplus computing cycles of devices all over the world, it provides power equivalent to the fastest supercomputers around, IBM said.
"By enlisting the help of World Community Grid volunteers, we plan to simulate different variations of the mycolic acid structures within the cell wall of M. tb to understand how these variations impact the functioning of the bacterium," said Dr. Anna Croft, lead researcher of the Help Stop TB project and associate professor in the Department of Chemical and Environmental Engineering at the University of Nottingham, in a post on the World Community Grid blog.
"This will help us develop a more complete and cohesive model of the cell wall, and better understand the role these mycolic acids play in protecting the TB bacterium," Croft continued. "This basic research will in turn help scientists develop treatments to attack the disease's natural defenses. We would not be able to undertake the necessary big data approach to understand the structure of these mycolic acids without World Community Grid's computational power. With access to this power, we can observe many different mycolic acid structure models instead of just a few."
Tuberculosis is one of the biggest global killers, Croft said. In 2014, there were 9.6 million newly diagnosed cases and more than 1.5 million people who died from the disease, she noted. More than 1 million of these new cases, and 140,000 deaths, were estimated for children. The World Health Organization has declared TB to be the world's deadliest infectious disease, along with HIV, Croft added.
According to Croft, the tuberculosis bacterium has a coating that shields it from many drugs and the patient's immune system.
"Scientists have learned that M. tb, which causes TB, has a highly unusual cell wall made up of mycolic acids, which protects it from incoming drugs and from a person's own immune system," she said. Mycolic acids are long fatty acids in the cell walls of certain bacteria.
"Bacterial resistance against the drugs available to treat TB is on the increase throughout the world, and is making TB treatment more challenging," Croft said. "This resistance typically develops when patients don't complete their long courses of treatment, which can take from six months to two years, giving the bacteria an opportunity to evolve resistance to the drugs that were used."
Volunteers can help stop TB by joining World Community Grid.
A spider that survived more than three months aboard the International Space Station has died less than a week into its new life as a celebrity. Nefertiti, a jumping red-backed spider, breathed her last after just four days of living in the Smithsonian's National Museum of Natural History. I was going to write that she "spun her last web," but jumping red-backed spiders actually don't make webs. Rather, as NASA explains, jumping spiders "hunt using their excellent vision to track and stalk prey, jumping and striking with a lethal bite -- similar to cats hunting mice." The point of the experiment, conceived by an Egyptian high school student who won a global YouTube Space Lab contest, was to see if jumping spiders could successfully hunt prey in microgravity. Nefertiti, at least, showed she could, managing to kill and devour fruit flies placed in her sealed environment on the space station. Nefertiti was aboard the space station from July to October. Here's the Smithsonian announcement:
It is with sadness that the Smithsonian’s National Museum of Natural History announces the death of Nefertiti, the “Spidernaut.” “Neffi” was introduced to the public Thursday, Nov. 29, after traveling in space on a 100-day, 42-million-mile expedition en route to and aboard the International Space Station. She was there to take part in a student-initiated experiment on microgravity.
This morning, before museum hours, a member of the Insect Zoo staff discovered Neffi had died of natural causes. Neffi lived for 10 months. The lifespan of the species, Phidippus johnsoni, can typically reach up to 1 year. The loss of this special animal that inspired so many imaginations will be felt throughout the museum community. The body of Neffi will be added to the museum's collection of specimens, where she will continue to contribute to the understanding of spiders.
Guess that rules out a ceremony at Arlington National Cemetery.
Two years after its launch, NASA's Juno spacecraft is halfway to Jupiter.
NASA reported Monday afternoon that Juno hit the half-way milestone at 8:25 a.m. ET when it had traveled 879,733,760 miles. At that moment, it had the exact same number of miles to go.
"The team is looking forward, preparing for the day we enter orbit around the most massive planet in our solar system," said Juno Principal Investigator Scott Bolton in a written statement.
Juno is scheduled to arrive at Jupiter on July 4, 2016 at 10:29 p.m. ET.
The four-ton spacecraft was launched on Aug. 5, 2011 from the Cape Canaveral Air Force Station in Florida on top of an Atlas V rocket.
Juno is equipped with three 34-foot-long solar arrays, along with a high-gain antenna affixed to its middle, making it look something like a windmill. The solar arrays will be Juno's only power source, which is a first for a NASA spacecraft traveling beyond the asteroid belt between Mars and Jupiter.
NASA scientists hope that information sent back from Jupiter will confirm their theories about how the solar system was formed. However, they also acknowledged it could change everything they thought they knew.
Once in orbit around Jupiter, the spacecraft should circle the planet 33 times, from pole to pole, and use its eight scientific instruments to probe beneath the gas giant's obscuring cloud cover, according to NASA.
Juno's science team hopes to learn about the planet's origins, structure, atmosphere and magnetosphere. Juno also is expected to investigate whether Jupiter has a solid core.
Jupiter, the largest planet in the solar system, is the fifth planet from the Sun. It is a gas giant -- two and a half times the mass of all the other planets in the solar system combined.
Just last week, NASA noted that it's considering a robotic mission to Jupiter in the hunt for possible life on one of its largest moons - Europa.
Europa's surface is believed to be composed mainly of water-based ice, and NASA says there is evidence that the icy surface may be covering an ocean of water or slush. Europa could have twice as much water as there is on Earth.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
GDG stands for Generation Data Group, which is used to provide a functional and chronological relationship among the generations of a dataset.

In simple words, it is used to take backups over time.

Suppose you are executing a program and writing its output to a simple flat file (a PS file). If you need to run the program again after some days, you can either write the output to the same file or to a different one. If you write to the same PS file, your previous data will be lost because the file is overwritten; if you write to a different file, you have to create a new PS file each time. So if you execute your program 50 times, you would have to create 50 separate PS files in order not to lose any information.

When you use a GDG to store the output, you do not need to create a new dataset for each run: you create the GDG only once, and a new generation is created automatically every time the program executes. A GDG can contain a maximum of 255 generations.

If ID.GDG.FIN is the GDG, then in your JCL you just need to reference it with (+1) to create the next generation, i.e. ID.GDG.FIN(+1). When you open that generation, its actual name looks like ID.GDG.FIN.G0001V00.
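To illustrate, the GDG base can be defined once with IDCAMS, after which each job that writes to (+1) catalogs a new generation. This is only a sketch: the step names, space and DCB attributes are example values, and the LIMIT and NOEMPTY/SCRATCH options should follow your site's standards.

```jcl
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG ( NAME(ID.GDG.FIN) LIMIT(255) NOEMPTY SCRATCH )
/*
//* Each later run creates the next generation automatically:
//RUNPGM   EXEC PGM=MYPROG
//OUTFILE  DD DSN=ID.GDG.FIN(+1),DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
```

Within one job, (+1) always refers to the same new generation; in the next job, that generation becomes (0) and a fresh (+1) can be created.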
Defending against denial-of-service attacks typically involves a combination of attack detection, traffic classification and response tools, with the goal of blocking traffic identified as illegitimate while allowing only the traffic identified as legitimate. Below is a list of prevention and response tools:
Firewalls

Firewall rules are quite simple: allow or deny ports, protocols, or IP addresses. Some DoS attacks are too complex for many firewalls; for example, if an attack arrives on port 80 (web service), a packet-filter firewall cannot easily tell the good traffic of the service from the bad traffic of the attack, so it cannot prevent the attack. In addition, firewalls may be located too deep in the network: routers may be affected before the traffic even reaches the firewall. However, firewalls can effectively prevent users from launching even simple flooding attacks from machines behind the firewall.
Some stateful firewalls, such as OpenBSD's pf(4) packet filter, can act as a proxy for connections: the handshake is validated (with the client) instead of simply forwarding the packet to the desired destination. The feature is available on other BSDs as well. In this context, it is known as "synproxy".
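A minimal pf.conf rule using synproxy might look like the following sketch (the interface macro and server address are placeholders). pf answers the client's TCP handshake itself and only opens a connection to the server once the handshake completes, so spoofed SYN floods never reach the server:

```
ext_if  = "em0"          # external interface (placeholder)
web_srv = "192.0.2.10"   # protected web server (documentation address)

pass in on $ext_if proto tcp from any to $web_srv port 80 \
    flags S/SA synproxy state
```

The `flags S/SA` part restricts the rule to initial SYN packets, which is the usual pattern in pf examples of synproxy.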
Switches

Most switches have some rate-limiting and ACL capability. Some switches provide automatic and/or system-wide rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet inspection and Bogon filtering (bogus IP filtering) to detect and remediate denial-of-service attacks through automatic rate filtering and WAN link failover and balancing.
These schemes work as long as the DoS attack is of a kind that can be prevented by their use. For example, a SYN flood can be prevented using delayed binding or TCP splicing. Similarly, content-based DoS can be prevented using deep packet inspection, and attacks originating from dark addresses or going to dark addresses can be prevented using Bogon filtering. Automatic rate filtering works as long as the rate thresholds have been set correctly, and WAN link failover works as long as both links have a DoS/DDoS prevention mechanism.
Routers

Similar to switches, routers have some rate-limiting and ACL capability, and these too are set manually. Most routers can be easily overwhelmed under a DoS attack, and adding rules to take flow statistics out of the router while an attack is in progress further slows things down and complicates matters. Cisco IOS has preventive features that help control flooding.
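As one illustration, Cisco IOS committed access rate (CAR) can cap the bandwidth available to a traffic class such as ICMP. The ACL number, interface name and rate values below are placeholders, and exact syntax varies by IOS release:

```
access-list 150 permit icmp any any
!
interface FastEthernet0/0
 rate-limit input access-group 150 256000 8000 8000 conform-action transmit exceed-action drop
```

The three numbers are the average rate in bits per second and the normal and maximum burst sizes in bytes; ICMP within the limit is forwarded, and the excess is dropped.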
Application Front End Hardware
Application front-end hardware is intelligent hardware placed on the network before traffic reaches the servers; it can be used on networks in combination with switches and routers. As data packets enter the system, the application front-end hardware analyzes them and classifies them as priority, regular, or dangerous. There are more than 25 bandwidth-management vendors, and hardware acceleration is considered the backbone of bandwidth management.
IPS based prevention
Intrusion prevention systems (IPS) are effective only if the attacks have signatures associated with them. However, the trend among attacks is to have legitimate content but bad intent. Intrusion prevention systems that work by recognizing content cannot block behavior-based DoS attacks.
An ASIC-based IPS can detect and block denial-of-service attacks because it has the processing power and the granularity to analyze the attacks and act like an automated circuit breaker. A rate-based IPS (RBIPS) must analyze traffic granularly, continuously monitor the traffic pattern and determine whether there is a traffic anomaly. It must let legitimate traffic flow through while blocking the DoS attack traffic.
DDS based defense
A DoS Defense System (DDS) is more focused on the problem than an IPS: it can block connection-based DoS attacks as well as those with legitimate content but bad intent. A DDS can address both protocol attacks (such as Teardrop and Ping of death) and rate-based attacks (such as ICMP floods and SYN floods).
Top Layer's well-known IPS products, for example, use a purpose-built system that can identify and block denial-of-service attacks at greater speed than a software-based system.
Blackholing and sinkholing
With blackholing, the traffic heading to the attacked DNS name or IP address is diverted to a "black hole" (a non-existent server, a null interface, etc.). To remain efficient without affecting network connectivity, it is best managed by the ISP. Sinkholing routes the traffic to a valid IP address that can analyze it and distinguish the good traffic from the bad. Sinkholing is not efficient for the most severe attacks.
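For illustration, on a Cisco-style router a blackhole for a single victim address can be created with a static route to the null interface (the address below is a documentation placeholder):

```
ip route 203.0.113.5 255.255.255.255 Null0
```

All traffic destined to that /32 is then silently discarded by the router, sacrificing the victim host's reachability to keep the rest of the network up.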
Clean pipes

All traffic is passed through a "cleaning center" or "scrubbing center" via various methods such as proxies, tunnels or direct circuits; the center separates out the "bad" traffic (DDoS attacks and other common internet attacks) and passes only the good traffic on to the server. The provider needs central connectivity to the internet to manage this kind of service, unless it happens to be located within the same facility as the "cleaning center" or "scrubbing center".
Examples of providers of this type of service include Verisign, Tata Communications and Prolexic.
It’s not easy being green, especially on the road.
All-electric cars may help the U.S. become more energy-independent and lower pollution levels, but right now they’re not practical in every situation, especially in the West where people often travel long distances. The fear of running out of power in an all-electric car is common enough to have a name — “range anxiety.”
Limited driving range is the biggest issue for people considering the purchase of an electric car, according to Zeljko Pantic, assistant professor in Utah State University's (USU) department of Electrical and Computer Engineering.
“They kind of fear that they will not be able to find a place to recharge, and when they find that place for recharging their vehicle they’ll have to wait maybe half hour or an hour until the vehicle is ready to go,” Pantic said. “Those are the main drawbacks of the existing technology.”
Researchers at USU are working on the cure for range anxiety. On June 25, they received approval from the school’s Board of Trustees to move forward with plans for a new research facility that would include an oval track to test technology for recharging electric vehicles on the go. The proposed facility would be built at 650 E. Grand Ave., in Logan. Woodbury Corporation, the only company to respond to USU’s request for information, would be the builder.
“We hope to have some plans and designs by the end of summer,” said Robert Behunin, vice president for commercialization at USU, adding that costs won’t be known until the plans are more solid. “If we could have something going and started by the end of summer, or really fall, that would be optimum for us.”
USU’s Wireless Power Transfer team, with the Utah Science Technology and Research initiative’s Advanced Transportation Institute at the university, have already created a stationary wireless power charging system. The system has been tested on the school’s electric Aggie Bus. Instead of being plugged in to be charged, Behunin said, the bus is parked over a charging pad that transmits energy to a receiver in the bus without using electrical wires.
The next step is in-motion wireless charging.
“Inside the road, you’re going to have a coil embedded,” said Pantic. “Power would be transferred from the electrical grid to the coils, then, via magnetic field, delivered to a vehicle moving over the coil.”
There is opportunity to replace the grid with alternative sources, he said, that would capture the energy of the wind or sun.
In addition to embedding wireless charging in roadways, it could be placed in parking stalls and at intersections, where cars sit.
The new 4,800-square-foot research space is needed because current facilities aren’t large enough to test the technology being developed at USU, according to Pantic.
“There are a couple of very significant problems that need to be solved,” he said.
One is how the system will work if vehicles aren’t perfectly aligned as they pass over the coils in the road.
“The efficiency of the power transfer might be significantly different compared to stationary charging,” said Pantic. “The synchronization of charging moments, events, is also a problem ... and we need to significantly redesign energy storage inside the vehicle.”
There are also safety and security issues to be worked out.
All of that requires testing on a big scale — and on a road.
In the beginning, only certain parts of the 1,300-foot-long track will be electrified.
“Later, we expect that we’ll probably need more electrified parts of the track,” he said, noting that it will provide an opportunity to explore ways to retrofit existing roads with the new technology.
At this point the research facility and test track are just a proposal — no construction has been started. If approved by the Utah State Board of Regents, it will be the only one of its kind and size in the U.S.
“We’re continuing to move forward in creating new ideas, new opportunities, and new intellectual property in an area of research that will help our air quality, and our environment, and I think that’s pretty important,” said Behunin.
©2014 the Standard-Examiner (Ogden, Utah) | <urn:uuid:09744a66-c6a0-44d8-ba77-fd0c8ddf5b97> | CC-MAIN-2017-04 | http://www.govtech.com/transportation/Utah-State-University-Researchers-Want-to-Cure-Range-Anxiety-in-Electric-Cars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00555-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961025 | 945 | 2.671875 | 3 |
Definition: Searching for approximate (e.g., up to a predefined number of symbol mismatches, insertions, and deletions) occurrences of a pattern string in a text string. Preprocessing, e.g., building an index, may or may not be allowed.
Also known as approximate string matching.
Specialization (... is a kind of me.)
Ratcliff/Obershelp pattern recognition, phonetic coding, Jaro-Winkler, Levenshtein distance, string matching with mismatches.
See also string matching, inverted index.
Note: From Algorithms and Theory of Computation Handbook, page 13-18, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
For large collections that are searched often, it is far faster, though more complicated, to start with an inverted index or a suffix tree.
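As a concrete illustration of the edit-distance measure that underlies approximate matching (not part of the dictionary entry), here is a short Python sketch. The naive scan is quadratic in the worst case and is intended only to make the definition concrete; the indexed approaches mentioned above are what one would use in practice.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance from the current prefix of a to b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free on a match)
            ))
        prev = curr
    return prev[-1]

def approx_find(pattern: str, text: str, k: int):
    """Naive search: yield each position in text where some substring
    starting there matches pattern within k edits."""
    n, m = len(text), len(pattern)
    for start in range(n):
        for length in range(max(0, m - k), min(n - start, m + k) + 1):
            if levenshtein(pattern, text[start:start + length]) <= k:
                yield start
                break
```

With `k = 0` this degenerates to exact string matching; each extra unit of `k` admits one more mismatch, insertion, or deletion.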
Qi Xiao Yang, Sung Sam Yuan, Lu Chun, Li Zhao, and Sun Peng, Faster Algorithm of String Comparison 2001.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 2 September 2014.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "string matching with errors", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/stringMatchwError.html | <urn:uuid:c4c51519-fe20-4a05-a535-1d1b76743935> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/stringMatchwError.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00116-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.812407 | 352 | 2.5625 | 3 |
A group of researchers at the University of California, San Diego (UCSD) has established a new approach to simulating molecular behavior. By running an enhanced sampling algorithm on a GPU-equipped desktop, the team was able to achieve millisecond-scale protein simulations. Prior to this, similar research required the use of Anton, a multi-million dollar, purpose-built supercomputer specifically designed for molecular modeling. HPCwire spoke with project members Ross Walker and Romelia Salomon-Ferrer about their research.
A primary challenge in the study of protein dynamics is the ability to simulate interactions over relatively long time periods. “The problem we’ve always had is that the biological timescale is really at the high-microsecond/low-millisecond time scale,” said Walker. “That’s where most of the interesting large-scale motions in proteins are occurring.”
He went on to explain that conventional CPU clusters could handle a 50-nanosecond simulation per day. Hybrid systems (those accelerated by GPUs) perform slightly better, achieving around 75 to 100 nanoseconds in a day. But that’s still 100 times shorter than a microsecond.
Eventually the simulations hit a wall, limiting their ability to model interactions past a given amount of time. The primary issue lies with interconnect technology, according to Walker. He said that additional GPUs could be added to the nodes, but it would only help if system bandwidth was doubled and latency cut in half.
This dilemma prompted D.E. Shaw Research (a company founded by hedge fund billionaire David Shaw), to advance drug discovery by focusing on molecular dynamics, and to then create the Anton supercomputer. The system consists of specialized ASICs and a custom Torus interconnect. Using this unique architecture, Anton has the ability to outperform traditional supercomputers by two to three orders of magnitude, simulating up to 25 microseconds per day.
While Shaw’s design has obvious benefits in speed and accuracy, its proprietary approach makes gaining access to an Anton machine rather difficult. For academic researchers, there is but a single machine in production, at the Pittsburgh Supercomputing Center (PSC).
So the team at UCSD considered changing the algorithms, enabling them to be run on basic commodity hardware. “Do we really have to stick with the equations we’ve been using for the past 30 years?” asked Walker. “Could we try and act smarter with these equations and tailor them for specific things we want to look at?”
They developed a technique called accelerated molecular dynamics (aMD), which optimizes the conformational space sampling of a given protein molecule. The technique was developed in collaboration with the Howard Hughes Medical Institute (HHMI) and UCSD professor Andrew McCammon, a co-author of the research. According to an official statement, the group ran an aMD simulation on a desktop equipped with just a pair of NVIDIA GTX 580s.
The researchers analyzed the bovine pancreatic trypsin inhibitor (BPTI), a relatively small molecule as proteins go. It took around 10 days of computation to capture 500 nanoseconds of protein folding, which is 2,000 times shorter than millisecond-scale simulation performed by Anton. However, the aMD run accurately represented all the different structural states returned by the much longer supercomputer simulation. While the UCSD team used a Fermi-based GPU to complete their application run, according to Walker and and Salomon-Ferrer, a Kepler-generation unit, like the K10, would improve processing time by about 30 percent.
The most obvious advantage to this approach is its ability to perform accurate protein simulations on thousand-dollar desktop systems. That opens up this type of research to thousands of scientists, rather than just those select few with custom-built supercomputers at their disposal.
CEOP-AEGIS - Coordinated Asia-European long-term Observing system of Qinghai Tibet Plateau hydro-meteorological processes and the Asian-monsoon systEm with Ground satellite Image data and numerical Simulations
Agency: Cordis | Branch: FP7 | Program: CP-SICA | Phase: ENV.2007.4.1.4.2. | Award Amount: 4.46M | Year: 2008
Human life and the entire ecosystem of South East Asia depend upon the monsoon climate and its predictability. More than 40% of the earth's population lives in this region. Droughts and floods associated with the variability of rainfall frequently cause serious damage to ecosystems in these regions and, more importantly, injury and loss of human life. The headwater areas of seven major rivers in SE Asia, i.e. Yellow River, Yangtze, Mekong, Salween, Irrawaddy, Brahmaputra and Ganges, are located in the Tibetan Plateau. Estimates of the Plateau water balance rely on sparse and scarce observations that cannot provide the required accuracy, spatial density and temporal frequency. Fully integrated use of satellite and ground observations is necessary to support water resources management in SE Asia and to clarify the roles of the interactions between the land surface and the atmosphere over the Tibetan Plateau in the Asian monsoon system. The goal of this project is to: 1. Construct out of existing ground measurements and current and future satellites an observing system to determine and monitor the water yield of the Plateau, i.e. how much water is finally going into the seven major rivers of SE Asia; this requires estimating snowfall, rainfall, evapotranspiration and changes in soil moisture; 2. Monitor the evolution of snow, vegetation cover, surface wetness and surface fluxes and analyze the linkage with convective activity, (extreme) precipitation events and the Asian Monsoon; this aims at using monitoring of snow, vegetation and surface fluxes as a precursor of intense precipitation towards improving forecasts of (extreme) precipitation in SE Asia. A series of international efforts began in 1996 with the GAME-Tibet project. The effort described in this proposal builds upon 10 years of experimental and modeling research, and the consortium includes many key players and pioneers of this long-term research initiative.
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: SFS-02a-2014 | Award Amount: 7.97M | Year: 2015
FATIMA addresses effective and efficient monitoring and management of agricultural resources to achieve optimum crop yield and quality in a sustainable environment. It covers both ends of the scale relevant for food production, viz., precision farming and the perspective of a sustainable agriculture in the context of integrated agri-environment management. It aims at developing innovative and new farm capacities that help the intensive farm sector optimize their external input (nutrients, water) management and use, with the vision of bridging sustainable crop production with fair economic competitiveness. Our comprehensive strategy covers five interconnected levels: a modular technology package (based on the integration of Earth observation and wireless sensor networks into a webGIS), a field work package (exploring options of improving soil and input management), a toolset for multi-actor participatory processes, an integrated multi-scale economic analysis framework, and an umbrella policy analysis set based on indicator-, accounting- and footprint approach. FATIMA addresses and works with user communities (farmers, managers, decision makers in the farm and agribusiness sector) at scales ranging from farm, over irrigation scheme or aquifer, to river-basins. It will provide them with maps of fertilizer and water requirements (to feed into precision farming machinery), crop water consumption and a range of further products for sustainable cropping management supported with innovative water-energy footprint frameworks. All information will be integrated in leading-edge participatory spatial online decision-support systems. The innovative FATIMA service concept considers the economic, environmental, technical, social, and political dimensions in an integrated way. FATIMA will be implemented and demonstrated in 8 pilot areas representative of key European intensive crop production systems in Spain, Italy, Greece, Netherlands, Czech Republic, Austria, France, Turkey.
Agency: Cordis | Branch: FP7 | Program: CP | Phase: SPA.2010.1.1-04 | Award Amount: 3.04M | Year: 2010
SIRIUS addresses efficient water resource management in water-scarce environments. It focuses in particular on water for food production with the perspective of a sustainable agriculture in the context of integrated river-basin management, including drought management. It aims at developing innovative and new GMES service capacities for the user community of irrigation water management and sustainable food production, in accordance with the vision of bridging and integrating sustainable development and economic competitiveness. SIRIUS merges two previously separate strands of activities: those under the umbrella of GMES, related to land products and services (which address water to some extent), and those conducted under FP5/6-Environment and national programs, related to EO-assisted user-driven products and services for the water and irrigation community. As such, it will draw on existing GMES Core Services as much as possible, by integrating these products into some of the required input for the new water management services. It also makes direct use of the EO-assisted systems and services developed in the FP6 project PLEIADeS and its precursor EU or national projects, like DEMETER, IRRIMED, ERMOT, MONIDRI, AGRASER, all addressing the irrigation water and food production sectors, some of which have resulted in sustainable system implementation since 2005. SIRIUS addresses users (water managers and food producers) at scales ranging from farm, over irrigation scheme or aquifer, to river basins. It will provide them with maps of irrigation water requirements, crop water consumption and a range of further products for sustainable irrigation water use and management under conditions of water scarcity and drought, integrated in leading-edge participatory spatial online decision-support systems. The SIRIUS service concept considers the economic, environmental, technical, social, and political dimensions in an integrated way.
When optical transceivers were first deployed, verifying their performance was straightforward. The entire network was installed and owned by a single company, and if the system worked, extensive testing of the subcomponents was unnecessary. Today, however, most optical networks use components from a variety of suppliers, so testing the compatibility and interoperability of each fiber optic transceiver has become particularly important. How do you test a fiber optic transceiver? This article gives you the answer.
As we all know, a fiber optic transceiver basically consists of a transmitter and a receiver. When a transmitter is connected to a receiver through a fiber but the system doesn't achieve the desired bit-error ratio (BER), is the transmitter at fault? Or is it the receiver? Perhaps both are faulty. A high-quality receiver can compensate for a low-quality transmitter (and vice versa). Thus, specifications should guarantee that any receiver will interoperate with a worst-case transmitter, and any transmitter will provide a signal of sufficient quality to interoperate with a worst-case receiver.
Precisely defining worst case is often a complicated task. If a receiver needs a minimum level of power to achieve the system BER target, then that level dictates the minimum allowed output power of the transmitter. If the receiver can only tolerate a certain level of jitter, that level defines the maximum acceptable jitter from the transmitter. In general, there are four basic steps in testing an optical transceiver, split between transmitter testing and receiver testing.
Transmitter parameters may include the wavelength and shape of the output waveform, while receiver parameters may specify tolerance to jitter and bandwidth. There are two steps to testing a transmitter:
1. The input signal used to test the transmitter must be of good enough quality. Jitter measurements and an eye mask test must be performed electrically to confirm this. The eye mask test is the common method for viewing the transmitter waveform and provides a wealth of information about overall transmitter performance.
2. The optical output of the transmitter must be tested against several optical quality metrics, such as a mask test, optical modulation amplitude (OMA), and extinction ratio.
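To illustrate the quantities involved, extinction ratio and OMA are simple functions of the optical one- and zero-level powers. The sketch below is a generic calculation rather than part of any particular test standard, and the power levels are made-up example values:

```python
import math

def transmitter_metrics(p1_mw: float, p0_mw: float) -> dict:
    """Derive basic transmitter metrics from the one/zero optical power levels (mW)."""
    er_linear = p1_mw / p0_mw
    return {
        "ER_dB": 10 * math.log10(er_linear),            # extinction ratio in dB
        "OMA_mW": p1_mw - p0_mw,                        # optical modulation amplitude
        "avg_power_dBm": 10 * math.log10((p1_mw + p0_mw) / 2),
    }

# Example: 0.8 mW for a logical one, 0.1 mW for a logical zero (illustrative values).
m = transmitter_metrics(p1_mw=0.8, p0_mw=0.1)
print(f"ER = {m['ER_dB']:.1f} dB, OMA = {m['OMA_mW']:.2f} mW")
# → ER = 9.0 dB, OMA = 0.70 mW
```

A higher extinction ratio means the receiver sees a wider gap between ones and zeros, which is why this metric appears alongside the mask test.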
To test a receiver, there are also two steps:
3. Unlike transmitter testing, where one must ensure that the input signal is of good enough quality, receiver testing involves sending in a signal that is of poor enough quality. To do this, a stressed eye representing the worst-case signal is created. This is an optical signal and must be calibrated using jitter and optical power measurements.
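The BER targets that drive all of this testing relate to eye quality through the well-known Gaussian-noise approximation BER = ½·erfc(Q/√2), where Q is the eye's quality factor. This relation is standard textbook material rather than something specific to this article, and the Q values below are illustrative:

```python
import math

def ber_from_q(q: float) -> float:
    """Gaussian-noise approximation relating eye quality factor Q to BER."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# A Q of roughly 7 corresponds to the classic 1e-12 BER target.
for q in (6.0, 7.0):
    print(f"Q = {q}: BER ≈ {ber_from_q(q):.2e}")
```

This is why a stressed eye is such a demanding test: degrading the eye even slightly lowers Q, and the BER rises by orders of magnitude.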
4. Finally, the electrical output of the receiver must be tested across three basic categories of tests.
In summary, testing a fiber optic transceiver is a complicated job, but it is an indispensable step in ensuring performance. The basic eye-mask test is an effective way to test a transmitter and is still widely used today. Testing a receiver is more complex and requires more testing methods. Fiberstore provides all kinds of transceivers compatible with many brands, such as Cisco, HP, IBM, Arista, Brocade, DELL, and Juniper. At Fiberstore, each fiber optic transceiver has been tested to ensure our customers receive optics of superior quality. For more information about the transceivers or compatibility testing, please visit www.fs.com or contact us at email@example.com.
This post is the result of additional research started by this comment. A Cryptographic Service Provider (CSP) must be digitally signed by Microsoft before it can be installed and used on Windows. But this digital signature is technically very different from AuthentiCode and serves another goal.
AuthentiCode uses digital certificates; a certificate is a digitally signed document which links a public key to an identity. Code signing is performed to link software to its author and to allow detection of program alteration. AuthentiCode is also used by Microsoft to digitally sign device drivers. In this case, the signature is used to show that the driver passed Microsoft's testing program (the signer is Microsoft Windows Hardware Compatibility Publisher).
A CSP must also be signed by Microsoft, but the technique is different from AuthentiCode. Microsoft will sign the hash of the CSP. This signature can be stored inside the file as a resource (ID 0x29A or 666) that is 144 bytes long, or inside the registry as a blob of 136 bytes. When I looked at several signatures inside CSPs, I noticed that the first 8 bytes were almost always identical and hence are probably not part of the actual signature (144 – 8 = 136).
Since the length of the signature is constant and very short, it cannot be a certificate; neither can it be decoded as one. My educated guess is that this signature is nothing more than the cryptographic hash of the file encrypted with a private key kept by Microsoft. Checking the signature is thus done in Windows by calculating the hash of the file, decrypting the signature with the public key and comparing the hash with the decrypted signature. Equality shows that the signature is valid. The use of the cryptographic hash ensures that it is virtually impossible to modify the file while keeping the same hash, and the use of the private key guards the hash from forgery.
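The scheme guessed at above — hash the file, "decrypt" the signature with the public key, compare — is ordinary RSA signing of a digest. Here is a toy sketch of that flow; the RSA parameters are absurdly small and purely for illustration, and nothing here claims to match Windows' actual, undocumented format or key:

```python
import hashlib

# Textbook-RSA toy parameters -- far too small for real security.
p, q = 61, 53
n = p * q                             # public modulus
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (the signer's secret)

def digest(data: bytes) -> int:
    # Reduce the hash mod n only because the toy modulus is tiny.
    return int.from_bytes(hashlib.sha1(data).digest(), "big") % n

def sign(data: bytes) -> int:
    return pow(digest(data), d, n)    # "encrypt" the hash with the private key

def verify(data: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(data)  # decrypt with public key, compare

blob = b"pretend CSP file contents"
sig = sign(blob)
print(verify(blob, sig))  # a matching hash validates the signature
```

Any modification to the file changes its hash, and without the private key an attacker cannot produce a signature that decrypts to the new hash.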
This is an example of the signature inside CSP rsaenh.dll viewed with the free XN Resource Editor:
A signature for a CSP can only be obtained by providing documents to Microsoft promising to obey various legal restrictions and giving valid contact information. Thus the goal of the signature is to prove that Microsoft and the CSP author promise to obey the restrictions on cryptography. But I'm not a lawyer; the formulation of this goal is the result of my inadequate legal skills.
I was also told that Microsoft will perform some testing, but I haven't yet received confirmation or details about this.
Engineers at the University of Wisconsin-Madison have created the world’s fastest stretchable and wearable integrated circuits. The move is an advance that could help propel the Internet of Things (IoT) forward.
The platform, the details of which were published May 27 in the journal ‘Advanced Functional Materials‘, could open up options for manufacturers seeking to expand the capabilities and possible applications of wearable technologies, especially as developers look to take advantage of a new generation of wireless broadband technologies referred to as 5G.
Tiny circuits with biomedical applications
The new technology also has a range of biomedical applications. Used in epidermal electronic systems, which are essentially just like temporary, electronic tattoos, this kind of technology could allow healthcare staff to remotely and wirelessly monitor patients. No wires and less time in the hospital also stands to increase patient comfort.
The stretchable integrated circuits are just 25 micrometers (0.025 millimeters) thick. But what makes them so powerful is their unique structure, which relies on an ‘S’-shaped formation that works like a 3-D puzzle to give the transmission lines the ability to stretch without affecting performance.
You might like to read: Cisco and Ericsson bet big on 5G and Internet of Things networks
Opening the door to new IoT possibilities
In an official post on the University of Wisconsin-Madison’s website, the team stated that “It also helps shield the lines from outside interference and, at the same time, confine the electromagnetic waves flowing through them, almost completely eliminating current loss. Currently, the researchers’ stretchable integrated circuits can operate at radio frequency levels up to 40 gigahertz.”
Zhenqiang “Jack” Ma, the Lynn H. Matthias Professor in Engineering and Vilas Distinguished Achievement Professor in electrical and computer engineering, said “We’ve found a way to integrate high-frequency active transistors into a useful circuit that can be wireless. This is a platform. This opens the door to lots of new capabilities.”
You might like to read: Are businesses ready to adopt wearables?
Telemedicine – the practice of delivering healthcare remotely – is booming on an international scale. Through telemedicine adoption, healthcare providers both large and small have a significant opportunity to provide better patient care as well as to open up potentially lucrative new revenue sources.
What Is Telemedicine?
Telemedicine connects healthcare providers via advanced telecommunications services with the goal of broadening and improving patient care. More specifically, telemedicine practice includes:
- Consultation provided by doctors to patients and other healthcare providers at distant facilities.
- Remote assistance in surgery, emergency services, complex clinical cases, or other medical procedures.
- Remote patient monitoring.
- Exchange of health information between disparate care providers.
- Online training of medical staff.
OAK HAMMOCK MARSH, MANITOBA--(Marketwired - Jan. 31, 2014) - In the spirit of sustainability between conservation and agriculture and to celebrate World Wetlands Day on February 2, Canadian Fertilizer Institute (CFI), CropLife Canada, Ducks Unlimited Canada (DUC), and Soil Conservation Council of Canada (SCCC) have joined together for a shared policy position that Canada can and must do more to protect our country's wetlands in agricultural landscapes.
This is a unique collaboration between conservation and the agricultural industry and is perfectly timed with this year's World Wetlands Day theme of "Wetlands and Agriculture: Partners for Growth."
"Recent research conducted by Ducks Unlimited Canada clearly illustrates the importance of stewarding our water resources such as wetlands for values beyond the ones that immediately come to mind like drinking water, the production of crops, and wildlife habitat," says DUC CEO Greg Siekaniec. "Canadian wetlands provide a first and last line of defense against unintended runoff of agricultural inputs and a sink for greenhouse gasses."
This new partnership between CFI, CropLife Canada, DUC, and SCCC will no doubt put Canada's wetlands top of mind through this joint statement of support.
"Our partners recognize that markets are demanding more food and fibre," says Lorne Hepworth, CropLife Canada President and CEO. "This growth cannot occur at the expense of further loss of wetlands, which provide important environmental benefits. Instead, we must look to innovations to improve agriculture. Products such as GM crops, pesticides, and crop varieties derived from modern plant breeding will all help Canadian farmers increase their productivity without further loss of wetlands."
Wetland loss has dramatically accelerated in parts of Canada in response to increased commodity prices and land values. We have lost over 70 per cent of our wetlands in settled parts of Canada, and that number exceeds 90 per cent in some developed areas.
"By following the 4R Nutrient Stewardship Best Management Practices farmers can reduce crop nutrient runoff," says Roger Larson, President of CFI. "Maintaining wetlands on the agricultural landscape is a natural way farmers mitigate the impacts of agricultural inputs on the environment. In many cases, wetlands are able to reduce the impact of agricultural inputs by avoiding leakage into larger bodies of water."
"Just as zero tillage has dramatically reduced soil erosion, stopping wetland drainage will dramatically reduce agricultural impacts on our water resources," says Don McCabe, President of SCCC. "Protecting wetlands will not only help preserve our soil, they will also protect the quality of our rivers, lakes and drinking water supplies."
With this shared belief in the need to further protect Canada's important water resources the partners pledge to work together to find solutions to better protect wetlands in agricultural landscapes.
With wetlands and agriculture being so closely linked, conservation agriculture holds the key to healthy landscapes and profitable farming systems. With an overall theme of sustainability, the sixth World Congress on Conservation Agriculture is being held in Winnipeg, MB June 22-25, 2014. As supporters of conservation agriculture, CFI, CropLife Canada, DUC, and SCCC invite you to discover why sustainable agriculture is good for all Canadians. This is the first time this event is being held in North America.
February 2 is World Wetlands Day. This day marks the date of the adoption of the Convention on Wetlands on February 2, 1971, in the Iranian city of Ramsar. Each year since 1997, the Ramsar Secretariat has provided materials so that government agencies, non-governmental organizations, conservation organizations, and groups of citizens can help raise the public awareness about the importance of wetlands. (www.ramsar.org)
About Canadian Fertilizer Institute:
The Canadian Fertilizer Institute (CFI) is an industry association that represents manufacturers, wholesale and retail distributors of nitrogen, phosphate and potash fertilizers. Canada's fertilizer industry plays an essential role in ensuring that world food needs can be met economically and sustainably. Canada supplies approximately 12 per cent of the world's fertilizer materials. We are the world's largest exporters of potash and elemental sulphur. As well, our industry contributes over $12 billion annually to the Canadian economy. Learn more at www.cfi.ca.
About CropLife Canada:
CropLife Canada is the trade association representing the Canadian developers, manufacturers and distributors of pest control products and plant biotechnology. Learn more at www.croplife.ca.
About Ducks Unlimited Canada:
Ducks Unlimited Canada (DUC) is the leader in wetland conservation. A registered charity, DUC partners with government, industry, non-profit organizations and landowners to conserve wetlands that are critical to waterfowl, wildlife and the environment. Learn more at www.ducks.ca.
About Soil Conservation Council of Canada:
The Soil Conservation Council of Canada is the face and voice of soil conservation in Canada. It is a national, non-governmental, independent organization, formed in 1987, to provide a non-partisan public forum to speak and act at the national level for soil conservation. Learn more at www.soilcc.ca.