The use of wireless systems for network and Web access is exploding, from wireless LANs to e-mail-capable cell phones. It makes network access completely portable, and with systems starting at under $300, it’s very affordable.
But widespread wireless use has raised serious new security challenges. How can you be certain the person connecting to your wireless network is a legitimate user and not a hacker sitting in your parking lot?
In a recent study, over half of the companies surveyed didn’t even use the most basic encryption and security features of their wireless LAN systems. For those companies who choose to implement security features, solutions include access control appliances or VPNs to protect their wireless systems instead of (or in addition to) wireless encryption to establish a secure session.
Once a secure session is initiated, however, security cannot stop there. The organization must be confident that users entering the secure sessions are who they say they are; they must be positively identified.
Most organizations, unfortunately, choose fixed passwords to identify users, and fixed passwords are inherently weak. Many password attacks exist today, from hacking dictionaries and sniffers to social engineering and personal-information attacks. These tools, like the L0phtCrack brute-force password cracker, are readily available on dozens of Web sites, free to anyone with Internet access. Such attacks can easily compromise fixed passwords, no matter how stringent the organization’s password policy. In fact, organizations lose millions of dollars every year to password breaches. In late 2002, an identity theft ring was exposed in the U.S. that had victimized over 30,000 people; the suspects allegedly stole passwords from credit agencies and banks, accessing credit reports and information and costing customers over US$2.7 million.
Best defense: strong authentication
Passwords are easily compromised because there’s only one factor to possess: the password. With it, an attacker can access network systems again and again. With strong authentication, there are usually at least two factors. Often, these two factors are something you know, like a personal identification number, and something you have, which can be a hardware token, a digital certificate, a smart card, or other device. An ATM card is an excellent example of this: you must have the ATM card and know the PIN to access your accounts.
The user experience: logging on
When a strong authentication system with hardware tokens is put in place to protect a wireless LAN, a user requests access and is presented with an onscreen dialog box or prompt to enter a username and a one-time password. The user activates the hardware token in order to get the one-time password on the token screen, which must be typed in at the prompt in order to gain access to the requested resource. Once a one-time password is used, it can’t be re-used to gain access. This eliminates many of the vulnerabilities of fixed passwords, making sniffing, hacker dictionaries, personal information attacks, and other common password attacks useless to hackers.
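To make the idea concrete, here is a minimal sketch of how a server might validate such one-time passwords. It assumes an HOTP-style, counter-based scheme in which the token and the server share a secret (as in RFC 4226); the function names, look-ahead window, and counter handling are illustrative assumptions, not details of any particular vendor's product.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and a moving counter."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, server_counter: int, submitted: str, look_ahead: int = 3):
    """Return the advanced counter on success, or None on failure."""
    for c in range(server_counter, server_counter + look_ahead):
        if hmac.compare_digest(hotp(secret, c), submitted):
            return c + 1          # move past the used counter so a replayed code fails
    return None
```

Because the server advances its counter after a successful check, a sniffed code is worthless once it has been used, which is exactly the property described above.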
The administrator experience: adding strong authentication
There are generally three ways to use strong authentication systems to protect wireless LANs:
1. EAP and 802.1X. Specialized authentication servers can interoperate with strong authentication services using the Extensible Authentication Protocol (EAP) and 802.1X infrastructure. This combination is much more secure than competing standards. EAP and 802.1X are used to control access to network devices, including wireless LANs. These standards have been embraced by a number of leading hardware and software vendors including Cisco, Microsoft, and Hewlett-Packard, and many products, designed for both wireless and wired networks, already implement these standards. Only certain types of EAP protocols, like TTLS (Tunneled Transport Layer Security) and PEAP (Protected EAP), support strong authentication.
2. Wireless access control appliances. Because Web-based authentication is easy to deploy, it is becoming more pervasive. One solution to use Web-based authentication with wireless LANs is called an access control appliance, a firewall-like device that sits between the wireless access point and the rest of the network. These appliances force wireless users to authenticate at the application level (typically from their Web browsers over HTTPS) before receiving access to the rest of the network. In this setup, anybody can connect to the wireless access point without authentication, but users must authenticate in order to get beyond the local subnet to organizations’ trusted networks. Interoperability with strong authentication systems is often accomplished using the RADIUS protocol.
3. Virtual private networks. VPNs are traditionally used to link internal networks across an insecure network, or for remote access. More and more organizations are using VPNs to secure wireless LAN connections by connecting the wireless client to an internal network via the VPN gateway through an encrypted tunnel. This ensures the authenticity and secrecy of the information as it passes across insecure networks. To securely authenticate VPN users (whether wireless or not), strong authentication can be added (often using the common RADIUS protocol) to provide the high level of security recommended by experts.
The device experience: embedded in the BIOS
Some organizations prefer an extra layer of security: allowing authorized users to access protected information only from certain computers or workstations. While some authentication systems can recognize and authenticate IP addresses, there is now another way to identify devices. Many BIOS chips can run security software that embeds security information directly in the BIOS. This information, similar to a digital certificate, is recognized by some authentication systems and ensures that users can access protected networks only from their assigned laptops or other BIOS-based devices.
The bottom line
Creating a secure tunnel for wireless access is a vital element of corporate security, but can be easily undermined by weak user passwords. Without strong authentication, attackers can often access back-end information just as easily as your authorized users.
What is it that clients want from a datacenter? Well, that’s entirely dependent on the client. Connectivity and uptime are always important to every customer, and any service has to come at the right price. Ecological impact and the desire to be carbon neutral are also key factors in a client’s choice of datacenter. There is an array of eco-friendly datacenters where power usage effectiveness (PUE) is decreasing rapidly, but the industry average is still around 2.0.
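As a quick aside on what that 2.0 figure means: PUE is simply total facility power divided by the power that actually reaches the IT equipment, so 1.0 is the theoretical ideal. The numbers in this small sketch are invented for illustration, not measurements from any real facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

print(pue(2000, 1000))  # 2.0 -> roughly the industry average cited above
print(pue(1300, 1000))  # 1.3 -> the sort of figure newer designs aim for
```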
The problem is that most datacenters are old, and updating these buildings is costly and complicated for both the datacenter and its clients. But newly built datacenters are introducing more and more innovative design to help reduce PUE, with a consequent fall in costs for the client. Innovative design and investment in research and development are vital for the industry. Here are a few things to look out for if you’re in search of a new, sustainable datacenter.
Datacenters require continual cooling, and cooling is the main source of electricity consumption within these buildings. New builds and refurbished datacenters often use a variety of cooling methods, with adiabatic cooling systems being one popular choice to decrease energy consumption and increase efficiency. An adiabatic process, in scientific terms, is one in which the net heat transfer to or from the working fluid is zero, so no energy is lost as heat. It is a highly efficient process. Adiabatic cooling is typically only required when the outside air temperature exceeds 24 Celsius. In countries such as the Netherlands, where external temperatures rarely exceed 24 Celsius, outside air does not require conditioning for datacenter cooling.
But how does it work? The principle is based on cooling outside air upon entry to the datacenter or module rather than traditional cooling methods, which chill entire rooms. Air enters a datacenter from the outside and is forced through a filter. The surface of this filter is wet and as the air passes through, the water evaporates, cooling the air in the process.
Optimum temperatures within a datacenter are between 24 and 29 degrees Celsius. However, only the servers need to be cooled and so modern datacenters use cool corridors to ensure only the space required is cooled rather than the entire datacenter, thereby increasing operational efficiency.
Designing these modules and cooling facilities in this way means that the electricity used to keep a datacenter cool is minimal.
‘Modular’ building has been the datacenter buzzword for the past year, and describes large, portable, prefabricated datacenters designed for rapid and cost-effective deployment to better meet the flexibility demands and energy efficiencies of the ever cost-conscious client. These first-generation datacenter containers, as they were called (1.0), fuelled the modular approach and were the first steps towards satisfying market expectations. These large 20 or 40 ft. ‘1.0 version’ modules come in a variety of forms and are available with or without servers. The popular pre-installed option, in particular, allows for anywhere up to 2,500 servers. This ‘building block’ approach has typically been adopted by heavy users such as Microsoft and Google, which opt for pre-installed servers. Early modular datacenters without pre-installed servers, however, are more commonly used for rental and emergency solutions. Over time these building blocks grew into a technique that is today known as modular building.
Each building block is fitted with its own independent cooling system consisting of chilling, ventilating or air-conditioning devices where appropriate. Today’s modular datacenters (2.0) follow similar lines to the prior format but vastly scale the principle upwards to full facility level, making it the most efficient datacenter model available that meets market demand.
From a cost perspective, a business’ physical capital expenditure is typically rigid. Office space, for example, is a costly and serious consideration in terms of physical expansion to support business growth. In the digital asset space, modular expansion rationalises capital expenditure, offering a more cost-effective approach parallel to demand. This in turn reduces the required upfront investment to release capital for reinvesting back into the business. Resources can be managed on a demand basis, meaning that the organisation is only using what it requires at that given time.
Popularity of the modular model predominantly revolves around its cost-saving benefits; however, we are helping our customers realise other, more practical implications.
Businesses typically consider convenience as a key factor in the decision-making process, as it is usually intrinsically linked to cost. However, modular datacenters address this factor by reducing construction time from years to a matter of months, enabling expansion to be implemented when needed. The new module can be selected to meet specific criteria such as capacity, efficiency and so on.
Gradual modular expansion in line with growth inherently ensures the datacenter is running the latest and most innovative technology. IT lifecycles within a typical datacenter see server replacement adhering to an average 3 year refresh cycle, with building facilities lasting slightly longer at around 10 years. Entire sections of the datacenter can therefore be, in time, simply replaced with next-generation technologies for minimal disruption to operations, as data can be quickly transferred to another module within the current datacenter rather than to a remote backup site.
Modular expansion enables greater business control over IT and cost management, with expansion implemented in parallel with customer demand. Datacenter providers such as EvoSwitch currently incorporate the modular model into the construction and expansion of their datacenters to ensure they meet customer demand on a dynamic basis. The key driver, as with most providers, is to reduce overheads and upfront investment, savings which EvoSwitch is then able to pass directly on to its customers. Modular datacenter options are enabling providers to be more flexible and therefore better able to meet the demands of their cost-conscious customers in this still-struggling economy.
Wang W., Chinese Academy of Geological Sciences
Wang W., Beijing SHRIMP Laboratory
Wang S.-J., Shandong Institute of Geological Survey
Dong C.-Y., Chinese Academy of Geological Sciences
And 9 more authors.
Geological Bulletin of China | Year: 2010
The Lushan is one of the most famous mountains in Shandong Province; it is located in Zibo City. The crust-derived granites exposed in the Lushan area, with many amphibolite and diorite xenoliths, are composed of coarse-grained alkali-feldspar granite, fine- to medium-grained monzogranite and phyric monzogranite. Zircon SHRIMP U-Pb chronology indicated that the formation ages of the granites in the Lushan area were 2525±13 Ma, 2517±13 Ma and 2508±20 Ma respectively, and zircons from these plutons commonly have inherited cores. The granites in the Lushan area likely originated from crustal melting. Their formation is closely related to the strong tectono-thermal event between 2.50 Ga and 2.55 Ga in the North China Craton, and can thus be regarded as evidence of the cratonization of western Shandong and North China during the Late Neoarchean.
Many of this nation's estimated 129 million cell phone users carry cell phones specifically for emergencies. What few people realize is that cell phone safety features are not the same as those of a landline phone. In most states, if someone calls 911, as 156,000 people do each day, and the call is placed on a cell phone, the 911 operator cannot determine where the call is coming from or the phone number from which the person is calling.
That deficiency has long been recognized, in part because of a number of high profile cases that have caught media attention in the last decade.
In 1993, 18-year-old Jennifer Koon, a college sophomore, was on her way home from work when she drove into a shopping plaza outside Rochester, N.Y. to get cash at an ATM. She was car-jacked, but managed to place a 911 call before her cell phone was knocked to the floor.
"The last 20 minutes of her life were basically recorded by the 911 operator," explained her father, New York State Assemblyman David Koon (D). "The 911 operator knew something was terribly wrong, but couldn't locate where the call was coming from."
In Florida, Karla Gutierrez drove off a Florida highway into a canal. As water poured into her car, she couldn't get out. She called 911 on her cell phone, but she couldn't tell the 911 operator where she was located. She drowned in her car.
Last year, in Chicago, a schoolteacher, Wardella Winchester, was kidnapped and kept in the trunk of a car. She had her cell phone with her and called 911. The police couldn't pinpoint her location. She was found dead in Indiana.
To address this problem, the FCC mandated that the situation be rectified in two phases: phone number identification by 1998 and cell phone-location finding by 2000.
A number of states have been collecting taxes from cell phone users specifically to finance these developments, and more states have recently levied a 911 wireless phone tax. Yet progress in meeting the FCC deadlines, which have now been extended, is not what one might expect.
The reason is simple, according to the wireless industry and others: Money collected to finance E-911 services, as they are sometimes called, has been used for other purposes.
Over the last three fiscal years, the District of Columbia has used more than $9 million in E-911 funds "for unspecified personnel expenses of the police department," according to the Cellular Telecommunications & Internet Association (CTIA).
California redirected $50 million from its E-911 pool in 2001. Texas took $40 million for other state programs in 2001. Virginia took $30 million in 2002. Maryland, North Carolina and South Carolina have all raided millions of dollars from E-911 funds.
Perhaps the most egregious example is New York, which has collected a "911 tax" from wireless customers for 10 years but has not upgraded a single computer to receive location information.
Recently, New York auditors found that the surcharge money has instead been used for a range of state police activities, including purchasing radio communication systems, microwave communications equipment and maintaining radio equipment. Auditors also found the money has been used for dry cleaning, lawn-mowing services and travel expenses.
"Enhanced 911 service is not a luxury," said H. Carl McCall, state comptroller, and who is running against Gov. George Pataki for governor. "It's a necessity, and it's inexcusable that New Yorkers are paying for this service but are not receiving it. The technology exists; the money is there; more than 20 other states have done it. But the Pataki administration simply has not made the commitment to implement E-911 service and protect New Yorkers."
Assemblyman Koon goes even further.
"In New York, the governor has misled all of the constituents in the state, not letting them know that they are not safe on a cell phone, taking the money and using it for something totally different than E-911 cellular," Koon said. "I think that's totally wrong."
According to Koon, it would take about $40 million to upgrade the 911 centers to show cell phone numbers and about $400 million to implement full location finding throughout the state. Since the E-911 surcharge was first initiated in 1991, the state has collected $162 million, all of which has been spent on other things.
On behalf of the Office of the Governor, Andrew Rush, spokesperson for the New York State Division of the Budget, argued that much of the 911 cell phone surcharge has gone to needed upgrades to the state police radio system.
The improvements "allow agencies to respond to 911 calls in a more expeditious and coordinated fashion," he said.
Nevertheless, New York's E-911 taxes are rising. Previously, wireless phone users statewide paid a 70 cent E-911 surcharge each month that brought in more than $43 million in 2001. That figure is expected to more than double in the next couple of years as the state rate now has been increased to $1.20 a month.
"There are almost 19 million people in this state, and almost half of them have cell phones these days," said Koon. "At $1.20 a month - and the governor has now given all the counties the OK to raise it another 30 cents - so that is basically going to be $1.50 a month: Multiply that by 9 million users a month and see what you come up with. That's a lot of money."
The continuing concern is that there is still no assurance that any of this money will go for its specified purposes.
"This proposal, which was accepted after the fact, has no provision that ensures the money will go to build the statewide system," said Theresa Bourgeois, a spokeswoman of the New York Office of the Comptroller.
Taxes Called Excessive
Alleged misuse of E-911 funds is not the only thing that has the CTIA up in arms.
"The wireless industry is one of the highest taxed industries in the country," said CTIA spokesman Travis Larson. "We are up there just behind cigarettes and alcohol, believe it or not. Yet, instead of killing people, like cigarettes and alcohol arguably do, wireless phones help save lives."
The CTIA has launched a public-information campaign to educate people about how much of their phone bill actually consists of state taxes.
"Incredibly, hidden taxes are about the same as the annual price reductions consumers have enjoyed as a result of wireless competition," said Tom Wheeler, president and CEO of the CTIA. "Over the last four years, the cost to consumers of wireless phone use has fallen 32 percent, about 8 percent a year. Yet, nationwide, wireless taxes add nearly 12 percent to the average consumer's monthly bill."
CTIA has issued a "Top 10" list of states with the highest wireless taxes. Heading the list is California at 19.6 percent, Florida at 17.8 percent, Virginia at 17.1 percent and New York at 16.4 percent. Other states on the list are Nebraska (15.1 percent), Texas (14.9 percent), Illinois (13.0 percent), Tennessee (12.6 percent), Mississippi (12.0 percent) and Pennsylvania (11.8 percent).
"Something is wrong when good old American competition decreases prices for consumers, which governments then use as a smokescreen to cover increases in hidden taxes," added Wheeler. "When prices go down, consumers' bills should go down too, without the savings being hijacked by hidden taxes." | <urn:uuid:054fd922-72ed-4132-82ee-690ad3d05348> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Wireless-Taxes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972351 | 1,565 | 2.5625 | 3 |
It’s getting increasingly risky to use online services. You store a lot of your personal data in the cloud, and your credit cards are linked to accounts on retail websites. Hackers would love to get at your data, to empty your bank account, or to access your email account, using it for spam and phishing. And if someone can pretend they are you — steal your identity — they can cause innumerable problems to you and your finances.
We also hear of an increasing number of data breaches, where major websites, stores, or services have entire databases of user names and passwords hacked. These databases are then traded on the hacker underground, allowing anyone willing to pay a few cents per name to access your accounts. And in some cases, they’re passed around for free.
More and more websites and services are using two-step or two-factor authentication to provide an additional layer of security. This security technique verifies your identity when you log into a website by requiring you to both know something and have something. The thing you need to know is a user name and a password or a PIN; the thing you need to have is, these days, a mobile phone, but it could also be a USB dongle or other device that can generate one-time codes.
RELATED: What is Two-Factor Authentication?
Many of the main services you use offer two-factor authentication. These include:
- Apple (iCloud and other services)
- Google (Gmail and other services)
- Microsoft Office 365
- PayPal (but only in certain countries)
- Most major banks
- And many others…
You can find out if services you use offer two-factor authentication on the TwoFactorAuth.org website.
How Two-Factor Authentication Works
When you activate two-factor authentication for a website or a service, you generally provide your mobile phone number. (You can also use an app, but the phone is the most common method of using two-factor authentication.) Most forms of two-factor authentication ask you to sign in with your user name and password, and then enter a code that is sent to you via SMS. This method not only proves that you know something (the user name and password), but also that you have something (the mobile phone), which you have “registered” as a device to receive these codes.
In most cases, once you’ve used two-factor authentication on a device, you won’t be asked to do so again on that device. Some services may only trust your device for 30 days or one year, and others may give you the option of trusting a device permanently. For example, if you have two-factor authentication active for Amazon, and want to buy something from Amazon on a friend’s iPad or a public computer, you’ll need to enter a code that Amazon sends to your mobile phone. But there’s a checkbox in the authorization dialog that lets you decide whether this device should be trusted in the future.
If it’s your computer or phone, you’ll likely want to trust it; but, if not, that device won’t be able to log into your account again without getting a new code.
Some services don’t offer such an option, but will send you an email each time you connect a new device to their service. This is to ensure that you haven’t been hacked; that someone hasn’t gotten your user name, password, and your mobile phone. Here’s what Dropbox sends when you sign in for the first time on a device:
Apple sends you emails when a new device logs into your iCloud account, even if you don’t have two-factor authentication turned on.
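For readers who like to see the moving parts, the fragment below sketches the server side of the SMS-code flow described above: issue a short random code once the password checks out, text it to the registered phone, and accept it exactly once within a small time window. The in-memory store, the five-minute expiry, and the hand-off to an SMS gateway are illustrative assumptions rather than any specific provider's implementation.

```python
import secrets
import time

CODE_TTL_SECONDS = 300     # codes expire after five minutes
pending = {}               # user -> (code, expiry); a real service would persist this securely

def issue_code(user: str) -> str:
    code = f"{secrets.randbelow(10**6):06d}"              # 6-digit random code
    pending[user] = (code, time.time() + CODE_TTL_SECONDS)
    return code                                           # in practice: hand off to an SMS gateway

def verify_code(user: str, submitted: str) -> bool:
    code, expires = pending.get(user, (None, 0.0))
    ok = code is not None and time.time() < expires and secrets.compare_digest(code, submitted)
    pending.pop(user, None)                               # one-time use, whether or not it matched
    return ok
```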
Why You Should Use Two-Factor Authentication
As I explained at the beginning of this article, two-factor authentication helps protect your sensitive, personal data. Take Dropbox, Google, or iCloud, for example. On any of these services, you may store personal files and photos, and on iCloud or Google, you may sync your contacts, calendars, and email. Just think how much information a hacker would have about you if they were able to access your account. (Remember when Jennifer Lawrence’s nude photos leaked?)
Sure, two-factor authentication is a bit of an annoyance, at least the first time you log in on a device. Set up a new iPhone, for example, and it’ll take you a while to get all your accounts up and running. (You have to do this for each new device, since each device sends its unique identifier to a login server, and authentication is required when you log in on a new device.)
Your mobile phone is generally secure, especially if you use an iPhone with Touch ID. If you do, think about setting a long passcode (i.e., six characters, instead of just four), to ensure that, if your phone is lost or stolen, thieves cannot get access to your device. If they do, they can potentially access all your accounts, because they can receive the codes sent for two-factor authentication (this assumes that you have stored passwords and set your web browser to auto-fill them).
One problem occurs if you lose your mobile phone; you can be locked out of certain accounts for a while. Some services let you set a backup phone number, which could be a friend’s or spouse’s phone, or even a landline, if you can receive text-to-speech SMSs. If you can’t do this, buy a cheap phone, and get a pay-as-you-go SIM, so you can receive SMSs when you need them. Naturally, your carrier will be able to port your phone number to a new SIM card, when you replace your lost phone, but this may take several days.
The inconvenience of this is far outweighed by the added security you get when using two-factor authentication.
Setting Up Two-Factor Authentication
For more detailed information about certain services, see these articles that we’ve published on The Mac Security Blog:
- How to Activate Apple’s Two-Step Verification for iCloud
- Protect Your Amazon Account with Two-Step Verification
- How to Manage Gmail and Google Security and Privacy Settings
For other services, check the TwoFactorAuth.org website; it contains links for each service to explain how to turn on two-factor authentication. It may take a few minutes to activate two-factor authentication for each service, but you’ll be much better protected.
In the futuristic sci-fi movie Minority Report, actor Tom Cruise plays John Anderton, a detective in the police "pre-crime division." Anderton's detective work is a little different than today's -- wearing special gloves with sensors that cover his thumb and index fingers, and using hand gestures and voice commands, Anderton knits together disparate pieces of information from several databases by plucking the data from several plasma screens and dropping it into a central screen. After collecting all the pieces of data, Anderton rearranges them with his hands into a coherent file.
The scene is visually exciting and appears to be another special effect dreamt up by the wizards of Hollywood.
Or is it?
Not So Far-Fetched
At Pennsylvania State University, researchers are working on something similar, combining GIS, natural language technology, cognitive engineering and the relatively new field of gestural science to create their version of the computer used by Cruise in Minority Report.
The difference is the technology will help governments better manage crises such as hurricanes, terrorist attacks, disease outbreaks and forest fires.
The high-tech wizardry behind the project is fascinating, but its purpose is basic: to help people work together better while making decisions as a crisis unfolds. Michael McNeese, one of the project investigators and a professor at Penn State's School of Information Sciences and Technology, defines a crisis as a series of ill-defined situations with information flowing in from many different sources.
"The problem in crisis management right now is information overload. It creates cognitive problems for the person trying to interpret the data and make decisions," he said.
The other problem facing crisis teams is their lack of skills in manipulating and analyzing spatial data using a geographic information system.
"Specialists in other domains can't access GIS when they need it the most," said Penn State geography professor Alan MacEachren, director of Penn State's GeoVISTA Center. "They have to rely on GIS experts, which isn't very efficient."
GIS is a critical element to managing data during a crisis, yet few local and state governments have the funds to cross-train crisis team leaders in intricate geospatial mapping techniques, according to MacEachren.
A significant problem is that current GIS technologies aren't designed for the end-user or for use in team efforts, such as crisis management, MacEachren said.
"There's been no study in a systematic way to predict what works and what doesn't in these situations," he added.
The center's mission is to coordinate integrated GIS research with emphasis on geovisualization.
MacEachren and McNeese are part of the GeoCollaborative Crisis Management (GCCM) project, a Penn State University research initiative that the National Science Foundation funded with $400,000 as part of its Digital Government Research Program.
MacEachren said researchers are currently using ArcIMS from ESRI.
"However, our interfaces approach and software is not dependant on a particular GIS," he added. "We are currently in the process of implementing a version using open source GIS software, relying on Geoserver + GeoTools."
McNeese, who spent 23 years in the Air Force designing command and control centers, is an expert in the arcane field of cognitive engineering. MacEachren is a professor of geography at Penn State and an expert in geographic visualization and cartography.
Rounding out the interdisciplinary GCCM team are professors Guoray Cai, an expert in human-geographic collaboration, and Rajeev Sharma, a specialist in the science of gestures for human-computer interaction. Researcher Sven Fuhrmann, who works in the field of geovisualization and cognitive science, is also a member.
Together, the team is spending three years studying how teams of government workers respond to crises collaboratively, and developing technology that will enable them to synthesize geographic and other forms of data without having to become GIS experts.
Helping them are several government partners who are experts in dealing with natural disasters, hazardous spills and other environmental problems, as well as agencies that deal with terrorism.
Federal agencies include the Environmental Protection Agency, the Department of Homeland Security and the Federal Emergency Management Agency. State partners include Pennsylvania's Department of Environmental Protection and the Florida Division of Emergency Management. GCCM's business partner is Advanced Interfaces, a small software firm specializing in multimodal interfaces for GIS.
So far, the team has spent most of its time studying how crisis teams work in the real world and developing a way to permit human-computer interaction with geospatial information to occur in a group environment. The goal is to come up with a way that allows what the researchers call a new computing paradigm based on "multimodal," dialog-enabled interaction with geographic information.
The solution is called Dialogue Assisted Visual Environment for GeoInformation (DAVE_G).
"DAVE_G can recognize gestures in conjunction with dialog and interpret the meaning," McNeese explained.
Combining gestures and dialog is important, according to MacEachren.
"Spatial concepts are vague for computers," he said. "When a user tells a computer a location is 'near,' 'between' or 'north of' something, it has trouble interpreting what that means. But when you add hand gestures, the accuracy improves."
The technology is meant to work with large screen displays. For example, a crisis team tracking a hurricane might say, "Let's look at the population distribution here in the southeast" and gesture at a map on display, circling the region of interest.
The computer can interpret the combination of voice and gesture commands, zoom in the area and act on the next series of queries. The team member might gesture to indicate the possible track of the hurricane and ask the computer to display what areas would be most affected by flooding if the storm tracks north or south of the current location.
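The project's publications describe the fusion of speech and gesture in far more detail than a news feature can; purely to illustrate the idea, the toy sketch below pairs a transcribed utterance with the map region a user has circled and turns the two into a single query. It is an invented example, not DAVE_G's actual code.

```python
def fuse(utterance: str, gesture_region: dict) -> dict:
    """Combine a spoken request with a sketched map region into one query."""
    query = {"region": gesture_region}        # the gesture resolves vague words like "here"
    text = utterance.lower()
    if "population" in text:
        query["layer"] = "population_density"
    if "flood" in text:
        query["layer"] = "flood_risk"
    if "look at" in text or "zoom" in text:
        query["action"] = "zoom_to_region"
    return query

print(fuse("Let's look at the population distribution here in the southeast",
           {"shape": "circle", "center": (31.5, -83.0), "radius_km": 250}))
```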
On a smaller scale, GCCM is also developing software to let field workers manipulate GIS software on tablet PCs and PDAs using styluses.
More Work To Be Done
In terms of practical use, GCCM is testing the capabilities of DAVE_G for hurricane disasters and West Nile outbreaks. The team is also working on a demonstration project for the Port Authority of New York and New Jersey that will aid the agency with possible oil and hazardous material spills.
With the research project just halfway completed, it's too early to judge the results. But if things go according to plan, GCCM and its multimodal software for human-computer interaction could change the way groups of government teams work together and make split-second decisions during a crisis.
"This technology will make the information that's essential for crisis management more accessible to the people who need it," said MacEachren. "Local police, fire and emergency management personnel are the ones who need to know where things are and make decisions based on that information. But few, if any, of these people are highly skilled in analyzing geographic information with software. This technology will make it possible for these people to use the information without expensive training." | <urn:uuid:ddf5165f-edbc-4b3c-ab4f-dc2b0b6d4e25> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Sign-Language.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938374 | 1,468 | 2.59375 | 3 |
According to the Washington Post, citing documents leaked by Edward Snowden, the NSA is fully capable of capturing GSM (Global System for Mobile communications) traffic that's encrypted with the A5/1 algorithm. In addition, the agency's mobile traffic analysis is used to infer relationships by harnessing metadata collected from cell towers and data delivered to mobile advertising networks.
Last week, the Washington Post reported that the NSA is collecting cellular location in bulk, to the order of 5 billion records per day globally. On Tuesday, the paper reported that in addition to bulk collections, the intelligence agency uses a collection of data sorting tools to separate the bits of collected metadata and turn them into actionable information. The process, as described by the report, outlines the exact fears that privacy watchdogs and government critics had when the NSA's metadata collection programs were exposed earlier this summer.
In addition to using cellular tower data to pinpoint a person's location, the NSA also uses WiFi and GPS information to locate subjects, as those signals "reveal their location in a variety of ways including leaked location information from their IP address, mobile apps and built-in location based services," the Post explained.
Moreover, as part of a project called HAPPYFOOT, the NSA also intercepts traffic generated by mobile applications that relay location information to advertising networks. All of this intercepted data, and collected metadata, is then sorted and used to infer relationships between people or identify persons of interest.
The NSA's use of advertising networks is interesting, because the FTC just reached a settlement with a flashlight application developer for collecting and transmitting consumer information, including location data, to advertising networks without permission. This settlement was important to privacy advocates, because they've long warned the public that mobile advertising platforms were privacy and legal risk.
For example, Privacy Rights Clearinghouse warned against the privacy risks of smartphones as far back as 2005. At the time, they noted that the data collected by smartphones and transmitted to carriers (metadata), could not only pose privacy risks, but pose a conflict within federal privacy laws, which rarely keep pace with technology.
In 2011, software security vendor Veracode examined the Pandora mobile app, and discovered no less than five advertising libraries being used by the application. Pandora later removed the libraries, but one of them was Google's AdMob, a company purchased by the search giant in 2010.
Among the various personal bits of information being collected by AdMob, Veracode discovered that it was also attempting to gather COARSE and FINE location data. In addition to WiFi and GPS, the NSA also collects metadata related to COARSE and FINE to locate people using the HAPPYFOOT program.
Veracode's Tyler Shields explained at the time:
"So what does this mean to the end user? It means your personal information is being transmitted to advertising agencies in mass quantities..."
"In isolation some of this data is uninteresting, but when compiled into a single unifying picture, it can provide significant insight into a person's life...When all that is placed into a single basket, it’s pretty easy to determine who someone is, what they do for a living, who they associate with, and any number of other traits about them."
Again, Shields was speaking about the same type of data collection and relationship inference being conducted by the NSA, years before proof of such collection and relationship mapping existed. Since 2011, the amount of data transmitted online and collected by advertisers and data brokers has only grown.
So given the latest information on the topic, it seems as if the NSA took advantage of the situation. If the advertisers were collecting the data anyway, the NSA simply needed to use the existing legal framework on interception and collection to gain access to it.
When it comes to intercepts, the Washington Post story also includes a document that shows the NSA can collect A5/1 GSM traffic that is unencrypted, encrypted when the crypto variable is known, and when the crypto variable is unknown. Thus, if the GSM traffic is using A5/1, the NSA can bypass the encryption completely and process it with no problem.
The notion that the NSA can bypass A5/1 isn't a surprise. In fact, plenty of lawful interception vendors sell hardware that can decrypt A5/1 GSM traffic. The security offered by A5/1 has been exploitable for years, and academic knowledge that it was vulnerable has existed for decades. But between 2003 and 2009, researchers started taking a hard look at it and working on ways to develop a more practical attack. So, assuming they're not using existing vendor technology, the NSA has perfected the process.
To replace A5/1 and address security concerns, A5/3 was developed, offering a stronger 128-bit encryption. But in 2010, researchers showed that it too could be broken. Still, A5/3 is a better alternative to A5/1, and while adoption of the new algorithm is spreading, there are many carriers still using A5/1 in Europe and Asia. Leaving those using the older system exposed to bulk collections of metadata.
On Tuesday, as news of the NSA's mobile analysis operations broke, Deutsche Telekom announced that they were the first operator in Germany to move away from A5/1 and adopt A5/3. However, due to the number of older phones in use that do not support A5/3, the mobile operator said that calls to those devices will still work, but that they would revert to the older standard.
The Washington Post's story serves as a strong reminder that we live in a data-driven world, and that true privacy is a rare thing. Advertisers buy and sell the data that creates our online existence, and sometimes this information is collected without the average consumer's knowledge or informed consent. However, even for those of us who go to great lengths to protect our privacy, the Post's report shows that the odds are good that something somewhere is collected, and the NSA has a copy.
Cell phones can help weed out counterfeit drugs
With SMS, a cell phone client, a central server, GPS and a set of bar codes, pharmaceutical transactions can be tracked to assure consumers in third-world countries that the drugs they buy aren't counterfeits or expired. As shipments change hands from manufacturer to retailer, the server uses positioning data, bar-code images and SMS messages to authenticate that the shipment and the parties involved are registered and that the goods changing hands are verified. If counterfeits slip in, the system will indicate where, say researchers at New York University.
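The article gives no implementation details, but the shape of the central server's check is easy to imagine. The sketch below is entirely hypothetical: invented identifiers, an invented registry, and a plain dictionary standing in for the server's database.

```python
registered_parties = {"MFG-01", "DIST-07", "PHARMACY-42"}
shipments = {
    "SHIP-9001": {"expiry": "2010-12-31", "last_holder": "MFG-01"},
}

def verify_handoff(shipment_id: str, sender: str, receiver: str, gps: tuple, today: str) -> str:
    """Check one SMS-reported handoff of a bar-coded shipment."""
    record = shipments.get(shipment_id)
    if record is None:
        return "unknown bar code: possible counterfeit"
    if sender not in registered_parties or receiver not in registered_parties:
        return "unregistered party: flag this transaction"
    if today > record["expiry"]:                 # ISO date strings compare correctly
        return "expired stock: reject"
    record["last_holder"] = receiver             # the GPS fix records where the handoff happened
    record["last_location"] = gps
    return "verified"

print(verify_handoff("SHIP-9001", "MFG-01", "DIST-07", (-6.80, 39.28), "2009-05-01"))
```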
Organizations Using the Internet
Afghanistan (not Taliban, listed elsewhere)
Modified 24 Apr 2009
Afghanistan — The USSR invaded Afghanistan 27 December 1979, and were opposed by various mujahadin factions. For the following ten years, the U.S. CIA funded and supplied weapons to the mujahadin through Pakistan's ISI (Inter-Services Intelligence). With the Soviet departure in 1989, U.S. interest in Afghanistan ceased. ISI had distributed the weapons through Gulbuddin Hekmatyar, who was rabidly anti-western. He seized the position of prime minister in the fighting after the Soviet departure, preventing any coalition government. The Taleban, backed by Pakistan's ISI, eventually became the major military force in Afghanistan, seizing Kabul and the majority of the territory by the mid-1990's. Between the Taliban's rise to power and the war in late 2001, most countries and the U.N. recognized the exiled Jamiat-e-Islami Afghanistan as the legitimate Afghan government. It was founded in the 1960's by Burhanuddin Rabbani, with Ahmad Shah Massoud and Gulbuddin Hekmatyar, and was currently ruled by Rabbani through late 2001. A very few countries (Saudi Arabia, Pakistan, and the United Arab Emirates) instead recognized the Taleban (in Sep 2001, after the attacks on the U.S., Saudi Arabia and UAE dropped their recognition, and Pakistan dropped its recognition by late 2001).
The Jamiat-e-Islami formed the most important part of the Northern Alliance, which held the northern part of the country before the U.S.-supported campaign in late 2001. Ahmad Shah Massoud was the Northern Alliance's military leader, holding the mountainous country of the Hindu Kush in the north-east of the country, until he was assassinated by the Taleban on 9 September 2001, two days before the World Trade Center in New York was attacked.
The anti-Taleban Afghan coalition was known as the United National Islamic Front for the Salvation of Afghanistan or UNIFSA; it was made up of 13 parties opposed to the Taliban including Harakat-i-Islami Afghanistan (Islamic Movement of Afghanistan), Hizb-i-Islami (Islamic Party), Hizb-i-Wahdat-i-Islami (Islamic Unity Party), Jumaat-i-Islami Afghanistan (Islamic Afghan Society), Jumbish-i-Milli (National Front), Mahaz-i-Milli-i-Islami (National Islamic Front).
Afghan refugees outside Afghanistan had organized politically, including: Mellat (Social Democratic Party), Coordination Council for National Unity and Understanding in Afghanistan, (CUNUA, based in Peshawar, Pakistan), tribal elders representing the traditional Pashtun leadership, and the Writers' Union of Free Afghanistan (WUFA).
There is a confusing variety of movements. These pages might help you sort through them:
Pro-Government (Jamiat-e-Islami Afghanistan and allies)
- Jamiat-e-Islami Afghanistan — the ethnically Tajik government recognized during Taleban rule by most nations. Prominent figures included the official head of the Northern Alliance, ousted President Burhanuddin Rabbani; military leader General Mohammed Fahim Khan; and Ismael Khan in west-central Ghor and Heart provinces.
- Junbish-i-Milli-yi Islami — ethnic Uzbek anti-Taleban forces led by General Abdul Rashid Dostum.
- Payam-e-Mujahid — The Northern Alliance, running a radio station Radio Voice of Mojahed over the Internet (and presumably over the air) http://www.payamemujahid.com
- Taleban (Anti-Government) — Listed elsewhere
- Hezb-e-Islami — anti-Taliban political party. Originally founded by Gulbuddin Hekmatyar, who first was in the Jamiat-e-Islami. In 1975, Hekmatyar and Massud tried to take power. The plot failed and Hekmatyar left the movement to found his own. In 1998 or so, the Hezb-e-Islami was defeated by the Taliban. Some joined Massud, some joined the Taliban, and Hekmatyar left to Iran. As of 18 Sep 2001 he was saying that he would join the Taleban if U.S. attacked bin Laden's bases in Afghanistan (BBC News). http://www.hezb-e-islami.org/
- Afghan Voice — Association for Peace and Democracy for Afghanistan — http://www.afghanistanvoice.org/
- Afghanistan Peace Organization — http://www.afghanistan.org/
- Shuhada — NGO working to help Afghans — http://www.shuhada.org/
- Revolutionary Association of the Women of Afghanistan — http://www.rawa.org/
- Hazara — Hazaristan is a region inhabited by the Hazaras people, who have been slaughtered by the Taliban. The anti-Taliban alliance includes ethnic Hazara shi'a groups of the Hizb-i Wahdat led by Karim Khalili and Hahaqiq.
The lecture has long been the standard for transmitting knowledge to classroom learners, but these days a lot of students are participating in distance learning courses. As a result, teachers are starting to look to webcasting to upgrade their existing distance learning solutions. In a classroom, a teacher can strive to promote interactivity by allowing students to ask questions, work together on projects, play learning games, and more. Online, all of these features can be brought into play as well with webcasting.
If you’re not yet familiar with webcasting, this is a type of technology which is similar to webinar software, but with more powerful capabilities. Anyone with a high-speed internet connection can participate in a webcasting presentation. Webcasting is fairly simple to set up, and is also quite affordable. Some instructors and school boards shy away from webcasting because they imagine it to be costly, but scalable solutions like those offered by INXPO will be affordable to K-12 schools and higher learning institutions.
How is webcasting currently being used in education?
- Distance learning institutions like online universities and K-12 schools are using webcasting to teach all of their classes to students
- Brick-and-mortar colleges are using webcasting for the distance learning opportunities they provide.
- Professors and other faculty members are taking advantage of the capabilities of webcasting in order to meet with other professionals, cutting back on travel expenses for learning seminars and conferences.
- Schools are also using this software to train their own staff members in the latest educational techniques.
INXPO’s communications software is used in the educational sector by instructors and also by corporate HR departments for training employees. It is specifically designed to help you to transmit knowledge using live video, crystal clear audio, chat and screen sharing. With this software, you can lecture students but also provide a platform for interactivity. Q&A keeps students involved, and you can use instant polls and surveys to check to see whether students are comprehending information. You also can automatically log all activity so you can review your sessions and figure out what you can improve and who might need a little extra help.
Before you start using webcasting for your distance learning classes, a couple of quick tips. Don’t forget that interactivity is key, whether in person or online. Most teachers have found that the best approach is to regularly interact with students and allow students to influence the direction of the discussion, just as you likely would in a classroom environment. If you wait until the end of a long lecture to use the live chat, polls, surveys and other interactivity features, comprehension will suffer. Finally, never lose sight of the fact that technology by itself does not transmit knowledge, but you can leverage the power of technology to share knowledge more effectively. Ultimately the instructors who achieve the best results from webcasting will be those who are able to unite powerful technology with effectual teaching techniques.
To learn more about INXPO’s online communications software please email us at firstname.lastname@example.org or call 312-962-3708.
Post contributed by Adam Polaszewski
Wikipedia is one of the most highly visited sites on the Web with over 94 million unique visitors per month. A recently discovered vulnerability could have put those users at risk of malware exploits had it not been discovered.
Check Point researchers found a critical vulnerability in MediaWiki (versions 1.8 and up)—an open source Web platform used to create and maintain ‘wiki’ websites such as Wikipedia.org. If exploited, the flaw would allow an attacker to remotely execute malicious code. A successful attack could enable the attacker to gain complete control of the vulnerable Web server, and possibly compromise visitors by hosting malware on the site.
“It only takes a single vulnerability on a widely adopted platform for a hacker to infiltrate and wreak widespread damage,” said Dorit Dor, vice president of products at Check Point Software Technologies. “We’re pleased that the MediaWiki platform is now protected against attacks on this vulnerability, which would have posed great security risk for millions of daily ‘wiki’ site users.”
Thanks in part to the efforts of Check Point researchers, this crisis has been averted. If attackers had gained control of Wikipedia.org and injected malware code to infect site visitors the results could have been catastrophic and widespread.
This issue also illustrates why it’s important to be aware of discovered vulnerabilities that affect the systems and software you rely on, and why it’s crucial to implement patches and updates in a timely manner when they’re available.
For more details about this specific threat, check out this Threat Cloud Central blog post.
If you have a site that uses MediaWiki 1.8 or later, and you have not applied the latest update, you should do so as soon as possible to ensure your Web server is not vulnerable. Now that news of the flaw is public, and the patch exists for attackers to reverse-engineer, the threat is actually greater and the clock is ticking.
Tanzania has a large and growing population, is strategically located, is abundant in natural resources and is politically stable. The country has a population of around 45 million and is the largest in East Africa. It is the main trade gateway for the five surrounding landlocked countries: the DRC, Rwanda, Burundi, Uganda and Zambia.
The country has bountiful natural resources: minerals in the form of gold, diamonds, copper and coal, and recently discovered large deposits of natural gas along its coastline. The country has been a democracy since its birth in 1961 and has never suffered a civil war; its governments have been elected democratically, have all been pro-business, and have liberalized and opened up the Tanzanian economy.
Consequently, Tanzania has had a growth rate of around 7% for five years and has been placed by the World Bank in what it dubs the "7% club", a group of countries forecast to achieve 7% or more real GDP growth over the next decade.
TCP: Transmission Control Protocol
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss TCP.
Transmission Control Protocol (TCP) is the transport layer protocol in the TCP/IP protocol suite, which provides a reliable stream delivery and virtual connection service to applications through the use of sequenced acknowledgment with retransmission of packets when necessary. Along with the Internet Protocol (IP), TCP represents the heart of the Internet protocols.
Since many network applications may be running on the same machine, computers need something to make sure the correct software application on the destination computer gets the data packets from the source machine, and some way to make sure replies get routed to the correct application on the source computer. This is accomplished through the use of TCP “port numbers”. The combination of the IP address of a network station and its port number is known as a socket or an “endpoint”. TCP establishes connections or virtual circuits between two “endpoints” for reliable communications. Details of TCP port numbers can be found in the TCP/UDP Port Number document and in the references.
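To make the idea of an endpoint concrete, here is a small sketch using Python's standard socket module (the host name and port are only illustrative, and it assumes outbound network access); it opens a TCP connection and prints both endpoints of the resulting virtual circuit:

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("example.com", 80))            # remote endpoint: the server's IP address and port 80
local_ip, local_port = client.getsockname()    # local endpoint: our IP address and an OS-chosen port
remote_ip, remote_port = client.getpeername()
print("local endpoint :", local_ip, local_port)
print("remote endpoint:", remote_ip, remote_port)
client.close()

The same server port (80) can serve many clients at once precisely because each connection is identified by the full pair of endpoints, not by the port alone.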
Among the services TCP provides are stream data transfer, reliability, efficient flow control, full-duplex operation, and multiplexing.
With stream data transfer, TCP delivers an unstructured stream of bytes identified by sequence numbers. This service benefits applications because the application does not have to chop data into blocks before handing it off to TCP. TCP groups bytes into segments and passes them to IP for delivery.
TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery. It does this by sequencing bytes with a forwarding acknowledgment number that indicates to the destination the next byte the source expects to receive. Bytes not acknowledged within a specified time period are retransmitted. The reliability mechanism of TCP allows devices to deal with lost, delayed, duplicate, or misread packets. A timeout mechanism allows devices to detect lost packets and request retransmission.
TCP offers efficient flow control – when sending acknowledgments back to the source, the receiving TCP process indicates the highest sequence number it can receive without overflowing its internal buffers.
Full-duplex operation: TCP processes can both send and receive packets at the same time.
Multiplexing in TCP: numerous simultaneous upper-layer conversations can be multiplexed over a single connection.
Protocol Structure – TCP Transmission Control Protocol
- Source port — Identifies points at which upper-layer source process receives TCP services.
- Destination port — Identifies points at which upper-layer Destination process receives TCP services.
- Sequence number — Usually specifies the number assigned to the first byte of data in the current message. In the connection-establishment phase, this field also can be used to identify an initial sequence number to be used in an upcoming transmission.
- Acknowledgment number – It contains the sequence number of the next byte of data the sender of the packet expects to receive. Once a connection is established, this value is sent.
- Data offset — 4 bits. The number of 32-bit words in the TCP header, which indicates where the data begins.
- Reserved — 6 bits. Reserved for future use. Must be zero.
- Control bits (Flags) — 6 bits. It carries a variety of control information.
- Window — 16 bits. It specifies the size of the sender's receive window, that is, the buffer space available in octets for incoming data.
- Checksum — 16 bits. It allows the receiver to detect whether the segment was damaged in transit.
- Urgent Pointer — 16 bits. It points to the first urgent data byte in the packet.
- Options + Padding – Specifies various TCP options. There are two possible formats for an option: a single octet of option type; or an octet of option type, an octet of option length, and the actual option data octets.
- Data – contains upper-layer information.
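To tie the field list above together, here is a short sketch using Python's standard struct module that builds and then unpacks the fixed 20-byte part of a TCP header; the field values are fabricated purely for illustration:

import struct

# Fabricated header: source port 80, destination port 51000, sequence 1, acknowledgment 0,
# data offset 5 (no options), SYN flag set, window 65535, checksum 0, urgent pointer 0.
raw = struct.pack("!HHIIBBHHH", 80, 51000, 1, 0, 5 << 4, 0x02, 65535, 0, 0)

(src_port, dst_port, seq, ack,
 offset_byte, flags, window, checksum, urgent) = struct.unpack("!HHIIBBHHH", raw)

header_words = offset_byte >> 4      # data offset: number of 32-bit words in the header
header_bytes = header_words * 4      # 5 words = 20 bytes, so the data begins at byte 20
syn_set = bool(flags & 0x02)
print(src_port, dst_port, seq, ack, header_bytes, syn_set, window)

Note that for simplicity this treats the reserved bits and the control flags as two whole octets, and it does not compute a real checksum (which covers a pseudo-header as well); it is only meant to show how the fields are laid out.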
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. I am sure you will quickly find out that hands-on real world experience is the best way to cement the CCNA concepts in your head to help you pass your CCNA exam! | <urn:uuid:e5de1b1c-dc62-4667-84c7-7304e95f497b> | CC-MAIN-2017-04 | https://www.certificationkits.com/tcp-ip-overview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.871903 | 898 | 4.0625 | 4 |
We will cover some basics about forwarding UDP broadcast traffic here. If you were wondering what forwarding UDP broadcast traffic actually is, I will try to explain it in a few words.
Suppose you have more than one broadcast domain in your local network, let’s say three VLANs. In normal networking, a broadcast initiated by a host inside one VLAN will reach all hosts inside that VLAN, but it will not get across to other VLANs. Typically the broadcast domain border is a router or a Layer 3 switch’s VLAN interface. Although this is fine for most broadcast traffic, there needs to be a way to forward some kinds of broadcast traffic across that border. Why? Here’s a simple example. If you use DHCP, and you are, you will probably have hosts in different VLANs, and all of them need to get an IP address from DHCP. If forwarding of UDP broadcast traffic didn’t exist, you would need one DHCP server on every VLAN. Remember that DHCP uses broadcast traffic in some of its steps.
Simple DHCP address leasing:
A host that connects to the network will first send a broadcast DHCP discover message in order to find where the server is, or whether a server actually exists. After the DHCP server replies with a unicast DHCP offer, the host will once again use broadcast to send a DHCP request to the server. The server will then acknowledge the IP address lease with a unicast DHCP ACK message, and that’s it.
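To see what the router actually has to stop or forward, here is a deliberately simplified Python sketch (standard socket module only; the payload is a placeholder, not a real DHCP message) that sends a UDP datagram to the limited broadcast address on the DHCP server port. A datagram like this stays inside the local broadcast domain unless the router is told to forward it:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)   # allow sending to a broadcast address
# A real DHCP client sources its traffic from UDP port 68, which usually needs
# administrative rights to bind, so this sketch lets the OS pick the source port.
sock.sendto(b"pretend-this-is-a-DHCPDISCOVER", ("255.255.255.255", 67))
sock.close()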
Forwarding UDP broadcast traffic
If the DHCP server is on one VLAN and the host is on another, the DHCP server will not get the first message from the host, because that message is broadcast and it will be stopped at the broadcast domain border (router, VLAN interface). By implementing UDP broadcast forwarding, it becomes possible for the DHCP discover and DHCP request to reach the server. Normally that implementation is done at the broadcast border, on the interfaces that connect the two VLANs (two broadcast domains).
Cisco IOS provides two ways to implement Forwarding UDP broadcast traffic:
- IP helper addressing
- UDP flooding
The first method, IP helper addressing, is used in production networks much more frequently than the second. It is usually used to convert broadcast traffic into unicast, such as forwarding DHCP requests from all segments to a centralized DHCP server.
I can point out the fact that using IP helper addressing is only possible if you have a unicast destination on the other segment (the IP address of the DHCP server). Basically, IP helper addressing needs to know which unicast address to send the received broadcast to. And the config is more than simple; please note that interface fa 0/0 connects to the VLAN with IP addressing 192.168.1.0/24 and that the DHCP server is on the other VLAN with IP address 172.16.1.55:
Rack1R5#conf t
R5(config)#int fa 0/0
R5(config-if)#ip add 192.168.1.1 255.255.255.0
R5(config-if)#no sh
R5(config-if)#ip helper-address 172.16.1.55
Let’s go further. The other method is UDP flooding, and it is used when you cannot convert the broadcast traffic into unicast traffic, simply because you do not know the unicast address. Sometimes it is not about not knowing the unicast IP address on the other side, but about the need to keep the traffic pointed to a broadcast address even when it is forwarded to the other side. If you must make sure that it remains broadcast traffic when forwarded along the specified path, your solution is UDP flooding.
Cisco IOS provides a technique for forwarding UDP broadcasts in this way. This feature enables flooding of UDP broadcast packets from one network segment (VLAN) to another. When the feature is enabled, each router forwards (rebroadcasts) UDP broadcast packets to the next segment. The packets are not actually routed; Layer 2-type forwarding is used. You use the ip forward-protocol spanning-tree command to turn this on.
Although the forwarding is very similar to spanning-tree flooding it does happen on Layer 3. Routers use spanning-tree topology only for loop control and UDP broadcast packets are forwarded out of nonblocking spanning-tree interfaces. The spanning tree must be enabled on all interfaces participating in the flooding tree.
Configuration of spanning tree requires that you configure a bridge-group command on all participating interfaces and that you assure that spanning-tree BPDUs are forwarded along the specified path. When you configure the ip forward-protocol spanning-tree command with the bridge-group command you do not need to enable concurrent routing and bridging (CRB) or integrated routing and bridging (IRB). This configuration will not bridge all IP traffic, but only the UDP broadcast traffic specified with the ip forward-protocol command. It will, however, bridge all nonrouted (non-IP) traffic; if you have that kind of traffic, filtering techniques will be needed to avoid the bridging of non-IP packets.
Only UDP packets are forwarded in this fashion and the following conditions must be met for a Packet to be considered for flooding:
- The packet must be MAC-level broadcast
- The packet must be IP-level broadcast
- The packet must be a UDP packet with a port matching what is specified by ip forward-protocol command.
- TTL of those packets must be at least 2
A flooded UDP datagram is given the destination address that is specified by the ip broadcast-address interface configuration command on the output interface. The destination address can be set to any desired address. Therefore the destination address can change as the datagram propagates through the network. The source address is never changed. The TTL value decreases. After a decision has been made to send the datagram out on an interface (and the destination address has possibly changed) the datagram is handed to the normal IP output routines, and is therefore subject to access lists if they are present on the output interface.
The spanning-tree based flooding mechanism forwards packets whose destination address is 255.255.255.255 or 0.0.0.0, and also subnet-local broadcast packets (with a host part of all ones or all zeros) when ip forward-protocol spanning-tree any-local-broadcast is configured.
Example on Cisco device
This one enables spanning-tree forwarding:
R5(config)#ip forward-protocol spanning-tree
Here we configure a bridge-group and assign interfaces to it:
R5(config)#interface FastEthernet0/1
R5(config-if)#bridge-group 1
R5(config)#interface Virtual-Template1
R5(config-if)#bridge-group 1
R5(config)#bridge 1 protocol vlan-bridge
In this example I configured the vlan-bridge spanning-tree protocol in the bridge protocol command. You can use IEEE STP, which is most commonly used for fallback bridging, but in my case the example is on a Catalyst 3550 Series Switch, which does not support IEEE STP for this. If you have a 3560, go for IEEE STP. No actual bridging will happen in this example; you just need to build an STP topology to provide loop-free flooding of UDP datagrams.
I need to mention that the ip forward-protocol udp global command must be configured. This applies to both broadcast forwarding methods, IP helper addressing and UDP flooding too. In other words, before Cisco IOS Software can use IP helper or UDP flooding for a broadcast, it needs to be told what to forward.
Look at what ports you need to forward: Cisco IOS Software has a number of ports enabled for forwarding by default:
- TFTP port 69
- DNS port 53
- Time service port 37
- NetBIOS name server port 137
- NetBIOS datagram server port 138
- BOOTP client and server packets ports 67 and 68
- TACACS service port 49
- IEN-116 name service port 42
Note that on some Cisco IOS versions defaults differ from those just listed even when documentation does not indicate that. The best thing to do is to manually configure all UDP ports that you need to be forwarded. It is done like this:
R5(config)#ip forward-protocol udp tftp
R5(config)#ip forward-protocol udp domain
R5(config)#ip forward-protocol udp bootps
R5(config)#ip forward-protocol udp time
R5(config)#ip forward-protocol udp bootpc
If you asked 5 people for definitions of “smart” or “success,” you would likely receive responses with similarities and differences. Webster’s definition of “smart” includes “showing intelligence,” “mentally alert,” and/or “witty, clever.” Likewise, Webster’s definition of “success” includes “having the desired result” or “having gotten or achieved wealth, respect, or fame.” Using these definitions, let’s dive deeper into the smarter networks = successful students question.
Smart networks that “show intelligence,” or are “mentally alert,” are those with insight to what is taking place on the network. They can adapt and be predictive. If they are “witty/clever,” then they are intuitive and can integrate network tools and resources to meet the needs of individual users. The ability to support/block applications, deliver content at various bandwidths, use Wi-Fi, leverage existing low-bandwidth cabling, conduct Wi-Fi tests prior to a new use case, securely deploy, support IoT-enabled environments, and be able to do all of this in a BYOD culture, is surely evidence of a smart network. But what is a smart network’s connection to successful students?
You might agree that “success” is relative to the individual. A “desired result” or “achieved wealth” may mean different things to different people. It is a personal measurement. According to recent studies, just as success is personal to each individual, so too must be learning. Say what?! How can Higher Education possibly individualize learning? For a variety of valid reasons, teachers do what they are comfortable with (lecture, for example), what they are confident in (possibly low-tech), and within the boundaries of 24 hours in a day (they can’t find time to individualize teaching). Learners must then adapt to each individual instructor. Although adaptation is a great life lesson, research indicates students are most successful when they have personalized learning experiences. Can learning management systems track student’s progress and guide them towards “individual” learning paths? Can assessments baseline a student’s knowledge and suggest a course of learning? Can data sources give awareness to what is best for a student? Sure! But setting a digital footprint and map for personalized learning is only part of the picture…the network must do its job.
Smarter Networks = Successful Students?
So, what does a network have to do with personalized learning and student success? Whether an institution chooses to adopt personalized learning from the top down, or through a grass roots effort, the network MUST support it, and that means being intelligent, mentally alert, and clever! Much like a foundation of a house, what sits on top is only as good as what is beneath. So, do smarter networks mean successful students? Let’s keep it real. There is no guarantee that a smarter network will produce 100% successful students. But why not ensure the network supports learners in reaching their full potential, no matter the technology being used. Isn’t reaching full potential an indication of success?
To learn more, read the latest Educause review on personalized learning and the new Aruba solutions that support “smart” networks.
Interior Gateway Routing Protocol (IGRP), a Cisco proprietary distance vector routing protocol, is similar to the Routing Information Protocol (RIP) in behavior. However, IGRP uses a composite metric that favors high bandwidth paths over low bandwidth links. IGRP provides equal and unequal cost load balancing to utilize multiple paths available to reach a destination. Enhanced IGRP (EIGRP) is also a Cisco proprietary enhanced distance vector routing protocol. EIGRP relies on the Diffused Update Algorithm (DUAL) to calculate the shortest path to a destination. EIGRP is similar to IGRP in calculating the metric, but has many improvements. These include fast convergence, incremental updates and support for multiple network layer protocols. The metric for an EIGRP route is 256 times higher than the metric of an IGRP route for the same destination. When two domains using these protocols need to communicate with each other, redistribution between EIGRP and IGRP is necessary. These are the two issues that occur with EIGRP and IGRP redistribution belonging to the same autonomous system:
1. Internal EIGRP routes, which (by default) have a lower administrative distance value of 90, are always preferred over external EIGRP or IGRP routes, which (by default) have values of 170 and 100.
2. External EIGRP route metrics are compared to scaled IGRP metrics. The route with the lowest metric value is used. If the metric values are the same, the external EIGRP route is preferred. The administrative distance is ignored in this comparison, even though external EIGRP routes have a higher administrative distance value than IGRP routes.
Both IGRP and EIGRP use an Autonomous System (AS) number, and only routers using the same AS number can exchange routing information using that protocol. When routing information is propagated between IGRP and EIGRP, redistribution has to be manually configured when IGRP and EIGRP use different AS numbers. However, redistribution occurs automatically when both IGRP and EIGRP use the same AS number. Because IGRP and EIGRP use compatible metrics, the metrics are simply scaled up or down by 256 when routes are redistributed from one to the other. For more information refer to: Redistribution Between EIGRP and IGRP in the Same Autonomous System
First, I hope I understand you correctly. I think you want to know something about using pointers in COBOL. Pointers are nothing else than addresses of areas. So an area can be filled with data from a called program without transferring the whole area from the calling program to the called program. You only transfer the address of the area or the table via a pointer variable.
To start with, what is your understanding of linked-lists? Do you understand the structure? And are you interested in implementing a forward linked list or a bi-directional (forward and backward) linked list? | <urn:uuid:387eab3f-54c2-4fbc-a850-c1f1d94d5781> | CC-MAIN-2017-04 | http://ibmmainframes.com/about14222.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.876795 | 143 | 2.6875 | 3 |
Android is a Linux-based open source operating system for mobile devices. With this, Google hopes to challenge other open Linux-based devices, such as the Nokia N810.
Of course, we at F-Secure have to think about what effect, if any, open platforms might have on the future of mobile malware. Will an open standard for mobile phones make mobile malware more or less of a problem? Might this accelerate or decelerate the evolution of mobile malware?
The key issue here is whether Android will go for totally open systems or whether they will adopt a system for signing approved applications (such as Symbian).
If unsigned and unknown applications written by anyone have full access to phone features, we smell trouble.
Quoting Android's homepage:
"... an application could call upon any of the phone's core functionality such as making calls, sending text messages, or using the camera ..."
Of course, we won't know the full specifications of Android phones until they become available in late 2008.
And it's pretty guaranteed that no criminal attacks will take place until the installed base for Android has become large enough to interest the bad guys financially. This might never happen.
P.S. The installed base is already there for the iPhone. iPhone malware could easily become reality in the near future. | <urn:uuid:5da77b83-d817-48de-9c80-b804b614d2ed> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00001311.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943432 | 263 | 2.5625 | 3 |
A fiber optic attenuator, also called an optical attenuator, simulates the optical loss that would be caused by a long length of fiber. Typically, this device is used for receiver testing. While an optical attenuator can simulate the optical loss of a long length of fiber, it can’t accurately simulate the dispersion that such a fiber would cause.
Put simply, too much light can overload a fiber optic receiver and degrade the bit error ratio (BER). This can happen when the transmitter delivers too much power, for example when the transmitter is simply too near to the receiver. In order to achieve the best BER, the light power should be reduced, and fiber optic attenuators fit that requirement perfectly.
How Does a Fiber Attenuator Work?
Attenuators are like your sunglasses, which absorb the extra light energy and protect your eyes from being dazzled. Attenuators normally have a working wavelength range in which they absorb the light energy equally.
An essential characteristic of a good fiber attenuator is that it should not reflect the light; instead, it should absorb the extra light without being damaged. Because the light power used in fiber optic communications is fairly low, it usually can be absorbed without noticeable damage to the attenuator itself.
Types of Optical Attenuators
Two types of fiber optic attenuators exist: fixed value attenuators and variable optical attenuators.
Fixed value attenuators have fixed values that are specified in decibels. Their applications include telecommunication networks, optical fiber test facilities, local area networks (LANs) and CATV systems. For instance, a 3 dB attenuator should reduce the intensity of the output by 3 dB (about 50%). A fixed value attenuator’s attenuation value can’t be varied, and the attenuation is expressed in dB. The operating wavelength for optical attenuators ought to be specified for the rated attenuation, because the optical attenuation of a material varies with wavelength. Fixed value attenuators come in two big groups: in-line type and connector type. The in-line type looks like an ordinary fiber patch cable; it is a fiber cable terminated with two connectors whose types you can specify. The connector type attenuator looks like a bulkhead fiber connector; it has a male end and a female end, and it mates to regular connectors of the same type, for example FC, ST, SC and LC.
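The decibel arithmetic behind "3 dB is roughly 50%" is easy to verify. Here is a small Python sketch (the power values are purely illustrative):

import math

def output_power(input_mw, attenuation_db):
    # Output power in mW after a fixed attenuation expressed in dB.
    return input_mw * 10 ** (-attenuation_db / 10.0)

print(output_power(1.0, 3))          # about 0.50 mW, roughly half the input power
print(output_power(1.0, 10))         # 0.10 mW, one tenth of the input power
print(10 * math.log10(1.0 / 0.5))    # the other way around: halving the power is about 3.01 dB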
Variable optical attenuators come in a variety of designs. They are generally used for testing and measurement, but they are also widely used in EDFAs for equalizing the light power among different channels. One type of variable optical attenuator is built on a D-shaped fiber as a kind of evanescent field device. If a bulk external material, whose refractive index is larger than the mode effective index, replaces part of the cladding reachable by the evanescent field, the mode can become leaky and some of the optical power can be radiated. If the index of the external material can be changed by a controllable means, using effects such as thermo-optic, electro-optic, or acousto-optic, a device with controllable attenuation is achievable.
Other types of variable optical attenuators include air gap, clip-on, 3-step and more. | <urn:uuid:13da2009-918c-47c5-b869-1dae2c1f831e> | CC-MAIN-2017-04 | http://www.fs.com/blog/what-exactly-are-fiber-optic-attenuators.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00285-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896652 | 698 | 3.390625 | 3 |
With our information systems under aggressive attack, we cannot ignore any meaningful line of defense. The human element of cyber security is too often overlooked, and a rigorous new approach to workforce cyber preparedness is urgently needed. This comprehensive new approach is called Building Human Firewalls.
The computing security model of the past decade, based on firewalls, anti-viral services, intrusion detection controls, system event monitoring, VPNs, etc., is failing to detect and block the most advanced malware. That is serious enough. But what’s worse is that this traditional defense model is of little use in preventing what many experts feel is the greatest threat to cyber security: human error.
Many recent reports indicate that traditional cyber security tools and techniques are not enough to prevent the loss of sensitive data to hackers. Even the best technology is only as good and reliable as the people in the organization. What is also required is knowledge, training and trust in the organization, together with attention to the human and social engineering side of security.
Research Reports on Cyber Security and Human Firewalls
There are numerous facts that bolster this view, but here are three of the most important:
1. 80% of all data breaches reported by the U.S. federal government from January 2009 through May 2012 were caused by human error. (“Data Breaches in the Government Sector,” a Rapid 7 Research Report. From the report: “From January 2009 through May 31, 2012, employee error and device theft caused the majority of data breaches. Combining unintended disclosure, insider threats, physical losses, the loss/theft of portable devices, and the loss/theft of stationary devices, the total number of incidents reached 214 (out of 268), exposing more than 93 million PII records.)
2. In a 2012 survey, 710 experienced IT security professionals said lost devices and mishandled data were responsible for ten times more of their organization’s data breaches in the last two years than external cyber attacks. (“The Human Factor in Data Protection,” a study conducted by the Ponemon Institute.)
3. The prestigious cyber security researchers at the Georgia Tech Research Institute (GTRI) recently declared spear phishing to be the number one cyber security threat to the enterprise.
The NSA/Edward Snowden affair clearly indicates that more than smart technology and multiple network administrators are needed to protect the secrets of any large organization. In addition to technical firewalls, a Human Firewall is required to eliminate cyber attacks and compromises from within. Some estimates project that, with a properly constructed Human Firewall, cyber vulnerability may be reduced by up to 90%.
In addition, research on the rapidly growing hacker threat to Federal Government Agencies indicates that the Snowden threat, as a professional hacker, is not an isolated phenomenon, and that various forms of human error and insider threats exist in any organization, including:
- Disgruntled employees
- Careless Employees
- Professional espionage threats
Clearly, traditional perimeter firewalls and automated security controls are proving ineffective without educating and informing the existing organization staff as to proper procedures of protecting the organization network.
Establishing the Cyber Security Human Firewall
A company with a strong Human Firewall:
- Has an engaged, aware, well-educated workforce.
- Has clear policies that prioritize the protection of networks and data, and which foster compliance with security standards.
- Is agile and responsive, and knows that in an era lacking bulletproof IT security, cyber breaches are not so much prevented as managed and mitigated.
- Has learned to circle the wagons internally, increasing cyber risk collaboration and communication;
- And is obtaining critical intelligence by sharing information externally, with trusted partners and peers.
Organizations require effective tools, services and knowledge to create an effective Human Firewall and reduce human error. The best tools offer comprehensive solutions combined in a customized dashboard that provides:
- New proactive data protection and incident response policies, driven from the top.
- Better trained, and better equipped, workforces that can become as discriminating about malicious threats at the social level as our technology firewalls are at the data level.
- Better cyber threat intelligence services, distributed more broadly through the enterprise, and fast incident response capabilities when a threat becomes an attack.
- Ongoing threat vulnerability assessments and better real-time threat pictures.
- More security apps, especially in mobile environments, that everyone can use to take greater responsibility for the security of their enterprise.
- And, in the face of ever-escalating security breaches and data losses, we need them now.
Lastly, these requirements must be available in a form that is easy to access, user friendly, portable and available to all members of the organization. Further, because of the evolving and dynamic nature of the cyber threat, the Human Firewall must be built on an architecture that is stable, robust and easily adaptable. This will enable the constant flow of new information and data.
And the Human Firewall must be secure from cyber penetration attacks.
What are the Benefits of the Cyber Security Human Firewall?
The individuals in the organization, if they are properly trained, educated and informed, can create a culture of security on a cost effective basis. Effective use of Human Firewall tools and services will address the key vulnerabilities of most existing networks easily and continuously, without costly upgrades of technology.
With the proper tools and services, the Human Firewall can be built, customized and maintained with a relatively small investment compared to existing penetration-based cyber technology. Combined with the leveraging aspects of the organization staff, the return on investment (ROI) is substantial.
Since the Human Firewall addresses processes and procedures, it is technology neutral and focuses on the flow of current information.
Cyber Security Human Firewalls Recommendations
- Recognize the essential role your workforce plays in protecting the integrity and resilience of your IT systems, and then build a culture that helps your employees prioritize intelligent security practices.
- Establish and enforce proactive cyber security policies.
- Train and educate your workforce, using online services that stay current with the dynamic nature of the cyber threat.
- Establish compliance procedures and guidelines that standardize security practices, inside the IT department and throughout the enterprise.
- Deliver cyber threat alerts and information that help mitigate cyber damage, and incident response procedures that help manage it.
- Launch a Human Firewall campaign, designed to make all of the above central parts of enterprise cyber security efforts.
- Consider utilizing a comprehensive tool like Cybero to reduce human error and build a human firewall.
About the Author: Jon M. Stout is Chief Executive Officer of Aspiration Software LLC. Aspiration Software LLC is an Information Technology/Cyber Security services provider focused on the Intelligence Community (IC).
Atomic bombs are in the news of late. Last year we marked the 70th anniversary of the first use of the A-Bomb on Hiroshima. A treaty intended to keep Iran from getting one is still being argued on Capitol Hill. All eyes are on North Korea, where Supreme Dictator Kim Jong-un grows ever closer to having a viable nuclear weapon of his own (10 kilotons as of a September 2016 test). In a perfect world, we would be the only country with an A-bomb arsenal, and we would trust ourselves to stay away from the button.
In the summer of 1939, Albert Einstein wrote a letter to President Franklin D. Roosevelt to let him know that the Nazis were working to extract and purify uranium-235. Although this was quite difficult to do at the time, it was known to be a key ingredient for an atomic bomb. Nazi success in this endeavor would not end well for the rest of the free world. FDR’s immediate reaction was to kick off the Manhattan Project, with the goal of beating the Germans. And so began the atomic race.
The project was managed from start to finish by Robert Oppenheimer, who orchestrated processes of gaseous diffusion, magnetic isotope separation, and mechanical centrifuging to get “The Gadget” ready for testing in the summer of 1945. Since this had never been done before, no one was totally certain whether the test, code-named “Trinity”, would be a colossal dud or a civilization-changing event. The flash, reportedly seen by a blind girl 120 miles away, affirmed the latter and set off some profound and diverse responses.
Physicist Isidor Rabi remarked that the very equilibrium of nature had been upended. Test director Ken Bainbridge said “Now we’re all sons of bitches.” Oppenheimer himself quoted the Bhagavad Gita – “I am become Death, the destroyer of worlds.” Several of the witnesses at Trinity soon signed a petition against ever doing something like this again, but it was ignored.
In short, the observers of the first atomic blast were both awestruck and terrified. Less than a month later, a bomb estimated at approximately 15 kilotons was dropped on Hiroshima, and three days later one struck Nagasaki. At least 129,000 people were killed.
The comparison between atomic bomb development and technologies in general is a bit disingenuous; this was a wartime, anything-goes effort to race the enemy and be the first to wield the biggest gun. The impact of the technology was immediate and profound. From first boom to wholesale destruction was only a matter of weeks. Although bomb technology hasn’t remained stagnant (current bombs are 1000 times more powerful) the human race has been scrambling ever since to curtail the actual use of nuclear weapons. The very fact that this technology exists leaves most of us feeling just a bit unsafe.
When I was growing up in the 50’s, seat belts were just starting to show up in some cars, but it wasn’t until 1995 that they were mandatory throughout the US. At about the same time, several major automakers started experimenting with inflatable restraints, eventually leading to the first passenger airbag in the 1973 Olds Toronado. In 1998, just three years after seat belts were mandated, the feds required all passenger vehicles to have dual front airbags. For all these years, it’s been comforting to know that a combination of seat belts and airbags has been protecting me while traveling by car.
Now, it appears that as many as 14 deaths have been directly linked to randomly exploding airbags, and more than 100 million vehicles worldwide have been identified by the manufacturer Takata for recall. The prospect of an airbag spontaneously exploding in my face does not leave me feeling safe.
You can’t have a conversation about how technology has changed our culture without mentioning the Smart Phone; my first was a Blackberry, and since that one died I have been an Apple iPhone guy. I concede that they are not always the most technologically advanced, and I occasionally see a Samsung Galaxy feature that I would like to have, but I have stayed the course with Apple.
Recently, that has proven to be a really fortunate choice. I had heard of a few isolated incidents of the Galaxy Note 7 battery explosions, but didn’t think much of it until, on a recent flight from Denver to Phoenix, the flight attendant announced emphatically that all Note 7’s needed to be completely powered down, and could not be connected to any external power source for the duration of the flight. Last week, the FAA made it a Federal crime to fly with the device in the United States. Samsung has permanently stopped production, and although my Apple stock is going up, I’m more than a little curious to know what was different about the Samsung batteries. Until we get answers, I won’t feel totally safe.
Of course phones aren’t the only Smart stuff in our lives these days. You can’t look at any publication on new technology without reading about TV’s, cars, houses – pretty much every part of our lives – that will soon be connected. In this new world order, I can lock and unlock my home, see who’s at my front door, adjust my thermostat, or check how full my trash can is – and I can do it all from anywhere.
Connecting everything and everybody everywhere might seem a bit frivolous at first, but the implications of security and energy savings are propelling the technology forward. As for the trash can, I have yet to discern an advantage in opening an app on my smartphone versus opening the cabinet door in my kitchen.
If you are concerned that a hacker could break into your car’s computer and mess with the controls, then you probably don’t want to hear about Chris Roberts. Chris is the security researcher who is believed to have hacked into an airliner through its entertainment system, and subsequently issued some nefarious commands to the flight control computer.
As frightening as one hacker and one jetliner is, there is more. While the average consumer might feel reassured with the growing number of smart-connected devices, all a network security analyst sees is a growing number of entry points for hackers. This was mostly theoretical fear mongering until the October outbreaks.
A company called Dyn is one of a handful of DNS providers that helps people connect to websites. One Friday they suddenly experienced a huge volume of traffic attempting to knock their service offline. The incident showed how a targeted digital assault on just one company can disrupt a massive piece of the Internet. The source of some of the traffic was traced to Internet of Things devices – webcams, thermostats, baby monitors and perhaps even trash cans – that connect to one another and the Internet.
We’ve all experienced Internet outages from time to time, but this was the mother of all disruptions. The Department of Homeland Security is still trying to figure out who was behind these DDoS (distributed denial of service) attacks, but it won’t be easy; the attack came from tens of millions of IP addresses around the globe. A malware variant called “Mirai”, customized to exploit the Internet of Things, has been identified as the culprit.
The experts saw this one coming. IoT devices are very popular, and manufacturers are rushing them to market in order to grab share. If you’re an entrepreneur, this is what you do. This frantic product development cycle leaves little or no time for device security. In the meantime, DDoS attacks are becoming more widespread. The source code for the Mirai Botnet has already been made available to the public, and one tracking site suggests there are more than 1.6 million infected devices currently active. None of my stuff, it appears, is safe anymore.
A-Bombs, air bags, cell phones, IoT devices - each of these technologies could be viewed as a perfect storm of intense motivation, short product development cycle, and unforeseen consequences. You can count on it happening again. “Mirai”, a Japanese word, translates as “the future.”
(*) Robert Lewis, co-pilot of the Enola Gay, August 6, 1945.
Author Profile - Paul W. Smith, a Founder and Director of Engineering with INVENtPM LLC, has more than 35 years of experience in research and advanced product development.
Prior to founding INVENtPM, Dr. Smith spent 10 years with Seagate Technology in Longmont, Colorado. At Seagate, he was primarily responsible for evaluating new data storage technologies under development throughout the company, and utilizing six-sigma processes to stage them for implementation in early engineering models. He is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines, and currently manages the website “Technology for the Journey”.
Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara. | <urn:uuid:90599ec6-3a9f-49a1-a160-2d199800241b> | CC-MAIN-2017-04 | http://www.lovemytool.com/blog/2016/11/my-god-what-have-we-done-by-paul-w-smith.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00525-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969041 | 1,886 | 2.640625 | 3 |
The recovery of evidence from electronic devices is fast becoming another component of many an IT Manager’s remit. Electronic evidence gathered is often valuable evidence and as such should be treated in the same manner as traditional forensic evidence – with respect and care.
Essentially, this area is known as computer forensics and can be described as the scientific examination and analysis of data held on, or retrieved from, computer storage media in such a way that the information can be used as evidence in a court of law. Subject matter can include:
- the secure collection of computer data
- the examination of suspect data to determine details such as origin and content
- the presentation of computer based information to courts of law (if necessary)
- the application of a country’s laws to computer practice.
In short, the objectives of a forensics analysis are to, determine what happened, the extent of the problem, determine who was responsible and present this information as evidence in court if required.
It is used by internal investigators of public and private organisations for a variety of reasons, in particular where a computer user is suspected of a breach of organisational policy. Indeed, in the past couple of years awareness amongst the legal community in Ireland of the need for professional computer forensic services and equipment has increased substantially.
The methods of recovering electronic evidence whilst maintaining evidential continuity and integrity may seem complex and costly, but experience has shown that, if dealt with correctly, it will produce evidence that is both compelling and cost effective.
When talking about computer forensics, it is easy to get caught up in the technical minutiae – the bits and the bytes, the ones and the zeros, slackspace and pagefiles. Given the language used by many forensic investigators it is little wonder that many people consider it to be a black art, forever damned to the world of the ponytails.
In reality, however, digital forensics is concerned primarily with forensic procedures, rules of evidence and legal processes. The principal reason forensic evidence fails to deliver in a court is not the technical merit of the evidence itself, but rather issues relating to how it was gathered, who gathered it, what training and experience they have, chain of custody, proper documentation, and even, believe it or not, the storage facilities used. A certain case here in Ireland springs to mind, where the evidence storage facility was brought into question. Who had access to it? What security measures are in place to ensure only authorised personnel have access to the evidence? What chain of custody documentation is kept? These are ultimately the key questions and are among the crucial considerations for any IT team if they find themselves central to an internal investigation.
Although the document is not intended to be a definitive manual of every single operation that may take place during an investigation, it does provide some first-rate guidance and advice. Interestingly, the thrust of the guide is about forensic procedures, rules of evidence and legal process, and is a great resource for anyone tasked with drafting incident response policies and procedures.
From the outset, I would like readers to note that when tasked (usually by the HR department) with conducting an investigation relating to computer equipment, there are some key “rules” to be followed:
Rule 1. An examination should never be performed on the original media.
Rule 2. A copy is made onto forensically sterile media. New media should always be used if available.
Rule 3. The copy of the evidence must be an exact, bit-by-bit copy. (Sometimes referred to as a bit-stream copy).
Rule 4. The computer and the data on it must be protected during the acquisition of the media to ensure that the data is not modified. (Use a write blocking device when possible)
Rule 5. The examination must be conducted in such a way as to prevent any modification of the evidence.
Rule 6. The chain of the custody of all evidence must be clearly maintained to provide an audit log of whom might have accessed the evidence and at what time.
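In practice, Rules 3, 5 and 6 are usually demonstrated by hashing: the original media and the forensic copy are hashed before and after the examination, and matching values show the evidence was not modified. Below is a minimal Python sketch of that verification step (standard hashlib module; the file names are placeholders for disk images):

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    # Compute the SHA-256 digest of a file or disk image, reading it in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("evidence_original.dd")    # placeholder file name
working_copy = sha256_of("evidence_copy.dd")    # placeholder file name
print("match" if original == working_copy else "MISMATCH - the copy is not bit-for-bit identical")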
All of this does not come without difficulties. There is an enhanced awareness amongst offenders of the nature of electronic evidence, and of techniques to hide it. The skillful user makes the examiner’s job difficult, if not impossible. There is increasing use of tools to hinder forensics: secure deletion tools, encryption tools, automated “scrubbing” tools, digital compression, steganography, remote storage, and audit disabling. Add to this the difficulty of placing a specific person at a specific computer without additional evidence, be it CCTV or access control systems. Computer forensics is useful, but not always a silver bullet.
Not all incidents require or justify the full rigor of a forensic analysis. There are a number of factors affecting the decision to proceed, for instance, the seniority of staff. It is generally accepted that senior staff are more likely to appeal disciplinary procedures or otherwise respond. The background of staff is another important consideration. Staff with a legal, HR or union background may have other motivations. Obviously, if an investigation involves staff with a financial motive to appeal a disciplinary action, a forensic analysis that uncovers some compelling evidence may offer the organization a strong negotiation tool.
Computer forensics is much, much more than technical wizardry. It is about keeping a clear head, and being aware of what NOT to do, as much as what you should do. The IT department is becoming an obvious point of call for any organisation seeking to analyse a computer or computers that could be central to an internal investigation. As a consequence, there is a growing need to find not only the technical skills, but also the softer “decision-making” and “investigative” expertise that will help resolve a wide range of issues quickly, and more importantly, discreetly.
Data security: prevention and encryption
Data needs to be secured, because there are many hackers who stay in the dark watching for data that can be useful to them. They keep attacking until they find data that can benefit them financially. There are options available for keeping data in a safe place, and they should be used as soon as possible, because data sitting on a hard disk is not secure either: a hard disk can fail and the data on it can be lost. Here are some ways to make sure the data is kept safe:
Cloud storage is a type of data storage in which data is stored in digital form in logical pools, while the physical storage spans multiple servers whose locations can vary. The physical environment is typically owned and managed by a hosting company. The hosting company is responsible for keeping the data accessible and available, and for keeping the physical environment protected and running. Organizations and individuals buy or lease storage capacity from the provider to hold user, application and organization data, and the storage can be accessed through a co-located cloud computing service. In this way one can be reasonably sure of getting a reliable cloud storage service that keeps the data in a safe place.
A SAN (storage area network) is a dedicated system that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, for example disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system like locally attached devices. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices. The cost and complexity of SANs dropped in the early 2000s to levels permitting wider adoption across both enterprise and small to medium sized business environments. A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as SAN file systems or shared-disk file systems. Sharing storage usually simplifies storage administration and adds flexibility, since cables and storage devices do not have to be physically moved to shift storage from one server to another. Other benefits include the ability to allow servers to boot from the SAN itself. This allows for quick and easy replacement of faulty servers, since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server. While this area of technology is still new, many people see it as being the future of the enterprise datacenter. SANs also tend to enable more effective disaster recovery processes. A SAN can span a distant location holding a secondary storage array. This enables storage replication implemented either by disk array controllers, by server software, or by specialized SAN devices. Since IP WANs are often the least expensive method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to permit SAN extension over IP networks. The traditional physical SCSI layer could only support a few meters of distance - not nearly enough to ensure business continuation in a disaster.
Handling Big Data
Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them using on-hand data management tools or traditional data processing applications. The challenges include capture, curation, storage, search, sharing, transfer, analysis and visualization. The trend to larger data sets is due to the additional information that results from analysis of a single large set of related data, as compared with separate smaller sets with the same total amount of data, permitting correlations to be found to spot business trends, prevent diseases, combat crime and so on. Big data is hard to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. What counts as really large data varies depending on the capabilities of the organization managing the set, and on the capabilities of the applications that are traditionally used to process and analyze the data set in its domain. For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to rethink data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration. One must keep in mind that such data sets cannot be handled easily, so deliberate effort is needed to handle them well.
Data encryption is the act of changing electronic data into an unreadable state by using algorithms or ciphers. Originally, data encryption was used for passing government and military information electronically. Over time, as the general public has started to enter and transmit personal and sensitive data over the web, data encryption has become more widespread. These days web browsers will automatically encrypt text when connecting to a secure server. You can tell you are on a secure, encrypted site when the URL begins with https, meaning Hypertext Transfer Protocol Secure.
There are several types of data encryption that can be used, each offering a good alternative depending on the need:
Whole disks can be encrypted in hardware, and such drives are available from many vendors, including Hitachi and Samsung. The symmetric key used for encryption is maintained separately from the computer’s CPU, which removes computer memory as a potential attack vector.
The whole database can be encrypted as well, so that the data is not readable even if it is stolen.
Encryption can also be applied to individual files, which ensures that those specific files stay safe.
Removable media such as USB drives can be encrypted as well, so that the data cannot be read while it is in transit.
Mobile devices should also use encryption, since they can easily be lost or stolen.
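As a simple illustration of the file-level encryption described above, the following sketch uses the third-party Python cryptography package (an assumption: it is not part of the standard library and must be installed separately; the file name is a placeholder) to encrypt and later decrypt a file with a symmetric key:

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key safe; losing it means losing the data
fernet = Fernet(key)

with open("report.txt", "rb") as f:          # placeholder file name
    ciphertext = fernet.encrypt(f.read())

with open("report.txt.enc", "wb") as f:
    f.write(ciphertext)

plaintext = fernet.decrypt(ciphertext)       # later, with the same key, the data is recovered

The point of the example is that the protection travels with the key: whoever holds the key can read the data, and nobody else can.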
Hardware based encryption devices
There are also some hardware based devices for encryption and key protection, as follows:
TPM here stands for Trusted Platform Module, a dedicated secure cryptoprocessor chip built into many computers that can generate, store and protect cryptographic keys and help verify the integrity of the platform it is installed in.
This is the hardware security module. It is a device that safeguards and manages digital keys, and it is built so that the sensitive material inside cannot be read.
USB flash drives are the main removable storage device for many people, so encryption is necessary if the data carried on them is to stay protected.
Hard drives can hold large amounts of data, so encrypting the data on them should also be one’s responsibility.
Data in-transit, Data at-rest, Data in-use
Data in use has also been taken to mean “active data” in the context of being held in a database or being manipulated by an application. For example, some enterprise encryption gateway solutions for the cloud claim to encrypt data at rest, data in transit and data in use. While it is generally accepted that archive data (regardless of its storage medium) is data at rest, and active data subject to constant or frequent change is data in use, dormant data that changes only occasionally is harder to classify. The loose nature of terms such as “constant” and “frequent” means that some stored data cannot be rigorously defined as either data at rest or data in use. These definitions could be taken to assume that data at rest is a superset of data in use; however, data in use, being subject to frequent change, has different processing requirements from data at rest, whether the latter is completely static or subject to occasional change. Data in use refers to data that is not simply being passively stored in a stable destination such as a central data warehouse, but is working its way through various parts of an IT architecture. Data in use may at this moment be being generated, amended or updated, erased, or viewed through various interface endpoints. This is a useful term when pursuing comprehensive security for IT systems. Data at rest is a term that is sometimes used to refer to all data in computer storage, excluding data that is traversing a network or temporarily residing in computer memory to be read or updated. Data at rest can be archival or reference files that are changed rarely or never; it can also be data that is subject to regular but not constant change. Examples include key corporate files stored on the hard drive of an employee’s notebook computer, files on an external backup medium, files on the servers of a storage area network (SAN), or files on the servers of an offsite backup service provider. Businesses, government agencies, and other institutions are concerned about the ever-present threat posed by hackers to data at rest. In order to keep data at rest from being accessed, stolen, or altered by unauthorized people, security measures such as data encryption and hierarchical password protection are commonly used. For some types of data, specific security measures are mandated by law. Data in transit, by contrast, is data that is actively being moved somewhere, for example across a network, and is not sitting in storage.
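The distinction has practical consequences: data in transit is normally protected with transport encryption such as TLS, while data at rest is protected with storage or file encryption. A short Python sketch of the in-transit side (standard socket and ssl modules; the host name is only an example and outbound network access is assumed):

import socket, ssl

context = ssl.create_default_context()        # verifies the server certificate by default
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated protocol:", tls_sock.version())   # e.g. TLSv1.2 or TLSv1.3
        tls_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(200))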
An ACL (access control list), with respect to a computer file system, is a list of permissions attached to an object. An ACL specifies which users or system processes are granted access to objects, as well as which operations are allowed on given objects. When a subject requests an operation on an object in an ACL-based security model, the operating system first checks the ACL for an applicable entry to decide whether the requested operation is authorized. A key issue in the definition of any ACL-based security model is determining how access control lists are edited, namely which users and processes are granted ACL-modification access.
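The toy sketch below models the ACL check just described: each object carries entries mapping subjects to the operations they are allowed, and the system consults that list before honoring a request. The object names, subjects and permissions are invented for illustration.

```python
# Toy ACL model: each object maps subjects to the operations they may perform.
acls = {
    "payroll.xlsx": {"alice": {"read", "write"}, "backup_svc": {"read"}},
    "audit.log":    {"alice": {"read"}, "syslogd": {"append"}},
}

def is_authorized(subject: str, operation: str, obj: str) -> bool:
    """Return True only if an ACL entry grants `subject` the `operation` on `obj`."""
    entry = acls.get(obj, {})
    return operation in entry.get(subject, set())

print(is_authorized("alice", "write", "payroll.xlsx"))       # True
print(is_authorized("backup_svc", "write", "payroll.xlsx"))  # False: entry is read-only
print(is_authorized("mallory", "read", "audit.log"))         # False: no entry at all
```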
There are several data policies for safeguarding data, as follows:
Wiping: data cannot be considered wiped just because it has been deleted; files that appear to be gone can often be recovered, so the drive must at the very least be fully reformatted, and ideally securely overwritten, before the data can be treated as wiped.
Disposing: the devices in use these days are, to some degree, flammable and hazardous, so one must take careful, deliberate measures to ensure that both the hardware and the data it holds are disposed of properly.
Retention: data retention requires both adequate storage space and adequate security, so that retained data remains safe and sound for as long as it must be kept.
Storage: storage capacity matters a great deal when it comes to data, and there are many external storage devices available that one can use to store it.
In short, there are many ways in which data can be secured, and it is important to know the three states of data: in use, at rest and in transit. The key safeguard here is not simply installing anti-virus software but encryption, so one must pay special attention to it.
Latin America's biopesticides market was estimated at USD 144.71 million in 2015 and is projected to reach USD XX million by 2021, at a CAGR of XX% during the forecast period from 2016 to 2021. Biopesticides offer a unique and innovative approach to the management of agricultural pests, using formulated microbial agents as the active ingredient. Microbes that have been used in this approach include fungi, bacteria, viruses and nematodes. Each microbial biopesticide is unique, not only in the organism or active ingredient but also in the host, the environment in which it is applied, and the economics of production and control.
In Latin America, the pesticides market, covering both synthetic pesticides and biopesticides, is witnessing steady growth, and the key market drivers for the industry in the region are the adoption of herbicide-tolerant crops, the increasing area under crop production and increasing yields of agricultural produce. However, strict regulations imposed by the US EPA and the EU on pesticide residue limits for food crops will restrict the use of synthetic pesticides on crops and will increase the demand for biopesticides in the region. While the prevalence of chemical or synthetic pesticides in Latin America will continue, human, animal and environmental health concerns will play key roles in driving growth for biopesticides. Emerging economies of Latin America are likely to take the lead in the adoption of both pesticides and biopesticides. Principal factors driving this include greater adoption of biopesticides in place of traditional chemical-based pesticides as a consequence of increasing efficacy and enhanced consumer confidence in their performance.
Segmentation of this report categorizes biopesticides as bioherbicides, bioinsecticides, biofungicides and other biopesticides. By application area, biopesticide demand has been analyzed in terms of crop-based uses (including grains & cereals, oilseeds and fruits & vegetables) and non-crop-based uses (including turf & ornamental grass and other non-crop-based applications). Bioherbicides form the largest segment and biofungicides are the fastest-growing.
Many countries, such as Argentina, have been at the forefront of introducing regulations aimed at minimizing the use of chemical pesticides within municipal limits, which are expected to provide the necessary momentum for biopesticides. Biological control agents (BCAs) or Biopesticides account for a small share of registered pesticides in Brazil because the market for unregistered BCAs is much higher.
The biopesticides market is witnessing a surge in corporate activity, with several agrochemical companies entering the agricultural biologicals sector through dedicated R&D investment, licensing deals, partnerships, and mergers and acquisitions. The analysis of major companies in the biopesticides industry has taken into account the strategies adopted, financial revenues and the latest developments in the market. Some of the leading players covered include Bayer CropScience, BASF, Marrone Bio Innovations, De Sangosse and Valent Biosciences.
Key Deliverables in the Study | <urn:uuid:4ebefd2b-59ec-4bfc-8e37-e2c9684507bd> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/latin-american-biopesticides-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938264 | 620 | 2.71875 | 3 |
HPE information security expert Dustin Childs discusses the need to build security measures in eSports
In this article...
- Because video game tournaments are conducted online, eSports face a major risk of being hacked
- Information security expert Dustin Childs says game developers need to be held accountable and start building in security
Nearly two decades ago, ESPN began to broadcast poker tournaments. To the surprise of just about everyone, watching people play poker became a runaway hit, and soon, anyone who could play Texas Hold’em was vying for their chance to shine at a televised Vegas tournament.
ESPN is trying this formula again, but this time with video games. eSports—or professional video gaming often done in teams but also occurring in matches between individual players—is already an international phenomenon, selling out arenas and stadiums in Asia. The fan base in the United States is growing so it’s just a matter of time until eSports tournaments become a part of mainstream American culture.
“The more popular a sporting event becomes… the greater the risks of someone trying to rig matches.”
However, the more popular a sporting event becomes and the more money that is at stake, the greater the risks of someone trying to rig matches. In traditional sports, we’ve seen point shaving by players and referees purposely favoring one team over another. But the greatest threat to the integrity of eSports is its Internet security. Because the video game tournaments are conducted online, they face major risks of being hacked.
User accounts are a valuable commodity, and one of the first major hacks in eSports revolved around the theft of accounts. The popular game League of Legends was disrupted by an Australian hacker who stole account information and transferred it to other servers. At the world championships for Dota 2, the event was halted by a distributed denial of service (DDoS) attack. In addition, there are a number of hacks used by players to gain an edge during a match.
“One of the biggest hacks that gets used is called an aimbot,” explains Dustin Childs, information security expert with Hewlett Packard Enterprise. “Instead of manually using the controls to put your weapon on your opponent, the aimbot automatically does it.” The rapid-fire pace of the aimbot’s shooting, Childs says, is a pretty obvious “tell” that the integrity of the game has been hacked.
Tournament organizers instituted Valve Anti-Cheat (VAC) technology, which is designed to catch in-game hacks. “While it’s good to know there is technology in place, it isn’t 100 percent effective,” says Childs. Tournament officials tend to fall on the side of banning anything that could resemble a hack, rather than seeing if it is actually legitimate play or not.
“Cyber hacks give players an unfair advantage, ruining the integrity of the tournaments.”
Hacking video games isn’t difficult. Most designers aren’t thinking about security and preventing hacks as they develop the games so protections aren’t built into the systems. The hackers are taking advantage of existing functionalities and manipulating them for their benefit during play. These hacks give the players an unfair advantage, and this ruins the integrity of the tournaments.
“Game developers and vendors need to be held accountable and start building in security,” says Childs. The average gamer doesn’t take security into consideration when playing or watching a tournament, but that doesn’t mean it should be ignored. As the amount of money at stake in these tournaments increases and fan interest grows, expect a criminal enterprise to take advantage of the vulnerabilities and easily-hackable flaws.
eSports are on the cusp of becoming the next sports sensation. Whether it is a passing fad or a mainstream staple will depend on its integrity. Without improved security measures, the future success of eSports could be tenuous. | <urn:uuid:1332f7c6-3b33-4c7f-969c-fbe637dce6f9> | CC-MAIN-2017-04 | https://www.hpematter.com/sports-tech-issue/hackers-getting-ready-for-esports-tournaments-espn | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953414 | 806 | 2.640625 | 3 |
NASA on Monday launched three 2010-vintage Nexus One smartphones into orbit via an Antares rocket, saying that the Android devices would be among the cheapest satellites ever devised.
The devices are part of the administration’s PhoneSat program, which is designed to ascertain the suitability of consumer smartphone processors as cheaper satellite brains.
[MORE NEXUS ONES IN SPACE: Android phone blasts into space aboard satellite]
Michael Gazarik, NASA associate administrator for space technology, said in a statement that there’s no shortage of possible applications for the space-going Android phones.
“Smartphones offer a wealth of potential capabilities for flying small, low-cost, powerful satellites for atmospheric or Earth science, communications, or other space-borne applications. They also may open space to a whole new generation of commercial, academic and citizen-space users,” he said.
Credit: courtesy NASA
The devices contain much of the hardware needed for basic satellite functionality, including reasonably modern processors, cameras, GPS receivers, radios and a host of other small sensors.
The phones are housed in four-inch cubesat structures, and will attempt to take photos of the Earth via their onboard cameras.
The PhoneSats are also part of an elaborate game, as they transmit packets of data back to Earth, where they can be received by amateur radio operators. While some packets are simple status reports, others are tiny fragments of the Earth pictures being captured from orbit, which can be reassembled into complete photographs.
Interestingly, however, NASA is not the first to undertake this type of project – a privately-held British company called Surrey Satellite Technology Limited launched a Nexus One into space aboard the Indian Space Research Organization’s PSLV-C20 mission in late February. However, the STRaND-1’s price tag – “about as much as a high-end family car,” according to SSTL – is likely significantly higher than NASA’s PhoneSat, which cost less than $7,000.
Email Jon Gold at email@example.com and follow him on Twitter at @NWWJonGold. | <urn:uuid:c5da4d6b-b65b-4e06-bab4-c05d752ab306> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2165670/smartphones/nasa-launches-smartphone-satellites----downloading-images--may-be-an-issue.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00231-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930053 | 442 | 2.84375 | 3 |
Originally designed to track individual contributions to the Social Security Program, a person’s Social Security Number (SSN) has evolved to unintentionally become a near universal identifier synonymous with a person’s identity. SSNs never expire, are never reclaimed, seldom reissued, and with a few rare exceptions are unique and issued to a single individual.1
It can be nearly impossible to engage in daily life without an SSN, and despite its many vulnerabilities, the SSN is generally the primary consumer identifier that public and private institutions trust. For example, the Internal Revenue Service requires all corporations to collect SSN data from their employees and contractors for tax reporting purposes. Parents are required to submit an SSN for any dependent children they claim. Individuals are also often asked to submit their SSN when opening bank accounts, applying for credit and receiving medical care. In both the private and public sector, SSNs are the nucleus of identity management, driving policies for authentication and compliance activities.
The ubiquity of the SSN and its use in identity management makes it a treasure for fraudsters and criminals. In the mid-1990s, the growth of online new-account applications and customer self-service options fueled a dramatic increase in identity-related fraud. While these electronic channels drove an improved customer experience, their inherent anonymity had the unintended consequence of making it easier for fraudsters to misrepresent identity information. Enterprises and public agencies began to voice concerns that the structured nature of the SSN was facilitating this behavior, inadvertently allowing criminals to use publicly available information to predict an individual’s SSN.
To mitigate this problem, the Social Security Administration (SSA) instituted a major change to the way it issues new SSNs. This change, however, has not only caused problems for fraudsters; it has created significant, inadvertent issues for companies that rely on SSNs to determine identity risk. The old ways of using SSNs to find fraud are no longer accurate or viable, and organizations must now consider new techniques for identifying risk that account for this change.1
The original system for issuing SSNs relied on chronology and geography to assign significance to the numeration and order of the nine-digit identifiers. The first three digits, known as the area number, corresponded to a particular geographic region of an individual's mailing address. The next two digits, known as the group number, were assigned in a nonconsecutive yet predictable order within each distinct area number. The final four digits were determined serially and issued in order of application.1
The SSA regularly published the highest group number that had been issued for a given area number. Publishing the group numbers provided risk managers with a way to divide the set of total possible numbers into issued and unissued ranges. That is, prior to SSN randomization, firms could use public information to determine whether an asserted SSN had been issued. Assertions of an SSN that fell outside of the issued range appeared highly suspect and were typically either typos or indicative of misuse. The structured nature of the SSA's issuing logic also allowed fraudsters some ability to predict an individual's SSN given knowledge of his or her date and location of birth2 and to represent the number as their own.
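To illustrate the kind of check risk managers ran before randomization, the sketch below encodes the documented pre-2011 group-number issuance order (odd 01 to 09, even 10 to 98, even 02 to 08, then odd 11 to 99) and flags SSNs whose group number falls beyond the highest group published for their area. The high-group table here is a made-up sample, not real SSA data, and the entire check is obsolete for SSNs issued after June 2011.

```python
# Pre-randomization SSN plausibility check (illustrative only).
# Group numbers were issued in a documented, non-consecutive order:
#   odd 01-09, then even 10-98, then even 02-08, then odd 11-99.
GROUP_ORDER = (
    list(range(1, 10, 2)) + list(range(10, 99, 2)) +
    list(range(2, 9, 2)) + list(range(11, 100, 2))
)
GROUP_RANK = {g: i for i, g in enumerate(GROUP_ORDER)}

# Hypothetical sample of the SSA's published "high group" list: area -> highest issued group.
HIGH_GROUP = {987: 72, 123: 9}

def plausible_pre_randomization(ssn: str) -> bool:
    """True if the asserted nine-digit SSN fell inside the issued range for its area."""
    area, group, serial = int(ssn[:3]), int(ssn[3:5]), int(ssn[5:])
    if area not in HIGH_GROUP or group == 0 or serial == 0:
        return False                      # unassigned area or reserved values
    return GROUP_RANK[group] <= GROUP_RANK[HIGH_GROUP[area]]

print(plausible_pre_randomization("987501234"))  # True: group 50 within area 987's issued range
print(plausible_pre_randomization("123981234"))  # False: group 98 beyond the published high group 09
```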
In July 2011, this system was abandoned for a new scheme where SSNs began to be issued using fully randomized digits. SSNs issued since then no longer reflect any of the significance that allowed fraudsters to abuse the number. While this change appears to have succeeded in reducing the opportunity for fraudsters to predict a victim’s SSN, the policy change has inadvertently compromised a series of traditional identity-management practices. Because most risk managers rely on the previous structure to determine SSN validity, the policy change has created a new set of vulnerabilities for fraudsters to exploit. It is now extremely difficult for risk managers to distinguish between SSNs that were legitimately issued and those numbers that are being illegitimately asserted.2
Prior to randomization, a SSN in the unissued range sent a strong and explicit signal to risk managers. Whether the individual asserted the invalid credential with malicious intent, as a benign attempt to escape a bad payment behavior in the past, or as typographical error, it was clear that something was not right with the stated information. Organizations realized that SSNs in the unissued range were often associated with high risk, and required investigation. The tools and policies they developed to prevent identity fraud reflected these learnings and often applied more stringent, yet warranted, scrutiny towards individuals asserting SSNs from the unissued range.
Six Steps to a Solution
SSN randomization presents substantial challenges for any organization that relies on SSN structure to assess the validity of the asserted SSN as part of an identity risk assessment, particularly as the risk for exposure to attacks is expected to grow as more randomized SSNs are issued.3 The proper response will require a concerted, cross-organizational investigation. Risk managers should evaluate the severity of the problem in their own environments, identify areas of strength and vulnerability, and respond with an updated approach.
The first step for risk managers should be to consider how SSNs are used across their organization, and the degree to which current policies and processes are impacted by randomization. They should undertake an initiative to evaluate the tools and processes currently in operation. Moreover, these investigations should revisit any policy updates made in response to SSN randomization. Consider the following six questions during the review process:
- How does the current identity-proofing process depend on SSNs? Has the process been updated to accommodate the SSA’s policy change?
- How has randomization affected remediation procedures as part of the new-account onboarding process?
- Who developed the identity verification tools and policies? Were they created internally or via an external vendor? Who is responsible for maintaining these solutions?
- On what data does the process rely? Who provides this data and where does it come from?
- What have vendors done to respond to SSN randomization?
- What solutions are there to distinguish between new, legitimately issued SSNs; benign errors; and malicious assertions of SSNs that have never been assigned by the SSA?
The challenges posed by SSN randomization cannot be solved with a simple fix. They require significant resources and expertise. In an environment where risk managers are consistently asked to do more with less, determining next steps can be both difficult and confusing. Many organizations may simply lack the ability to respond to SSN randomization internally and in a timely manner. After evaluating the severity of the impact of SSN randomization on the business, companies should look for partners that understand the impact of randomization across public and private sectors and have developed solutions to directly address the challenges of SSN randomization. Risk managers cannot afford to ignore the issues created by randomization.
Ken Meiser is the Vice President of Identity Solutions at ID Analytics.
1 Acquisti, Alessandro and Ralph Gross (2009). Predicting Social Security Numbers from Public Data
2 Bert Kestenbaum, Social Security Administration (2012). Consequences of Social Security Number Randomization, 2012
3 Electronic Privacy Information Center (2014) Social Security Numbers | <urn:uuid:a09e8043-2635-44af-8807-ecb922f0446b> | CC-MAIN-2017-04 | http://www.idanalytics.com/blog/fraud-risk/unintended-consequences-impact-ssn-randomization-risk-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00139-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952619 | 1,476 | 2.828125 | 3 |
In previous posts in this series, we've performed the analysis necessary to classify traffic, determine how it will route, and compute the volume of traffic the WAN link will be carrying. Thanks to Erlang traffic models, we have also determined the number of simultaneous calls for which the WAN needs to be sized. While all of this information is useful, there's one last leap to get to the end goal of this exercise.
For calls going across a WAN, the lower bandwidth G.729 codec is usually used. There are exceptions, of course: faxes, TTY/TTD, and modem calls would require G.711. That being said, we’ll just assume G.729 for now.
This codec has a payload bit rate of 8kbps, but there are also RTP, UDP, and IP headers (overhead) to be considered. Basically, we’re looking at 40 bytes per packet. If we put 20ms of audio samples in each packet, then the packet rate is 50 packets per second. The math looks like this:
Packets per second * Header size in bytes * 8 bits per byte / 1000 bits per kilobit
50 * 40 * 8 / 1000 = 16kbps
Add the 8kbps payload bit rate and you’re almost done. There are additional headers whose size depends upon the WAN technology in use. For example, a T1 MPLS circuit would add 6 bytes of MPLS headers, plus 6 bytes of layer 2 PPP headers, for a total of 12 bytes per packet, which works out to 4.8kbps. Add that to the 16kbps of upper layer headers and the 8kbps of payload, and we have a total of 28.8kbps.
All that’s left now is to multiply the bandwidth per call by the number of simultaneous calls the link needs to support. Last time we determined that was 31 calls, for a bandwidth total of 892.8kbps.
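Folding the arithmetic above into a small helper makes it easy to re-run for other codecs, packetization intervals or link types. The sketch below simply re-implements the worked example (G.729 payload, 20 ms packets, 40 bytes of RTP/UDP/IP plus 12 bytes of MPLS/PPP overhead, 31 simultaneous calls); it is an illustration, not a vendor calculator.

```python
def per_call_kbps(payload_kbps: float, ms_per_packet: float, header_bytes: int) -> float:
    """Bandwidth of one call: payload bit rate plus per-packet header overhead."""
    packets_per_second = 1000 / ms_per_packet
    header_kbps = packets_per_second * header_bytes * 8 / 1000
    return payload_kbps + header_kbps

# G.729, 20 ms packets, 40 B RTP/UDP/IP + 12 B MPLS/PPP headers, 31 simultaneous calls
call_kbps = per_call_kbps(payload_kbps=8, ms_per_packet=20, header_bytes=40 + 12)
print(round(call_kbps, 1))        # 28.8 kbps per call
print(round(call_kbps * 31, 1))   # 892.8 kbps for the link
```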
As Columbo (remember him?) might have said, “just one more thing.” What happens if most of the calls are G.729 voice calls but some of the calls are going to be faxes? After all, the phone bills we used in the first posting in this series didn’t necessarily differentiate between voice calls and faxes. We could re-analyze the call data using logs from the fax machines. That would be the most precise of the options. On the other end of the spectrum, we could use “management’s informed estimation.” Which you use is going to be your call, but it’s something you should consider.
Prevent, Detect and Respond: Detection Systems
The Internet threat environment is critical—there are 30 to 40 new attacks posted to Web sites each month. Intrusion detection systems (IDS) are designed to detect attacks on an enterprise infrastructure. There are three types of attacks commonly reported by an IDS:
- System scanning
- Denial of Service (DoS)
- System penetration
These attacks may be launched locally, on the attacked machine or remotely, using a network to access the target. Security professionals must understand these types of attacks and how to effectively use an IDS to make an enterprise more secure. These attacks can provide information to the attacker that includes:
- Topology of the target network.
- Types of network traffic allowed through a firewall.
- Active hosts on the network.
- Operating systems those hosts are running.
- Server software the hosts are running.
- List of hosts (IP addresses).
- Slowing or shutting down critical systems or segments.
In any enterprise, the key to perimeter security is to prevent, detect and respond to attacks. These capabilities are critical for an entity’s perimeter defense. This requires the development of procedures and the deployment of technologies for incident detection and response. Security incident procedures are formal documented instructions for reporting security breaches that include implementation features for report procedures and response procedures.
Security incident report procedures are formal mechanisms employed to document security incidents. Security incident response procedures are documented formal rules or instructions for actions to be taken as a result of the receipt of a security incident report.
The security incident procedures are formal, documented instructions for reporting security breaches so that security violations are reported and handled promptly.
Acquiring effective tools will offer little risk mitigation without a correspondingly effective incident response plan. Using intrusion detection platforms without goals and a plan may involve as much overall corporate risk as not having them at all. Too many false positives—each igniting an uncoordinated and unbridled response—can result in non-trivial expenses and waste of human resources. Too many false negatives, and the corporation may host an inappropriate level of confidence in its technical infrastructure and staff performance and, sooner or later, suffer a damaging attack. Efficiently responding to each security incident will generally save an entity time, money and possibly even its reputation.
The incident response plan establishes procedures to address attacks on the entity’s IT infrastructure. The incident response procedures must enable security personnel to identify, mitigate and recover from malicious computer incidents.
The incident response plan needs a policy foundation, and then a sufficiently detailed task list and decision tree. This plan need not be comprehensive at the outset. Required contents include:
- Who to inform (names and full contact information).
- When to inform each of them (often keyed to an estimate of the time that the incident began).
- When to get law enforcement personnel involved.
- How to handle evidence and what to keep.
After building a lean framework, focus on evolving the incident response plan over time. The list above is only an essential beginning—it does not represent a mature incident response plan.
One of the maxims of security is, “Prevention is ideal, but detection is a must.” As long as you allow traffic to flow between the enterprise network and the Internet, the opportunity for an attacker to sneak in and penetrate the network is there. New vulnerabilities are discovered every week, and there are very few ways to defend yourself against an attacker using a new vulnerability.
Once you are attacked, without logs from an intrusion detection and firewall system solution, you have little chance of discovering what the attackers did. Without that knowledge, your organization must choose between completely reloading the operating system from original media and then hoping the data backups were OK, or taking the risk that you are running a system that a hacker still controls.
You cannot detect an attack if you do not know what is occurring on your network. Firewall systems and intrusion detection technology are vital, required components of enterprise perimeter security today.
Note: The key objective here is to adopt and implement procedures for timely reporting of breaches of security.
Types of IDS
There are primarily two types of IDS solutions. They are:
- Network-based IDS
- Host-based IDS
The majority of commercial IDS solutions are network-based. These detect attacks by capturing and analyzing network packets. Listening on a network segment or switch, one network-based IDS can monitor network traffic affecting multiple hosts that are connected to the network segment, thus protecting those hosts. Network-based intrusion detection systems can monitor a large network. You may consider designing a solution that requires the deployment of network-based IDS on critical subnets.
Host-based IDS solutions operate on information collected within a single computer system. A host-based IDS can determine exactly which processes and users are involved in an attack. A host-based IDS can see the outcome of an attempted attack. This type of IDS should be installed on critical server systems in the enterprise.
We strongly recommend the deployment of an intrusion detection product on the enterprise network. Examples of such products/solutions include:
- Internet Security Systems’ (ISS) Internet Scanner
- Snort (public domain)
The Internet Scanner application, an integrated part of Internet Security Systems’ security management platform, provides comprehensive network vulnerability assessment for measuring online security risks. Internet Scanner performs scheduled and selective probes of communication services, operating systems, applications and routers to uncover and report vulnerabilities. These are essential components for securing any health-care entity. In addition to providing flexible risk management reports, Internet Scanner prepares remediation advice, trend analyses and comprehensive data sets to support sound, knowledge-based policy enforcement.
The strength of Snort as an IDS solution is the ability to create and use rule sets. Snort.org has a forum where rules can be found and discussed. To take Snort to a higher level, there is a GUI that can be implemented, called IDScenter. Snort is one example of an option for an enterprise to consider. The advantage, obviously, is cost; the disadvantage is support.
IDS components, such as agents, deployed inside the enterprise network backbone can be vital in detecting unauthorized activity by authorized users within the organization’s security perimeter. Each organization needs to integrate IDS as a necessary addition to the security infrastructure. Security professionals must acquire knowledge and skills to effectively deploy and manage IDS solutions. IDS deployment requires very careful planning, preparation, prototyping, testing and specialized training.
Uday O. Ali Pabrai, CEO of ecfirst.com, created the CIW program and is the co-creator of the Security Certified Program (www.securitycertified.net). Pabrai is also vice-chair of CompTIA’s Security+ and i-Net+ programs and recently launched the HIPAA Academy. E-mail him at firstname.lastname@example.org. | <urn:uuid:e4b0b70c-96d6-4898-8256-910ca058b0a0> | CC-MAIN-2017-04 | http://certmag.com/prevent-detect-and-respond-intrusion-detection-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00066-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93224 | 1,441 | 3.390625 | 3 |
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 858.32K | Year: 2015
Autonomous robots, capable of independent and intelligent navigation through unknown environments, have the potential to significantly increase human safety and security. They could replace people in potentially hazardous tasks, for instance search and rescue operations in disaster zones, or surveys of nuclear/chemical installations. Vision is one of the primary senses that can enable this capability, however, visual information processing is notoriously difficult, especially at speeds required for fast moving robots, and in particular where low weight, power dissipation and cost of the system are of concern. Conventional hardware and algorithms are not up to the task. The proposal here is to tightly integrate novel sensing and processing hardware, together with vision, navigation and control algorithms, to enable the next generation of autonomous robots. At the heart of the system will be a device known as a vision chip. This bespoke integrated circuit differs from a conventional image sensor, including a processor with each pixel. This will offer unprecedented performance. The massively parallel processor array will be programmed to pre-process images, passing higher-level feature information upstream to vision tracking algorithms and the control system. Feature extraction at pixel level results in an extremely efficient and high speed throughput of information. Another feature of the new vision chip will be the measurement of time of flight data in each pixel. This will allow the distance to a feature to be extracted and combined with the image plane data for vision tracking, simplifying and speeding up the real-time state estimation and mapping capabilities. Vision algorithms will be developed to make the most optimal use of this novel hardware technology. This project will not only develop a unique vision processing system, but will also tightly integrate the control system design. Vision and control systems have been traditionally developed independently, with the downstream flow of information from sensor through to motor control. In our system, information flow will be bidirectional. Control system parameters will be passed to the image sensor itself, guiding computational effort and reducing processing overheads. For example a rotational demand passed into the control system, will not only result in control actuation for vehicle movement, but will also result in optic tracking along the same path. A key component of the project will therefore be the management and control of information across all three layers: sensing, visual perception and control. Information share will occur at multiple rates and may either be scheduled or requested. Shared information and distributed computation will provide a breakthrough in control capabilities for highly agile robotic systems. Whilst applicable to a very wide range of disciplines, our system will be tested in the demanding field of autonomous aerial robotics. We will integrate the new vision sensors onboard an unmanned air vehicle (UAV), developing a control system that will fully exploit the new tracking capabilities. This will serve as a demonstration platform for the complete vision system, incorporating nonlinear algorithms to control the vehicle through agile manoeuvres and rapidly changing trajectories. Although specific vision tracking and control algorithms will be used for the project, the hardware itself and system architecture will be applicable to a very wide range of tasks. 
Any application that is currently limited by tracking capabilities, in particular when combined with a rapid, demanding control challenge, would benefit from this work. We will demonstrate a step change in agile, vision-based control of UAVs for exploration, and in doing so develop an architecture which will have benefits in fields as diverse as medical robotics and industrial production.
Agency: GTR | Branch: Innovate UK | Program: | Phase: Collaborative Research & Development | Award Amount: 2.56M | Year: 2014
Hybrid Air Vehicles Ltd has formed a collaborative industrial research team with Blue Bear Systems Research, Forward Composites, Liverpool University, Sheffield University and Cranfield University. This project team will advance the fundamental and interrelated enabling technologies required to maintain the UK's lead in the field of hybrid air vehicles – a novel aircraft form with substantial worldwide sales potential (against competitors such as Lockheed Martin and EADS). The project will focus on lowering the developmental risks in key technology areas such as novel aircraft aerodynamics, carbon composite structures, avionics monitoring systems and improving rate production to enable launch of production design and manufacture. The project results will be exploited by HAV and the UK aerospace supply chain, generating UK jobs and maintaining HAV’s lead in the field of hybrid air vehicles and LTA technology.
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 4.93M | Year: 2014
The global Robotics and Autonomous Systems (RAS) market was $25.5bn in 2001 and is growing. The market potential for future robotics and autonomous systems is of huge value to the UK. The need for expansion in this important sector is well recognised, as evidenced by the Chancellor of the Exchequers announcement of £35m investment in the sector in 2012, the highlighting of this sector in the 2012 BIS Foresight report Technology and Innovation Futures and the identification of robotics and autonomous systems by the Minister for Universities and Science in 2013 as one of the 8 great technologies that will drive future growth. This expansion will be fuelled by a step change in RAS capability, the key to which is their increased adaptability. For example, a home care robot must adapt safely to its owners unpredictable behaviour; micro air vehicles will be sent into damaged buildings without knowing the layout or obstructions; a high value manufacturing robot will need to manufacture small batches of different components. The key to achieving increased adaptability is that the innovators who develop them must, themselves, be very adaptable people. FARSCOPE, the Future Autonomous and Robotic Systems Centre for PhD Education, aims to meet the need for a new generation of innovators who will drive the robotics and autonomous systems sector in the coming decade and beyond. The Centre will train over 50 students in the essential RAS technical underpinning skills, the ability to integrate RAS knowledge and technologies to address real-world problems, and the understanding of wider implications and applications of RAS and the ability to innovate within, and beyond, this sector. FARSCOPE will be delivered by a partnership between the University of Bristol (UoB) and the University of the West of England (UWE). It will bring together the dedicated 3000 square metre Bristol Robotics Laboratory (BRL), one of the largest robotics laboratories in Europe, with a trainin and supervising team drawn from UoB and UWE offering a wide breadth of experience and depth of expertise in autonomous systems and related topics. The FARSCOPE centre will exploit the strengths of BRL, including medical and healthcare robotics, energy autonomous robotics, safe human-robot interactions, soft robotics, unconventional computing, experimental psychology, biomimicry, machine vision including vision-based navigation and medical imaging and an extensive aerial robotics portfolio including unmanned air vehicles and autonomous flight control. Throughout the four-year training programme industry and stakeholder partners will actively engage with the CDT, helping to deliver the programme and sharing both their domain expertise and their commercial experience with FARSCOPE students. This includes regular seminar series, industrial placements, group grand challenge project, enterprise training and the three-year individual research project. Engaged partners include BAE Systems, DSTL, Blue Bear Systems, SciSys, National Composites Centre, Rolls Royce, Toshiba, NHS SouthWest and OC Robotics. FARSCOPE also has commitment from a range of international partners from across Europe, the Americas and Asia who are offering student exchange placements and who will enhance the global perspective of the programme.
Agency: GTR | Branch: Innovate UK | Program: | Phase: Feasibility Study | Award Amount: 109.42K | Year: 2013
This project develops an autonomous path following capability (in the form of a sensor and algorithm kit) for aerial inspection robots used to remotely survey structures in sectors such as oil & gas, mining, energy, chemical processing, water and transport. Aerial robots have enormous potential to slash costs relative to manual inspections, which are equipment and manpower intensive and typically represent a large proportion of the recurring cost of a structure over its lifetime. Current generation robots are typically operated manually within line of sight of a remote operator; this project will develop a sensor and algorithm kit enabling such robots to automatically retrace their steps around a known structure using vision and learning, greatly speeding up repetitive surveys. A 3D visual feature map is generated and refined, and over subsequent missions a robot would use this map of the structure for autonomous visual navigation using a relocalisation approach, allowing it to reach and return from the areas to be inspected autonomously. The proposed robot combines the real-time full 3D visual mapping and relocalisation methods developed at the University of Bristol and flight control technology developed by Blue Bear. | <urn:uuid:ef9b2eb8-664f-4a7a-aae3-c85f229e5169> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/blue-bear-systems-research-ltd-222307/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916323 | 1,798 | 2.546875 | 3 |
The story of police brutality playing out on city streets, in news accounts and on cellphone video footage has been missing one data point – a federal accounting of the problem.
This spring, Justice Department-funded data scientists will present findings from a pilot project that, in essence, crowdsources facts on police homicides. So far, the number of possible deaths during and after police pursuit is far higher than the figures tabulated by both journalists and activists appalled by the longtime paucity of data on excessive use-of-force.
The project is part of a new effort by the Bureau of Justice Statistics focused on capturing an official record of the whole "universe" of law enforcement homicides. The agency has assigned part of the task to an artificial intelligence tool that crawls online news for the most relevant potential cases of civilians dying during arrests. Soon, bureau data analysts will compare the reports to local agency records.
A survey in the offing will measure police body camera use nationwide.
"Because newspaper accounts will vary a lot in the elements that they cover, we don't have a great deal of confidence" that they will include every data element of interest, Bureau of Justice Statistics Director Bill Sabol told Nextgov.
Project leaders emphasize the census-taking does not rely on press accounts, but rather consults articles to "nominate" cases for follow-up with law enforcement, medical examiners and criminal investigative agencies.
Agencies’ "additional facts and circumstances may reveal a different cause of death,” Sabol said. Or, “somebody may have written a story about a homicide, but the homicide didn't occur.”
The end goal of this fact-finding mission, the government says, is to create a more efficient way of acquiring reliable details on the number of people killed by law enforcement.
Human rights groups say legitimate data will facilitate transparency and accountability when cops use unreasonable force.
The whole project is part of a larger departmentwide effort to put a credible number on the cases of use-of-force in the United States. There are no good figures on people severely harmed, shot at or killed by the police, partly because of the difficulty of coming up with common criteria to record for each situation.
Justice's "arrest-related deaths program" -- which has been in place since 2003 -- identified only between 59 and 69 percent of the estimated actual total of fatal interactions with police in 2011, according to a March 2015 bureau assessment of the program. The ongoing crowdsourcing and fact-checking process is a redesign of that program.
"Arrest-related deaths" encompass all fatalities -- not only homicides but also incidents like suicides and accidents -- that happen during pursuit of a suspect.
If the statistics bureau likes what it sees in the spring, a contract solicitation to maintain the system could be issued in time for reporting to begin in October.
Robo-Search Engine Picks Death News like Netflix Picks Movies
The push to offer citizens and government agencies a quantitative depiction of police brutality arrives during a low point for the public's trust in police.
The killing of black teenager Michael Brown two years ago by a white police officer in Ferguson, Missouri, became the tipping point for suspicions that U.S. authorities tend to pull the trigger disproportionately on black individuals.
More recently, the November 2015 release of a police dashboard camera video showing a white Chicago cop fatally shooting black teenager Laquan McDonald 16 times in 2014 prompted nationwide protests.
On Dec. 3, an FBI advisory policy board recommended the FBI collect data on the use of force by officers whenever their actions result “in death or serious bodily injury, or whenever a law enforcement officer discharges a firearm at or near an individual," FBI spokesman Stephen G. Fischer Jr. said in an emailed statement.
The figures would be amassed through the FBI’s longstanding Uniform Crime Reporting Program, he said. Those data elements, however, only partially overlap with the statistics bureau's informational needs.
So, about a year ago, the bureau began conversations with the FBI about creating a single questionnaire for local agencies that would combine both sets of statistical reporting.
Even if a single FBI form becomes a reality, the statistics bureau would maintain the crowdsourcing system as a type of check.
"If we find cases in Google Alerts and they are not appearing in the FBI data, then we could use that to follow up with agencies to confirm that, in fact, yes, the report is correct and the agency should submit a report," Sabol said. "We want to keep something going to make sure the coverage of all the eligible cases is complete."
Last summer, contractors from research institute RTI International developed the artificial intelligence technology that picks out news reports about arrest-related deaths.
The computer program winnows down an overwhelming amount of results from keyword searches ("police shooting" pulls up 7 million hits on Google) by dissecting the text of stories and matching this analysis with articles researchers previously read. It serves up news articles in a way similar to how Netflix suggests movie recommendations based on films a viewer watched in the past.
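The bureau has not published its code, but the behavior described, learning from articles researchers already marked as relevant and then ranking new hits, can be sketched with off-the-shelf text-classification tools. The example below uses scikit-learn as a stand-in (an assumption; the actual RTI system may work quite differently), and the headlines used for training are invented.

```python
# Illustrative relevance ranker in the spirit of the system described above:
# learn from articles humans already labeled, then score and sort new hits.
# Assumes scikit-learn is installed; all headlines are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_articles = [
    ("Man dies in custody after police pursuit downtown", 1),
    ("Officer-involved shooting leaves suspect dead, agency says", 1),
    ("Police department announces new community picnic", 0),
    ("Film review: a shooting star lights up the box office", 0),
]
texts, labels = zip(*labeled_articles)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(features, labels)

new_hits = [
    "Suspect dies after being tased during arrest, coroner confirms",
    "Local police win charity softball tournament",
]
relevance = model.predict_proba(vectorizer.transform(new_hits))[:, 1]
for score, headline in sorted(zip(relevance, new_hits), reverse=True):
    print(f"{score:.2f}  {headline}")   # review the highest-scoring nominations first
```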
Humans Have the Ultimate Say-So
After a three-month tryout, the technique identified 400 possible arrest-related deaths, including homicides and other fatalities.
The number represents potential arrest-related deaths not yet confirmed and totals could be revised downward. For example, follow-up with local agencies might reveal that multiple articles about the same unnamed victims are double-counting police deaths, according to program managers.
Still, that government statistic is larger than figures calculated by high-profile, grassroots efforts to quantify police-involved deaths during the same time period.
The criteria measured by FatalEncounters.org hew closely to the bureau’s selections: shootings, other uses of force like taser homicides, accidents, suicides and natural causes. The organization found 339 deaths during that same time frame, from June 2015 through August 2015.
A Guardian newspaper project called “The Counted,” whose parameters are similar, but excludes suicides and natural causes, recorded 303 fatalities.
KilledbyPolice.net, which tracks all five types of deaths plus off-duty killings, logged 314 incidents.
The Washington Post's database of strictly police shootings resulting in deaths recorded 263 killings.
A team of humans at the bureau has the ultimate say-so in selecting potential cases, after reviewing the pool chosen by the machine-learning program, officials say.
Duren Banks, an RTI International criminologist and former bureau unit chief, said some of the technology in play is often used by commercial enterprises that want to gauge how often they are being talked about in the news and in what context.
"We've developed our own system to weave through that information and prioritize which media to look at specifically," she said.
The approach removes duplicate media reports, deaths in foreign countries and other articles outside the project's scope. It then ranks the hits in order of relevancy.
"We don't rely on blogs or information put out by partisan-type of groups," Banks said.
Staff members record the reported date of death, law enforcement agency involved and deceased individual's name, if disclosed.
During the fact-finding phase, the bureau will ask local agencies questions such as: Was the deceased armed? What might the individual have been charged with? How many law enforcement agencies responded to the scene? The bureau will take down the demographics of the person who died, including race and ethnicity, as well as whether the incident occurred in a private residence, public space or law enforcement facility.
A Long Wait for Meaningful Data
Some current and former police officers do not expect the final statistics to reflect that police brutality is pervasive in the United States.
With the FBI and the bureau data, "we can fill in some of the narrative because the narrative is being filled in anyway," said Mark A. Marshall, sheriff of Isle of Wight County, Virginia. While pundits in the media claim use-of-force by police is a systemic problem across law enforcement, "I would beg to differ,” he said.
"There are millions and millions of contacts involving citizens and the law enforcement that don’t end up in use-of-force applications, or certainly in deadly-force applications. They are relatively rare, but they do occur," said Marshall, who sits on the advisory board that recommended the FBI quantify such instances. "Anywhere those kinds of incidents occur, we’re all painted, whether it’s in Chicago, whether it's New York, or whether it's in Isle of Wight, Virginia, we all get painted with the same brush."
Without accurate long-term statistics, it is impossible to tell whether police abuse rates have increased or decreased from the time of, for example, the 1991 videotaped beating of Rodney King by L.A.P.D. officers to the 2014 surveillance video on YouTube of 12-year-old Tamir Rice slain by a Cleveland police officer.
There is speculation that police have exercised extreme force against civilians for decades, and the trend went undetected until body-cams, smartphone cameras and surveillance cameras sent images of misconduct viral.
Peter Kraska, who researches police militarization and holds a chair at Eastern Kentucky University’s School of Justice Studies, said: "It may be that it's infinitely better than it used to be, or it may be that it's infinitely worse. We just don't know.”
For some perspective, during the 12-month period the statistics bureau last collected local agency records, which was in 2011, it counted 689 homicides by police.
Police Departments Don’t Have to Record
Law enforcement agencies are not required to report police killings to the FBI or the statistics bureau.
Most agencies have zero deaths each year, so the burden to participate is small, said Banks, the RTI criminologist. Some states actually mandate that agencies assemble and report the data. But for other agencies, primarily larger ones, the process does take effort and money.
Between 2003 and 2011, the bureau provided slightly more than $730,000 in assistance to states for gathering data through a centralized statistical analysis center, Banks said.
Kanya Bennett, American Civil Liberties Union legislative counsel, said there would be less “teeth pulling" if Justice tried more of a get-tough approach.
Justice has the option to dock a certain percentage of a police department's federal funding if it does not submit data on law enforcement homicides, under the Death in Custody Reporting Act, which governs the bureau's data collections.
"I think that it is important that the Bureau of Justice Statistics do its own independent gathering of information," Bennett said. "However, it should have a fairly good starting point if it were actually requiring police departments to report."
Many law enforcement agencies oppose tying performance metrics to federal dollars, arguing the loss of money will compromise public safety. But Bennett argued, "We should not be handing out federal dollars if we are not even getting the most basic of information about how police are doing their job with federal dollars.”
Statistics bureau officials, in 2010 and 2011, began requesting more data from local police departments dealing with specific cases that had garnered media coverage. After that, “participation increased,” officials told Nextgov in an email.
The current pilot study should serve as a “direct test” of police departments' ability to respond to questions about press accounts, officials said.
The bureau "is committed to working with state and local law enforcement agencies as well as the FBI to ensure" the program data is "reliable," their emailed statement said.
Because emotions are running high on the issue of excessive force, there is pressure to show society some hard and fast figures that might not turn out to be right, Marshall cautioned. At the statistics bureau, the fact-checking probably will create complications, he said, adding, "I just don’t know that that’s going to be an accurate number."
In the meantime, all 51 of Marshall's officers now are outfitted with body cameras.
"As I told our deputies, it captures the good, the bad and the ugly," he said. "If we’ve got things that we need to improve, if there are training issues, that gives us the accountability and the transparency."
It’s All About Context
The statistics bureau expects to survey about 5,000 law enforcement agencies early in 2016 on their body-cam use, policies and practices.
The agency, however, has not assessed incorporating video into its accounting of police-involved fatalities.
"We haven't given serious thought to whether body-cams or car-cams or other video could be used to capture information about the circumstances surrounding the deaths," said Sabol, the Bureau of Justice Statistics director. The variations in police department policies may rule out the use of video feeds as a data set, he said.
Nick Selby, a detective with a police department near Dallas, Texas, maintains a tally of law enforcement homicides with video accompaniment, when available, as a business. The data is available for free. He stressed that attaining a deep understanding of deadly situations will be critical for Justice.
"We need much more data than is currently being gathered, and the data must be around the context of the event -- from the 911 call to the autopsy and final investigative reports, grand jury and court testimony," said Selby, who was not speaking in his capacity as a detective.
His company, analytics firm StreetCred Software, logs "data on the officers -- any prior complaints of excessive force -- and the decedent -- any prior conviction of violent crime," he said.
For video to be valuable, it must be taped, tagged, stored and parsed.
Terry Gainer, a former Senate sergeant at arms, chief of the U.S. Capitol police and 20-year veteran of the Chicago Police Department, now consults organizations on body camera surveillance.
"It's the analytics that will let everybody decide whether law enforcement is hiring the right officer, whether we have the right policy, the right training, the right supervision,” he said. | <urn:uuid:029bc82f-4d70-46a7-a6a8-ba4dcfff118d> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2016/01/justice-statisticians-step-lethal-force-debate/125011/?oref=ng-HPriver | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946669 | 2,926 | 2.625 | 3 |
In a Passive Optical Network (PON), optical splitters play an important role in Fiber to the Home (FTTH) networks by allowing a single PON interface to be shared among many subscribers. Optical splitters are installed in each optical network between the PON Optical Line Terminal (OLT) and the Optical Network Terminals (ONTs) that the OLT serves. During the deployment of a fiber-to-the-home passive optical network, we will usually face some physical access network design problems. This article may help you solve FTTH splitting level and ratio design problems.
Choose PLC Splitter or FBT Splitter?
Before we start to discuss the splitting level and ratio design, it is necessary to choose the right optical splitter type for your FTTH network. There are two types of splitters in current FTTH applications: the PLC splitter and the FBT splitter. Here is a comparison between these two splitter types:
| Parameters | PLC Splitter | FBT Splitter |
| --- | --- | --- |
| Wavelength Range | 1260-1650 nm | Single/dual/triple window |
| Splitting Ratio | Equal division | Equal or non-equal division |
| Dimensions | Small | Large size for multi-channel |
| Cost | Higher price at low channel counts | Lower price for small channel counts |
As we can see in the table above, with the rapid growth of FTTH worldwide, the requirement for larger split configurations (1×32, 1×64, etc) in these networks has also grown in order to serve mass subscribers, since PLC splitters offer very accurate and even splits with minimal loss in an efficient package, they are offer a better solution for today’s FTTH applications than FBT splitters.
FTTH Network Splitting Level Design
The PON is the optical fiber infrastructure of an FTTH network. The first crucial architectural decision for the PON network is that of optical splitter placement. The PON splitting may be achieved by centralized splitting (one-level) or by cascaded splittings (two-level or more). A centralized approach typically uses a 1×32 splitter located in a fiber distribution hub (FDH). The splitter is directly connected via a single fiber to a OLT in the central office. On the other side of the splitter, 32 fibers are routed to 32 customers’ homes, where it is connected to an ONT. Thus, the PON network connects one OLT port to 32 ONTs.
A cascaded approach may use a 1×4 splitter residing in an outside plant enclosure. This is directly connected to an OLT port in the central office. Each of the four fibers leaving this lever 1 splitter is routed to an access terminal that houses a 1×8 level 2 splitter. In this scenario, there would be a also total of 32 fibers (4×8) reaching 32 homes. It is possible to have more than two splitting levels in a cascaded system, and the overall split ratio may vary (1×16 = 4×4, 1×32 = 4×8, 1×64 = 4x4x4).
A centralized architecture typically offers greater flexibility, lower operational costs and easier access for technicians. A cascaded approach may yield a faster return-on-investment with lower first-in and fiber costs. Usually, the centralized splitting solution is used in crowded city center or town areas, in order to reduce cost and easy to maintain the optical distributed network (ODN) nodes. In the other hand, two-level and multi-level cascaded splitting solution is used in curb or village places, to cover widely ODN nodes, conserve resources and save the money.
FTTH Network Splitting Ratio Design
The most common splitters deployed in a PON system is a uniform power splitter with a 1:N or 2:N splitting ratio (N=2~64), where N is the number of output ports. The optical input power is distributed uniformly across all output ports. Different ratio splitters may perform differently in your network. Then, how to design your splitting ratio? According to the passage mentioned above, if you choose the centralized splitting solution, you may need to use 1×32 or 1×64 splitter. However, if you choose the cascaded splitting solution, 1×4 and 1×8 splitter may be used more often. Besides, based on our EPON/GPON project experience, when the splitting ratio is 1:32, your current network can receive qualified fiber optic signal in 20 km. If your distance between OLT and ONU is small, like in 5 km, you can also consider about 1:64.
When to design your FTTH network splitting level, in fact, centralized splitting and cascaded splitting both has its advantages and disadvantages. We had to weight these factors and select an appropriate splitting level for our network. As for splitting ratio design, to ensure a reliable signal transmission, the longer the transmission distance, the lower splitting ratio should be used. FS.COM provides full series 1xN or 2xN PLC splitters which can divide a single/dual optical input(s) into multiple optical outputs uniformly, and offer superior optical performance, high stability and high reliability to meet various application requirements. For more information, you can click here to learn about our PLC splitter quality assurance program or click here to download our PLC splittrs catalog. | <urn:uuid:7bee0bac-86c8-4c8b-80d0-eb6e02952230> | CC-MAIN-2017-04 | http://www.fs.com/blog/how-to-design-your-ftth-network-splitting-level-and-ratio.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00296-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911925 | 1,130 | 2.859375 | 3 |
A PKI (Public Key Infrastructure) allows principals to authenticate one another using asymmetric encryption.
A client (C) claiming to be a principal (P) authenticates to a server (S) as follows:
- S sends C a random number R.
- C encrypts R with his private key, and sends the result to S.
- S decrypts the result with P's public key.
- If the result matches R, then S knows that C must possess P's private key, and so C is assumed to be P. | <urn:uuid:88d7606c-dec9-46a7-beb9-9fdceffd077f> | CC-MAIN-2017-04 | http://hitachi-id.com/concepts/cryptographic_authentication.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00112-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.870815 | 115 | 3.296875 | 3 |
In my previous blog post, I described the most important H.323 protocols, standards, and network elements. This time I'll be taking a look at how those network elements communicate.
As mentioned last time, H.225 RAS allows communication between endpoints and gatekeepers, using UDP ports 1718 and 1719. There are a number of different H.225 RAS messages, including:
Gatekeeper discovery (Gatekeeper Request [GRQ], Gatekeeper Confirm [GCF], and Gatekeeper Reject [GRJ]): used by endpoints to discover gatekeepers with which to register.
Registration and unregistration (Registration Request [RRQ], Registration Confirm [RCF], Registration Reject [RRJ], Unregister Request [URQ], Unregister Confirm [UCF], and Unregister Reject [URJ]): these messages can be used to register/unregister addresses with a gatekeeper and join a zone (a collection of H.323 network elements registered with a gatekeeper).
Admission control (Admission Request [ARQ], Admission Confirm [ACF], and Admission Reject [ACJ]): these messages are used for call admission control.
Call Disconnection (Disengage Request [DRQ], Disengage Confirm [DCF], and Disengage Reject [DRJ]): these are used to disconnect calls.
Bandwidth control (Bandwidth Request [BRQ], Bandwidth Confirm [BCF], and Bandwidth Reject [BRJ]): these can be used to change the amount of bandwidth during a call.
Endpoint location (Location Request [LRQ], Location Confirm [LCF], and Location Reject [LRJ]): as you can probably guess, these are used to locate endpoints.
Status information (Information Request [IRQ], and Information Request Response [IRR]), Information Request Ack [IACK], and Information Request NAck [INAK]): these can be used to find status information.
H.225 call signalling (as opposed to H.225 RAS) uses Q.931 messages (you may be familiar with these from studying ISDN) to setup, maintain, and tear down calls.
H.245 signaling is used for media control, including establishing master/slave relationships, exchanging H.323 terminal capabilities, and for logical channel signalling. Logical channels can be used to transport media traffic such as audio, video, and data.
Note that terminals (phones) A and B shown in intrazone call setup example are not themselves H.323 enabled. Instead terminals A and B connect to Cisco H.323 gateways (GWA and GWB), which are H.323 enabled, function as H.323 endpoints, and interact with a gatekeeper (GK1) in order to setup the call.
Once you have had a good look at those first two diagrams, and are comfortable with their content, you may like to take a look at this diagram, which shows H.225 call signalling and H.245 call control in more detail. This last example does not include any H.225 RAS (there is no gatekeeper involved in the call flow), and H.225 call signalling and H.245 call control is shown directly between two H.323 enabled PCs (H.323 terminals/endpoints). | <urn:uuid:2efaa73b-537d-4630-a5e3-4bd0debdb632> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2344396/cisco-subnet/ccie-voice---ccvp-exam-objectives--5--h323-messages--cont--.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872271 | 693 | 2.8125 | 3 |
With industry experts predicting the eventual demise of Moore’s Law, the long-term prospects for silicon-based computing appear rather bleak. However, a research project at the University of Bristol may give the technology a second life. The institution’s center for quantum photonics announced a breakthrough process that uses conventional semiconductors to create quantum chips.
Researchers have proposed quantum computing can be implemented using photons – essentially particles of light. But this typically requires the use of optical fibers, or more recently, exotic waveguide circuits made of silica and silicon-oxy-nitride. All of these are rather inconvenient to use when fabricating mass-produced computer chips.
To get around that problem, the researchers have developed integrated quantum photonic waveguide circuits using silicon-on-insulator materials. As explained in the announcement, the main advantage of this approach is that these waveguides circuits are compatible with those used in traditional microprocessors.
The leap from using glass-based circuits to silicon-based circuits is significant because fabricating quantum circuits in silicon has the major advantage of being compatible with modern microelectronics. Ultimately this technology could be integrated with conventional microelectronic circuits, and could one day allow the development of hybrid conventional / quantum microprocessors.
Using these structures, researchers created circuits capable of performing quantum calculations, enabling a different type of computing than that delivered by conventional binary logic. Instead of assigning bits with 1’s and 0’s, quantum computers use qubits, which have the ability to assume the value of 1, 0 or both, otherwise known as superposition. As the number of qubits increase, the potential computational capacity grows exponentially.
“Using silicon to manipulate light, we have made circuits over 1000 times smaller than current glass-based technologies. It will be possible to mass-produce this kind of chip using standard microelectronic techniques, and the much smaller size means it can be incorporated in to technology and devices that would not previously have been compatible with glass chips,” said Mark Thompson, deputy director of the Center for Quantum Photonics at Bristol.
Containing components as small as 10 micrometers, the silicon quantum chip was able to perform quantum interference and manipulation operations. Given this development and other advances in the field of quantum computing, Bristol’s research team believes the components now exist to build a fully functional quantum processors. | <urn:uuid:5257d168-9758-4ed6-9b16-784c7bbd94d9> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/09/06/quantum_circuits_built_with_conventional_silicon_materials/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912575 | 488 | 4.03125 | 4 |
In this course, you will learn how to use Spring in conjunction with the enterprise resources and technologies available in today's systems and architectures. The course covers a wide spectrum of topics, so you should have a basic understanding of those technologies and resources prior to taking this class.
The Spring framework is an application framework that provides a lightweight container that supports the creation of simple-to-complex components in a non-invasive fashion. Spring's flexibility and transparency is congruent and supportive of incremental development and testing. The framework's structure supports the layering of functionality such as persistence, transactions, view-oriented frameworks, and enterprise systems and capabilities. Spring's aspect-oriented programming (AOP) framework enables developers to declaratively apply common features and capabilities across data types in a transparent fashion.
As an enabler for the integration of Java applications and enterprise resources, the Spring framework represents a significant step forward. If you want to deliver an enterprise application within the Spring framework, you'll find this course essential.
Note that our Spring training covers the entire spectrum and is highly modularized. As such, we can customize courses to your specific needs.
Experience expert-led online training from the convenience of your home, office or anywhere with an Internet connection.
Train your entire team in a private, coordinated professional development session at the location of your choice.
Receive private training for teams online and in-person.
Request a date or location for this course. | <urn:uuid:70824552-887f-4f04-81dd-efe738b63d13> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/121333/mastering-spring-42-and-the-enterprise-tt3374/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00499-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.899689 | 298 | 2.578125 | 3 |
Technical stuff really gives us a hard time understanding what they are. Though most of us are not technician, we still want to understand the latest trends of our time. Even in the line of telecommunications it has developed progressively. There are a lot of transceivers being produced to increase your signal. A transceiver is a package with a transmitter and a receiver. To be able to transmit data, we need transceivers. The most common way of transmitting data is to use light-based fiber optics. The use of electronic signals is the traditional and slower way of transmitting data. The best modules to use today are the XFP and SFP modules.
XFP module also called XFP transceiver and what is an XFP module? “XFP” stands for “10 Gigabit Small Form Factor Pluggable.” With XFP you will surely experience a fast transmission of data in your computer network including your telecommunication links.
According to Wikipedia, XFP is a hot-swappable and protocol independent module. It means that you can replace the component without shutting down the whole system. XFP can replaced without interrupting the operation of your system. Its usual operation is at optical wavelengths of 850nm, 1310nm, or 1550nm. To be able to install this module in your computer, you should have one of these: 10 Gigabit Ethernet, 10 Gbit/s Fibre Channel, Synchronous Optical Networking at OC-192 rates, Synchronous Optical Networking STM-64, 10 Gbit/s Optical Transport Network OTU-2, and parallel optics links. XFP modules are able to function with just a single wavelength or dense wavelength division multiplexing techniques.
“SFP” stands for “Small Form-factor Pluggable.” It is a compact, hot-pluggable transceiver used for both telecommunication and data communication applications. It interfaces a network device mother board (for a switch, router, media converter or similar device) to a fiber optic orcopper networking cable. It is a popular industry format jointly developed and supported by many network component vendors. SFP transceivers are designed to support SONET, Gigabit Ethernet, Fibre Channel, and other communications standards. Due to its smaller size, SFP obsoletes the formerly ubiquitous Gigabit interface converter (GBIC); the SFP is sometimes, erroneously, referred to as a ‘Mini-GBIC’ (although no such device has ever been defined in the MSAs). | <urn:uuid:241b2f51-76a6-4e38-a015-96b3f139ea26> | CC-MAIN-2017-04 | http://www.fs.com/blog/xfp-module-vs-sfp-module.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00297-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920627 | 526 | 3.28125 | 3 |
Last week’s news that the U.S. would freeze a landmark nuclear cooperation accord with Russia hardly came as a shock. It was only the latest in a series of ambitious, years-in-the-making initiatives that have collapsed, one by one, in the five months since the Crimea annexation.
And like other recent rifts, this one had potentially galactic implications. The deal, struck last September, also envisioned U.S.-Russia collaboration on “defense from asteroids,” which The New York Times described as “shorthand for a proposal to recycle a city-busting warhead that could be aimed at an incoming earth-destroyer.” The paper made sure to point out that Hollywood had this idea—twice—nearly 15 years ago. But the report that real-life planet-salvation, too, might have fallen through the widening cracks in the U.S.-Russia relationship was startling. Whither asteroid defense?
The real world had been gradually catching up to Bruce Willis and company well before February 2013, when a meteor streaked through the sky over the Russian city of Chelyabinsk before crashing to Earth, blowing out windows and injuring more than a thousand people in the shock wave it released. Asteroid detection had already improved dramatically over the preceding two decades or so. But the Chelyabinsk incident vividly exposed the remaining gaps (a meteoroid is a fragment from a comet or asteroid). The issue's urgency was especially clear for the Russians, who promptly kicked asteroid defense-planning into high gear.
Whatever your thoughts on Armageddon (disclosure: I saw it in the theater, where I wept), it's hard to deny that asteroids, along with nuclear weapons, are one of the gravest threats to human civilization. “It’s like cosmic roulette,” Ed Lu, who runs a nonprofit, the B612 Foundation, dedicated to tracking asteroids, told PBS NOVA last November. Lu is a former astronaut who, along with fellow former astronaut Rusty Schweickart, started the organization in 2002 to come up with solutions for the asteroid threat. Scientists now estimate that they know of about 90 to 95 percent of the big, potentially Earth-destroying ones, measuring roughly 1,000 yards across, that are thought to have killed the dinosaurs, of which current estimates say there are about 1,100. But the rate is not so good for the smaller ones, which can still be the size of a football field and cause enormous damage, and only about a fourth of which have been discovered. Meanwhile scientists have a huge blind spot facing the sun, which is how the Chelyabinsk meteor snuck up on Siberia. “[T]he house always wins. And in this game, we’re not the house,” Lu concluded grimly.
Research has so far yielded three hypothetical ways to save the planet from an asteroid: Push it, pull it, or nuke it. You can push an asteroid off a collision course with Earth with a laser or with an old-fashioned collision—sending a spacecraft to slam into the rock and shove it away. You can pull an asteroid to safer metaphorical ground with a “gravity tractor,” a space vehicle big enough to coax the asteroid into its own gravitational field, gently redirecting the killer rock from our placid shores. Or you can blow it up with a nuclear weapon.
This literal nuclear option, like the figurative ones, is a controversial last resort. The first two methods require extended time horizons of roughly a decade or more to spot the ill-intentioned asteroid, get to it, and change its orbit. Nuclear weapons are a potential backup for when there’s less time, and their use in space, in addition to being illegal under the 1967 Outer Space Treaty, carries a risk of generating a spray of radioactive rocks that might hit Earth anyway. But, wrote Douglas Birch for the Center of Public Integrity last fall, “in recent years, advocates of the use of nuclear weapons to counter space threats have been gaining ground. NASA is spending hundreds of thousands of dollars a year to study the idea, and the U.S. nuclear weapons laboratories are itching to work with the Russians on it.”
That was pre-Crimea, and pre- the rise of "new Cold War" theorizing. Nevertheless, it’s really conditions born of the old Cold War that continue to favor U.S.-Russia cooperation on asteroid defense. When the Cold War ended, both countries were left in possession of the most advanced space programs and the biggest nuclear arsenals in the world. As Birch reports, U.S. and Russian nuclear engineers exhibited a brief flurry of interest in asteroid defense in the 1990s as they tried to figure out what to do with their suddenly less relevant, but still massive, arsenals. But weapons maintenance soon took priority over asteroid defense, and activity on the matter waned.
It was only recently that interest picked up again, and only last year that the UNformally settled on a strategy for international asteroid defense. The plan, developed by the United Nations Scientific and Technical Subcommittee of the Committee on the Peaceful Uses of Outer Space, recommended the establishment of an International Asteroid Warning Network, a global collaboration to track and share information about asteroids, and a Space Mission Planning Advisory Group, SMPAG for short, to come up with strategies for dealing with them. “In the case of a real threat, one that could be acted upon, it would be one or more of the [national] space-faring agencies”—such as NASA or Russia’s Roscosmos—“that would carry out the mission,” Sergio Camacho, who chairs the UN Action Team on Near-Earth Objects, told me in an email. “The expectation is that Russia would join SMPAG (all agencies need to formally join SMPAG). If they don’t, they can always join later.”
Meanwhile, at the national level, Russian announcements about the importance of asteroid defense are still just announcements. “Russian space officials often discuss in public the importance of planetary defense and Russia's skills in this area, but have never allotted their scarce space resources toward actual search programs or spacecraft mission concepts,” says Tom Jones, a former astronaut and chair of the Association of Space Explorers' committee on Near-Earth Objects (NEO), via email. “NASA spends more than any other nation ($40 million annually) on NEO detection and research. Russia spends very little, but is proposing to construct a detection telescope and collaborate with the International Asteroid Warning Network.”
Ultimately, as the UN sets up its asteroid monitoring and defense committees over the coming months, the U.S. and Russia may once again find themselves in a position of “reluctant codependency” in space, given their unique capabilities and shared interests.
"Asteroid impacts will always be with us, and relations between states will ebb and flow forever,” says Rusty Schweickart, chairman emeritus of the B612 Foundation, by email. Given the decade-plus timelines involved in planning and executing missions millions of miles into space, he says, “I suspect that the task of working together to [ensure] a global disaster does not occur will trump ‘minor’ geopolitical (temporary) bad blood.” Russian and American scientists can agree on one thing: When it comes to detecting threats to the planet, they don't want to miss a thing. | <urn:uuid:182b39b0-5467-4636-b50f-a5799dc23185> | CC-MAIN-2017-04 | http://www.nextgov.com/defense/2014/08/enemy-asteroid-my-friend/91112/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00141-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953503 | 1,562 | 2.578125 | 3 |
The Open GIS Consortium (OGC) is a unique organization of GIS users and vendors developing standards to guide the application of new distributed computing technologies to geodata and geoprocessing. State and local governments will be principal beneficiaries of the Open GIS Consortium's Open Geodata Interoperability Specification (OGIS), a geoprocessing standard that will enable a much higher level of GIS interoperability than can be obtained with data standards and data translators alone.
The OGIS will make it possible for geodata and geoprocessing resources to communicate information about their content and capabilities to a user's application, transparently and in real time. OGIS will also make it possible to share - with other GIS and non-GIS applications - data and processing that are currently confined within "monolithic" GIS applications.
World Wide Web browsers like Mosaic preview for us how these capabilities can work across global networks just as they can work across local networks or across a SCSI cable connecting a PC and a CD player. The OGIS extends this model of distributed computing to geodata access and geoprocessing, so that OGIS compliant applications will automatically and often invisibly manage, in real time, the many complex and incompatible types of geodata and GIS software.
OGIS-compliant applications will be layered on distributed computing architectures such as the Object Management Group's CORBA, Microsoft's OLE/COM, and remote procedure call (RPC) schemes common in UNIX environments. Data will have metadata (data about the data) attached as headers or object "wrappers" which will enable automated searching and selective processing. In some implementations - such as at sites that publish geodata to users via the Internet - programs resident with the data will extract and send only the area coverages and data layers requested.
Many concerns that involve geodata - environment, transportation, economic development - overlap jurisdictional boundaries. Within a city, agencies would often like to share data contained in dissimilar GIS databases. States seek to gather local data into statewide databases for multiple users and multiple uses, as do federal agencies with data collection and distribution missions.
The OGIS will lower the cost of maintaining data and make it easier for agencies to provide data to taxpayers, commercial entities, and other agencies. With online data, agencies will be able to improve services and reduce budgets by using the new supporting technologies for security, version control, digital payment, and fast search and retrieval of massive databases that are being developed by vendors. Geodata interoperability will accelerate the implementation of new GIS applications that will rely on wireless communications and inexpensive global positioning system (GPS) devices.
OGC, founded in August, 1994, now has 40 members, including vendors such as Oracle, Intergraph, ESRI, Genasys II, Digital Equipment Corporation, Hewlett-Packard, and Sun; universities and state agencies such as UCLA, Rutgers, University of Arkansas, and the Resource Department of the state of California; federal agencies such as U.S. Geological Survey, Natural Resources Conservation Service, Defense Mapping Agency, and National Oceanographic and Atmospheric Administration; and integrators including BBN, MITRE, GTE and Autometric.
For more information contact the Open GIS Consortium Inc., at 508-655-5858. Fax: 508-655-2237. E-mail: email@example.com. | <urn:uuid:6add0328-c717-458a-b5e8-2929ffaf7fa1> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Open-GIS-to-Benefit-State-and.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.892366 | 712 | 2.59375 | 3 |
In the wake of the most recent report by the U.N.’s Intergovernmental Panel on Climate Change (IPCC), the debate over climate change has once again flared up. Both sides are finding something in the report to justify their positions on the issue, and some are simply mocking it as the product of a political body. In fact, the issue is so contentious that even the title of this article, which assumes a debate exists, is likely to rile some of the stauncher advocates. For technology companies and professionals—particularly in the data center industry—this debate is not without consequence. Money, credibility and much more are on the line; so how is one to navigate the shrill rhetoric, dogmatic claims and technical details of such a complex topic?
This discussion is by no means intended to resolve the debate or even present a possible resolution. On the technical side, complex systems like the Earth and its atmosphere and oceans are difficult to model, and as the number of variables increases, more simplifying (and potentially erroneous) assumptions must be made to make the problem manageable. Add in the various—and often devious—motivations of individuals and organizations, and you end up with a muddy situation that is difficult to penetrate.
Why does this matter to technology? In the case of data centers, much of the cost of running a facility is energy, and most energy produced today yields carbon waste products—the supposed major culprit in climate change. The likely outcome of a political consensus on the dangers of climate change, perhaps even regardless of the cause, is higher taxes and more regulations on consumers and companies, and that outcome would be a tremendous hit for data centers. Other areas of life, such as transportation and food production (even cows have been assigned blame), could also see major changes. So approaching the subject with a cool head is critical to planning for the future, whatever it might hold.
Watch Your (and Their) Language
A highly deceptive claim made in the climate change debate, as well as similar controversies, is that “science says X.” Apart from being loaded (if you disagree, you’re automatically unscientific), this statement carries the unstated assumption that science is some sort of authority that stands independent of human motivation and the character of scientists. Science says nothing; scientists make claims. And science by its nature requires that claims be tentative at best, since contrary evidence could necessitate reevaluation of a reigning theory.
Another specious term is consensus. A common assertion among proponents of the theory of manmade climate change is that a consensus exists among scientists in support of their position. This claim is patently false; not all scientists believe in manmade climate change, and among those that do, opinions on the degree to which human activity causes climate change vary widely. A typical response is to dismiss those that disagree as unscientific or otherwise prejudiced for one reason or another (a subject addressed later in this discussion). But this dismissal is seldom on the basis of a look at the analysis and is instead too often a blanket rejection, often accompanied by some form of ad hominem attack, simply for the fact of disagreement rather than for the reason for disagreement.
A powerful tool in this controversy, or in any, is to ask for clarification of the language in the discussion. If one party from either side makes a suspicious claim, a good response is to ask what is meant by (fill in the blank). What does one mean by “the climate isn’t changing,” or what does one mean by “real scientists”? Does a consensus mean what it usually means—everyone agrees—or is it just a rhetorical way to marginalize those with a differing view? And since when does a consensus imply that a view is correct, even in science?
Everyone Has an Angle
A favorite tactic of many proponents of the idea that human activity causes climate change is to question the motivation of dissenters. Often, this attack focuses on association with oil companies. True, some researchers who oppose this view are funded by oil companies and similar organizations. To be fair, however, one should also ask what those who support the view have to gain.
Much of the funding for this research comes from governments. Governments, as history—and particularly the twentieth century—has shown, are gluttons for power and wealth. And crises almost invariably increase the power of politicians. Furthermore, researchers who rely on government funding (whether national or international) have every financial motivation to produce findings that are advantageous to their benefactors. So, the tactic of guilt by association (or by research funding) is a double-edged sword that can easily undercut both sides.
The difficulty, of course, is separating motivation from the results. Even researchers working for organizations unaffiliated with governments and oil companies are still human, and they still are subject to the same influences that can affect their judgment—particularly on a contentious issue. Human beings are also loathe to admit mistakes; even if a majority of scientists agreed with the notion that human activity causes global warming, contrary evidence might fail to sway them. (See Lee Smolin’s book, The Trouble With Physics, for a discussion of how this phenomenon has existed throughout the history of science, right to the present moment.)
The Problem With Evaluating Climate Change Claims
The circumstances of the climate change controversy are bad enough: the debate is highly polarized, the language can be extremely deceptive and the tactics of adherents on both sides are sometimes underhanded. Add to this the technical details, wherein complicated computer models aim to describe and predict the behavior of a highly nuanced and complex system, and the issue becomes almost impossible to untangle. In addition, the stakes are high in either case: if human activity causes dangerous climate change, the risk to lives and property is real. On the other hand, if the theory is false, the risk to freedom and quality of life is just as grave.
Should one believe the proponents of manmade climate change theory because many scientists agree? Should one believe because the evidence points in that direction? Apart from diving into the technical details, that question is almost impossible to address. And even for experts, questions of the reliability of models, the accuracy and proper sampling of data, and numerous other concerns make the technical debate—forget the political debate—a challenging one.
So, how should the leadership of technology companies and operators of data centers approach the matter? Do you need to buy into the notion of manmade climate change to gain respectability? Do you need to take drastic action to do your part in “saving the planet”? Your position might or might not make a difference.
Regardless of whether human activity is causing climate change, increasing your energy efficiency and reducing your impact on the environment (through recycling, less water consumption or myriad other strategies) is helpful. It can also save cost and improve your image with the public. But going too far can be problematic, too: becoming a crusader for the cause of fighting climate change will certainly alienate some potential customers, but it can also make you end up looking foolish if the role of human activity is later shown to be negligible. The history of science is replete with reversals of the reigning dogma, so trumpeting the science of the day is not necessarily the wisest approach, especially if it’s not relevant to your business.
You can cite the IPCC’s conclusion that man is “extremely likely” to be the cause of climate change, despite the “natural fluctuations” in the trend that have caused doubt in recent years, or you can choose to focus more heavily on data such as the pause in warming or the increase in polar ice. That climates vary over time is certain; the role of human activity, however, is the heart of the debate—and to the victors go the spoils. Both oil companies and bureaucrats alike have dollar signs (or hunger for power) in their eyes, so neither side of the debate has a monopoly on virtue.
IT professionals and companies need not take sides on this contentious issue to nevertheless help steward the environment. In particular, they should remain wary of a false view of deified “science,” as well as the idea that skeptics are motivated by greed. All too often, government has used crises (manufactured and otherwise) to usurp power and property, so naturally, calls for government regulation and taxation in the name of climate change are suspect.
At this point, you may be disappointed that the present article offers no damning evidence of error or even dishonesty on the part of one side or the other. The technical details of the debate are too complex to discuss in a short piece. But for the layman—and even the scientist—the first step toward coming to a rational conclusion on this matter is to see through the underhanded tactics of both sides and to recognize that everyone has an angle on the outcome. Then, if we add a dash of civility, we can all look with fresh eyes at the evidence.
Image courtesy of Robert A. Rohde | <urn:uuid:e6e1ce62-f375-4d7e-9d6c-c27c7d7ffaf6> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/navigating-climate-change-debate/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00372-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958302 | 1,862 | 2.765625 | 3 |
Each day is a new battle for the security team of Google, the Google cyber security team has to deal with advanced persistent threats and skiddies which want to gain information via social engineering techniques, so it is not weird that Google has a standard for security operations, and guess what, they have published a “public” version which holds tips on how to secure yourself on the internet and while you are offline.
The Google security team explains that safe passwords are the first line of defense against cybercriminals, and they are right. Passwords will add an security layer to your environment, and this means that it gets harder for unauthorized people to gain access to specific data.
If the unauthorized person does not have the password, he will be unable to login, but there are some important things you need to keep in mind when you are crafting your first password.
Various researches over the years have proven that simple passwords like “Admin”, “1234” and “admin” simply will not protect any user. Cybercriminals have good knowledge on which passwords are often used, the cybercriminals get this information via experience and by simply reading the reports on passwords safety which are published each year.
The cybercriminals or hackers are able to use brute force attacks to force their ways into the user accounts. The brute force attacks will use lists or codes which will generate passwords like “Admin” and “1234”.
Passwords like “admin” are easily cracked, but if you take a password like “this!smysecurep@ssword!”, it will be very hard for the cybercriminals to brute force it.
The Google Security team listed up these tips for Passwords:
- Use a unique password for each important account like email and online banking
- Keep your passwords in a secret place that isn’t easily visible
- Use a long password made up of numbers, letters and symbols
- Try using phrases that only you know
- Set up password recovery options
Signing in and out
The Google cyber security team also explained that it is important to sign-out when you are done using a service. Staying logged in while not using a service only increases the chance of getting your account hijacked / hacked.
The team also explained that it is important to use secure networks when possible and to stay vigilant and aware when using unsecured networks. It is NOT wise to exchange / login at important environments via unsecured networks. Hackers and cybercriminals are able to perform man in the middle attacks which allows them to see all the information which you are sending and receiving, this includes seeing your passwords and financial credentials. | <urn:uuid:b64255ef-18ef-4d5e-92c8-9632db99ac41> | CC-MAIN-2017-04 | http://cyberwarzone.com/tips-from-google-on-cyber-security/?quicktabs_qtinterestingreads=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00124-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937411 | 559 | 2.796875 | 3 |
Earlier this week, the Diesel Technology Forum released a study of environmental and economic benefits of clean diesel. In results of a survey released with the study, an overwhelming majority of respondents said the next vehicle they buy will be more fuel-efficient. The reasons have as much to do with concerns about our dependence on oil as pocketbook concerns, a survey released today by the Diesel Technology Forum found.
Over the next year, leading automakers are preparing to launch more than a dozen new clean-diesel car and truck models that meet the world's strictest clean air standards.
"The environmental study of more than 5 million diesel pickup trucks sold from 1994 to 2007 found the trucks will save the U.S. 48 billion gallons of fuel," said Kevin McMahon of The Martec Group, a transportation market-research firm. "That's more than two year's worth of Venezuelan imports, or the equivalent of taking 7.5 million gasoline-powered cars off the road."
Since 2000, U.S. registration of diesel vehicles has climbed 80 percent and market researcher J.D. Power and Associates predicts the market will triple over the next five years as more types of diesel vehicles become available from European, Asian and U.S. auto manufacturers.
"The message is clear: the time for clean diesel is now," said Allen Schaeffer, executive director of the Diesel Technology Forum. "With only a few choices available to consumers, diesels have already dramatically cut energy consumption and reduced carbon dioxidee emissions. With more choices on the way, Americans will continue to benefit as we continue to introduce the public to this new generation of clean diesel cars and light truck choices."
Only half of those surveyed, for instance, believe diesel-powered vehicles get better gas mileage than gasoline-powered vehicles. New diesel technologies, in fact, get up to 40 percent better fuel efficiency.
For the survey, pollsters from Yankelovich Partners interviewed 1,003 Americans in the national sample and 403 adults in California between July 14 and 21 for the survey. The sampling error is 3.1 percent plus or minus.
"Most people are surprised to learn that diesel pickup trucks outsold hybrids 2.5 to 1 from 2003-2007 and saved 21 times more fuel than all hybrids combined," Schaeffer said. | <urn:uuid:497717d8-a7e0-4d5e-a2d6-480c13c03792> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Diesel-Trucks-Outsell-Hybrids.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947456 | 464 | 2.609375 | 3 |
Back-to-school means students returning to campus and bringing their technology with them.
Research firm Student Monitor, which tracks computer and Internet purchases and usage in higher education, reports that 95% of college students own a computer and 88% own a cell phone.
As the prevalence of technology on college campuses increases, so do the dangers. But, most college students don't pay nearly as much attention to the significant number of on-campus risks as they should.
So, what are the most common threats to students and their technology, and how can they be avoided?
Power Surges and Blackouts: Many college dorm rooms contain enough electronics and tech devices to rival a Best Buy showroom. With so many devices in each room draining the electrical grid, power issues are bound to happen. Power surges and blackouts are common threats, especially in warmer weather.
- Protect your technology against power problems by plugging everything into a UL approved power strip with surge suppression. This will safeguard against power surges. To prevent a blackout from costing you your data, invest in a Uninterruptible Power Supply to keep your computer running.
Data Loss: It seems like every college student has at least one story of losing a paper at three in the morning due to a computer crash. The aforementioned power issues, as well as network difficulties, make data loss a significant problem on college campuses. Fortunately, it's also one of the easiest to avoid.
- Beyond using a UPS, guard against data loss by frequently backing up. Always keep a second backup on an external device, such as a USB flash drive, which are sold in most college bookstores. Not only will it back up your data, but it provides easy portability for transfer to a campus computer lab or classroom.
Physical Damage: College dorm rooms can be raucous places. Most tech devices, such as laptops and cell phones, are sensitive and easily damaged. A spilled drink can ruin a cell phone, or someone bumping into a desk can send a laptop crashing to the ground.
- Physical damage is often the result of an accident and hard to avoid. Preventative measures, such as always putting devices out of harm's way, or storing them in a protective case, can help. If your tech is damaged, be sure you know what computer repair or tech support services your campus offers, so you can get your gear fixed quickly.
However, both file sharing sites and social networks are dangerous. They provide little in the way of anonymity, opening student computers and smartphones to attack, and are prime targets for malware and computer viruses.
- Viruses and malware can quickly spread throughout a college network, so always keep all Internet security and antivirus programs updated and running. Avoid clicking on links on social networks and be very circumspect about what you share. All it takes is one "friend" sharing your content and you become exposed, no matter how strong your privacy settings are.
Theft: College can be a very comfortable place, so much so that many students forget the need to guard their possessions and their information. According to Javelin Strategy and Research, young people ages 18-24 lost five times more money due to identity theft than any other age group last year. Even leaving your computer unlocked in your dorm room leaves you open to theft of your data and personal information.
- Always secure your technology as you would secure your home. Do not leave your devices unattended in public places, or even in your dorm room. Secure your easily stolen notebook with a laptop lock and a strong password. Keep your data - academic and personal - shielded from even a roommate's prying eyes.
College is an exciting and interesting time. It's full of new challenges and experiences and many of your high tech devices can only enhance the experience. But while technology can make things better, it's always important to remember the risks and ensure that you, and your devices, are as protected as possible.
David A. Milman, Founder and CEO of Rescuecom | <urn:uuid:3a139493-44d9-4dc6-8bc6-620a442eafdd> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2468774/windows-pcs/tips-for-on-campus-technology-protection.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962048 | 816 | 2.59375 | 3 |
Universities in the southeastern U.S. are building a computer grid designed to help scientists predict storm surges well in advance of an approaching hurricane to give government officials a better idea of when to order evacuations.
Improving storm surge forecasting means harnessing large amounts of computing power that can quickly analyze meteorological and oceanographic data needed to develop forecast models.
"The real challenge here is to be able to create a product far enough in advance of a storm hitting the coast to actually take action," said Gary Crane, director of IT initiatives for the Southeastern Universities Research Association (SURA). He said surge forecasts are now accurate about 24 hours ahead of time; one goal is to extend that forecasting window to 72 hours.
In New Orleans, for instance, officials are trying to predict the best time to lower the flood gates on the Lake Pontchartrain canal system -- and data generated by the grid could be used to bolster those kinds of predicative capabilities, said Crane.
There are about 900 CPUs in a heterogeneous environment on the grid, but the schools recently purchased IBM Power servers that will roughly double the number of CPUs and boost the computing power from about 3 teraFLOPS, or 2 trillion calculations per second, to about 10 TFLOPS. IBM announced the hardware deal today.
Crane said the grid has the potential of growing many times in size -- depending on the number of other universities that contribute resources to the computing pool behind it.
This Washington-based SURA is made up of 62 research universities in the Southeast, and for the past two and a half years has been developing a grid that now connects about 14 of those universities. The new IBM computers will be deployed at Louisiana State University, Georgia State University and Texas A&M University.
The grid will be used for a variety of research activities, although a major focus will be the SURA Coastal Ocean Observing and Prediction Program, which is being funded by the National Oceanic and Atmospheric Administration and the U.S. Office of Naval Research. IBM, which has its own researchers investigating storms, will also play a role in developing the computing models. | <urn:uuid:b8d1b5ad-5997-42b3-a94a-673a001b0a5f> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2546682/computer-hardware/computer-grid-aims-to-predict-storm-surge.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94423 | 433 | 2.75 | 3 |
Naval Amphibious Base, Little Creek, Va. â€" When the USS Harry S. Truman carrier strike group deploys this fall it will use communications that have a high-tech twist on one of the oldest forms of radio communications that the Navy used in the days of Morse Code, said officials of headquartered here.
Instead of the "dits" and "dahs" transmitted by Morse Code, the Truman, along with the nine other ships in the strike group, will communicate over high frequency (HF) by sending Internet Protocol-based traffic such as text messages, said Paul Dixon, allied coalition networks action officer for the Naval Network Warfare Command (NETWARCOM).
The highest levels of the Navy have endorsed the use of high frequency IP communications for intra-strike group communications for one simple reason, Dixon said: It’s much cheaper than satellite communications systems that the Navy embraced in the late 1980s, when the service all but abandoned high frequency as its standard means of communications.
Dixon also said its makes no sense to use expensive and often leased satellite communications systems that require a 44,400 mile trip â€" from a ship to a satellite and then back down to another ship five to ten miles away â€" when high frequency can easily bridge that gap over free spectrum in the 3 to 30 Megahertz frequency band, Dixon said.
Dixon said that high frequency has roughly the same speed as dial-up modems used in the 1980s compared with satellite bandwidth that is as much as 100 times greater. But it is fast enough to meet the command and control needs of today’s strike groups, which are run by text messages and over chat groups based on Internet Relay chat standards.
The Navy also has provided the Truman strike group with the ability to send IP traffic over UHF channels, which provides better throughput than the high-frequency band, about 64 kpbs, or slightly more than the dial-up modems built-into most personal computers.
Eric Johnson, a professor at New Mexico State University whose specialty is high frequency and wireless networking, said the high frequency’s low throughput is due to the noise inherent on that spectrum band, which is apparent to anyone who has listened to a short wave broadcaster such as the BBC, and the narrow channels.
The high-frequency modems the Navy uses â€" which New Mexico State University helped develop â€" punches data through that noise with a stable signal thanks to sophisticated error checking protocols, Johnson said.
Dixon said that the Navy plans to outfit 25 ships with high-frequency IP systems through 2008 under a “fast track†project backed by the Chief of Naval Operations. Much of the work involves adding computer servers and firewalls to work with high-frequency radios already on the ships, Dixon said.
The high-frequency IP project will also make it easier to communicate with allied navies, which rely heavily on high frequency because they cannot afford satellite communications, Dixon said.
The Navy’s trip back to high frequency will require going back to offering high-frequency training to the service’s school curriculum, said Chuck Tabor with the NETWARCOM spectrum management division. It’s been so long time since the Navy has used high frequency “hardly anyone [in the Navy] even knows what it is anymore,†Tabor said. | <urn:uuid:9f99dfa7-df0e-4bd1-86fd-e557649b583e> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/tech-insider/2007/06/navy-finds-a-use-for-old-tech/51689/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00060-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955597 | 710 | 2.515625 | 3 |
Next: 0.4.5.1 Prim's Algorithm
Up: 0.4 Graph Algorithms
Previous: 0.4.4.2 Source Code
A spanning tree in a graph is a tree that
visits every node in the graph. Recall that a tree is a set
of nodes and edges that has a single root node and no
0.4.5 Spanning Trees
In many graphs each edge has an associated weight or cost.
Such graphs are called weighted graphs. Often in
weighted graphs we want to find a spanning tree of minimal
In this section we explore several algorithms for finding
minimal spanning trees in graphs. | <urn:uuid:5e8cdcc9-9e8a-4946-a84d-b0c3e1f120e9> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/alg/node92.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00482-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.877311 | 138 | 3.4375 | 3 |
Definition: A hash function that uses an auxiliary array, but no shift or exclusive-or (xor) operations.
Generalization (I am a kind of ...)
Note: This hash function may be particularly fast on computers that don't have hardware support for shifting or xor.
Careful choice of the auxiliary table allows construction of a perfect hashing function, a minimal perfect hashing function, or even an order-preserving minimal perfect hashing function.
Peter K. Pearson, Fast Hashing of Variable-Length Text Strings, CACM, 33(6):677-680, June 1990.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 3 February 2009.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "Pearson's hash", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 3 February 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/pearsonshash.html | <urn:uuid:5d6b52c0-8e54-4346-90e9-0db87c419a15> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/pearsonshash.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00482-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.78662 | 243 | 2.53125 | 3 |
What is ransomware?
Ransomware is a type of malware that attempts to extort money from a computer user by infecting or taking control of a victim’s machine or the files or documents stored on it. Typically, the ransomware will either lock the computer to prevent normal usage or encrypt the documents and files to prevent access to the saved data.
• Prevents you from accessing Windows and other devices.
• Encrypts files so you can’t use them.
• Stops certain apps from running.
Phil Muncaster reports on China and beyond
Jon Collins’ in-depth look at tech and society
Kathryn Cave looks at the big trends in tech | <urn:uuid:0e49bee4-636b-4e21-a07e-ff89fded4e66> | CC-MAIN-2017-04 | http://www.idgconnect.com/view_abstract/40740/ransomware-all-locked-up-no-place-to-go | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00298-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.875237 | 142 | 3.328125 | 3 |
TORONTO, ONTARIO--(Marketwired - April 15, 2014) -
Talk With Our Kids About Money Day is expanding across Canada this year. The program, developed by the Canadian Foundation for Economic Education (CFEE) in partnership with BMO Financial Group, offers a simple way to help young Canadians learn more about money and personal finances.
Talk With Our Kids About Money Day was launched last year in more than 400 schools in Toronto and Montreal. This year, the program expands to participating schools in Ontario, Quebec, Manitoba, Saskatchewan and Newfoundland. The program encourages Canadians to have conversations with youth about money and personal finance. The annual event takes place the third Wednesday in April, this year, April 16, with a "Home Program" for families and a "School Program" for Grade Seven students and teachers. Both parents and teachers can visit the online hub, www.talkwithourkidsaboutmoney.com, free for anyone to access, and updated with resources and curriculum ideas.
"Understanding how to manage money is one of the most important life skills parents and teachers can show our kids, and it's never too early to start. I applaud BMO and the Canadian Foundation for Economic Education for encouraging hundreds of schools and parents across Canada to actively take up this challenge and participate in this year's Talk With Our Kids About Money Day", said Kevin Sorenson, Minister of State (Finance). "While every day is a good day to talk with our kids about money, this tremendous initiative will help get that conversation started."
Highlights of School Program:
- Easy-to-use lesson plans for teachers on incorporating money and finance into social studies, math, science, history, geography, music and art
- Group discussion and debate ideas
- Resources and interactive tools for teachers
Highlights of Parent Program:
- Online access to ideas, activities, stories, resources, tools and videos
- Discussion starters, interaction information and helpful resources links
- Support resources are organized by child's age range
"The best way to help our youth prepare for their financial future is to get started - and as early as we can," said Gary Rabbior, President of the Canadian Foundation for Economic Education. "Talk With Our Kids About Money Day helps parents, guardians, and teachers get the conversations started - with easy to prepare, fun to do, and life-relevant activities and lessons."
"BMO has a longstanding commitment to fostering financial literacy and making money make sense - which in today's economy has never been more important for youth. The goal of Talk With Our Kids About Money Day is to bridge the information gap and arm youth with knowledge they need to become more financially confident," said L. Jacques Ménard, Chairman of BMO Nesbitt Burns and Co-Chair of the Federal Task Force on Financial Literacy.
"Jim Flaherty made the expansion of financial literacy a key priority during his tenure as finance minister," continued Mr. Ménard. "Today, we pay tribute to his efforts and his legacy. Congratulations to Jane Rooney, Canada's new Financial Literacy Leader - a position that represents a part of Mr. Flaherty's legacy."
Personal Finances in the Home
According to a BMO poll released today, Canadians would much rather talk to their kids about the facts of life than the family's financial situation (63 per cent vs. 37 per cent), indicating a lack of comfort among parents when it comes to having the "money" talk.
Furthermore, the poll revealed:
- Only a quarter of parents feel strongly that they are equipped to give their children a solid financial education
- Less than half (47 per cent) of Canadians are optimistic about the financial future of children and almost half (49 per cent) blame a lack of financial education for this outlook
Survey results cited in this release are from a Pollara survey commissioned by BMO Financial Group with an online random sample of 1,012 adult Canadians, between March 27th and 31st, 2014. As a guideline, a probability sample of this size would yield results accurate to ± 3.1%, 19 times out of 20.
Data has been weighted by region, gender, and age, based on the most recent Census figures, so that it is representative of all adult Canadians.
CFEE is a federally chartered, non-profit, non-partisan organization, founded in 1974, that works to improve economic and financial literacy and enterprising capability. CFEE works collaboratively with ministries and departments of education along with school boards, schools, educators, and teacher associations. CFEE also engages in activities to support and assist newcomers and past immigrants to Canada, and the general public including print resources, videos, workshops, and online resources. Overall, CFEE aspires to help Canadians of all ages be better prepared to undertake their economic roles, responsibilities, and decisions with confidence and competence.
About BMO Financial Group
Established in 1817 as Bank of Montreal, BMO Financial Group is a highly diversified financial services organization based in North America. The bank offers a broad range of retail banking, wealth management and investment banking products and services to more than 12 million customers. BMO Financial Group had total assets of $593 billion and more than 45,500 employees at January 31, 2014.
Atom manipulation makes for world record.
IBM (NYSE:IBM) produces the worst animated movie I've ever seen. Terrible production values, laughable plot, and awful soundtrack. At least it's mercifully short. Two thumbs down. Still, it does at least show what's possible when you manipulate and photograph individual atoms.
In IT Blogwatch, bloggers think... and make Heisenberg gags.
Your humble blogwatcher curated these bloggy bits for your entertainment.
Seth Borenstein reports:
Scientists have taken the idea of a film short down to new levels. ... IBM says it has made the tiniest stop-motion movie ever [made] of individual carbon monoxide molecules.
...Each frame measures 45 by 25 nanometers — there are 25 million nanometers in an inch. ... IBM used a remotely operated two-ton scanning tunneling microscope...at 450 degrees below zero Fahrenheit (268 degrees below zero Celsius).
Jason Palmer speaks unto nation:
The stop-motion animation uses a few dozen carbon atoms, moved around with the tiny tip of...a scanning tunnelling microscope. ... The extraordinary feat of atomic precision has been certified by the Guinness Book of World Records. ... The device works by passing an electrically charged, phenomenally sharp metal needle across the surface. ... As the tip nears features on the surface, the charge can "jump the gap" in a quantum physics effect called tunnelling.
...It underlines the growing ability of scientists to manipulate matter on the atomic level, which IBM scientists hope to use to create future data storage solutions.
SPOILER WARNING: Daniel Terdiman tells us the plot:
Called "A boy and his atom," the animated film features a small boy having a good old time as he bounces around, playing catch, and dancing [in] 130 atoms that were painstakingly placed, atom by atom.
...four researchers spent nine 18-hour days moving the 130 atoms around so they could create the exact imagery they needed for their film.
Yes, yes. Great fun, but WHY, Gareth Halfacree?
It's not all about frivolity and the kudos that comes with an unlikely entry in the Guinness Book of World Records, though. ... IBM is hoping that the technology...will pave the way forward for novel computer circuits that can bypass the rapidly-approaching physical limits that threaten to put an end to Moore's Law.
...The team behind the animation has already created the world's smallest magnetic bit, constructed from just 12 atoms - compared to the million atoms a traditional bit takes up on a mechanical hard drive.
Meanwhile, ifeu quips, uncertainly:
So... did the boy act differently when he was watched?
Scientists working on a controversial project to create new forms of H5N1 bird flu agreed on Friday to stop their work for 60 days while the debate plays out.
"We recognize that we and the rest of the scientific community need to clearly explain the benefits of this important research and the measures taken to minimize its possible risks," they wrote in a letter published jointly by the journals Science and Nature.
"We propose to do so in an international forum in which the scientific community comes together to discuss and debate these issues," added the letter, signed by 39 scientists including Ron Fouchier of Erasmus Medical Center in Rotterdam, Adolfo Garcia-Sastre of the Mount Sinai School of Medicine in New York and Yoshihiro Kawaoka of the University of Wisconsin.
"To provide time for these discussions, we have agreed on a voluntary pause of 60 days on any research involving highly pathogenic avian influenza H5N1 viruses leading to the generation of viruses that are more transmissible in mammals," the letter said. "In addition, no experiments with live H5N1 or H5 HA reassortant viruses already shown to be transmissible in ferrets will be conducted during this time."
Late last year the two teams, one led by Fouchier and one by Kawaoka, created lab-engineered forms of H5N1 bird flu. They said they were trying to see how the virus, which has been circulating since the 1990s, might mutate into a form that could cause a deadly human pandemic.
Other researchers expressed fears about the risk the new viruses could accidentally escape and cause the very pandemic that the scientific community has been worried about. In December, the National Science Advisory Board for Biosecurity, an independent group that advises the federal government, asked the two teams to withhold details of their work.
Supporters of the research say it's key to predicting how H5N1 might mutate and change, as flu viruses often do. But opponents have said the work could be misused by terrorists or that the virus might somehow escape from the lab and spread.
The researchers tried to answer these fears in their letter.
"Despite the positive public-health benefits these studies sought to provide, a perceived fear that the ferret-transmissible H5 HA viruses may escape from the laboratories has generated intense public debate in the media on the benefits and potential harm of this type of research," they wrote.
"We would like to assure the public that these experiments have been conducted with appropriate regulatory oversight in secure containment facilities by highly trained and responsible personnel to minimize any risk of accidental release. Whether the ferret-adapted influenza viruses have the ability to transmit from human to human cannot be tested."
Right now, the H5N1 virus only rarely infects humans and cannot be transmitted very easily from one person to another. But it kills more than half its victims, according to the World Health Organization, which has tallied 343 deaths out of 582 known cases.
Aside from the use of drones, one of the most innovative and effective ways of protecting the lives of America’s uniformed troops may be microgrids—the distributed generation of forms of renewable energy that do not have to be transported under high security to every battle zone.
To date, U.S. military operations in Afghanistan have paid the equivalent of $400 per gallon for fossil fuel when security, transportation, and mortality costs are tallied up. Indeed, according to the findings of a new research report, the United States Department of Defense (DOD) is the single largest consumer of petroleum in the world. Likewise, U.S. military operations represent the largest consumer of all forms of energy globally.
The study, “Military Microgrids,” by Boulder, Colorado-based Pike Research suggests that efforts by the U.S. DOD may provide the strongest momentum overall to the microgrid market today. Pike Research forecasts that, in a typical war-time scenario, the total capacity of U.S. military microgrids will reach 54.8 megawatts by 2018.
Microgrids can shrink the amount of fossil fuels consumed to create electricity by networking generators as a system to maximize efficiency. They also can be used to help integrate renewable energy resources (such as wind and solar) at the local distribution grid level. Simultaneously, microgrids enable military bases – both stationary and forward operating bases (FOBs) – to sustain operations, no matter what is happening on the larger utility grid or in the theater of war.
The military’s primary concern is disruptions of service from utility transmission and distribution (T&D) lines. Its lack of control and ownership of these lines – and the uneven quality of power service regionally throughout the United States – has prompted the U.S. DOD to reexamine the existing electricity service delivery model. This analysis has led the DOD to the inevitable conclusion that the best way to bolster its ability to secure power may well be through microgrid technology it can often own and control. Furthermore, recent mandates require an increase in the reliance upon renewable energy developed onsite, whether the generation is solar PV or waste-to-energy combustion. A microgrid can tie these disparate and distributed resources together and allow them to be managed locally.
Likewise, for fixed base military operations, microgrids offer the ultimate secure power supply. Many Army, Navy, Air Force, Marines, and other military-related bases and offices already have vintage microgrids in place. What is new is that these facilities are looking to envelop entire bases with microgrids and integrate renewable distributed energy generation (RDEG) onsite. When capable of safe islanding from the surrounding grid, RDEG offers ultimate security since fuel never runs out with solar or wind resources.
Pike Research has identified roughly two dozen military facilities in the United States that are currently engaged in smart microgrid implementations. The Marines show the fastest initial capacity growth spurt, but the Army shows signs of longer-term increases in annual capacity. This is because the Army has a larger number of stationary bases requiring microgrid upgrades. Most of these new microgrids incorporate RDEG as a way of increasing reliability and security. The opportunity to help develop these microgrids has attracted a number of powerful technology companies, including Lockheed Martin, General Electric (GE), Honeywell, Boeing and Eaton.
While the DOD is not the only military agency exploring microgrids as a platform to increase physical and cyber security, it is by far the most advanced in its efforts in that regard. Other nations rumored to be examining the potential for microgrids include the United Kingdom, Canada, France, and China.
Among microgrid-enabling technologies are smart inverters, switches and meters, along with electric vehicle charging technologies. Virtual power plants and cyber security technologies also are discussed in the report. This Pike Research study examines the growth of microgrids for three Department of Defense microgrid sectors: stationary bases, forward operating bases, and mobile systems. Along with forecasts through 2018 for each of the three primary sectors, the report also includes forecasts for stationary renewable integration and demand response microgrids, as well as solar photovoltaic systems deployed by the DOD.
Edited by Brooke Neuman
This discussion about terminology has gotten rather pedantic. The
wikipedia article cited previously states the following example:
"west.example.com and east.example.com are subdomains of the
example.com domain, which in turn is a subdomain of the 'com'
Are Charles and Ken suggesting that blahblah.dilgardfoods.com is not a
subdomain of dilgardfoods.com?
Charles suggested that blahblah.dilgardfoods.com might point to a
publicly addressed "server". Jeff indicated that
blahblah.dilgardfoods.com pointed to the IP address of a "firewall" in
their network.
I will go with Jeff on this because I have never known of any
organization ever assigning a public IP address to a "server".
Instead, they assign a public IP address to a network device
(gateway/router/firewall) in their control which may further route to
other network devices in their control, which may ultimately route to
a "server" in their control.
It appears that some of the terminology confusion in this discussion
may have been a result of how one views internal "names" and "routing"
mechanisms within internal networks vs. how names and routing is done
via public DNS. For example, a company may set up a web proxy on an
internal network which routes blahblah.dilgardfoods.com to a web
server on the other side of the planet. But that routing is not based
on public DNS.
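To make the public-DNS side of this concrete, here is a small illustrative Python sketch using only the standard library; the hostnames are the ones discussed in this thread and may not actually resolve, so lookup failures are handled.

import socket

# Hostnames from the thread, used purely for illustration; they may not
# actually resolve, so failures are caught and reported.
names = ["dilgardfoods.com", "blahblah.dilgardfoods.com"]

for name in names:
    try:
        # Public DNS resolution: whatever address comes back is typically
        # a gateway/router/firewall the organization controls, not the
        # final "server" that ultimately handles the traffic.
        address = socket.gethostbyname(name)
        print(f"{name} -> {address}")
    except socket.gaierror as err:
        print(f"{name} -> no public DNS answer ({err})")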
Many documents that I see, produced using a word processor, set my teeth on edge and my blood boiling. What worries me:
- They are not very attractive.
- They are not well structured.
- Unnecessary effort has been put in because the functions of the word processor have not been exploited.
- They are not easy to navigate around.
- They are not accessible to people with vision impairments.
The reason for all of this is that the person creating the document has never been introduced to the power of the word processor and in particular the use of styles. The proper use of styles and a few other functions would bring my blood pressure back to normal and save time for the creator and the readers of the document. Hence the title of this rant.
If you are already a dedicated stylist you do not need to read any further unless you want some help in persuading others to join the enlightened.
If, on the other hand, you do not know what a style is please, please read on.
Let me start with a very simple issue: a paragraph. We recognise a paragraph by the fact that it starts on a new line and the space between it and the previous paragraph is larger than the space between lines in a paragraph. So what do most people do? They press enter a second time to leave a blank line between the paragraphs. This is wrong because:
- It leaves too big a space between the paragraphs.
- Occasionally they forget to put the extra line in and then it is difficult to recognise the paragraph.
- It is extra work to put in the line everywhere it is needed.
- For people who use screen-readers the extra blank is just confusing as it is announced as a blank paragraph.
So what is the right way? Use a style that specifies that the space below a paragraph is larger than between normal lines but smaller than the size of a complete line. Word processors come with a standard style that does that: 'Text Body'.
Note: different word processors have slightly different ways of specifying a style so this rant does not explain how to do it. There is normally a side bar or a tool bar, but 'help' will tell you.
All documents have a structure. At the very least they will have a title which gives an indication of the content of the document (this rant has a title); this may be followed by a sub-title (this rant does not have a sub-title). The document may be divided into sections, or chapters, with headings (I have divided this rant into several sections and a few sub-sections).
A title is at the beginning but is also recognisable by being in a different format (often larger and bolder than the body of the document).
A section header is recognisable by being on a separate line and in a different format from the body and the title; it will also have a space above it that is bigger than a normal paragraph space.
A chapter, which is a major section of a large document, will often start on a new page.
These elements give a visual representation of the document and enable sighted people to quickly navigate around a document. I will explain later how it helps VIPs.
So how do you create these differently formatted elements? It is possible to click on bold, change the font size and maybe the font itself for each element. This takes time, is very likely to be inconsistent within a document or across documents, and still does not get the varied line spacing right.
The easy, quick and correct method is to use (you guessed it) styles. Word processors come with styles for titles (Title), sub-titles (Subtitle) and headings (Heading 1) and sub-headings (Heading 2, Heading 3...). Using these styles will ensure that the elements are formatted correctly including the amount of space between the elements.
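For readers who generate documents from scripts, the same principle applies. Below is a minimal, illustrative sketch using the third-party python-docx library (the library choice and the placeholder text are my assumptions, not something this rant prescribes); note that Word templates call the equivalent of LibreOffice's 'Text Body' style 'Body Text'.

from docx import Document

doc = Document()

# Built-in styles do the formatting work; no manual font or size fiddling.
doc.add_heading("A Stylish Example Document", level=0)   # 'Title' style
doc.add_heading("First section", level=1)                # 'Heading 1' style
doc.add_paragraph(
    "Body text goes here; the space below the paragraph comes from the "
    "style, not from an extra blank line.",
    style="Body Text",                                    # Word's name for 'Text Body'
)
doc.add_heading("A sub-section", level=2)                 # 'Heading 2' style
doc.add_paragraph("More body text.", style="Body Text")

doc.save("styled-example.docx")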
There are other styles such as: Signature, Caption and Quotation, that can define further elements but I am not going to write about them here.
Further benefits of styles
If you are still not sold on styles the following sub-sections give you some more benefits.
Automatic style choice
If you are creating a document and have just typed a heading you will want to type some normal text in the next line.
If you did not use a style for the heading then all your formatting (font, typeface, size etc) will continue on the next line and you will have to reset them.
However, if you have used a proper heading style then the word processor will know that the next line should be normal text and set the style to text body and you do not have to do anything. This makes life easy for you and helps to make the formatting of the documents consistent and stylish.
No orphan headings
A heading at the bottom of a page with no text after it is known as an orphan heading. There is nothing more ugly in a document than a heading, by itself, at the bottom of a page.
If you have used a heading style the word processor will automatically ensure that a heading is not the last line on the page and it is always followed by at least part of the next paragraph.
Trying to ensure that you do not have any orphans is almost impossible without using styles.
Having written a document you decide that you would prefer the body of the document to be Times New Roman rather than Arial; to do this just modify the style and it is changed everywhere in the document.
Table of Contents
If it is a long document (say ten pages or more) it will benefit from having a table of contents (TOC). A TOC makes it easier to navigate the document and also gives the reader a synopsis of the document in one place.
Creating a TOC is the work of a moment if you have used styles since the word processor can automatically generate it based on the relevant styles (Heading 1, 2, 3 etc).
Vision Impaired People (VIPs) benefit greatly from styles. A screen-reader will announce to the user that the next element in a document is a title or a heading. It can also skip to the next heading without reading the intervening text, just like a sighted person (non-VIP) can visually skip to the next heading. This means that a VIP can navigate around a document as easily as anyone else. Sighted people (non-VIPs) who want to re-read what I wrote about paragraphs would visually skim back to find the relevant heading; VIPs would do the equivalent using the functions of the screen-reader.
Some people with dyslexia find different fonts and colours easier to read; choosing different styles can automatically create a modified document.
People with limited ability to use their hands may find the navigator function of the word processor makes it easier to move between different headings.
HTML and PDF
A word processor is a wonderful way to write a document but if it is going to be consumed electronically by a large number of readers it will probably be converted into HTML or PDF. Word processors have the facilities to make these conversions and the styles will be used to create the tags used for HTML and PDF/UA (the accessible extension for PDF documents).
The tags ensure that the document is well formatted and accessible in these distribution formats.
Other word processor features
Styles are the bedrock of producing an elegant document but they are assisted by a few other features of a word processor that include: tables, page headers and footers, page numbering, inclusion of images, and hyperlinks to the Internet. This rant does not cover these but I plan to write a follow up that will.
I hope that I have convinced you that using word processor styles makes the creation of documents easier and faster, and at the same time makes them more elegant and more accessible.
Please start to use styles to boost your productivity, make your documents accessible, and keep my blood pressure down.
A communication system using fiber optic strands of glass for the transmission of data. The two main types of fiber optic cable used to transmit information are multi-mode and single-mode optical fiber; both carry data as light traveling through one or more glass fibers. A single-mode fiber is an integral part of a fiber optic network: it is a kind of fiber optic cable that transfers data using light. A single-mode fiber has a 9-micron (9 µm) core diameter and is capable of supporting Gigabit Ethernet data transmission up to 10 km. There are different types of single-mode fiber, including cutoff-shifted fiber, low-water-peak fiber, non-zero dispersion-shifted fiber and dispersion-shifted fiber.
Increased bandwidth capacity
A single-mode optical fiber cable supports a higher bandwidth than a multi-mode fiber optic cable. The main objective of a fiber optic cable (or any other communication system) is to allow the maximum number of data bits to be transferred between the transmitter and the receiver with as few errors as possible. The narrow core of single-mode fiber limits dispersion and distortion of the light (the "multi-path" effect) that would otherwise reduce the bandwidth capacity of the cable.
Used for Longer Transmission Distances
Single-mode fiber optic cables are used to build Wide Area Networks (WANs), Metropolitan Area Networks (MANs) and campus networks. Single-mode fiber supports transmission distances up to 50 times those of multi-mode fiber, so SMF is generally used for transmitting data over long distances.
Limited Data Dispersion & External Interference
Light rays travel parallel to the axis of the core in a single-mode fiber. This contrasts with multi-mode fiber, where light rays enter from many angles and directions. The single propagation mode of SMF limits light scattering, which in turn reduces loss and increases data transfer speed. A single-mode cable is also largely immune to external noise, including electromagnetic interference (EMI) and radio frequency interference (RFI).
Increased Transmission Speeds
Single-mode fiber optic cables provide higher speeds because of their greater bandwidth capacity and their reduced susceptibility to outside interference. A single strand of optical fiber can support transmission at up to 10 Gbps (gigabits of data per second).
Source: Fiber Optics.com
Dense Wavelength Division Multiplexing (DWDM) refers to a fiber optic data transmission technology in which multiple laser wavelengths carry data in parallel over a single optical fiber.
DWDM has three characteristics: (1) Diversified services. DWDM is independent of transmission rate and protocol, so the transmission network is completely unrelated to the form of the service it carries; for example, a DWDM network is transparent to the rate and protocol of ATM, IP and SDH signals and can therefore provide diversified services over the same fiber. (2) Reduced cost and improved service quality. Bandwidth scheduling in the optical layer is simpler and more efficient than traditional signal assignment or scheduling in the electrical layer, which reduces expense. When a fiber is cut or an optical signal fails, protection switching or rerouting in the optical layer is also faster than traditional electrical-layer recovery, so network availability rises and service quality improves. (3) Longer transmission distance and greater network capacity. The biggest problem with high-speed STM-64 TDM transmission is that fiber dispersion seriously degrades the optical signal; without electronic regeneration or other compensation techniques, an STM-64 signal can theoretically travel only about 60 km over G.652 fiber. With eight-wavelength DWDM carrying a 2.5 Gb/s signal on each wavelength, the transmission capacity is 20 Gb/s and the transmission distance can reach 600 km using optical amplifiers and no electronic regenerators.
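As a quick check of the capacity figures above, the aggregate throughput of a WDM link is simply the per-channel rate multiplied by the number of wavelengths; the extra channel counts in this small sketch are illustrative only.

def aggregate_capacity_gbps(channels: int, rate_per_channel_gbps: float) -> float:
    """Total capacity of a WDM link: channels x per-channel rate."""
    return channels * rate_per_channel_gbps

# The 8 x 2.5 Gb/s example from the text, plus two other configurations.
for channels in (8, 16, 40):
    print(channels, "channels:", aggregate_capacity_gbps(channels, 2.5), "Gb/s")
# 8 channels -> 20.0 Gb/s, matching the figure quoted above.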
Coarse Wavelength Division Multiplexing (CWDM) has certain similarities to DWDM; the main differences are three: (1) CWDM uses wider channel spacing, so fewer wavelengths can be multiplexed onto the same fiber than with DWDM. (2) CWDM modulates uncooled lasers with electronic tuning, while DWDM uses cooled lasers with temperature tuning. Because optical wavelength is distributed very unevenly across a wide temperature range, temperature tuning is difficult to achieve and expensive. CWDM avoids this difficulty and thus greatly reduces cost; a CWDM system currently costs only about 30% of a comparable DWDM system. (3) CWDM systems are smaller than DWDM systems in both power consumption and physical size.
FiberStore supplies two types of WDM equipment: DWDM modules and CWDM modules. DWDM modules include DWDM Mux/Demux modules and DWDM OADM modules; CWDM modules include CWDM Mux/Demux modules (CWDM multiplexers) and CWDM OADM modules. Common CWDM Mux/Demux configurations are 2, 4, 8, 16 and 18 channels, while common DWDM Mux/Demux configurations are 2, 4, 8, 16, 32 and 40 channels. Single-fiber or dual-fiber connections are available for the CWDM Mux/Demux. The devices come as plastic ABS module cassettes, 19'' rack-mountable boxes or standard LGX boxes, and connectors such as FC, ST, SC and LC are all available, including mixed connectors on one device. Buy CWDM modules, DWDM modules and filter WDM modules on FiberStore with confidence.
Automatic gain control. An electronic circuit in the camera which begins to amplify the video signal when it starts to fall below a given value due to lack of light on the image device.
In digital signal processing, anti-aliasing is the technique of minimizing aliasing (jagged or blocky patterns) when representing a high-resolution signal at a lower resolution
Ratio of picture width to height. Standard aspect ratio is 4 x 3 for standard systems, 5 x 4 for 1K x 1K, and 16 x 9 for HDTV. Example: A 15-inch monitor (measured diagonally) is 12 inches wide and 9 inches tall. The width is four increments of 3 and the height is three increments of 3, therefore the ratio is 4 to 3.
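The 15-inch example can be verified with the 3-4-5 right-triangle relationship (for a 4:3 screen the width is 4/5 of the diagonal and the height is 3/5); a small illustrative check:

def width_height(diagonal: float, ratio_w: int = 4, ratio_h: int = 3):
    """Width and height of a screen from its diagonal and aspect ratio."""
    hypotenuse = (ratio_w ** 2 + ratio_h ** 2) ** 0.5
    return diagonal * ratio_w / hypotenuse, diagonal * ratio_h / hypotenuse

print(width_height(15))         # (12.0, 9.0) for a 15-inch 4:3 monitor
print(width_height(15, 16, 9))  # roughly (13.1, 7.4) for 16:9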
A system for detecting errors in color balance in white and black areas of the picture and automatically adjusting the white and black levels of both the red and blue signals as needed for correction.
An automatic sequential video switcher which has manual switches or buttons which allow a single camera to be displayed on screen without sequential switching.
AUTO IRIS LENS
A lens with a mechanical iris which is controlled by a motor or other electrical device and responds to the changing video levels produced by a camera. The iris will be driven more open as the light hitting the image device becomes less and less and visa versa with an increase in light. The lens automatically adjusts the amount of light reaching the imager.
AUTO LIGHT RANGE
The range of light, e.g., sunlight to moonlight, over which a TV camera is capable of automatically operating at specified output.
Any action initiated or controlled by an electronic circuit without manual intervention.
AUTOMATIC BRIGHTNESS CONTROL
In display devices, the self-acting mechanism which controls brightness of the device as a function of ambient light.
AUTOMATIC GAIN CONTROL
A process by which gain is automatically adjusted as a function of input or other specified parameter.
AUTOMATIC LIGHT CONTROL
The process by which the illumination incident upon the face of a pickup device is automatically adjusted as a function of scene brightness.
This is a device used to match or transform an unbalanced coaxial cable to a balanced twisted pair system. Lets one run a video signal over (cat 5) computer networking cable.
When dealing with frequencies it is the area between the boundary of lower and upper limits of specific frequencies. Example: a bandwidth of 800 MHz (megahertz) would be 800 MHz wide starting from any point (100 MHz) and ending 800 MHz above (900 MHz). Specific to television technology, bandwidth can be related to system resolution or lines of resolution, where 1 MHz of bandwidth is equal to 100 lines of resolution.
The defocusing of regions of the picture where the brightness is at an excessive level, due to enlargement of spot size and halation of the fluorescent screen of the cathode-ray picture tube. In a camera, sensor element saturation and excess which causes widening of the spatial representation of a spot light source.
Video connector, the most commonly used in CCTV.
A term used to describe a type of automatic sequential switcher which has the ability to send a signal to two monitors (dual output) one continually sequencing and the second one able to display any one of the camera inputs on the bridged monitor. There are two methods of bridging a second monitor. The first is passive where the video signal is Ted off the incoming line and sent to the second monitor, in this case the bridged monitor must be unterminated. The second method is active where the video signal is regenerated by a distribution amplifier in the switcher and sent to the bridged monitor in which case the monitor must be terminated. The bridging switchers are not always marked as being active or passive so attention must be given to the video signal on the bridged monitor to be sure if the termination must be set to on or off.
The attribute of visual perception in accordance with which an area appears to emit more or less light (Luminance is the recommended name for the photo-electric quantity which has also been called brightness.)
Also called burn. An image which persists in a fixed position in the output signal of a camera tube after the camera has been turned to a different scene or, on a monitor screen.
A specific style of mount used to connect a lens to a camera body. This style is the standard in the CCTV industry. Specifically it is 1 in diameter with 32 threads per inch, of the 16 mm format.
Refers to the image pick-up device size (1, 2/3, &Mac189; and 1/3). The format of the camera determines the type and size of lens used for the desired scene.
CATEGORY 5 (CAT 5)
Twisted pair wire with data rates up to 100 Mbps (1000 Mbps with 4 pairs). No longer supported; replaced by CAT 5E (1000 Mbps).
An environmental protective enclosure for a cctv camera. Can include a heater for winter and blower for summer.
Charge-Coupled Device is a solid state device that converts an optical image into an electrical current which is processed into a video signal. Also known as a Chip camera.
Common abbreviation for Closed-Circuit Television.
See CCD. For imaging devices, a self-scanning semiconductor array that utilizes MOS technology, surface storage, and information transfer by shift register techniques.
In the CCTV industry, refers to a solid state camera (e.g. CCD).
A color term defining the hue and saturation of a color. Does not refer to brightness.
That portion of the NTSC color television signal which contains the color information.
A special type of wire used to carry low voltage signals. It gets its name from having two conductors configured in concentric circles or two conductors having the same axis. The center conductor is a solid or stranded wire running the full length of the cable. The second conductor is constructed in a braided fashion around a separating non-conductive material. This material is called the dielectric. Coaxial cables have a rating called impedance which is a measurement of resistance and capacitance, the rating for cable used in the CCTV industry is 75 Ohm.
The degree to which a color is free of white or any other color. In reference to the operation of a tri-color picture tube it refers to the production of pure red, green or blue illumination of the phosphor dot face plate.
The transmission of a signal which represents both the brightness values and the color values in a picture.
COMPOSITE VIDEO SIGNAL
A video signal comprised of all elements which make up a standard NTSC video signal, including sync, horizontal and vertical timing pulses, black level and video level from reference black to peak white. If color is a factor then color burst will also be present in the composite video signal.
The noticeable difference between blacks and whites in a picture. If the two extremes look like gray and off-white the contrast is not good. A gray scale can be used to check the monitors ability to reproduce good contrast.
The ratio between the whitest and blackest portions of television image.
Special C-Mount. Same physical characteristics except it places the back plane of a lens 5mm nearer to the image device. Auto Iris versions of this format are made in two main varieties. The main distinction between them is one has the electronics to control the iris in the lens and other relies on the camera to supply varying voltages to the lens.
DEPTH OF FIELD
A specific window of distance in which objects remain in focus. Example a cameras field of view will present a picture encompassing a distance of 200 feet from the camera; objects in the picture are in focus at a distance of 10 feet to 90 feet within the picture. The window or depth of field is 80 feet for this example. The depth of field will change in relation to the change in F-stop where the depth of field increase as the F-stop increases (F-numbers get larger).
DEPTH OF FOCUS
The range of sensor-to-lens distance for which the image formed by the lens is clearly focused.
This is the latest form of recording and, as a result, is not the most economical method; however, it does have several advantages over VCR analogue tape recorders. First of all, it enables quick access to the desired event and does not require swapping of tapes.
DIGITAL SIGNAL PROCESSING
An algorithm within the camera that digitizes data (the image). Examples include automatic compensate for backlight interference, color balance variations and corrections related to aging of electrical components or lighting. Functions such as electronic pan and zoom, image annotation, compression of the video for network transmission, feature extraction and motion compensation can be easily and inexpensively added to the camera feature set.
The deviation of the received signal waveform from that of the original transmitted waveform.
A device that provides several isolated outputs from one looping or bridging input, and has a sufficiently high input impedance and input-to-output isolation to prevent loading of the input source.
The length of time a picture from a single camera stays on the screen. Usually associated with automatic sequential switchers.
Digital Video Recorder
Electronic Industry Association. US TV standard 525 lines 60 fields.
An electronic circuit that introduces compensation for frequency discriminative effects of elements within the television system, particularly long coaxial transmission systems.
Light Factor (f). The ability of a camera lens to pass light. A value used to indicate the speed of a lens, where the smaller the number the better (a "fast" lens); the normal f-stop in CCTV lenses is f/1.4 or f/1.2. Each increase in f-stop decreases the amount of light passed through the lens by 50%. The normal full f-stops are: f/1.0, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16 and f/22.
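Because light transmission falls roughly as the square of the f-number, each full stop passes about half the light of the one before it; a small illustrative tabulation:

# Relative light transmission for the standard full-stop series,
# taking f/1.0 as the reference. Transmission scales as 1 / (f-number squared),
# so each full stop passes roughly half the light of the previous one.
f_stops = [1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0]

for f in f_stops:
    relative = (f_stops[0] / f) ** 2
    print(f"f/{f:<4} -> {relative:.2f} of the light at f/1.0")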
FIBER OPTICS (FIBER)
Flexible glass fibers drawn from the highest quality pure glass used to conduct light energy. The term has come to mean any and all equipment associated with the use of these fibers such as the light power transmitters and receivers, the connecting technology and various cabling systems.
FIELD OF VIEW
The width and height or area desired to be covered by one camera. This area is determined by the focal length of the lens on the camera and the distance the camera is mounted from the scene.
The distance from the focal point or center of the lens to the focal plane or image pick-up device and usually expressed in millimeters (mm). The larger the number the longer the lens and the more telephoto the field of view.
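Taken together, the two entries above suggest the usual back-of-the-envelope lens calculation: the focal length needed to cover a scene of a given width at a given distance is approximately the sensor width multiplied by the distance and divided by the scene width. The sensor widths in this sketch are nominal figures commonly used for lens selection, not exact die dimensions.

# Nominal image-sensor widths in millimeters (approximate figures
# commonly used for lens selection, not exact die dimensions).
SENSOR_WIDTH_MM = {'1"': 12.8, '2/3"': 8.8, '1/2"': 6.4, '1/3"': 4.8}

def focal_length_mm(fmt: str, distance_m: float, scene_width_m: float) -> float:
    """Approximate focal length to cover scene_width_m at distance_m."""
    return SENSOR_WIDTH_MM[fmt] * distance_m / scene_width_m

# Example: a 1/3" camera covering a 10 m wide scene from 20 m away.
print(round(focal_length_mm('1/3"', 20, 10), 1), "mm")  # about 9.6 mm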
A measure of light intensity. A unit of illuminance on a surface that is everywhere one foot from a uniform point source of light of one candle and equal to one lumen per square foot.
Frames per second
In video it is one still picture with a duration or dwell time of 1/30th of a second made up of 525 horizontal lines. A frame is made from two fields, each having 262½ horizontal lines which are interlaced. The video frame is similar to one still picture of a motion picture film.
A CCD imager where an entire matrix of pixels is read into storage before being output from the camera. Differs from Interline Transfer where lines of pixels are output.
An increase in voltage or power, usually expressed in dB.
A numerical value used to express contrast levels in television pictures. A value of one (1) indicates a linear characteristic. Less than one indicates a curve or less contrast levels represented by a softer looking picture. The standard for a camera is .45 and for monitors is .55.
To provide for a linear transfer characteristic from input to output device.
The warped look of objects in a television picture due to erratic scanning of the electron beam in the picture tube or vidicon tube. A circle may look egg shaped or a straight line look like a curve.
A pattern of vertical bars with shades of gray starting with white and gradually getting darker gray until ending at black. Most gray scales used in television have 10 steps or bars. The pattern is used to test the ability of a camera to reproduce true white, black and the varying steps of gray in-between.
Caused by different earth potentials clearly seen as interference or humbars on a video signal.
A term used to describe a type of automatic sequential switcher which will stop sequencing and remain on a single camera input displayed on the monitor when a switch or button is depressed which is identified with the desired camera to be displayed. This switcher has only one monitor output.
A number used to measure the ability of a camera or monitor to accurately reproduce a picture with many small picture elements. A resolution chart is used when testing a camera. The chart has several circles one large one in the center with lines in a fan shape converging in the center. The lines are marked with resolution numbers which increase as the lines become closer to each other and the center. The maximum resolution displayed on the chart is 800 lines. The more common CCTV cameras are capable of producing from 350 to 500 lines of resolution.
Corresponds to colors such as red, blue, etcetera.
An electronic device which is used to amplify small amounts of light into usable amounts of light to produce a video picture. The device uses very high voltages to accelerate photons in a vacuum tube. The device is placed between a lens mount and an image device such as a vidicon or CCD. The lenses used on cameras with intensifiers must have special attributes with f-stops of up to f/1200.
Impedance is a value of a circuit expressed in ohms, but is arrived at by the combination of resistance, capacitance and inductance. The symbol for impedance is Z. This term is used with reference to cable as well as electronics, example: RG59 coaxial cable has an impedance of 75 ohms. The input or output characteristic of a system component that determines the type of transmission cable to be used. The cable used must have the same characteristic impedance as the component. Expressed in ohms. Video distribution has standardized on 75-ohm coaxial and 124-ohm balanced cable.
The light that falls directly on an object.
Extraneous energy which tends to interfere with the reception of the desired signals.
In television theory the method of placing horizontal scan lines in between each other during the period of one frame. The first field lays down 262½ horizontal lines and then the second field lays down 262½ horizontal lines in between the first set of lines in field number one.
A scanning process for reducing image flicker in which the distance from center to center of successively scanned lines is two or more times the nominal line width, and in which the adjacent lines belong to different fields.
A technology of CCD design, where rows of pixels are output from the camera. The sensors active pixel area and storage register are both contained within the active image area. This differs from frame transfer cameras that move all active pixels to a storage register outside of the active area.
The Internet Protocol address; a unique numeric address such as 18.104.22.168 Also see Static IP and Dynamic IP.
Infrared light, invisible to the human eye. It usually refers to wavelengths longer than 700 nm. Monochrome (B/W) cameras have extremely high sensitivity in the infrared region of the light spectrum.
An adjustable aperture built into a camera lens to permit control of the amount light passing through the lens.
Local Area Network. A computer network that is housed within a single building.
A transparent optical component consisting of one or more pieces of optical glass with surfaces so curved (usually Spherical), that they serve to converge or diverge the transmitted rays of an object, thus forming a real or virtual image of that object.
LENS PRESET POSITIONING
Follower Pots are installed on lens that allows feedback to the controller information relevant to zoom and focus positioning allowing the controller to quickly adjust to a preselected scene and arrive in focus at the proper focal length automatically.
Refers to the ability of a lens to transmit light, represented as the ratio of the focal length to the diameter of the lens. A fast lens would be rated <f/1.4; a much slower lens might be designated as >f/8. The larger the f number, the slower the lens.
Electromagnetic radiation detectable by the eye, ranging in wavelength from about 400 to 750 nm.
An amplifier for audio or video signals that feeds a transmission line; also called program amplifier.
Also called looping. The method of feeding a series of high impedance circuits (such as multiple monitor/displays in parallel) from a pulse or video source with a coax transmission line in such a manner that the line is bridged (with minimum length stubs) and that the last unit properly terminates the line in its characteristic impedance. This minimizes discontinuities or reflections on the transmission line.
Referring to the video inputs on a device such as a switcher or quad. The input is not terminated and has provisions to continue the video line, normally two BNC connectors are present on the rear of the device making it possible to connect the video signal to additional devices. The looping of a video signal should be limited to a few devices depending on distance between devices.
A reduction in signal level or strength, usually expressed in dB. Power dissipation serving no useful purpose.
Distortion effects which occur at low frequencies. In television, generally considered as any frequency below the 15.75-kHz line frequency.
Luminous intensity (photometric brightness) of any surface in a given direction per unit of projected area of surface as viewed from that direction, measured in footlamberts (fl).
That portion of the NTSC color television signal which contains the luminance or brightness information.
A measure of light intensity (1 Foot Candle approximately 10 Lux). International System (S1) unit of illumination in which the meter is the unit of length. One lux equals one lumen per square meter.
A term used with regard to lenses and has evolved to be the same with power when describing the size of a zoom lens. A 16 to 160 mm lens is said to be a 10X or ten power zoom lens. It has a magnification of 10. The other standard is a 6X lens like a 12.5-75 mm. The largest mm is divided by the shortest mm to give the power or magnification.
(1) Referring to a type of video switcher, which can be passive or active, with multiple cameras in and one monitor out, which requires buttons or switches to be pressed to change the camera displayed on the monitor.
(2) Referring to types of lenses, meaning non-auto iris but manual adjust iris; also describing the zoom function of non-motorized zoom lenses.
The process, or results of the process, whereby some characteristic of one signal is varied in accordance with another signal. The modulated signal is called the carrier. The carrier may be modulated in three fundamental ways: by varying the amplitude, called amplitude modulation; by varying the frequency, called frequency modulation; by varying the phase, called phase modulation.
A unit of equipment that displays on the face of a picture tube the images detected and transmitted by a television camera.
Black and white with all shades of gray.
In monochrome television, a signal wave for controlling the brightness values in the picture. In color television, that part of the signal wave which has major control of the brightness values of the picture, whether displayed in color or in monochrome.
The transmission of a signal wave which represents the brightness values in the picture, but not the color (chrominance) values.
A unit that can accept a number of camera inputs and almost simultaneously display them on a single monitor and/or record them to a single video tape. Multixplexers can also be used to transmit multiple cameras over the same transmission medium.
A filter that attenuates light evenly over the visible light spectrum. It reduces the light entering a lens, thus forcing the iris to open to its maximum.
The word noise originated in audio practice and refers to random spurts of electrical energy or interference. In some cases, it will produce a salt-and-pepper pattern over the televised picture. Heavy noise is sometimes referred to as snow.
(National Television Systems Committee). The standard television format (signal specifications) arrived at by this committee and the Federal Communications Commission to guide manufacturers and broadcasters so that all products in this country would be compatible, whether the signal was black and white or color. This system has 525 horizontal scan lines with 30 frames per second, commonly used in the United States and Japan. Some of the other formats of the world include PAL, CCIR, and SECAM.
The signal level at the output of an amplifier or other device.
Phase alternating line. Describes the color phase change in a PAL color signal. PAL is a European color TV system featuring 625 lines per frame, 50 fields per second and a 4.43361875-MHz sub-carrier.
PAN AND TILT
A device upon which a camera can be mounted that allows movement in both the azimuth (pan) and in the vertical plane (tilt).
PTZ (PAN TILT ZOOM)
Same as Pan Tilt but includes a camera with zoom functionality.
PAN/TILT PRESET POSITIONING
Follower pots are installed on pan/tilt unit to allow feedback to the controller and provides information relevant to horizontal and vertical positioning, allowing the controller to quickly adjust to a pre-selected scene automatically.
The amplitude (voltage difference between the most positive and the most negative excursions (peaks) of an electrical signal. A full video signal measures one volt peak to peak.
A special lens with a very small objective lens which is able to gather light through a very small opening. These lenses are generally physically longer than a normal lens of the same focal length and are available in manual and auto-iris configurations; some are also made in a right-angle form. The f-stop of these lenses will generally be two or more f-stops slower than a lens of comparable focal length.
Short for Picture Element. A pixel is the smallest area of a television picture capable of being delineated by an electrical signal passed through the system or part thereof. The number of picture elements (pixels) in a complete picture, and their geometric characteristics of vertical height and horizontal width, provide information on the total amount of detail which the raster can display and on the sharpness of the detail, respectively.
Point Of Sale. A cashiering system that rings up merchandise by scanning the barcodes on merchandise.
Three colors wherein no mixture of any two can produce the third. In color television these are the additive primary colors red, blue and green.
This word can pertain to any function that can be operated from a distance, i.e. (1) video switching; (2) camera motion; (3) recording; (4) relay action. These remote functions can be caused to happen due to direct voltage over a long cable or digital information carried by cable, light, radio frequency transmission, etc.
The amount of resolvable detail in the horizontal direction in a picture. It is usually expressed as the number of distinct vertical lines, alternately black and white, which can be seen in a distance equal to picture height.
The amount of resolvable detail in the vertical direction in a picture. It is usually expressed as the number of distinct horizontal lines, alternately black and white, which can theoretically be seen in a picture.
Also called image burn. A change produced in or on the target which remains for a large number of frames after the removal of a previously stationary light image and which yields a spurious electrical signal corresponding to that light image.
RF (Radio Frequency)
A frequency at which coherent electromagnetic radiation of energy is useful for communication purposes. Also, the entire range of such frequencies.
A loss of vertical synchronization which causes the picture to move up or down on a receiver or monitor.
The initials RS stand for Recommended Standard. Many such standards, pertaining to the types of signals produced by electronic circuits, were published by the Electronic Industries Association (EIA). This particular standard deals with a television signal parameter, setting requirements for signal level and timing.
In color, the degree to which a color is diluted with white light or is pure. The vividness of a color, described by such terms as bright, deep, pastel, pale, etc. Saturation is directly related to the amplitude of the chrominance signal.
The movement of an electron beam from left to right and top to bottom over a target area used to produce a video signal and reproduce a visual image.
Software Development Kit. A resource by which manufacturers allow developers to add new functionality to their proprietary product.
In television, a factor expressing the incident illumination upon a specified scene required to produce a specified picture signal at the output terminals of a television camera.
Relating to, or arranged in, a sequence. Used in relation to switching camera inputs to a monitor so as to display different camera scenes one at a time in a repeating sequence.
The ability to control the integration (of light) time of the sensor to less than 1/60 second; e.g., to stop the motion of moving traffic.
A single CCTV cable that includes RG59 coax and power wires.
SIGNAL TO NOISE RATIO
The ratio between the useful video signal and unwanted noise, usually expressed in dB (decibels).
Heavy random noise.
A small filter placed in the center of one of the elements of a lens to increase the high end of the lens's f-stop range from f/64 to f/1000 or more. These filters are of the neutral-density type, which does not affect the color rendition of the lens.
STATIC IP ADDRESS
A Static or Dedicated IP address is a type of account from an ISP where your computer or network is assigned the same constant IP Address at all times. Also see IP Address and Dynamic IP.
Referring to a part of a video signal, but also a shortened version of the word synchronize. The part of a video signal which synchronizes the scanning of a monitor to the scanning of an image device. There are vertical sync pulses and horizontal sync pulses which are used to keep the timing or start time of the electron beam in proper synchronization.
The signal employed for the synchronizing of scanning.
Maintaining two or more scanning processes in phase.
A term used to describe a picture condition in which groups of horizontal lines are displaced in an irregular manner.
The end, a boundary, a closing. Used when referring to the end point of a video signal. A video signal has a definite beginning and end and cannot be split, divided, or "Y"ed. The signal can be looped in and out of different devices, which are not terminated, but at the end of the line a 75 ohm resistor must end, or terminate, the signal.
The number of horizontal lines that can be seen in the reproduced image of a television pattern. 350 lines maximum with the 525 NTSC system.
A wideband amplifier used for passing picture signals.
VIDEO TAPE RECORDER
(VTR) or Video Cassette Recorder (VCR). A device which stores video signals on magnetic tape for retrieval at a later time.
To enlarge or reduce, on a continuously variable basis, the size of a televised image primarily by varying lens focal length.
A variable focal length lens. The lens components or elements in these assemblies are able to move to change their relative physical positions, thereby varying the focal length and angle of view through a specified range of magnifications. The sizes of these lenses are expressed as their range of focal lengths, i.e., 11-110 or 8-48, which gives the two focal length values. They will also have a magnification factor such as 6X or 10X. The 11-110 mm lens is a 10X lens because if 11 mm, the wide field of view, is multiplied by a factor of 10, the result is 110 mm, the telephoto field of view.
In biology, the response of an ecosystem to change has one of three outcomes: 1) the system adapts to the change and thrives, 2) the system rejects the change and ossifies, or 3) the system can't adapt and dies.
So, how does that lesson apply to IT? IT is, for better or worse, the change agent in many organizations. One of the risk factors in initiating change is understanding the rate of change the organization can absorb. The old story about the frog in water is illustrative: If you put a frog in cold water and heat it up gradually, the frog will stay in the water until it cooks. If you drop the frog in hot water it will jump out.
Another risk factor is letting change get out of hand. Take the example of cellular growth. It's a great thing to have. It helps the organism replace dead cells. It provides the organism with additional resources to support a larger, more robust structure. But what happens when the cellular growth gets out of hand? We call it cancer.
Planning for controlled change is crucial to successful growth for both organizations and organisms. Putting governance mechanisms in place to control change will keep wild undesirable growth from sucking the life out of your company.
Change is a positive thing for many environments. Take the pond scum example. Your backyard pond is probably not the most attractive thing in the world when it's covered with algae. So you change it by getting the water to move, which inhibits the growth of algae.
You add a little base or acid to improve the pH of the water so the koi don't die. But you do that with a little planning, because too much or too little will unbalance things to the point where desirable pieces of the ecosystem die off. Working with and involving all the affected parties in planning change will make that change more likely to be a success.
The hoarding of data by companies is usually accompanied by the assurance that all the obvious identifiers (name, address, Social Security number) have been deleted – i.e. that the data has been “anonymized”. The goal is to retain the usefulness of the data without endangering the privacy of the individuals it pertains to.
But is that possible? Ars technica‘s Nate Anderson shows with an example that even without the obvious identifiers, it is still possible to tie the data to an individual. Case in point – the release of “anonymized” data regarding the hospital visits of state employees by the Massachusetts Group Insurance Commission in 1990.
At the time, Latanya Sweeney, a graduate student in computer science, chose to test that assertion. Combining the data in question with data she obtained by buying the voter rolls of the city where the then-Governor of Massachusetts lived (which included the name, address, ZIP code, birth date and sex of every voter), she managed to find out which records were his, using the simple method of excluding conflicting data.
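The mechanics of such a linkage attack are straightforward. A minimal sketch in Python using pandas (the file names and columns below are invented for illustration; they are not the actual Massachusetts or Cambridge data sets) might look like this:

import pandas as pd

# Quasi-identifiers shared by the two data sets (illustrative column names).
key = ["zip", "birth_date", "sex"]

hospital = pd.read_csv("hospital_records.csv")   # key columns plus diagnosis, no names
voters = pd.read_csv("voter_rolls.csv")          # key columns plus name and address

# Keep only voters whose ZIP/birth-date/sex combination is unique on the roll...
unique_voters = voters.drop_duplicates(subset=key, keep=False)

# ...then any hospital record that joins to one of them is re-identified.
reidentified = hospital.merge(unique_voters, on=key)
print(reidentified[["name", "diagnosis"]])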
Throughout the years, there were other examples of failed anonymization: AOL, Netflix, etc. It seems that most data can be “personal” if combined with the right amount of other relevant data. If that proves to be true, it raises some good questions: all the data that is collected in various databases around the world, if combined – what can it say about me? And can it be used for malicious purposes?
And from the companies’ and researchers’ point of view: can the information be still useful if stripped of all the potential “personal” elements? | <urn:uuid:68bc94fc-73e1-4431-9550-ffcbcbcd9441> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2009/09/08/is-it-possible-for-data-to-be-both-anonymous-and-useful/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00271-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941657 | 348 | 2.921875 | 3 |
FTP and Encrypted File Transfer
FTP can be an efficient way to move large files between servers or individuals. The protocol itself is proven and mature but often its implementation is the security issue. Many organizations built their FTP systems so long ago that they are horribly outdated and can't keep up with current security standards. FTP uses clear text to transfer files, regardless of whether or not those files contain sensitive information. Nowadays, organizations understand the need to focus more on data encryption and certification of the people receiving that data. Outdated FTP systems don't provide the on-the-fly adjustments, guaranteed delivery or error-handling needed to meet ever-changing needs.
Encrypted file transfer
Encrypted file transfer is a safer option. Although Secure File Transfer Protocol (SFTP) or FTP over Secure Sockets Layer (FTPS) adds encryption and helps mitigate risk, this is not enough. Encryption standards don't make the files being transferred impenetrable to security threats. Organizations must ensure that attachments are fully encrypted while in transit and while at rest awaiting receipt.
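As a rough illustration of that combined requirement (encryption in transit and at rest), the following Python sketch uses the third-party paramiko and cryptography packages; neither is named in this article, and the host, credentials and paths are placeholders. The payload is encrypted before it leaves the sender, travels over SFTP, and remains ciphertext while it sits on the server awaiting receipt:

import paramiko
from cryptography.fernet import Fernet

# Encrypt the payload first, so the receiving server only ever stores ciphertext.
key = Fernet.generate_key()                      # share the key with the recipient out of band
with open("report.xlsx", "rb") as src:
    ciphertext = Fernet(key).encrypt(src.read())
with open("report.xlsx.enc", "wb") as dst:
    dst.write(ciphertext)

# Then move it over an encrypted channel (SFTP runs over SSH).
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # pin known host keys in production
ssh.connect("files.example.com", username="transfer")       # relies on keys from the SSH agent
sftp = ssh.open_sftp()
sftp.put("report.xlsx.enc", "/inbound/report.xlsx.enc")
sftp.close()
ssh.close()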
The proliferation of software as a service (SAAS) and cloud computing also heightens the security threat. To have a successful SAAS or cloud offering, the underlying technology must be proven, secure and scalable. A solution that securely passes files through the cloud can make moving your business-critical data simpler and more accessible. However, files should never be stored in the cloud, as data at rest is the weakest link. | <urn:uuid:406178bd-72fd-4b79-b3b6-e6072af29975> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Security/How-to-Securely-Exchange-Massive-BusinessCritical-Files/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935717 | 303 | 2.671875 | 3 |
If not documents, then what? Two important kinds of outputs of a slice of a software development process are shown in Figure 1: (1) knowledge and understanding on the part of the system builders, and (2) documentation of that understanding.
Figure 1: Milestones defined as measurable increases in knowledge.
Documents are merely evidence that a person has performed certain intellectual activities. For example, a test plan is evidence that a test planner has enumerated the tests that need to be done, and explained their rationale. However, one does not know if test planning is actually complete (has sufficient coverage) unless someone credible and impartial assesses the plan. That is, the plan needs to be verified.
Progress should be measured through tangible outcomes whenever possible, or through independent assessment when there are no tangible outcomes. The outcomes or the assessment are the credible indicators of progress, not the documents. For example, how do you know whether a design is robust enough to proceed with development? The assertion that a design document has been completed is not a reliable indicator, because it is well-known in software development that designs evolve substantially throughout implementation.
How then can one tell whether one is at a point at which proceeding with development will be productive or lead to lots of rework and perhaps even scrapping a first attempt at building the system? Prototypes are useful for this purpose, and so the successful completion of prototypes that address critical design issues is a better indicator of readiness than the completion of a design document. In any case, progress should be seen in terms of the creation of actionable knowledge, not artifacts.
The Scaling Problem
As projects scale, the effects of a document-centric process become more prominent, because those who create the documents tend to be less available to answer questions. Teams create documents and pass them on to other teams, and the original teams are often re-deployed to other activities. They might even be located at a separate site. Programmers, testers, and others are expected to pick up the documents and work from those alone. It is as if someone sent you a book of calculus and said, "Here, build a program that implements this." No wonder large projects tend to fail. Due to pressure to optimize the deployment of resources, large projects tend to consist of many disjointed activities inter-connected by the flow of documents. But, since documents are information and not knowledge and are therefore not actionable, these flows tend to be inadequate.
Agile methods have been extended to large projects. For example, see Scott Ambler's article "Agile and Large Teams." Ambler is Agile Practice Lead for IBM/Rational and tends to work on very large projects. The basic approach is to decompose the project into sub-projects, define interfaces between the associated sub-components, and define integration-level tests between these sub-components. This is very much a traditional approach, except that documents are not used to define all of this ahead of time. Instead, the focus is on the existence and completeness of the inter-component test suites, on keeping interfaces simple, and on allowing interfaces (including database schemas) to evolve while keeping the inter-component tests up to date.
Business continuity is an offensive as well as defensive business strategy – so why is so much focus given to the latter? David Honour comments.
When the term business continuity first started to be used, in the mid-1980s, it was very much the province of the business community – hence the name. This remained the case through the early 1990s, but in more recent times, essentially with the advent of the so-called Millennium Bug, the public sector started to turn its eyes towards business continuity. This process has continued inexorably and has brought many benefits. However, one result has been that the vast majority of business continuity thought-leadership initiatives in the 21st century have been public-sector driven. True, the financial sector has also been a huge-player, and other blue chip companies from other sectors have played their part, but such companies tend to be monolithic business structures which often operate more like a public sector organization than a fleet-footed and nimble entrepreneurial smaller business.
The result of the above has been that business continuity management has been largely formed and shaped by and for large organizations. And the result of this has been an almost exclusive focus on business continuity as a defensive / protective strategy.
The common definitions of business continuity management support the above statement:
Business continuity management is:
“An holistic management process that identifies potential impacts that threaten an organization, and provides a framework for building resilience and the capability for an effective response which safeguards the interests of its key stakeholders, reputation, brand and value-creating activities.” (Business Continuity Institute)
“A holistic management process that identifies potential impacts that threaten an organization and provides a framework for building resilience with the capability for an effective response that safeguards the interests of its key stakeholders, reputation, brand and value creating activities. The management of recovery or continuity in the event of a disaster. Also the management of the overall program through training, rehearsals, and reviews, to ensure the plan stays current and up to date.” (Disaster Recovery Institute International)
The focus is on building resiliency and response mechanisms with the aim of *safeguarding* stakeholders and key aspects of the business.
However, there is another aspect of business continuity management which has been neglected and which returns business continuity to its roots. This is business continuity as an offensive strategy; as a competitive strategy. This is where business continuity is seen not just as a way of protecting and safeguarding the business; but it is seen as a way of growing the business. As such it is a very strong lever for gaining executive board buy-in and for supporting budgetary claims.
There are two areas where business continuity can be used as a competitive strategy: pre-disaster and post-disaster.
In a commercial world which is increasingly becoming aware of the need for business continuity; having a top-quality, tried and tested business continuity management structure in place helps your company stand out from others. This will become an even stronger competitive advantage over the next few years as business continuity standards take hold around the world and their associated accreditation schemes highlight those companies that have taken business continuity seriously. The time may come when companies will only be able to do business with the public sector, for example, if they can show that they are accredited to a business continuity standard. If this happens companies that are prepared NOW will gain a strong advantage over their competitors who are trying to play ‘catch up’.
Another non-disaster related benefit of business continuity management is that it can help to create a business which operates its systems to the optimum level. As highlighted in the definitions above, business continuity management involves creating resilient businesses; companies that are flexible and which can quickly identify and respond to challenges and threats. However, resiliency is not just a benefit in times of disaster. Resiliency can provide day-to-day business benefits; with hardened systems failing less often and production returning more quickly from day-to-day glitches than would be the case in less resilient businesses. In other words, business continuity management is about optimizing process availability; and this enables businesses to operate to maximum efficiency at all times.
The company that successfully operates a true business continuity management culture will have systems that are more effective; more efficient; more fully utilized than their competitors. Such a company will be able to maximize the return on investment it makes in business processes. It will be more productive, more reliable and an excellent partner and supplier. When it sets a deadline it will meet it. When it undertakes a project, it will deliver on time and on budget. In this way, business continuity management becomes necessary not because it means you will survive into the distant future; but because it will make you a better and more competitive business today.
It is well documented that an effective disaster response can help a company’s share price to increase and its reputation to become stronger. The definitive study in this area was carried out by Knight and Pretty (‘The impact of catastrophes on shareholder value’, 2000).
However, beyond this advantage there is another, more entrepreneurial, competitive advantage. The business that recovers most quickly from a wide-area disaster is the business which is able to capitalize on the situation. Disasters create new markets and open up existing ones. This may allow the rapid development and launch of new products (see http://www.continuitycentral.com/news02994.htm for a good recent example of this). Or, if your products and services are available when a competitor’s aren’t, you can gain temporary market share, which may become permanent if your products and services are at least as good as your competitor’s. Weakened competitors can be weakened further by effective business continuity coupled with good marketing.
That may sound cold-hearted and ‘against the spirit of business continuity’. But business is about competition. It is about being one-step ahead of the competition. It’s about building your company to be better and more capable in what it does than others around it. Business continuity can be one tool which enables you to do this.
David Honour is editor of Continuity Central.
Excellent article. In my DR/BC planning practice, I am continually distinguishing the difference between DR and BC to my clients, large and small. All see value in ensuring that their technology environments are protected with a viable recovery schema. However, when one asks, "What are you going to do with your folks and their business process when the building burns down," there is generally a long silence. As you so aptly pointed out, having a thoughtful plan gives shareholders a sense of security for their investment, and signals that management is aware of the problem, concerned about the firm's employees and has taken steps to protect shareholder value. A formidable competitive advantage in this day and age of heightened threat. Please soldier on with additional articles like your recent one; I think any DR/BC professional should help his or her client to see and understand the full picture, and articles like yours assist this process.
Noel Castleman – CBCP
In response to the question "Business continuity is an offensive as well as defensive business strategy - so why is so much focus given to the latter?": having been introduced to disaster recovery planning in the late 1970s and contingency planning in the mid 80s, it is very evident to me that the promotion and commercialism in the continuity profession has always been reactive. Listen to the words that are always talked about - notification, response, recovery, restoration, alternate sites, incident management, emergency response, etc. When was the last time in a board meeting or CxO meeting that the word continuity was used as a business solution to gain market share, streamline business functions, or acquire a competitor?
Exactly the point, hardly, if at all. We in the continuity profession are always reading information dealing with disaster, preparing for the incident that will happen some time in the future. Getting information from vendors and consultants that talk about preparedness, scenario planning, alternate sites, contacting those who you need when you need them, RPOs and RTOs. And then attending conferences that indicate that we, the professionals, should be talking with management about proactive approaches for preventing or mitigating incidents while planning for and implementing workforce contingencies.
I have yet to hear a consultant in 32 years tell me that by using business continuity management as a business benefit, the business will get a better return on investment, will be able to increase revenues, will finally streamline those business operations that are outdated and will be able to manage the data more effectively while cutting operating costs.
I believe that business continuity can be used to provide day to day business benefits, but I also believe that the continuity industry as a whole has to change and focus on promoting those business benefits. Until change happens in our profession, we will still be focused on selling defensive strategies.
Steve Schulze, CBCP
Excellent article and a great question to defreeze the mind of all the business continuity professionals around the globe. It's the same old story of moving from a reactive approach to a proactive approach. Why do organisations need to be prepared? What good could an organisation bring to its balance sheet at the end of the year if business continuity was done as per the expectations? And the point that all business continuity professionals make: how does my CxO see benefit coming out of business continuity strategies if an incident does not happen?
It’s interesting to know that we all try to convince management by saying that if you do business continuity, it will make the organisation better placed amongst the competitors. Do we all know who these competitors are? Are we able to quantify or equate the word competitive advantage to the dollar amount and present this to management? Do we know what business continuity strategies the organisation's competitors have put in place? Does this reflect in the annual review with our management? If we can answer these questions and ensure that management is aware of the opportunities that will come their way during the time of crisis the business can be in a better position in a pre- as well as post- crisis situation.
A case study around this could be:
"I buy Product A, let’s say a mobile phone from a telecoms company. This company had thought of building resilience into the product but had not known how their competitors had thought of building resilience into their product. The business continuity specialist for this company had a hard time convincing the management to build the resilience into the infrastructure because the management considered it to be an overhead expense. While using this cell phone I realise that the instrument doesn't work that well when used for long duration calls. I, as consumer of the product, live with it until I come across a friend of mine who has a cell phone which works absolutely fine when exposed to the problem faced by my cell phone. I prefer to change the product and use the other company's cell phone, called Product B. The manufacturer of this cell phone had known where his competitor's might be lacking and hence built enough resilience into the product. As they say the fire spreads and there's news in the market that Product A cell phones are bad and don't work well in long duration calls. Product B now sells like hot cake and generates a lot of ROI"
The moral is that the business continuity specialist for the organisation pioneering in Product B was more aware of the company's competitors and was able to show an increase in the dollar value amount to the management when the product was being designed. This comparative analysis helped the management to take a decision to build a product that can outshine all others.
It all depends on how far the vision of a business continuity specialist can go and how far the organisation is ready to think. My suggestion is to try to equate all the factors to a dollar value and show the positive numbers to the management. No one would want to make a product that doesn't work but are we able to show the management efficiently that if we don't build resilience right now, our competitors might outshine us and we can be out of business? That's one thought that would scare any CEO of the company. You would then find a business continuity review happening every quarter and follow ups also happening regularly. That to me would be the journey from being defensive to offensive and aggressive.
Sachin Dutta, ABCI
Fidelity India Risk Oversight – BCM
I would only agree with the author: unless the 'business' realises the value of and the need for BCM as an enabler and not as a cost, the money spent on BCM is primarily driven and mandated by regulation rather than by strategic advantage! It is only a matter of time before BCM-compliant companies are preferred over others in the vendor selection process itself.
Excellent article; for some time I have promoted the commercial benefits of business continuity. I have two major clients who have attested to the fact that their robust business continuity planning was a key factor in winning major tenders.
In promoting the benefit of BCP I also get away from using the term 'disaster'. The word is entirely negative, means different things to different people, and the whole purpose of business continuity planning is to PREVENT a crisis turning into a disaster. The following quote, taken curiously from a women's magazine, sums it up: "there is no such thing as a disaster; it is merely an inconvenience that can be resolved with a little patience and understanding".
Ben Thomas, MBCI
John Dawes, the captain of the British Lions rugby team, quoted in 1971 before taking on the All Blacks: 'Get your retaliation in first' - the message being 'do unto them before they do unto you!!' - and in essence, the proactive facet of BCM is exactly that!
As you so rightly say in your article, the focus has been about reaction, and really emanated from the DR approach of the early IT days - big beasties ticking in an IT room. The responses are well documented and the timescales etc appropriate for the time - however time has moved on and the dependencies on the core business components of people, ICT Technologies/ Applications, Workplaces and most importantly everyone's expectations of 'always available' means that classic recovery as a component of BCM is only just appropriate for some of the business's requirements.
As for the majority of the key business processes, loss there-off is just plain unacceptable as with one click/one call the client is 'long gorn', possibly for ever as the loyalty factor is almost a custom of the past!!
So, if the board is going public to demonstrate its governance controls, the following proactive actions, as components of the BCM programme, should be underwritten:
* Succession planning for key personnel;
* Loss controls in place for all key buildings/complexes/technologies;
* Critical business processes and key dependencies identified and documented;
* Escalation procedures and decision criteria to move from business-as-usual controls into the exceptional BCM controls created and exercised;
* Company-wide understanding of all the above;
* Inter-linked to the reactive components of the BCM programme.
In conclusion, preparation is all; as in John Dawes' quote, gaining the team's support and understanding of the overall objective ensures the value of the proactive elements of BCM.
Mike Mikkelsen FBCI MCMA
Redan International Limited
Local scheduler character parsing and the command line
The local scheduler uses standard white space-delimited parsing for the command line. This means that if any of the parameters contain white space they need to be enclosed in quotation marks. Certain parameters, such as /start, always contain white space and hence always need to be quoted. Other parameters, such as /exe and /cmd, may or may not contain white space and may or may not need to be quoted.
The following example shows a command line that does not need quotation marks.
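(An illustrative command, reusing the path and switch from the examples below; because neither value contains white space, no quotation marks are needed.)
LocalSch.exe /exe=C:\ldclient\myprogram.exe /cmd=/s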
The following example shows a command line that does need quotation marks.
LocalSch.exe /exe="%ProgramFiles%\MyProgram\myprog.exe" /cmd="/apm /s /ro"
Quoting already quoted parameters
If the parameters that are to be passed to /cmd= are themselves quoted, then three quotation marks appear in a row: one belongs to the pair that quotes the entire /cmd string, and the other two are a doubled quotation mark, which is how a literal quote around an inner value is represented inside the outer pair.
For example, the following command line shows an example of parameters that need to be surrounded by three quotation marks.
LocalSch.exe /exe="%ProgramFiles%\LANDesk\File Replicator\LANDeskFileReplicatorNoUI.exe" /cmd="""%ProgramFiles%\LANDesk\File Replicator\LDHTTPCopyTaskConfig.xml"" ""%ProgramFiles%\LANDesk\File Replicator\replicator.log"""
In the above command, the two parameters are paths to files. Because both paths are in the “Program Files” directory, the paths contain spaces and must be quoted in order to be proper parameters for LANDeskFileReplicatorNoUI.exe. So each quoted parameter is surrounded by a second set of quotes, and then the entire string is surrounded by quotes.
Quoting redirection operators
Quotes must also surround any switches that contain a redirection operator. Redirection operators include the following symbols: <, >, |. The /bw switch uses a | character called a pipe or bar. It is important to remember that the | character is used in the command prompt to pipe the output to another application. To prevent this character from being parsed by the command line, it must be surrounded with quotes.
For example, the following command uses a /bw parameter with a | character and needs to be quoted.
LocalSch.exe /exe=C:\ldclient\myprogram.exe /cmd="/apm /s /ro" /bw="LAN|server"
Based on the Hippocratic oath, new database design takes consumer privacy into account in the way it stores and retrieves information.
IBM researchers are working on a new database design that takes consumer privacy into account in the way it stores and retrieves information.
IBM Fellow Rakesh Agrawal this week is presenting the idea, called a Hippocratic database, at the Very Large Data Base 2002 conference in Hong Kong. The design is based on the Hippocratic oath that serves as the basis of doctor-patient relationships. The concept occurred to Agrawal while being challenged by his brother, who is a doctor, about the inability of technology like databases to take individuals' privacy concerns into account.
"More and more databases are keeping personal and private information, and we are sort of relying on databases for our day-to-day existence," said Agrawal, lead scientist on the project at the IBM Almaden Research Center, in San Jose, Calif. "If we dont treat it with respect, people are going to get hurt."
One tenet of the Hippocratic oath includes a statement on privacy that states, "... whatever I may see or hear ... in the life of human beings ... I will remain silent, holding such things to be unutterable." The Hippocratic database concept hinges on this principle.
Hippocratic databases would negotiate the privacy of information exchanged between a consumer or individual and companies. The database owner would have a policy built into the database about the storage and retrieval of personal information, and the data donor would be able to accept or deny it.
Each piece of data would have specifications of the database owner's policies attached to it. The policy would specify the purpose for which information is collected, who can receive it, the length of time the data can be retained and those who are authorized to access it.
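A rough sketch of the idea in Python, not IBM's implementation, just an illustration of policy metadata travelling with each piece of data and being checked on every access (the fields and values are invented):

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class PrivacyPolicy:
    purpose: str                       # why the data was collected
    retention_days: int                # how long it may be kept
    recipients: frozenset              # who may receive it
    authorized_roles: frozenset        # who may access it

@dataclass
class Datum:
    value: str
    collected_on: date
    policy: PrivacyPolicy

def read(datum: Datum, role: str, purpose: str, today: date) -> str:
    """Release the value only if the attached policy allows it."""
    expired = today > datum.collected_on + timedelta(days=datum.policy.retention_days)
    if expired or role not in datum.policy.authorized_roles or purpose != datum.policy.purpose:
        raise PermissionError("access denied by attached privacy policy")
    return datum.value

email = Datum("alice@example.org", date(2002, 8, 1),
              PrivacyPolicy("order-fulfilment", 90,
                            recipients=frozenset({"shipping-dept"}),
                            authorized_roles=frozenset({"clerk"})))
print(read(email, role="clerk", purpose="order-fulfilment", today=date(2002, 9, 1)))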
The increased ubiquity of the Internet and use of databases for data mining in marketing have led to the need for database systems that limit the type of data stored, how it is used and how long it is stored, researchers say. At the same time, regulations such as the Health Insurance Portability and Accountability Act of 1996 and the Gramm-Leach-Bliley Act of 1999, along with tough European Union privacy laws, are forcing companies to take privacy more seriously.
"Once companies start recognizing that this is going to be extremely important for the consumer and some companies start saying We respect your privacy, and we use databases that are Hippocratic, that might become a movement in itself and that might become a competitive advantage," Agrawal said. "At this stage, Im sort of saying that we need to create technology, and I think market forces and legal forces will take care of it."
Already, IBM researchers in their lab have prototyped the Hippocratic database concept to work with the Platform for Privacy Preferences (P3P) standard from the World Wide Web Consortium, which helps determine the information a Web site can collect. P3P allows a Web site to encode its data collection and use practices in XML in a way that can be compared to a user's preferences.
XSS Evasion: Hiding in Plain Sight
Is there a holistic solution for ensuring the secure, fast, and reliable delivery of applications?
By Lori MacVittie
Today's Web users continue to demand more from their online experience. Accessibility and speed drive programmers and keep users happy -- but at what price? The ever-increasing interactive capabilities of the Web's most progressive sites ultimately put a user's security in jeopardy. These new programming channels, coupled with Web 2.0, leave user security susceptible to new XSS (Cross Site Scripting) threats.
Threat prevention systems of yesteryear such as UTM (Unified Threat Management), Web application firewalls, intrusion prevention systems, and intrusion detection systems are now vulnerable to XSS invasions. The previous method of plugging holes with content filtering and signature-based databases no longer sufficiently combats attackers' evasion techniques. XSS attacks are now two-pronged, not only attacking a target system but also attacking the defensive systems protecting the targets. To counter them, a system must first recognize an evasion attempt, and then identify the injection technique. The latest attack attempts utilize HTML manipulation injection and encode tags in different code sets or base systems to slide attacks through the filtering process.
The Vagaries of Language
It is often said that the English language is one of the most complex and confusing languages in existence. Homophones, such as "there" and "their" (not to mention "they're") constantly trip up even native speakers, and other eccentricities can cause great confusion and consternation among speakers and writers alike.
Although systems and secure coding techniques exist that prevent traditional XSS attacks from successfully exploiting vulnerabilities across the application infrastructure, today's attackers have grown more sophisticated. Using the same attacks from yesterday, they have discovered how to embed and hide them from the filters and signature-based comparisons that have traditionally protected Web sites and applications.
Traditional filtering techniques capable of detecting XSS injection are generally based on regular expressions that expect specific formatting of HTML. This formatting includes white space, carriage returns, and tabs. An attacker can bypass these regular-expression filters simply by decreasing, increasing, or inserting extra white space. In this way, the attack -- though a well-known one -- does not match the expected pattern and the resulting malicious code is allowed to pass through and exploit the application.
White-space-based injection attacks work because the filters in threat-prevention systems do not cover all the possible cases, and HTML rendering engines ignore white space contained inside HTML tags.
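A contrived illustration in Python (the signature below is deliberately naive and is not taken from any real product): a pattern that expects canonical spacing misses the same payload once extra white space is inserted inside the tag, even though a browser renders both identically.

import re

naive_filter = re.compile(r"<script>", re.IGNORECASE)   # expects the canonical form of the tag

canonical = '<script>alert(1)</script>'
evasive = '<script\t\n >alert(1)</script\n>'            # extra tab, newline and space inside the tags

print(bool(naive_filter.search(canonical)))   # True  -> blocked
print(bool(naive_filter.search(evasive)))     # False -> slips through, yet still executes in a browser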
Similarly, regular-expression-based filters are also used to look for malicious code within well-formed HTML. Well-formed HTML must contain opening and closing tags where appropriate. Adding or dropping closing tags or even adding additional opening tags can confuse these filters and enable the attacker to successfully evade detection.
HTML manipulation injection attacks work because filters cannot anticipate the irregular placement of opening and closing tags on HTML elements, and rendering engines are forgiving and will often close tags on their own.
HTML manipulation is a growing threat as the number of social networking sites and communities continues to grow and users demand the ability to include formatting in their messages. By allowing users more latitude to include links and other HTML entities, it becomes even easier for attackers to embed attacks and affect potentially hundreds or thousands of users with one successful attack.
Making it even easier to bypass threat-prevention systems and filters is the fact that HTML can be encoded in multiple code sets. HTML tag names can easily be encoded in different code sets (such as hexadecimal, octal, and Base64) that are recognized and interpreted correctly by browsers but are not understood by traditional regular expressions and filters.
Use of different character encodings works because although the filter may not recognize the attack, the browser will correctly interpret the data during the rendering process.
URL String Evasion
Part of an XSS-based attack is the injection of malicious code, with the second step requiring that some data or script be loaded from an external site. Because of this requirement, many threat prevention systems recognize the presence of external domains and will prevent them from being accepted by the application. To evade the systems' ability to block such URLs, attackers employ several methods to hide the URL from being recognized as external.
Such methods include using an IP address instead of a domain name, using URL encoding to hide the domain name, and encoding the URL numerically as a DWORD, hexadecimal, or octal string. Some attackers will mix and match these methods, causing further confusion to threat prevention systems.
URL string evasion works because filters expect a string-based domain name, not a numeric one, and because modern Web browsers are capable of understanding the encoded version of the domain name or IP address.
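The arithmetic behind the DWORD form is simple, and a normalizer can undo it just as easily; the sketch below uses an address from the documentation range (192.0.2.1) purely as an illustration.

def ip_to_dword(ip: str) -> int:
    """'192.0.2.1' -> 3221225985; many browsers accept http://3221225985/ as that host."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def dword_to_ip(n: int) -> str:
    """Canonicalize a numeric host back to dotted-quad form before filtering."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ip_to_dword("192.0.2.1"))       # 3221225985
print(dword_to_ip(3221225985))        # 192.0.2.1
print(hex(ip_to_dword("192.0.2.1")))  # 0xc0000201 - the hexadecimal variant of the same host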
In much the same way that cable television provides parents with the means to block channels that deliver, by their assessment, unacceptable content, some of today's Web application firewalls have evolved to provide the same control over user-initiated requests. This control provides the means by which XSS-based attacks -- even those hiding in plain sight -- can be stopped before they reach back-end applications and databases, and wreak havoc on application infrastructure.
XSS injection attacks themselves have not changed, only the manner in which they are delivered has been modified. By detecting these attempts to hide in plain sight -- within the same HTML code and HTTP requests -- Web application firewalls can continue to prevent such attacks from successfully being injected in applications and subsequently affecting either mission-critical data or, in the case of Web 2.0 communities and social networks, potentially thousands of users.
What is required to detect these evasion attempts is normalization before evaluation. When a request is received, it needs to pass through a process that removes extraneous comments and white space, and further decodes the request into a common, understood code set. This allows proven methods of detecting and preventing XSS injection attacks to be applied and thus stops the evasion from accomplishing its task of injecting malicious code into an application.
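A skeletal example of normalize-before-evaluate in Python (again, an illustration rather than any vendor's engine): decode repeatedly until the input stops changing, collapse white space, and only then apply the signature check.

import html
import re
from urllib.parse import unquote

def normalize(payload: str) -> str:
    """Repeatedly URL-decode and HTML-entity-decode until the string is stable."""
    previous = None
    while payload != previous:
        previous = payload
        payload = html.unescape(unquote(payload))
    # Collapse the white space that defeats spacing-sensitive signatures.
    return re.sub(r"\s+", " ", payload).lower()

signature = re.compile(r"<\s*script")

attacks = ["%3Cscript%3Ealert(1)%3C/script%3E",      # URL-encoded
           "&#x3C;script&#x3E;alert(1)",             # HTML-entity-encoded
           "<script\t\n >alert(1)</script>"]         # white-space padded
print([bool(signature.search(normalize(a))) for a in attacks])   # [True, True, True]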
Threat prevention systems based purely on signature or keyword matching cannot properly deal with evasion attacks. While these techniques are a good basis for preventing known threats from reaching applications, such a static method of threat detection cannot continue to be successful against the evolving dynamic nature of Web application attacks, in particular XSS injection.
Advanced technology is required to detect the evasions used today to penetrate existing threat prevention solutions. These solutions, such as IPS or stand-alone Web application firewalls, provide protection primarily at the Web application layer and cannot address the broader issue of application delivery security. A modern Web application firewall -- when coupled with the network and application transport layer security of an application delivery platform and integrated into an application delivery network -- offers a holistic solution for ensuring the secure, fast, and reliable delivery of applications.
- - -
Lori MacVittie is a technical marketing manager for application services at F5 Networks. You can reach the author at email@example.com | <urn:uuid:a338e975-5c44-4ded-bbf4-850f2c6f9bfd> | CC-MAIN-2017-04 | https://esj.com/articles/2008/07/29/xss-evasion-hiding-in-plain-sight.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00108-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918482 | 1,446 | 2.609375 | 3 |
With all their benefits, fiber optic cables are widely used in data centers around the world nowadays. Fiber optic cables have become an indispensable component of almost every data center, playing an important role in transmitting data at high speed every moment. To avoid unnecessary loss and to stay safe, whether you are a novice or a veteran, it is important to understand the basics before you begin fiber cabling in the data center.
Basic Knowledge Of Fiber
As we know, in fiber optic cables data is transmitted through pulses of light, and this digital signal is carried over a medium made of high-quality glass. Given the diversity of fiber cables, the most basic thing to understand when considering fiber optics for your data center is the difference between single-mode and multi-mode fiber.
Single-mode fiber is a single strand of glass fiber with a core diameter between 8 and 10.5 µm and a cladding diameter of 125 µm that has one mode of transmission. Because of its relatively narrow core, only the lowest-order bound mode can propagate at the wavelengths of interest, typically 1310 to 1550 nm. Single-mode fiber usually carries higher bandwidth, giving a higher transmission rate and up to 50 times more distance than multi-mode fiber, but it requires a light source with a narrow spectral width and costs more. The 9/125 designation we usually see in its construction means that the core-to-cladding diameter ratio is 9 microns to 125 microns.
Multi-mode fiber is made of glass fibers with common core diameters in the 50 to 100 µm range for the light-carrying component (the most common size is 62.5 µm). Due to its large core diameter, multi-mode fiber allows multiple modes of light to propagate: light waves are dispersed into numerous paths, or modes, as they travel through the cable's core, typically at 850 or 1300 nm. Using multi-mode fiber you can get high bandwidth at high speeds (10 Mb/s to 100 Mb/s, and Gigabit over roughly 275 m to 2 km) over medium distances. However, there is a limitation: because of the higher dispersion and attenuation rate of this type of fiber, the quality of the signal is reduced over long distances. Multimode fiber is usually 50/125 or 62.5/125 in construction, which means that the core-to-cladding diameter ratio is 50 microns to 125 microns or 62.5 microns to 125 microns.
With single-mode and multi-mode fiber understood, you may have a question:
When considering fiber optics for your data center, which type is better?
To this question, my answer is that the best system is the one that works well for you. The choice is based on the transmission distance to be covered as well as the overall budget allowed. Multimode fiber will allow transmission distances of up to about 10 miles and will allow the use of relatively inexpensive fiber optic transmitters and receivers. If the distance to be covered is more than 10 miles, single-mode fiber is the choice; however, transmission systems designed for use with this fiber, such as those based on laser diodes, will typically cost more.
Note: It is important never to mix fiber optic core types. For example, never plug a single-mode 9/125 cable into a multi-mode 50/125 cable, or a multi-mode 50/125 cable into a multi-mode 62.5/125 cable. When cores are not matched properly, data transmissions will be lost. Other cable information that depends on the application, such as fiber patch cords and horizontal and backbone cables, can be studied further if you are interested; it is not covered item by item here.
Best Practices For Better Fiber Cabling
We covered the basics of fiber optic cables in the first section; in this part we give some practical tips for fiber cabling. Although fiber cabling has distinct benefits over copper in regard to transmission, attenuation and electromagnetic interference (EMI), improper cabling practices can still degrade data transmission. Therefore, we must maintain best practices when doing fiber cabling.
Inspection & Cleaning
You may have heard the saying that best practices for fiber optic installation start with inspection and cleaning. Both are growing in importance as links with ever-higher data rates drive ever-smaller loss budgets. With less tolerance for overall light loss, the attenuation through adapters must get lower and lower. This is achieved by proper inspection and by cleaning whenever necessary. Two types of problem cause loss as light leaves one end-face and enters another inside an adapter: contamination and damage. In general, inspection is done with a fiber microscope (optical or video); it is easy to use, but be sure to follow the instructions. In addition, beware of bad habits when cleaning. Cleaning has been part of fiber maintenance for years, so most people have their own approaches for cleaning end-faces, and many bad habits have developed in the industry over time. Whatever approach is selected, certain truisms apply to fiber optic end-face inspection and cleaning: strictly follow the defined working process and principles, and consistent inspection and cleaning up front will avoid unexpected and costly downtime in the future.
Fiber Bend Radius
There is a reduction in the strength of a signal when a cable is bent, and the same holds for fiber optic cables. The bend radius, or measurement of a curve, can determine how strong the data signal remains. With fiber cabling there are different specifications for bend radius, varying by cable manufacturer and fiber type. Many improvements have been made in this area, including the development of "bend insensitive fiber". A good rule of thumb during installation is that the minimum bend radius equals 10-15 times the outer diameter of the cable jacket. You can also consult the manufacturer for recommendations when choosing cables.
In many cases fiber cabling must run over long distances. Although fiber optic cable assemblies are robust and sturdy, care is needed when pulling them. It is important to remember never to pull on the connectors, because they are typically the weakest point under pulling strain. In addition, we suggest using a pulling sock to reduce strain on critical areas; these are readily available from the related suppliers.
When doing cable management you will face the question of how to determine the correct tie-down points. A tie-down point, as used here, is the area where cable is affixed to patch panels, racks or cabinets to ensure it does not move around. To tie down fiber cable properly, never use zip ties on fiber optic cable unless there are specifically designated areas on the cable for doing so. The common and proper way is to use Velcro, and never over-tighten, as this can crack the fiber and cause failure.
Labeling Is Necessary
Cable labeling makes jobs easier and safer. Labels are designed to help users reduce trouble, improve safety and save time and money. Every cable in your cable room should be clearly labeled so you can trace any fault. Your business can't afford to waste time testing cables at random to find a fault when the system is down. Proper data center cabling by an experienced network cabling service would include this as a routine matter. Of course, you should follow the rules of labeling rather than doing it casually. It seems simple, but it has its own skill requirements and significance in the data center.
Fiber cabling in a data center is not as simple as the outline above; beyond these points there is much more knowledge and many more operating skills to pick up on the job. The points described above are the most basic and are for reference only.
As data rates keep increasing, fiber optic cable will become ever more necessary in many data center applications. A thorough understanding of the basics will help you work better and with higher efficiency. I hope you enjoy this article and take some help or inspiration from it. In addition, if you have any requirement for related products, such as fiber cables, fiber patch cords, or fiber patch panels, you can visit our website or contact us directly at firstname.lastname@example.org.
Computer memories were once far more exotic than today's neat, tiny rows of chips.
There was, for example, the principal internal storage system for the Univac 1, which consisted of tubes of mercury 22.75 inches long into which sound was pumped, modulated to correspond to a sequence of bits.
Univac I mercury delay line memory courtesy Ron Mak
The mercury acted as a delay line long enough to hold 10 words, each consisting of 12 seven-bit characters (6 bits plus odd parity). This system actually worked despite its complexity, but it not only had limited storage and was expensive, it was also rather big ... in fact, it was room-sized.
Physical overview of Univac 1 (note mercury memory tanks INSIDE the computer. The UNIVAC 1 computer was actually a little room.)
As a list-friend pointed out the other day, a key metric of this system was how many bits per gallon could be stored.
Then there was Bubble memory. As explained on The Vintage Technology Association's web site:
Magnetic bubble memory is a non-volatile data storage medium invented at Bell Labs in 1967. Bubble memory uses a thin magnetic film on a garnet substrate, which forms cylindrical domains when constricted under a magnetic field. These domains, or bubbles, each store one bit of data. The bubbles are created by a generator signal, pushed around the film in racetrack-like loops, and eventually detected by a sense amplifier. Unlike semiconductor memories, bubble memory is sequential access, rather than random access. Conceptually, it is like a tiny magnetic diskette and drive, but with no moving parts. Instead of the disk moving, the bits move.
The Intel 7110, a high density 1-megabit bubble memory device.
What other medium could you use for delay line storage? Another list-friend, Joseph S. Barrera III, figured a fiber optic loop from Los Angeles to New York and back would be about 8,000 km long. At 10Gb/s a bit would be 0.03 meters long, so the delay line could store ((8,000,000 / 0.03) / 8,388,608) = 31.79 MB (although if you throw in a parity bit your storage will be only 28.26 MB).
But as storage solutions go this wouldn't be cheap; fiber optic cable costs about $0.66 per meter so the actual loop (ignoring any other costs involved in laying the loop) would cost about $5.3 million or $166,723.58 per MB. Kind of pricey and the latency would be brutal.
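The same back-of-the-envelope arithmetic takes only a few lines of Python. Like the figures above, it assumes the pulses travel at the vacuum speed of light; in real fiber they travel roughly a third slower, so the bits are shorter and the loop would actually hold somewhat more:

C = 299_792_458             # speed of light in a vacuum, m/s
MIB_BITS = 8 * 1024 * 1024  # the 8,388,608 bits per MB used above

def delay_line_capacity_mb(path_m: float, bit_rate_bps: float, parity: bool = False) -> float:
    """How many megabytes are 'in flight' over a path of the given length."""
    bits_in_flight = path_m / (C / bit_rate_bps)   # path length divided by the length of one bit
    if parity:
        bits_in_flight *= 8 / 9                    # one parity bit per byte
    return bits_in_flight / MIB_BITS

print(delay_line_capacity_mb(8_000_000, 10e9))                # ~31.8 MB for the LA-NY-LA loop
print(delay_line_capacity_mb(8_000_000, 10e9, parity=True))   # ~28.3 MB with parity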
Ah ... but you don't have a spare $5.3 million? For (perhaps) less money you could bounce laser pulses off the moon. No fiber optic cable needed and at 10Gb/s over 384,400 km you could store 1.35774965639 GB but the lasers, detectors, and gear to track the moon would get expensive and then there's the problem that the moon sets every night so you'd need multiple ground stations and some heavy engineering ... you'd still be spending a serious amount of cash.
What about using the Internet as a delay line? Yep, someone has done just that ... it's called PingFS, the Ping Filesystem, and it's described as:
... a set of python scripts which, in Linux, provide virtual disk storage on the network. Each file is broken up into 64-1024 byte blocks, [which are] sent over the wire in an ICMP echo request, and promptly erased from memory. Each time the server sends back your bundle of joy, PingFS recognizes its data and sends another gift [the data].
In other words, once uploaded, the file's contents will be constantly circulating the Internet in chunks, and once a file is put into the system there's never more than 1,024 bytes of it on the originating machine (and then only for as long as it takes to resend). Brilliant but insane.
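To make the mechanism concrete, here is a minimal sketch of the primitive PingFS relies on: put a block of data in an ICMP echo request and read it back out of the echo reply. This is not the actual PingFS code, just the core trick, written with the scapy packet library; the target address is a placeholder and the script needs raw-socket (root) privileges.

```python
# Core "delay line" primitive behind PingFS: data lives in flight as ICMP echo payloads.
from scapy.all import IP, ICMP, Raw, sr1

def bounce(block: bytes, host: str = "192.0.2.1") -> bytes:
    """Send `block` as an ICMP echo payload and return whatever the host echoes back."""
    reply = sr1(IP(dst=host) / ICMP() / Raw(load=block), timeout=2, verbose=False)
    if reply is None or Raw not in reply:
        raise IOError("no echo reply; the 'stored' block is lost")
    return bytes(reply[Raw].load)

# A real PingFS re-sends each block the moment its reply arrives, so the data
# spends almost all of its life on the wire rather than on the local machine.
block = b"hello, in-flight storage"
echoed = bounce(block, host="192.0.2.1")  # placeholder address; point at a host that answers pings
assert echoed == block
```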
So, there you have it; the old (mercury delay lines and magnetic bubble memories) are reborn anew as ICMP ECHO requests.
Fidel M. (University of Alaska Anchorage), Kliskey A. (University of Alaska Anchorage; University of Idaho), Alessa L. (University of Alaska Anchorage), and 2 more authors.
Polar Geography | Year: 2014
The Bering Sea Sub-Network, a Community-Based Observation Network, was initiated to improve knowledge of environmental changes occurring in the Bering Sea and to enable scientists, Arctic communities and governments to predict, plan and respond. Climate change can affect the health of the social-ecological system of Indigenous communities through negative effects to travel and changes to biological resources used for subsistence. Harvesters are perceptive of, and often have multigenerational knowledge about, the environmental conditions which subsistence activities are dependent upon. Community monitoring can detect local level environmental changes, and provide society with examples of adaptation strategies. Semi-structured interviews, with a participatory mapping component, were used to collect data on marine subsistence activity in Indigenous communities bordering the Bering Sea, in the Russian Federation and Alaska, USA. Spatial data allow exploration of human responses to change over time. In the Yup'ik village of Togiak, Alaska a shift has occurred in recent years in where residents harvest walrus, while seasonal regulations remain static. This may cause residents to travel farther in more dangerous conditions. The co-management system in place could be an effective forum to deal with change as it was structured to incorporate local input in adaptive management. © 2014 © 2014 Taylor & Francis. Source
Alessa L. (University of Idaho; University of Alaska Fairbanks), Kliskey A. (University of Idaho), Gamble J. (Aleut International Association), and 3 more authors.
Sustainability Science | Year: 2015
Community-based observing networks (CBONs) use a set of human observers connected via a network to provide comprehensive data, through observations of a range of environmental variables. Invariably, these observers are Indigenous peoples whose intimacy with the land- and waterscape is high. Certain observers can recall events precisely, describe changes accurately, and place them in an appropriate social context. Each observer is akin to a sensor and, linked together, they form a robust and adaptive sensor array that constitutes the CBON. CBONs are able to monitor environmental changes as a consequence of changing ecological conditions (e.g., weather, sea state, sea ice, flora, and fauna) as well as anthropogenic activities (e.g., ship traffic, human behaviors, and infrastructure). Just like an instrumented array, CBONs can be tested and calibrated. However, unlike fixed instruments, they consist of intelligent actors who are much more capable of parsing information to better detect patterns (i.e., local knowledge for global understanding). CBONs rely on the inclusion of Indigenous science and local and traditional knowledge, and we advocate for their inclusion in observing networks globally. In this paper, we discuss the role of CBONs in monitoring environmental change in general, and their utility in developing a better understanding of coupled social-ecological systems and developing decision support both for local communities as well as regional management entities through adaptive capacity indices and risk assessment such as a community-based early warning system. The paper concludes that CBONs, through the practice of Indigenous science in partnership with academic/government scientists for the purpose of knowledge co-production, have the potential to greatly improve the way we monitor environmental change for the purpose of successful response and adaptation. © 2015 Springer Japan Source
Huntington H.P. (The Clearing), Ortiz I. (University of Washington), Noongwook G. (Savoonga Whaling Captains Association), Fidel M. (Aleut International Association), and 6 more authors.
Deep-Sea Research Part II: Topical Studies in Oceanography | Year: 2013
Alaska Native coastal communities interact with the marine environment in many ways, especially through the harvest of fish, marine mammals, and seabirds. The spatial characteristics of this interaction are often depicted in terms of subsistence use areas: the places where harvests and associated travel occur. Another way to consider the interaction is to examine the areas where harvested species range during their lifecycle or annual migratory path. In this paper, we compare seasonal subsistence use areas, lifetime subsistence use areas, and "calorie-sheds," or the area over which harvested species range. Each perspective offers useful information concerning not only the nature of human-environment interactions but also the scope for potential conflict with other human activity and the means by which such conflicts could be reduced, avoided, or otherwise addressed. Seasonal subsistence use areas can be used to manage short-term activities, such as seasonal vessel traffic during community re-supply. Lifetime subsistence use areas indicate the area required to allow hunters and fishers the flexibility to adjust to interannual variability and perhaps to adapt to a changing environment. Calorie-sheds indicate the areas about which a community may be concerned due to potential impacts on the species they harvest. © 2013 Elsevier Ltd. Source
It has been observed that many data center construction projects omit ambient air cooling from their efficiency strategy. Data centers collectively, however, consume a significant percentage of all energy produced in the U.S. Data center economics are changing. Research suggests that ambient air cooling is a compelling data center cooling strategy because it reduces electricity usage significantly, is a major cost savings, is proven safe and is environmentally responsible.
Ambient air cooling reduces electricity usage significantly because air conditioning units can be turned off for periods of time and because lower static pressure air delivery methods can be used; ejecting hot air before it reaches the air supply lowers the amount of cooling energy needed even when air conditioning units are required.
One common misconception with ambient air cooling is that this approach precludes the use of normal chiller units. In fact, ambient air cooling, a common economizer method, can be configured to step in automatically whenever outside air conditions meet computer room operating parameters, such as during cooler months and at night. Economizing with outside air can even be used at the same time as a normal chiller. Air side economizers are used to completely replace or, more commonly, augment normal chillers. “While temperate environments realize the quickest ROIs, nearly all geographies in the U.S. can attain some level of free cooling through air side economizers” (42U, 2012). Every hour that normal chillers are turned off, energy savings accumulate.
Electricity is required for fans to move air through a data center. “As with any fluid dynamic, the more obstructions and constraints on the air flow the higher the static air pressure and force required to overcome this friction” (Tooley, 2010). One of the objectives in using ambient air is to reduce electricity consumption. This objective can be optimized through a well-thought-out design. Today’s high-density data centers are known to require raised floors many feet tall in order to minimize air flow restrictions. Ventilated floor tiles are notorious for restricting air flow and providing the wrong amount of air to a given computer location. Raised floor tiles for the purpose of distributing air are an unneeded expense, and they create inherent cooling problems. Worst of all, air distribution through raised floor tiles creates static air pressure. “Air flowing through perforated floor tiles must be restricted to a much greater degree than the plenum to cool equipment uniformly” (VanGilder & Schmidt, 2005). The more static air pressure, the more fan energy will be required to move the same amount of air. “Data centers can be more efficient if designed on a slab floor now that many of the benefits of a raised floor are gone” (Sty, 2012). Lower energy use, a key objective of ambient air use, can be achieved by avoiding air distribution through high-static-pressure means like a raised floor and other static-pressure-generating duct work.
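As a rough illustration of why static pressure matters, the sketch below applies the standard air-power relation (electrical power is roughly airflow times pressure rise divided by combined fan and motor efficiency) to the same airflow delivered against a low and a high static pressure. The airflow, pressure, and efficiency values are illustrative assumptions, not measurements from any particular facility.

```python
# Illustrative only: extra fan energy needed when the delivery path adds static pressure.
def fan_power_kw(airflow_m3_s: float, static_pressure_pa: float,
                 combined_efficiency: float = 0.6) -> float:
    """Electrical power (kW) to move `airflow_m3_s` against `static_pressure_pa`."""
    return airflow_m3_s * static_pressure_pa / combined_efficiency / 1000.0

airflow = 50.0  # m^3/s of supply air for a modest data hall (assumed)

low_dp = fan_power_kw(airflow, 250)   # low-restriction delivery path (assumed 250 Pa)
high_dp = fan_power_kw(airflow, 750)  # same airflow forced through tiles/ducts (assumed 750 Pa)

print(f"low static pressure:  {low_dp:.1f} kW")
print(f"high static pressure: {high_dp:.1f} kW")
print(f"extra fan power:      {high_dp - low_dp:.1f} kW, "
      f"or {8760 * (high_dp - low_dp):,.0f} kWh per year")
```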
A cooling strategy used in many homes today is a whole house exhaust fan. A key reason for its use is that it requires less energy to exchange the hottest inside air with cooler outside air than it does to just cool the inside hot air. The hot air created in data centers greatly exceeds that found in homes. The cost savings advantage of exhausting hot air and replacing it with cooler ambient air is likewise greater in a data center.
The size of this opportunity is more significant than many IT leaders may realize. “Another impact of higher energy densities is that server hardware is no longer the primary cost component of a data center…The purchase price of a new (1U) server has been exceeded by the capital cost of power and cooling infrastructure to support that server and will soon be exceeded by the lifetime energy costs alone for that server. This represents a significant shift in data center economics that threatens to overwhelm the advances in chip efficiency that have driven the growth of digital information during the past 30 years” (U.S. Environmental Protection Agency ENERGY STAR Program, 2007). Electricity costs have now exceeded the cost of data center equipment. IT managers put significant emphasis on price negotiations with server hardware manufacturers. This economic shift suggests that a greater amount of attention should be put on managing data center power consumption than the cost of equipment acquisition.
In 2007 the EPA produced for Congress a report that found “by 2011 DC power consumption was estimated to double from the 2007 1.5% level. Under current efficiency trends, national energy consumption by servers and data centers could nearly double again in another five years (i.e., by 2011) to more than 100 billion kWh” (U.S. Environmental Protection Agency Energy Star Program, 2007). Data center power consumption may have reached three percent of all energy generated in 2011 in the U.S. The magnitude of the data center macro opportunity cannot be ignored. At 10 cents per kWh, 100 billion kWh would equate to 10 billion dollars in data center energy consumption in 2011. Data center power efficiency is a responsibility IT managers cannot ignore.
The efficient use of power consumption can be improved just as dramatically as its growth. NREL is the only national laboratory dedicated to the advancement of research, development, commercialization and deployment of renewable energy and energy efficiency technologies. Its legacy data center had an estimated power usage effectiveness (PUE) of over 3.0. Its new facility is designed to operate at 1.08. “The commercially available technologies employed at NREL reduced energy consumption by an average of 270%” (Sty, 2012). It is important to note that ambient air cooling was the most significant efficiency-gaining technology used by NREL. The case study at NREL shows the viability and scale of efficiency improvement that can be achieved in a data center through the use of ambient air cooling.
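For readers who want to see what those PUE figures imply in absolute terms, the short sketch below converts them into non-IT overhead for a hypothetical 1 MW IT load. The load is an assumption chosen only for illustration.

```python
# PUE = total facility energy / IT equipment energy.
def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT (cooling, power distribution, lighting) load implied by a given PUE."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 1000.0  # 1 MW of IT equipment, purely illustrative

legacy = overhead_kw(it_load_kw, 3.0)   # NREL's legacy estimate
new = overhead_kw(it_load_kw, 1.08)     # NREL's new facility design

print(f"overhead at PUE 3.0:  {legacy:,.0f} kW")   # 2,000 kW of non-IT load
print(f"overhead at PUE 1.08: {new:,.0f} kW")      # 80 kW of non-IT load
print(f"annual savings:       {(legacy - new) * 8760:,.0f} kWh")
```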
The size of the data center efficiency problem and opportunity with ambient air cooling is great. Traditionally, IT leaders have spent meaningful amounts of time negotiating lower IT equipment costs, but data center efficiency is now an even greater cost reduction opportunity. Industry leaders advocate that IT leaders extend their computer management practices into the data center facilities. Forrester cites this as the driver for the growing penetration of data center infrastructure management (DCIM) products (Forrester Research 2012). Realizing these savings within the data center can represent the greatest cost reduction opportunity for IT infrastructure and operations leaders today.
Ambient air cooling is safe because modern mechanical designs do not let water through building openings, use filters to clean outside air before it enters the computer room, can be paired with outdoor sensors that put mechanical systems into recirculation mode in the event of dust storms or similar poor conditions, and can condition the air using humidification.
Openings in buildings to accommodate fiber, fluid cooling pipes, fresh air, electricity and other infrastructure services are commonplace. Expanding the use of fresh air with the intent of reducing energy consumption for data centers is a safe and obvious evolution for the industry. The U.S. Department of Energy recommends using economizers, including outside air cooling. “HVAC system efficiency can be improved by adding equipment that can convert delivered gas or electric power efficiently or by using economizers, which allow the automatic use of outside air or allow users to regulate space conditions” (US Department of Energy, 2010). Building openings to allow for fresh air is a common practice and even recommended when done properly.
Using filters to clean outside air before entering building occupied spaces is common practice. The air conditioning industry has established specific standards to determine when and how much air cleaning is needed. “If the outdoor air contaminant levels exceed the values given in 6.1.1 (Table 1), the air should be treated to control the offending contaminants. Air-cleaning systems suitable for the particle size encountered should be used. For removal of gases and vapors, appropriate air-cleaning systems should be used. Where the best available, demonstrated, and proven technology does not allow for the removal of contaminants, the amount of outdoor air may be reduced during periods of high contaminant levels, such as those generated by rush-hour traffic. The need to control offending contaminants may depend on local regulations that require specific control measures” (ANSI/ASHRAE Standard 62-2001, 2001). This common practice can also be understood better by speaking with data center managers who have already implemented ambient air cooling. Facebook, NetApp, NREL, Oracle, The University of Utah Hospital and Yahoo, to name a few, are all examples of ambient air data centers (Mee Industries, 2011; Miller 2010, Sty, 2012). Familiarization with the common practice of treating outside air can help IT managers more widely recognize this technology as data center safe.
The latest ASHRAE air standards for data centers allow greater temperature and humidity variation. Equipment manufacturers develop their equipment to meet or exceed these data center standards. “Part of the rationale in choosing the new low and high temperature limits was based on the generally accepted practice for the telecommunication industry’s central office, based on NEBS GR-63-CORE, which uses the same dry bulb temperature limits as specified here. Most IT manufacturers start to increase air moving device speed around 25°C (77°F) to improve the cooling of the components and thereby offset the increased ambient air temperature. The concern that increasing the IT inlet air temperatures might have a significant effect on reliability is not well founded” (American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., 2008). This wider band of operating conditions qualifies more hours a year for ambient air cooling. As noted, IT equipment manufacturers have developed their equipment to meet or exceed these data center ASHRAE air standards. And humidification levels can be easily raised when needed using readily available humidification systems like those used at Facebook (Mee Industries, 2011). Particulate sensors can and should be used to determine if the outside air contains an excessive amount of unwanted matter that could result in shorter filter change cycles.
Ambient air cooling is environmentally responsible because less electricity used equates to lower carbon dioxide emissions, it can equate to less water consumed and it can result in smaller air conditioner designs.
The amount of carbon dioxide emitted in electricity generation can be calculated locally and nationally. The most accurate calculation method is performed on the basis of the area where the energy is consumed. But national averages can also be used to approximate the environmental impact of energy reductions. “Most users of the Equivalencies Calculator who seek equivalencies for electricity-related emissions want to know equivalencies for emissions reductions from energy efficiency or renewable energy programs. These programs are not generally assumed to affect baseload emissions (the emissions from power plants that run all the time), but rather non-baseload generation (power plants that are brought online as necessary to meet demand). Emission Factor: 6.8956 x 10^-4 metric tons CO2 / kWh” (US Environmental Protection Agency, 2012). Using the EPA’s calculation, it can be determined that U.S. data centers were responsible for putting roughly 69 million metric tons (68,956,000 metric tons) of carbon dioxide into the atmosphere in 2011. Data center managers must have a response to the environmental impact and reputational risk of carbon dioxide emissions.
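The equivalency arithmetic is simple enough to reproduce directly; the sketch below applies the EPA non-baseload factor quoted above to the 100 billion kWh estimate, and repeats the earlier $0.10-per-kWh cost calculation for comparison.

```python
# Reproducing the equivalency arithmetic with the EPA non-baseload factor quoted above.
NATIONAL_DC_KWH = 100e9    # ~100 billion kWh, the 2011 estimate cited earlier
EPA_FACTOR = 6.8956e-4     # metric tons CO2 per kWh (non-baseload generation)
PRICE_PER_KWH = 0.10       # USD, the rate assumed earlier in the article

co2_tons = NATIONAL_DC_KWH * EPA_FACTOR
energy_cost = NATIONAL_DC_KWH * PRICE_PER_KWH

print(f"CO2:  {co2_tons:,.0f} metric tons")          # ~68,956,000 metric tons
print(f"cost: ${energy_cost / 1e9:,.1f} billion")     # ~$10 billion
```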
Fresh water is consumed by power companies in the generation of electricity. Fresh water is often depleted again by consumers to cool their data center through the use of onsite cooling towers. Fresh water is a limited resource. In many parts of the world, there is not enough fresh water to meet current demands; this is especially true in the western United States. “Because the growth of fresh water supplies is limited, growth in electricity demand can be met only by developing technologies that reduce the volume of fresh water required per kilowatt-hour of power generated” (Wolfe, 2008). Less electricity consumed equates to less fresh water consumed. Also, if a data center uses evaporative cooling, fresh water consumption can be dramatically reduced by using ambient air cooling alternatives.
When ambient air cooling is used, chillers might only be needed as trim cooling. Trim cooling can be accomplished with smaller units and less energy. This would be true in a scenario where the outside temperature is 78°F and the desired computer equipment inlet temperature is 75°F. In this scenario the ambient air only needs to be cooled by three degrees, which requires a lot less energy than would be needed to cool the 100-degree computer equipment exhaust air. At a minimum, trim cooling of ambient air may consume a lot less energy than cooling equipment exhaust air using a cooling plant only. Implementing a trim cooling strategy could even result in requiring smaller cooling mechanical equipment.
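A rough sensible-cooling comparison makes the trim-cooling point concrete. The sketch below uses the sensible-heat relation Q = mass flow x specific heat x temperature difference; the airflow is an assumption, and only the temperature differences come from the scenario described above.

```python
# Sensible-cooling comparison behind the "trim cooling" argument above.
AIR_DENSITY = 1.2   # kg/m^3
CP_AIR = 1.006      # kJ/(kg*K)

def sensible_cooling_kw(airflow_m3_s: float, delta_t_c: float) -> float:
    """Cooling load (kW) to drop an airstream by `delta_t_c` degrees Celsius."""
    return AIR_DENSITY * airflow_m3_s * CP_AIR * delta_t_c

airflow = 50.0                                    # m^3/s, assumed
trim = sensible_cooling_kw(airflow, 3 / 1.8)      # 78F outside air -> 75F inlet (3F)
full = sensible_cooling_kw(airflow, 25 / 1.8)     # 100F exhaust air -> 75F inlet (25F)

print(f"trim-cool outside air: {trim:,.0f} kW of cooling")
print(f"chill exhaust air:     {full:,.0f} kW of cooling")
print(f"ratio:                 {full / trim:.1f}x")
```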
Research suggests that ambient air cooling is a compelling data center cooling strategy because it reduces electricity usage significantly, is proven safe and is environmentally responsible. Industry standards bodies, governing U.S. agencies and industry analyst firms recommend the use of ambient air cooling for data centers. Case studies have been published where the overwhelming benefits of ambient air cooling have been proven. The objectives of ambient air cooling are maximized when combined with a low-static-pressure strategy. Leaders in the data center space are implementing ambient air cooling strategies now; every data center in the US should evaluate the use of ambient air cooling as part of their cooling strategy. Ambient air cooling has become a data center strategy imperative. As data centers grow in size, electricity cost increase, fresh water supplies shrink and CO2 emissions continue to damage the environment, this solution will only become more necessary.
About the Author
Brent Elieson is Director of Infrastructure Operations at the University of Utah.
Photo courtesy of Tom Raftery
42U (2012). Free cooling with Data Center Economizer Solutions. Retrieved from: http://www.42u.com/cooling/economizers/economizers.htm
American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (2008). 2008 ASHRAE Environmental Guidelines for Datacom Equipment: -Expanding the Recommended Environmental Envelope-. Retrieved from http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf
ANSI/ASHRAE Standard 62-2001, Ventilation for Acceptable Indoor Air Quality, (2001). ISSN 1041-2336. Retrieved from http://www.grntch.com/images/ASHRAE_Standard62-01_04_.pdf
Forrester Research (2012). Server and Data Center Predictions For 2012. Retrieved from: http://www.forrester.com/home#/Server+And+Data+Center+Predictions+For+2012/quickscan/-/E-RES61442
Mee Industries, (2011). Data Center Cooling: Technologies for Improving Power Usage Efficiency Ratings. Retrieved from http://www.meefog.com/downloads/case-studies/cs-dc-data-center-cooling-oregon.pdf
Miller, R., (2010) Yahoo Computing Coop: Shape of Things to Come?. Data Center Knowledge. Retrieved from http://www.datacenterknowledge.com/archives/2010/04/26/yahoo-computing-coop-the-shape-of-things-to-come/
Sty, R., (2012), Personal Communication, Smith Group JJR . http://www.smithgroupjjr.com/
Tooley, M., (2010). Plant and Process Engineering 360, Burlington, MA. Butterworth-Heinemann (p.451). ISBN 13:978-1-85617-840-2
US Department of Energy (2010). Energy Efficiency and Renewable Energy, Building Energy Codes 101, PNNL-SA-70586. Retrieved from http://www.ashrae.org/File%20Library/docLib/Public/20100301_std901_codes_101.pdf
U.S. Environmental Protection Agency ENERGY STAR Program (2007). Report to Congress on Server and Data Center Energy Efficiency Public Law 109-431. Retrieved from http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf?f816-9a78
US Environmental Protection Agency, Green Power Equivalency Calculator Methodologies (2012). Retrieved from http://www.epa.gov/greenpower/pubs/calcmeth.htm
VanGilder, J.W. & Schmidt, R. R., (2005), Airflow Uniformity through Perforated Tiles in a Raised-Floor Data Center. Retrieved from http://www.apcmedia.com/salestools/KDUP-636LVV_R1_EN.pdf
Wolfe, J.R. (2008). Costlier, scarcer supplies dictate making thermal plants less thirsty, Power Magazine. Retrieved from http://www.powermag.com/issues/features/105.html
Want to go green? Leverage cloud computing. Or so says Microsoft in a new piece of research through which the software giant reported that moving apps to the cloud can dramatically cut carbon emissions.
According to a recent study commissioned by Microsoft and conducted by solution provider Accenture and WSP Environment and Energy, businesses that run applications in the cloud can reduce energy consumption and carbon emissions by about 30 percent or more compared to running those same applications on their own on-premise infrastructure.
Using their own data centers as an example, Microsoft said large data centers benefit from economies of scale and operational efficiencies beyond what enterprise IT departments can achieve. And when it comes to small businesses moving to the cloud, the research revealed that net energy and carbon savings can sometimes hit more than 90 percent.
Rob Bernard, Microsoft's chief environmental strategist, said the increased productivity, reduced costs and lower management overhead of cloud computing and cloud products, now coupled with the environmental benefits, illustrate the true value of the cloud.
"The cloud has the ability to deliver business value for customers in an age where corporate responsibility is critical to business success," Bernard said in a statement.
The study focused on three Microsoft apps for email, content sharing and CRM and found that the cloud version of those applications can significantly reduce carbon emissions. And, while the research was conducted using Microsoft apps, the software giant added that "similar advantages can be observed across many applications and cloud service providers."
Examining three distinct deployment sizes -- 100, 1,000 and 10,000 users -- the study looked at the carbon footprint of server, networking and storage infrastructure and found that the smaller the company the larger the benefit of going with the cloud. When a 100 user organization moved to the cloud, the effective carbon footprint reduction could be up to 90 percent because of a shared cloud environment and no local servers.
Meanwhile, companies with 1,000 users had savings ranging from 60 percent to 90 percent. And large companies had savings that were typically around 30 percent to 60 percent on energy consumption and carbon emissions for cloud applications. Microsoft cited one large consumer goods company that reduced carbon emissions by 32 percent by moving 50,000 email users in North America and Europe to the cloud.
"As the data shows, the per-user carbon footprint is heavily dependent on the size of the deployment," the study indicated. "The cloud advantage is particularly compelling for small deployments, because a dedicated infrastructure for small user counts -- as in a small business running its own servers -- typically operates at a very low utilization level and may be idle for a large part of the day. However, even large companies serving thousands of users can drive efficiencies from the cloud beyond those typically found in on-premise IT operations."
Introduction to Amazon Relational Database Service (Amazon RDS)
Database Management Systems (DBMS) have been monolithic structures with their own dedicated hardware, storage arrays, and consoles. Amazon Web Services (AWS) realized that while each company can use unique methods of collecting and using data, the actual processes of building the management infrastructure are almost always the same. AWS remedies DBMS problems with its Amazon Relational Database Service (Amazon RDS).
Database Management Systems (DBMS) are an integral part of almost every large-scale software system. DBMS are large, complex software suites that not only store and retrieve data, but also secure the data, allow backups of the data, replicate the data across multiple systems for greater reliability, and cache the data for faster access.
Such large database systems are difficult to set up, maintain, and expand. They are also challenging to clone, which is a critical step that allows Development (Dev) and Quality Assurance (QA) departments to use the same environment for developing and testing an organization's products.
Meanwhile, companies are finding that the data they capture is becoming more and more valuable to their business, either as a way of measuring and improving their own operations, or as a product that can be sold. The process of extracting value from the data becomes complicated, however. This is because as the data becomes more valuable, more care should be taken with the DBMS infrastructure-and yet the people who are involved in running and maintaining the DBMS are the ones who can best help with exploiting the data for the business.
Amazon Relational Database Service (Amazon RDS) was created by Amazon Web Services (AWS) out of its own experience with these DBMS complications. Amazon RDS provides cost-effective DBMS deployment, quick and efficient scaling, and easy support of development and QA.
This paper describes how you can set up Amazon RDS and use it as a drop-in replacement for traditional DBMS, with all the cost, scaling, and agility advantages of deploying software in the cloud.
Problems with Traditional DBMS
Because traditional DBMS are so large and complex, you can run into problems with them at any stage of the software lifecycle: Production, Dev, and QA. This section first describes some of these problems in the order in which you're most likely to encounter them (Production, then in the QA-Dev cycle), and then discusses how Amazon RDS can help mitigate them.
Problems with Production DBMS
Databases-particularly large-scale databases-are at the heart of successful web applications and critical enterprise software. Any company that uses a database, or provides database services, finds itself depending on the database either for critical support of its operations or for its competitive business advantage. But there is usually a long and involved process in setting up the DBMS that support these databases, particularly if the databases need to scale across hundreds, thousands, or even millions of users, and/or across continents or time zones, all while guaranteeing reliability.
Traditionally, DBMS have been monolithic structures with their own dedicated hardware, storage arrays, and consoles. Proper sizing of the hardware infrastructure has been a capital-intensive-but largely opaque-process. Initial configurations are based on educated guesses about system load and user needs, and lock in thousands of dollars of capital budget. Of course, these guesses almost always get at least one aspect of the deployment wrong, meaning, it requires more design time and expenditures on additional memory, bigger CPUs, and/or more disk space.
In growing its own infrastructure and then developing AWS, Amazon realized that while each company can use unique methods of collecting and using data, the actual processes of building the management infrastructure-and the problems inherent in growing that infrastructure-are almost always the same. Werner Vogels, the CTO of AWS, calls these processes "undifferentiated heavy lifting," meaning, tasks that every organization must perform but they impart no business advantage.
These processes encompass the whole spectrum of database management: you must choose the right software and hardware to run, install the latest version of software, configure the right security levels to access the hardware and software, get the entire system onto the network, and finally, make the system accessible as a data store.
Once the DBMS is running, the operational challenges start: you must make sure the database is backed up, run read replicas of the database to increase access speed, and increase capacity as the system grows. If you decide that you need high availability, there are additional complications: you must replicate the data onto separate hardware platforms, and you must detect the failure of the main server and re-route traffic to the replicated server.
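As a rough illustration of how much of that operational work a managed service absorbs, the sketch below uses the AWS SDK for Python (boto3) to request a Multi-AZ instance (a replicated standby with automatic failover), automated backups, and a read replica. The identifiers, instance class, storage size, and credentials are placeholders, not recommendations.

```python
# Minimal sketch: provisioning the pieces described above through Amazon RDS.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-primary",       # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m5.large",            # placeholder sizing
    AllocatedStorage=100,                     # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me",           # use a secrets store in practice
    MultiAZ=True,                             # standby copy plus automatic failover
    BackupRetentionPeriod=7,                  # automated daily backups, kept 7 days
)

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-replica-1",
    SourceDBInstanceIdentifier="app-primary", # offload read traffic from the primary
)
```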
These processes are not everyday tasks; in fact, setting up a new database on a new machine is a relatively rare occurrence for most organizations, as would be setting up replication, and so on. Because these processes happen rarely, they cause a different sort of problem: when it is time to perform them again, you or your team needs to re-learn them and/or update their skill set to the latest release. This means that critical configuration tasks – which can cause their own problems, and sometimes ones that do not manifest themselves until well down the road – are being done by relative rookies, every time.
Silly Window Syndrome (SWS) is a problem that can arise in poor implementations of the transmission control protocol (TCP) when the receiver is only able to accept a few bytes at a time or when the sender transmits data in small segments repeatedly. The resulting number of small packets, or tinygrams, on the network can lead to a significant reduction in network performance and can indicate an overloaded server or a sending application that is limiting throughput.
In TCP, as data is transmitted the receiver replies with acknowledgements that, among other values, specify a window size - the number of bytes it is currently able to receive. The sender uses this to compute a "usable window" by subtracting the amount of unacknowledged data from the window size provided by the receiver. This process is known as the sliding window algorithm that TCP uses as its flow control protocol.
In certain situations and without preventative measures in place, the sliding window protocol can lead to SWS when the usable window shrinks to a "silly" size and increasingly small segments are sent (for reasons discussed below), to the point where packet headers exceed the amount of data in the packets. The greater number of packets being sent, each with its own TCP header, dramatically increases the overhead even as the amount of actual data sent decreases, leading to network congestion and a large loss of efficiency from degraded throughput.
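The sliding-window bookkeeping reduces to a little arithmetic. The sketch below shows how a naive sender that always fills the usable window ends up emitting ever-smaller "silly" segments as the receiver falls behind; the window values are made up purely for illustration.

```python
# Usable window = receiver's advertised window - bytes already in flight.
MSS = 1460  # bytes, typical maximum segment size on Ethernet

def usable_window(advertised_window: int, unacked_bytes: int) -> int:
    return max(advertised_window - unacked_bytes, 0)

# As a slow receiver drains its buffer, the advertised window keeps shrinking.
for advertised, unacked in [(65535, 0), (8000, 7000), (900, 600), (120, 100)]:
    win = usable_window(advertised, unacked)
    tag = "OK" if win >= MSS else "silly"
    print(f"advertised={advertised:6d}  usable={win:6d}  next segment: {tag}")
```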
What Causes Silly Window Syndrome from the Sender Side?
On the sender's side, silly window syndrome can be caused by an application that only generates very small amounts of data to send at a time. Even if the receiver advertises a large window, the default behavior for TCP would be to send each individual small segment instead of buffering the data as it comes in and sending it in one larger segment.
A Common Solution
Nagle's algorithm is one of the most common ways of dealing with silly window syndrome, but the algorithm is still widely misunderstood and requires some tuning and optimization to make it work correctly in most environments. Here's what happens in a TCP transaction when you have Nagle's algorithm turned on:
- The first segment is sent regardless of size.
- Next, if the receiving window and the data to send are at least the maximum segment size (MSS), a full MSS segment is sent.
- Otherwise, if the sender is still waiting on the receiver to acknowledge previously sent data, the sender buffers its data until it receives an acknowledgement and then sends another segment. If there is no unacknowledged data, any available data is sent immediately.
While Nagle's algorithm increases bandwidth efficiency, it impacts latency by introducing a delay since only one segment is sent per round trip time. Applications that require data to be sent immediately usually require Nagle's algorithm to be turned off.
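For applications that do need to turn Nagle's algorithm off, the usual mechanism is the TCP_NODELAY socket option. A minimal Python example follows; the host and port are placeholders.

```python
# Disabling Nagle's algorithm for a latency-sensitive connection.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small writes immediately
sock.connect(("example.com", 80))  # placeholder host/port

# With TCP_NODELAY set, each write goes out without waiting for the previous
# segment to be acknowledged: lower latency, but more (smaller) packets.
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```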
For a real-life analogy, let's say we have a couple of moving trucks (packets) taking furniture (data) from one house to another. If a truck transported each piece of furniture as soon as it was taken out of the old house, one piece at a time, clearly the operation would take forever (SWS). If we have enough trucks in transit between locations, there's going to be a fair bit of congestion on the route as well. The obvious and more efficient solution is, of course, to wait until each truck is full before it drives off to the new house to avoid the large overhead of the drive time and loading/unloading time of each truck's trip.
What Causes Silly Window Syndrome from the Receiver-Side?
If the receiver processes data slower than the sender transmits it, eventually the usable window becomes smaller than the maximum segment size (MSS) that the sender is allowed to send. However, since the sender wants to get its data to the receiver as quickly as possible, it immediately sends a smaller packet to match the usable window. As long as the receiver continues to consume data at a slower rate, the usable window, and therefore the transmitted segments, will get smaller and smaller.
There are some settings you can tweak to minimize the likelihood of silly window syndrome being caused on the receiver side:
- When the receiver's window size becomes too small, the receiver doesn't advertise its window until enough space opens up in its buffer for it to advertise a maximum-sized segment or until its buffer is at least half empty.
- Instead of sending acknowledgments that contain the updated receive window from above as soon as the window opens up, the sender can delay the acknowledgments. This reduces network congestion since TCP acknowledgments are cumulative, but the delay must be set low enough to avoid the sender timing out and retransmitting segments.
Going back to our example, let's say that once the moving trucks are unloaded, there's only one person moving the furniture into the new house. If he's moving too slowly, the furniture will pile up in front of the house and there won't be any space left for subsequent trucks to drop off their furniture. If his solution is to tell the truck drivers to start bringing fewer and fewer pieces of furniture each trip to give him a chance to keep up, he'll run into the same problem as before - a lot of trucks for only a few items traveling between houses. If he doesn't request additional furniture until he's moved more of it into the house, he can ask for a fully-loaded truck as soon as he has room instead of incrementally receiving the same amount of furniture, but in a larger number of trucks.
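The first receiver-side rule above can be written as a small predicate: do not advertise a tiny window; wait until at least a maximum-sized segment (or half the buffer) has opened up. The buffer and MSS sizes below are illustrative.

```python
# Receiver-side window advertisement rule, reduced to a predicate.
MSS = 1460
RCV_BUFFER = 64 * 1024

def window_to_advertise(free_bytes: int) -> int:
    """Return the window to advertise, or 0 to keep the window closed for now."""
    if free_bytes >= min(MSS, RCV_BUFFER // 2):
        return free_bytes
    return 0  # stay closed until enough buffer space opens up

for free in (200, 1000, 1460, 20000):
    print(f"{free:6d} bytes free -> advertise {window_to_advertise(free)}")
```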
Silly window syndrome is an avoidable problem, but it happens, and when it does, it pays to know where to look for the cause, and what kind of troubleshooting you can do to make it better.
Since the original conceptualization of computer security, and perhaps even before, social engineering has been in existence. One could say that social engineering began when societies began, whether it was realized or not. It is now time to give some of this work to scripts and applications to make it a little more interesting…
As the years passed in the computer security community, network penetration became more and more necessary, but computers were not the only thing getting compromised. Social engineering was part of the hacker subculture, but it was never a service offered by companies.
In recent years—largely due to the fact that they are doing more business online—companies have become more security aware and networks have become more “secure.” Finding remote vulnerabilities on Internet-facing networks that can be exploited is becoming more and more difficult due, in part, to such realities as the increased safety of operating systems, the standardization of automated patching, and the hiring of security personnel. Having said that, many would argue, “What about corporate networks? Do companies secure their networks the same way they secure production servers?”
The short answer, in my experience, is no. Companies have different approaches to and views about internal and external networks: they often don't think about internal threats. They fail to understand that internal threats don't necessarily mean an internal employee going rogue; it could easily be an attacker with access to the corporate network who is attacking it from an internal perspective.
For thousands of reasons and excuses, workstations and internal servers are never kept as secure as external servers: they usually lack up-to-date patching schedules, and are loosely and improperly configured. On top of this already insecure network are the human users, which includes IT admins, engineers, and developers. Your employees.
Employees: A group of people who can perform amazing tasks such as infect their computer in less than two hours, install buggy freeware apps, and open all those links that come with explicit warnings such as DO NOT OPEN - VIRUS FOUND.
To make a story short, hackers, spammers, botnets, criminal organizations, and all the other “bad guys” constantly take advantage of the weakest link in all types of security: The Human Factor, or human stupidity. The reality is, it doesn't matter how much you harden a computer, you can rely on a human to find a way to compromise that computer.
Social engineers are acutely aware of how human psychology operates, and they are well aware of human needs and feelings. Consequently, they will use and abuse these “issues” to craft their ruses and attacks.
Additionally, due to the rise of social networks in personal and corporate environments, people are constantly checking their Facebook, LinkedIn, email, Twitter, Google+, and Gmail—everyone wants to know what is going on within their company. The 21st century human has an addictive need to be informed in real-time. It is human nature to communicate and interact with people, and to be as informed as you can about your environment. Deep down, we all love to gossip.
Before we even start, it's worth noting that client-side attacks, phishing attacks, social engineering attacks, and social engineering penetration tests have existed for a long time. Due to the ever-tightening security around networking in recent years on one hand, and the expansion and rapid growth of social networks on the other, these attacks have gained strength, and new attack types are appearing daily, abusing the communication channels humans are working so hard to create.
Standard attack types:
• Classic email-driven social engineering attacks
• Website phishing attacks
• Targeted social hacking (Facebook, LinkedIn, Google+, et cetera)
• Physical social engineering
In my next three posts, I will be walking through the steps to perform a social engineering attack from a corporate point of view as a security consultant. I'll begin with information gathering, the indispensable "homework phase" that every social engineering engagement should begin with.
Kaspersky Lab, an international data-security software developer, reports the discovery of "Blue Code" - a new malicious program, which attacks remote Web servers operating on Microsoft's Internet Information Server (IIS) platform. At the moment, Kaspersky Lab has received several reports of infections by this worm from China.
Similar to the notorious "Code Red" worm discovered earlier this year, "Blue Code" attacks IIS servers. However, to penetrate into target computers, this worm exploits the Web Directory Traversal vulnerability in IIS security that was discovered in October 2000. The worm penetration procedure consists of three stages. First of all, "Blue Code" gains access to the remote computer's hard disk, then uploads there a worm-carrying file from an already infected IIS server and runs this file.
The worm-carrying file creates several additional files in the root directory of the C drive: SVCHOST.EXE, HTTPEXT.DLL and D.VBS. The first two names are reserved by Windows and belong to the non-malicious programs that are included in Windows 2000/NT standard distribution. In this way, the worm tries to disguise its presence on the infected IIS server.
The malicious SVCHOST.EXE is registered in the start-up section of the Windows system registry so the worm will become active each time the computer is rebooted.
In turn, D.VBS performs several actions aimed at removing active "Code Red" copies from the system memory and creating a defense against future "Code Red" attacks. In particular, "Blue Code" locates and terminates an INETINFO.EXE application that is responsible for access to the Web server's resources (this terminates active "Code Red" copies). In addition, the worm changes the processing of specialized HTTP requests, which makes it impossible for "Code Red" copies to penetrate this IIS server in the future.
For further spreading, "Blue Code" initiates 100 active threads that scan randomly selected IP addresses and attempt to plant a copy of the worm on any accessible remote computers. The number of active worm threads can significantly degrade the infected IIS server's performance.
The worm also has a payload routine that performs a DoS-attack (Denial of Service) on the http://www.nsfocus.com Web server from 10:00 am till 11:00 am UTC time.
Protection against "Blue Code" has already been added to the daily update of Kaspersky Anti-Virus. For a more detailed description of this worm, please visit the Kaspersky Virus Encyclopedia.
Once upon a time, Microsoft had applied to the FCC to become an approved white spaces administrator. However, in December, the FCC approved Spectrum Bridge to administer a database system of television white spaces, "which may provide service to devices beginning January 26, 2012." So now the software giant is proposing WiFi-NC which operates over low bandwidth channels, but when bundled together all of these white space narrow channels can provide a "full purpose signal" and screaming speeds.
Wi-Fi frequencies can carry a signal only a short distance, but TV stations use parts of the radio spectrum that allow signals to travel long distances. Microsoft Mobile Computing Researcher Ranveer Chandra said when TV stations moved from analogue to digital broadcasts, it opened up "unused slices of the spectrum between stations." These "white spaces" could be used for wireless Internet service, if there was a way to ensure there would be no interference with TV broadcasts. Yet research proved that "an HD movie stream can be transmitted over the same channel being used by a wireless microphone (which use frequencies close to TV broadcasts) without causing any noticeable degradation to the sound recorded. That shows that the current rules may be too conservative." As it stands now, white space devices "must avoid any channel in use by a wireless microphone as well as the channels on either side of a TV broadcast."
This was about the time it was reported that Microsoft wanted "to rule the white spaces" and Microsoft Research presented "SenseLess, a database driven white spaces network." That system was able to tell wireless devices where there were available white spaces and if it would be legal to broadcast. Well now Microsoft has Wi-Fi over narrow channels, dubbed WiFi-NC which could operate at fast speeds. WiFi-NC would bundle multiple narrow signals to create bandwidth and, like the fastest Wi-Fi networks, be able to transmit data up to a gigabit per second in those white spaces.
Physorg explained that WiFi-NC devices would work by "combining a large group of very low power radios and receivers (which they call transmitterlets and receiverlets) each of which would be temporarily dedicated to one free band in the spectrum. The signals would then be combined to create one full purpose signal and used in what the team calls a compound radio."
According to Microsoft Research, "We propose WiFi-NC, a novel PHY-MAC design that allows radios to use WiFi over multiple narrow channels simultaneously. To enable WiFi-NC, we have developed the compound radio, a single wideband radio that exposes the abstraction of multiple narrow channel radios, each with independent transmission, reception and carrier sensing capabilities. The architecture of WiFi-NC makes it especially suitable for use in white spaces where free spectrum may be fragmented."
"Not only would such new devices allow Wi-Fi suppliers and users to take advantage of the additional bandwidth, but moving to such a new system wouldn't necessitate throwing out current hardware, as the reception and transmission logic would remain the same. Moving to such a new standard, Microsoft argues, would be both fair and efficient, allowing everyone access to more bandwidth, which is always a concern as more and more devices come to rely on Wi-Fi hardware and software solutions for moving data," reported Physorg.
Microsoft researcher Krishna Chintalapudi told Technology Review, "It is our opinion that WiFi-NC's approach of using multiple narrow channels as opposed to the current model of using wider channels in an all-or-nothing style is the more prudent approach for the future of Wi-Fi and white spaces." Chintalapudi added that the goal of the Microsoft Research team "is to propose WiFi-NC as a new wireless standard for the hardware and software industries."
Convincing Congress of the necessity to approve wider white spaces may be the biggest challenge Microsoft faces.
The standardization process for 3-D content to be transmitted to the home has begun, and though it's at least a year and a half down the road, cable operators need to be paying close attention to the technologies that could soon swamp their pipes ... or their wallets.
The modest reviews and returns for the remake of “Journey to the Center of the Earth” notwithstanding, film studios and broadcasters across the globe are enthusiastic about three-dimensional (3-D) content. There’s a wave of 3-D coming, people want it in their homes, and the Society of Motion Picture and Television Engineers (SMPTE) and the consumer electronics industry are already working on making it happen. Pay TV providers obviously have a role to play.
The history of 3-D imaging, or stereoscopic imaging, dates back more than 150 years. Stereoscopy, invented around 1840, creates the illusion of depth in an image by presenting two similar images, differing only slightly in perspective, one to each eye. Many 3-D displays still use this method to deliver 3-D images to viewers.
The first 3-D movies hit America in the early 1900s, and since then hundreds of films and TV shows have been created in the format. According to Sensio, in the ‘50s, 3-D films such as “Hondo,” starring John Wayne, and Alfred Hitchcock’s “Dial M for Murder” were shot using state-of-the-art technology, but even so, 3-D lost its appeal due to the poor viewing conditions in most theaters, and due to the complex equipment required to exhibit 3-D movies, such as silver screens, polarized glasses, double-synchronized projectors, special lenses, etc.
But with the introduction of the Imax 3-D format in the ‘80s, and with the emergence of new screening technology, the 3-D format saw a resurgence. Computer animation technology, digital cameras and 3-D home theaters have contributed to the democratization of stereoscopic production and screening. And as the technology has gotten progressively more advanced, the demand has only gotten stronger for more quality 3-D content.
So the concept of delivering 3-D content to the masses is no new concept, but what is new is that it now makes business sense, says Nicholas Routhier, president and CEO of Sensio, which develops and markets avant-garde stereoscopic technologies. Before, the technology was very limiting; distributing two different streams was a nightmare, and there was no meaningful way of doing it; and it was extremely costly, he said. But today, digital technology has improved, and consumers are excited about 3-D and are willing to pay more to see it.
3-D versions of movies, on average, rack up about one-and-a-half times the amount of box office sales as the 2-D versions, Routhier says. And the next step is that consumers want to see those same 3-D movies in the comfort of their own homes.
According to Steve Oksala, vice president of standards for the Society of Cable Telecommunications Engineers (SCTE), executives from the cable, consumer electronics and film industries, as well as other related industries, are saying about 3-D to the home: “This looks like it could bring in business. Let’s get some standards.”
BEGINNINGS OF A STANDARD
“Companies have come to us and said that there is a market for moving 3-D video to the home,” says SMPTE Engineering Vice President Wendy Aylsworth.
So this summer, SMPTE established a task force to define the parameters of a stereoscopic 3-D mastering standard for content viewed in the home. Called the 3-D Home Display Formats Task Force, it aims to move the 3-D home entertainment industry forward by setting the stage for a standard that will enable 3-D feature films and other programming to be played on all fixed devices in the home, no matter the delivery channel, SMPTE says.
The inaugural meeting of the task force occurred on Aug. 19. It explored the standards that need to be set for 3-D content distributed via broadcast, cable, satellite, packaged media and the Internet to be played-out on televisions, computer screens and other tethered displays. After six months, the committee will produce a report that defines the issues and challenges, minimum standards, evaluation criteria and more, which the society says will serve as a working document for SMPTE 3-D standards efforts to follow.
“Digital technologies have not only paved the way for high-quality 3-D in the theaters, they have also opened the door to 3-D in the home,” Aylsworth says. “In order to take advantage of this new opportunity, we need to guarantee consumers that they will be able to view the 3-D content they purchase and provide them with 3-D home solutions for all pocketbooks.”
According to Aylsworth, in early 2009 there will be an effort within the Technology Committee of SMPTE to actually write a standard, or standards. The task force will determine how many standards there need to be; for example, subtitling methods, which are very difficult in 3-D, may be segregated out to house its own standard.
Typically a standard takes between a year and two years to get published, Aylsworth says, so the U.S. is looking at about 18 months until it will begin to see 3-D-to-the-home technologies in the market. “The manufacturers who care the most are in active participation,” Aylsworth says, “and once the [standard] document goes through the final ballot, details are down and vendors are confident that no technology items will change, vendors will go into production.”
Vendors that presented their respective technologies at the task force meeting included Sensio, Philips, Dynamic Digital Depth (DDD), TD Vision and Real D, all of which have 3-D distribution technologies that are working in some fashion today.
Currently, consideration is not being given to smaller formats and smaller devices that could receive 3-D content. The focus of the task force is mostly on distribution to large-screen TVs. “We don’t want to do anything to preclude it from going to mobile, but we had to put parameters on what we want to solve now, and we went with the HD world and how people are viewing HD.”
But eventually, Aylsworth says: “We want 3-D to reach everyone. We want to pick a method that can hit as many displays as possible.”
And that’s exactly what the Consumer Electronics Association (CEA) is aiming for, as well. Much interest regarding 3-D-to-the-home standards has been expressed to the association, and on Oct. 22, the CEA has authorized a “discovery group” to investigate the need for 3-D standards.
According to Brian Markwalter, CEA’s vice president of technology and standards, the discovery group will look at how 3-D information could be passed by a common standard to displays, how the content will move around in the home, how the mixture of 2-D and 3-D content will hold up, and how the mixture of 3-D content and onscreen menus, as well as other features that consumers are used to having, will fare.
WHAT ABOUT CABLE?
3-D to the home is not that far out for cable operators, SMPTE’s Aylsworth says, and operators should be aware of the task force’s activity and working through their related organizations – through SCTE – staying abreast of what’s going on. Since it’s just a task force, she says, operators can stay tuned and merely pay attention to what comes out of the task force, but they should be learning about the issues. “What does it mean for their bandwidth, their infrastructure?”
According to the SCTE’s Oksala, it’s way too early to tell if 3-D content will require cable operators to upgrade their headends or networks, or whether it will be a plug-and-play solution. “If it’s done sensibly, it will simply be additional content transported over the cable network.”
“What concerns us,” Oksala says, “is if the standard results in different kinds of digital information being sent than cable already sends. When a standard is created, we want to make sure it doesn’t call for something that’s going to break our system. We can make changes in digital streams, but if something comes along that [cable operators] don’t expect, if we haven’t thought it through, it may process incorrectly.”
He says that the SCTE will make sure that the standard is consistent with SCTE 40; and if it’s not, SCTE 40 may have to be extended to account for something new.
Sensio’s Routhier says that with his company’s technology, cable operators would not need any additional technology, they would just need to use Sensio’s encoder. What cable operators do need to think about, though, is marketing, he says – where the service will be available, and for how much, etc. The real technical challenge, he says, is on the shoulders of TV manufacturers.
And bandwidth, of course, is always a concern, Oksala says. The issue will be important, especially since many broadcasters are doing HD, and eventually mobile and handheld, which doesn’t leave a lot of bits for 3-D. But as Aylsworth points out, there are many 3-D distribution techniques that do not require much additional bandwidth, such as Philip’s mode of distribution, dubbed 2-D plus depth. With this technique, a 2-D image is sent along with information about the depth of each object in that image, and at the other end, the display melds the two, allowing viewers to see two different images – one in the left eye and one in the right – without 3-D glasses.
The critical issue for cable operators, says DDD CEO Chris Yewdall, is deciding at what point there are enough 3-D-capable TVs in the market to warrant delivering 3-D content to subscribers. Yewdall expects 3-D games to emerge first, followed by 3-D movies on Blu-ray, which will inspire consumers to purchase 3-D-ready TVs. Then, he says, the cable industry can warrant the infrastructure changes needed – however minimal or severe – to start to address those customers with pay-per-view (PPV) and video-on-demand (VOD) services with 3-D content.
With the first generation of 3-D TVs, in order to enjoy the 3-D feature, a user must plug the TV into a PC. “It’s targeted toward gamers,” Yewdall says. The 3-D market will be most successful when it can deliver a 3-D experience that is comparable to what people are used to in 2-D, he says.
“3-D TV affects everyone in the TV industry, and that includes cable,” says Advanced Television Systems Committee (ATSC) President Mark Richer. “It’s not going to be here tomorrow for the consumer, but it’s really on the radar now, for everybody, and I think it’s going to be very, very successful. It’s going to take a while, but so did HDTV.” Just like with HDTV and the switch to color, Richer continues, “Whether you watch 3-D in a theater, or some day in the home, it really is striking to a viewer. It’s one of those big changes. I think it’s going to be a great opportunity for the cable and satellite industries.” | <urn:uuid:1eb57a35-fd52-4b43-9dce-15fd4d7bc54c> | CC-MAIN-2017-04 | https://www.cedmagazine.com/print/articles/2008/09/3-d-in-the-home | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00354-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95395 | 2,568 | 2.84375 | 3 |
As far back as the 1850s, people tried to send images over telegraph wires, but the methods were not able to provide quality images and displays. In the 1930s, The Associated Press finally succeeded and began transmitting news pictures, and it had immediate success; people loved illustrations with their news.
Today, monitors make it possible to display pictures with amazing speed and quality; they have increasingly replaced hard copy as the preferred method of accessing text and images; and they have become our doors to the Internet and the digital universe.
SIZE DOES MATTER
One of the first things buyers notice about a monitor is its size. Cathode ray tube (CRT) monitors are the most common and are sold in 14-, 15-, 17-, 19- and 21-inch models. Fifteen-inch monitors are predominant in today's PC systems, with 17 inches being the most common upgrade.
Larger monitors can improve the way things look on your screen and keep graphics moving faster, which are the most common reasons for an upgrade. Smaller screens can be set for higher resolutions, but the result will be images that are smaller and difficult to read or distinguish.
However, larger screens have larger tubes that require more room, creating a problem for users with limited desktop space -- imagine having a 19-inch TV on your desk.
Quality 17-inch monitors cost about $700. Users can get a high-quality 21-inch monitor, ideal for desktop publishing or design, for about $2,000.
When considering a monitor's size, be aware that its size is not the same as its viewable or usable area. Manufacturers provide the viewable screen sizes, in addition to the CRT sizes, of their monitors. Since the picture tube is inside a plastic casing, the viewable screen size is smaller than the CRT or monitor size. For example, a 14-inch monitor can have a viewable area that varies from 13 inches to 13.5 inches, while a 17-inch monitor can range from 15.5 inches to 16 inches of viewable area.
Larger monitors require greater viewing distance. Ideally, users should sit at a distance that allows them to see the entire screen without severe head or eye movement -- 20 inches for a 15-inch monitor and 30 inches for a 17-inch monitor.
THE MONITOR BEHIND THE MASK
A CRT uses masks -- a shadow mask or an aperture grille mask -- to align the tube's electron beam on the color phosphors, which create the screen images.
Shadow mask monitors use dot pitch -- the distance between dots on a monitor -- as their measurements. The smaller the distance between dots the better the image. Dot pitch is measured in millimeters. For example a .28mm dot pitch will produce a sharper image than a .39mm dot pitch. Most monitors come with dot pitches of .25 mm to .28 mm. A shadow mask tube is less expensive to produce and more commonly used in the market. They offer displays with more precision for word processing and spreadsheet applications.
Aperture grille monitors are more expensive and provide better graphics with saturated colors and brightness, which makes them ideal for desktop publishing and design work.
CRAZY FROM THE HEAT
Larger monitors use more power and thus produce more heat and electromagnetic field (EMF) emissions. Power consumption can be as high as 128 watts when a monitor is active. But almost all monitors have a feature that activates a low-power or sleep mode, which can drop power consumption to 30 watts.
Even though the effects of long-term exposure to a monitor's EMF emissions has not proved harmful, there is concern among many users. However, the MPR II standard for EMF emissions provides the most stringent guidelines for maximum radiation levels.
IN NEED OF REFRESHMENT
Since many of us stare at monitors for up to 12 hours per day, a monitor with a low refresh rate -- the time it takes to redraw or redisplay an image on screen -- can cause eyestrain. A higher refresh rate reduces flicker and the resultant eyestrain and headaches. Screens with a refresh rate of under 70 Hertz (or Hz) -- the frequency of electrical vibrations per second -- are unsuitable, resulting in images and characters that appear to vibrate.
There is a trade-off between a high refresh rate and a high resolution. The higher the resolution, the more time that's required to refresh the screen. Resolution is the measurement -- usually in dots per inch -- of an image's sharpness on the screen. The more pixels -- small spots on the screen where a combination of dots is used to display text or images -- the higher the resolution.
Another type of monitor -- used in shopping malls and other public places -- are touchscreens. Touchscreen monitors eliminate the need for a keyboard or mouse and provide users with graphical icons to initiate specific tasks. It can be used by every user regardless of the user's computer knowledge. By simply touching the screen, users can request information, initiate transactions or paint and draw.
Touchscreens are also finding their way into liquid crystal displays (LCD). For example, Wacom Technologies has developed a pressure-sensitive LCD that allows users to directly paint onto the screen with a pressure-sensitive pen. The PL-300, developed by Wacom, is a 10-inch, stand-alone LCD monitor mounted on top of a Wacom tablet. The display has 800x600 pixel resolution and 256,000 color depth -- the number of colors that can be displayed at one time.
For additional information, contact ELO TouchSystems, MicroTouch Systems and Wacom.
NEW KID ON THE BLOCK
Today, CRT monitors are big league, while LCDs are the rookies. However, major monitor manufacturers -- ViewSonic, MAG Innovision, Princeton, NEC and others -- are introducing flat panel display monitors. LCDs -- one of the many forms of flat panel -- can not only match a CRT's viewable area and overall performance, but they are much lighter, have lower emission levels, are thinner and require less space on the desktop. Low power consumption and excellent resolution make them highly suitable for a variety of professional fields, such as desktop publishing, medicine, government, military and finance.
It is predicted that in a few years the roles might be reversed, and LCDs will push CRTs aside. Evidence of this changing of the guard is in the form of ViewSonic's VPA150 ViewPanel -- a 15-inch monitor that is multimedia-ready with stereo speakers built into the base.
LCDs are still expensive in comparison to a similar CRT -- about three to four times more than a CRT. The flat panel display market is growing, but it will still take years before flat panels overtake CRTs for desktop use.
In an effort to reduce LCD prices, some manufacturers are using passive-matrix technology, which is less expensive than active matrix (see glossary p. 65). Active matrix LCD displays for portable computers are clearly better than passive matrix displays. However, the differences are less noticeable in desktops; passive matrix LCD displays are quite readable and cheaper than active matrix models.
NEC's 14.4-inch MultiSync LCD 400V active matrix monitor weighs only 11 pounds, consumes 30 watts on full mode and less than 7 watts in the power-saving mode.
For additional information on LCDs, contact: ViewSonic, MAG Innovision, Princeton and NEC.
THE PLASMA ALTERNATIVE
Another alternative to CRTs and LCDs are plasma displays. Initially intended for portable computers, they are starting to appear in the desktop market.
Plasma displays can provide the quick response times of tubes with all the other advantages of LCD displays.
Plasma technology uses a sealed glass envelope filled with rows and columns of small, individually charged chambers. Each hold a mixture of neon and xenon gases, which, when energized, glow brightly and produce images.
Plasmavision, developed by Fujitsu, is a 42-inch monitor with its own remote control for setting the screen and switching between RGB -- a video display requiring separate red, green and blue signals -- and video.
The Leonardo plasma display, developed by Mitsubishi, provides a 40-inch monitor that's only 4 inches deep. It weighs 65 pounds and has a screen resolution of 640x480. Digital signal processing also allows the unit to display various resolutions accommodating an array of refresh rates. Leonardo is compatible with IBM, Macintosh third-party graphic standards and various video formats -- S-VHS, NTSC, PAL and SECAM -- but plasma monitors are still developing and very expensive.
For additional information on plasma monitors, contact Mitsubishi or Fujitsu.
WHAT TO LOOK FOR
Buying a new monitor and setting it up is the easiest upgrade to do, but keep an eye out for these features and issues:
* Inspect the adjustment or control keys, because they make a monitor easier on your eyes and easier to use. Most models provide screen control buttons for adjusting the picture and sound -- distortion, colors, pincushioning, zoom and volume.
* Set the proper resolution. After start-up, the computer will establish the default setting, but most units allow the user to set it as well. For a 15-inch monitor, the resolution should be 800x600, 17-inch 1024x768 and for 19-inch 1280x1024.
* Monitors and graphics cards are interrelated, so a user who upgrades to a high-quality 17-inch monitor but not the graphics card will see little improvement. Before upgrading to a larger monitor, make sure the monitor supports the card's capabilities and vice versa. It's crucial that buyers always check the manual for their monitor and cards to make sure they support the same resolution.
* Know the warranty details. Many manufacturers provide a three-year warranty with a new monitor. If you decide to purchase a reconditioned unit, look for a model with -- at least -- a one-year warranty.
April Table of Contents | <urn:uuid:2c0bb771-4b7b-4832-950f-28ac58cdbe26> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Monitors-Open-Doors-to-Digital-World.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919765 | 2,081 | 2.84375 | 3 |
The amount of traffic flowing across today’s networks is growing exponentially. While it would be ideal to periodically upgrade the network bandwidth when needed to prevent traffic congestion and drops, this isn’t always practically feasible, making QoS (Quality of Service) a mandatory requirement in all but the smallest networks. In this course you’ll be introduced to the basics of QoS terminology and features and we’ll then quickly scale up to intermediate concepts. Some of the topics to be covered are; Why do we need QoS, definitions of QoS terminology such as Classification, Marking, Congestion Management and Congestion Avoidance. With regards to Classification we’ll cover the topic from a Layer-2 perspective (CoS), Layer-3 (IP Precedence, DSCP) and higher (NBAR). You’ll learn about Queuing mechanisms like FIFO, WFQ, CBWFQ and LLQ. Other topics covered are WRED and WTD, Policing and Shaping. | <urn:uuid:8a775f28-90c2-4ab7-b3af-ba68a1fa6351> | CC-MAIN-2017-04 | https://streaming.ine.com/c/ccie-rs-intro-to-qos | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00199-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911822 | 212 | 3.1875 | 3 |
Fiber optics is used in communications, lighting, medicine, optical inspections and to make sensors. But fiber optic is not always same. Such as outside plant (OSP) and premises cabling.
“outside plant” fiber optics as used in telephone networks, CATV, metropolitan networks, utilities, etc.
“premises cabling “fiber optics as found in buildings and campuses.
Outside Plant (OSP)
Telephone companies, CATV and the Internet all use lots of fiber optics, virtually all of which is singlemode fiber and most of which is outside buildings. It hangs from poles, is buried underground, pulled through conduit or is even submerged underwater. Most of it goes relatively long distances, from a few hundred feet to hundreds of miles.
Outside plant cables often have very high fiber counts, up to 288 fibers or more. Cable designs are optimized for the application: cables in conduit for pulling tension and resisting moisture, buried cables for resisting moisture and rodent damage, aerial for continuous tension and extreme weather and undersea for resisting moisture penetration. Installation requires special equipment like pullers or plows, and even trailers to carry giant spools of cable.
Splice and test
Long distances mean cables are spliced together, since cables are not manufactured in lengths longer than about 4-5 km (2.5-3 miles), and most splices are by fusion splicing. Connectors (generally SC or LC styles) on factory made pigtails are spliced onto the end of the cable. After installation, every fiber and every splice is tested with an OTDR.
If this sounds expensive, you are right! The installer usually has a temperature controlled van or trailer for splicing and/or a bucket truck. Investments in fusion splicers, OTDRs and other equipment can be quite expensive.
By contrast, premises cabling – cabling installed in a building or campus – involves shorter lengths, rarely longer than a few hundred feet, typically with fewer fibers per cable. The fiber is mostly multimode, except for the enlightened user who installs hybrid cable with both multimode and single mode fibers for future high bandwidth applications.
Splice and test
Splicing is practically unknown in premises applications. Cables between buildings can be bought with double jackets, PE for outside plant protection over PVC for building applications requiring flame retardant cable jackets, so cables can be run continuously between buildings. Today’s connectors often have lower loss than splices, and patch panels give more flexibility for moves, adds and changes.
Most connectors are SC or ST style with LC becoming more popular. Termination is by installing connectors directly on the ends of the fibers, primarily using adhesive or sometimes prepolished splice techniques. Testing is done by a source and meter, but every installer should have a flashlight type tracer to check fiber continuity and connection. | <urn:uuid:73242a09-d875-475d-aeae-02ae48bd688d> | CC-MAIN-2017-04 | http://www.fs.com/blog/the-difference-between-outside-plant-osp-and-premises.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937159 | 586 | 3.34375 | 3 |
I can think of few accessories more essential to computing than the mouse. This device made computer use more friendly. The ability to point and click revolutionized home and business computing.
It is easy to take the computer mouse for granted. I have grown accustomed to using it. In fact, to me, a mouse is a virtually natural extension of the computer, seemingly indistinguishable.
Yet, problems always seem to accompany progress. Such is the case with the mouse and wireless keyboard. As it turns out, these helpful devices may be letting in viruses and hackers.
That is right. Many computer users are unknowingly vulnerable. I recently came to learn that the mouse and wireless keyboards we love so much are highly susceptible to cyberattacks.
Wi-Fi Connections to Blame
As is true of many of you, I use Wi-Fi daily. I try not to dwell on all the security dangers possible because of Wi-Fi. Unfortunately, the reality is that just as colds can spread when people cough and sneeze into the air, computer viruses disseminate via Wi-Fi signals.
My Beloved Wireless Mouse
I was so excited when I brought home my first wireless mouse. The ability to slide the device around the table without a cable proved optimal in my work. The thought that I could possibly be the victim of a hacker virus never occurred.
Now, wireless mice and keyboards are a primary means of contracting the so-called MouseJack virus, created by hackers.
How Hackers Get Inside the System
Wireless mice and keyboards connect to the actual computer via a dongle, which translates user actions from the devices back to the computer. So, when I type or click on something using a wireless device, there is a moment when my information is out there in cyberspace. Guess what? That is how hackers can intercept data and get into the system.
A hacker who wants to steal my information or install a virus only need be within 100 meters of the computer to cause havoc.
Brands with Known Problems
Right now, there are certain brands that exhibit weaker defenses than others to these wireless attacks. The names include:
Using a Bluetooth-enabled mouse or keyboard is the best way to protect against these cyberattacks. Bluetooth technology comes with standard security that keeps out known viruses and provides defenses against hacking.
It is also possible to get a firmware patch from the computer or accessory manufacturer.
Small businesses, such as mine, must remain extra vigilant about security. I lack the financial resources to buy a whole new computer system if one falls victim to a serious hack attack. Nor can I afford to lose any sensitive data that could cost me clients.
I strongly advise that anyone serious about remaining safe in cyberspace take precautions as soon as possible. | <urn:uuid:8a007c3c-6299-4f22-beb5-038c531333ee> | CC-MAIN-2017-04 | https://www.apex.com/surprise-your-computer-mouse-may-be-letting-in-viruses-and-hackers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00127-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95271 | 565 | 2.53125 | 3 |
With the "telnet" command you can test if a port is open
Using the telnet command you can test if a port is open.
You can check if a port is open in your network by issuing the telnet command. If it is open, you will see a blank screen after issuing the command:
telnet [domainname or ip] [port]
- [domainname or ip] is the domain name or IP address of the server to which you are trying to connect
- [port] is the port number where the server is listening
If the port is open, you will see a blank screen. This will mean that connection is successful.
telnet rpc.acronis.com 443
(!) In Windows Vista and Windows 7 you may need to enable telnet first:
- Go to Start -> Control Panel -> Programs;
- Under Programs and features, click Turn Windows features on or off;
- Mark both Telnet Client and Telnet Server;
- Click OK.
If backup to Acronis Cloud is failing, use Acronis Cloud Connection Verification Tool to check the connection. | <urn:uuid:8c412c2a-c371-4ad7-bf25-543b0e1dc6d6> | CC-MAIN-2017-04 | https://kb.acronis.com/content/7503 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.830417 | 236 | 2.53125 | 3 |
Engineering students from Ohio State University have designed another record-setting vehicle, thanks in large part to the advanced computing resources of the Ohio Supercomputer Center.
Ohio State’s Venturi Buckeye Bullet team set an international land speed record last week at the Bonneville Salt Flats near Wendover, Utah. In a new competition category, the electric vehicle reached an average two-way speed of 212.615 miles per hour, capturing a new record (pending certification from the governing body).
The streamliner – the Venturi Buckeye Bullet 3, aka VBB-3 – was required to make two runs, one in each direction, to be considered for the international record. Although the record is officially determined by averaging the speed of the two runs in the middle of the eight mile course, the fastest measured speed for the so-called flying mile was 270 miles per hour.
The electric racecar was designed and built by undergraduate and graduate students at the Center for Automotive Research at The Ohio State University (OSU CAR) in partnership with Monaco-based electric car specialists Venturi Automobiles. Venturi provided the car’s two custom electric motors, while its two megawatts of lithium ion batteries were produced by US company A123 Systems.
Being able to evaluate a racecar’s aerodynamic properties is essential to creating a winning vehicle. In this case, student engineers carried out a pressure contour simulation of the racecar at 300mph. They also studied how the body shape could be modified to achieve minimum drag without undermining stability.
“At these high speeds, aerodynamics play a crucial role in vehicle and driver safety, as well as being one of the critical factors that dictate the peak performance of the vehicle,” said Dr. Giorgio Rizzoni, director of OSU CAR in a 2013 report. “From the start of the design process, the aerodynamics of each proposed body shape for the VBB3 was evaluated in Fluent and OpenFOAM using the computational resources available at the Ohio Supercomputer Center.”
One component that is crucial to stability at high-speed is the vertical tail. The final version of this part was the result of extensive simulations and numerous design iterations. As important as it is to hone all of the components, the entire vehicle must also be simulated as one unit to optimize performance and safety. Some of these solutions involved more than 42 million cells, according to the blog from Ohio Tech Communications Director Jamie Abel.
“Since it’s a new car with a new body, it has been interesting to look at the current aero, and to identify areas of improvement,” reported graduate student Casie Clark. “For example, I ran CFD jobs aimed at designing a wind deflector to deflect air around the tires, instead of allowing a large air mass inside the wheel wells, which creates substantial drag.”
The team’s inaugural run with the newly-minted car faced some challenges due to inclement weather, but team leader David Cooke wasn’t shaken. “We can’t wait to get back on the track and continue the journey to 400 miles per hour with an electric vehicle,” he shared. | <urn:uuid:d0e34ba5-37a0-46f4-97dd-d2d6f8b765c5> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/09/02/supercomputing-propels-record-setting-supercar/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951513 | 664 | 2.6875 | 3 |
Plans are currently underway for development of the world’s most powerful radio telescope. The Square Kilometer Array (SKA) will consist of roughly 3,000 antennae located in Southern Africa or Australia; its final location may be decided later this month. The heart of this system, however, will include one of the world’s fastest supercomputers.
The array is quite demanding of both data storage and processing power. It is expected to generate an exabyte of data per day and require a multi-exaflops supercomputer to process it. Rebecca Boyle of Popsci wrote an article about the telescope’s computing demands, estimating that such a machine would have to deliver between two to thirty exaflops.
Currently, the fastest computer on the Top500 list, the Japanese K computer, can perform 10.51 petaflops on Linpack. To reach the minimum estimated compute requirement for the SKA, a system would need to perform nearly 285 times faster than that system. For such a supercomputer to exist, substantial advancements in both processing power and reductions in power consumption will be required.
There is still time to develop this technology, as the SKA is not expected to begin operation until 2024. Most experts estimate the exascale barrier to be broken before that time. It’s with this focus that IBM and ASTRON, a Netherlands-based radio astronomy institute, have partnered up in a five-year 32.9 million Euro project to develop a system capable of supporting the SKA. The collaboration is named DOME, after the cover for telescopes and the Swiss mountain.
Research will take place at the new ASTRON & IBM Center for Exascale Technology in Drenthe, the Netherlands. Some of the technologies that will be researched there include 3D stacked chips, advanced accelerators, optical interconnects and nanophotonics. Marco de Vos, Managing Director of ASTRON described how answering challenges presented by the SKA would lead to greener technology:
“Large research infrastructures like the SKA require extremely powerful computer systems to process all the data. The only acceptable way to build and operate these systems is to dramatically reduce their power consumption. DOME gives us unique opportunities to try out new approaches in Green Supercomputing. This will be beneficial for society at large as well.”
Since the SKA is in still in the planning phases, researchers at the exascale facility will test their designs on the existing low-frequency array (LOFAR). LOFAR was built by ASTRON and uses some of the same technology that will be incorporated into the SKA.
The technical requirements for the SKA are certainly challenging from a computational point of view. Research and development for the project will most likely generate advancements in low-power computing and storage technologies that will have applications in supercomputers around the world. Beyond requiring new innovations in computing, this will be the most powerful telescope of its kind ever developed. | <urn:uuid:41f12277-710d-4996-9f48-7190bcee6243> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/04/03/astronomers_look_to_exascale_computing_to_uncover_mysteries_of_the_universe/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00457-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941394 | 618 | 3.671875 | 4 |
Matthew Mazzotta’s latest art project — a methane digester that feeds on doggy doo — has created quite the stir. Recently installed in a public dog park in Cambridge, Mass., the device is powering an old-fashioned, gas-burning lamppost while sparking conversations and building connections that may otherwise go overlooked.
“Everybody loves this idea because basically it’s doing something good with the dog waste,” said Mazzotta, a Massachusetts Institute of Technology (MIT) visual arts program graduate and lead artist of Park Spark Project.
Dog (and other animals’) waste creates methane — a greenhouse gas released into the atmosphere. Methane can be converted to energy. The methane digester has been hailed as the first of its kind in the United States — due to its public nature and location in a dog park.
Mazzotta is hoping to fuel other devices with the “passive technology.”
“The science is the science,” he said. “I think everybody loves to see something on the simplest of levels — there’s no electricity involved in this. It’s just collecting gas and burning it.”
While an MIT student, Mazzotta traveled to India for an appropriate technology class, which included studying methane digesters. Upon his return, while sitting in a Cambridge dog park, Mazzotta noted a full garbage can and thought about other countries that make use of such “waste.”
Last spring, Mazzotta built a communal tea house in the Netherlands, where local cow manure is transformed to methane — the energy of which is used to heat water. He realized that urban areas don’t take advantage of such natural resources, though they certainly have the furry friends producing such fuels.
But his hopes for the Cambridge project extend beyond the environment. A conceptual artist at heart, Mazzotta said he hopes the technology will serve as a launching point for people to meet, converse and learn from one another. “I started realizing that’s what this energy could be used for — to open up a new social air in the community instead of just being a technology,” he said.
The process of getting the city on board with the project wasn’t quite as simple as the technology Mazzotta describes, however. He spoke with officials at various city departments — from fire, public health to parks and recreation — and after making a few changes to the design, got approval for the project in the end of July. But Mazzotta isn’t complaining about the extra precautions.
“Who blames them,” he said. “It’s new, it was exciting for me to learn about, so they probably had some thoughts.”
And dog owners are eager to use and play with the digester. In use since early September, it’s equipped with a stirring handle (with directional arrows), which people have spun more than Mazzotta expected. “It’s been spun so much, it gave me new ideas for public art,” he said. “If you ever put a handle in public with arrows on it, people spin it like crazy.”
While the lamppost is the first device to be fueled by the methane digester, Mazzotta has asked the public for input on other potential uses, with mixed results. A recent meeting garnered suggestions, which ranged from fueling a shadow projection box, popcorn stand and even dentist chair.
And interest in creating similar projects has risen outside Cambridge, Mazzotta said, noting that his inbox has roughly 50 e-mails from government agencies inquiring about receiving a methane digester.
Mazzotta, who is referred to as “the artist” by park-goers, said it’s not the methane device itself that excites him, but people’s shift in thinking after seeing the possibilities. “Art can open up conversations over and over again,” he said. “Where technology is exciting at first, and then degrades, I think art should be the opposite — it’s confusing at first, but then just grows in interest.”
The Park Spark Project is funded through MIT, in partnership with the city of Cambridge, and remains an ongoing project. For more information, visit http://www.cambridgema.gov/CAC/Public/Park_Spark.cfm. | <urn:uuid:253adf01-0f55-47d8-80f7-6102fa9bf057> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Pet-Poop-Fueling-City-Park-Light.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00485-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955249 | 927 | 2.59375 | 3 |
The size will be counted as three bytes.
The character P represents a scaling factor for this data item and does not count in the size of the data item.
03 MICROS PIC PPP999.
the data item MICROS can contain values between 0.000001 and 0.000999. The size of the data item is three, and the three zeros between the decimal point and number are not explicitly represented in the number.
Meaning of P: An assumed decimal scaling position. Used to specify the location of an assumed decimal point when the point is not within the number that appears in the data item.
Size Of P: Not counted in the size of the data item. Scaling position characters are counted in determining the maximum number of digit positions in numeric-edited items or in items that are used as arithmetic operands.
The size of the value is the number of digit positions represented by the PICTURE character-string. | <urn:uuid:4bdf5336-5ddd-4db2-9cb5-fb8531c73572> | CC-MAIN-2017-04 | http://ibmmainframes.com/about2126.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00393-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.865821 | 197 | 2.640625 | 3 |
If you think 24/7 connectivity is nothing new for you, and you constantly check in on Foursquare, use location-aware apps, update Facebook or other social media statuses with your geo-tagged photos, then you probably have no location-awareness sharing issues and are not overly concerned if you lose locational privacy. In the year 2014, your futuristic automated smart home can update statuses for you; even more personal data will be logged coming from emerging technology; interaction with the power grid, smart meters, IP TVs, smart appliances, movie theaters harvesting emotions, robots, GPS in cars and smartphones, and products that stalk you will create a life-log. By 2014 there will be a plethora of programs, mobile apps and devices to track you that will create and store records of your movements, activities and behaviors; this is the scene that Europe's biggest cybersecurity agency studied "to predict positive and negative effects of online 'life-logging' on citizens and society."
In the European Network and Information Security Agency (ENISA) report, "To log or not to log? Risks and benefits of emerging life-logging technologies, the agency used a 2014 fictional family's day-to-day lives and examined the "impact for their privacy and psychology as they put ever more personal information online." While you might not call it life-logging, it's not too farfetched as many people track personal data generated by their own behavioral activities. In one ENISA scenario, a person would have rather walked out of the house naked than without her phone to update online statuses. In another, the bathroom mirror scrolls with your daily calendar, the weather, keeping track of and posting statuses when you awaken, your mood and your personal hygiene. Exercise equipment and your kitchen appliances also track and automatically post social media statuses.
According to ENISA analysis, "Information security related risks may have serious connotations on privacy, economy and society, or even on people's psychology" and shows how those (privacy, social, legal, economic, etc) "aspects are highly interrelated, and should be examined together." The benefits of "life-logging can bring families and friends closer and for a longer period of time." It reduces "individuals' sense of isolation" and enhances communication and "the building of social bonds among people." But advertisers will happily gobble up all that personal data generated and will push a "higher degree of context- awareness and personalization of services, which in its turn, would mean competitive advantage for those who have control over this data."
The down sides of social networking gone wild with a flood of personal behavioral activity data? Loss of privacy and control over data, financial fraud and "mobile devices, sensors or services become more attractive targets for attackers. In future Internet scenarios there is a related loss of autonomy risk," ENISA reported. For government and industry groups there is an increased risk for "corporate espionage and corporate disruption. An evil-doer, a hacker or an attacker attempts to glean personal information which individuals put 'out there' and to use such information as a way to hack into or attack a company or government department or a network. On the other hand, companies may use such tools to monitor the activities of their employees."
The deluge of data from logging your life has other dangerous risks "such as psychological damage, related to discrimination, exclusion, harassing, cyberstalking, child grooming, feeling of being continuously under surveillance (paranoid behavior), pressures related to work performance, peering into other peoples life etc." In other words, too much social media networking and you might think Big Brother is constantly watching you; paranoia will destroy ya.
But social networking surveillance is not farfetched as the government increasingly uses social media to gauge public opinion and citizens' input to political issues and other policies. For years there has been a tainting of public opinion with "weaponized information" into social media conversations and search results. The EFF warned that Big Bro wants to be your buddy on social networking sites, especially if you might be what Ntrepid called a "true influencer" in the presentation, Anatomy of a Social Network: Finding Hidden Connections and True Influencers in Target Data. That ISS World Americas teaching track was meant for "intelligence analysts and law enforcement agents who have to 'connect the dots' between people, places and other entities by searching through various data sources from data text to information on behavior patterns." This is all in order to "perform appropriate analysis to determine relationships, hierarchy, and organizational structure of co-conspirators and identify individual involvement in criminal and/or terrorist activities."
'Ninja librarians' aka CIA analysts mine and track "the mass of information people publish about themselves," including 5 million daily tweets and Facebook. Open source intelligence, OSINT, is the name of the game for criminal investigators and intelligence analysts "now that the Internet is dominated by Online Social Media."
So what's paranoid to you? Tenable Network Security's Marcus Ranum said, "One person's 'paranoia' is another person's 'engineering redundancy'." ENISA believes "that an informed user is the first step: the right to be forgotten, right to be let alone etc, are probably best enforced if the user is in control over his/her personal data." The flipside is spamming government agencies with too much information like the "FBI, here I am" approach where Hasan Elahi's constantly updates the FBI of his movements. Graffiti artist Banksy said, "You're mind is working at its best when you're being paranoid. You explore every avenue and possibility of your situation at high speed with total clarity." But one of my favs was said by the EFF's John Perry Barlow, "Relying on the government to protect your privacy is like asking a Peeping Tom to install your window blinds."
Like this? Here's more posts:
- Fourth Amendment's Future if Gov't Uses Virtual Force and Trojan Horse Warrants?
- 4th Amendment vs Virtual Force by Feds, Trojan Horse Warrants for Remote Searches?
- Facebook Wants to Issue Your IRL Offline ID & Internet Driver's License
- Skype Exploits: I know where you are, what you are sharing, and how to best stalk you
- FBI rolling out nationwide face search and recognition system
- Alabama Sheriff Demands Go Daddy Kill AntiSec Hackers' Websites for Data Dumps
- Privacy Nightmare: Data Mine & Analyze all College Students' Online Activities
- Busted! DOJ says you might be a felon if you clicked a link or opened email
- Not Without a Warrant: Privacy Upgrade and Digital Liberty from Surveillance
- Secret Snoop Conference for Gov't Spying: Go Stealth, Hit a Hundred Thousand Targets
- PROTECT-IP or control freaks? Monster Cable blacklists Sears, Facebook as rogue sites
- 4Chan Founder Moot Cherishes Choices: 'Facebook and Google Do Identity Wrong'
- Do you give up a reasonable expectation of privacy by carrying a cell phone?
Follow me on Twitter @PrivacyFanatic | <urn:uuid:21ebd0d1-242c-4fd6-b64d-9d3429c2890a> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2221117/microsoft-subnet/too-much-social-media-networking--paranoia-of-big-brother-surveillance-may-destroy-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92394 | 1,463 | 2.53125 | 3 |
Before Americans can realize Newt Gingrich's dream of building a moon colony, they must first send other living things to the rocky celestial body to test whether long-term survival is possible.
NASA plans to start by gardening on the moon. Studying plant growth, known as germination, in the lunar environment can help us predict how humans may grow too, said the space agency in a recent announcement of the experiment. NASA hopes to coax basil, turnips, and Arabidopsis, a small flowering plant, from tiny seedlings to hearty greens in one-sixth of the gravity they're used to here on Earth.
Plants, like humans, are sensitive to environmental conditions when they are seedlings. Their genetic material can be damaged by radiation in outer space, as well as by a gravitational pull unlike that of Earth. "If we send plants and they thrive, then we probably can," the statement read.
Humans would depend on plant life to live out their days in an extraterrestrial world, just like they do on their home planet. Plants would provide moon dwellers with food, air, and medicine. They would also, as previous research has shown, make them feel better by reducing stress, and even improve concentration—welcome side effects for those aware that their new home is built to kill them.
NASA hopes to cultivate its green thumb by sending a sealed growth chamber to the moon on the Moon Express lander, a privately funded commercial spacecraft, in 2015. The 2.2-pound habitat will contain enough oxygen to support five to 10 days of growth and filter paper, infused with dissolved nutrients, to hold the seeds. When the spacecraft lands in late 2015, water will surge into the chamber's filter paper. The seedlings will use the natural sunlight that falls on the moon for energy. An identical growth chamber will be mirroring the experiment on Earth, and the twin experiments will be monitored and compared.
Astronauts have been tinkering with plants in space for some time now, growing (and even glowing in the dark) aboard the International Space Station. Cultivating a garden on the moon, however, is the first genuine life sciences experiment on another world. | <urn:uuid:784f56ba-cd09-4426-bd8b-842d217563d4> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2013/12/nasa-sending-basil-moon/74945/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00447-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948406 | 443 | 4.1875 | 4 |
Who can forget one of the most iconic lines from the original Star Trek television show: "Warp drive, Mr. Scott. Make it so." Kidding! Just doing a little mash-up there. Before any Trekkie heads explode, here's the real line, delivered by a young William Shatner. All of which is a clumsy, Friday-ish way to lead into this week's meeting in Dallas of space scientists who will discuss possible ways to propel spacecraft faster than the speed of light and probably engage in master-level Star Trek trivia contests. From Yahoo news:
Spacecraft propelled by antimatter harvested by robotic factories on Mercury will be under discussion, as will spacecraft made from hollowed-out asteroids and a laser-beam “highway” to provide energy for ships to “hop” to nearby stars.Some of these technologies may come into being within 20 years, the organizers claim -- but the goal is interstellar travel by 2100, visiting planets such as those found by NASA’s Kepler space telescope.
Maybe it is possible that humans could go interstellar by then. After all, 2100 is more than 86 years from now -- a long time in terms of scientific discoveries and breakthroughs -- but we're still talking about taking nine months to go to Mars. We'll need a quantum leap to get to the interstellar level. Yet I'm not sure what the big rush is. Granted, we're well ahead of schedule in degrading the Earth's environment, but our planet is a long way from being uninhabitable. And while I agree with Dr. Friedwart Winterberg, a theoretical physicist from the University of Nevada, who tells Yahoo, “For the human species and its unique culture to survive the death of the sun, a bridge must be built to other solar systems with earthlike planets," scientists expect our sun to remain unchanged for several billion years. There's time. So while I think it's great that researchers are trying to advance the science of space travel, we're not yet in a doomsday scenario. To use a football analogy, let's work the ball upfield before throwing up a hail Mary. Now read this: | <urn:uuid:7f071a6e-9dba-4759-ac9f-31f22a875ec5> | CC-MAIN-2017-04 | http://www.itworld.com/article/2708369/enterprise-software/no-hurry-for-warp-speed.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00266-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944507 | 440 | 2.5625 | 3 |
For years, Microsoft has had to contend with the impression that their products were not as secure as their counterparts. This was especially true when it came to Internet Information Services (IIS), the Microsoft web server. Much of the issues associated with ISS were related to the fact that many of the services were enabled by default.
Right out of the box, IIS could run as a fully functional web server without much need to configure various services. Unfortunately, attackers knew this and were able to compromise servers, and web sites, that relied on IIS because many of the administrators who installed the software were not aware of what steps they needed to take to secure this application.
Much has changed over the years. In response to an increase in web-based application attacks, Microsoft made attempts to increase security in all of their products, including IIS. In version 6, they rolled out what was referred to as a lockdown by default approach where many features and services were left out, or disabled, in the default installation. They were still available, however the administrator had to enable or install them giving them full knowledge that they were running. In version 7, this approach changed again to take on a minimum install approach where only the bare minimum components are installed giving attackers a much smaller surface to work from.
Despite the strides taken to protect IIS 7 from attacks, there are still risks that a web administrator needs to be aware of if they are running this application as their web server - this is what makes using a WAF (Web application firewall) so appealing. Unfortunately, some of the things that make Microsoft’s IIS so appealing are also some of the issues that anyone using it needs to be aware of.
It is not insecure because it is a Microsoft product, but the fact that Microsoft still makes things easier for administrators still makes it a target. IIS can be installed and run on Server 2008 Core, which uses a command line interface rather than Windows. In this environment, the server is much more secure. However, when Windows is used the temptation to make use of Internet Explorer to connect to the web is far too great and happens far too often. When servers are allowed to access the web, they are put at risk. Windows makes it too easy for a lazy admin to simply fire up IE to find something from their server rather than a workstation.
One of the biggest threats to security is a web application. Odds are that most servers using IIS are using Windows. In a Windows environment, it is far too easy to install web applications like WordPress, Joomla!, or ZenCart. Although this is a huge selling point, it also poses a risk because if the web administrator does not have background knowledge related to the vulnerabilities that are present in these, or any other web application, then they may unknowingly be installing insecure software onto their server.
Of course, this can be true of applications installed via a command line interface or GNU/Linux shell as well, however odds are that if a person is adept at using these tools, they are more aware of basic security risks as well.
Unfortunately for Microsoft, many web admins still remember what the Code Red and Nimda worms did to web servers using IIS. Defacing web sites, hitting them with Denial of Service attacks, and exploiting path traversal vulnerabilities.
Due to Microsoft’s market share, it will always be a preferred target for malware attacks. Even as engineers work to patch known vulnerabilities, the thousands of pieces of malware being released into the wild every day that pose significant threats to any server running Microsoft.
Like any server, certain steps need to be taken to harden the operating system against attacks. While malware prevention, Intrusion Detection/Prevention Systems, network firewalls, and all of the other tools and techniques help prevent some attacks, they don’t adequately prevent attacks launched against any third-party applications that have been installed on the server.
dotDefender protects IIS web servers against a variety of vulnerabilities to include:
By acting as a Security-as-a-Service solution, dotDefender is able to provide protection to web servers whether the admin has an extensive background in security or just a minimal amount of knowledge on the subject.
With dotDefender web application firewall you can avoid many different threats to web applications because dotDefender inspects your HTTP traffic and checks their packets against rules such as to allow or deny protocols, ports, or IP addresses to stop web applications from being exploited.
Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against DoS threats, Cross-Site Scripting, SQL Injection attacks, path traversal and many other web attack techniques.
The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
Whether your web server is running IIS or Apache makes little difference. With hundreds of millions of dollars being stolen each year by cyber criminals vulnerabilities will continue to be a problem as known ones are exploited and new ones emerge.
In addition to money and data stolen as a result of compromised servers and web sites, businesses have to contend with a damaged reputation after an attack. When a breach of security occurs, customers and visitors second guess visiting that site if they know that they are not safe. Once the search engines find malware or spam on a web site, it can be flagged as malicious and removed from the search engine results page causing a loss in legitimate traffic.
dotDefender's unique security approach eliminates the need to learn the specific threats that exist on each web application. The software that runs dotDefender focuses on analyzing the request and the impact it has on the application. Effective web application security is based on three powerful web application security engines: Pattern Recognition, Session Protection and Signature Knowledgebase.
The Pattern Recognition web application security engine employed by dotDefender effectively protects against malicious behavior such as the attacks mentioned above, and many others. The patterns are regular expression-based and designed to efficiently and accurately identify a wide array of application-level attack methods. As a result, dotDefender is characterized by an extremely low false positive rate.
What sets dotDefender apart is that it offers comprehensive protection against threats to web applications while being one of the easiest solutions to use.
In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the box protection that can be easily managed through a browser-based interface with virtually no impact on your server or web site’s performance. | <urn:uuid:38479de0-6980-47fd-ade9-bff6c34af9a5> | CC-MAIN-2017-04 | http://www.applicure.com/solutions/iis-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95345 | 1,348 | 2.59375 | 3 |
A rule to live by with sensitive data is that at some point, your server will be compromised. With that in mind, it’s a good idea to protect your data, and more importantly your customer’s data, from a server breach.
Storing user passwords or credit card information in a database in plain text is a big problem. It sounds like common sense that you’d encrypt that information but all too often it’s left wide open or weakly encrypted. Weak encryption isn’t a lot better than no encryption. To the naked eye you can’t read the data, but if the attacker is able to download the encrypted data, they have plenty of time to crack the encryption later on.
One of the biggest challenges with automated encryption is the storage and retrieval of the secret key which must be used to unlock the data. That single key must be a value knowable by your software but should be hard to get to by anyone else. When using strong encryption, this is the obvious weak point.
A common form of encryption is the Advanced Encryption Standard (AES). AES uses a Rijndael cipher to create the secret key in either 128 or 256 bit key lengths. Along with the secret key, you also generate an Initialization vector (IV) which is a unique value applied to the data being encrypted to ensure that two identical pieces of data do not result in the same encrypted output. Otherwise it would be easy to find all the common data in a database and deduce a pattern from the encryption.
To safely store your data in a database, you’d start by generating a strong secret key value in a byte array. This is best generated programmatically. This single key can be used to encrypt all of the data you’d like to store.
When you perform an encryption operation you initialize your Encryptor with this key, then generate a new, unique Initialization Vector for each record you’re going to encrypt. You could use the same IV for a single row of data provided that the value in each column will not be similar to any other column, otherwise you should use multiple IV’s. The IV does not need to be kept secret, in fact it’s meant to be shared. You need the IV along with the secret key in order to decrypt the data, having just one of the two values will do you no good. The IV(s) can be stored right along with the encrypted data in the database.
With your strong key and IV value you can encrypt your data and store it in the database. Here’s an example from Stack Overflow in C#:
When your application needs to work with the data, the IV is included in the data row which can be used in conjunction with the private key to decrypt the data for use in the software.
Finally you need to protect your secret key. If you’re able, storing the key on a different physical server offers the best protection (keep the public and private keys separate). If you need to store the key on the same server, the best you can do is to make it non obvious how to gain access to the key, after all, if someones gains physical access to your server or can log in as an Administrator, nothing can stop them from accessing all data on the server.
For a .NET Web application, a decent option is to store the key in an encrypted Web.config file. The entire configuration file (or just specified sections if you want) will be encrypted by the operating system at either the machine level or user level. The file will be stored on the disk encrypted and when the IIS application pool loads up the application it will automatically be decrypted in memory so that your application code doesn’t need to worry about decrypting the configuration on its own. There are some other tactics you can take to store the private key on the same machine as well.
This is a straightforward way to achieve solid data encryption. Protecting the secret key is imperative so you’ll want to pay attention to that point. By employing this strategy you’ll be doing right by your customers and help to prevent yet another compromise of personal data. | <urn:uuid:d59069a7-b471-40ad-8473-1eec7036e9c7> | CC-MAIN-2017-04 | http://www.itworld.com/article/2693828/data-protection/a-basic-encryption-strategy-for-storing-sensitive-data.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911811 | 868 | 3.046875 | 3 |
0.4.4 Dijkstra's Algorithm - Shortest Path
While Floyd's algorithm determines the lowest-cost path
between all vertices in a graph, Dijkstra's was designed to
find the lowest cost path between a single starting vertex
and all of the other vertices in a graph. Dijkstra's can, clearly,
be used to obtain the same information as Floyd's algorithm
if it is called repeatedly for every vertex in a graph.
Dikjstra's algorithm is a greedy algorithm which
means, if given a choice, it operates by choosing the biggest
or most valuable alternative.
The first step of this algorithm is to "label" the starting vertex
with an ordered pair, (-,0), and initialize a distance counter to one.
Next, look at all edges between labeled vertices and unlabeled
vertices. If the cost of a particular edge added to the second item
in the ordered pair of the initial vertex is equal to the distance
counter, label the terminal vertex of this edge (name_of_starting_vertex, distance_counter) and continue
this process, incrementing the distance counter by one at each
iteration and continuing until all vertices in the graph are labeled.
The second member of the ordered pair at each vertex is the lowest
cost walk from it to the starting vertex. The first member of the
ordered pair of the ordered pair is the node immediately preceding the
current node on the shortest path from source to destination.
The algorithm actually coded is implemented in a slightly different
manner. Because it is inefficient to store the distance counter and
look at every edge at every increment, I choose to begin at the
starting vertex and traverse to all nodes reachable from it. Each
node is labelled with a distance from the start and a previous
vertex. Each node is also enqueued for later processing.
Once all nodes adjacent to the starting vertex are processed, labelled
and enqueued, the algorithm dequeues the first node. All nodes
adjacent to this vertex that have not been visited are labelled and
enqueued. Additionally, any node that has been visited but can be
reached more cheaply is re-labelled and enqueued.
This process continues until the queue is empty. The shortest path to
a given vertex n is labelled on that vertex. The path can be
determined by examining the prior steps recursively back to the | <urn:uuid:3d3034ff-4e3a-441c-b747-595d0534f5da> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/alg/node89.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926656 | 517 | 4.0625 | 4 |
NASA's NuSTAR Takes Collaboration Into the StarsBy Samuel Greengard | Posted 2016-05-04 Email Print
A cloud-based collaboration platform enables a global team of astrophysicists and other scientists to boldly go where knowledge sharing has never gone before.
Few fields generate as much research matter as astronomy and astrophysics. Incredibly complex mathematical equations, vast quantities of data and mountains of analysis lead to detailed scientific papers. However, the ability to share and collaborate on these projects is essential.
"There are enormous challenges related to keeping research and papers in sync," says Brian Grefenstette, a research scientist in the Space Radiation Lab at the California Institute of Technology (Caltech) and a member of NASA's NuSTAR project.
The initiative, which processes data collected by the Nuclear Spectroscopic Telescope Array (NuSTAR) X-ray telescope, connects a group of about 150 scientists in 10 working groups scattered across the globe—including the United States, the United Kingdom, Germany, Japan and India. In the past, the group relied heavily on email to exchange crucial documents, including PDFs and PowerPoint decks.
"As all the information goes back and forth, people insert comments, and everything eventually winds up coalesced into a paper that is submitted to a journal like Nature or The Astrophysical Journal," Grefenstette says.
This approach created some cosmic headaches, including information that has sometimes disappeared into a black hole. Since its inception, the NuSTAR group has exchanged upward of 25,000 emails, and it has thousands of files in its achieve.
In the past, emails sometimes bounced because attachments were too large, so they were rejected by servers. Simply put, the universe of data was becoming completely unmanageable.
"Manually exchanging files and information simply was not feasible," Grefenstette explains. "It required far too much time sorting everything out and moving papers along.
Cloud Collaboration Offers a Quantum Leap in Efficiency
The NuSTAR group began exploring options that could lead to a quantum leap in efficiency. It selected cloud collaboration solutions provider Huddle. The technology offered features such as a central portal, a simple interface, strong project management features, versioning and syncing, a whiteboard and strong security.
"We are now able to organize, view and process information far more effectively," Grefenstette says. In fact, the team is now able to use the portal to view every new observation as it is recorded, changed and commented on at every step of the process.
The collaboration software also helps the team manage calendars, discussion threads, file management and notifications. "It presents a very robust environment with Web 2.0 features," he says.
Teams can access data across time zones, devices and platforms. The result has been stellar, as the scientists are able to work faster and more effectively.
"We have witnessed huge improvements in efficiency and achieved gains in output," Grefenstette reports. In fact, the NuSTAR group has produced more than 100 published scientific papers—an incredibly large number for the astrophysics field and for academia in general.
Finally, the cloud-based software has greatly simplified IT and administration requirements. "We see incremental updates to the interface and constant improvements in features," he notes, adding that there was little resistance to the change.
"People immediately recognized that this was a giant step forward," Grefenstette says. "The platform makes it much easier to get to a finished scientific paper." | <urn:uuid:5a3dc898-a897-40fa-addd-1799228ae16f> | CC-MAIN-2017-04 | http://www.baselinemag.com/messaging-and-collaboration/nasas-nustar-takes-collaboration-into-the-stars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00540-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955169 | 720 | 2.515625 | 3 |
Nowadays, top Android devices have the same capacity as the five-seven year-old laptops, which were quite suitable for code writing. But due to some peculiarities of modern gadgets, it’s rather hard to do this sacred work on them. However, hard doesn’t mean impossible.
The environments for Android development exist, and there are many of them. The question is — do they correspond to the proud name of IDE? What is essential for convenient code writing, besides compiler and word processor?
- First of all — at least minimal syntax highlighting support.
- Second — code completion. And here there are three options. The first one — snippets. These are the abbreviations which unwrap in code line at the press of certain keys (or keys combination). The second one — cache-based code completion, when after keying in a word, one can see all variants existing in cache. And finally the third one — contextual code completion, when you are offered only those variants which are suitable for you.
Thirdly… there are many small, but useful features often applied when code writing on PC, for example integration with version control system, debugging message display, and code writing style.
Here are several IDEs that more or less meet the abovementioned requirements.
AIDE or Android SDK
Perhaps the most famous IDE for Android. Both paid and free versions are available. According to the developers, IDE has the following features:
- syntax highlighting and code completion;
- a possibility of standard Android apps creation and compilation;
- the projects are saved in Eclipse format, which enables to open them on PC. And vice versa — Eclipse projects can be opened in AIDE;
- UI designer with drag and drop support (through paid App UI Designer);
- support of NDK for ARM;
- integration with Git.
But these are, let’s say, program statements. Let’s try, as far as feasible, to analyze how trustworthy they are. So, install AIDE from Play Market. At the first start, you’ll be asked which action would you like to take — to study Java, to analyze Android apps / games development, or to start coding immediately? To make it simpler, let’s assume that you choose the latter.
Then you’ll see “Create new project…” window. Right, it’s rather similar to what you see on the “desktop” IDE. Options:
- Android app — development through Android SDK;
- Mobile Game — games development through libGDX;
- Java Application — console Java application;
- Native Android App — use of NDK;
- PhoneGap App — use of HTML5-framework PhoneGap (for which it’s necessary to install one more IDE, but we are not going to do that; the article is not about HTML5);
- Hybrid App — a mixture of PhoneGap and Java-code.
Let’s choose the first option. When creating a new project, a source “Hello, World!” is generated automatically. Perhaps it’s useful for newcomers, but for netheads such kind of care seems irritating. Anyway, some desktop IDEs also do the same thing.
Let’s take a look at the interface, which (though it’s full of IDE functions) is rather ascetic. After the project creation, two files get opened: main.xml, which contains layout and actually is a description of the graphic interface, and MainActivity.java, which contains Activity. The files open in tabs.
The first tab you see after having created the project — main.xm, which seems to be simple, but it’s very boring to edit it manually. But if you splurge on App UI Designer (which costs around RUR 100), in the upper right corner you’ll see a special start button, and GUI creation will become much easier. I’ll describe its capabilities. Briefly, they are almost as good as desktop GUI designers — the same layout marking, standard widgets, properties editor… Of course there is a drop of poison — I would say it’s impossibility to conveniently link to the strings from strings.xml and absence of functions stubs autogeneration by onClick event; but in whole this tool is really able to facilitate developer’s life.
Let’s move to the code editor. Subjectively, syntax highlighting is scantier than in desktop development environments, but in fact it’s quite enough for convenient source code editing. As for code completion, it works in a pretty strange way: for example, in XML files it puts Android classes names in the places where they usually shouldn’t be — in XML properties.
It’s the same with Java editor. In Android projects, the names of imported packages often start with android. One would think that in this situation the code completion should work correctly, but it doesn’t — AIDE doesn’t find any suitable coincidence. But if you put “android” and a point, you will be immediately offered variety of other variants, just as it should be in any goodish code completion system.
The same thing for overloaded superclasses methods — it’s enough to put public void onC, and at once you will be offered a suitable choice. All fine and dandy, but after stub creation, public void appears again, just when you need to fill the stub with a code.
File panel is invoked by tapping an unremarkable sign in the upper right corner; the panel is located vertically from the left side, but the second tap will place it below horizontally. Despite this name, it’s not only a file panel — it contains error panel, search results panel and logcat panel.
In settings (invoked by tapping Menu -> More -> Settings), you can find the following configuration opportunities:
- screen image (light — dark);
- code editor (print, autosave, tabulation parameters…);
- code writing style (tabulation size, parameters aligning, new strings…);
- setup and start — here, in particular, one can install NDK and activate parallel construction (the latter eats up much memory);
- Dropbox configuration, in particular autosynchronizing;
- Git (email and username, folder for a catalogue with SSH-keys…). Here there is one parameter that I strongly recommend to change — tick Create Repo. This option forces AIDE to create repositories for new projects. I underline that it’s impossible to create a repository for current projects.
- keyboard combinations.
Since I have already mentioned Git, it’s better to describe it in details. Working with it in AIDE is a real pleasure. Development environment supports standard operations for Git — clone, commit, pull, push, checkout — but the major part of them is available only in paid IDE Premium version. Repository control is performed from the file panel. If a project is not opened, one can clone a complete one, for example from GitHub. But you should remember that it’s better to use git:// as URL — cloning by https:// doesn’t work well. Directory name field should be filled manually; IDE reaction on double slash is incorrect — instead of cloned repository it puts “-2” in the field.
Eclipse projects opening proceeds faultlessly. AIDE, as if nothing had happened, accepted not only a simple application created by me, but also a serious project found GitHub’е. Simple applications setup doesn’t take much time — 13–16 seconds for PureJava projects written without NDK application.
In general, AIDE gives rather good impression. Of course, it’s not a desktop development environment, but it can be used for complex applications coding. Yes, there are some gaps, but they are not so serious to impede development. AIDE is definitely worth being bought if you are forced to develop on the way.
Terminal IDE or mini-Linux in your pocket
Despite the name, it’s difficult to call Terminal IDE a development environment. It’s rather a Swiss military knife, which contains many tools — from GCC and Make to Vim and terminal emulator. It’s kind of a minimalist Linux environment that works on any Android device, even if the latter doesn’t have root rights.
The application represents a graphic wrapping for Linux environment, which works in a sandbox. That’s why after first start of Terminal IDE, you need to click Install System button to deploy the environment to a separate catalogue. Than, you’ll get access to a set of Linux applications, among which are:
- BusyBox — full set of standard Linux commands;
- Vim — well-known editor with a set of plugins: NERDtree, snipMate, javacomplete, etc.;
- javac — Java compilator;
- aapt — generator of APK packages from JAR files;
- GCC/Make — language compilator and a setup system for large projects;
- dropbear — SSH server and client;
- Git — the abovementioned version management system;
- mc — the very Norton Commander clone.
Let’s not go into details of commands usage (it’s the same as in Linux), but focus on capabilities of local Vim. To start, use “terminalide” command. It launches Vim with all necessary plugins. Let’s study NERDtree plugin — it’s kind of a file panel, similar to those of desktop IDEs. Here is the list of main keys and commands for the plugin:
- ma name — to create a file or a catalogue;
- o or Enter — to open a file / a catalogue;
- I — to display hidden (dot) files;
- :NERDTreeToggle — starts and closes the panel. For more convenience, I recommend to bind this command for example with \ by adding a string like map \ :NERDTreeToggle to ~/.vimrc file.
After file creation/opening, you can start writing a code; to do that, place Vim in insert mode by clicking “i” and key in a text. I’ll describe some features of Vim and plugins from point of view of IDE. Syntax highlighting here is similar to the one of AIDE. As for code completion, it works on the cache principle: the more you write, the more variants will be available further. This Vim setup includes javacomplete plugin, but it’s not reliable — in my case, it erratically reacted on keyboard combinations (, for appending key words, , for functions in insert mode). SnipMate plugin works greatly, one just needs to print, e.g., fi in Java code and click tabulation, and it will automatically deploy the combination in “final”. Let’s study a couple of abbreviations for Java files:
- main — deploys in a standard entry point for desktop Java apps;
- tc — deploys in public class FileName extends TestCase;
- t — deploys in the header of a function, which can eliminate an exception;
- fore — deploys in Java foreach;
- if — you know what it deploys in.
It’s also possible to compile to Terminal IDE (by means of F7), but you will need a make file. Before compilation, it’s necessary to unpack toolchain (located in system/android-gcc-4.4.0.tar.gz) in home catalogue manually or through install_gcc command and use C-compiler through terminal-gcc script, which installs the necessary variables and starts the compiler with necessary arguments.
Terminal IDE tools set is very large (having patience, you can even set up a core), but this is an “environment” for those who know what UNIX is and got used to Vim and terminal. If you didn’t try anything except Eclipse and similar “all-in-one” environments, IDE is not for you.
Android has its own Python version and with a possibility of graphic applications creation. This miracle is called QPython and has even three reincarnations in Play Market: QPython 3 (beta-version), QPython, and QPython Player, customized to scripts execution. We need only QPython; it enables both to write and to launch scripts, but unlike the third version it’s more stable (Python version — 2.7.2). After installation and start, you’ll see a window with the only round button, at the click on which you’ll see a menu with three points (as for me, this button is unnecessary). The points are:
- Get script from QRCode — receives a script by link coded in QR-code;
- Run local script … — enables to choose and start a script;
- Run local project … — an analogue of the previous point with the difference that in file selection window, Projects, not Scripts, will be opened as a root directory.
However, when scrolling, one more screen appears — the one which, theoretically, should be placed the first:
- Console — Python console;
- Editor — code editor;
- My QPython — scripts and projects review;
- System — a possibility to install additional libraries and components, like Docutils for example;
- Package Index — for QPython there is a QPyPi repository; this icon serves to view it.
QPython supports the following capabilities:
- work with images (PIL);
- access to Java classes (Pyjnius);
- graphic applications development by means of Kivy;
- simple games development (pygame library).
Let’s move to the code editor. And here, unfortunately, QPython with its QEdit is not so cool — it can impress only with code highlighting, which also activates only after file saving. Code completion is missing a priori, even based on cache. Actually, the only convenience is support of three boilerplates (Web App, GUI App and Console App), which, for some unknown reason, were called “snippets” by developers. They can be included to snippets catalogue. Several skins are supported: classical, dark, and Matrix — green text on the black background. With the latter, code highlighting looks much more convenient. For indents, there are two buttons in the bottom-left corner. Debugging is performed only under the classical for Android applications scheme — writing to log or (for console apps) screen output.
As a framework, QPython is beyond praise — it supports almost all capabilities of a “usual” Python. But as IDE… its functionality is definitely enough for writing small scripts, but it’s not suitable for something more than that.
Besides Python, for Android there is also Ruby interpreter called Ruboto. To develop an application on Ruboto, it’s better to use desktop, but if you want to try it on the device, you can install Ruboto IRB development environment after having pre-installed Ruboto Core framework.
De facto, this framework represents JRuby 1.7.12with stdlib library, therefore using it one can write and run the same applications that can be run and written using a usual JRuby. Adjustment for internal Android peculiarities, like a diverse internal structure of class files, is performed automatically.
The capabilities of the framework as kind of backend are quite ample — vibration, camera, and even OpenGL. But as a graphic interface for applications developed by means of Ruboto, its capabilities are pretty scanty and primitive, therefore it’s suitable only for writing one-day scripts for your own needs.
Speaking about the editor’s capabilities, it’s extremely primitive — not only is code completion missing, but even syntax highlighting. Actually, I could say that the only peculiarity is the possibility to activate full-screen mode without tabs by means of menu point Toggle usable screen.
In general, Ruboto gives a strange impression both as a framework and as a development environment. The first case perplexes — almost everything that can be written for a Google OS by means of this framework can be written also without it and often with less expenses. One should also remember that it’s not a JIT-compiler, therefore serious projects (if somebody decides to do that) will lag. Yes, OpenGL demo doesn’t lag, but I doubt that it can be considered as a serious project.
I don’t have the heart to call Ruboto a development environment — the code editor doesn’t even have a full-text search! I would say Ruboto is worth being used only in case when you have no PC or laptop around, but you desperately need Ruby, for example, to start a code of an indolent student.
Host-target development for Android is possible and often can be performed even comfortably. In the article I presented some means which can be considered IDE (though some of them can be taken into consideration at a long stretch). I think App UI Designer + AIDE liaison can be called the most potent IDE for development on Android. It costs RUR 500, but if you seriously deal (or plan to deal) with development, that’s worth the money. All-inclusive — code completion, handy UI design, and a possibility for NDK apps development.
Terminal IDE, though having JavaComplete plugin and APK packages creation tools, is customized to console apps — here it’s second to none. QPython will be interesting for Python fans. As an IDE it’s worse than two abovementioned applications, but, if you get used to it, you can use even its editor. As for Ruboto, I can say Proof of Concept — and there is nothing to add. It hardly makes sense to use it without an urgent need. IDE diversity, as you can see, is quite ample, so the choice is up to you. | <urn:uuid:1dbbd5f0-f1b1-4b73-b5c5-c6ec93dc6ba4> | CC-MAIN-2017-04 | https://hackmag.com/mobile/transforming-android-tablet-into-a-coding-machine/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916229 | 3,798 | 2.625 | 3 |
Android phones to be 'brains' of space station robots
- By Henry Kenyon
- Jul 15, 2011
NASA is using smart phones to make some prototype robots aboard the International Space Station, well, a little smarter.
Scientists at NASA’s Ames Research Center are incorporating Nexus S smart phones into a trio of circular floating robots known in agencyspeak as Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES).
The overall goal of the effort is to provide both astronauts on the ISS and ground control with a mobile system capable of moving around the space station and performing various tasks such as taking sensor readings or performing visual/video inspections.
Androids in space: Phone to be launched as nanosatellite
The SPHERES robots have been on the ISS since 2006. The floating robots were initially put on the station to study satellite docking dynamics for future spacecraft. One of the ideas behind the robots was also to have them serve as onboard helpers for astronauts. The Nexus phones, Android operating system devices developed by Google and Samsung, are attached to the outside of the robot and connected to their internal circuitry via a serial cable.
The phones, which recently arrived on the station, greatly enhances their potential as an automated assistant, said Mark Micire, a research scientist at NASA’s Ames Research Center.
As many jobs robots do, SPHERES could take over the dull, time-consuming jobs aboard the station, such as taking radiation measurements and monitoring noise levels. “The idea that the robots can go around the station and do these mundane tasks is a possibility,” Micire said.
Besides helping the crew, NASA plans to have the robots controlled from ground control back on Earth. For example, if the crew is asleep and there is a sensor alert or some other issue, ground control could remotely have the robots make an inspection without waking the crew. Although the robots have only been used inside the station, there is also the possibility of using them in space to inspect the outside of the station. The SPHERES machines use tiny jets of carbon dioxide to maneuver and move.
NASA scientists looked at a number of handheld devices before they settled on the Nexus, Micire said. One of the major draws for the Nexus is that it has a powerful computing capability coupled with a camera and a variety of built-in sensors. According to NASA, the Nexus S is the first commercial smart phone certified by the agency to fly on the space shuttle and to operate on the space station.
The device has a 4-inch touchscreen, a 1 GHz processor, digital camera, gyroscope, accelerometer, proximity and light sensors, Bluetooth and Wi-Fi networking, and 16 G of internal memory. The phone's open-source Android platform allowed scientists to program the devices to work with the SPHERES robots. “That little phone has enough horse power to fly a miniature spacecraft,” Micire said.
Although the Nexus S phones are already aboard the ISS, the first tests are not scheduled until September. The initial question that must be worked out is whether the phones’ gyroscopes and accelerometers, which were designed for playing games on Earth, will operate in space, Micire said. Other considerations include making sure that the phone’s camera works in the station’s lighting and connecting the station's Wi-Fi network to the station’s communications system to link back to ground control.
A test for the communications link is scheduled for December. | <urn:uuid:bc44ddf3-658f-4d1f-8b6b-e22243a2bc75> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/07/15/android-phone-brains-robots-space.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00477-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950532 | 724 | 3.0625 | 3 |
Definition: The value which has an equal number of values greater and less than it. For an even number of values, it is the mean of the two middle values.
See also mean, mode, midrange, Select, select and partition.
Note: The median is the "middle" value. The median of 1, 1, 3, 50, and 60 is 3 since two elements (1 and 1) are less than it and two elements (50 and 60) are greater. The median of 1, 1, 2, 3, 3, and 4 is 2.5, the mean of 2 and 3.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 16 November 2009.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "median", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/median.html | <urn:uuid:06c5ac64-612a-4ffb-9f6d-06d2bf917d73> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/median.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00413-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.892022 | 240 | 3.609375 | 4 |
RJ45 may not be a name you are familiar with, but you have actually seen it in your daily life a thousand times. Wherever there is an Ethernet network connection, RJ45 is the interface used to connect devices. "RJ" stands for registered jack, which is a standardized telecommunication network interface for connecting voice and data equipment to a service provided by a local exchange carrier or long-distance carrier. RJ45 is a member of the RJ family and is used specifically for Ethernet networking; "45" is simply the number of the interface standard. Ethernet cables terminated with RJ45 connectors are also called RJ45 cables.
A network interface card, or network controller, provides the electrical interface between the computer and the network and fully complies with the Ethernet standards. A proper termination is needed to connect the LAN (local area network) media to the card. Typically, Cat 6 cable is terminated with an RJ45 modular plug, so the network interface card must have a matching RJ45 modular jack for the patch cable.
The RJ45 cable plug, often called a crystal head, is widely used for LANs and ADSL (asymmetric digital subscriber line) connections; its translucent appearance is the reason it is described as a crystal head. The plug is usually a plastic piece with eight pins on the port: four of the pins are used for sending and receiving data, and the other four are used for other technologies or for powering network devices. The plug can only be inserted in one orientation and locks in place so it does not fall out. This kind of connector suits many applications, such as Ethernet networking, telecommunications, and factory automation. The RJ45 connector was first invented in the 1970s to replace the bulkier connector used to attach modems to telephones in the telecommunication industry. Nowadays, it is frequently used for networking devices including Ethernet cables, modems, computers, laptops, and printers.
RJ45 and RJ11 are two commonly used jacks, but people often mix them up because of their similar appearance. The biggest difference between them is the application: RJ45 is used for networking while RJ11 is used for telephone sets. Another difference is the number of wires in their connectors. As noted above, an RJ45 connector has eight wires; RJ11 has only four, so it is also smaller than an RJ45 connector. Although the smaller size of RJ11 makes it easy to plug into an RJ45 slot, doing so is not recommended, since it may damage the device that uses the RJ45 slot. At present, RJ45 jacks rather than RJ11 jacks are usually placed on wall outlets inside people's houses to reduce visible wiring when using VoIP (voice over Internet protocol) handsets, which are rapidly gaining popularity.
Generally speaking, RJ45 is one of the most popular connector types today. People can use it to make Ethernet network connections, and its easy plug-and-play style reduces the difficulty of installation. In addition, RJ45 network interface cards and RJ45 plug cables are among the most extensively used products from the RJ45 family.
The 500-Terawatt Laser Shot
/ July 17, 2012
Fifteen years of work by the Lawrence Livermore National Laboratory's (LLNL) National Ignition Facility (NIF) team paid off on July 5 with a historic record-breaking laser shot, according to LLNL. The NIF laser system of 192 beams delivered more than 500 trillion watts (terawatts or TW) of peak power and 1.85 megajoules (MJ) of ultraviolet laser light to its target. Five hundred terawatts is 1,000 times more power than the United States uses at any instant in time, and 1.85 megajoules of energy is about 100 times what any other laser regularly produces today. | <urn:uuid:96b14d2d-ee82-42ca-83b7-bb1996921d95> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-The-500-Terawatt-Laser-Shot-07172012.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00037-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910991 | 146 | 2.75 | 3 |
Sunday, November 29, 2009
Specifically, "Just as a number of local or regional companies provide both electricity and gas, independent telephone companies would be encouraged to provide both telephone and information utility services in their respective territories"
The original copy of this intriguing document resides in the Smithsonian National Museum of American History, Lemelson Center for the Study of Invention & Innovation, in the Western Union Telegraph Company Records archival collection covering the years 1820-1995.
Here is the complete text.
1965: Western Union's Future Role as the Nation's First Cloud Utility
Thursday, November 26, 2009
First, let's look at scaling out, or scaling horizontally, which basically means adding more nodes to a distributed system, such as new servers or storage (the latter being easier). These could be physical or virtual servers. An example might be scaling out from one web server to many dedicated slave machines. Google has made an art form of scaling out: its data centers around the globe are geared toward this one core task, just-in-time hardware provisioning, but for most organizations this is a very difficult and costly endeavour. Virtualization makes this sort of instant replication and provisioning of many virtual machines much easier.
Next is scaling up, or scaling vertically, which means adding resources to a single server in a distributed system. Typically this involves adding CPUs or memory to a single virtual server in the form of virtual CPU and RAM. Unlike a physical server, in a virtual environment you can change your virtual hardware characteristics; a physical server is what it is. It runs at its maximum potential, which limits its ability to scale up easily. If you need more scale, you need more hardware or have to manually add components to the physical server (RAM, CPU, storage, etc.), which means downtime while the server is upgraded. In a virtual environment this isn't a limitation, and scaling up can often be done on the fly.
Vertical scaling of existing systems also lets you better leverage virtualization technology, because it provides more resources for the hosted operating system and applications that share those resources in a multi-tenant environment. Virtualization also allows more automated, programmatic control of system resources in response to the demands placed on the hosted infrastructure or application, because in a virtual infrastructure you are not managing actual physical components but virtual representations of them.
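As a concrete illustration of that kind of programmatic control, here is a minimal sketch using the libvirt Python bindings on a KVM host. The guest name and the new vCPU and memory sizes are made-up values for illustration; real cloud platforms typically wrap this sort of call behind their own provisioning APIs.

```python
# A minimal sketch of scaling a guest "up" programmatically on a KVM/libvirt
# host. Assumes the libvirt Python bindings are installed, a guest named
# "web01" exists, and its configured maximums leave headroom; the name and
# sizes below are illustrative only.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")

# Grow the guest to 4 vCPUs and 8 GB of RAM in its persistent definition;
# the new values apply from the next boot (live changes require the guest's
# maximum vCPU/memory limits to already allow them).
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # value in KiB

conn.close()
```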
So it is very true that virtualization isn't a requirement for a cloud infrastructure; it just makes it a heck of a lot easier to manage and to scale out, up, or both.
Wednesday, November 25, 2009
Hotel Seoul Kyoyuk Munhwa HoeKwan
202, Yangjae-Dong, Seocho-Gu, Seoul
13:55PM Introduction & Networking
14:10PM Lightning Talks (long form)- chaired by Chan-Hyun Yoon, KAIST (30 min each)
15:10PM Coffee Break
16:30PM unPanel / Breakout Discussion - chaired by Yang-Woo Kim (Dongkuk University)
17:30PM Wrap Up, Dinner & Networking
- KCSA (Korea Cloud Service Association)
- KISTI (http://www.kisti.re.kr/english/index.jsp)
Monday, November 23, 2009
Let me point out a few of the more interesting moments of my trip to the land of the rising sun. As I mentioned in my previous post about the opportunities for Cloud Computing in Asia, if my schedule is any indication of the demand for cloud products, there is a tremendous amount. Every minute of my trip was accounted for with non-stop meetings. I will also point out that the Japanese know how to entertain. As you can probably tell, I do a lot of traveling and am quite frequently taken to fancy restaurants, but nothing comes close to the fine restaurants of Tokyo. Duck sashimi, anyone?
As for CloudCamp Tokyo, it was well attended, with more than 160 people. One of the more interesting aspects of the camp was how the Japanese interact in an unconference setting. To put it simply, they don't. Getting them to speak publicly was a challenge. A few asked questions, but generally it was a one-way conversation: I spoke, my translator spoke. The lightning presentations were also very well received. After the main unconference is when things got interesting. We had an open bar, which probably helped loosen things up a bit. In an orderly, single-file fashion, almost every one of the 160 or so attendees proceeded to introduce themselves to me, handing me their business cards with both hands, followed by a bow and a "Hajimemashite" (a polite 'Hello, I am pleased to make your acquaintance' which you only use the very first time you meet).
I also found it interesting that language and cultural differences are major barriers. Unlike Europe, where most business people speak English, this is not the case in Japan. Most don't. To get around this we worked with a large Japanese system integrator, which provided us with two very nice Japanese translators (Eno-san and Maki-san - pictured). The firm also provided us with introductions to most of the major Japanese cloud customers, including the top hosting providers, data centers, telecoms, etc. Without the help of the SI we would have had a much more difficult time; a good portion of our meetings involved our translators doing the majority of the talking. So my suggestion to any company looking to sell cloud products and services to the Asian market is to find a local partner who can act as a guide to the local business scene.
All in all, a successful week in Japan. Next week I'll be in Tel Aviv at the World Summit of Cloud Computing. Should be interesting.
(P.S.) Wear a suit and tie.
ENISA, supported by a group of subject matter experts comprising representatives from industry, academia and governmental organizations, has conducted, in the context of the Emerging and Future Risk Framework project, a risk assessment of the cloud computing business model and technologies. The result is an in-depth and independent analysis that outlines some of the information security benefits and key security risks of cloud computing. The report also provides a set of practical recommendations.
A few highlights of the report include:
- The Cloud’s economies of scale and flexibility are both a friend and a foe from a security point of view. The massive concentrations of resources and data present a more attractive target to attackers, but cloud-based defences can be more robust, scalable and cost-effective. This paper allows an informed assessment of the security risks and benefits of using cloud computing - providing security guidance for potential and existing users of cloud computing.
- Scale: commoditisation and the drive towards economic efficiency have led to massive concentrations of the hardware resources required to provide services. This encourages economies of scale - for all the kinds of resources required to provide computing services.
- Architecture: optimal resource use demands computing resources that are abstracted from underlying hardware. Unrelated customers who share hardware and software resources rely on logical isolation mechanisms to protect their data. Computing, content storage and processing are massively distributed. Global markets for commodities demand edge distribution networks where content is delivered and received as close to customers as possible. This tendency towards global distribution and redundancy means resources are usually managed in bulk, both physically and logically.
STANDARDISED INTERFACES FOR MANAGED SECURITY SERVICES: large cloud providers can offer a standardised, open interface to managed security services providers. This creates a more open and readily available market for security services.
LOCK-IN: there is currently little on offer in the way of tools, procedures or standard data formats or services interfaces that could guarantee data, application and service portability. This can make it difficult for the customer to migrate from one provider to another or migrate data and services back to an in-house IT environment. This introduces a dependency on a particular CP for service provision, especially if data portability, as the most fundamental aspect, is not enabled.
ISOLATION FAILURE: multi-tenancy and shared resources are defining characteristics of cloud computing. This risk category covers the failure of mechanisms separating storage, memory, routing and even reputation between different tenants (e.g., so-called guest-hopping attacks). However it should be considered that attacks on resource isolation mechanisms (e.g.,. against hypervisors) are still less numerous and much more difficult for an attacker to put in practice compared to attacks on traditional OSs.
MANAGEMENT INTERFACE COMPROMISE: customer management interfaces of a public cloud provider are accessible through the Internet and mediate access to larger sets of resources (than traditional hosting providers) and therefore pose an increased risk, especially when combined with remote access and web browser vulnerabilities.
Read the Complete Report Here >
Monday, November 16, 2009
Today, at the Microsoft Professional Developer Conference (PDC) in Los Angeles, Microsoft announced not only the release of version 4.0 of the .NET Micro Framework, but also that it is open sourcing the product and making it available under the Apache 2.0 license, which is already being used by the community within the embedded space.
The .NET Micro Framework, a development and execution environment for resource-constrained devices, was initially developed inside the Microsoft Startup Business Accelerator, but recently moved to the Developer Division so as to be more closely aligned with the overall direction of Microsoft's development efforts.
Thursday, November 12, 2009
One of the more interesting side effects of creating the CloudCamp series of events around the globe has been its value as a market research vehicle. As interest in Cloud Computing increases in various geographic regions, so does the interest from folks on the ground who want to help organize local CloudCamp events. This network of local organizers has become an invaluable source of insight into new markets. These events have also done a tremendous job of forecasting potential high-growth markets and, more importantly, the opportunities for Cloud Computing within various emerging markets. And lately it seems that by far the largest opportunities are coming from one particular region of the world.
To give you some background, we have an upcoming CloudCamp next week in Tokyo (November 17th) organized by NTT among others, as well as one next month in Seoul, South Korea (December 16th) organized by the Korea Institute of Science and Technology Information and the newly formed Korea Cloud Service Association. The Japanese, South Korean and Chinese markets have been particularly strong for CloudCamp. Based on this interest, we will also be doing a series of CloudCamps in China (Shanghai, Beijing and Hong Kong), most likely in early 2010. (If you're interested in sponsoring one of these events, please get in touch.)
As a more personal example, I will be in Tokyo next week for the CloudCamp Tokyo event on Tuesday as well as a number of business meetings. Purely from a demand point of view, from the moment I get off the plane on Monday until I leave on Sunday, I have non-stop meetings from 9am through dinners late into the evening, every night of the week, with various Japanese firms looking to capitalize on the booming Cloud Computing sector. We've seen so much interest from Japan that we've started to have to turn down meeting opportunities. To say the least, the interest in "kumo" (Japanese for cloud) is astounding.
We've seen similar levels of interest in China, where there seems to be a technological renaissance occurring. China is a very unique place when it comes to Cloud Computing. First of all, it doesn't have the legacy infrastructure that most Western economies suffer from. It's, in a sense, a greenfield opportunity where the Chinese can choose the latest and best technology solutions without regard for how they may affect legacy systems -- since there really aren't any.
For instance, look at the massive adoption of mobile phones over the last several years: the traditional landline was almost completely bypassed in favor of newer and more efficient mobile options. Computing is seeing a similar bypass, with projects such as national wifi networks being built alongside a massive multi-billion dollar national railway system. The Chinese seem to have realized that a national infrastructure is more than just a physical one, but also a virtual one.
I'm not alone in reaching this conclusion about the Asian market. In a recent report, Gartner said infrastructure software will account for 64.4 percent of overall enterprise software spending in the Asia-Pacific region next year, with APAC enterprise software spending to grow 10.2% in 2010 - the fastest growth of any of the global software markets.
In the same vein, Amazon Web Services has just announced an expansion into the Asian region in the first half of 2010, saying "AWS customers will be able to access AWS’s infrastructure services from multiple Availability Zones in Singapore in the first half of 2010, then in other Availability Zones within Asia over the second half of 2010. AWS services available at the launch of the Asia-Pacific region will include Amazon EC2, Amazon S3, Amazon SimpleDB, Amazon Relational Database Service, Amazon Simple Queue Service, Amazon Elastic MapReduce, and Amazon CloudFront."
“Developers and businesses located in Asia, as well as those with a multi-national presence, have been eager for Asia-based infrastructure to minimize latency and optimize performance,” said Adam Selipsky, Vice President of Amazon Web Services. “We’re very excited to announce the expansion of AWS infrastructure into Asia to help our customers plan their technology investments and better serve their end-users in Asia.”
Tom Lounibos, CEO of SOASTA, had an interesting comment on the opportunity in a Twitter post earlier: "AWS announces Singapore site 7 hours ago, and I wake to three SOASTA customer requesting Cloud Testing from Singapore! "Demand" wins!"
Although I am just one man from just one company, I believe that in some small way both Enomaly and CloudCamp represent the tip of the iceberg when it comes to the opportunity to offer Cloud Computing related products and services to the Asian market, and from where I sit there is no bigger opportunity than in Asia.
Monday, November 9, 2009
ECP is a carrier-class architecture and cloud hosting platform which supports the deployment of very large public cloud infrastructure for service providers. The platform has been designed to span multiple federated data centers in disparate geographies around the globe, handling hundreds of thousands of VMs and multi-tenant customers.
This version of ECP Service Provider Edition brings the following enhancements over 3.0.2:
- KVM is now directly supported as a hypervisor at install time.
- Sample data is installed during initial installation, so there is no need to create a customer/group/permissions before testing the system. See INSTALL for default user/pass.
- VNC window in customer UI is now identical to the Admin UI. Passwords for the VNC console are now found under Info button at VM level.
- Info window now shows how to connect with an external VNC client as well as the existing Java applet.
- VNC window can be disabled entirely on a per VM basis.
- App Center can now be searched/filtered. This is useful if you offer a large number of appliances.
- Admin Dashboard now shows graphical whole cluster resource usage.
- Network Manager has been removed. All deployments are recommended to use DHCP for IP assignment going forward.
- Various performance improvements have been added at customer UI level.
- Various performance improvements have been added to infrastructure code.
Enomaly's Cloud Service Provider Edition extends our core ECP platform, already used by thousands of organizations around the world, with the key capabilities needed by xSPs, carriers, and web hosting providers who want to offer an Infrastructure-on-demand or IaaS service to their customers. Enomaly ECP Service Provider Edition provides a powerful but simple customer self-service interface, a customer-facing REST API, a theme engine, strong multi-tenant security, a hard quota system, and flexible integration with your billing, provisioning, and monitoring systems.
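To give a feel for what integration against a customer-facing REST API looks like, here is a hypothetical sketch in Python. The base URL, endpoint path, credentials and field names are placeholders I've invented for illustration; they are not ECP's actual API, which is documented separately by Enomaly.

```python
# Hypothetical sketch of pulling per-customer usage records from a cloud
# provider's REST API and feeding them to a billing process. The URL, path,
# credentials and JSON fields are invented placeholders, not ECP's real interface.
import requests

BASE_URL = "https://cloud.example-provider.com/api"  # placeholder host
AUTH = ("customer-id", "api-key")                    # placeholder credentials

response = requests.get(BASE_URL + "/usage", auth=AUTH, timeout=30)
response.raise_for_status()

for record in response.json():
    # Hand each usage record (VM id, CPU hours, storage GB-hours) to billing.
    print(record["vm_id"], record["cpu_hours"], record["gb_hours"])
```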
Screen shots
Sunday, November 8, 2009
As someone who spends his days eating, breathing and sometimes drinking cloud computing, it's fun to see how the conversation has recently devolved into a debate focused purely on the finer semantic nuances of the various terminologies. The debate seems to center on the varied usages within the companies that are attempting to "cloud-ify" themselves and their products or services. This cloudification seems to be the trend du jour within the technology industry: an attempt to augment marketing materials and product positioning with cloud-related buzzwords, whether they make sense or not.
Actually, one of the better-stated criticisms comes from Oracle CEO Larry Ellison, who observes that cloud computing has been defined as "everything". It's everything and nothing in particular, a trendy word used more to impress than to explain a particular problem. I for one completely agree.
As a marketing term, cloud has enabled us to broadly define the movement away from the desktop / server centric past to the cloud [Internet] enabled future. Wikipedia's cloud definition says it well, "it is a paradigm shift where technological details are abstracted from the users who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them". Yup, enough said.
This message is to those of you who are jumping on the cloud bandwagon, so let me say this as plainly as possible: regardless of whether it's "the cloud" or "cloud computing", it all comes back to the fact that it's a buzzword. A way to say we're cool, we're now, we're new, without saying it directly (a neologism). It's the New Coke of computing, the new taste of the Internet.
So what is The Cloud? It's the Internet. And what is Cloud Computing? It's the next big thing in computing, it's using the Internet.
Friday, November 6, 2009
More specifically, it was created with an open collaboration model in mind, where both large companies and individuals can collaborate as equals without fear of legal ramifications. Using the OWFa, the actual spec development can be done in any forum the participants choose (unincorporated Google Groups, social networks, non-profits, startups, enterprises, etc.).
I'll also be the first to point out that one of the key authors is David Rudin, a Microsoft standards attorney. But regardless of Rudin's employer, this is a well thought out document, and I for one am very excited by the potential usage of the OWFa within a variety of standards processes. I believe the OWFa has the potential to dramatically affect the way we as an industry both collaborate and innovate when it comes to the development of common, truly open standards, whitepapers and best practices. I encourage anyone who truly believes in the creation of an open Web to take a look at the OWFa.
You can download a copy of the final draft from here.
To start things off, below is the first in what I hope will be many of these. I'm calling this new feature Transient Ambiance: the mood evoked by my ever-changing environment.
When I recorded this entry using my iPhone Voice Memo app, I found myself in the London Underground. As I waited in the tube station for my return to Heathrow airport, I was in the midst of one of those strange, surreal moments: only a violinist and myself in the middle of a typically busy London Underground station. It was a momentary period of solitude in an otherwise hectic week of meetings and presentations. As I sat pondering life's mysteries, soft, melodic music echoed off the dark, damp underground walls.
Download here (MP3, 2.95 MB, 3:13)
The scope will include Standardization for interoperable Distributed Application Platform and services including Web Services, Service Oriented Architecture (SOA), and Cloud Computing. SC 38 will pursue active liaison and collaboration with all appropriate bodies (including other JTC 1 subgroups and external organizations, e.g., consortia) to ensure the development and deployment of interoperable distributed application platform and services standards in relevant areas.
As with other ISO initiatives, each member country interested in participating in this group will come up with its own structure to provide feedback on work items and establish voting positions; the InterNational Committee for Information Technology Standards (INCITS) will serve as the US TAG.
Administrative support and leadership of SC 38 will be provided as follows:
The US National Body will serve as Secretariat for the SC and its Working Groups, and Dr. Donald R. Deutsch from the US National Body will serve as the Chair for the SC. The National Body of China will provide Ms. Yuan Yuan as the Convenor of the Working Group on SOA. The US National Body will provide the Convenor of the Working Group on Web Services. The National Body of Korea will provide Dr. Seungyun LEE as the Convenor of the Study Group on Cloud Computing. The National Body of China will provide Mr. Ping ZHOU as the Secretary of the Study Group on Cloud Computing.
I’ve pasted the complete resolution in detail below.
Resolution 36 ‐ New JTC 1 Subcommittee 38 on Distributed Application Platforms and Services (DAPS)
JTC 1 establishes a new JTC 1 Subcommittee 38 on Distributed Application Platforms and Services (DAPS) with the following terms of reference:
Title: Distributed Application Platforms and Services (DAPS)
Scope: Standardization for interoperable Distributed Application Platform and services including:
• Web Services,
• Service Oriented Architecture (SOA), and
• Cloud Computing.
SC 38 will pursue active liaison and collaboration with all appropriate bodies (including other JTC 1 subgroups and external organizations, e.g., consortia) to ensure the development and deployment of interoperable distributed application platform and services standards in relevant areas.
As per the JTC 1 Directives, SC 38 will establish its own substructure at its first meeting. Based on discussions at the JTC 1 Plenary, it is anticipated that SC 38 will initially establish subgroups as follows:
a. A Working Group on Web Services
o Draft Terms of Reference:
i. Enhancements and maintenance of the Web Services registry (inventory database of Web Services and SOA Standards).
ii. Ongoing maintenance of previously approved standards from WS‐I PAS submissions, ISO/IEC 29361, ISO/IEC 29362 and ISO/IEC 29363.
iii. Maintenance of possible future PAS and Fast Track developed ISO/IEC standards in the area of Web Services.
iv. Investigation of where web service related standardization is already ongoing in JTC 1 entities.
v. Investigate gaps and commonalities in work in “iv” above.
b. A Working Group on SOA
o Draft Terms of Reference:
i. Enumeration of SOA principles.
ii. Coordination of SOA related activities in JTC 1.
iii. Investigation of where SOA related standardization is already ongoing in JTC 1 entities, and
iv. Investigate gaps and commonalities in work in “iii” above
c. A Study Group on Cloud Computing (SGCC) to investigate market requirements for standardization, initiate dialogues with relevant SDOs and consortia and to identify possible work items for JTC 1.
o Draft Terms of Reference:
i. Provide a taxonomy, terminology and value proposition for Cloud Computing.
ii. Assess the current state of standardization in Cloud Computing within JTC 1 and in other SDOs and consortia beginning with document JTC 1 N 9687.
iii. Document standardization market/business/user requirements and the challenges to be addressed.
iv. Liaise and collaborate with relevant SDOs and consortia related to Cloud Computing.
v. Hold workshops to gather requirements as needed.
vi. Provide a report of activities and recommendations to SC 38.
Topics related to Energy Efficiency of Data Centers are excluded. On topics of common interest (such as virtualization), coordination with the EEDC SG is required.
Membership in the Study Group will be open to:
1. National Bodies, Liaisons, and JTC 1 approved PAS submitters
2. JTC 1 SCs and relevant ISO and IEC TCs
3. Members of ISO and IEC central offices, and
4. Invited SDOs and consortia that are engaged in standardization in Cloud Computing, as approved by the SG
In addition, the Convenor may invite experts with specific expertise in the field.
Meetings of the group may be via face‐to‐face or preferably by electronic means.
The SC 38 Secretariat will issue a call for participants for the Study Group.
The SGCC Convenor is instructed to provide a report on the activities of the Study Group at the SC 38 2010 Plenary meeting.
Administrative support and leadership of SC 38 will be provided as follows:
a. The US National Body will serve as Secretariat for the SC and its Working Groups, and Dr. Donald R. Deutsch from the US National Body will serve as the Chair for the SC.
b. The National Body of China will provide Ms. Yuan Yuan as the Convenor of the Working Group on SOA.
c. The US National Body will provide the Convenor of the Working Group on Web Services.
d. The National Body of Korea will provide Dr. Seungyun LEE as the Convenor of the Study Group on Cloud Computing.
e. The National Body of China will provide Mr. Ping ZHOU as the Secretary of the Study Group on Cloud Computing. | <urn:uuid:87369eb7-365d-46f1-b498-5ca663630896> | CC-MAIN-2017-04 | http://www.elasticvapor.com/2009_11_01_archive.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930391 | 5,367 | 2.578125 | 3 |
Analysts at Gartner have predicted that half of all smart city objectives will include climate change, resilience and sustainability by 2020.
Speaking to an audience at the Gartner Symposium/ITXpo in Barcelona this week, Bettina Tratz-Ryan, research vice president at Gartner, outlined Gartner’s thinking. She discussed how the Internet of Things (IoT) and data analytics will accelerate the development of smart cities.
Ms. Tratz-Ryan indicated that as smart cities develop, cities are defining new objectives and measurable outcomes that meet the targets agreed upon at the COP 21 in Paris to reduce greenhouse gas (GHG) emissions.
IoT to fight climate change
“With the Horizon 2020 goals of energy efficiency, carbon emission reductions and renewable energy in mind, many cities in Europe have launched energy sustainability, resource management, social inclusion and community prosperity initiatives,” Tratz-Ryan said in a statement by Gartner.
The statement points out that a number of major cities (Singapore, Gothenburg, Bristol) have adopted schemes to improve traffic and mobility, while Tratz-Ryan noted the increase in ride sharing, as well as improved infrastructure for electric vehicles and congestion charges on combustion engines as examples of cities pushing to tackle climate change.
Central to the advancement and execution of climate change goals, Gartner says, are sensors. The company predicts that next year there will be 380 million connected things in use in cities to deliver sustainability and climate change goals, rising to 1.39 billion things by 2020. Smart commercial buildings and transportation will supposedly be the main contributors, representing 58 percent of all installed IoT things.
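As a rough sanity check on those projections, the implied growth rate can be derived from the figures Gartner quotes. The short Python sketch below assumes 2017 as the starting year (the article's "next year") and uses only the numbers given above.

```python
# Implied growth of city-deployed IoT "things", using Gartner's quoted figures:
# 380 million things in 2017 (assumed from "next year"), 1.39 billion by 2020.
things_2017 = 380e6
things_2020 = 1.39e9
years = 2020 - 2017

cagr = (things_2020 / things_2017) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.0%}")  # roughly 54% per year

# Share attributed to smart commercial buildings and transportation in 2020
buildings_and_transport = 0.58 * things_2020
print(f"Buildings + transport: {buildings_and_transport / 1e6:.0f} million things")
```

On those numbers, the prediction amounts to roughly 54 percent annual growth, with smart buildings and transport accounting for about 806 million of the 2020 total.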
In buildings, "Implementing an integrated building management system (BMS) for lighting and heating and cooling can reduce energy consumption by 50 percent," claimed Tratz-Ryan. "This is a significant contribution to the commitments of cities to reduce their footprint of GHG."
“Cities will become the environmental centers of excellence for new technology development, offering a stress test environment for the industry,” said Tratz-Ryan. “The advantages for cities will be profound. They will not only meet their mandated targets of the Horizon 2020 goals, but also develop greener and more inclusive city conditions that citizens can acknowledge as KPIs.”
Reasons to be doubtful?
IoB spoke to Clive Longbottom, analyst at Quocirca, for some expert opinion on these predictions. Longbottom was somewhat sceptical in his emailed comments, citing the power of money as an important factor in the development of smart cities.
“All of these things are a fine balancing act between various variables that the designers and founders of a smart city have to consider,” he told IoB. “The biggest of these variables is cost – while it is theoretically possible to create a zero-carbon city, the costs of doing so and of maintaining it would be prohibitive. As such, some compromises have to be taken.”
Longbottom also referenced the geopolitical factors at play here, notably the Trump factor.
“If Trump does backtrack on the US commitments to changes to try and deal with climate change…then what will this mean to smart cities elsewhere?” he said. “If they do everything they can while the US is building new cities where smog rules and the costs of housing, workers, factories and regulation are very low, can the countries looking to smart cities afford to be so practically pure in their approach?”
“If Trump’s decisions mean that China backtracks, taking India, Brazil and other growth economies with it, then sustainability starts to plummet down the priority list of not only smart cities, but every single organization on the planet – it is the only way that they can remain competitive.”
Longbottom has a point, though he does acknowledge that the sustainability message has some weight.
“What I expect to see is an increase in the amount of greenwash that is seen,” he continued. “If the amount of energy used by a smart city can be lowered for cost reasons (for example, using natural lighting where possible and using LED lights everywhere else), it hits the main variable of cost. That LED lights, being low voltage, can be powered by cheap means (stored or direct solar power, for example) lowers the need for expensive distributed grid power.”
Ultimately, however, Longbottom argues that “If the cost of putting in place such a system to save energy exceeded the lifetime savings, it wouldn’t get done – even if it did avoid those emissions and so meet the Paris agreements.” | <urn:uuid:2f4e44ad-3290-42a6-a2da-f4940d8e0225> | CC-MAIN-2017-04 | https://internetofbusiness.com/climate-change-smart-cities-gartner/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95171 | 974 | 2.671875 | 3 |
In high performance computing, Hewlett-Packard is best known for supplying bread-and-butter HPC systems, built with standard processors and interconnects. But the company’s research arm has been devising a manycore chipset, which would outrun the average-sized HPC cluster of today. The design represents a radical leap in performance, and if implemented, would fulfill the promise of exascale computing.
The architecture, known as Corona, first conceived back in 2008, consists of a 256-core CPU, an optical memory module, integrated nanophotonics and 3D chip stacking employing through-silicon-vias (TSVs). At peak output, Corona should deliver 10 teraflops of performance. That’s assuming 16 nm CMOS process technology, which is expected to be on tap by 2017.
The Corona design is aimed squarely at data-intensive types of application, whose speed is limited by the widening gap between CPU performance and available bandwidth to DRAM — the so-called memory wall. Basically any workload whose data does not fit into processor cache is a candidate. This includes not just traditional big data applications, but also a whole bunch of interesting HPC simulations and analytics codes that have to manipulate large or irregular data sets, and are thus memory-constrained.
At the CPU level, Corona contains 256 cores, each supporting up to four threads simultaneously. The Corona cores themselves are nothing exotic. The HP researchers originally assumed low-power Intel x86 Penryn and Silverthorne CPU core architectures for their design simulations, but presumably ARM or other low-power designs could be substituted.
The processor is divided into 64 quad-core "clusters," with an integrated memory controller on every cluster. The rationale for the hierarchy is to ensure that memory bandwidth grows in concert with the core count and that local memory access maintains low latency.
The processor is stacked with the memory controller/L2 cache, the analog electronics and the optical die (which includes on-chip lasers). Everything is hooked together by a 20 TB/sec dense wavelength division multiplexing (DWDM) crossbar, enabling cache coherency between cores, as well as superfast access to that cache.
The memory module, known as optically connected memory (OCM), is a separate chip stack made up of DRAM chips, plus the optical die and interface. It’s connected to the CPU stack at a still rather impressive 10 TB/sec.
To put that into perspective, the current crop of commercial processors has to get by with just a fraction of that bandwidth. The latest 8-core Intel E5-2600 Xeons, for example, can manage about 80 GB/sec of memory bandwidth, and the SPARC64 VIIIfx CPU, of K computer fame, supports 64 GB/sec. Even GPUs, which generally support bigger memory pipes (but have to feed hundreds of cores), are bandwidth-constrained. NVIDIA's fastest Tesla card, the M2090, maxes out at 177 GB/sec.
The main function of Corona's optical interconnect is to redress the worsening bytes-to-flop ratio that HPC'ers have been lamenting for over a decade. For memory-constrained applications, it's preferable to have a byte-to-flop ratio of at least one. Back in the good old days of the late 20th century, computers delivered 8 bytes or more per flop. Now, for current CPUs and GPUs, it's down to between a half and a quarter of a byte per flop.
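A quick way to see the gap is to compute bytes per flop directly from the bandwidth figures above. Only Corona's numbers come from the article; peak flop ratings for the comparison chips are not quoted, so the values below are rough, assumed figures for illustration only.

```python
# Bytes-per-flop estimates from memory bandwidth (GB/s) and peak rate (GFLOPS).
# Corona's figures are taken from the article; the peak-flop numbers for the
# comparison chips are approximate assumptions, not quoted in the text.
chips = {
    #                        (bandwidth GB/s, peak GFLOPS)
    "Corona (CPU to OCM)":   (10_000, 10_000),  # 10 TB/s, 10 teraflops
    "Xeon E5-2600, 8-core":  (80, 170),         # ~170 GFLOPS peak (assumed)
    "SPARC64 VIIIfx":        (64, 128),         # 128 GFLOPS peak (assumed)
    "Tesla M2090":           (177, 665),        # 665 GFLOPS double precision (assumed)
}

for name, (bw_gbs, gflops) in chips.items():
    print(f"{name:22s} {bw_gbs / gflops:5.2f} bytes/flop")
```

On those assumptions, Corona lands at the 1 byte-per-flop target, while the conventional parts fall in the half-to-quarter range described above.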
The primary reasons for the poor ratio are the pin limitations on multicore processors, the inability to extend chip-level communication links across an entire node or computer, and the energy costs of electrical signaling. Photonics ameliorates these problems significantly, since light is a much more efficient communication medium than electrons, something long-haul network providers discovered a while ago.
Energy efficiency, in particular, is a hallmark of photonic communication. The HP researchers calculate that a memory system using an electrical interconnect to drive 10 GB/sec of data to DRAM would take 80 watts. Using nanophotonics and DRAMs optimized to read or write just a cache line at a time, they think they can achieve the same bandwidth with just 8 watts.
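Those wattage figures can be restated as energy per bit moved, the metric usually quoted for interconnects. The snippet below is a minimal sketch using only the 80-watt, 8-watt, and 10 GB/sec numbers cited above.

```python
# Energy cost of moving data to DRAM, from the figures cited in the article:
# 80 W for 10 GB/s over an electrical link versus 8 W with nanophotonics.
bandwidth_bits_per_s = 10e9 * 8  # 10 GB/s expressed in bits per second

for label, watts in [("electrical", 80.0), ("nanophotonic", 8.0)]:
    picojoules_per_bit = watts / bandwidth_bits_per_s * 1e12
    print(f"{label:13s} {picojoules_per_bit:6.0f} pJ/bit")
# -> roughly 1000 pJ/bit electrical vs. 100 pJ/bit photonic, a 10x reduction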
The trick is to get the optical hardware down onto the silicon. Thanks to recent advances in integrated photonics, the technology is getting close. For example, the Corona design specifies crystalline silicon and silicon dioxide for the waveguides, two materials commonly used in CMOS manufacturing. Slightly more exotic is the use of germanium for the receptors (to absorb the light so that it can be converted back into electrical signals), a less often used, but still CMOS-compatible, material. Finally, for the light source, the Corona designers opted for mode-locked lasers, since they believe a single device can provide up to 64 wavelengths of light for the DWDM interconnect.
Using SPLASH-2, the second version of the Stanford Parallel Applications for Shared Memory benchmark suite, the HP researchers demonstrated a performance improvement of 2 to 6 times on Corona compared to a similar system outfitted with an electrical interconnect, and those speed increases were achieved using much less power. They also showed significant performance improvements on five of the six HPC Challenge benchmarks: PTRANS (22X), STREAM (19X), GUPS (19X), MPI (19X), and FFT (2X). DGEMM, which is not bandwidth-limited, showed no improvement.
It’s not all a slam dunk, however. 3D chipmaking and TSV technology is still a work in progress. And integrating photonic hardware using CMOS is in its infancy. But integrated photonics, 3D chip stacking, and the use of low-power cores for computation are all hot technologies now, especially for those in the supercomputing community looking down the road to exascale. The UHPC project (now apparently stuck in Phase 1) that was aimed at developing low-power extreme-scale computing, attracted proposals from Intel, MIT, NVIDIA, and Sandia that incorporated one or more of these technologies.
With Corona though, you get the whole package, so to speak. But all of the work to date appears to be with simulated hardware, and there was no mention in any of the research work of plans to create a working prototype. So whether this is destined to remain a research project at HP or something that gets transformed into a commercial offering remains to be seen. | <urn:uuid:d120eb9e-59e9-4ec6-9352-d1240fa5457e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/03/15/hp_scientists_envision_10-teraflop_manycore_chip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00247-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926286 | 1,357 | 2.828125 | 3 |