The following overview will not immediately qualify anyone for an MBA. But it provides some initial insight into 10 aspects of business that are essential for IT practitioners to understand. So whether you aspire to be CIO, create the newest internet startup, take the lead in developing an enterprise application, or just get a leg up in your career, it's not enough to just know technology. You must also know business.
Accounting, however, is not limited to the world of financial data. IT practitioners also may use similar processes to better understand their computing environments. For example, several operating systems have an accounting function that collects usage data. Similar functionality exists in local- and wide-area network management tools. This IT accounting data is useful for auditing, trend analysis, capacity planning, chargeback, and cost allocation.
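As a toy illustration of the chargeback idea, here is a short sketch, not taken from the article, that rolls per-user usage records up into a departmental cost allocation. The log format and the internal rate are invented for the example; real accounting tools export their own formats.

```python
from collections import defaultdict

# Hypothetical usage records an OS or network accounting tool might export:
# (user, department, cpu_hours)
usage_log = [
    ("alice", "finance", 12.5),
    ("bob", "engineering", 40.0),
    ("carol", "finance", 7.5),
]

RATE_PER_CPU_HOUR = 0.35  # invented internal chargeback rate, in dollars

def chargeback_by_department(records):
    """Aggregate CPU hours per department and price them at the internal rate."""
    hours = defaultdict(float)
    for _user, department, cpu_hours in records:
        hours[department] += cpu_hours
    return {dept: round(h * RATE_PER_CPU_HOUR, 2) for dept, h in hours.items()}

print(chargeback_by_department(usage_log))
# {'finance': 7.0, 'engineering': 14.0}
```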
Economics focuses on the study of supply and demand and the allocation of resources. A basic understanding of economic theory can be a useful tool in managing your IT organization. For example, can you accurately predict demand for your support desk? This may be vital to proper staffing levels, which if set too high may exceed your budget, or if too low, affect customer satisfaction.
If you run your IT shop as a profit center, here is a tip that may help you set the appropriate pricing levels. What is the switching cost of your customers? In other words, how easily can a customer change to a competing product or service without disrupting their business? If the switching cost is high, prices can often be raised to a certain point without fear of losing the customer, at least in theory.
Although IT practitioners may not be concerned with financial issues on a day-to-day basis, understanding financial concepts may be crucial to their role and career. Finance is generally concerned with the management of money. Often called both an art and a science, finance looks at an investment or capital expenditure and then determines the potential return or profit using a variety of techniques.
Within your IT organization, you may have performed financial tasks perhaps more often than you realize. Have you ever proposed a new technology project or the replacement of a legacy system? You probably had to include some sort of analysis as to why that money should be spent.
The IT practitioner may think of a project in qualitative terms such as reducing support costs or increasing capability. A financial manager, however, will look at the project quantitatively by analyzing the cost of making the investment (including the source of funds) and the potential return in terms of added value to the company (known as return on equity) or increased profitability (known as return on capital).
Most corporate IT organizations today operate globally and/or work with partners from around the world. Large IT shops, in particular, routinely work with offshore developers, international customers and team members in multiple time zones.
Even if your organization is local in scope, knowledge of international business is essential. Your business may have a contact center that operates in another country. It may use contract labor from a developing market. And your company is almost certainly looking to export goods overseas. | <urn:uuid:23910097-b256-4f82-bf21-5353eeffa74c> | CC-MAIN-2017-04 | http://www.cioupdate.com/reports/article.php/3707351/Understanding-the-10-Fundamentals-of-Any-Business.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949403 | 617 | 2.671875 | 3 |
In our March 11th post I briefly described the two types of 2D bar codes—stacked and matrix. Today I’d like to provide some additional information about stacked 2D symbols.
Stacked symbols started as 1D linear codes such as Code 39 and Code 128 that were then stacked in layers to create multi-row symbols. These first symbols included Code 49 and Code 16K. Stacked symbols later evolved to include PDF417, which provided features to increase data capacity, improve data density and strengthen reading reliability. PDF417 also incorporated error detection and correction techniques. Another type of 2D stacked symbology is SuperCode, which breaks data into smaller packets and can be used to create symbols in a variety of shapes. 2D stacked codes are used in a variety of industries and applications, such as:
- Identification cards
- Inventory management | <urn:uuid:ef12be28-fbcb-44ad-bc9a-3a2ca0abd9e8> | CC-MAIN-2017-04 | http://blog.decisionpt.com/stacked-2d-bar-codes | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937888 | 174 | 2.75 | 3 |
Reprinted with permission from the May issue of Government Technology's Public CIO
When future generations look back on our nation's history, the 25 years leading to the present day will appear a bit sketchy, said Timothy Slavin, director of the Delaware Public Archives and president of the National Association of Government Archives and Records Administrators (NAGARA).
"[This period] may not be the Dark Ages, but it may be the dim ages when it comes to accessing certain kinds of public records," Slavin said. That's because, as governments increasingly rely on electronic information systems, the ability to produce digital records has far outstripped the ability to manage and preserve them.
"For years, we've had our records retention policies based on the understanding that we do everything on paper," said Matt Miszewski, CIO of Wisconsin. "Unfortunately simply because we've moved to a better, more efficient technology, we're losing some of the record about what we did."
Federal, state and local laws require governments to retain records of their activities and make them available to people authorized to view them. For everyday business matters, the required retention period may be a couple of years or a couple of decades. For the small percentage of records deemed historically significant, the retention period is virtually forever.
More and more of these records are created and maintained in digital formats. In the past, documents and transactions created on computer systems routinely yielded printouts that could serve as permanent records of a government's activity. Today, however, a growing number of records are born digital and never pass through a printer, and many paper documents are scanned and managed in digital form. Government officials charged with maintaining the public record have started to appreciate that life in the Digital Age poses new challenges. Gradually governments are starting to develop strategies to overcome them.
One question about how to preserve electronic records centers on choosing a physical storage medium. Anyone who tries using a current-day computer to view data stored years ago on a 5.25-inch floppy disk will understand why records managers must keep this issue in mind.
The market offers good media for preserving electronic records, Slavin said. "The trick is, people need to understand that electronic records need to be migrated onto new media." Data storage must keep up with evolving technologies. "Costs don't go away when [the records] are written and taken offline or near-line. The costs keep incurring."
Those costs are not prohibitive, however, said Ken Thibodeau, program director of the Electronic Records Archives (ERA) program at the National Archives and Records Administration (NARA). "Historically costs of digital media decrease by 50 percent and capacity increases by 50 percent over a period of about two and a half years. By migrating to newer media, you can actually cut your costs in half." But the migration must be done efficiently, he said.
Even so, with digital records growing in volume and complexity, storing the data is a serious problem. Robert Horton, Minnesota's state archivist, recalled copies of maps he received from the Minnesota Department of Transportation (Mn/DOT). "They were enormous TIFF images. We received 3,600 files on approximately 300 DVDs; the total storage came to 1.2 terabytes. The first time we tried loading that information on a server, it took approximately 50 minutes per DVD." Eventually the staff reduced the process to 10 minutes per DVD, he said. "But that was one set of records from one agency. Multiply that times all the people I could conceivably be dealing with. The system would break down very quickly."
Although managing physical storage media takes time and money, it doesn't pose fundamental questions about survival of the records, Thibodeau said. "You can migrate bit streams with no loss, or no alteration, to newer media." A much trickier matter is how to ensure that people 10, 20 or 100 years from now can access records created with today's software.
"The format issue is much more complex because it's not driven by the people who need the information," Thibodeau said. "It's driven by the people who want to sell new versions of software." When vendors update applications, they don't necessarily think about preserving the integrity of old records. For example, a document created 20 years ago might be readable in a current word processing program, he said. But if the technology used to create footnotes has changed over the years, footnotes in that document might not appear when viewed in the new application.
"A government archive can't be a museum of old computer systems," retained for the sake of accessing older records, Slavin said. "You have to have some kind of software-independent format in which to store these things, or if you are software-dependent, it has to be a system that's widely recognized and available."
Along with deciding how to save electronic records, governments must decide which records to retain. The same question applies to paper, but policies already exist for managing paper records. Many kinds of electronic records -- such as e-mail messages and Web pages -- have no analogy in the paper world; governments must create new policies about which to keep and which to destroy.
Minneapolis, for example, is engaged in a long-term project to revise its records management policies to cover electronic and paper records. Gradually the city is developing retention schedules for both the enterprise and individual departments. Once state officials approve a retention schedule, the city can destroy records that have reached the end of their life cycles, said Craig Steiner, records manager for Minneapolis. "In the meantime, we're maintaining all electronic records that have not appeared on a retention schedule."
In some cases, fear of losing important records has driven over-archiving of electronic content, said Doug Robinson, executive director of the National Association of State Chief Information Officers. With storage prices falling, "IT people say, 'Why worry about it?'" But while it may be easier in the short run to save everything rather than to impose order, it's important to develop a classification system and archive only critical documents, he said.
Even though storage prices are lower than ever, the volume of electronic records has soared so much that total costs are rising, Miszewski said. And as archives expand -- particularly if they expand without an enterprisewide standard for formats and records management systems -- retrieving information can be extremely burdensome, he said. "A simple request to give me all the documents related to a specific person can turn into a three-month endeavor," which carries costs of its own.
Two other challenges records managers face in the Digital Age involve cultural change. For one thing, end-users must learn to treat the electronic records they create or receive as records. "We have to figure out ways of training users and incorporating that knowledge of record keeping into everyday functions," Slavin said.
Now that all end-users can generate records at the desktop, governments must figure out how to involve everyone in records management, said Horton. "They're going to be naturally resistant to that. You can send out a policy that says, 'Make sure you treat all your e-mail messages as records,' but if you're getting 200 e-mail messages a day and you don't have a system in place to facilitate storage of e-mail, some people will delete all of them. Other people will save every single one." When a government develops records management policies, the policies must be practical to implement, he said.
The other cultural imperative is to make government IT departments build records management functions into applications from the start, but IT professionals aren't used to thinking that way, said Miszewski. "They're trying to accomplish business goals, and sometimes it's secondary that they're considering records retention issues. Changing that culture is extremely difficult, especially in a time of budget constraints." But when they understand that efficient records management helps cut storage and retrieval costs, IT managers will rally to the cause, he said.
All levels of government are starting to implement policies and launch new initiatives to meet the challenges of digital records management. At NARA, the ERA program is working to acquire a system to enable the National Archives and the presidential libraries to preserve and retrieve any kind of electronic record, Thibodeau said.
As part of the ERA program, NARA is developing new standards and mechanisms to govern the way federal agencies manage electronic records, some of which eventually will pass to the National Archives, Thibodeau said. This will ensure that NARA can obtain all the necessary records in formats that allow them to endure through the years and maintain their authenticity.
The move to electronic records requires a fundamental shift in governing records management, Thibodeau said. Traditionally records managers tried to apply standards from the paper world to electronic records. They categorized them in the same hierarchical fashion and stored them in the digital equivalent of filing cabinets, separate from the information systems in which they are created and used.
This is not necessarily the best approach, according to Thibodeau. NARA is looking at alternative ways to integrate records management functions right into business applications. "We're looking at [defining] records management services that are nuggets of software that can be implemented in the computer systems and do things like point to what's in this database that's actually a record," he said. "And maybe map it across several tables, and allow you to control that. Or maybe another service that allows you to destroy a record when it's time -- just say, 'Go into my system and destroy all the records eligible under this authority.'"
NARA is also exploring suitable data formats for records preservation. Since the most effective format for preservation often isn't the format that meets agencies' daily business needs, archivists must carefully impose standards, Thibodeau said. The solution might be to specify a "transfer format" to which agencies could convert records when it came time to move them to an archive for long-term preservation, he said.
Extensible markup language (XML) is probably the best known format in this category, and it's one archivists often mention as a standard for records preservation. "Even if all the software today were to disappear, you have to assume a computer in the future could read XML tags and XML schema the same way computers today can read plain ASCII," Thibodeau said.
Minnesota is working with Michigan, Ohio, Kentucky, California, Kansas, several university archives and the San Diego Supercomputer Center to tackle the problem of storing huge records such as Mn/DOT's map files. Funded by the National Historical Publications and Records Commission, the Persistent Archives Testbed (PAT) is testing the use of data grid technology to store very large electronic archives. Grid technology uses the Internet and an application called the Storage Resource Broker to distribute archival responsibility over a network, Horton said. In its pilot project, Minnesota's state archive at the Minnesota Historical Society gathers the information, describes it by attaching standardized metadata, and then using grid technology, stores it in San Diego, where the available storage capacity far surpasses what Minnesota has. With records stored this way, Minnesota won't need to worry about accessing data from DVDs or other unwieldy offline storage media, or periodically moving stored data to newer media.
In Wisconsin, an administrative rule established in 2001 sets enterprisewide policies for electronic records management, Miszewski said. The state also published "a business-oriented primer" to help employees interpret the rule, so they understand what does and does not need to be retained, he said.
The state has been getting stakeholders to specify their records retention requirements and is preparing to procure an information system for managing records across the entire state government. It's important to take an enterprise approach because Wisconsin's state workers often move from one department to another, Miszewski said. "We'd like to make sure the records management system is the same across those functional silos."
As part of a project to consolidate the state government's e-mail systems, Wisconsin is developing a draft standard to govern how users should save e-mails as public records. Creating policies for an application as widely and frequently used as e-mail is complicated. "And we're not done," Miszewski said. "There is still a significant amount of debate going back and forth among different departments that have really different issues with regard to e-mail retention."
In Delaware, a "quick and dirty" study of e-mail conducted along with two other states revealed that only 2 percent to 5 percent of the messages in government users' inboxes meet the criteria for records that need to be maintained for long periods, Slavin said. "But the problem we have is that it's easier to keep [messages] electronically -- you just build a bigger mailbox."
Governments must train users to manage records like these at the point of use, Slavin said. Records management systems that can automate some of this task for users are starting to emerge, he said.
Building It In Up Front
To this end, Delaware published a series of guidelines to help ensure that when government agencies develop applications, they build records management functions into them. "Records management used to be done primarily after records creation," Slavin said. "Now we're beginning to see records management done prior to records creation."
Building records management into applications up front is a priority for Minneapolis as well. Together with the city's Department of Business Information Services, the Records Management department developed an initiative called Enterprise Information Management (EIM). Among the policies developed under EIM is one stipulating that every time a city department develops a new information system, part of the project budget must be devoted to records management concerns.
So when the City Attorney's Office developed a new case management information system last year, money was earmarked for information management requirements, which include the retention schedule and record-keeping requirements, Steiner said.
City departments will also have to add information management requirements as they update their five-year business plans. "Not only do we want to hit departments when they're developing a new system, we want them to be strategically planning for how they're going to meet their basic information management requirements," Steiner said.
Initiatives like these could soon pull governments out of the dim ages and back into the light. It's an urgent mission, Miszewski observed. "The records being lost are lost forever. The ones being over-saved are incomprehensible and unusable." The result, he said, "is simply unacceptable moving forward." | <urn:uuid:c1cc48fd-3a23-44f5-a159-b8be7f119ab0> | CC-MAIN-2017-04 | http://www.govtech.com/security/For-the-Record.html?page=4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949669 | 2,975 | 2.515625 | 3 |
Gigabit Ethernet (1000BASE-T) and Power over Ethernet (PoE) are two network technologies that today are considered the norm. However, DC resistance unbalance in a PoE connection has the potential to cause significant problems. This paper looks at causes of DC resistance unbalance and how testing can help reduce problems with PoE systems.
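As a rough, back-of-the-envelope illustration of why unbalance matters (my sketch, not the paper's method): the two conductors of a pair carry PoE current in parallel, so any difference in their DC resistance skews how the current splits between them. The resistance and current values below are invented for the example.

```python
def pair_unbalance(r1_ohms: float, r2_ohms: float, total_current_a: float):
    """Current split and resistance unbalance for one conductor pair.

    The two conductors act as parallel resistances, so current divides
    inversely to resistance: I1 / I2 = R2 / R1.
    """
    i1 = total_current_a * r2_ohms / (r1_ohms + r2_ohms)
    i2 = total_current_a * r1_ohms / (r1_ohms + r2_ohms)
    unbalance_pct = abs(r1_ohms - r2_ohms) / (r1_ohms + r2_ohms) * 100
    return i1, i2, unbalance_pct

# A 600 mA PoE pair with slightly unequal conductor resistances:
i1, i2, unb = pair_unbalance(6.0, 6.8, 0.6)
print(f"I1 = {i1 * 1000:.0f} mA, I2 = {i2 * 1000:.0f} mA, unbalance = {unb:.1f}%")
```

The uneven split means one conductor carries more current and runs hotter than the other, which is one reason field testers check resistance unbalance on PoE links.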
SETI's search for alien life is back in business
- By Kevin McCaney
- Dec 08, 2011
Scientists are once again listening to the stars — or, more specifically, newly discovered planets — for signs of intelligent life.
The Search for Extraterrestrial Intelligence Institute, which in April had shut down its quest because of funding cuts, has put the Allen Telescope Array (ATA) in Mountain View, Calif., back into operation, searching systems for radio and other signals that could indicate life.
And SETI has some good candidates to start with: planets in habitable zones discovered recently by NASA’s Kepler space telescope.
“For the first time, we can point our telescopes at stars, and know that those stars actually host planetary systems – including at least one that begins to approximate an Earth analog in the habitable zone around its host star,” Jill Tarter, director of the Center for SETI Research at the SETI Institute, said in a SETI announcement. “That’s the type of world that might be home to a civilization capable of building radio transmitters.”
One planet that looks promising was reported by NASA scientists Dec. 5. Called Kepler 22b, it’s about 2.4 times the size of Earth orbiting a star about 600 light years away and could have an average temperature of 72 degrees F, USA Today reported.
“It is right smack in the middle of the habitable zone,” Kepler scientist Natalie Batalha said, according to the report.
Kepler 22b, which circles a type-G star like the sun, is the smallest and closest planet yet confirmed to exist in a habitable zone, TechNewsWorld reported.
The search for E.T. had gone on hiatus after SETI lost funding from the National Science Foundation and California, which caused the University of California, Berkeley, SETI's partner, to withdraw.
But the search has been revived with donations from the public and the Air Force, which is testing ATA’s capability for space surveillance.
SETI said it plans to spend the next two years on the Kepler discoveries, focusing primarily on the planets known to exist in habitable zones, where temperatures would allow water to exist. However, Tarter noted that "preconceived notions such as habitable zones could be barriers to discovery. So, with sufficient future funding from our donors, it’s our intention to examine all of the planetary systems found by Kepler."
Since its launch in 2009, Kepler has found more than 2,000 potential planets among about 150,000 stars within 3,000 light years of Earth.
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:49470ea9-84aa-4ef5-ab94-04dd5b2385af> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/12/08/seti-resumes-search-for-alien-life.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00274-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940067 | 563 | 2.640625 | 3 |
Organizations Using the Internet
Modified 24 April 2009
The Islamic Front for the Liberation of Bahrain — As per an RFE/RL story, "Persians ruled Bahrain for nearly 200 years, and the original population practiced Shia Islam, until Sunni Arabs took power in 1783. Groups like the Islamic Front for the Liberation of Bahrain and Bahrain Freedom Movement were inspired by Iran's 1978-79 revolution, and Tehran tried to mobilize Bahraini Shia by sending in Iranian clerics and training local clerics in Iran. Tehran also propagandized Bahrainis through the use of radio, pamphlets, and cassette recordings of radical preachers. Many Bahrainis were forced into exile or deported for their part in a political uprising in the early 1980s."
According to "The International Politics of the Middle East" by Raymond Hinnebusch, 2003, Manchester University Press, 2003, page 194, this was active from the 1970s to the 1990s.
- Voice of Bahrain -- Against the current government of Bahrain, presents itself as the voice of the Bahrain Freedom Movement -- http://www.vob.org/
Biometric security is becoming more prevalent. More than 770 million biometric applications will be downloaded every year by 2019, Juniper Research predicted last year in a report cited by CSO at the time. That’s up from only 6 million identity-proving biometric apps in 2015. It will be big, then.
What we’re usually thinking about, though, when biometrics are mentioned in the context of devices, is the proving of a person’s identity, perhaps with fingerprints.
However, some scientists think there’s another way to approach biometrics. They think it doesn’t have to be geared solely toward identifying and verifying users. You can use it for security-related tracking too.
Behavioral researchers think that eye movement can be used to track the places a user looks at on a computer screen. Analyzing the viewed spots, including for how long, could let software provide specific messages pertaining to that content being viewed.
A use could be to advise computer users that they’re about to give away PII, or sensitive personally identifiable information online, think professors at the University of Alabama in Huntsville. A kind of phishing-prevention tool, possibly.
Ironically, in this case, the eye tracker isn’t primarily for identifying the person, as it’s usually used in biometric security. Its purpose is to stop the person getting identified. They’re using the same equipment, though.
“Displaying warnings in a dynamic manner that is more readily perceived and less easily dismissed by the user” is the goal, says the university’s press release. By creating pop-ups that appear when a user looks at a field in a form, for example, the scientists think they can produce a more effective warning than something static in a text box. It’s less same-old-same-old.
“I need to know how long the user's eyes stay on the area and I need to use that input in my research,” says Mini Zeng, a computer science doctoral student who’s been working on the project. The tracking calculates where the user’s eyes are on the screen and for how long.
If the user looks away from the PII-capturing form, the warning can be made to disappear. If the user looks back again, the warning flashes on the screen again and can stay there for a pre-determined amount of time—to force the user to read it. The researchers think that it’s the unpredictability of the warning flashing on the screen that adds to the effectiveness.
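To illustrate the mechanism the researchers describe, here is a minimal sketch of the dwell-time logic. Everything in it is an assumption made for illustration: the article does not describe their software, thresholds, or APIs, and the gaze stream and warning callbacks are hypothetical.

```python
import time

MIN_DISPLAY_SECONDS = 3.0  # invented minimum time the warning stays on screen

def gaze_inside(gaze, field):
    """True if a gaze sample (x, y) falls inside a form field's bounding box."""
    x, y = gaze
    return (field["x"] <= x <= field["x"] + field["w"]
            and field["y"] <= y <= field["y"] + field["h"])

def monitor(gaze_stream, pii_field, show_warning, hide_warning):
    """Flash a warning whenever the eyes land on the PII field, and keep it
    visible for a minimum time so it cannot simply be glanced past."""
    shown_at = None
    for gaze in gaze_stream:            # e.g., (x, y) samples from an eye tracker
        now = time.monotonic()
        if gaze_inside(gaze, pii_field):
            if shown_at is None:
                show_warning()          # flashes again each time the eyes return
                shown_at = now
        elif shown_at is not None and now - shown_at >= MIN_DISPLAY_SECONDS:
            hide_warning()              # dismissed only after the minimum display time
            shown_at = None
```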
"If you get a warning every single time and it becomes annoying or habitual, you are going to ignore it," says Dr. Sandra Carpenter, a psychology professor in the press release.
Although the University of Alabama researchers don’t mention, in their press release, how they see the system being implemented, presumably any web-based form that has a dubious intent could be made to display the dynamic warning, perhaps through URL whitelists and blacklists lookups. The warning could be independent of the website publisher.
And if eye-tracking biometric sensor hardware gets added to devices anyway, perhaps it could help with kids’ homework management. “Hey, you’ve been looking at that Instagram post a little too long. Get back to work,” the message might say.
Editor’s note: Starting this month, Mark Weatherford, former chief information security officer for the state of California, will write a regular column on protecting the nation’s electrical grid. Weatherford now serves as vice president and chief security officer for the North American Electric Reliability Corp. (NERC), an organization of U.S. electrical grid operators.
For this first column in the Securing GridSpace series, I’d like to spend a little time discussing the electric grid in North America. Future articles will expand on the cyber-security issues and technical details laid out here.
Everyone talks about this nebulous thing called “the grid,” but what does it mean? The grid comprises three major components: generation of electricity, transmission of electricity and distribution of electricity. Of these, generation and transmission make up what’s commonly called the Bulk Electric System (BES).
The North American BES is divided into three “interconnects,” which include companies from across all of the Canadian provinces, all of the United States and a portion of northern Mexico. This can be a little confusing, but the diversity of geography and infrastructure is the key to profound reliability across the grid.
Electricity is like water in that it will always seek the path of least resistance, so it helps if you can think of the grid as a lake with streams feeding into it and a dam at the other end releasing water. To keep the water at a reasonably stable level, you can’t have more water leaving than you have coming in. Similarly the grid’s goal is to maintain a steady level of electricity, which means a lot of work goes on by many people to keep the amount of electricity entering the grid equal to the amount required by businesses and homes so when you hit the light switch, the light comes on. This is more complicated than it sounds when you consider the fluctuations in power requirements on an hourly, daily, monthly and seasonal basis.
Here are a few quick statistics: The North American power grid consists of more than 5,000 companies that own and operate more than 160,000 miles of high voltage transmission lines and more than 1 million miles of distribution lines representing more than $1 trillion in assets. With a real-time capacity of more than 4.1 trillion kilowatts, this infrastructure delivers electricity to more than 334 million people. Pretty impressive — and no wonder the North American electric system is called the “largest machine in the world.”
For such an immense infrastructure, the grid’s reliability is impressive, and power companies have a long history of knowing how to prepare for and react to disruptions related to physical security and acts of nature. Think about it, aside from an ice storm, hurricane or similar event thrown at us by Mother Nature, when was the last time you suffered from an extended power outage? Dutch computer scientist Edsger Dijkstra said, “Simplicity is prerequisite for reliability,” and for most of the 20th century, the grid was a simple environment. Technically complex yet simple because it was bounded and risks were relatively well understood.
So how important is electricity? I think the electric industry is the most critical of the critical infrastructures because the reliable generation and delivery of electric power is arguably the most influential factor in a sustainable population in North America. Electricity is as important to modern civilization as water was to ancient Rome except it’s impossible to calculate our dependency on electricity. In fact, the loss of electricity over a wide enough geographical area measured in months (instead of hours or minutes) would result in unprecedented human suffering, economic devastation, profound gaps in national security and a return to the digital dark ages.
Enter the Internet and cyber-security.
Electric grid systems that were previously taken for granted as dependable and relatively static began to change dramatically when they were able to take advantage of the efficiencies offered by the Internet. Unfortunately this also meant that the same security weaknesses that plague daily computer life could, when not accounting for the cyber-security threat, menace our nation’s electricity infrastructure. Those same botnets used by criminals to commit crime by sending spam, distributing malware and performing denial-of-service attacks can be used against unprotected or compromised BES networks.
Just like in IT systems worldwide, cyber-attack vectors are multiplying. System and network intrusions, including malicious code (can anyone say Stuxnet?) are increasing, and a growing reliance on the Internet and the proliferation of the smart grid is creating unprecedented cyber-security challenges for the electricity industry. These threats have resulted in an inconsistent perception of risk and are some of the things I’ll be talking about in Securing GridSpace so stay tuned, it’s gonna be a wild ride. | <urn:uuid:95d0bce5-2dfe-4f4e-942d-828eae24f378> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Securing-the-North-American-Electric-Grid.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00484-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951525 | 980 | 3.046875 | 3 |
Tea Ceremony
History, Benefits and Ingredients
The History of Japanese tea ceremony
The Japanese tea ceremony is known by two names: Chanoyu and Sado. It was introduced into Japan by way of China. It was perfected by Master Sen no Rikkyu, based on the Zen school of Buddhism, and spread in the 16th century.
Green tea itself was brought to Japan 1200-1300 years ago by Japanese monks studying Buddhism in China. Tea had been written and recorded as a medicinal element in Chinese books.
Eisai, a Buddhist monk, learned the manners and rules of tea appreciation. Most of these monks enjoyed drinking tea while meditating, reading their scriptures and observing the environment around them. The environment in monasteries is usually peaceful and tranquil.
Shizuoka prefecture is supposed to have been the first place of tea cultivation in Japan, during the Kamakura period (1185-1333 AD).
The first Tokugawa shogun, Ieyasu (1543-1616 AD), enjoyed tea parties very much. The last shogun of Japan, in the 1800s, turned large sections of land at Makinohara into tea farms. It is now the largest tea farming area in Japan.
Green tea's scientific name is Camellia sinensis.
Percentage of catechins in tea:
- Dry black tea: 3-10%
Benefits of green tea:
1. Builds up the immune system
Matcha - made from young shoot leaves, the finest grade
I have uncovered what I believe to be a remarkable similarity between the search for human meaning in life and the search for a unified understanding of free will.
One of these questions, the pursuit of intrinsic meaning in the world, has been answered to my satisfaction by Albert Camus’ Absurdism. Absurdism shows us that intrinsic meaning is non-existent and therefore unattainable, and then moves to what to do about this fact. Camus offers a few possibilities:
- Give up, i.e., suicide (which he rejects)
- Embrace a false meaning framework, such as religion (which he rejects)
- Accept that true meaning is impossible to find, and live in rebellion of this fact (which he recommends)
The pursuit of intrinsic meaning in the universe may seem highly dissimilar to a conversation about free will, but it turns out to be remarkably similar.
In both cases we have a hard, reductionist truth that waits for us in the dark. For meaning, that truth is that there is no true meaning that’s there to be discovered. There is only what we bring ourselves.
In free will the hard truth is that whether you have a deterministic or random universe (or some combination thereof) neither gives humans the ability to have done otherwise for any decisions they have made. And because we could not have done otherwise, we cannot be held responsible for our decisions.
- Practical Free Will
- Practical Free Will is the ability for an individual to experience having options, considering the outcomes of those options within the context of their value system, and then experience making a choice from among them based on what they want to happen.
Practical Free Will, in other words, is the experience of having free will. And I, as an incompatibilist who believes Absolute Free Will is impossible, accept this type of free will as obvious, valuable, and…well, practical.
Here is a slightly modified version of a comment made by Marvin Edwards during the course of a free will discussion, where he defends his view of compatibilism:
A man chops down some trees and builds a house with a fireplace to survive the winter cold. Why are the trees gone? Why is there a house now? We cannot find the answers in the chemistry and physics. We have to move from physics and chemistry to biology before we can make sense of what has happened and why.
The man’s own need to survive is the reason. The man’s own muscles chopped down the trees. The man’s own mind conceived the house and made countless decision as to its design and construction as he built it.
Without the purpose that came with the man, the trees would still be there, and there would be no house. No one can dispute this.
And if the trees came from the small apple orchard in your back yard that you were tending to support your family, you would hold that man responsible for your loss.
This is reality.
I think this elegantly captures practical human life and a type of truth that it represents. We are humans, living in a human world, with all the experiences and constraints that come with that. And there is no escape from this reality.
Even strong incompatibilists live within the framework of human experience, and that means doing the following on a regular basis:
- We spend less time with mean people
- We honk when people cut us off
- We are kinder to nice people
- We hire people who have a good work ethic
- We tell ourselves to do better in the future
- We ask others to do better in the future
- We hold people responsible for the way they treat others
If you’re a mindful incompatibilist you tend to control how you experience these feelings, i.e., you aren’t likely to feel hatred or disgust the same way that a believer in libertarian free will does.
But that’s not the point. The point is that we as incompatibilists (people who don’t believe free will is possible) still have these feelings. Every day. All day long.
It’s natural. It’s normal. It’s human.
So, that’s our problem. As people who both honor the reductionist truth that we aren’t actually making choices, and simultaneously know that we cannot function in the world without having thoughts and actions that reflect a belief in free choice, we must come to a conclusion similar to the one Camus reached with meaning.
Free Will and Meaning, equal in stalemate
Neither meaning nor free will can be solved satisfactorily to humans because the true answers are not compatible with human experience.
We cannot live our daily lives as if there is no meaning in the universe. We cannot generate our own and pretend it’s intrinsic. So all we can do is choose a path that is in accordance with our moral framework, that grants some measure of fulfillment, and move forward.
And it is the same with free will. We cannot live our lives as if people do not make choices. Not really. Not if we’re being honest with ourselves. If we did, we’d never blame or praise anyone for any action they’ve taken. We’d treat those who treat us badly just the same as those who treat us well, since neither had responsibility for their actions.
But we also shouldn’t over-subscribe to the ideology of “deserving” this or that on the grounds of the actions, because we do actually know that people don’t have choice.
So we must walk this ledge while the wind blows us in both directions.
And that’s precisely what Camus recommended with meaning. Never give in to the false meaning frameworks. Never believe they are real. But live within them to the degree that they satisfy you.
Live on despite the bleakness.
I would go further and say that we are ok to embrace certain structures and frameworks as long as they are not harmful. As long as people know they are false. I believe Camus warned against believing them, not using them.
One cannot proceed without using some of the handholds available to us as humans. We are too fragile for it.
The only way to deal with an unfree world is to become so absolutely free that your very existence is an act of rebellion. ~ Camus | <urn:uuid:e1abc32e-002d-4f36-b09f-339efc7bb562> | CC-MAIN-2017-04 | https://danielmiessler.com/blog/camus-absurdism-as-the-solution-to-the-free-will-debate/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00264-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960256 | 1,345 | 2.546875 | 3 |
Definition: For two vectors X = X_0 ... X_m and Y = Y_0 ... Y_n, the linear product with respect to two suitable operations ⊗ and ⊕ is a vector Z = Z_0 Z_1 ... Z_{m+n} where Z_k = ⊕_{i+j=k} (X_i ⊗ Y_j), for k = 0, ..., m+n.
Note: From Algorithms and Theory of Computation Handbook, page 13-17, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
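With ⊕ taken as ordinary addition and ⊗ as ordinary multiplication, the linear product reduces to polynomial multiplication (discrete convolution). The short Python sketch below, which is not part of the original entry, illustrates the definition; the two operations are passed as arguments so other suitable choices can be substituted.

```python
from functools import reduce
from operator import mul, add

def linear_product(X, Y, otimes=mul, oplus=add):
    """Linear product of vectors X (length m+1) and Y (length n+1).

    Z[k] combines with `oplus` every term otimes(X[i], Y[j]) with i + j == k,
    for k = 0 .. m+n.  With the default operations this is ordinary
    polynomial multiplication / convolution.
    """
    m, n = len(X) - 1, len(Y) - 1
    Z = []
    for k in range(m + n + 1):
        terms = [otimes(X[i], Y[k - i])
                 for i in range(max(0, k - n), min(k, m) + 1)]
        Z.append(reduce(oplus, terms))
    return Z

# Example: (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
print(linear_product([1, 2], [3, 4]))  # [3, 10, 8]
```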
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "linear product", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/linearprodct.html | <urn:uuid:a62892cf-fd8b-40fd-8dbd-724280c3c3a1> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/linearprodct.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00200-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.776893 | 228 | 2.578125 | 3 |
There’s a lot of buzz these days surrounding the “Internet of Things.” Unfortunately, the term itself is fuzzy, leading to ambiguity and misunderstanding of what it actually means. A shame, because the underlying concept is one that promises to revolutionize the way we measure, consume, and track so much of what we do in our daily lives. Personally, I prefer the term “Universal Mobility,” but my personal campaign for its adoption seems to be losing. Sigh.
A thorough definition of “Internet of Things” is available on Wikipedia, but here’s my quick-and-dirty version:
The “Internet of Things” is an umbrella term that refers to the ongoing trend of making almost every conceivable device able to connect directly to the Internet.
Simple enough when it’s stated that way, right? But pretty mind-blowing when you ponder it for just a little bit. Virtually any physical thing you can imagine, from your sunglasses to the soda machine in your employee lounge to your lawnmower; all of them connected directly to the Internet with no need for an intermediary device. And this is not sci-fi—the technology is pretty well-seasoned and already in play.
So that leads us to the big question: Why? What’s the point?
Actually, there are three basic points. Let’s use the examples above to illustrate each one:
- Location – Being able to quickly determine the exact position of any physical thing. Lost your sunglasses for the umpteenth time? Open an app on your phone and find them instantly. And don’t be embarrassed if they’re still on your head—we’ve all done that.
- Status – Devices that immediately report changes in their status to a system that can send a notification. You just bought the last can of Mr. Pibb from the soda machine? Don’t worry–it can immediately notify the soda supplier to include more in tomorrow’s delivery. Even if you are the only person in the office who actually drinks Mr. Pibb. (A quick sketch of how such a notification might look follows this list.)
- Consumption – Devices that immediately report how much of them you use. Just drained your bank account buying your first house and don’t want to drop another five hundred bucks to buy a lawnmower? What if, instead, you could just borrow one from Home Depot and simply have your credit charged ten bucks each time you mow?
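To make the "status" idea concrete, here is a minimal sketch of how a connected soda machine might report a sold-out slot. It is purely illustrative: the endpoint URL, payload fields, and machine IDs are invented for the example, and a production deployment would more likely use MQTT or a vendor's device cloud, with proper authentication.

```python
import requests  # third-party HTTP library; assumed to be installed

def report_sold_out(machine_id: str, slot: str, product: str) -> None:
    """Notify the supplier's (hypothetical) restocking API that a slot is empty."""
    event = {
        "machine_id": machine_id,
        "slot": slot,
        "product": product,
        "event": "sold_out",
    }
    resp = requests.post(
        "https://supplier.example.com/api/v1/restock-events",  # placeholder URL
        json=event,
        timeout=5,
    )
    resp.raise_for_status()  # surface any delivery failure to the device's logs

# The machine in the employee lounge just sold its last can of Mr. Pibb:
report_sold_out(machine_id="lounge-07", slot="B3", product="Mr. Pibb")
```

The same event-driven pattern covers the location and consumption cases; only the payload changes.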
Pretty radical stuff, but it only takes a moment to realize that the examples above aren’t that far-fetched, and similar capabilities already surround us. Here at Aria, what inspires is the ease with which the benefits of Universal Mobility (c’mon… you can’t blame a guy for trying) lend themselves to recurring revenue models. Let me try to wrap this up with a hypothetical business model that encompasses all three:
Imagine a “smart” ski lift ticket which could:
- Offer an optional add-on service that pinpoints your location on the mountain and allow you and your ski-mates to find one another instantly. Even if you’re positioned at the fourth-seat-from-the-left at the lodge bar.
- Trigger a camera that snaps a pic of you as you shred the steepest slope of that black diamond and, by the time you reach the bottom of the hill, sends an alert to your phone with a link to purchase that great shot for you to show off on Facebook. Don’t worry if the camera caught you in the middle of a “yard sale;” you can just delete the picture instead of buying it.
- Grant you the option of “paying per ski run” by just scanning you each time you get on a lift and hitting your card for a small fee, as an alternative to paying full freight for an all-day pass. The ideal option for those days when you’re feeling a bit less ambitious than your bruised and sunburned friends, and that fourth-seat-from-the-left at the lodge bar looks oh-so-inviting.
But here’s what’s really cool: even if you don’t love the idea above as much as I do (hint-hint Vail Resorts!), the technology and monetization engines needed to accomplish all of this are here right now, and all that is needed to make these things a reality (or whatever similar crazy stuff you can dream up) is creativity.
People like being “connected,” businesses like repeat customers, and consumers like flexibility in how they pay for products or services. And Aria is thrilled to be sitting at the nexus of Universal Mobility (yes, I’m going to keep doing this) and the Recurring Revenue Revolution.
Brendan O’Brien, Aria Systems
Check out Brendan’s thoughts on Recurring Revenue & Internet of Things. | <urn:uuid:46fbf656-8185-40ce-9db2-a7a78a8890bd> | CC-MAIN-2017-04 | https://www.ariasystems.com/blog/pay-per-ski-internet-things-meets-recurring-revenue/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939682 | 1,029 | 2.71875 | 3 |
This one sounds a bit like really wishful thinking. The US Department of Energy today announced $30 million for research projects that would develop advanced biofuels that could replace gasoline or diesel without requiring special upgrades or changes to the vehicle or fueling infrastructure.
The $30 million would be spent over the next four years to support as many as five "traditionally high-risk biofuels projects," such as converting biomass into biofuels and bioproducts to be eventually used for hydrocarbon fuels and chemicals.
From the DOE: " The projects will focus on optimizing and integrating process steps that convert biomass into biofuels and bioproducts that will eventually be used to support hydrocarbon fuels and chemicals. These process improvements could include pretreatment methods that alter the biomass to improve the yield of sugars in subsequent process steps, less costly and more efficient enzymes that produce sugars, and fermentation organisms and catalysts that convert the sugars into fuel and chemical intermediates."
According to the DOE, government investment in developing Ethanol-based fuel alternatives has been critical to developing those fuels. What the DOE hopes to do now is expand beyond Ethanol development.
Going beyond Ethanol has been a DOE theme of late. Just this month the DOE awarded a massive amount of its world-class supercomputing time to 57 research projects looking at everything from biofuels and climate change to nuclear power and lithium air batteries. In Sept., the DOE announced $9.6 million for what it called transformational energy research projects. And in June the DOE said it would invest $24 million in three research groups to tackle the challenges of bringing algae-based biofuels to market.
Follow Michael Cooney on Twitter: nwwlayer8
Iran has detained several people for attempting to sabotage the country's nuclear programme through cyberspace.
The Stuxnet worm was found in the control systems for several of Iran's nuclear facilities, including the Bushehr power plant.
The announcement of the detentions was aimed at reassuring Iranians and suggests the cyber attacks caused more alarm than Iran has admitted, according to the Guardian.
Iranian intelligence minister Heydar Moslehi said an intelligence apparatus now in place will not allow any leak or destruction of Iran's nuclear activities.
Stuxnet's origin and purpose is not fully understood, but experts have raised concerns that the worm appears to be designed to attack systems running critical infrastructure.
This means that in theory attackers could break into computers that control critical systems such as power stations, water supply systems and electrical power grids.
Security experts say the Stuxnet worm, which appeared more than a year ago, is one of the most sophisticated pieces of malware seen to date. | <urn:uuid:745d3aac-eade-4075-b06f-cdde70d2aa3f> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280093949/Iran-makes-arrests-after-Stuxnet-cyber-attack-on-nuclear-plant | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00438-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960761 | 227 | 2.53125 | 3 |
With a trend toward more automation in vehicles, the FBI recently warned that fully autonomous vehicles are at risk to be hacked and used as “lethal weapons.”
A report by the FBI’s Strategic Issues Group, found by The Guardian, said that autonomous vehicles “will have a high impact on transforming what both law enforcement and its adversaries can operationally do with a car.”
It’s up to anyone’s imagination what sort of new dangers self-driving cars could present. The Guardian noted that passengers in a self-driving car could shoot from the window while making a getaway, although that is already possible with regular vehicles. Other potential dangers include terrorists using the vehicles as explosives transporters, or someone hacking a self-driving vehicle and using it to damage property or attack pedestrians.
In its report, the FBI also noted that automated procedures could make getaways easier for criminals, as traditional sticking points like three-point turns could be performed quickly and without error by a robot in situations where a human might fail.
The bureau also guessed, though, that “surveillance will be made more effective and easier, with less of a chance that a patrol car will lose sight of a target vehicle. […] In addition, algorithms can control the distance that the patrol car is behind the target to avoid detection or intentionally have a patrol car make opposite turns at intersections, yet successfully meet up at later points with the target.”
One conclusion of the report was a prediction that self-driving vehicles would be approved by Congress, and use by the public would be within five to seven years. | <urn:uuid:83bf2db1-2c60-4460-a43f-13c20506172a> | CC-MAIN-2017-04 | http://www.govtech.com/transportation/FBI-Says-Autonomous-Vehicles-Could-Be-Lethal-Weapons.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00558-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974681 | 332 | 2.515625 | 3 |
Intellectual property rights - copyrights, patents and trademarks - provide the legal framework necessary for creative enterprise like commercial software development to flourish, writes Robert Holleyman. But it is widely assumed that most people view IP rights as business and legal concepts with little relevance to their daily lives. That's why the World Intellectual Property Organization and its 184 member states designate April 26, the anniversary of the Convention establishing WIPO, as World Intellectual Property Day.
The BSA has recently conducted public-opinion research that finds some cause for optimism, though. Consider: 71 percent of the world thinks innovators should be paid for the products and technologies they develop, because it provides incentives for more technology advances.
This finding comes from a global survey conducted by Ipsos Public Affairs, one of the world's leading public-opinion research firms, as part of the 2010 BSA Global Software Piracy Study, which is set to be released soon.
We polled a globally representative sample of approximately 15,000 personal computer users in 32 countries on their attitudes about software piracy and intellectual property rights. We asked a number of probing questions to get a clearer understanding of public attitudes toward IPR - and we found that world opinion comes down firmly in favor of innovation and intellectual property.
Here is how the question I have referenced was worded:
"The laws that give someone who invents a new product or technology the right to decide how it is sold are called intellectual property rights. Which comes closer to your view..."
- Statement A: "It is important for people who invent new products or technologies to be paid for them, because it creates an incentive for people to produce more innovations. That is good for society because it drives technological progress and economic growth."
- Statement B: "No company or individual should be allowed to control a product or technology that could benefit the rest of society. Laws like that limit the free flow of ideas, stifle innovation, and give too much power to too few people."
More than seven in 10 respondents chose the first statement - paying innovators.
Make no mistake: We face significant challenges in protecting intellectual property rights around the world. The study we will soon be releasing illuminates several challenges in particular.
But when it comes to public appreciation for the core principle of intellectual property, there is also cause for optimism. | <urn:uuid:5df786ad-29c4-4651-b03b-77a09f626563> | CC-MAIN-2017-04 | http://www.computerweekly.com/microscope/opinion/The-world-strongly-supports-intellectual-property-rights | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00403-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939084 | 471 | 2.578125 | 3 |
Last year in New Jersey, a 69-year-old woman from out of state was driving to see her friends late one night when her car's alternator broke. Without power to start the car or unlock the electric locks, the woman was trapped. Desperate, she called 911 on her cellular phone without a clue as to where she was located. In another New Jersey town, a 911 dispatcher received a wireless call reporting a car accident on Jacksonville Road, but the caller was unable to tell the dispatcher which of the four Jacksonville roads in the general area was the location of the accident.
In each of these incidents, help was dispatched quickly and to the right place, thanks to technology that can find -- within a few hundred feet -- the location of a wireless 911 call. The responses to these calls for help were just two of several thousand the New Jersey Office of Emergency Telecommunications Services (OETS) were able to identify and locate during a three-month trial of the country's first live wireless enhanced 911 service.
While the trial only covered a 50-mile stretch of the New Jersey Turnpike, it was the first time public safety officials attempted to identify and locate real wireless 911 calls. "We were able to reach and even exceed the FCC mandate," said Robert Miller, OETS executive director.
In 1996, the Federal Communications Commission issued requirements that wireless companies provide 911 dispatchers with information that identifies and locates 911 calls from cellular phones. The two-part phase-in of these requirements began in April this year and gives companies five years to upgrade their cellular networks with technology that tells 911 dispatchers the location of an emergency caller to within 125 meters.
However, if the FCC's goals are clear, how they are going to be reached is another matter. Decisions concerning technology, standards and cost must be hashed out. "How much all of this is going to cost is the issue," said Woody Glover, executive director of the 911 Network in East Texas. Trying to pinpoint the location of a cell call to within a couple hundred feet could become extremely expensive, said Glover and a number of experts.
While the FCC requires states to develop a cost-recovery mechanism, such as a user surcharge, to pay for the technology and upgrades, it's not entirely clear what the money will pay for -- the infrastructure necessary for locating wireless calls or just the equipment for handling 911 emergencies. One person who is not surprised by the intricacies of wireless 911 is Bill Munn, president of the National Emergency Number Association (NENA) and executive director of the Tarrant County, Texas, 911 District. "This whole issue has become a lot more complicated than people thought," he observed.
It comes as no surprise that the explosion in cellular phones has led to spectacular growth in 911 calls from people on the go. Figures show that 50,000 emergency calls per day -- as much as 25 percent of all 911 calls -- are placed from cell phones. Unfortunately, wireless calls cannot be identified and located the same way as landline 911 calls. As a result, a 911 cell-phone call may be misrouted by hundreds of miles. By the time dispatchers are able to home in on the actual location of the call, precious time may have been lost. Ironically, most people cite personal safety as one of the reasons for using cell phones. Very few consumers are aware of the fact that wireless calls are hard to locate.
When someone dials 911 from a landline phone, the call and the caller's phone number are passed by the carrier to a 911 switch, which uses the phone number to look up the name and address of the caller in a database known as the Master Street Address Guide. The name and address are then used to determine the closest public safety answering point (PSAP) to the caller. The guide also determines which fire, police or medical service is closest to the caller. All this information appears on a PSAP dispatcher's computer screen, allowing the dispatcher to promptly send service to the caller's location.
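That flow amounts to two lookups: number to address, then address to answering point. The sketch below is purely illustrative; the records, phone number and agency names are invented, and a real selective router works against far larger databases than a pair of dictionaries.

```python
# Toy illustration of the landline E-911 flow described above.
# All records below are made up for the example.
MSAG = {  # Master Street Address Guide: phone number -> subscriber record
    "6095550123": {"name": "J. Rivera", "address": "12 Main St, Trenton NJ"},
}
PSAP_BY_ADDRESS = {  # address -> closest public safety answering point
    "12 Main St, Trenton NJ": "Trenton PSAP / Station 1",
}

def route_911_call(calling_number):
    record = MSAG[calling_number]              # name and address lookup
    psap = PSAP_BY_ADDRESS[record["address"]]  # closest answering point
    return psap, record                        # what appears on the dispatcher's screen

print(route_911_call("6095550123"))
```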
However, when a caller uses a wireless phone to dial 911, the dispatcher sees a blank screen. Without any information, the dispatcher has to determine the caller's identity and location from the caller, which is not always possible in every emergency. "If we have one accident on our freeway," said Munn, "we'll get as many as 50 wireless emergency calls and many of the callers provide erroneous location information."
As the number of wireless 911 calls has increased in recent years, wireless carriers and public safety groups, such as NENA and the Association of Public Safety Communication Officials (APCO), began pressuring the FCC to establish regulations enabling 911 services to properly handle wireless emergency calls. In 1996, the FCC set two deadlines for identifying and locating wireless 911 calls:
* By April 1, 1998, each wireless carrier must give PSAPs a 10-digit callback number and location data of the specific cell site or cell sector where the call originated.
* By Oct. 1, 2001, wireless carriers must provide more precise location information for each 911 call.
Jack Keating, president-elect of APCO and executive director of West Covina Communications in Southern California, believes these deadlines are overly optimistic. "You have to remember that the service doesn't just happen. The PSAPs have to ask the wireless carriers to provide the 911 service to them, the PSAPs have to have the means of receiving the information from the carriers, and they have to have some sort of cost-recovery system in place."
Most public safety officials agree that the first phase of the FCC mandate will be the least costly to implement but hardest in terms of setting standards for technology. Take, for example, standards for viewing information on wireless calls. According to Glover, who heads APCO's committee on wireless standards, most call centers and their dispatchers want the new information for wireless calls to come in on their existing equipment, but one leading solution calls for installing separate phone lines and screens for emergency cell calls.
The second phase presents an even more challenging issue: which technology to use for locating wireless callers to within a few hundred feet of the call. Dispatchers can't depend on textual information as they presently do for landline calls. Those calls come in with a street address, which the dispatcher reads out to the police, fire or medical crews. Location information for wireless calls will come in some form of a latitude and longitude coordinate; since dispatchers can't use that information, their workstations will have to be upgraded to handle electronic mapping.
Wireless carriers are quick to point out that the text vs. graphics display of location information is a Phase II issue only. For Phase I, carriers are telling PSAPs that they will be able to provide basic cell-site information in a text format that will not force a costly equipment upgrade.
However, carriers realize that they will have to settle on one technology for pinpointing the location of a caller, if PSAPs around the country are going to use the service. Right now, there are two competing technologies.
One uses a technique called Time Difference of Arrival (TDOA), which calculates a telephone's location, speed and direction of travel. TDOA works on the physics of radio waves. When a cellular phone makes a transmission, the radio waves travel like water ripples when a pebble is tossed into a pond. Special receivers installed at each cell site pick up the radio wave signals and time-stamp them. Once the signal has been time-stamped by several receivers, the time differences are calculated and the result is used to triangulate an intersection point at or near the true location of the phone.
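To make the triangulation step concrete, here is a deliberately minimal sketch. It brute-force searches a grid for the point whose predicted arrival-time differences best match the observed ones; the receiver positions, timestamps and grid step are made up, and a production system would use a proper least-squares or closed-form solver rather than a grid search.

```python
import itertools
import math

C = 299_792_458.0  # propagation speed of the radio signal, m/s

def predicted_tdoa(point, rx_a, rx_b):
    """Arrival-time difference (seconds) a signal from `point` would show
    between receivers rx_a and rx_b."""
    return (math.dist(point, rx_a) - math.dist(point, rx_b)) / C

def locate(receivers, timestamps, search_area, step=50.0):
    """Grid-search `search_area` for the point best matching the observed TDOAs."""
    pairs = list(itertools.combinations(range(len(receivers)), 2))
    observed = {(i, j): timestamps[i] - timestamps[j] for i, j in pairs}
    best, best_err = None, float("inf")
    (xmin, xmax), (ymin, ymax) = search_area
    x = xmin
    while x <= xmax:
        y = ymin
        while y <= ymax:
            err = sum(
                (observed[(i, j)] - predicted_tdoa((x, y), receivers[i], receivers[j])) ** 2
                for i, j in pairs
            )
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Three cell-site receivers and a phone actually at (1200, 800), in meters:
receivers = [(0.0, 0.0), (3000.0, 0.0), (1500.0, 2600.0)]
phone = (1200.0, 800.0)
timestamps = [math.dist(phone, r) / C for r in receivers]  # ideal, noise-free
print(locate(receivers, timestamps, ((0, 3000), (0, 2600))))  # -> (1200.0, 800.0)
```

With noisy, real-world timestamps the minimum shifts, which is why accuracy improves as more receivers contribute time stamps, as the New Jersey trial found.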
In New Jersey's test of location technology, the state used a system from TruePosition of Bala Cynwyd, Pa. [see "Put a Location in Wireless E-911," Government Technology, October 1997]. Test results showed that TDOA can locate about 67 percent of the wireless calls to within 410 feet, which meets the FCC's 125-meter requirement. Results also showed that a greater saturation of receivers on cell-tower sites improved the accuracy of the location information coming into the dispatchers.
Privacy advocates are concerned about a technology that can locate any wireless caller at any time. They would prefer to see carriers adopt a second alternative. This one calls for putting a global positioning system (GPS) receiver in the handset of cell phones. With GPS receivers, carriers wouldn't have to deploy costly receivers on every cell site. More importantly, callers would be able to control when their position is given out.
A GPS solution would also display the coordinates of the phone's location to the person carrying it. A wireless-phone user could call a gas station after running out of gas and tell the tow truck driver where to find the car by reading its position off the phone's screen.
TDOA advocates point out, however, that GPS would not work in the millions of cell phones currently in use, rendering a large segment of the population invisible to location technology. FCC regulations require carriers to locate two-thirds of all cellular calls.
In 1997, 10 states passed wireless E-911 bills dealing with cost recovery and indemnification of carriers. This year, legislation is expected to be introduced in 21 states. On average, most legislation will tax wireless customers 75 cents per month for the service.
While all the legislative activity indicates a growing awareness for the need to fund wireless E-911 service, debate is growing over who should pay and for exactly what. While public safety is driving the need for location technology to find cell-phone users, wireless carriers can expect to profit from other business uses for the technology.
For example, wireless carriers can use the technology to locate and detect possible fraud from cell-phone number cloning. Cloning involves stealing phone numbers from legitimate cell-phone customers and then using the numbers to make fraudulent calls. With location technology, carriers could quickly track down perpetrators of this fraud. Another application is billing by location. Wireless carriers could offer to bill customers according to where they use their phone, along with usage rates and time-of-day rates. Carriers could bill one rate for calls made on the road and another for calls made from home. Other possible applications involve fleet management and inventory monitoring.
Since location systems are as much a business opportunity as a public service, most PSAPs believe cost recovery should pay for the necessary modifications to their equipment and dedicated trunks for handling wireless 911 calls. PSAPs believe carriers should recover their own cost for the location systems. However, some carriers are arguing that states should pay for all initial costs and customers should then cover ongoing operational costs.
According to Keating, APCO believes that public funds should not pay for technology that will be used for business purposes. Others concur. In a newsletter published by NENA, OETS' Miller cautioned carriers against assuming that the FCC's requirements are tantamount to a federal mandate and that, therefore, public funds should pay for the location technology. "If the public sector were to pay for the location systems," wrote Miller, "they would reserve the right to approve the system design, including the placement of all components, and the system could not be used to generate revenue for commercial carriers."
Instead, Miller argued, let carrier competition build the best location technology. The result will be a system in place much sooner than one built under the direction of the public sector. The FCC rule states the performance requirements for locating wireless 911 calls, Miller explained; "It doesn't tell carriers how to get there."
There are a ton of groups out there that offer cybersecurity help and guidance; the trick, it seems, is finding the right one for your organization.
The Government Accountability Office this week issued a report on just that notion, saying: "Given the plethora of guidance available, individual entities within the sectors may be challenged in identifying the guidance that is most applicable and effective in improving their security posture. Greater knowledge of the guidance that is available could help both federal and private sector decision makers better coordinate their efforts to protect critical cyber-reliant assets."
Such information, though, is valuable in that these myriad groups offer guidelines and principles as well as technical security techniques for maintaining the confidentiality, integrity, and availability of information systems and data, the GAO stated.
"When implementing cybersecurity technologies and processes, organizations can avoid making common implementation mistakes by consulting guidance developed by various other organizations. Public and private organizations may decide to voluntarily adopt this guidance to help them manage cyber-based risks," the GAO stated.
Who are some of these key organizations? From the GAO:
- International Organization for Standardization (ISO): a nongovernmental organization that develops and publishes international standards. The standards, among other things, address information security by establishing guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization.
- International Electrotechnical Commission (IEC): an organization for standardization comprising all national electrotechnical committees. The commission publishes international standards, technical specifications, technical reports, and publicly available specifications and guides. The information security standards address safety, security, and reliability in the design and operations of systems in the power industry, among other things.
- The International Telecommunication Union: a United Nations agency whose mission includes, among other things, developing technical standards and providing technical assistance and capacity building to developing countries. The union has also developed technical standards for security and, more recently, engaged in other cybersecurity activities. For example, the union has established a study group for telecommunications security to focus on developing standards and recommendations associated with network and information security, application security, and identity management. Similarly, the union, through its members' efforts, prepared a report on cybersecurity best practices for countries seeking to organize national cybersecurity efforts.
- The International Society of Automation (ISA): a global and nonprofit organization that develops standards for automation. It has developed a series of standards to address security in industrial automation and control systems.
- The American National Standards Institute (ANSI): a U.S. organization that is responsible for coordinating and promoting voluntary consensus-based standards and information sharing to minimize overlap and duplication of U.S. standards-related efforts. In addition, it is the representative of U.S. interests in international standards-developing organizations.
In an earlier report, the GAO identified 19 global organizations "whose international activities significantly influence the security and governance of cyberspace."
The organizations range from information-sharing forums that are non-decision-making gatherings of experts to private organizations to treaty-based, decision-making bodies founded by countries. The groups address a variety of topics from incident response, the development of technical standards, the facilitation of criminal investigations to the creation of international policies related to information technology and critical infrastructure, the GAO stated.
From that GAO report a few key influential groups include:
- Asia-Pacific Economic Cooperation (APEC) is a cooperative economic and trade forum designed to promote economic growth and cooperation among 21 countries from the Asia-Pacific region. APEC's Telecommunication and Information Working Group supports security efforts associated with the information infrastructure of member countries through activities designed to strengthen effective incident response capabilities, develop information security guidelines, combat cybercrime, monitor security implications of emerging technologies, and foster international cybersecurity cooperation.
- Association of Southeast Asian Nations (ASEAN) is an economic and security cooperative comprised of 10 member nations from Southeast Asia. According to the 2009-2015 Roadmap for an ASEAN Community, it looks to combat transnational cybercrime by fostering cooperation among member-nations' law enforcement agencies and promoting the adoption of cybercrime legislation. In addition, the road map calls for activities to develop information infrastructure and expand computer emergency response teams (CERT) and associated drills to all ASEAN partners.
- The Council of Europe is a 47 member organization founded in 1949 to develop common and democratic principles for the protection of individuals. In 2001, the council adopted a Convention on Cybercrime to improve international cooperation in combating actions directed against the confidentiality, integrity, and availability of computer systems, networks, and data. This convention identified agreed-upon cyber-related activities that should be deemed criminal acts in countries' domestic law. The US Senate ratified this convention in August 2006.
- The European Union is an economic and political partnership among 27 European countries. Subcomponents of its executive body-the European Commission-engage in cybersecurity activities designed to improve (1) preparedness and prevention, (2) detection and response, (3) mitigation and recovery, (4) international cooperation, and (5) criteria for European critical infrastructure in the information communication technology sector. The European Commission also formed the European Network and Information Security Agency (ENISA), an independent agency created to enhance the capability of its members to address and respond to network and information security problems. Several independent organizations within Europe develop technical standards. The European Committee for Standardization is to work to remove trade barriers for European industry and provide a platform for the development of European standards and technical specifications. The European Committee for Electrotechnical Standardization is a not-for-profit technical organization that is responsible for preparing voluntary standards for electrical and electronic goods and services in the European market. The European Telecommunications Standards Institute is also a not-for-profit organization that is responsible for producing globally applicable standards for information and communications technologies including those supporting the Internet.
- Forum of Incident Response and Security Teams (FIRST) is an international federation of individual CERTs that work together to share technical and security incident information. It includes over 220 members from 42 countries. The members' incident response teams represent government, law enforcement, academia, the private sector, and other organizations. FIRST has also worked with multiple international standards organizations to develop standards for cybersecurity and incident management and response. In addition, FIRST uses the Common Vulnerability Scoring System as a standard method for rating information technology vulnerabilities, which helps when communicating vulnerabilities and their properties to others.
The power was out for 2 million electric customers in New York. Hospitals and nursing homes were evacuating patients and shutting down. Thousands of people were stranded in high-rise buildings, needing food and water. In Queens, houses were burning to the ground. Water rescues were taking place in New York City and on Long Island.
These events didn’t take place on different days. They all happened simultaneously when Hurricane Sandy struck New York on Oct. 29, 2012. They illustrate three key distinguishing aspects of a Type 1 disaster: scope and scale, velocity and ambiguity of information. Emergency managers responding to Hurricane Sandy in New York experienced all of these challenges.
New York state, New York City and county agencies responded to Sandy with the highest level of discipline, dedication and compassion. Even with our considerable capabilities, we were challenged by the immensity of the undertaking.
Sandy made landfall as an 800-mile-wide post-tropical cyclone. It shut down a metropolitan area with 13 million residents, caused approximately 50 fatalities and damaged more than 119,000 homes. About 10,000 people took refuge in 96 shelters; many more were displaced and stayed with relatives or in hotels. More than 140,000 gas meters were destroyed. Fuel distribution was cut off as refineries, pipelines and storage terminals sustained direct impacts from Sandy. One-quarter of all telecommunications capacity was knocked out.
Eight hospitals had to evacuate patients. New York City Health and Hospitals Corp., which serves 1.4 million patients annually at 11 hospitals, reported losses of more than $800 million. New York University Langone Medical Center estimated its losses at $1 billion and most services were shut down for two months. Forty nursing homes were evacuated, and 30,000 structures were checked by first responders and search and rescue teams.
Photo: FEMA Federal Coordinating Officer Michael Byrne answers Hurricane Sandy disaster questions using Twitter on Jan. 9 at the Joint Field Office in Forest Hills, N.Y. Photo courtesy of Andrea Booher/FEMA
Port Authority facilities sustained damages of more than $100 million. Wastewater treatment plants were impacted and some partially treated sewage was released into the environment. Flooding caused extensive damage to the Holland Tunnel, Queens Midtown Tunnel, subway and rail lines, many roads and the city’s ferry facilities, including the Staten Island Ferry. Sandy brought major coastal flooding to the New York coastline, along the entire south shore from Staten Island to Montauk Point and in bays and rivers. More than 3.6 million cubic yards of sand were washed away at sites constructed by the U.S. Army Corps of Engineers. Liberty Island and Ellis Island were flooded and had to be closed for repairs to infrastructure.
FEMA and its state and federal partners deployed nearly 8,000 personnel at the beginning of the disaster. More than 40 federal agencies participated in the response. We helped bring in 350 contract ambulance crews and deployed 20 disaster medical assistance teams. The Air Force transported power company trucks from California. The Army Corps of Engineers installed 211 generators at vital facilities and unwatered subways and tunnels. Nearly 1,200 FEMA community relations specialists (now called disaster survivor assistance specialists) went door to door in affected neighborhoods. All of this was in support of the existing, robust response capabilities of the state, city and counties.
A catastrophic incident, as defined by the National Response Framework, is “any natural or manmade incident, including terrorism, which results in extraordinary levels of mass casualties, damage or disruption severely affecting the population, infrastructure, environment, economy, national morale and/or government functions.” I don’t know whether Sandy qualifies as catastrophic, but there is no doubt in my mind that Sandy presented all three of the aspects that define a Type 1 event.
A Type 1 disaster comes at you fast and nonstop. Every county, village and town needs to be our priority — and understandably so. Many times we didn’t have a 100 percent solution. Often, responders had to rely on partial information. We had to establish objectives. Our guidance makes it clear that life-saving and life-sustaining objectives are top priorities until the situation is stabilized.
So many events were going on in so many places that we quickly established a geographical construct, with three branches and nine divisions, stretching from Long Island through the lower boroughs of New York City and north through the Hudson Valley. This allowed us to push down our decision-making close to the action.
There was a cascade effect. The water came in, causing a loss of power, loss of sewer, loss of basic transportation. Nursing homes and hospitals needed to be evacuated. Elevators in high-rise buildings didn’t work. Ingress and egress were blocked by 6 million cubic yards of debris. The fire department couldn’t get to whole city blocks of Breezy Point that were burning. We had multiple incidents within incidents.
Queens, Brooklyn, Staten Island, Nassau County and Suffolk County sustained most of the housing damage from Sandy — accounting for 114,000 of the 119,000 damaged homes reported to FEMA. With so many people displaced, the housing challenge became immense. In New York, people live vertically, so loss of power to one meter could affect hundreds of households. We had to find places to put people, in a hurry. The difficulty was compounded by the lack of available rental resources. The vacancy rate averaged 3.1 percent, including rent-stabilized and market-rate units, and what was available was expensive. Hotels were full with tourists in town for the holiday season. FEMA housed 1,100 members of the U.S. DHS Surge Capacity Force on three merchant marine ships to preserve hotel rooms for survivors.
There was virtually no excess housing stock and no place to put temporary housing units. We worked quickly with local and state agencies to implement the Sheltering and Temporary Essential Power program, which reimburses eligible applicants for temporary repairs that would allow them to remain in their homes while undertaking longer-term repairs. FEMA worked with state and federal partners to assist households in their search for longer-term housing, and FEMA housed nearly 6,000 families temporarily in hotels during the first six months.
New York consists of many cultures, and the language issue complicated the response. The New York Times has estimated that nearly 800 languages and dialects are spoken in the area. We had to be mindful of cultural traditions in the neighborhoods, as well as communication issues. FEMA distributed 1.1 million fliers with disaster assistance information in 26 languages and assigned translators to disaster recovery centers and to neighborhood outreach. In one case, we knew there were hundreds of Russian-speaking residents, many of them elderly, on the upper floors of high-rise buildings in Brighton Beach and Coney Island. We sent translators with our outreach to get word to survivors in that area.
Imperfect information is a dilemma in a Type 1 event. During the response to Sandy, communications were significantly interrupted. Cell service was out. Land lines were down. It was complex.
FEMA put incident management assistance teams into counties, co-located with county EOCs. We had to resolve ambiguities so we could decide where to apply resources. Public safety went to the top of the list.
Hundreds of community relations specialists, going door to door in impacted communities, assessed conditions and provided situational awareness at a neighborhood level. We established neighborhood task forces to provide information and support. Our liaisons at state and local agencies helped sift conflicting data.
From moment to moment, information flow in a Type 1 event is a challenge to leadership. The challenge is unlike that of any other type of incident.
As we have seen from several major disasters in the past two decades, the geographic scope and number of casualties varies but the impact is extraordinary in all Type 1 events. Hurricane Katrina took 1,500 lives, displaced 300,000 households and caused damage totaling $150 billion over 90,000 square miles. In three Mississippi counties alone, it left behind more debris than the 9/11 attacks and Hurricane Andrew, according to a U.S. Senate committee special report in 2006.
The attack on the World Trade Center damaged a relatively small geographic area but took nearly 3,000 lives and was a severe national trauma. Moreover, the economic impact on New York City’s economy was estimated at 429,000 jobs and $2.8 billion in lost wages in the subsequent three months. The 1994 Northridge earthquake in California damaged 114,000 structures over a 2,100-mile area and resulted in 72 fatalities. Beyond the physical damage, the impact on commuting in auto-dependent Southern California was significant.
Hurricane Sandy affected the nation’s largest metropolitan area. While the official fatality number was not as large as in some other major events, the devastation to homes, apartment buildings, commercial structures, hospitals, schools, subways and other public facilities was enormous. So far, more than $7 billion in federal funds has been expended on response and recovery. Communities will be rebuilding for years.
In recent years, our nation has increased its capability to respond to a major disaster, led by local teams and supported by the National Preparedness Directorate, which is responsible for enhancing our readiness through a comprehensive cycle of planning, organizing, equipping, training, exercising and evaluating.
After Katrina, the Senate Committee on Homeland Security and Governmental Affairs made this statement in its report: “We knew Katrina was coming. How much worse would the nightmare have been if the disaster had been unannounced — an earthquake in San Francisco, a burst levee near St. Louis or Sacramento, a biological weapon smuggled into Boston Harbor or a chemical weapon terror attack in Chicago? Hurricane Katrina found us — still — a nation unprepared for catastrophe.”
The Type 1 events that our nation has faced have presented severe challenges, but they were manageable. We haven’t experienced a truly catastrophic event, one that overwhelms our ability to respond.
All of us in the emergency management community should ask ourselves: Are we ready? | <urn:uuid:041a5c9f-2a9c-4024-a1d6-0e94faa8e4aa> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Whats-Different-About-Type-1-Event.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969953 | 2,057 | 2.65625 | 3 |
Which Are Most Valuable and Why?
There was a time when learning was an activity that took place exclusively in a classroom. It ran to a set schedule, was neatly packaged in nice binders and afterward, you would file those binders and wait for the next training program. Fortunately times have changed, and so have the ground rules of learning.
IT learning has evolved from a world where training was conducted exclusively in a traditional classroom environment to a world where value-added resources are made available to the learner prior to the class, during the class and, perhaps most importantly, back on the job. There are many drivers that have led to this evolution in learning. They include cost, time, technology advancement and most importantly, a desire to transfer the knowledge to where it’s needed most, the workplace.
The most valuable learning happens after the training, when you apply these new skills and concepts in the workplace. Post-class resources are essential to bridge the gap between the theoretical concepts and their real-world applications.
There are a number of resources available to help you retain and apply your new knowledge. They range from e-learning tools to textbooks to tests and assessments. Often, these types of reinforcement aids are essential to develop your skills, reinforce and validate what you learned in the classroom and help you transfer the knowledge to your job. Here are some of the most valuable post-class resources that can supplement your learning and provide the support you need to apply your newfound knowledge:
- Supplemental E-Learning: Supplemental e-learning can be a very powerful tool because of the ability to access it online, anytime you choose. Access to dynamic content gives you the opportunity to apply the content to real-world situations for just-in-time learning. With this tool, you can reinforce a skill learned in class or use it as a refresher while on the job as many times as you want, for months after the class.
- Virtual Labs: Virtual labs are a great way to gain hands-on experience using real equipment. In contrast to using a simulated computer-based training program, live virtual labs provide a real-world environment to experiment and practice on real equipment at your own time and pace. Some vendors that offer the virtual lab experience include Element K, New Horizons Computer Learning Centers and Productivity Point International.
- E-Books: Electronic books put a complete library at your fingertips. Technical reference libraries provide 24-hour-a-day access to full electronic versions of top technical books in an interactive, instant learning environment. When you have a time-sensitive problem to solve, electronic libraries, such as Books24x7 and Safari Bookshelf, will help you find the answer fast. Using the electronic library’s powerful search engine, you can search hundreds of books concurrently so you find the precise information you are looking for without spending valuable time searching through individual books.
- Trade Magazines and Periodicals: These very useful, often free resources have the most up-to-date information to help you keep pace with our rapidly changing industry. Be careful, though, to avoid information overload. Subscribe only to those publications that match your needs and reading style. Use these resources to stay current and as a guide for future exploration.
- Practice Exams and Assessments: Practice exams and assessment tests from vendors such as MeasureUp, LearnKey, Transcender and Self Test Software can help you measure how much you have learned and assess your readiness to go to the next level. They will help you validate that you have retained knowledge and reached a defined standard and can be used to test your readiness for a certification exam.
Learning can no longer be thought of as an isolated event. Before and during your classroom training or e-learning, you must spend time thinking about the tools needed back on the job to help ensure that your newfound knowledge is retained and applied. Only then will you have truly maximized your return on training.
Martin Bean is the chief operating officer for New Horizons Computer Learning Centers Inc., the world’s largest computer training company. | <urn:uuid:864f336e-49e4-4033-ac19-6aa580707334> | CC-MAIN-2017-04 | http://certmag.com/post-class-resources-which-are-most-valuable-and-why/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95567 | 830 | 2.65625 | 3 |
Researchers at MIT are leading an effort to use algorithms and crowdsourcing to create customized camouflage that someday will conceal public eyesores such as electrical boxes and air conditioning units.
Among the challenges to be overcome is creating camouflage that does the trick when viewed from different vantage points and in different light conditions.
In a second you'll see some examples and even get to participate in the crowdsourcing effort.
First, from an MIT press release:
The researchers developed a range of candidate algorithms and tested them using Amazon's Mechanical Turk crowdsourcing application, scoring them according to the amount of time volunteers took to locate camouflaged objects in synthetic images. Objects hidden by their best-performing algorithm took, on average, more than three seconds to find - significantly longer than the casual glance the camouflage is intended to thwart.
According to Andrew Owens, an MIT graduate student in electrical engineering and computer science and lead author on the new paper, the problem of disguising objects in a scene is, to some degree, the inverse of the problem of object detection, a major area of research in computer vision.
"Often these algorithms work by searching for specific cues - for example they might look for the contours of the object, or for distinctive textures." Owens says. "With camouflage, you want to avoid these cues - you don't want the object's contours to be visible or for its texture to be very distinctive. Conceptually, a cue that would be good for detecting an object is something that you want to remove."
This video shows various demonstrations, though the crowdsourcing app that follows seems to illustrate the concept more clearly.
But perhaps more enlightening than the video is this "camouflage game" that invites us "to find the hidden box."
The first scenes are intended to be relatively easy warm-ups; then there are scenes where the box is much more difficult to find.
I fared reasonably well, but there were more than a few where I ran out of time before spotting anything suspicious enough to click on.
This one's not a time waster, so go ahead and give it a try ... for science. | <urn:uuid:881a9083-7b70-4154-8163-c65468c3403b> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226982/data-center/using-math-and-crowdsourcing-to-camouflage-eyesores.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954581 | 438 | 2.625 | 3 |
Landsat on AWS
The Multimedia Commons is a collection of audio and visual features computed for the nearly 100 million Creative Commons-licensed Flickr images and videos in the YFCC100M dataset from Yahoo! Labs, along with ground-truth annotations for selected subsets. The International Computer Science Institute (ICSI) and Lawrence Livermore National Laboratory are producing and distributing a core set of derived feature sets and annotations as part of an effort to enable large-scale video search capabilities. They have released this feature corpus into the public domain, under Creative Commons License 0, so it is free for anyone to use for any purpose.
This data set has known applied use in emergency management, commercial market research, and other fields. More broadly, this dataset could be useful for the next generation of computer vision, human mobility, machine learning, and social computing research.
AWS has made the images, videos, feature corpus and annotation sets for the Multimedia Commons freely available on Amazon S3. Now anyone can use the data on-demand in the cloud without worrying about storage costs and download time.
Landsat 8 data is available for anyone to use via Amazon S3. All Landsat 8 scenes from 2015 are available along with a selection of cloud-free scenes from 2013 and 2014. All new Landsat 8 scenes are made available each day, often within hours of production.
The Landsat program is a joint effort of the U.S. Geological Survey and NASA. First launched in 1972, the Landsat series of satellites has produced the longest, continuous record of Earth’s land surface as seen from space. NASA is in charge of developing remote-sensing instruments and spacecraft, launching the satellites, and validating their performance. USGS develops the associated ground systems, then takes ownership and operates the satellites, as well as managing data reception, archiving, and distribution. Since late 2008, Landsat data have been made available to all users free of charge. Carefully calibrated Landsat imagery provides the U.S. and the world with a long-term, consistent inventory of vitally important global resources.
AWS has made Landsat 8 data freely available on Amazon S3 so that anyone can use our on-demand computing resources to perform analysis and create new products without needing to worry about the cost of storing Landsat data or the time required to download it.
Learn more about how Landsat data is used on NASA's Landsat Science site.
Landsat on AWS makes each band of each Landsat scene available as a stand-alone GeoTIFF and the scene’s metadata is hosted as a text file.
The data are organized using a directory structure based on each scene’s path and row. For instance, the files for Landsat scene LC81390452014295LGN00 are available in the following location: s3://landsat-pds/L8/139/045/LC81390452014295LGN00/
The “L8” directory refers to Landsat 8, “139” refers to the scene’s path, “045” refers to the scene’s row, and the final directory matches the scene’s identifier, which uses the following naming convention: LXSPPPRRRYYYYDDDGSIVV, in which:
- L = Landsat
- X = Sensor
- S = Satellite
- PPP = WRS path
- RRR = WRS row
- YYYY = Year
- DDD = Julian day of year
- GSI = Ground station identifier
- VV = Archive version number
In this case, the scene corresponds to WRS path 139, WRS row 045, and was taken on the 295th day of 2014.
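The convention is easy to unpack in code. The helper below is not an official parser - just a sketch that slices the fields at the widths listed above and rebuilds the scene's S3 prefix from its path and row.

```python
def parse_scene_id(scene_id):
    """Split a Landsat scene identifier (LXSPPPRRRYYYYDDDGSIVV) into fields."""
    assert len(scene_id) == 21, "expected a 21-character Landsat scene ID"
    return {
        "sensor":         scene_id[1],       # X
        "satellite":      scene_id[2],       # S
        "wrs_path":       scene_id[3:6],     # PPP
        "wrs_row":        scene_id[6:9],     # RRR
        "year":           scene_id[9:13],    # YYYY
        "doy":            scene_id[13:16],   # DDD, Julian day of year
        "ground_station": scene_id[16:19],   # GSI
        "version":        scene_id[19:21],   # VV
    }

scene = "LC81390452014295LGN00"
fields = parse_scene_id(scene)
print(fields["wrs_path"], fields["wrs_row"], fields["year"], fields["doy"])
# -> 139 045 2014 295
print(f"s3://landsat-pds/L8/{fields['wrs_path']}/{fields['wrs_row']}/{scene}/")
# -> s3://landsat-pds/L8/139/045/LC81390452014295LGN00/
```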
Each scene’s directory includes:
- a .TIF GeoTIFF for each of the scene’s up to 12 bands (note that the GeoTIFFs include 512x512 internal tiling)
- .TIF.ovr overview file for each .TIF (useful in GDAL based applications)
- a _MTL.txt metadata file
- a small RGB preview jpeg, 3 percent of the original size
- a larger RGB preview jpeg, 15 percent of the original size
- an index.html file that can be viewed in a browser to see the RGB preview and links to the GeoTIFFs and metadata files
For instance, the files associated with scene LC81390452014295LGN00 are available under s3://landsat-pds/L8/139/045/LC81390452014295LGN00/.
If you use the AWS Command Line Interface, you can browse the bucket with a single aws s3 ls command.
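For example, assuming the AWS CLI is installed, `aws s3 ls s3://landsat-pds/L8/139/045/LC81390452014295LGN00/ --no-sign-request` lists that scene's files without credentials. The equivalent programmatic access with boto3, using unsigned requests and the bucket and prefix structure described above, might look like this:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) access to the public bucket.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(
    Bucket="landsat-pds",
    Prefix="L8/139/045/LC81390452014295LGN00/",
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```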
An Amazon SNS topic (its ARN is listed on the Landsat on AWS page) provides notifications whenever a new Landsat scene has been added to Landsat on AWS.
This topic publishes an Amazon S3 event message whenever a scene-level index.html file has been created, which is the last step in the process to make scene data available on Amazon S3. It will only accept subscriptions via Amazon SQS or AWS Lambda.
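A minimal boto3 subscription sketch follows. Both ARNs are placeholders, not real values; the actual topic ARN comes from the Landsat on AWS page, and the SQS queue's access policy must separately allow the topic to send messages to it.

```python
import boto3

# Placeholder ARNs for illustration only - substitute real values.
LANDSAT_TOPIC_ARN = "arn:aws:sns:us-west-2:000000000000:NewSceneHTML"   # hypothetical
MY_QUEUE_ARN = "arn:aws:sqs:us-west-2:111111111111:landsat-new-scenes"  # hypothetical

sns = boto3.client("sns", region_name="us-west-2")
sns.subscribe(
    TopicArn=LANDSAT_TOPIC_ARN,
    Protocol="sqs",        # the topic accepts only SQS or Lambda subscribers
    Endpoint=MY_QUEUE_ARN,
)
```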
To receive updates about Landsat on AWS, please fill out the form below. | <urn:uuid:3bdcaab9-808f-45fd-80ca-7689a4a63235> | CC-MAIN-2017-04 | https://pages.awscloud.com/public-data-sets-landsat.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00275-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.887847 | 1,119 | 2.9375 | 3 |
Last year, the event was called the Data Scientist Summit and focused largely on data scientist "rock star" speakers and panel members. This year's title signaled an intention to focus instead on data science teams. The Greenplum division of EMC and EMC itself have sponsored the events both years.
What is data science? According to Wikipedia, "data science defines a discipline that incorporates applying various degrees of statistics, data visualizations, computer programming, data mining, machine learning and database engineering to solve complex data problems." The very short article goes on to say that a Data Science Journal has been published since April 2002, so data science is at least a decade old.
Why is data science relevant? Think about it this way: When questioned about the value of the first hot air balloons, Benjamin Franklin is said to have asked in response, "What is the value of a newborn baby?" Actually, data science is probably a long way from the newborn-baby stage, although it still has a long way to go before it achieves full maturity. Data science leads to technologies such as search engines like Google's, which use data outside the page itself; friendship relationships (think Facebook); big-data analysis; and product recommendation systems. In short, data science and data scientists are all about thinking creatively about what information might be useful and putting it in a useful context from which value can be derived.
Below are descriptions of some of the topics discussed at Data Science Summit 2012:
- Predictive modeling: Predictive modeling has been with us for a long time, but data science goes far beyond traditional regression analysis to pushing the boundaries of what is possible, often involving multiple disciplines in addition to statistical learning, such as how to mine massive data sets.
- Data visualization: Making use of the power of our eyes to process a lot of information all at once, visualization can provide illumination where insight might not otherwise be easy to obtain.
- Impact of data science: The individual speakers and panels were keenly aware of how collaboration and other social tools impact products developed by teams of data scientists. They were also focused on the data collected by products that are widely deployed on the Web. Such data collection may result in a conflict between convenience and privacy. For example, analyzing an aggregation of medical records from many people may result in obtaining information that can improve the treatment of disease. However, even if individuals allow their information to be pooled anonymously, effectively securing that very private information is difficult, at best.
- Tidbits: With torrents of real-world data captured naturally from the Web, data conditioning, rather than the strict data quality required in traditional enterprise systems, is often enough, since the outliers may actually contain information of value. As a result, one of the key challenges of data science is being able to separate correlation from causality.
Overall, Data Science Summit 2012 was interesting and useful and should be continued in the future, but a lot of work has to go on in the field to build a superstructure that can focus and promote clear thinking about data science and its potential impacts.
The "horse and carriage" relationship between computation and information has long been expressed by the old term "data processing." Both are needed, but if the center of the IT solar system is becoming more about data, then data science as the next stage in computer science becomes more attractive and important.
However, the data science industry also requires more exposure. Data Science Summit 2012 was useful for sparking thought about the broad issues affecting data science, but its messages need to be carried to a wider audience. Why? So more people can understand and be part of a dialog that is likely to have an impact on their lives in many ways (with not all effects being necessarily beneficial).
The data science community needs to think not just in terms of individuals, teams and projects, but also in terms of how it will act as a functioning industry. The summit was a valuable starting point, but much work needs to be done before the next event. As projects lead to findings and conclusions that expand upon case studies, the results will give deeper direction and substance to the data science movement.
EMC is a client of David Hill and the Mesabi Group. | <urn:uuid:03e8afaa-5257-46a5-88bd-fc5525f83505> | CC-MAIN-2017-04 | http://www.networkcomputing.com/storage/emcs-data-science-summit-2012-envisioning-future-data/1568186128?piddl_msgorder=thrd | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00183-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954883 | 854 | 3.1875 | 3 |
Radiation emissions from mobile phones will have to be disclosed by manufacturers to US consumers in three to six months' time.
This follows new guidelines being imposed in August by the Washington-based Cellular Telecommunications Industry Association.
All mobile phone manufacturers will have to report the radiation reading - or Specific Absorption Rate - which is a measure of the amount of radiation to which the body is subjected while using a mobile handset.
UK research caused controversy in May when scientists reported there was currently no health risk from phone handsets, but that research in future might contradict this. | <urn:uuid:5de0db75-6668-4a5b-8fa1-d0fc02bd1e8e> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240040748/US-forces-mobile-makers-to-reveal-handset-emissions | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964927 | 141 | 2.640625 | 3 |
At the Government Technology Conference's Security Summit last week, keynote speaker Joanne McNabb, California's chief privacy officer, asked a poignant question: why does privacy matter?
For many reasons, it turns out.
For a start, it is a law. In California, privacy is a right defended by the state's constitution. "All people are by nature free and independent and have inalienable rights. Among these are enjoying and defending life and liberty, acquiring, possessing, and protecting property, and pursuing and obtaining safety, happiness, and privacy." [emphasis added]. There are nine other states in the Union which have constitutional rights to privacy.
Applying this to daily life can seem daunting given the way technology pervades society today. Citizens who pass multiple video cameras each day, or who have had their identity stolen might wonder if this right is being protected.
Which leads to another of McNabb's points: protecting privacy means keeping the trust of constituents. McNabb quoted a Gartner study which said that 46 percent of online consumers changed their behaviors due to fear of security issues. This translates to approximately 3.7 percent of the adult population.
Another reason that privacy should be important to those in office is that if ignored it can hurt people. People who have fallen victim to identity theft comprise a surprisingly high number of the population. In 2006 there were 8.4 million victims, according to a Javelin survey. Most identity theft issues are financial, costing on average $531 in out-of-pocket expenses, and an average of 25 hours to recover losses. The total cost of identity theft in the U.S. in 2006 was $49 billion.
"Beyond financial identity theft, there are rarer but very troubling kinds -- medical identity theft, for example, which has been called the information crime that can kill you," said McNabb. In this type, someone gets the personal information of a victim, such as a Social Security Number, and then receives medical care in another's name. This can "pollute" medical records with the diagnoses of other people. "So now you've got somebody else's diagnosis, somebody else's conditions in your medical file, and you don't know about it," McNabb explained. "So you're in being treated for something, and they think you are allergic to something you're not, or not allergic to something you are, and it can kill you. There's not a lot known about this yet... but it's a serious thing."
Although it can not physically kill us, the loss of democracy is also an important privacy issue. McNabb pointed out that "privacy is a necessary condition of individual autonomy and dignity" and that "our democratic form of government requires autonomous individuals who need a degree of privacy to play their various roles as citizens." Government accountability, secret ballot systems, and freedom of the press all hinge on having privacy of thought. Where most people grasp the metaphor of "Big Brother watching us," McNabb used Franz Kafka's The Trial to illustrate the downward spiral of a person's life when privacy and dignity are taken away.
Using technology can mean greater protection or greater threat to personal privacy depending on how that technology is used. For government agencies who collect and store the data on citizens, this means a "heightened responsibility to use personal information appropriately," McNabb explained. She suggested that information handling practices should be regularly re-examined "in light of new technologies and changing needs" and that it "doesn't mean just protecting personal information -- but continually reconsidering whether [it is needed] at all."
"Technology lets us collect lots of personal information, analyze it and manipulate it. But human beings must make the rules and establish the controls on using information," McNabb concluded. "Protecting privacy means protecting people."
The Security Summit was sponsored by Citrix, Oracle, Symantec and Verizon Business.
Big data and Internet of Things used to help improve rural weather forecasts
Farmers could get better weather forecasts thanks to IoT sensors. Schneider Electric has rolled out over 4,000 WeatherSentry sensors across the US to give a more complete view of weather patterns across the country.
The system will use Big Data techniques to more accurately predict weather and the firm claimed this would help farmers increase efficiency, profitability and sustainability.
The WeatherSentry sensors capture field-level weather and soil conditions that are used to create accurate local temperature and precipitation forecasts, alongside storm record archives and historical weather logs, to help assess and plan for the weather’s impact on day-to-day agriculture operations.
The firm claimed that its sensors generated more agricultural data than any other provider's. It said its Geographic Information System (GIS) alerting system provided real-time data to allow farmers to plan crop locations, optimise water and soil usage and prioritise activity based on 15-day forecasts.
“Despite the many technological advancements made in agriculture in the past century, weather remains a high-cost, high-risk variable that impacts all corners of the industry,” said Ron Sznaider, senior vice president, cloud services, Schneider Electric.
“Taking the guesswork out of weather events allows farmers, ranchers and landowners to make better operational and financial decisions that contribute directly to the sustainability of the operation and the health of their bottom line.”
Sznaider said Big Data and IoT could be used to mitigate the effects of climate change and solve one of the most critical challenges for farmers around the world.
“The Internet of Things will revolutionise how we bring about sustainable food production and we are excited to be at the forefront of delivering the precision technologies that help meet this truly global need,” he added.
Gartner research vice-president Bettina Tratz-Ryan said that IoT would have a significant role in minimising the impact of climate change.
She said that IoT would unlock “the potential of analysing real-time data from different business processes and visualizes resource inefficiencies.
“In addition, the increasing availability of data sources from the IoT will bring more information on the context in which the sensor is monitoring an environmental event or state. That context provides insight into an assessment of the dependencies between user or operator behaviour, machine-technology-process operations, or external influences that could lead to environmental inefficiencies,” added Tratz-Ryan.
Tratz-Ryan said that applications and social networks allowed us to share personal environmental best practices with others, creating a dynamic community approach. “All of these methods have one thing in common: the ability to leverage data to make real-time changes toward a more sustainable outcome,” she said. | <urn:uuid:6b8c0157-296c-4a22-a83f-a70c46251fa0> | CC-MAIN-2017-04 | https://internetofbusiness.com/schneider-put-iot-into-farming-to-monitor-weather-patterns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00541-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920488 | 576 | 2.734375 | 3 |
I was just reading an interesting article on popularmechanics.com about new hybrid-electric bus fleets being used to provide electricity during emergencies such as the recent hurricane in Louisiana. In the article they say that cities are buying these vehicles because they reduce fuel costs and air pollution, but also because these hybrid buses generate more power and are more portable than traditional generators.
I had an idea: why not take the concept of a containerized data center and combine it with a hybrid-electric engine or bus? Diesel-powered engines in hybrid electric buses store energy in batteries, which in a disaster could feasibly power a mobile data center for a significant amount of time, even after the fuel runs out. Hybrid engine technologies could serve as the basis of self-powered Mobile Hybrid Data Centers for use in emergencies and other situations where the location and access to power may be problematic.
The article also states that "BAE's newest buses, expected to reach full rate production next year, produce 200 kilowatts when the engine speed is at 2300 rpm. BAE estimates a single hybrid city bus could provide power to 36 households for a full day or a 12,400 sq.-ft. hospital for 22 hours, on a single tank of diesel." I haven't done the math, but I can only assume this could easily keep a mobile data center up and running for quite a while.
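Actually, here is a quick back-of-envelope estimate. The household consumption figure and the container load below are assumptions of mine, not numbers from the article:

```python
# Rough back-of-envelope estimate -- assumed figures, not from the article.
HOUSEHOLD_KWH_PER_DAY = 30      # assumption: typical US household uses ~30 kWh/day
BUS_HOUSEHOLDS = 36             # from the BAE estimate: 36 households for a full day
CONTAINER_LOAD_KW = 100         # assumption: a modest containerized data center draw

tank_energy_kwh = BUS_HOUSEHOLDS * HOUSEHOLD_KWH_PER_DAY   # ~1,080 kWh per tank
hours_per_tank = tank_energy_kwh / CONTAINER_LOAD_KW        # ~10-11 hours

print(f"Energy per tank: ~{tank_energy_kwh} kWh")
print(f"Runtime for a {CONTAINER_LOAD_KW} kW container: ~{hours_per_tank:.1f} hours")
```

On those assumptions a single tank buys roughly ten hours for a mid-sized container, and the battery pack would add some margin on top of that.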
The article goes on to say that trailblazing cities could become models for the delivery of power via mass transit to disaster-prone urban centers like New Orleans, which has only restocked on biodiesel buses since Katrina.
Anyone at the DoD or department of homeland security, please feel to get in touch. | <urn:uuid:28548da3-09b9-43d4-9204-3474ac4fe520> | CC-MAIN-2017-04 | http://www.elasticvapor.com/2008/09/mobile-hybrid-data-center-hybrid-bus.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00081-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959173 | 348 | 2.625 | 3 |
Wireless Regulations and Legislation
A recent survey conducted by Northern Sky Research, a satellite and wireless technology and applications market research and consulting firm, shows that 2006 will be a critical year for broadband wireless technologies, with the likely release of the first ZigBee, UWB and WiMAX-based products. As wireless technologies continue to advance, one would expect federal legislation and regulation to advance as well. However, it seems that regulation lies in the hands of the technology developers and vendors. Over the past five to 10 years, the FCC Wireless Telecommunications Bureau has gradually become more flexible, pro-competitive and less regulatory with licensing spectrum for companies. However, wireless technologies that operate on the unlicensed band, like Wi-Fi or Bluetooth, are frequently confronted with attacks and may benefit from more regulation.
“Wi-Fi and Bluetooth without a doubt suffer the most intrusions. Implementations of both of these technologies are designed to be open and simple for users to get connected by default, and this leaves users who fail to properly secure the wireless connections open to attack,” said Devin Akin, chief technology officer for the CWNP program. “Additionally, some wireless implementations, such as Wi-Fi hotspots, are designed to be unsecured. The popularity of Wi-Fi hotspots only increases the risk of attacks.”
The fact is that Wi-Fi and Bluetooth vulnerabilities and attacks are a real and increasing threat to the security of companies and of everyday end users. This is because wireless networks based on the 802.11b, 802.11a and 802.11g standards are commonly implemented in corporate America today. “Security measures are available for almost every wireless technology, but in many cases, implementers do not use them. The most vulnerable networks are the ones implemented by those organizations who do not put a high priority on data security,” Akin said. “In my opinion, some of the wireless data technologies that are typically not secured are Wi-Fi, Bluetooth and Infrared. Of course, this varies between user groups—home users secure their Wi-Fi networks much less often than organizations do.”
Wireless developers are increasingly trying to meet the requirements of growing technology legislation and standards—implemented not only by the government, but also by vendors. “Many wireless intrusion prevention system (WIPS) vendors are integrating tools into their enterprise-class products that allow administrators to demonstrate compliance with GLBA, HIPAA, SOX and others,” Akin said.
The Gramm-Leach-Bliley Financial Services Modernization Act (GLBA) was one of the first pieces of legislation to modernize the U.S. financial industry by breaking down barriers between banking and related areas such as securities and insurance. But more legislation was needed, and the Sarbanes-Oxley Act (SOX) of 2002 was enacted in response to the high-profile Enron and WorldCom financial scandals. The legislation defines which records are to be stored and for how long, which affects both the financial and IT sides of corporations.
In the health insurance industry, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 was enacted not only to ensure and protect health insurance coverage for people who lose their jobs, but also to standardize health-care-related information systems as well. HIPAA established standardized mechanisms for electronic data interchange security and confidentiality of all health-care-related data that required extensive changes to the way that health providers conduct business.
“There are a number of strong authentication and encryption techniques in the marketplace—each with strengths and weaknesses,” Akin said. “My preference is for standards-based security methods such as 802.11i, which is designed to be scalable, very secure and to minimize security protocol overhead. But this depends specifically on the environment where the wireless device is used. For example, in the enterprise, the most secure and scalable solution type is 802.11i compliant 802.1X/EAP. In remote-access environments where a user is accessing a corporate network from a wireless hotspot, VPN technology is the best solution because it is end-to-end. But if a wireless user is checking e-mail and browsing the Internet from a hotspot, he might simply use secure applications, such as POP3/SSL or HTTPS.”
According to the NSR survey, in 2006, WiMAX vendors are slated to deploy the first certified 802.16d solutions primarily for the licensed spectrum, and Wi-Fi will see the approval of the 802.11n spec. These technologies, although extremely exciting, will also have ramifications if they are not installed and secured properly. Because the bandwidth, spectrum and performance of these technologies are increasing, there is wider room for attacks as well.
Therefore, technology developers and vendors need to continue to strive to surpass their regulation, authentication, encryption and compliance efforts. Akin said IT wireless professionals should continue their personal development in order to better the security of these technologies as well. “Choose a training program that provides the fundamentals of the technology, which include administration, security, analysis/troubleshooting, QoS and design. Take in as much information as you can through instructor-led courses, CBT, study guides and certification exams,” Akin said. “This process should include the building of a home lab of hardware and software for the purpose of hands-on skill building. These professionals should continually read the newest whitepapers and books on the technology you have chosen, and maintain their wireless certifications. This will force professionals to take a look at areas where they are technically weak.”
–Cari McLean, email@example.com | <urn:uuid:1f346324-39cb-4222-becb-c47c61ecda47> | CC-MAIN-2017-04 | http://certmag.com/wireless-regulations-and-legislation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00109-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957112 | 1,171 | 2.53125 | 3 |
SSL certificates are the foundation of internet security, but they are increasingly under fire and, when improperly set up, become an unwatched door for cybercrime and cyber-espionage. Users – and your customers – trust SSL by default, which is why the online security protocol is the perfect attack tool.
As more and more certificates are compromised, some have begun to question the security of Certificate Authorities and the overall SSL infrastructure. But is SSL really broken, or is the paranoia a case of sensationalized news coverage? A recent report from the Ponemon Institute underscores the fact that organizations are failing to properly manage their certificates, with 51% globally admitting they have no idea how many keys and certificates they have in use, and all of the 2000+ organizations polled indicating they had been victimized by man-in-the-middle attacks as the result of a certificate compromise.
The simple fact is that a lack of proper configuration is often responsible for SSL compromises, and solutions to the problem are within any organization’s reach – all it takes is a little time and know how. Join our panel of experts as they examine SSL infrastructure, its vulnerabilities, and provide simple steps that you and your organization can take to combat the threat. | <urn:uuid:bae5122c-007c-4ebb-bd60-056529bfe858> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/webinars/is-ssl-secure-cutting-through-the/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00045-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967859 | 244 | 2.609375 | 3 |
This talk from Shmoocon 2013 provides a DIY guide to using Trusted Computing on embedded devices.
The authors introduce a low-cost schematic using Atmel’s CryptoModule (AT97SC3204T) and CryptoAuthentication (AT88SA102S) ICs, and release drivers for UEFI, U-Boot, and the Linux kernel.
Using these ICs as a base, they demonstrate (and provide code) ways anyone can use Trusted Computing concepts for embedded projects (Linux IMA, signed data exchange), most importantly, a secured bootstrap from ROM code to a userland application.
They also demonstrate how the TPM can be used to encrypt and sign Ethernet frames. This is a response (and implementation of a well-known mitigation strategy) to attack vectors using various pre-boot environments such as UEFI, BIOS, Option ROM, and other bootloaders.
By the end of the presentation, participants should understand how to use a TPM to secure their creative embedded projects.
About the authors
Teddy is a computer science researcher working for the USA with a focus on large-scale enterprise network modeling and simulation. He has a passion for security and CTF competitions.
David is currently employed as an incident responder with a strong interest in software engineering. He is a recent college graduate with a passion for cryptography, cryptanalysis and digital privacy. | <urn:uuid:1170db02-eb96-43aa-8471-d13e83f1be7c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/04/04/diy-using-trust-to-secure-embedded-projects/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.899073 | 285 | 2.640625 | 3 |
It would be wonderful if there were fewer traffic accidents and auto deaths, so can you imagine a world where your dashboard suddenly flashes the warning "Do Not Pass"? This sort of safety warning is happening right now with the development of Dedicated Short Range Communications (DSRC), cars receiving data from other vehicles within 1,247 feet, or 380 meters, to warn of hazards unseen by the driver. Yet connected vehicles talking to each other and to the infrastructure could also create new types of tracking and privacy invasions, such as your car transmitting your speed and landing you a speeding ticket.
Privacy and the Car of the Future: Considerations for the connected vehicle [PDF] was presented at the 29th Chaos Communication Congress (29C3) in Hamburg, Germany by Christie Dudley. She wrote, “I was contracted to do a privacy audit in July to identify aspects of the technology that would pose threats to users' privacy, as well as offering summaries of methods to partially or completely compromise the system. For this program to be successful, it must be accepted by the public since the benefits are derived from others' broadcasts.”
About 2,800 vehicles are talking to each other in the U.S. Department of Transportation's Connected Vehicle Safety Pilot in Ann Arbor, Mich. These cars wirelessly send signals to each other, “warning their drivers of potential dangers such as stopped traffic or cars that might be blowing through a red light. They can even get traffic lights to turn green if no cars are coming the other way.” The US DOT will decide later this year if DSRC should be required for all new cars. The German government is considering investing in this messaging technology so it could be built into infrastructure.
DSRC sends out a "basic safety message" every 10 seconds. It uses IEEE 802.11p and 5.9 GHz in the US and Europe. Dudley stressed that the protocol is not like OnStar or CAN bus—it can't shut the car down or help to break in. In fact, she said CAN bus (controller area network), where all current auto sensors now connect, is considered so insecure that it is "untrusted by auto manufacturers," and "all data from that bus is suspect."
Instead, DSRC will be similar to Slotted Aloha and would be a totally independent control unit with its own GPS, inertial sensors, and interfaces that are not related to CAN bus.
Dudley explained that the packets transmitted would include a basic safety message with 50 fixed data elements, such as the status of all four brakes, GPS time-sync, speed, path history, and path prediction. These messages must be considered trustworthy, so that is where certificates come into play. The certificates must be used for only a limited time, or else they could be abused and allow for tracking. The Certificate Authority is never supposed to interact with the device it is issuing the certificate for. Malfunctioning equipment would have its certificate revoked and its fingerprint blacklisted. If the system doesn't invalidate itself after internal sensor checks fail, then the entire unit must be replaced. Replacement costs are high, which is why they hope that certificate revocation and blacklisting will work.
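To make that list of data elements more concrete, a basic safety message could be sketched as a simple data structure. The field names below are illustrative only, not the actual SAE J2735 element names or encoding:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BasicSafetyMessage:
    """Illustrative sketch of a DSRC basic safety message payload (not the real encoding)."""
    msg_count: int                    # rolling message counter
    temporary_id: bytes               # short-lived ID tied to the current certificate
    gps_time: int                     # GPS-synchronized timestamp
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    brake_status: int                 # bit flags, one per wheel
    path_history: List[Tuple[float, float]] = field(default_factory=list)
    path_prediction_radius_m: float = 0.0   # estimated curvature of the path ahead
```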
Although the hardware is ready to ship, the software is not done. No one is quite sure yet how to load the certificates, or whether a certificate would track coming and going for an entire trip, or only one way. Also, what if innocent people were accidentally blacklisted? There have been discussions about delivering certificates via cellular or Wi-Fi – but both would allow people's vehicles to be tracked, something that nobody wants. There has also been talk of using a separate SIM for this system, not the one in your personal mobile phone. There is a MAC layer which is unrouteable and is good for privacy. However, if there is ever any algorithm to make the network routeable, it will also make vehicles trackable.
Before President Obama mandated black boxes in vehicles, we looked at how your car's black box spies on you. For this new tech, much like mandated tire pressure sensors, manufacturers are willing to add it to cars, since the cost can be passed on to consumers. Yet there have been concerns about geo-targeted advertising – forcing ads into someone's car based on their location. Dudley said it has been discussed as a way to fund this technology. Other potential privacy pitfalls include manufacturers using the system for commercial applications and data brokers tapping into the system if it were integrated into infrastructure.
One of the most worrisome privacy concerns deals with law enforcement, since you would be broadcasting your speed. Dudley asked, what could the cops do with this? Issue tickets based on your car telling the cops you are speeding? Correlate location and speed to independent identification such as cameras or automated license plate readers (ALPRs) like the ACLU has warned the DEA uses to track us? Hackers would disable it in an instant, the first time such an unwelcome surprise speeding ticket comes in the mail.
What can hackers do? Hack the radio and the protocols. While Dudley didn’t go into it in depth, she pointed interested parties toward searching for manufacturers. There were eight device manufacturers that each produced five “Here I Am” units for US DOT Safety Pilot qualification testing.
The time to make changes that would better protect privacy is now, before this emerging technology is fully implemented.
Below is the 29C3 Privacy and the Car of the Future video presentation:
Images used with permission from Christie Dudley's Privacy and the Car of the Future presentation [PDF]. | <urn:uuid:ebacd4e6-7f5d-4edf-bf15-513c16be425d> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2473995/data-privacy/privacy-and-the-car-of-the-future--cars-talking-to-each-other-and-to-infrastructure.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00559-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963506 | 1,146 | 2.71875 | 3 |
Ever wonder where all of the IT knowledge is stored?
Computerworld recently released a study on the average age of programmers and the major effects their retirement could have on the future of the IT industry, in particular, Big Data. Mainframers are highly talented programmers who have developed their IT intelligence over decades of experience. Pedro Pereira, an experienced tech writer, discusses the importance of polished programmers passing their knowledge to the younger generation so that databases remain properly protected in his blog post, Mainframe Skills Shortage Could Hinder Big Data.
For years, mainframes have been the backbone of enterprise data storage. Big Data has made great strides and is now a credible IT priority for businesses across the world. It offers tools, analytics and software that can handle very large workloads. Because of the enormous amount of data collected daily, mainframes remain a reliable key to data success.
As the study suggests, the average mainframe programmer is reaching the end of their career. These professionals house some of the most valuable IT mainframe knowledge which is crucial to pass along to the up and coming professionals taking over the industry. In addition to “grooming” future mainframers, vendors have started to build programs to generate excitement over enterprise IT to identify future mainframe talent at the high school and college levels. Read the blog to expand your thoughts and to find out ways to prevent the loss of IT information as a result of role transitions. | <urn:uuid:27d1bbb3-8db3-46ac-9406-1e7562e8c1d9> | CC-MAIN-2017-04 | http://www.dbta.com/Articles/ReadArticle.aspx?ArticleID=88166 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00467-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942844 | 297 | 2.984375 | 3 |
2.1.7 What are Message Authentication Codes?
A message authentication code (MAC) is an authentication tag (also called a checksum) derived by applying an authentication scheme, together with a secret key, to a message. Unlike digital signatures, MACs are computed and verified with the same key, so that they can only be verified by the intended recipient. There are four types of MACs: (1) unconditionally secure, (2) hash function-based, (3) stream cipher-based, or (4) block cipher-based.
- Simmons and Stinson [Sti95] proposed an unconditionally secure MAC based on encryption with a one-time pad. The ciphertext of the message authenticates itself, as nobody else has access to the one-time pad. However, there has to be some redundancy in the message. An unconditionally secure MAC can also be obtained by use of a one-time secret key.
- Hash function-based MACs (often called HMACs) use a key or keys in conjunction with a hash function (see Question 2.1.6) to produce a checksum that is appended to the message. An example is the keyed-MD5 (see Question 3.6.6) method of message authentication [KR95b].
- Lai, Rueppel, and Woolven [LRW92] proposed a MAC based on stream ciphers (see Question 2.1.5). In their algorithm, a provably secure stream cipher is used to split a message into two substreams and each substream is fed into a LFSR; the checksum is the final state of the two LFSRs.
- MACs can also be derived from block ciphers (see Question 2.1.4). The DES-CBC MAC is a widely used U.S. and international standard [NIS85]. The basic idea is to encrypt the message blocks using DES CBC and output the final block in the ciphertext as the checksum. Bellare et al. give an analysis of the security of this MAC [BKR94]. | <urn:uuid:44758aa9-f228-4731-b18a-715fdbc35892> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-message-authentication-codes.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91359 | 438 | 3.609375 | 4 |
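As a concrete illustration of the second and fourth constructions above, the sketch below computes an HMAC with Python's standard library and a DES-CBC MAC with the third-party pycryptodome package (assumed to be installed). It is a toy example: the zero padding and fixed keys are used only to keep it short.

```python
import hmac
import hashlib
from Crypto.Cipher import DES   # pycryptodome, assumed available

def hmac_tag(key: bytes, message: bytes) -> bytes:
    # Hash function-based MAC; SHA-256 is used here rather than MD5
    return hmac.new(key, message, hashlib.sha256).digest()

def des_cbc_mac(key: bytes, message: bytes) -> bytes:
    # Block cipher-based MAC: encrypt in CBC mode, keep only the final ciphertext block
    padded = message + b"\x00" * (-len(message) % DES.block_size)
    cipher = DES.new(key, DES.MODE_CBC, iv=b"\x00" * DES.block_size)
    return cipher.encrypt(padded)[-DES.block_size:]

message = b"Transfer 100 EUR to account 12345"
tag = hmac_tag(b"shared-secret-key", message)

# The verifier recomputes the tag with the same key and compares in constant time
assert hmac.compare_digest(tag, hmac_tag(b"shared-secret-key", message))
print("HMAC tag:   ", tag.hex())
print("CBC-MAC tag:", des_cbc_mac(b"8bytekey", message).hex())
```

Because the same key both creates and verifies the tag, a MAC proves integrity and authenticity to the key holder but, unlike a digital signature, provides no non-repudiation.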
Nov. 15 — Some of NASA’s best and brightest will showcase more than 30 of the agency’s exciting computational achievements at SC13, the international supercomputing conference, Nov. 17-22, 2013 in Denver. Highlights include:
- a summary of supercomputing-assisted science revelations made during NASA’s Mars rover Curiosity’s first year on the Red Planet;
- the Kepler mission’s new data-centric strategy for continuing the search for Earth-sized planets;
- unique insights into the physical mechanisms underlying galaxy formation gained through high-resolution 3-D simulations; and
- computational methods to improve the design of the Space Launch System (SLS) and next-generation launch pad.
Spectacular scientific visualizations from these and other NASA supercomputing applications will appear on a state-of-the-art 10-foot-wide hyperwall display.
“NASA’s supercomputing technologies and expertise are key to the success of many missions,” said Rupak Biswas, deputy director of the Exploration Technology Directorate at NASA’s Ames Research Center, Moffett Field, Calif. “This includes expanding our knowledge of the ocean’s role in climate change and the global carbon cycle, understanding how space weather affects technological systems on Earth, and improving the design of aircraft components to reduce the level of noise we are exposed to every day.”
Each day at SC13 Biswas will present a talk on NASA’s new studies to determine the potential for quantum computing to solve difficult problems of importance to the agency. A D-Wave Two quantum computer was installed last summer in Ames’ NASA Advanced Supercomputing (NAS) facility, which also houses the agency’s most powerful supercomputer, Pleiades, an SGI ICE system used to support NASA science and engineering missions. Pleiades recently was expanded to include the newest generation of SGI ICE X systems containing a total of 6,624 Intel Xeon E5-2680v2 (Ivy Bridge) processors (66,240 cores). The expanded system runs at a peak performance rate of 2.87 quadrillion computer operations per second (petaflops).
In addition to Pleiades, the NASA Center for Climate Simulation (NCCS), located at NASA’s Goddard Space Flight Center, Greenbelt, Md., upgraded its Discover supercomputer with the addition of an IBM iDataPlex cluster incorporating 960 Intel Xeon E5-2670 (Sandy Bridge) processors (7,680 cores). Discover now performs at 1.12 petaflops peak.
Using Discover, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Working Group I Fifth Assessment Report, released in September 2013. The SC13 exhibit hyperwall will show NASA-produced visualizations of possible 21st century temperature and precipitation pattern changes estimated by dozens of climate models for the IPCC report. Discover is currently hosting a NASA global atmospheric model’s simulation of weather at 7.5-kilometer resolution for two years and 3.5-kilometer resolution for three months. This simulation is expected to generate approximately four petabytes of data. | <urn:uuid:2ecdf453-db3a-46be-b8b9-606c0d7e022a> | CC-MAIN-2017-04 | https://www.hpcwire.com/off-the-wire/nasa-showcase-computational-achievements-sc13/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872652 | 656 | 2.765625 | 3 |
India is devising plans to host exascale supercomputers before the end of the decade. According to an Economic Times article, the Center for Development of Advanced Computing (C-DAC) has proposed investing Rs 4,700 crore (around $872 million) to create systems in the “petaflop and exaflop range of supercomputers” over the next five years. Deploying an exaflops-class machine by 2017 would be quite a feat for India, given that the US, China and some European nations are targeting 2018 and beyond for their first exascale machine.
In the Indian proposal, C-DAC nominated the Department of Electronics and Information Technology (DEITY) to coordinate the effort. The department would be tasked with setting up a National Apex Committee to provide project oversight. C-DAC would be in charge of building the necessary facilities to house the system.
Recently, India has been making substantial efforts to upgrade its supercomputing capabilities. In February, the Council of Scientific and Industrial Research Center for Mathematical Modeling and Computer Simulation (CSIR C-MMACS) announced the deployment of a 250-teraflop cluster, which currently sits in 58th place on the TOP500. The government also announced a Rs 6,000 crore ($1.2 billion) investment with an intent to “propel India into the elite supercomputing club”.
In 2007, India claimed a top ten position with the number-four ranked “Eka”, a 117.9-teraflop machine that has subsequently fallen to number 129. If the government does invest $872 million toward supercomputing over the next five years, it could propel India back into the elite circle of supercomputing nations. But without continued investment, there’s no guarantee the country would stay there for long. | <urn:uuid:d35160b3-656e-4182-b4d0-eb2af1dbaa8c> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/09/18/india_sets_sights_on_exascale_for_2017/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936101 | 386 | 2.671875 | 3 |
RDBMS: Looking Back, Moving Ahead
Way back in 1969, the relational model that spawned relational database management systems (RDBMS) was born. Dr. E.F. Codd wrote a paper that year titled “Derivability, Redundancy and Consistency of Relations Stored in Large Data Banks” for IBM. A revised version was published in 1970 in Communications of the ACM, the journal of the Association of Computer Machinery. Out of these papers, research projects evolved at IBM and the University of California at Berkeley that began to put the theory into practice.
After fighting battles against alternative technologies and doubt from the industry, the first relational database products hit the market nearly a decade later. Relational Software Inc. (now Oracle Corp.) was the first to bring a SQL implementation to market. SQL, or structured query language, is what most RDBMSs use to access the database, though it’s not a requirement. IBM followed Relational Software in 1981 with a product based on its research, SQL/DS for VSE. In 1983, it introduced its first DB2 system.
Despite the continuing challenges from competing technologies, most large databases are still created and administered via an RDBMS. As you might have expected, the first big guys to come to market with the technology are still leading it: Oracle and IBM. Microsoft’s SQL Server manages to maintain its hold on a big slice of the pie year after year, as well. According to IDC, the market is expected to reach nearly $20 billion by 2008, and Oracle has the biggest share (39.8 percent in 2003), followed by IBM (31.3 percent) and Microsoft (12.1 percent). Of course, it depends who you ask. Gartner claims the biggest market share for IBM (35 percent in 2003), with Oracle in second (32.6 percent) and Microsoft third (19 percent). Whichever analyst firm you believe, the top three players remain the same.
Oracle’s most recent update to its technology is Oracle Database 10g. Released in 2003 as the first database designed for grid computing, the technology features numerous editions aimed at various market segments, from huge enterprises to small businesses, even including a Lite Edition for companies that want to work with mobile database applications. The Enterprise Edition includes business intelligence services like data warehousing, OLAP and data mining, as well as open access to Web services through SQL, Java, XML and standard Web interfaces. Oracle can boast 17 independent security evaluations of the product, as well as several recent industry awards, including InfoWorld’s “Best Database of the Year” for its annual Technology of the Year Awards 2005, and eWEEK Labs’ “Top Products of 2004.”
IBM’s DB2 Universal Database 8.2 (code-named “Stinger” before its release last fall) is designed to work with AIX, Linux, HP-UX, Sun and Windows, and features improved integration with tools designed to help programmers increase their efficiency. DBAs are not left out—DB2 UDB 8.2 also features plenty of autonomic capabilities to free their time so they can focus on more business-critical tasks. IBM is an IT giant, and its DB2 technology spans numerous editions for production as well as application development deployments, and can be enhanced with lots of other software. Some of the major improvements in the latest version help lower costs through tools like automated statistics collection and object maintenance, and self-tuning backup and restore. Other enhancements increase worker productivity for programmers through Microsoft .NET integration and Java enhancements, and heighten security via high availability disaster recovery and other features.
Coming soon to an enterprise (or small business) near you is Microsoft SQL Server 2005 (code-named Yukon), which will be available in four editions, featuring high availability and scalability, as well as advanced business intelligence tools and tighter security. The Enterprise Edition will be a complete platform for mission-critical applications at large organizations, offering data partitioning, database mirroring, complex integration, ad hoc reporting and more. The Standard Edition is ideal for medium-sized businesses, while the Workgroup Edition makes the technology more affordable and simple to manage for the SMB space. The Express Edition will cost nothing for those who want to build simpler data-driven applications.
Just because these three vendors own the vast majority of the market, that doesn’t mean there’s no one else trying to compete. Sybase IQ, designed specifically for reporting, data warehousing and analytics, and Teradata’s Database V2R5.0 are both contenders. There are also several open-source options out there, including the popular MySQL database, Firebird (which wins the award for coolest logo) and Ingres r3 from Computer Associates.
Emily Hollis is managing editor for Certification Magazine. She can be reached at firstname.lastname@example.org. | <urn:uuid:efc85f0b-401f-4798-806e-6755bf95ae46> | CC-MAIN-2017-04 | http://certmag.com/rdbms-looking-back-moving-ahead/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00146-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941408 | 1,022 | 2.515625 | 3 |
The Trusted Platform Module never releases its internal key outside itself, so it becomes its own root of trust.
The Trusted Platform Module is a hardware-based cryptographic root of trust.
The idea behind hardware-based trust is that trust must start somewhere. Whether it begins in a preloaded bundle of certificates from various certificate authorities (as SSL does in browsers) or from a web of trust manually established at Pretty Good Privacy key-signing parties, cryptographic algorithms are only as strong as the trust they're built on.
If you want a secure transaction with someone, at some point in the past you must have verified the identity, or had someone you trust verify the identity--as SSL certificate authorities claim to do. Then, trust relationships are used to securely pass that information forward to a later transaction. Assuming there aren't any bugs in the crypto itself, this chain of trust allows for convenience later on from an initial trust relationship.
TPM gets its own internal key from the manufacturer, and the manufacturer theoretically is unable to track those keys. TPM never releases its internal key to anything outside of itself, so it becomes its own root of trust.
Of course, bugs do crop up. The cryptographic math is fairly well-understood at this point, but implementations often leave something to be desired.
Take, for example, the recent weakness in the Nintendo Wii's code-signing and verification mechanism, which was meant to ensure that only authorized apps would run on the game console. Implementation flaws resulted in third parties being able to bypass the protection. Datel, which makes video-game peripherals and cheat systems, released a commercial product based on the flaw. A small group of Wii hackers also had been using the flaw to explore the Wii. Nintendo has since fixed the problem.
Return to the story:
A Tipping Point For The Trusted Platform Module? | <urn:uuid:708a7293-959d-47c8-93da-d515b5f1762c> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/tpm-a-matter-of-trust/d/d-id/1069285 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959102 | 381 | 3.078125 | 3 |
The smaller a silicon transistor becomes, the more electrons it leaks. That can mean unreliable, battery-draining chips. Researchers at Intel have come up with a way of dealing with the problem that subverts the industry’s strong preference for precision. The company’s prototype chip operates in a low-power but error-prone mode, but it detects and corrects its errors. This approach, researchers have found, saves 37 percent on power compared with running in conventional mode with no loss of performance. | <urn:uuid:d2ba6f8b-8e89-4298-a918-c92723b3084b> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/03/23/intel_prototypes_low-power_circuits/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00532-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916906 | 103 | 2.78125 | 3 |
Most biological viruses have a nasty reputation. But scientist Angela Belcher believes that some viruses can be guided into performing a useful task: building high-tech materials.
Belcher and her University of Texas at Austin colleagues were intrigued by the way viruses can easily produce vast armies of new viruses. The researchers soon discovered that virus replication could also build nanosize materials for next-generation optical, electronic and magnetic devices. "We wanted to evolve biomolecules to control materials that nature has not evolved interactions with," says Belcher, who’s scheduled to join MIT this fall as an associate professor of materials science, engineering and bioengineering.
Using genetically engineered viruses that are noninfectious to humans, Belcher and her team created liquid crystal suspensions of viruses and nanoparticles that could be cast into thin liquid crystal films. "We took advantage of the viruses’ genetic makeup and physical shape to grow the material and to help them assemble themselves into structures that are several centimeters long," Belcher says. The material was stable enough to be picked up with forceps. She notes that it took about a week to grow a usable, uniform film.
Belcher believes that viruses have the potential to become cheap, efficient and environmentally safe nanotechnology building tools. "Biology makes material at moderate temperature using self-assembly and using nontoxic materials," she says. The most difficult part of the research was getting viruses that have evolved over millions of years to develop technologically usable materials, she adds. The researchers were ultimately able to evolve viruses during a period of months that could work on 20 types of materials, including semiconductors, magnetics and opticals.
Down the road, viruses could be used to produce microscopic switches, amplifiers and other devices. Such real-world applications, however, probably won’t arrive for at least another five to 10 years. "Most of this research is still at the basic science stage," Belcher admits. Time enough, perhaps, for people to adjust their view of viruses. | <urn:uuid:1c8445eb-cf1f-47f7-8734-31a86aabe4e7> | CC-MAIN-2017-04 | http://www.cio.com/article/2440545/infrastructure/helpful-viruses---when-bad-viruses-go-good.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00256-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972476 | 407 | 3.984375 | 4 |
SEAT – Search Engine Assessment Tool – is a tool dedicated to security professionals and/or pentesters. Using popular search engines, it searches for interesting information stored in their caches. It also uses other types of public resources (see later). Popular search engines like Google or Yahoo! (non-exhaustive list) use crawlers (or robots) to surf the Internet, visit found websites, index the retrieved content and store it in databases [Note: A small tool to check when the Google bot last visited your site: gbotvisit.com].
What's the concern with security? Web robots index everything (in reality, some filters may be defined via robots.txt files, but that's outside the scope of this post). Let's assume that everything is cached. This means that unexpected content can be crawled by robots and made publicly available:
- temporary pages
- unprotected confidential material
- sites under construction
When you search something via Google, you just type a few words and expect some useful content to be retrieved. But, the search engines are able to process much more complex queries! Examples (for Google):
- “site:rootshell.be foo” will search the string “foo” only in hosts *.rootshell.be.
- “inurl:password” will search the string “password” on the URL only.
- “ext:pdf exploit” will search the string “exploit” in PDF documents only.
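For illustration only (this is not SEAT's actual code), such queries can be composed programmatically for a target; `example.com` below is a placeholder domain:

```python
from urllib.parse import quote_plus

target = "example.com"   # hypothetical target domain

dorks = [
    f"site:{target} inurl:admin",
    f"site:{target} ext:pdf confidential",
    f'site:{target} intitle:"index of"',
]

for dork in dorks:
    # Build the search URL that a crawler-style tool would fetch
    print(f"https://www.google.com/search?q={quote_plus(dork)}")
```

Keep in mind that automating such requests can get your IP blocked, a point the execution phase described below comes back to.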
There comes the power of SEAT! It will build complex queries against not only Google but other well-known search engines. It comes with a pre-installed list of the most common sites but you're free to add your own. Pre-configured search engines are: Google, Yahoo, Live, AOL, AllTheWeb, AltaVista and DMOZ. Once the queries are performed (it may take quite some time if you configured multiple searches), it displays the results in a convenient way. Have a look at the GUI:
The usage of SEAT is based on a three-phase process (the three tabs on top of the window):
- Preparation: You define here your target (a host, a domain name or IP addresses), and which type(s) of query you will perform.
- Execution: You select here the search engine(s) you would like to use and how to query them (number of threads, sleep times, …). Then you start/pause/stop the query. Queries are multi-threaded and may have a side effect: your IP can be blacklisted (Google has a powerful algorithm to prevent usage of tools like SEAT). Take care if you use it from your corporate LAN: your whole company could be temporarily blacklisted by Google!
- Analysis: The last step is the analysis of the retrieved content.
Once the analysis is performed (and it can take quite some time depending on your targets/queries), results are readily available. For each result, extra operations can be performed (by double-clicking the URL):
- Direct request (Warning: this can reveal your IP address to the target)
- Grab data from the Netcraft database
- Grab a copy from archive.org
- Grab a copy from the Google cache
SEAT is fully customizable: your own search engines and advanced queries can be added. Execution can be tuned (number of concurrent threads, User-Agent, sleep time between queries etc…) and, of course, results can be saved (export to .txt or .html files).
Search engine databases are full of interesting information! As repeated during the last ISSA meeting this week, if you want information about a target, just ask! SEAT is a perfect tool for conducting an audit or pentest.
A few words about the supported environment, SEAT is written in Perl (version 5.8.0-RC3 and higer) and requires the following modules: Gtk2, threads, threads::shared, XML::Smart. Check out the official website: midnightresearch.com. | <urn:uuid:c7efb40a-fc33-406f-90b4-c1d88ad7ccf8> | CC-MAIN-2017-04 | https://blog.rootshell.be/2009/03/21/introduction-to-seat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.890379 | 848 | 2.578125 | 3 |
The Weakest Link is Often…You
A common misconception people have about cyber attackers is that they only use advanced hacking tools and technology to break into people’s computers, accounts and mobile devices. This is simply not true. Cyber attackers have learned that one of the easiest ways to steal your information or hack your computer is by simply talking to and misleading you. This is called social engineering.
Social engineering is a type of psychological attack in which a hacker tricks you into giving them something they want. Social engineering attacks can happen with almost any technology, including phishing attacks via phone calls, email, text messaging, Facebook messaging, Twitter posts, or online chats. The key is to know the signs. Learning how to prevent, detect and stop social engineering attacks is one of the most effective steps you can take to protect yourself.
Use common sense. If something seems suspicious or does not feel right, it may be an attack. Some common indicators of a social engineering attack include:
- Someone creating a tremendous sense of urgency. If you feel like you are under pressure to make a very quick decision, be suspicious.
- Someone asking for information they should not have access to or should already know.
- Something too good to be true. A common example: You are notified you won the lottery, even though you never bought a ticket.
If you suspect someone is using a social engineering method to get something from you, stop communicating with the person. If it is someone calling you on the phone, hang up. If it is someone chatting with you online, terminate the connection. If it is an email you do not trust, delete it.
This is part 1 in a two-part series. In part 2, we will share tips to prevent social engineering attacks. | <urn:uuid:2e744002-7e63-484a-a120-5acb3f062b15> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/security/the-weakest-link-is-oftenyou | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956235 | 361 | 3.578125 | 4 |
New Password Cracking Method
A new attack makes some password cracking faster, easier than ever. A researcher has devised a method that reduces the time and resources required to crack passwords that are protected by the SHA1 cryptographic algorithm.
First, some context. One of the main use cases for hashing functions, such as the SHA-1 function, is to store passwords securely. When attackers obtain such a hashed password, they need to launch a “brute force” attack against it in order to reveal the password. “Brute force” means they repeatedly guess the password, apply the hashing function to each guess and compare the result with the hashed password they have. The security researcher has found an algorithmic shortcut in the SHA-1 calculation that makes the computation easier, thus reducing the time needed for a successful “brute force” attack.
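A minimal sketch of what such a brute-force attack (here, a simple dictionary attack) looks like against an unsalted SHA-1 password hash; the hash and word list are invented for the example:

```python
import hashlib

# A SHA-1 password hash as it might appear in a leaked database (example only)
stolen_hash = hashlib.sha1(b"sunshine1").hexdigest()

# Real attacks iterate over millions of candidates; this list is illustrative
wordlist = ["123456", "password", "letmein", "qwerty", "sunshine1"]

for candidate in wordlist:
    if hashlib.sha1(candidate.encode()).hexdigest() == stolen_hash:
        print("Recovered password:", candidate)
        break
```

Any shortcut that makes each SHA-1 computation cheaper, as described above, multiplies across the millions or billions of guesses an attacker makes.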
But it should not surprise the security community, as the writing was on the wall. When a crypto hash is weakened (i.e., discovered to be less secure than perceived), it usually marks the start of its downfall, and SHA-1 has been weakened since 2004. This chart of the state of popular crypto hashes from 2009 (http://valerieaurora.org/monkey.html) shows just that.
The corollary? If the hashing is done for security (e.g., hashing user passwords, verifying data integrity, etc.):
- MD5 is dead and should never be used.
- SHA-1 is going in the same direction. Consider an upgrade of existing systems and definitely don't use it for new systems.
A smart choice would be to follow the U.S. National Institute of Standards and Technology (NIST) recommendation for federal agencies: "Federal agencies should stop using SHA-1 for generating digital signatures, generating time stamps and for other applications that require collision resistance."
Best option? Use a hash function from the SHA-2 family, such as SHA-256.
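For password storage specifically, moving to SHA-256 alone is not enough, since a fast unsalted hash is still cheap to brute force; a salted, iterated construction slows each guess down. A minimal sketch using only the Python standard library:

```python
import hashlib
import os

password = b"correct horse battery staple"

# SHA-256 instead of SHA-1: a stronger hash, but still fast to brute force on its own
plain_digest = hashlib.sha256(password).hexdigest()

# Better for stored passwords: a per-user salt plus many PBKDF2 iterations
salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

print("SHA-256:", plain_digest)
print("PBKDF2 :", derived.hex())
```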
Authors & Topics: | <urn:uuid:9fa91b47-724a-4cff-9f21-45f549811b8a> | CC-MAIN-2017-04 | http://blog.imperva.com/2012/12/new-password-cracking-method.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00551-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932434 | 409 | 2.828125 | 3 |
There are very few technologies that have affected our everyday lives as much as the Internet. It has changed the way we communicate in many ways and has revolutionized business processes. And the digital evolution has affected our accounts just as much. Online banking has been growing in popularity for years, and more and more people appreciate the benefits it offers. Bank customers can access their accounts, execute financial transactions or trade shares at any time of day or night.
According to Eurostat, 36 percent of people in Europe used online banking for transactions last year. In Norway the figure was a staggering 83 percent. But, just as with any other service that involves large sums of money, criminals attempt to make off with as much of the loot as possible. In the Internet age, bank robbers no longer need cutting torches to get to customers’ money.
Bank robbery 2.0: Attacks in the browser
Banks have invested heavily in the security of their systems and effectively encrypted the communication channels from the customer to the bank. They have also continued to improve the TAN procedure, having recently done away with printed TAN lists and introduced new procedures such as mobile TAN and flicker TAN. But, in reality, cyber criminals can circumvent all these protection mechanisms by attacking the customer’s PC.
In the world of bank robbery 2.0, perpetrators do not attack the banks. They infect online banking customers’ computers with intelligent computer malware called banking Trojans. Visiting an infected website is all it takes to infect the computer with this specialised malware. Once it has stolen the access data, this malware can actively intervene in the payment process and divert legitimate transactions to other accounts without being detected.
How does the fraud work?
If the browser has been manipulated by a banking Trojan, data is still transferred from the computer to the bank in an encrypted form, but it is not the data that the user actually entered in the browser. If the bank customer tries to pay his rent from his infected PC, for example, the data he enters is visible in the Internet browser, but once the TAN is entered the money is unnoticeably directed to the criminal’s account.
Most antivirus solutions do not detect new banking Trojans until it is too late, since they require a corresponding signature for protection. In one test, conventional protection programs only detected 12 percent of the malware strains immediately and 27 percent after 24 hours. This means that traditional security technologies found it almost impossible to protect computers fully against current banking Trojans.
The biggest issue is that consumers are unaware of this fact. Nobody likes to talk about the threat of banking Trojans, because they are unable to offer a solution to the problem. Being aware and alert and working from an AV-protected PC within a safe network does not do the trick in this case. Even the best-educated IT security expert can fall victim to this malware, as it operates completely invisibly and requires no user involvement at all.
Due to the lack of communication about banking Trojans, users remain blissfully unaware of the risk, even though the average damages per case are around £4,000. Consumers assume they are safe when they install security software on their computer. And no one, banks nor AV vendors will tell them otherwise. | <urn:uuid:67d8357b-8058-4a5a-8a7e-41a0676e155e> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/05/01/bank-robbery-20-online-banking-in-the-sights/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00551-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959295 | 671 | 2.5625 | 3 |
The University of Cambridge has launched a recruitment drive and website to find talented young computer scientists.
It says it is "stepping up its efforts" to attract people and promote the subject, which it says has a "vital role in shaping the future".
Andy Hopper, head of the University of Cambridge Computer Laboratory, says, "We want to get rid of the old-fashioned notion that computer science is just for geeks.
"The subject is for anyone who wants to understand how the modern world works and wants to influence the way we live in the future. Computer science is firmly at the heart of modern society and at Cambridge we are at the forefront of a new generation of research."
Staff at the Computer Laboratory will be hosting two open days on 1-2 July, where visitors will see a series of subject talks and demonstrations of student projects and faculty research. The CSCubed website aims to answer questions students might have.
The project is one of the first attempts by a leading university to attempt to change the image of computer science and IT. Industry leaders and academics have slammed the IT curriculum in schools in the past, saying it puts children off studying computing subjects.
Cambridge University says it wants to fire the interest of young people with its campaign. | <urn:uuid:10422216-6e69-4c7a-9f09-cabd18cf1192> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280092524/Cambridge-University-in-drive-to-attract-young-computer-scientists | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00277-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955422 | 286 | 2.515625 | 3 |
Storage Basics: Deciphering SESAs (Strange, Esoteric Storage Acronyms), Part 2
Virtual Interface (VI)
The next acronym up for review is the Virtual Interface (VI). VI was originally designed by industry heavyweights such as Intel, Microsoft, and Compaq as a standard for interconnecting computer clusters. As such, VI was designed to provide a common interface for clustering software regardless of the underlying networking technology. In addition, VI is designed to help eliminate the overhead caused by network communication.
The VI standard specifies a combination of hardware, firmware, and operating system driver interaction to increase the overall efficiency of network communication. In application, VI provides two key functions: reducing CPU load and reducing latency. To do this, VI allows for direct memory-to-memory data transfer.
Memory-to-memory transfer enables data transfers directly between buffers and ignores normal protocol processing. VI also allows for direct application access, which enables application processes to queue data transfer operations directly to VI-compliant network interfaces without using the operating system.
Direct Access File System (DAFS)
Fitting right in the discussion of VI is the Direct Access File System (DAFS). DAFS is a protocol that uses VI capabilities to provide memory-to-memory data transactions for clustered application servers. Using the VI architecture and memory-to-memory data transfers, DAFS avoids the traffic overhead generated by operating systems.
As a little background, TCP/IP can be quite taxing on system resources, as it requires significant CPU processing while data packets are moved through the protocol stack. DAFS does not incur this overhead and can move the same data packets with far less CPU involvement.
It does this by bypassing the protocol stacks and the operating systems to directly place data on the network link. Data is moved from the application buffers directly to the VI-capable NIC. In the process, overall network utilization is reduced and application throughput is increased. This flow of data takes a significant load off of the processor.
In the past two Storage Basics articles we have taken a quick look at some of the acronyms prevalent in the storage industry today. Of course, we’ve only just scratched the surface, as there are plenty more out there, and new ones are seemingly popping up a daily basis. As these first two articles have been a response to email queries, we look forward to more emails from you and the opportunity to unravel the mysteries of more acronyms in future SESA articles. | <urn:uuid:ad02003e-8d46-4b2a-887d-b14c37971de0> | CC-MAIN-2017-04 | http://www.enterprisestorageforum.com/technology/features/article.php/11192_3289841_3/Storage-Basics-Deciphering-SESAs-Strange-Esoteric-Storage-Acronyms-Part-2.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912005 | 519 | 2.71875 | 3 |
A recent GIS user's conference delivered more than cool technology to the thousands of people who attended from around the world. As only GIS images from space can do, it put conditions on the ground in perspective. And what I saw when ESRI founder Jack Dangermond spoke about the future of planet Earth was, quite simply, alarming.
Sustainability of the Earth and its resources is at a critical juncture. Every day, 250,000 new souls populate the planet, most born in developing countries. The idea that natural attrition will balance out this burgeoning growth is erroneous. Each year, there are 80 million more births than deaths, according to The United Nations Environment and Development UK Committee (UNED-UK). And 25 percent of the world's population consumes 75 percent of Earth's resources. That same portion of the population is responsible for the vast majority of environmental damage done to the planet.
Dangermond said he toned down his warnings about Earth's critical condition at his wife's request - she worried about scaring the audience. He did say, however, that humans have now overshot the planet's ability to regenerate natural resources by 20 percent. "We're consuming about 1.2 earths in terms of sustaining human life," he said.
The very economic and technological progress that is exerting negative pressures on the planet holds the greatest hope for saving it. Faster and better communication systems make isolation an outmoded concept; the Internet and new wireless technologies promise the dissemination of medical resources as never before. Genetic research is producing stronger strains of agricultural products, and although we cannot control the weather, we can certainly predict it with incredible accuracy. Technology also offers educational opportunities that increasingly defy the divides of place and economic resources. Although most of the world's population still does not have access to the Internet, according to a Nua Ltd. survey, many of the planet's significant challenges can be mitigated with appropriately implemented technologies if human will is there.
The backbone of this remedial process is GIS. Images captured by satellites and aerial cameras integrated with information on specific topics, such as forests lost to development and fires, the spread of infectious diseases, characteristics of populations and myriad other conditions, gives us tools to manage the future, despite the uncertainties it holds. Done in a deliberative manner, the development of a "spatial data infrastructure" is elemental to achieving sustainability, according to Dangermond.
Governments have begun to recognize the value of geo-spatial systems. Once considered a curious specialty, GIS now supports hundreds of mainstream government operations and emergency services, as dramatically demonstrated by New York City after 9-11. Now, as geo-spatial systems are integrated with other information systems, interoperable and standards-based GIS resources can be applied to meet global challenges.
Granted, most of us are preoccupied with the daily business of life, but the world also is home to leaders who think beyond the horizon and have visions that outlive their own mortality. Dangermond qualifies as one such visionary - he wants to build a distributed network of geo-spatial data that will alter the course of "spaceship Earth," - moving it to a path that allows better management of the planet's resources and supports enhanced quality of life for its people.
Our more limited use of technology today is perhaps a necessary precursor to realizing such global dreams about how to leverage the future with the magnificent tools that are the namesake of the Information Age. | <urn:uuid:98ba98c9-9c15-4bbb-9713-081e7b349da1> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/A-Map-for-Spaceship-Earth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00543-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951857 | 702 | 3.421875 | 3 |
With ‘gamification’ reaching buzzword status in private industry, more government agencies are exploring how to use game thinking and game mechanics to solve problems and engage citizens and employees.
Doug Thomas, an associate professor in USC’s Annenberg School for Communication, says gamification can be an effective tool for some tasks — but it’s far from a silver bullet. For instance, gaming principles can be used to boost innovation within organizations, according to Thomas, who spoke Wednesday (Aug. 21) at Government Technology’s GTC West conference in Sacramento, Calif.
But careful attention must be paid both to the design of the game and the payoff for participants. Thomas also pointed out two common gamification pitfalls: underestimating the difficulty of creating an engaging game and applying the technique to the wrong type of task.
Creating a good game is extremely hard — even for professional game designers — so agencies shouldn’t assume they can build one without help, Thomas says. “Don’t give game design to someone who isn’t a game designer.” And gamification can’t change the nature of dull, repetitive tasks. Applying game mechanics to these activities doesn’t make them fun, he says, it only makes them take longer to accomplish. | <urn:uuid:595a5bc3-4e18-4ee7-8f8f-13b47ab96212> | CC-MAIN-2017-04 | http://www.govtech.com/management/How-to-Use-Gamification-in-Government.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00083-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930508 | 266 | 2.78125 | 3 |
Go Green with NComputing
Desktop Virtualization and Eco-Friendly Computing
We can't live without PCs, so let’s learn how to live with them in a way that makes them friendlier to the environment. Currently, PCs consume far too much electricity and generate too much e-waste to be considered an eco-friendly solution by today’s standards. With a typical PC taking approximately 110 watts to run, and with well over 1 billion of them on the planet, it’s easy to understand why the Silicon Valley Toxics Commission says e-waste is the fastest growing part of the waste stream.
The Global Impact of Billions of PCs
How many computers are actually in use? According to a report by Forrester Research, by the end of 2008, there were over one billion PCs in use worldwide. As PC adoption grows globally, it is estimated that there will be more than two billion PCs in use by 2015. It took 27 years to reach one billion but will only take 7 more years to double that number. With this trend, something needs to change.
If NComputing systems were used at a ratio of 6 NComputing devices to each PC:
- Energy use would decline by over 143 billion kilowatt hours per year
- CO2 emissions would decrease by 114 million metric tons. That’s like planting 550 million trees!
- E-waste would be reduced by 7.9 million metric tons
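For a sense of scale, the per-user arithmetic behind claims like these can be sketched in a few lines of Python. The 110-watt PC and 1–5-watt device figures come from the text above; the sharing ratio, annual usage hours, and the assumption that the shared PC also serves a user are illustrative assumptions, and the aggregate projections above rest on details not published here.

```python
PC_WATTS = 110          # typical desktop PC, per the text above
DEVICE_WATTS = 5        # NComputing thin client draws 1-5 W; use the high end
USERS_PER_PC = 7        # assumption: one user on the shared PC plus 6 thin-client users
HOURS_PER_YEAR = 2000   # assumed usage hours; adjust for your environment

# Per-user power draw before and after sharing one PC among seven users
before_watts = PC_WATTS
after_watts = (PC_WATTS + 6 * DEVICE_WATTS) / USERS_PER_PC

saving_pct = 100 * (before_watts - after_watts) / before_watts
kwh_saved_per_user = (before_watts - after_watts) * HOURS_PER_YEAR / 1000

print(f"per-user draw: {after_watts:.0f} W instead of {before_watts} W ({saving_pct:.0f}% lower)")
print(f"roughly {kwh_saved_per_user:.0f} kWh saved per user per year")
```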
Green Computing is the Future
The benefits of green computing are clear. As the number of PCs approaches two billion by 2015, the potential savings related to energy use, CO2 emissions, and e-waste are undeniable.
The Solution is Simple and Efficient
Today's PCs are so powerful that we no longer need one PC per person. We can tap into the excess power in one PC and share it with many users. NComputing thin client devices use just 1 to 5 watts, last for a decade, and generate just a few ounces of e-waste. Not only is this a simple solution to a complex problem, the efficiencies achieved using this technology are amazing. NComputing solutions save 75% on hardware, and since they draw less than 5 watts of power, you can reduce your energy footprint by as much as 90% per user. NComputing thin client devices produce practically no heat, reducing the need for energy-consuming air conditioning. Electricity savings alone can pay for the NComputing virtual desktops in as little as one year. | <urn:uuid:30b28c68-7ebf-4a6a-835d-72fd2ff7319a> | CC-MAIN-2017-04 | https://www.ncomputing.com/en/company/green-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958184 | 526 | 3.015625 | 3 |
DCCP is a transport-layer protocol used to provide congestion control for datagram traffic. It offers a mechanism to help prevent congestion collapse on the Internet and is a strong candidate as a substitute for the UDP protocol.

DCCP provides congestion control together with reliable delivery of acknowledgments (counted in packets rather than bytes), which makes its congestion control dynamic. It also offers negotiable congestion-control mechanisms that can be matched to a particular application's specific requirements; these mechanisms come with a number of specific features to suit different types of applications. Bandwidth utilization can improve as the size of DCCP packets increases.

Packet Header. The standard size of a DCCP packet header is 12 bytes, but with options it can grow to as much as 1020 bytes. The 16-bit Source Port identifies the port of the endpoint sending the packets, and the 16-bit Destination Port identifies the port of the other endpoint. Note: together, these two fields identify the connection.

The Type field consists of 4 bits and specifies the DCCP message type, with values such as request, response, data, ack, data-ack, close, and reset packets, among others.

The 4-bit CCval field is reserved for use by the congestion-control mechanism (CCID). The 24-bit Sequence Number field is initialized by the DCCP request or response packet and increases by one for each packet sent; this lets the recipient determine whether any packets were lost along the way. The Data Offset field is 8 bits and the # NDP (non-data packet count) field is 4 bits. The Checksum is a 16-bit field and the Checksum Length (Cslen) is a 4-bit field.
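To make the field layout above concrete, here is a minimal Python sketch that unpacks a 12-byte header arranged with the field sizes just described. It is illustrative only: the exact bit layout standardized for DCCP in RFC 4340 differs in some details, so treat this as a reading aid rather than a protocol-accurate parser.

```python
import struct

def parse_dccp_style_header(data: bytes) -> dict:
    """Unpack a 12-byte header using the field widths described above."""
    if len(data) < 12:
        raise ValueError("need at least 12 bytes")
    (src, dst, type_ccval, seq_hi, seq_lo,
     data_offset, ndp_cslen, checksum) = struct.unpack("!HHBBHBBH", data[:12])
    return {
        "src_port": src,                   # 16 bits
        "dst_port": dst,                   # 16 bits
        "type": type_ccval >> 4,           # 4-bit packet type (request, response, data, ack, ...)
        "ccval": type_ccval & 0x0F,        # 4 bits reserved for the CCID
        "seq": (seq_hi << 16) | seq_lo,    # 24-bit sequence number
        "data_offset": data_offset,        # 8 bits
        "ndp": ndp_cslen >> 4,             # 4-bit non-data packet count
        "cslen": ndp_cslen & 0x0F,         # 4-bit checksum length
        "checksum": checksum,              # 16 bits
    }
```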
DCCP implementations. Since mid-2008, at least two DCCP implementations have been actively maintained. The implementation for the Linux kernel was first provided in the 2.6.14 Linux release, and the dccp-tp implementation is optimized for portability. Another, newer user-space DCCP implementation is in progress; its main aim is to offer a portable, NAT-friendly, standards-based framework for peer-to-peer communication with flexible, application-based congestion control. | <urn:uuid:dc2e6b07-760e-481e-9623-8ac784faa837> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2013/dccp-datagram-congestion-control-protocol | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00111-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918738 | 542 | 3.125 | 3 |
The Internet of Things (IoT) is creating a new environment where malware can be used to create powerful botnets. Mirai, a new Trojan virus for Linux, is difficult to detect and already exists in the wild.
The threat is a new variant of the Gafgyt (aka BASHLITE, aka Torlus) malware, which has been used by distributed denial of service (DDoS) service providers.
How Does This New Trojan Virus Attack?
Mirai’s name comes from the discovered binaries, which carry the name “mirai.()”; the malware was initially discovered in August. It arrives as an ELF Linux executable and focuses mainly on DVRs, routers, web IP cameras, Linux servers, and other devices that are running Busybox, a common tool for IoT embedded devices.
Mirai uses the default password for the telnet or SSH accounts to gain shell access. Once it’s able to get access to this account, it installs malware on the system. This malware creates delayed processes and then deletes files that might alert antivirus software to its presence. Because of this, it’s difficult to identify an infected system without doing a memory analysis.
Mirai opens ports and creates a connection with botmasters and then starts looking for other devices it can infect. After that, it waits for more instructions. Since it has no activity while it waits and no files left on the system, it is difficult to detect.
According to Best Security Search, “The low detection ratio can also be explained by the Mirai feature to delete all malware files once it successfully sets the backdoor port into the system. It leaves only the delayed process where the malware is running after being executed.”
How Is Mirai Different from Previous Variants?
MalwareMustDie states that, “The actors are now having different strategy than older type of similar threat. By trying to be stealth (with delay), undetected (low detection hit in AV or traffic filter), unseen (no trace nor samples extracted), encoded ELF’s ASCII data, and with a big “hush-hush” among them for its distribution. But it is obvious that the main purpose is still for DDoS botnet and to rapidly spread its infection to reachable IoTs by what they call it as Telnet Scanner.”
Who Could Be Infected?
This malware could infect a wide range of remote devices that are rarely scanned for malware. Security Affairs states that, “Countries that are having Linux busybox IoT embedded devices that can connect to the Internet, like DVR or Web IP Camera from several brands, and countries who have ISP serving users by Linux routers running with global IP address, are exposed as targets, especially to the devices or services that is not securing the access for the telnet port (TCP/23) service.”
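Because the initial foothold is a default-credential login over telnet (or SSH), a quick first audit is simply to see which of your own devices accept connections on TCP port 23 at all. A minimal Python sketch follows; the host addresses are placeholders for devices you administer, and this only checks reachability, not credentials.

```python
import socket

def port_open(host: str, port: int = 23, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical device addresses on a network you administer
for host in ("192.168.1.10", "192.168.1.20"):
    if port_open(host, 23):
        print(f"{host}: telnet port OPEN - change default credentials or disable the service")
    else:
        print(f"{host}: telnet port closed")
```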
How to Prevent Infection
To prevent infection:
Stop the telnet service and block TCP port 48101 if you’re not currently using it
Set Busybox execution to be run only for a specific user
Scan for open telnet connections on your network | <urn:uuid:e0462907-aaea-45aa-9b42-722734b6aee9> | CC-MAIN-2017-04 | http://www.csoonline.com/article/3134720/security/new-trojan-virus-is-targeting-iot-devices.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933924 | 654 | 2.609375 | 3 |
Apple takes steps to improve data center energy efficiency
Monday, Nov 25th 2013
While data centers provide support for a wide variety of critical technological applications and systems, these facilities are also notorious for their electricity consumption.
Data centers require vast amounts of energy to power not only their servers and other hardware, but also the systems utilized to keep these heat-producing devices cool. However, in recent years, data center designers and operators have taken steps to improve the energy efficiency and sustainability of these facilities to significantly mitigate their impact on the surrounding environment.
Because much of the energy consumed by data centers comes as a result of cooling systems, many organizations utilize server room temperature monitoring systems as a means to keep critical devices cool while keeping a watchful eye on their energy consumption. Utilization of temperature monitoring within a data center can allow facility operators to prevent servers from overheating as well as reduce the amount of electricity used in the process.
Additionally, owners and operators of data centers also use renewable energy sources as a means to improve the environmental footprint of their facilities. Apple recently changed the game of renewable energy with their Maiden, N.C., data center.
Providing its own green power source
According to a report by GigaOM contributor Katie Fehrenbacher, Apple was one of the driving forces behind bringing renewable energy resources to North Carolina. During construction planning of its new data center site, the company looked into potential local sources of renewable energy, and when it found that there weren't sufficient resources available, Apple decided to create its own.
Fehrenbacher stated that the local utility provider, Duke Energy, believed that customers would not want to pay the premium that came along with greener energy sources. For this reason, the company made little effort to provide the area with renewable power sources. However, after Apple's unprecedented move, the company was more willing to provide more environmentally friendly options.
In order to power its newest facility, Apple spent a significant amount to build two solar panel farms and a fuel cell farm. Apple's website stated that the company's 100-acre onsite solar photovoltaic array is the largest in the nation that was created and is owned by the end user. The 20 megawatt facility can produce 42 million kilowatt hours of clean energy, and its second solar farm will go online in late 2013.
The company's fuel cell energy source is a 10 megawatt facility that provides an additional 83 million kilowatt hours of renewable power. In total, the company's clean resources provide 167 million kilowatt hours of electricity, equivalent to the amount needed to power 17,600 homes for one year, stated Apple.
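The figures quoted above fit together as simple arithmetic; note that the output of the second solar farm is inferred here rather than stated explicitly.

```python
first_solar_kwh = 42e6   # first 20 MW solar array, per the article
fuel_cell_kwh = 83e6     # 10 MW fuel cell installation
total_kwh = 167e6        # total claimed for all onsite renewable sources

second_solar_kwh = total_kwh - first_solar_kwh - fuel_cell_kwh
print(f"implied second solar farm output: {second_solar_kwh / 1e6:.0f} million kWh")   # ~42 million

homes = 17_600
print(f"implied average household use: {total_kwh / homes:,.0f} kWh per year")         # ~9,500 kWh
```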
In addition to utilizing clean electricity to power its technology, Apple has also made other efforts to create a greener data center. The facility uses a chilled water storage system and draws upon outside air in addition to cooling system management.
Apple's efforts represent an important step in the push for more energy efficient data centers. While many organizations may not have the resources to create their own renewable energy sources, they can make small efforts, such as temperature monitoring, to improve the power consumption of their facilities. | <urn:uuid:3edbe18b-e722-40b8-852e-aeed241c2199> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/data-center/apple-takes-steps-to-improve-data-center-energy-efficiency-544979 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957543 | 643 | 2.765625 | 3 |
I think we do not need to go back far in time to recall some recent incidents at sea when it comes to oil spills. One of the most frequently mentioned is the 2010 Gulf spill at the BP platform. Oil spills can be controlled by chemical dispersion, combustion, mechanical containment, and/or adsorption. Spills may take weeks, months or even years to clean up. Environmental effects are devastating.
Oil penetrates into the structure of the plumage of birds and the fur of mammals, reducing its insulating ability, and making them more vulnerable to temperature fluctuations and much less buoyant in the water. Oil can impair a bird's ability to fly, preventing it from foraging or escaping from predators. As they preen, birds may ingest the oil coating their feathers, irritating the digestive tract, altering liver function, and causing kidney damage. Together with their diminished foraging capacity, this can rapidly result in dehydration and metabolic imbalance. Some birds exposed to petroleum also experience changes in their hormonal balance, including changes in their luteinizing protein. The majority of birds affected by oil spills die without human intervention. Some studies have suggested that less than one percent of oil-soaked birds survive, even after cleaning, although the survival rate can also exceed ninety percent, as in the case of the Treasure oil spill.

Heavily furred marine mammals exposed to oil spills are affected in similar ways. Oil coats the fur of sea otters and seals, reducing its insulating effect, and leading to fluctuations in body temperature and hypothermia. Oil can also blind an animal, leaving it defenseless. The ingestion of oil causes dehydration and impairs the digestive process. Animals can be poisoned, and may die from oil entering the lungs or liver. You just wish you could become Steven Seagal and kick some ***.

Of course, humans are affected too. The Deepwater Horizon oil spill in the Gulf of Mexico in April 2010, for example, will have a large economic impact on the U.S. Gulf fisheries. A new study published in the Canadian Journal of Fisheries and Aquatic Sciences says that over 7 years this oil spill could have a $US8.7 billion impact on the economy of the Gulf of Mexico. This includes losses in revenue, profit, and wages, and close to 22 000 jobs could be lost. Obviously we depend on oil, so is there a way to fight this problem once it happens?
Apparently, there is. A new type of sponge that loves oil as much as it hates water could make a big difference when cleaning up an oil spill. Researchers at Rice University and Penn State University say the tiny sponge they've developed can absorb 100 times its weight in oil. The sponge is made out of carbon nanotubes (of course). Extra boron atoms are added at all its junctions to boost the sponge's ability to absorb. One of the main reasons it works so well is because adding a bit of boron to carbon while creating nanotubes turns them into solid, spongy, reusable blocks. This helps the sponge increase its ability to absorb oil spilled in water. Watch a video about the sponge below.
The researchers believe the sponge could someday play a significant role in cleaning up oil spills.
Credits: Nature magazine, Wikipedia | <urn:uuid:d86f35d5-850a-4341-b806-55a564364fe3> | CC-MAIN-2017-04 | https://community.emc.com/people/ble/blog/2012/4/18 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955849 | 669 | 3.421875 | 3 |
Sokol J.C.,Basic Energy Services
59th Annual Southwestern Petroleum Short Course Meeting (Lubbock, TX, 4/18-19/2012) Proceedings | Year: 2012
Flow back and produced waters in the oil field are heavily laden with contaminants including insoluble iron sulfides, poisonous hydrogen sulfide, residual gels, friction reducers, and other chemicals. In this paper, we will look at chlorine dioxide (ClO2) as one possible solution to these problems. As a powerful yet selective oxidizer, ClO2 has the ability to break up the residual gels and friction reducers while removing the insoluble iron sulfide and killing the hydrogen sulfide. As an additional plus, ClO2 is an EPA approved biocide that kills the bacteria which are the root cause of many of the problems with water reuse. Source
Blunck M.,Basic Energy Services
Appropriate Technology | Year: 2010
Biomass gasification is basically the conversion of wood and agricultural residues into a combustible gas, which is used as a fuel to drive a generator. There are many different gasification methods in use or in development, but the downdraft fixed-bed technology is almost exclusively used for small-scale power gasifiers. A recent study from Sri Lanka reports on a 12 kW gasification plant which provides electricity for 27 families, with considerable savings in kerosene. However, the installation of the machinery took a long time and the operation of the plant is laborious. Furthermore, compared to other renewable energy technologies gasification proved to be expensive, about 30%-40% higher than those for a micro-hydro power plant or solar home systems installed in the region. While in Asia many gasifier plants are or have been in operation, there seems to be little on the ground in Africa. Source
Basic Energy Services | Date: 2013-01-08
Well servicing rigs and equipment for oil and gas wells for well completion, well maintenance, and well work over and repair.
Basic Energy Services | Date: 2013-01-01
Well servicing rigs and equipment for oil and gas wells for well completion, well maintenance, and well workover and repair.
Basic Energy Services | Date: 2014-02-10
Chlorine dioxide (ClO | <urn:uuid:f76b5cfb-5d5d-4f96-afd3-26e943c4dc7e> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/basic-energy-services-226631/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929441 | 469 | 2.5625 | 3 |
Building Intelligence into Machine Learning Hardware
December 5, 2016 Ben Cotton
Machine learning is a rising star in the compute constellation, and for good reason. It has the ability to not only make life more convenient – think email spam filtering, shopping recommendations, and the like – but also to save lives by powering the intelligence behind autonomous vehicles, heart attack prediction, etc. While the applications of machine learning are bounded only by imagination, the execution of those applications is bounded by the available compute resources. Machine learning is compute-intensive and it turns out that traditional compute hardware is not well-suited for the task.
Many machine learning shops have approached the problem with graphics processing units (GPUs), application-specific integrated circuits (ASICs) – for example, Google TensorFlow – or field-programmable gate arrays (FPGAs) – for example, Microsoft’s investment in FPGAs for Azure and Amazon’s announcement of FPGA instances. Graphcore says these don’t provide the necessary performance boosts and suggests a different approach. The company is developing a new kind of hardware purpose-built for machine learning.
In a presentation to the Hadoop Users Group UK in October, Graphcore CTO Simon Knowles explained why current offerings fall short. ASICs represent a fixed point in time – once the chip is programmed, it keeps that programming forever. The field of machine learning is relatively young and is still evolving rapidly, so committing to a particular model or algorithm up front means missing out on improvements for the life of the hardware. FPGAs can be updated, but still have to be reprogrammed. GPUs are designed for high-performance, high-precision workloads and machine learning tends to be high-performance, low-precision.
CPUs and GPUs are excellent at deterministic computing – when a given input yields a single, predictable output – but they fall short in probabilistic computing. What we know as “judgment” is probabilistic computing where approximate answers come from missing data, or a lack of time or energy.
Graphcore is betting that hardware purpose-built for machine learning is the way to go. By designing a new class of processor – what they call the Intelligence Processing Unit (IPU) – machine learning workloads can get better performance and efficiency. Instead of focusing on scalars (CPUs) or vectors (GPUs), the IPU is specifically designed for processing graphs. Graphs in machine learning are very sparse, with each vertex connected to few other vertices. They estimate the IPU provides a 5x performance improvement for general machine learning workloads and 50-100x for some applications like autonomous vehicles. As a comparison, GPU performance for machine learning, according to Knowles, increases at a rate of 1.3-1.4x every two years.
Graphcore’s focus is on improving the speed and efficiency of the probabilistic computation needed for machines to exhibit what might be called intelligence. Knowles described intelligence as the culmination of four parts. First, condensing experience (data) into a probability model. Second, summarizing that model. Third, predicting the likely outputs given a set of inputs. Lastly, inferring the likely inputs given an output. With the IPU, Graphcore hopes to be on the cutting edge. “Intelligence is the future of all computing,” Knowles told the group, “It’s hard to imagine a computing task that cannot be improved by [intelligence].”
Yet we don’t use the word “betting” lightly above. Graphcore exited stealth mode at the end of October with the announcement of a $30 million funding round lead by Robert Bosch Venture Capital GmbH and Samsung Catalyst Fund. While the IPU technology sounds promising, it will not go to market until sometime in 2017 and so has not yet established itself in real-world deployments. It remains to be seen whether the industry will accept a new processor paradigm. As Michael Feldman noted on episode 148 of “This Week in HPC”, the trend has been away from HPC-specific silicon and toward commodity processors.
However, Intel’s purchase of Nervana Systems and Google’s development of the Tensor Processing Unit suggest that major tech companies are willing to look at purpose-built chips for their deep learning efforts. If Graphcore can deliver on their stated performance, the lower power and space profile of the IPU may be enough to drive adoption – particularly in power and space constrained environments like automobiles. | <urn:uuid:9b48d41e-c9a8-47cb-a00d-5926b32ecaba> | CC-MAIN-2017-04 | https://www.nextplatform.com/2016/12/05/building-intelligence-machine-learning-hardware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00038-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926336 | 940 | 2.75 | 3 |
This training video helps you keep your Hyper-V environment healthy. Hyper-V requires routine monitoring and maintenance to remain healthy. The training video is from Backup Academy.
Two common tasks are: Resource metering and Patching.
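On the metering side, collection is switched on per virtual machine and then queried on demand; the counters it reports are the ones listed below. A minimal sketch that drives the Hyper-V PowerShell cmdlets from Python (this assumes Windows Server 2012 or later with the Hyper-V module installed, and "web01" is a hypothetical VM name):

```python
import subprocess

def powershell(command: str) -> str:
    """Run a PowerShell command and return its textual output."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

vm = "web01"  # hypothetical virtual machine name
powershell(f"Enable-VMResourceMetering -VMName '{vm}'")   # start collecting the counters below
print(powershell(f"Measure-VM -VMName '{vm}'"))           # report average CPU, memory, disk, network
# Reset-VMResourceMetering -VMName 'web01' would clear the counters for a new measurement period
```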
Hyper-V Resource Metering
- Average CPU usage, measured in megahertz over a period of time.
- Average physical memory usage, measured in megabytes.
- Minimum memory usage (lowest amount of physical memory).
- Maximum memory usage (highest amount of physical memory).
- Maximum amount of disk space allocated to a virtual machine.
- Total incoming network traffic, measured in megabytes, for a virtual network adapter.
- Total outgoing network traffic, measured in megabytes, for a virtual network adapter | <urn:uuid:3e44ca44-42c9-45d3-843b-837c2d8129e5> | CC-MAIN-2017-04 | https://www.anoopcnair.com/hyper-v-healthy-video-training/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.870583 | 156 | 2.8125 | 3 |
Duqu is a sophisticated Trojan that was created by the same people who created the infamous Stuxnet worm. Its main purpose is to act as a backdoor into the system and facilitate the theft of private information. Duqu was first detected in September 2011, but according to Kaspersky Lab data, the first trace of Duqu-related malware dates back to August 2007. The company’s experts have recorded over a dozen incidents involving Duqu, with the vast majority of victims located in Iran. An analysis of the victim organizations’ activities and the nature of the information targeted by the Duqu authors clearly suggest the main goal of the attacks was to steal information about industrial control systems used in a number of industries as well as gathering intelligence about the commercial relations of a whole range of Iranian organizations.
The big unsolved mystery of the Duqu Trojan relates to how the malicious program was communicating with its Command and Control (C&C) servers once it infected a victim’s machine. The Duqu module that was responsible for interacting with the C&Cs is part of its Payload DLL. After a comprehensive analysis of the Payload DLL, Kaspersky Lab researchers have discovered that a specific section inside the Payload DLL, which communicates exclusively with the C&Cs, was written in an unknown programming language. Kaspersky Lab researchers have named this unknown section the “Duqu Framework.”
Unlike the rest of Duqu, the Duqu Framework is not written in C++ and it's not compiled with Microsoft's Visual C++ 2008. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language. However, Kaspersky Lab researchers have confirmed that the language is object-oriented and performs its own set of related activities that are suitable for network applications.
The language in the Duqu Framework is highly specialized. It enables the Payload DLL to operate independently of the other Duqu modules and connects it to its dedicated C&C through several paths including Windows HTTP, network sockets and proxy servers. It also allows the Payload DLL to process HTTP server requests from the C&C directly, stealthily transmits copies of stolen information from the infected machine to the C&C, and can even distribute additional malicious payload to other machines on the network, which creates a controlled and discreet form of spreading infections to other computers. A full description of the analysis and its related data can be found at Securelist, Kaspersky Lab’s research site.
“Given the size of the Duqu project, it’s possible that an entirely different team was responsible for creating the Duqu Framework as opposed to the team which created the drivers and wrote the system infection exploits,” said Alexander Gostev, Chief Security Expert at Kaspersky Lab. “With the extremely high level of customization and exclusivity that the programming language was created with, it is also possible that it was made not only to prevent external parties from understanding the cyber-espionage operation and the interactions with the C&Cs, but also to keep it separate from other internal Duqu teams who were responsible for writing the additional parts of the malicious program.”
According to Alexander Gostev, the creation of a dedicated programming language demonstrates just how highly skilled the developers working on the project are, and points to the significant financial and labor resources that have been mobilized to ensure the project is implemented.
Kaspersky Lab would like to make an appeal to the programming community and ask anyone who recognizes the framework, toolkit or the programming language that can generate similar code constructions, to please contact our experts.
We are confident that with your help we can solve this deep mystery in the Duqu saga.
The full version of the Duqu Framework analysis by Igor Soumenkov and Costin Raiu can be found here on Securelist. | <urn:uuid:06f8a9b5-cb8d-4993-ac45-ec6e7a45a0ec> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2012/Kaspersky_Lab_Experts_Discover_Unknown_Programming_Language_in_the_Duqu_Trojan | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956063 | 801 | 2.8125 | 3 |
Researchers are licking their chops with the potential to speed the execution of parallel applications on the largest supercomputers using Vampir, a performance tool that traces events and identifies problems in HPC applications. The scalability breakthrough with Vampir came as the result of work done on Jaguar, the predecessor to Titan at Oak Ridge National Laboratory.
Vampir (Visualization and Analysis of MPI Resources) was developed at the University of Dresden to help troubleshoot problems that develop in parallel HPC applications. The tool, which also now supports OpenMP, Pthreads, and Cuda in addition to MPI, is especially useful in flushing out any of the myriad bugs or other problems that appear when researchers begin running their code on larger parallel clusters.
The potential to smooth the scale-up process is especially important because researchers do not start out running their parallel codes on massive machines. Instead, they start out on departmental clusters or small sets of bigger machines, perhaps 100 processors at a time. There are any number of problems that can appear as researchers begin running HPC applications on larger machines–including the overuse of barriers and I/O chokepoints–and the scale-up process is rarely linear or smooth.
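A classic example is a loop that forces a global synchronization on every iteration: in a trace timeline, most ranks show up idling at the barrier while the slowest rank catches up. The tiny mpi4py sketch below is a hypothetical illustration of that pattern, not code from the Vampir project; the sleep stands in for imbalanced local work.

```python
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for step in range(100):
    time.sleep(0.001 * (rank + 1))   # stand-in for imbalanced per-rank work
    comm.Barrier()                   # global sync every step: ranks sit idle here in the trace
```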
This graph demonstrates how Vampir can increase bandwidth performance and maximum job size.
There were several issues with scaling Vampir itself that researchers had to overcome, according to a story published recently by the Oak Ridge Leadership Computing Facility website. One hurdle that researchers had to overcome involved how Vampir uses memory. Vampir works by installing itself onto a small portion of memory on each node, which it uses to log events as they occur in the HPC application.
However, if there isn’t enough memory available to capture all the events–either due to a long running application or by setting Vampir to collect a very fine level of detail of events–then the program slows down as huge amounts of data are written to the file system. The team of researchers addressed this problem by modifying the procedure to happen quickly and trouble-free, the ORLCF story notes.
The team successfully ran Vampir at scale on all 220,000 CPU processors on Jaguar in 2012. That was before Jaguar morphed into Titan, which sports nearly 300,000 CPUs (and more than 18,000 GPUs) and is currently the third-fastest supercomputer in the world. Prior to this, Vampir had only proven itself on a machine with 86,400 cores, according to the ORLCF story.
Terry Jones of Oak Ridge National Laboratory, left, and Joseph Schuchart of Technische Universität Dresden were part of a team that readied the Vampir performance tool to work on extreme-scale supercomputers.
“Understanding code behavior at this new scale with Vampir is huge,” ORNL computer scientist Terry Jones told ORLCF. “For people that are trying to build up a fast leadership-class program, we’ve given them a very powerful new tool to trace events at full size because things happen at larger scale that just don’t happen at smaller scale.”
The researchers involved in the work–including people from ORNL, Argonne National Laboratory, and Technische Universität Dresden–published their work in a June paper titled “Optimizing I/O forwarding techniques for extreme-scale event tracing.” | <urn:uuid:496279ba-30f7-4b83-9c33-6e9a3bd51215> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/07/31/vampir_rises_to_the_occasion_at_ornl/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00488-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950394 | 704 | 2.921875 | 3 |
If your anxiety about high electricity bills exceeds your fear of heights, there’s cause for celebration. The U.S. Department of Energy has announced $29 million in new SunShot investments—most of the funding to be directed toward “plug-and-play” solar photovoltaic (PV) systems that the “average Joe” can install on his own roof within one day.
Overall, the new solar projects will be aimed at improving grid connection and reducing installation costs through innovative do-it-yourself technologies and reliable solar power forecasts.
"The price of solar panels has fallen dramatically in recent years, but we also need to reduce the cost and time required to actually install them in homes and businesses, and help utility companies better integrate renewable energy into the grid." said Energy Secretary Steven Chu. "Projects like these can help reduce the cost of solar power and make it easier for American families and businesses to access clean, affordable energy."
Plug-and-Play PV Systems
Over the next five years, the DOE will invest $21 million to design PV systems that can be purchased, installed and operational in one day. Plug-and-play PV systems will make the process of buying, installing and connecting solar energy systems faster, easier and less expensive for homeowners. This effort is part of the department’s broader initiative to bring down “soft” or non-module hardware costs, which now account for a majority of the total outlay for residential systems.
Cambridge, Massachusetts-based Fraunhofer USA’s Center for Sustainable Energy Systems will develop PV technologies that enable consumers to easily select the right solar system for their house and install, wire and connect to the grid.
In addition, Raleigh-based North Carolina State University will lead a project to create standard PV components and system designs that can be adapted without difficulty to any residential roof, and can be installed and connected to the grid quickly and efficiently.
Reliable Solar Forecasting
The Energy Department also announced an $8 million investment in two projects that will help utilities and grid operators to more accurately forecast when, where, and how much solar power will be produced at U.S. solar energy plants. Enhanced solar forecasting technologies will help power system operators to integrate cost-competitive, reliable solar energy into the grid and provide clean, renewable energy to U.S. consumers.
Through this initiative, the University Corporation for Atmospheric Research, based in Boulder, Colorado, will research methods to understand cloud impact and develop short-term prediction techniques based on this work.
In Armonk, New York, the IBM Thomas J. Watson Research Center will lead a new project based on the Watson computer system that uses big data processing and self-adjusting algorithms to integrate different prediction models and learning technologies.
These projects are working with the Energy Department and the National Oceanic and Atmospheric Administration to improve the accuracy of solar forecasts and share the results of this work with industry and academia.
The SunShot Initiative is a collaborative national effort to make solar energy cost-competitive with other forms of energy by the end of the decade. Inspired by President Kennedy's "Moon Shot" program that put the first man on the moon, the SunShot Initiative has created new momentum for the solar industry by highlighting the need for American competitiveness in the clean energy race.
Edited by Brooke Neuman | <urn:uuid:e5a0e093-68bc-4f67-8578-261b6a8fd135> | CC-MAIN-2017-04 | http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/12/10/318985-friends-high-places-doe-funds-plug-and-play.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00452-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927704 | 758 | 2.546875 | 3 |
What You'll Learn
- Describe what application development tools are available
- Identify platforms where tools are used
- Describe the purpose and use of each tool
- Create a user profile, library, output queue, and job description
- Describe the key elements of Work Management, security, and device files
- Write a basic control language (CL) program to run application programs
- Define physical and logical files on the IBM i
- Create physical and logical files
- Use the basic features of the LPEX Editor to enter and maintain file definitions
- Use the basic features of the Screen Designer and Report Designer to design, create, and maintain displays and reports
- Use Interactive SQL and IBM i Navigator to create schema, tables, views, and indexes
Who Needs To Attend
This intermediate class is intended for Programmers and Systems Analysts or anyone with a need to understand the IBM i (formerly AS/400) from a programming point of view. The overall objective of this class is to provide basic IBM i concepts and an overview of the programming facilities available on the server as well as the client.
This class provides the necessary foundation for the rest of the programming curriculum and the database classes, including DB2 for i: DB Coding and Implementation Using DDS and CL Commands (OL62G). | <urn:uuid:245ab371-f7c9-435f-89ff-85aff159ccac> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/120726/ibm-i-technical-introduction/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00084-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.875759 | 271 | 3.15625 | 3 |
Concerns about the safety of cellular telephones - whether they create health risks or are safe to use in all operating environments - have spread to other wireless devices, such as the wireless networking equipment (WLANs)* manufactured by Cisco Systems® and Linksys®. These issues are of concern not only to Cisco customers, but to Cisco as well.
There is no correlated proof that these low-power devices pose any health risks to the user or the general public. Further, Cisco and Linksys wireless products are required to be evaluated for compliance with international RF regulations before being placed on the market for sale.
This document discusses the results of research into the possible health effects of RF devices.
Low-Power Wireless Devices Pose No Known Health Risk
Do low-power wireless devices such as WLAN client cards, access points, or RFID tags pose a health threat? Available evidence today suggests that there is no clear correlation between low-power wireless use and health issues.
Recent studies strongly suggest that the use of cellular telephone equipment does not create health risks. Two important recent studies that reached this conclusion are:
• A report written by Dr. John D. Boice, Jr. and Dr. Joseph K. McLaughlin of the International Epidemiology Institute in the United States in September 2002 for the Swedish Radiation Protection Authority.
• A report to the European Commission from the Scientific Committee on Toxicity, Ecotoxicity, and the Environment, titled "Opinion on Possible Effects of Electromagnetic Fields, Radio Frequency Fields, and Microwave Radiation on Human Health."
Few studies deal directly with the affects of WLAN devices. The emission levels of WLAN and RFID tags are below RF emission levels from typical cellular telephones. Therefore, any conclusions relating to the safety of cellular telephone equipment can almost certainly be applied to WLAN or RFID devices**.
The RF emission levels from a typical WLAN are well within the safety emission level thresholds set by the World Health Organization (WHO)***
* These devices are also referred to as RLANs by the ITU-R;, however, this paper refers to these devices as WLANs.
** Though Cisco does not make RFID devices, vendors and customers will require Cisco in some cases to use RFID devices to track products. Hence, the customer needs to be aware of RF issues concerning these devices.
*** The RF emission limits adopted by various national agencies are based on guidelines from the WHO International Commission on Non-Ionizing Radiation Protection (ICNIRP).
CISCO AND LINKSYS COMPLIANCE WITH RF EXPOSURE REQUIREMENTS
All Cisco and Linksys wireless products are evaluated to ensure that they conform to the RF emissions safety limits adopted by agencies in the United States and around the world. These evaluations are in accordance with the various regulations and guidelines adopted or recommended by the Federal Communications Commission (FCC)* and other worldwide agencies**.
Compliance for these devices is typically based on the Maximum Permissible Exposure (MPE) levels for mobile or fixed devices*** or per Specific Absorption Rate (SAR) tests for portable**** devices. Depending on the type of product, compliance is based on modeling, technical analysis, or RF measurement testing. The analysis or testing is performed in accordance with the various national and international standards adopted by independent third-party accredited labs.
Before any wireless device can be placed on the market, Cisco submits MPE technical analysis or SAR test data results to the appropriate agencies for review. These studies and test reports must demonstrate that the devices meet the RF emissions safety limits, or they cannot be placed on the market. Cisco and Linksys make sure that all of their products adhere to the stricter standards imposed by the worst case-the uncontrolled environment that imposes the tightest compliance limits.
The Cisco and Linksys manuals include statements on compliance with the various RF safety regulations, as well as guidance on proper installation and operation of these systems, to ensure that they remain in compliance with all applicable regulations.
IMPACT ON MEDICAL DEVICES
Another concern about cellular telephones has been their potential impact on medical devices. Many hospitals ban such phones from emergency rooms or other sensitive areas. Again, this has led some to question whether wireless networking devices can be used in proximity to medical equipment.
To address these concerns, Cisco wireless networking devices are specifically designed to reduce emissions that could interfere with medical devices. Cisco radio module products meet both the FCC and European Commission emission levels required for devices operating in a medical environment, specifically the EN 55011 emission standards.
In September 1996, an independent test was conducted by a hospital before the installation of a Cisco spread spectrum wireless network. The results showed that the Cisco 2.4-GHz wireless network devices did not interfere with or degrade the performance of heart pacemakers, even when operated at close proximity to these devices. In 2003, Cisco did further research testing with medical implant devices from two major medical equipment manufacturers, and tested its WLAN system with an MRI system at a major hospital research center. The results of the latest research was that the Cisco WLAN systems did not degrade the performance of either the MRI machine, nor degrade the performance of the pacemakers used in the research test. This research is continuing, including testing with Cisco 5-GHz devices whose initial tests are yielding similar results.
* The requirements as referenced are in Office of Engineering and Technology Bulletin 65C Revision 01-01, Evaluating Compliance with FCC Guidelines for Human Exposure to Radiofrequency Electromagnetic Fields.
** Such as ITU-T Recommendation K-52 Guidance on complying with limits with human exposure to electromagnetic fields
*** For discussion purposes, Cisco and Linksys access points and bridges are classified as either mobile or fixed, depending on antenna gain and installation requirements.
**** For discussion purposes, Cisco and Linksys client cards and voice over IP (VoIP) phones are classified as portable devices and may be subject to SAR testing.
OPERATION IN HAZARDOUS ENVIRONMENTS
Another occasional RF safety concern is the use of RF devices in hazardous locations such as oil refineries, mines, or construction sites where explosives are used. Several countries, including Australia and countries in the European Union, have adopted guidelines for operating wireless devices in hazardous environments, although they do not specifically address low-power wireless networking systems.
In most circumstances, low-power radios (such as WLANs) operating at less then 100mW Effective Isotropic Radiated Power (EIRP) and operating at 2.4 and 5.8 GHz should not pose any risk if operated under normal circumstances. However, it is recommended that you first consult the facility's safety administration to determine its policy on the use of RF devices in certain areas. The chances are extremely low that the radio will cause interference that could lead to a safety problem or actually cause a heating effect that can cause an accident; however, caution is urged.
It is recommended that the installation of radio devices in hazardous areas be done by professional installers in accordance with the recommendations of the group responsible for safety at that site.
5. Epidemiologic Studies of Cellular Telephones and Cancer Risk: Dr. John Boice and Dr. Joseph Mclaughlin, October 2002.
6. European Commission Report, Scientific Committee on Toxicity, Ecotoxicity, and the Environment: "Opinion on Possible Effects of Electromagnetic Fields, Radio Frequency Fields, and Microwave Radiation on Human Health," 10/30/2001.
7. International Telecommunications Union-Telecom Sector Recommendation K-52 Guidance on complying with limits for human exposure to Electromagnetic Field, September, 2004. | <urn:uuid:71c8dc00-4b90-4d9a-8218-cdd9979a2c5f> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/products/collateral/wireless/aironet-1200-access-point/prod_white_paper09186a0080088791.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00296-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933209 | 1,574 | 2.75 | 3 |
The Asia-Pacific accounted for the largest share of nanofiber production of 44%, surpassing North America, standing at 29%, and Europe, estimated at 24%. The Asia-Pacific takes the big lead of about 49% in nanofibers research; the contribution in the last 5 years has been particularly astonishing. This immense growth could be possible mainly due to the higher percentage of government funding allocation in the areas of sustainable energy and environment.
Nanofibers have become one of the greatest exploratory subjects for academics and have come across as the most popular source of business fascination for many industries. Many of today’s most successful companies and organizations have been realizing the importance of nanofibers. However, the market success is largely dependent on the application development suitable for the wider business community.
Market Dynamics of the Asia-Pacific Nanofibers Market:
The unique physical properties of nanofibers make them desirable compound in several industries and as a result, a lot of investment is being made on the development of nanofibers.
Nanofibers are more expensive than the traditionally used materials.
Nanofibers have the ability to change the properties of a wide spectrum of materials and even assist in the creation of totally new materials; of late they are finding application in almost every field. Presently, they are extensively used in air filtration, as they are very effective at removing submicron particles, including viruses and bacteria. Their high filtration efficiency does not change over time, unlike that of electrically charged filters, while the energy consumption necessary for filtration is much lower than that of commonly used filters. Nanofiltration membranes can separate molecules ranging in size between 0.5 and 10 nm.

Nanofibers offer very high filtration efficiency which, fortunately, does not degrade with time as it does with electrically charged filters. Moreover, the energy consumption of nanofibers in accomplishing this task is much lower than that of electrically charged fibers.
The Asia-Pacific nanofiber market can be segmented based on their type (polymeric nanofibers, carbon nanofibers, ceramic nanofibers, glass nanofibers, composite nanofibers), by their end-users (medical, chemical, energy, electronics, healthcare, aerospace, composites, textiles and automotive), and by their geography (China, India, Japan and other countries).
The report offers insights into the recent developments, financials, and products& services offered by the major players of the industry in the region. Some of the key players are:
Key Deliverables in the Study: | <urn:uuid:0b429dce-0b90-477b-b80d-06af3d7f8c07> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/asia-pacific-nanofibers-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00112-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952316 | 558 | 2.625 | 3 |
Definition: Find the weight (or length) of the shortest paths between all pairs of vertices in a weighted, directed graph.
See also Floyd-Warshall algorithm, Johnson's algorithm; similar problems: single-source shortest-path problem, shortest path, minimum spanning tree, traveling salesman, all simple paths.
Note: The problem is to find the weights of the shortest paths between all pairs of vertices. For a map, it is to produce the (shortest) road distances between all cities, not which roads to take to get from one city to another.
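For reference, the Floyd-Warshall algorithm mentioned above solves the problem in O(V^3) time and a few lines of code. A short Python sketch, assuming vertices numbered 0..n-1, an adjacency matrix of edge weights with float('inf') for missing edges, and no negative-weight cycles:

```python
def all_pairs_shortest_paths(weights):
    """Return the matrix of shortest-path weights between all pairs of vertices."""
    n = len(weights)
    dist = [row[:] for row in weights]      # copy so the input matrix is not modified
    for v in range(n):
        dist[v][v] = min(dist[v][v], 0)     # distance from a vertex to itself
    for k in range(n):                      # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
roads = [
    [0, 3, INF],
    [INF, 0, 1],
    [2, INF, 0],
]
print(all_pairs_shortest_paths(roads))      # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```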
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 1 February 2005.
Cite this as:
Paul E. Black, "all pairs shortest path", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 1 February 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/allPairsShortestPath.html | <urn:uuid:dd8531b9-48c7-4a4d-bed5-9d2cec04799b> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/allPairsShortestPath.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.878335 | 237 | 3.703125 | 4 |
Definition: A collection of items in which only the earliest added item may be accessed. Basic operations are add (to the tail) or enqueue and delete (from the head) or dequeue. Delete returns the item removed. Also known as "first-in, first-out" or FIFO.
Formal Definition: It is convenient to define delete or dequeue in terms of remove and a new operation, front. The operations new(), add(v, Q), front(Q), and remove(Q) may be defined with axiomatic semantics as follows.
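The axioms themselves are omitted from this excerpt; in the usual presentation (for any queue Q and items v and w) they are:
- front(add(v, new())) = v
- remove(add(v, new())) = new()
- front(add(v, add(w, Q))) = front(add(w, Q))
- remove(add(v, add(w, Q))) = add(v, remove(add(w, Q)))
- front(new()) and remove(new()) are errors.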
Also known as FIFO.
Generalization (I am a kind of ...)
abstract data type.
Specialization (... is a kind of me.)
See also deque, stack, priority queue, first come, first served.
Demonstrations with dynamic array, fixed array, and linked list implementations.
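As a complement to those demonstrations, here is a minimal linked-list sketch of the ADT in Python; the operation names follow the entry above (enqueue/add, front, and dequeue/delete, which returns the item removed):

```python
class Queue:
    """Minimal FIFO queue backed by a singly linked list."""

    class _Node:
        __slots__ = ("value", "next")

        def __init__(self, value):
            self.value = value
            self.next = None

    def __init__(self):
        self._head = None   # front of the queue: items are removed here
        self._tail = None   # tail of the queue: items are added here

    def enqueue(self, value):               # "add"
        node = self._Node(value)
        if self._tail is None:
            self._head = self._tail = node
        else:
            self._tail.next = node
            self._tail = node

    def front(self):
        if self._head is None:
            raise IndexError("front of empty queue")
        return self._head.value

    def dequeue(self):                      # "delete": returns the item removed
        value = self.front()
        self._head = self._head.next
        if self._head is None:
            self._tail = None
        return value

q = Queue()
q.enqueue("a")
q.enqueue("b")
assert q.front() == "a"
assert q.dequeue() == "a"                   # first in, first out
```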
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 7 July 2014.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "queue", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 7 July 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/queue.html | <urn:uuid:f93456f7-d77c-4698-9d16-99355754b4af> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/queue.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894094 | 299 | 3.46875 | 3 |
A reader's guide to virtual world technology
Links to articles and resources for feds looking to learn more about Second Life and other virtual world technology
Virtual training done right
Source: Workforce Management
When considering virtual-training applications, first think in terms of interaction.
Experts say that too often, training managers develop 3-D worlds that look impressive but do not enable participants to interact with one another or the environment in a meaningful way, according to freelance writer Sarah Fister Gale’s article in Workforce Management magazine.
Virtual training should enable users to act and see the results of their actions, or there's no real benefit to having a 3-D environment. Users should also be able to see what other participants are doing and interact with them through gestures and other visual or verbal cues.
That kind of interaction will keep users engaged in a way that classroom instruction never can, experts say. Without it, users will quickly lose interest.
Virtual training at DOD: An analysis
Congressional Research Service
An April 2008 article from the archives of the Federation of American Scientists provides a useful overview of the policy implications of virtual training at the Defense Department and in the intelligence community.
Congressional researchers outline the benefits of virtual training for DOD, which include the lack of wear and tear on real equipment and related costs. The technology also makes it possible to train with first responders and others, which is a logistical challenge in the real world.
But the report also notes that more research is needed to gauge the effectiveness of virtual training. For example, such training usually takes place in comfortable, air-conditioned environments, “so the operators never get tired from running with a large backpack, or wet and cold, or otherwise physically stressed as they would in a real-world training exercise,” the report states.
A running commentary on virtual worlds
Source: Federal Consortium for Virtual Worlds
Last month, the Federal Consortium for Virtual Worlds asked a handful of bloggers to provide a blow-by-blow report on its conference in Washington, D.C. As is usually the case with live blogging, the posts lack details and analysis. But the bloggers did a good job of capturing insightful comments and intriguing questions, and they provide a curious source of ideas and commentary on government agencies’ use of virtual worlds.
For example, traditional online communities of practice tend to be asynchronous, meaning users do not need to be online at the same time to interact. But virtual worlds are built around synchronous, or real-time, collaboration. One speaker wondered how a hybrid approach might work.
Second Life tutorials
Source: Kansas State University
Kansas State University has provided a lifeline for those who find themselves baffled by Second Life, as has been known to happen even to technology-savvy editors and reporters.
The university’s Communications Department has assembled a good collection of how-to resources. The first three documents are intended for folks looking to get started: They cover how to create avatars; how to drive, pilot or fly virtual vehicles; and basic tips, tricks and hints.
The site also includes links to tutorials in Second Life, a list of related development tools, and pointers to other sites that offer tutorials, tips and tools. | <urn:uuid:031a65cc-16bc-47fb-a430-b32777a1e43e> | CC-MAIN-2017-04 | https://fcw.com/articles/2009/05/04/pointers-second-life.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00499-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931006 | 659 | 2.765625 | 3 |
Kanchongkittiphon W. (Boston Children's Hospital; Harvard University; Mahidol University), Mendell M.J. (Indoor Air Quality Program), and 6 more authors.
Environmental Health Perspectives | Year: 2015
Background: Previous research has found relationships between specific indoor environmental exposures and exacerbation of asthma. Objectives: In this review we provide an updated summary of knowledge from the scientific literature on indoor exposures and exacerbation of asthma. Methods: Peer-reviewed articles published from 2000 to 2013 on indoor exposures and exacerbation of asthma were identified through PubMed, from reference lists, and from authors' files. Articles that focused on modifiable indoor exposures in relation to frequency or severity of exacerbation of asthma were selected for review. Research findings were reviewed and summarized with consideration of the strength of the evidence. Results: Sixty-nine eligible articles were included. Major changed conclusions include a causal relationship with exacerbation for indoor dampness or dampness-related agents (in children); associations with exacerbation for dampness or dampness-related agents (in adults), endotoxin, and environmental tobacco smoke (in preschool children); and limited or suggestive evidence for association with exacerbation for indoor culturable Penicillium or total fungi, nitrogen dioxide, rodents (nonoccupational), feather/down pillows (protective relative to synthetic bedding), and (regardless of specific sensitization) dust mite, cockroach, dog, and dampness-related agents. Discussion: This review, incorporating evidence reported since 2000, increases the strength of evidence linking many indoor factors to the exacerbation of asthma. Conclusions should be considered provisional until all available evidence is examined more thoroughly. Conclusion: Multiple indoor exposures, especially dampness-related agents, merit increased attention to prevent exacerbation of asthma, possibly even in nonsensitized individuals. Additional research to establish causality and evaluate interventions is needed for these and other indoor exposures. © 2014, Public Health Services, US Dept of Health and Human Services. All rights reserved.
To recap, my original definition of a Virtual Private Cloud (VPC) is a method for partitioning a public computing utility such as EC2 into quarantined virtual infrastructure. A VPC may encapsulate multiple local and remote resources so that they appear as a single homogeneous computing environment, providing the ability to securely use remote resources as part of a seamless global compute infrastructure. A core component of a VPC is a virtual private network (VPN) and/or a virtual LAN (VLAN) in which some of the links between nodes are encrypted and carried by virtual switches.
According to the new VPC website, the "Amazon Virtual Private Cloud (Amazon VPC) is a secure and seamless bridge between a company’s existing IT infrastructure and the AWS cloud. Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources. Amazon VPC integrates today with Amazon EC2, and will integrate with other AWS services in the future."
VPC definitions and terminology aside the new service is important for a few reasons.
1. In a sense Amazon now has publicly admitted that private clouds do exist and the core differentiation is isolation (what I call quarantined cloud infrastructure), be it virtual or physical.
2. Greater Hybrid Cloud Interoperability & Standardized Network Security by enabling native VPN capabilities within their cloud infrastructure and command line tools. Amazon's VPC has added a much greater ability to interoperate with existing "standardized" VPN implementations including:
- Ability to establish IKE Security Association using Pre-Shared Keys (RFC 2409).
- Ability to establish IPSec Security Associations in Tunnel mode (RFC 4301).
- Ability to utilize the AES 128-bit encryption function (RFC 3602).
- Ability to utilize the SHA-1 hashing function (RFC 2404).
- Ability to utilize Diffie-Hellman Perfect Forward Secrecy in “Group 2” mode (RFC 2409).
- Ability to establish Border Gateway Protocol (BGP) peerings (RFC 4271).
- Ability to utilize IPSec Dead Peer Detection (RFC 3706).
- Ability to adjust the Maximum Segment Size of TCP packets entering the VPN tunnel (RFC 4459).
- Ability to reset the “Don’t Fragment” flag on packets (RFC 791).
- Ability to fragment IP packets prior to encryption (RFC 4459).
- (Amazon also plans to support Software VPNs in the near future.)
3. Further proof that Amazon is without any doubt going after the enterprise computing market, where a VPN capability is arguably one of the most requested features.
4. Lastly, greater network partitioning: using Amazon's VPC, your EC2 instances are on your network. They can access or be accessed by other systems on the network as if they were local. As far as you are concerned, the EC2 instances are additional local network resources -- there is no NAT translation. A seamless bridge to the cloud.
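For readers who want to see how the pieces fit together programmatically, the sketch below uses the AWS SDK for Python (boto3), which arrived after this post was written; the CIDR block, public IP and BGP ASN are placeholders, and AWS returns the IPSec/BGP configuration for loading into your own VPN device.

```python
# Hedged sketch: wiring an isolated VPC to on-premise gear over IPSec.
# All addresses and the BGP ASN below are placeholders.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")          # isolated address space
vpc_id = vpc["Vpc"]["VpcId"]

cgw = ec2.create_customer_gateway(Type="ipsec.1",      # your on-premise VPN device
                                  PublicIp="203.0.113.12",
                                  BgpAsn=65000)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")           # AWS side of the tunnel
ec2.attach_vpn_gateway(VpcId=vpc_id,
                       VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

# The returned XML holds the tunnel addresses, pre-shared keys and BGP
# settings to configure on the corporate firewall or router.
print(vpn["VpnConnection"]["CustomerGatewayConfiguration"])
```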
In the blog post announcing the new service, I found their hybrid cloud use case particularly interesting; "Imagine the many ways that you can now combine your existing on-premise static resources with dynamic resources from the Amazon VPC. You can expand your corporate network on a permanent or temporary basis. You can get resources for short-term experiments and then leave the instances running if the experiment succeeds. You can establish instances for use as part of a DR (Disaster Recovery) effort. You can even test new applications, systems, and middleware components without disturbing your existing versions."
This was exactly the vision I outlined in my original post describing the VPC concept. I envisioned a VPC in which you are given the ability to virtualize the network, giving it particular characteristics and an appearance that match the demands and requirements of a given application deployed in the cloud, regardless of whether it's local or remote. Amazon seems to realize that cloud computing isn't a big switch where suddenly you stop using existing "private" data centers. Instead, the true opportunity for enterprise customers is a hybrid model where you use the cloud as needed, when needed, and if needed -- and not a second longer than needed.
I also can't help wondering how other cloud-centric VPN providers such as CohesiveFT will respond to the rather sudden addition of VPN functionality, which in a single move makes third-party VPN software obsolete, or at the very least not nearly as useful. (I feel your pain; remember ElasticDrive?) I am also curious to see how other IaaS providers such as Rackspace respond to the move; it may or may not be in their interest to offer compatible VPC services that allow for a secure interface between cloud service providers. The jury's still out on this one.
Let me also point out that although Amazon's new VPC service does greatly improve network security, it is not a silver bullet and the same core risks in the use of virtualization still remain. If Amazon's hypervisor is exploited, you'd never know it and unless your data never leaves an encrypted state it's at risk at one end point or another.
At Enomaly we have also been working on enhanced VPC functionality for our cloud service provider customers around the globe. For me, this move by Amazon is a great endorsement of an idea we, as well as others, have been pushing for quite a while.
On a side note, before you ask: yes, I'm just glad I bought the VirtualPrivateCloud.com/.net/.org domain names when I wrote the original post. And yes, a placeholder site and announcement is coming soon ;)
Debris from the Columbia shuttle catapulted through the skies over at least four southwestern states on Feb. 1, creating a puzzle spread over thousands of square miles. GIS and GPS made a whole picture from this extensive landscape and slowly created maps showing the retrogressive pattern of debris.
The initial concern was recovering the remains of the seven astronauts who lost their lives in the event. At the same time, an intense effort was launched to find the cause of the shuttle's midair disintegration. Collection of debris was a key element in this effort.
The need for accurate mapping and GIS technology was immediately apparent -- calls for GIS support were issued on day one. The state was ready to respond to that need, according to Drew Decker, director of Texas Natural Resources Information System (TNRIS).
"What we have been doing for the last four to five years is collecting a lot of base map information," he said. "Therefore, we already had the resources."
As an agency within the Texas Water Board, TNRIS had already gathered statewide data. The agency also collaborated with Stephen F. Austin University in Nacogdoches, Texas, which has an extensive database of maps, and surrounding counties to form a data pool -- the foundation for reconstruction of the shuttle's final minutes.
This repository of information provided teams throughout eastern Texas with data about roads, terrain, infrastructure and topography -- all plotted with coordinates that allowed field personnel to document what debris was found and where. Decker said some of the search teams were equipped with GPS receivers, allowing them to immediately record the data.
Some of the debris was logged using a 24-satellite array, which provided accuracy within 100 feet or more. Software from Trimble Navigation Ltd. fine-tuned the GPS data to within 3 feet. This information was then transmitted to ArcInfo GIS software from ESRI where it was plotted on TNRIS' data-rich maps.
Decisions made early in the recovery effort also helped the extensive process of mapping debris. Officials in Nacogdoches developed a "data dictionary" to create standards before any logging of debris was done, according to Mick Garrett, Trimble's North American sales manager, mapping/GIS products. The initial effort, however, was decidedly low-tech.
"A grid was laid out, about a kilometer square ... people would walk the grid and look for pieces," Garrett said. "When a piece was found it was flagged. If it was a significant piece, we would take a GPS location, photograph it and flag it so someone would come and recover it. There were so many pieces of tile out there -- some the size of a silver dollar."
Reports of debris came from as far away as California.
Narrowing the Search
"Teams were based out of regional efforts by county," Decker said, and search teams were sending information to Lufkin, Texas. He said layers of data on GIS maps indicated where debris was most likely located and what natural impediments -- such as water or rugged terrain -- might hamper the search.
"There are some places where you might expect to find things and don't," he said. "That might mean you have to go out and look again."
One week after the disaster, the search refocused in eastern Texas, around Lufkin and Nacogdoches, where teams from the university's GIS program and its Forest Resources Institute found thousands of parts and pieces. The search area eventually narrowed to a 10-mile by 240-mile corridor along the shuttle's path. The landing gear was found in this area in mid-February.
The terrain in eastern Texas ranges from rugged -- thick woods and impenetrable underbrush -- to open pine forests, marshy streambeds and 238 bodies of water. The search was often hampered by weather, which turned from warm and sunny to rainy and near-freezing.
Information transmitted from the ground to a satellite receiver is easily affected by the environmental factors, such as weather and terrain. High-quality electronics can compensate for those variables, according to Trimble's Garrett.
"Certain errors occur when GPS signals travel through the ionosphere and troposphere that can affect the accuracy of the position," he said. "What we can do is use a method called differential correction that removes that error. Instead of being 30 or 40 feet out of position, we are within 3 or 4 feet."
Accuracy is critical in a disaster that covers multiple counties, said Joseph B. Bowles, senior technical marketer for ESRI. "If you are off just a half a degree, your line goes off by miles," he said.
Garrett said this kind of accuracy depends, in part, on good firmware -- generally defined as software embedded in hardware. The tool used in Nacogdoches and other sites in eastern Texas was the company's GPS Pathfinder Pro XR receiver that runs TerraSync software.
A Collaborative Effort
More than 40 counties in Texas reported finding shuttle material. Information was drawn from multiple sources across jurisdictional lines -- from all three levels of government.
The effort included 17 state agencies, 17 federal agencies and numerous local governments. Texas has a history of cross-boundary collaboration, said Sheila Sullivan, ESRI's San Antonio regional manager.
"It's a real community here in Texas, and people see the need to share data," she said. "They recognize we have to share data to achieve things." Sullivan also said there was very little turf struggle over who would take the lead in the days following the disaster.
Adequate funding from the Texas Legislature created a platform for cross-jurisdictional sharing long before a disaster demanded it, Decker said.
Communities throughout the state had a head start on collaboration because of TNRIS' five-year statewide mapping project. The Legislature also recognized the ongoing need to update maps and allocated subsequent funding so TNRIS could update information.
"We happen to have a lot of data here because the Legislature provided funding for four years," Decker explained. "We could invest in partnerships with local government." | <urn:uuid:fc7115a3-05ef-4e27-be95-69d4c1d5b103> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Plotting-the-Path-of-Destruction.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00525-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97244 | 1,255 | 3.0625 | 3 |
Fuel Cells Offer Different Advantages, Disadvantages

Meanwhile, fuel cells also present an alternative source of power for notebook PCs. Numerous manufacturers are currently working on creating fuel cells that can be used with notebooks. The first fuel cells, which are hitting the market now in small quantities from companies such as UltraCell, will be used as external power sources, becoming a portable electric outlet of sorts, for notebooks used by military units or scientists who currently have to carry numerous battery packs, said Ted Prescop, senior applications engineer at UltraCell, in Livermore, Calif.

UltraCell's first fuel cell, a 2.2-pound unit that costs about $1,000 and uses 1-liter methanol fuel cartridges, is sampling to some clients now and is scheduled to go into wider production in early 2007. The unit will serve as a universal power supply for several different notebook brands, he said. Right now, "Fuel cells don't make sense for short run-times. It only makes sense if you're going to be running for a couple of days," he said. "It's much easier to bring along extra fuel than extra batteries."

Over time, fuel cells and fuel cartridges will come down in price and size, Prescop predicted, making them more consumer-friendly. UltraCell's second-generation fuel cell, which he said is being developed in the company's labs now, will be attachable to the back of a notebook and will work in concert with a notebook's battery. The cell's half-liter fuel cartridge may stick out slightly, he said, but isn't likely to be more obtrusive than one of today's extended run-time battery packs. Travelers using the product could leave their AC adapters behind. A third-generation integrated fuel cell, which could be built into a notebook's chassis, should arrive within about five years, Prescop said.

But even with a fuel cell present, a notebook is still likely to use a battery, given that fuel cells tend to deliver a constant rate of power, while notebooks' needs vary. The companion battery, which is likely to be smaller than those used today, would also provide energy to start the fuel cell. However, its main purpose would be to act as a gas tank of sorts, storing energy from the fuel cell and then supplying it to the notebook according to the machine's needs. Theoretically, a small fuel cell with a 200-cubic-centimeter cartridge could meet most people's needs, Prescop said. A fuel cell that provided 180 watt-hours of power per 200cc cartridge would supply enough juice to run a 10-watt ultraportable notebook for about 18 hours, for example.

But, despite the advances that are possible for fuel cells or the more immediate potential for silver-zinc's ability to exceed the energy density of today's lithium-ion batteries, any potential replacement technology will still face a difficult road. Given that PC makers such as HP operate on thin margins, costs -- ranging from the costs involved in redesigning battery circuitry to the potential price premiums of new battery packs themselves -- are all critical considerations. "From a competitive standpoint, from a dollars-per-watt-hour perspective, will [zinc-silver] ever approach where lithium-ion is?" Wozniak asked. For now, PC makers such as HP are likely to prefer to simply continue working to improve the safety and energy capacity of lithium-ion technology, he indicated. Efforts are already afoot to do both. Sanyo, Sony and Panasonic (of Matsushita Electrical Industrial Co.), the world's top battery makers, continue to work to squeeze more out of lithium-ion technology.
Intel and Panasonic, for example, have been collaborating to develop new lithium-ion cell designs that use nickel to increase their energy density. (An Intel representative said on Oct. 4 that company officials were unavailable to comment on the technology, while Panasonic officials in New Jersey did not return a call requesting more information on the battery technology.)

Companies are also exploring other approaches to lithium-ion technology, including one called lithium-ion phosphate. Valence Technology, of Austin, Texas, one company that is developing a lithium-ion phosphate technology, has said the approach is safer because its technology uses different types of materials, including a phosphate-based cathode, which are not prone to thermal runaway or overheating and will not burn. However, none of the advances are without trade-offs, Wozniak said. "Lithium-ion phosphate has been around a while and it is a safer chemistry for no other reason than [that] it's a lower-energy chemistry," he said. At about 1.6 amp-hours, versus the 2.6 to 2.9 amp-hour ratings of current and near-future lithium-ion battery cells, "You need a battery that's half again as big to get the same amount of power. It's not well-suited for portable applications, really."

Thus, PC makers have begun working on ways to improve the safety of lithium-ion batteries by addressing the way the cells are made. The OEM Critical Components Committee of the electronics products standards body IPC recently met to begin work on a set of standards for manufacturing and testing lithium-ion cells. Its aim is to raise the bar for battery cell makers and therefore make it less likely for contaminants to enter the cells during manufacturing.

Others are doing work individually. HP has been working with a company called Boston-Power, Wozniak said. He said Boston-Power is formulating a revised lithium-ion battery chemistry that can lengthen battery packs' life cycles by roughly tripling the number of charge/discharge cycles they can endure before their ability to store energy degrades. However, ultimately, "You can't sell safety," he said. "You can't say, 'Here's my regular battery and here's my safer one.' If you come up with a technology that's safer, it has to be licensed across the industry to have an impact."
"The main benefit to a fuel cell is you can get a lot more energy per pound than you can with a battery," Prescop said. "What a lot of fuel cell companies are struggling with is how to make a small system thats also affordable. Its going to take a few years to get down in price to where average laptop users are going to consider [a fuel cell] as an alternative to batteries." | <urn:uuid:68d5d58a-557e-4ba2-84c3-aac8773f91fb> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Desktops-and-Notebooks/LithiumIon-Batteries-to-Survive-Notebooks-Flames/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00251-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961066 | 1,316 | 2.984375 | 3 |
Definition: A tree where every subtree of a node has keys less than any other subtree of the node to its right. The keys in a node are conceptually between subtrees and are greater than any keys in subtrees to its left and less than any keys in subtrees to its right.
Specialization (... is a kind of me.)
binary search tree, B-tree, (a, b)-tree, ternary search tree.
See also search tree property, move-to-root heuristic, k-d tree.
Note: A search tree that is also a binary tree is a binary search tree.
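For the binary case mentioned in the note, a minimal unbalanced sketch of the search tree property might look like this (illustrative only):

```python
# Minimal binary search tree sketch: every key in a node's left subtree is
# smaller than the node's key, and every key in the right subtree is larger.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                      # duplicate keys are ignored

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))   # True False
```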
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 14 December 2005.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "search tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 December 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/searchtree.html | <urn:uuid:fce83037-c80a-47ee-b99c-4b586942c6e4> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/searchtree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00159-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924435 | 246 | 3.265625 | 3 |
Governor Andrew M. Cuomo is doing his best to ensure that New York State residents will not just be “whistling in the dark” during the next power outage. He announced this week that the state is making $20 million available to support clean-energy projects that will provide continuous power and heat during the weather-related blackouts that are hitting the Northeast with increasing frequency.
These projects support the recommendations of Governor Cuomo’s NYS 2100 Commission to use distributed generation to provide backup power when electricity lines are down. The commission—co-chaired by Judith Rodin, president of the Rockefeller Foundation, and Felix G. Rohatyn, former chairman of the Municipal Assistance Corporation— was formed last year in the aftermath of Hurricane Sandy and tasked with finding ways to improve the resilience and strength of the state’s infrastructure in the face of natural disasters and other emergencies.
Combined heat and power (CHP) projects provide manufacturers, apartment buildings, hospitals, universities and other large buildings the ability to produce a portion of their own heat and electricity.
“Investing in combined heat and power technology will help keep our electric grid reliable and efficient, and make our businesses more competitive,” said Governor Cuomo. “In the wake of Hurricane Sandy, we have learned the value and importance of having clean-energy technologies like CHP in place that will keep the lights on and systems running for our residents and businesses.”
CHP projects, also known as “cogeneration,” capture heat produced during electricity generation and use it to provide on-site heat or hot water to buildings. These installations are capable of achieving higher levels of fuel efficiency by simultaneously producing both electric and useful thermal energy at the facility where the energy is needed. This localized generation can both reduce a facility's vulnerability to electric distribution system outages and decrease peak demand on the electric grid. Power created at the customer site also avoids inherent energy losses during transmission and distribution.
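The efficiency argument is easy to see with a back-of-the-envelope calculation; the figures below are illustrative only and are not NYSERDA's.

```python
# Illustrative CHP efficiency calculation (made-up figures).
fuel_in     = 100.0   # energy content of fuel burned on site
electricity = 35.0    # electricity delivered
useful_heat = 45.0    # heat captured for hot water / space heating

chp_efficiency = (electricity + useful_heat) / fuel_in
print(f"Overall CHP efficiency: {chp_efficiency:.0%}")   # 80%
```

By contrast, generating the same electricity at a distant plant and the heat in a separate boiler typically wastes a larger share of the fuel, before transmission losses are even counted.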
This program, administered by the New York State Energy Research and Development Authority (NYSERDA), will only fund CHP systems that can continue to operate during a grid outage. In addition, all applicants in flood zones must install systems in locations that would be “high and dry” in the wake of a worst-case flood scenario.
“Governor Cuomo has called for making the state’s infrastructure more resilient in the face of extreme weather like we witnessed with Hurricane Sandy. Through the use of combined heat and power technology, building owners can make that happen,” said Francis J. Murray Jr., President and CEO, NYSERDA. “CHP systems can benefit our metropolitan areas in many ways, from easing air pollution to reducing fossil-fuel consumption, as well as reducing the pressure on the electric grid in times of great need.”
Since relieving strain on the electric grid is so important in densely-populated New York City, projects in the city and lower Hudson Valley will receive slightly higher funding, based on a sliding scale. In addition, this program will provide 10 percent more funding to projects that can power an official “facility of refuge”—a shelter to be used at times of emergency—as recognized by the American Red Cross or the local Office of Emergency Management.
The program will pay an incentive of up to $1.5 million per project for installing equipment approved by NYSERDA and installed by approved CHP system vendors. Projects can be as small as 50 kilowatts and as large as 1.3 megawatts, based on building requirements. Incentive amounts will be available on a first-come, first-served basis until December 30, 2016 or until all funds are committed. Only CHP systems installed at sites that pay the System Benefits Charge (SBC) are eligible for incentives.
After Hurricane Sandy, Governor Cuomo announced three commissions, NYS 2100, NYS Ready and NYS Response, to improve the State’s emergency preparedness and response capabilities, and to strengthen the state's infrastructure to withstand natural disasters.
This week’s announcement aligns with a recommendation within the NYS 2100 Commission Report that NYSERDA should expand its incentive programs for distributed generation resources, including CHP, and provide preference to those facilities that will serve as refuge during storm outages.
Over the past 12 years, NYSERDA has invested more than $100 million in CHP technology. This has helped to cut energy costs and reduce the energy use of industrial, commercial, institutional and multifamily residences.
Edited by Brooke Neuman | <urn:uuid:57559557-d232-4be4-ad27-582d1b5816ef> | CC-MAIN-2017-04 | http://www.iotevolutionworld.com/topics/smart-grid/articles/2013/02/15/327098-new-york-state-allots-20-million-backup-systems.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962681 | 938 | 2.609375 | 3 |
Since optical fiber and fiber optic ports are still very expensive, fiber is mainly used for the connections between the core switch and the aggregation-layer switches, or for cascading aggregation-layer switches to one another. It should be noted that fiber ports do not have the ability to stack and can only be used in cascade.
1. Fiber optic jumper wire cross connection
Every switch provides its fiber ports in transmit/receive pairs, so a fiber link always requires two jumper strands; otherwise the ports cannot communicate. When switches are cascaded through their fiber ports, the transmit and receive strands must be crossed over: the "transmit" side at one end must connect to the "receive" side at the other, and vice versa. Cisco GBIC fiber modules carry transmit/receive markings -- an inward arrow on the left indicates "receive" and an outward arrow on the right indicates "transmit." If the port LED does not light, the connection has failed; the LED turns green only once the fiber link is established.
Similarly, when an aggregation-layer switch is connected to the core switch, the transmit and receive fibers between the two ports must also be cross-connected, as shown:
2. The connection of the optoelectronic transceiver
When the backbone cabling between buildings or between floors uses optical fiber while the horizontal cabling uses twisted pair, the two transmission media must be joined. One option is to use a switch that has both a fiber port and RJ-45 ports, interconnecting the switches through their optical ports. Another is to use an inexpensive photoelectric (media) converter: one side connects to the fiber, and the other side connects to a twisted-pair port on the switch, performing the optical-electrical conversion, as shown:
Comparatively speaking, modular switches offer higher transmission performance, while photoelectric converters are cheaper. The choice of equipment should therefore be based on the network's data transmission needs and the available budget. Note that not all fiber transceivers support full duplex -- some products support only half duplex -- so check this carefully at the time of purchase. In addition, for compatibility it is advisable to use products of the same brand and type.
One end of the optical transceiver is connected, via a fiber jumper at the fiber optic patch panel, to the remote fiber interface; the other end is connected with a twisted-pair jumper to an RJ-45 port on the switch, linking it with the other computers on that switch and completing fiber transmission across the network backbone.
The city of Long Beach, Calif., requires re-evaluation of its City Council districts once every five years to adjust for changes in population and ethnic composition. In 1991, redistricting was a labor-intensive process that took five months to complete. Thanks to technology, the next round of redistricting was much more pleasant. GIS Manager Tina Dickinson, using only ArcView, developed an application allowing Long Beach to wrap up the redistricting project in just over two weeks.
Ruth McGree, chief of staff to Councilman H. Delano Roosevelt, said the 1991 redistricting was done manually, with each minor change to a district requiring recalculation of ethnicity and population figures. "We would sit down with maps and move district lines and see what census blocks were taken in. Then we would add up those figures, locate the larger-density areas and redesign the districts so that people were spread out evenly among all nine. It seemed to take forever.
"The political nature of redistricting is tough enough," McGree said, "having to add figures up on a Paradox-type program and estimate the final demographics and population figures for every change made was really tough." Since even the smallest change to a district required so much time and effort to assess the impact on all the others, council members were limited in the number of changes they could make. Not everyone was happy with the results.
Dickinson's user-friendly application, on the other hand, let council members and their aides create hypothetical district changes almost as fast as they were needed, after only 30 minutes of instruction. Without this application, redistricting in 1996 might have taken nearly a year to complete.
Of course, the technology was only part of the solution; before it could be used, the city had to adjust for the undercount of the 1990 Census, sort through a decade of economic decline, and come up with a new model on which to base ethnicity and population estimates.
Decade of Decline
From the late 1980s onward, Long Beach has experienced a series of economic setbacks. The Navy closed its massive shipyard, including its base, housing area and hospital; the Los Angeles riots torched part of the city, and Douglas Aircraft cut its workforce by nearly 50 percent. The cost? A $4 billion hole in the economy and 58,600 jobs lost. More than 21,000 people left town for good. Trying to assess the resulting changes in population and ethnicity, and come up with anything resembling an accurate census for the redistricting program presented a formidable challenge.
Assembling the Data
Data acquisition and processing was the task of Advanced Planning Director Jack Humphrey. Drawing from the 1990 Census, and from a subsequent demographic model developed by the Urban Research Unit, Humphrey created a 1.8MB spreadsheet down to the census-block level, then used it as a basis to estimate the ethnicity and population of the various districts.
Humphrey acknowledged the estimates were bound to be shaky. "Certainly, there was some element of error, but we felt we kept it to a minimum. Also, since the redistricting process involved large areas of the city, we believed the differences would average out."
In addition to new addresses picked up during the Census Bureau's LUCA (Local Updating of Census Addresses) Program in 1995 (see "The Census of The Century," Government Technology, August 1998), the planning department factored in the number of buildings and apartments demolished or no longer occupied, and checked with schools to determine the current ethnicity in various areas. "We couldn't wait till the year 2000 to do this," Dickinson pointed out. "Besides, we expect to run the redistricting process again as soon as the Census 2000 data is available to us. We'll probably see lots of changes in ethnicity, not so much in the number of people."
Building the Application
While the data was being assembled, Dickinson created and tested the Graphical User Interface (GUI) for the redistricting application. The primary considerations were that the program be easily understandable to council members and their aides, that it display immediate demographic changes in response to district boundary moves, and that it conform to city redistricting guidelines.
Dickinson designed the GUI to immediately display the results of district boundary changes geographically as well as through bar charts, pie charts, tables and a histogram indicating the degree of deviation from the ideal population configuration.
Dickinson linked the population data, provided by the planning department in DBF format, to the polygon shape files of census blocks, the latter being color-coded by district. To ensure boundary changes remained within city guidelines for redistricting, she also programmed the application to prevent the splitting of census blocks, moving council members out of their respective district, or putting together or breaking up districts with a minority majority.
Seeing the Options
Council members were allowed to develop up to five separate redistricting plans each for discussion. Out of these, members would select several for presentation to the City Council. Approved plans would then be open for discussion at a public hearing.
Since the application immediately and graphically displays the effects of boundary changes on the population and ethnic composition of a particular district, members were able to quickly explore a range of redistricting plans. The steps are fairly straightforward and the results of boundary changes immediately available. For example, if a district boundary is moved to include two adjacent census blocks, the newly acquired blocks assume the color coding of that district.
The program then recalculates the resulting changes in population and ethnicity, and indicates how close the district is to the ideal configuration. Council members can also click on a census block outside their respective boundaries and see its demographic makeup before moving the block into their district. The ability to quickly generate "what if" options helped members determine those redistricting plans most likely to be approved by other members, the City Council and the public.
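Under the hood, that recalculation is essentially a grouped sum over census blocks. The sketch below illustrates the idea in modern Python with pandas; the column names, figures and the "ideal population" rule are invented for illustration and are not the city's actual schema or application code.

```python
# Illustrative recalculation of district totals after blocks are reassigned.
import pandas as pd

blocks = pd.DataFrame({
    "block_id":   [101, 102, 103, 104],
    "district":   [1, 1, 2, 2],            # current assignment
    "population": [850, 1200, 940, 1100],
    "hispanic":   [300, 500, 220, 410],
})

ideal = blocks["population"].sum() / blocks["district"].nunique()

blocks.loc[blocks["block_id"] == 103, "district"] = 1   # "move" a block

totals = blocks.groupby("district")[["population", "hispanic"]].sum()
totals["deviation_pct"] = (totals["population"] - ideal) / ideal * 100
print(totals)
```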
Throughout the process, however, boundary adjustments had to be small because changes to a district also affected the surrounding districts.
The GIS staff trained council members and their aides in half-hour sessions for each group. After that, two-hour time slots on the application were allocated to each group for developing their respective plans. According to Dickinson, plans for all nine districts were completed within a week.
"This wasn't just for their own districts; council members had to come up with five plans for the readjustment of all districts." She said the process was an eye-opener for many council members. "They had an intuitive idea of the ethnic breakdowns of their respective districts, but some of them were in for a surprise when they incorporated census blocks into their districts before checking them out."
Dickinson said that after development of the individual plans, council members had to convince others that theirs was the plan to go with. "There was a lot of discussion and compromising. If a member liked another's plan except for one or two points, they would do minor tweaking until both were satisfied. Those who didn't need to make any plans agreed with the ones developed by others. In the final phase, council members came up with three different plans and voted on them. Open hearings resulted in general public approval of the final redistricting plan."
"It was an elegant application," said Humphrey. "Simple, easy to use and totally interactive. We tried to make it as open and understandable as possible. As a result, we did the redistricting process in just over two weeks. The public was generally accepting of the plan, mainly because there were no dramatic changes in the borders. The 1996 redistricting plan produced zero litigation."
Bill McGarigle is a writer, specializing in communications and information technology. He is based in Santa Cruz, Calif.
The once-a-century earthquake that shook the East Coast on Aug. 23 marked a milestone for the U.S. Geological Survey's decade-old Did You Feel It? earthquake report crowdsourcing page.
Within hours of the quake that originated in Mineral, Va., the page had recorded more than 140,000 responses from as far north as Maine, as far south as Atlanta, and as far west as Indianapolis. That nearly doubles the site's previous record of about 72,000 responses for a single quake, said David Wald, a USGS seismologist who developed the crowdsourcing page.
At their height, reports on the Virginia quake were coming into the page at about 13 per second, he said.
USGS launched DYFI? in California in 1999 and nationwide in 2001. Since 2005 the site has collected reports on quakes worldwide, wherever people have Internet access and the site is not blocked by unfriendly governments.
DYFI? now gets enough responses to most quakes that researchers can estimate the intensity, reach and origin point of a quake in a matter of minutes through online responses alone, Wald said, long before seismological data can give more definitive answers. When USGS geologists compare "shake maps" created by seismological equipment with maps created by DYFI? reports, the resemblance is almost spot on, he said.
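As a rough illustration of how such crowdsourced estimates can be derived, the sketch below averages invented felt reports by ZIP code and takes an intensity-weighted centroid as a crude epicenter guess; the USGS's actual processing uses a calibrated community-intensity formula and far more careful statistics.

```python
# Toy aggregation of felt reports into per-ZIP intensities and a crude
# epicenter guess (intensity-weighted centroid). Illustrative only.
from collections import defaultdict

reports = [  # (zip_code, lat, lon, reported_intensity) -- invented records
    ("23117", 37.98, -77.89, 7.2),
    ("22030", 38.85, -77.30, 5.1),
    ("10001", 40.75, -73.99, 3.4),
]

by_zip = defaultdict(list)
for zc, lat, lon, mmi in reports:
    by_zip[zc].append(mmi)
zip_intensity = {zc: sum(v) / len(v) for zc, v in by_zip.items()}

w = sum(mmi for _, _, _, mmi in reports)
epicenter = (sum(lat * mmi for _, lat, _, mmi in reports) / w,
             sum(lon * mmi for _, _, lon, mmi in reports) / w)
print(zip_intensity, epicenter)
```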
Citizen reports on the intensity of quakes also can be extrapolated into crucial early damage assessments that take days or weeks to confirm through on-the-ground engineers' observations, he said.
In some cases, DYFI? reports can fill in information gaps when quakes hit regions that are not prone to seismological activity and where there is less high-caliber and high-cost detection equipment, Wald said.
Wald was speaking at a discussion arranged by the nonprofit Woodrow Wilson Center's Science and Technology Innovation Program.
The DYFI? site requests users' ZIP codes and addresses on the first page, then drills down into intensity questions on the second page, asking whether they were inside, outside, or in a moving vehicle when the quake hit, and whether there was damage or disarray in their building.
The site has been largely free of pranksters, Wald said, so geologists have to do very little sifting out of false reports. More often, people will post a second time to correct an error in their first report, such as accidentally listing the wrong ZIP code in the spot for the time they felt the quake, he said.
"People tend to be pretty sober after an earthquake," he said.
In some cases, the DYFI? page has helped advance the geological community's understanding of earthquakes, Wald said. Before the page's launch, for instance, seismologists generally believed people couldn't feel earthquakes with a magnitude lower than 2.0. When such quakes hit, though, the DYFI? page will now frequently get a dozen or so reports from very near the epicenter.
Scientists had long known that earthquakes of equal intensity will be felt over a greater distance on the East Coast than on the West Coast, Wald said, a result of differences in the regions' Mohorovicic discontinuity or "Moho," an area between the earth's crust and its upper mantle where seismic waves change their velocities. DYFI? reports didn't alter that belief, but confirmed it "like gangbusters," he said.
The 5.8 magnitude quake that hit Mineral, Va., for example, was felt more than 1,000 kilometers away. A similar quake on the West Coast would only have been felt about 300 kilometers away, Wald said.
When it comes to crowdsourcing quake information, DYFI? isn't the only game in town. Stanford University researchers have launched the Quake Catcher program in cooperation with USGS to embed small, cheap seismic sensors in hundreds of volunteers' laptop and desktop computers.
A USGS researcher also is mining Twitter data and has been able to pinpoint the epicenter of an earthquake within seconds based on the origin of a spike in Tweets using the word "earthquake," Wald said.
The volume of DYFI? responses has been about the same for East and West coast earthquakes if the quake's intensity and the number of people in the affected area are accounted for, Wald said. Responses from abroad are significantly lower, largely because of language barriers, he said. The response to the Virginia quake was likely elevated by the quake's high intensity over a huge landmass that included major cities such as Washington, Baltimore and New York, he said, rather than the novelty of East Coast quakes as some speculated. | <urn:uuid:523b4216-09e2-4f1a-b611-2c2d70c0bf52> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/2011/09/mineral-va-quake-yeah-we-felt-it/49851/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00149-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963621 | 953 | 2.515625 | 3 |
Social engineering is the use of psychological tools such as deceit, misdirection, manipulation and flattery to elicit unauthorised information or access to systems. Social engineering is an increasingly common way for criminals to attack organisations as it does not always rely on cyber vulnerabilities but rather takes advantage of the weakest element in an organisation, human beings. People are susceptible to social engineering because these attacks exploit social norms and human nature, including reciprocity, curiosity, and pride. As we become increasingly connected – at work, at home and intertwining the two – the opportunities and impacts of social engineering are increasing.
At the same time, we have seen an increase in cybercrime-as-a-service, with organised criminal enterprises offering ransomware, DDoS, and espionage to hire. I believe that we will see more social-engineering-as-a-service over the course of 2017 and beyond. Organisations will increasingly be attacked via the individuals that work for them, who will be compromised via social engineering attacks on themselves and their networks (not just colleagues and peers, but spouses, children, and friends).
We have already seen this, to some extent, with Marcel Lehel Lazar, who went by the pseudonym ‘Guccifer’. He was sentenced to four years in prison on September 1 2016 for unauthorised access to a protected computer and aggravated identity theft. His high-profile victims included Sidney Blumenthal, a confidant to Hilary Clinton. In accessing Blumenthal’s emails, Lazar found that Clinton had used a private email address to correspond with her former political adviser and published the address online. This led to the revelation that Clinton had used this address and a private email server during her time as U.S Secretary of State. Lazar used Open Source Intelligence (OSINT) to gain access to the internet accounts of his victims. He found public information about his targets online and used that information to guess their passwords and security questions, which he has said “it was easy … easy for me, for everybody”.
Looking forward, there will be more and more use of OSINT and social engineering by criminals. It is my expectation that these methods will also be increasingly used by organised criminals as a service for those who want to access or discredit others.
To hear more from Jessica on the cause and effect of social engineering, register for the next Avecto webinar, The psychology of security: Stop social engineering attacks.
With a unique take on security, consultant and psychology expert Dr Jessica Barker examines why employees are very often the weakest link in the security chain and what you can do to prevent being the next victim of attack. | <urn:uuid:8cf1c008-4cad-4c2b-9092-6cd46e960dcd> | CC-MAIN-2017-04 | https://blog.avecto.com/2016/11/2017-the-year-of-social-engineering-as-a-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00177-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962649 | 538 | 2.84375 | 3 |
“Universities as we know them will not exist 100 years from now,” DeMillo said. “There may be a couple recognizable names, maybe Harvard or Oxford. But higher education will be universally accessible, mediated by technology, probably offered through a variety of commercial platforms and very, very inexpensive.”
Knowledge will become a commodity and in fact, is already headed in that direction, adds Cameron Evans, CTO of U.S. education at Microsoft.
That’s why higher education will have to figure out how to make the college experience more about applying knowledge rather than capturing knowledge.
“If there’s anything that will be significantly different 25 years from today, it’s that people won’t go to school for knowledge,” Evans said. “They will go to school for an experience that they couldn’t otherwise have gotten online.”
Students’ school experience will focus on higher-order activities, with professors acting as facilitators of project-based learning or independent tutors of higher-level understanding, said Michael Staton, co-founder of Inigral, a private Facebook community for colleges and universities. High-quality content creation, delivery and assessment will move online.
“If you can learn the same content online at the same pace or even at a more rapid pace, what is the point of going to school?” Staton asked.
A New Divide?
One danger of the pure technology model, Taiz said, is that students who don’t have much money will attend technology-mediated schools. And students with more resources will go to prestigious university campuses such as Harvard, Yale and Stanford.
But others argue that the divide has little to do with technology. “We have big socioeconomic gaps in who goes to what kind of college,” said Kauffman’s Wildavsky. “So it’s not that this advent of technology is going to create something that didn’t exist already.”
Nor are all technology-mediated models necessarily bad. Older working students especially benefit from the opportunities of online classes. And some students may choose a technology-mediated education because the experience is good enough, Wildavsky said.
For example, former Stanford professor Sebastian Thrun taught an Introduction to Artificial Intelligence course on campus in 2011 with Peter Norvig, Google’s director of research. But they also opened up the course online at no charge to anyone in the world who wanted to participate. As a result, many of the students from the face-to-face class opted to participate online.
As more and more students apply, top universities are becoming more selective, adds DeMillo. They’re selecting students by the quality of their high school education, which means they’re selecting by ZIP code and economic status.
“We’re going through that now, and it has nothing to do with online education,” DeMillo said.
Online Courses, Supersized
Massively open online courses have been around in some form for at least four years. But their popularity exploded in 2012 after Stanford’s experiments — and these efforts will continue to reshape higher education.
Thrun left Stanford to co-found Udacity, which launched to offer high-quality, low- cost classes. More than 160,000 students from more than 190 countries signed up for Udacity’s first artificial intelligence course.
Two other Stanford professors, Andrew Ng and Daphne Koller, spun off a company called Coursera. And, in 2012, Harvard University and the Massachusetts Institute of Technology teamed up to start the not-for-profit edX. These organizations — along with Udemy and other academics — all offer massively open online courses that are available to anyone, with unlimited space and no charge.
“I think not only are they sustainable, as you look at the economics of the cloud,” Evans said,“[but also] they’ve become the norm.”
The question isn’t so much whether they can be sustained technologically or economically, he said, but whether people can stay engaged in the course. And that’s one of the challenges these course providers will have to face.
Currently the courses are not as engaging because students don’t build an affinity for the university or make friendships like they do on campus, Evans said. As 3-D technology and 4K resolution displays and video improve, they will help students make deeper emotional and social connections.
However, these courses are only for certain types of students; they won’t meet everyone’s needs, Taiz said. “I worry if we think that this is the way of the future.” | <urn:uuid:0e913d1e-f19b-4d49-80d0-2ec7bd709891> | CC-MAIN-2017-04 | http://www.govtech.com/What-Will-Higher-Education-Look-Like-in-25-Years.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00389-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951202 | 998 | 2.796875 | 3 |
Tapping open data sources
Every day we see how business intelligence and data visualization are helping people across industries gain new levels of insight and understanding of what's going on in their organization.
As well as being able to more easily analyse and drill down into the data within a particular system, modern BI tools really come into their own when combining various data sets. For instance, analysing sales information can highlight some useful trends, but combining that information with customer service data can deliver even greater insights – perhaps how sales of particular items correlate to customer service queries.
That's a fairly trivial example, but you get the idea of how being able to access and tie together a variety of data sets can answer ever more complex questions and provide even more useful answers. But even then, many BI users miss a trick by only looking inwards when seeking out useful data.
As the data revolution has swept the globe, new data sources have started to become readily available - from businesses, media organisations and governments, as well as research groups and analysts. As a result, just as expanding the reach of BI across departments can open up a host of new analysis possibilities, so can tapping the information that is readily available outside the organisation. Best of all, many of these data sources are freely available.
Some examples of open data sets include:
For any business that has more than one location, sells online and/or provides services, there is an element of geography involved. Mapping data can help bundle postcodes into particular regions and/or enable data to be visually broken down on a map, creating an easy way to see results by region.
Social media is a great way for businesses to engage with existing and potential customers, address queries, highlight products and events and discuss relevant topical news. It's also a veritable treasure trove of data that can be used to identify potential new trends and, combined with internal figures, to reinforce or dispel existing plans or hypotheses.
Many governments are starting to make data more readily available, including a wide range of socio-economic information – everything from house prices and income, to health levels and education, not to mention more niche elements like traffic, environmental status or weather patterns.
The most obvious use for data like this is to improve sales and advertising by comparing target groups with demographic breakdowns, but depending on the nature of the business, there are a myriad of other possibilities for using this type of information as part of the data analysis.
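To make this concrete, here is a minimal sketch - assuming Python with the pandas library, and purely hypothetical file and column names - of how internal sales figures might be joined to an open demographic data set by postcode:

import pandas as pd

# Internal figures: one row per sale, keyed by customer postcode (hypothetical export).
sales = pd.read_csv("sales.csv")
# Open data: government demographic statistics published per postcode (hypothetical download).
demographics = pd.read_csv("postcode_stats.csv")

# Combine the two sources on the shared postcode column.
combined = sales.merge(demographics, on="postcode", how="left")

# Compare average order value across income bands taken from the open data set.
summary = (combined
           .groupby("income_band")["order_value"]
           .agg(["count", "mean"])
           .sort_values("mean", ascending=False))
print(summary)

The same pattern applies to mapping or social media data: find a shared key (a postcode, region or date), join the external set to the internal one, then aggregate and visualize.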
These are just a few examples of the types of available data sources that exist outside of organisations. The one thing to look out for is to ensure that the data is trustworthy, but by turning to reputable sources it's possible to be reasonably confident.
A fully featured business intelligence solution can tap into all these and more to provide an even greater pool of resources to analyse. By looking beyond just the information within the business, new correlations, trends, conclusions and predictions can be uncovered and visualized, opening up a new wave of insight and understanding. | <urn:uuid:777016cb-99d6-4d30-9892-0e67c6cb3b8a> | CC-MAIN-2017-04 | http://www.informationbuilders.com/blog/fateh-naili/16675 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00141-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935754 | 609 | 2.53125 | 3 |
Electronic Signatures: Using Digital Signatures
Electronic signatures are about enabling trust in the exchange of information and transactions electronically. The electronic signature process involves authentication of the signer’s identity, a signature process according to system design and software instructions, binding of the signature to the document and non-alterability after the signature has been affixed to the document. The generation of electronic signatures requires the successful identification and authentication of the signer at the time of the signature. Verifying a signature on a document guarantees the integrity of the document and verifies the identity of the signer.
Implementation features of electronic signatures include core (basic) and optional capabilities, such as:
- Message integrity
- User authentication
- Ability to add attributes
- Continuity of signature capability
- Independent verifiability
- Multiple signatures
- Transportability of data
Core Implementation Features
If an entity uses electronic signatures, the signature method must ensure all of the following features:
- Message Integrity: The assurance of unaltered transmission and receipt of a message from the sender to the intended recipient. An example would be the use of a digital certificate to encrypt and sign messages.
- Non-Repudiation: Strong and substantial evidence of the identity of the signer of a message, and of message integrity, sufficient to prevent a party from successfully denying the origin, submission or delivery of the message and the integrity of its contents. An example would be a certificate to encrypt and sign e-mail messages.
- User Authentication: The provision of assurance of the claimed identity of an entity. An example would be the use of a certificate for authentication.
Optional Implementation Features
If an entity uses electronic signatures, the entity may also use, among others, any of the following features:
- Ability to Add Attributes: One possible capability of a digital signature technology; for example, the ability to add a time stamp as part of a digital signature.
- Continuity of Signature Capability: The concept that the public verification of a signature must not compromise the ability of the signer to apply additional secure signatures at a later date. For example, RSA Security’s eSign product provides such capability.
- Countersignatures: The capability to prove the order of application of signatures. This is analogous to the normal business practice of countersignatures, where a party signs a document already signed by another party.
- Independent Verifiability: The capability to verify the signature without the cooperation of the signer. A certificate authority (CA) may be used for this purpose.
- Interoperability: The applications used on either side of a communication, between trading partners and/or between internal components of an entity, are able to read and correctly interpret the information communicated from one to the other. For example, an organization may standardize on the X.509v3, PKCS and PKIX specifications.
- Multiple Signatures: With this feature, multiple parties are able to sign a document. Conceptually, multiple signatures are simply appended to the document.
- Transportability of Data: The ability of a signed document to be transported over an insecure network to another system, while maintaining the integrity of the document, including content, signatures, signature attributes and (if present) document attributes.
The standard for electronic signature is a digital signature. So what is a digital signature? A digital signature is an electronic signature based on cryptographic methods of originator authentication, computed by using a set of rules and parameters so that the identity of the signer and the integrity of the data can be verified.
The signing process begins by applying a cryptographic hash function to the document, which yields a unique bit string referred to as a message digest. The digest (only) is encrypted using the originator’s private key, and the resulting bit stream is appended to the electronic document. The recipient of the transmitted document decrypts the message digest with the originator’s public key, applies the same message hash function to the document and then compares the resulting digest with the transmitted version. If they are identical, then the recipient is assured that the message is unaltered and the identity of the signer is proven. Since only the signatory authority can hold the private key used to digitally sign the document, the critical feature of non-repudiation is enforced.
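As a hedged illustration of that sign-and-verify flow - using RSA keys and the third-party Python cryptography package, neither of which the article prescribes - a minimal sketch might look like this:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

document = b"Purchase order #1045: 200 units at $12.50"

# The signatory alone holds the private key; the public key is shared with recipients.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sign: the library hashes the document (SHA-256 here) and encrypts the digest
# with the private key, producing the signature that travels with the document.
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify: the recipient recomputes the digest and checks it against the signature
# using the originator's public key.
try:
    public_key.verify(
        signature,
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: document unaltered and signer identity proven")
except InvalidSignature:
    print("Signature invalid: document altered or signed with a different key")

Any change to the document, however small, produces a different digest and causes verification to fail, which is what binds the signature to the exact content that was signed.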
Pretty Good Privacy (PGP) enables each user to issue and manage his own digital certificates. In a PGP-based public key infrastructure (PKI), there is no CA. PGP cryptographic methods and keys compare well with those used in X.509-based PKI solutions. In a PGP solution, each user signs his own digital certificate. The issuer and subject fields are identical. Thus, all PGP certificates are initially self-signed.
PGP supports RSA, DSS and Diffie-Hellman for public-key encryption. For conventional encryption, PGP supports International Data Encryption Algorithm (IDEA) and Triple Data Encryption Standard (3DES). The hash-coding algorithm supported is Secure Hash Algorithm 1 (SHA-1).
PGP uses a distributed trust model. PGP is generally implemented in a self-contained software package that supports encryption and the capability to sign e-mail messages. It includes the software to create key pairs. PGP is available for free from www.mit.edu and other sites on the Internet. It is available commercially from PGP Corp. (www.pgp.com). The services supported by PGP are digital signature, message encryption and compression.
The Secure/Multipurpose Internet Mail Extensions (S/MIME) protocol uses public keys that comply with the X.509 standard. S/MIME is a specification for securing e-mail. It supports both encryption and signing. S/MIME supports the digest and hashing algorithms MD5 and SHA-1. It also supports the Digital Signature Algorithm (DSA) and RSA. The key encryption algorithms supported include Diffie-Hellman and RSA, while data encryption algorithms include RC2/40-bit key, RC2/128-bit key and 3DES. S/MIME is integrated in Microsoft’s Outlook and Outlook Express as well as Netscape’s Messenger software.
PKI is a way for an organization to provide support for digital signatures and digital certificates. PKI is fast emerging as a core component for an infrastructure. PKI delivers an infrastructure that enables trusted communication.
A PKI is about trust. It is about building trust on your enterprise network infrastructure. PKI is a trust framework that organizations must build into their network systems (Internet, intranet and extranet) and security policies. Why is PKI important? Because PKI can make Internet transactions as secure as face-to-face transactions. A PKI deals with the reality that the inside and the outside of the enterprise are becoming one.
PKI is the next layer of security technology. It is the next “infrastructure” challenge for organizations. PKI establishes trusted communication between all entities on the Internet. Not only does a PKI provide support for digital signatures but also other applications such as secure virtual private networks (VPNs), secure e-mail, Web applications, ERP applications, reduced sign-on and remote access.
Today, off-the-shelf software programs such as Web browsers provide support for digital signatures. Web browsers have the ability to | <urn:uuid:4633124f-e064-44b3-a6b6-2a6cfa857a69> | CC-MAIN-2017-04 | http://certmag.com/electronic-signatures-using-digital-signatures/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.879867 | 1,515 | 3.0625 | 3 |
According to a study by the security firm Qualys, desktop applications like iTunes, Firefox, PowerPoint and, surprisingly, antivirus programs account for more than 60% of critical vulnerabilities. And attackers are focusing on new network targets as well.
VoIP servers and phones, IM servers, and even printers and faxes are now considered weak points that may provide access to an otherwise hardened network. Added to that, user-introduced errors and network misconfigurations can undermine even the best security plans.
According to Moses Hernandez, a network engineer for Mercy, in the past the task of studying the network was done infrequently. "We used scanning to get a feel for the network and know what was on it," Hernandez said. "We simply needed to get an idea of what was out there."
Soft on the Inside
However, after scanning for inventory and topology, Mercy realized it had to keep scanning on an ongoing basis. Networks can change dramatically over time, and new vulnerabilities are discovered in operating systems and applications on an almost daily basis. "Vulnerability assessments have become so important that we scan every week or even every day," Hernandez said.
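The article does not say which tools Mercy uses, but as a rough illustration of what scanning every week can look like in practice, here is a small Python wrapper around the widely used nmap scanner; the target range, port selection and report name are hypothetical, and the script would typically be run from a scheduler such as cron:

import subprocess
import datetime

# Hypothetical target range -- adjust for your own network.
TARGET = "192.168.1.0/24"

def run_weekly_scan():
    stamp = datetime.date.today().isoformat()
    report = f"scan-{stamp}.xml"
    # -sV probes service versions; --top-ports limits the run to common ports;
    # -oX writes an XML report that can be compared against last week's results.
    subprocess.run(
        ["nmap", "-sV", "--top-ports", "100", "-oX", report, TARGET],
        check=True,
    )
    return report

if __name__ == "__main__":
    print("Report written to", run_weekly_scan())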
In other words, simply hardening your network against outside intruders is no longer an effective strategy. Increasingly, with guest access, partner applications and distributed networks, it's more and more difficult to define what inside-the-network even means.
According to Ross Brown, CEO of vulnerability management vendor eEye Digital Security, in the past the term vulnerability had a specific meaning, referring to flaws in systems or software. These could be fixed via patches. Today, the term vulnerability has a broader meaning, encompassing not just software flaws but also user-introduced vulnerabilities, network misconfigurations, and even interoperability problems. The new generation of vulnerability management tools even discovers instances where users are putting the organization at risk by not following corporate policies.
A recent survey by Computer Security Institute (CSI) and the FBI found that nearly 52% of participants were hit by security breaches, many from outside of the organization. However, 68% said that a significant portion of those breaches came from within the network. | <urn:uuid:21aae7a5-8fc6-4953-a096-6dc9b540fab4> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/3650791/Protecting-Against-An-Avalanche-of-Vulnerabilities.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00563-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963104 | 434 | 2.59375 | 3 |
The Advantages of using Network Taps for Network Surveillance, Network Bandwidth Monitoring
Over the past two decades, network data has become increasingly more valuable to both private and commercial interests. Networks are routinely sharing sensitive information such as payment processing details, while also engaging in online and offline activities which require fast, secure, and completely error free communication across computer networks.
The above being the case, network surveillance, and real-time network bandwidth monitoring are an essential part of maintaining a fast and secure computer network. The only problem is that with ever greater volumes of data being shared between ever larger and more complex network segments, real-time network surveillance poses an ever greater challenge to network administrators.
How Network Taps help Improve Network Surveillance
Real-time monitoring of network traffic has become more difficult since the mid-1990s, because the MAC and network switches tasked with facilitating fast, error-free communication between different network segments began hiding errors that could previously be monitored using various software applications.
Of course, switching data packets via layered MAC and network addresses has long since become the standard in both open and closed network communications. Present data transfer protocols, after all, facilitate faster data transfer rates and can significantly reduce network downtime.
However, network surveillance has become more difficult because of these changes. This is because, by nature, switched networks hide many application problems and errors from diagnosis. Either this or different diagnostic tools will make problems appear across an entire network, rather than help network administrators isolate problems as originating from a specific machine or network segment.
Thankfully, a network tap (otherwise known as a test access point) can help IT professionals and network administrators better monitor bandwidth usage and test individual network areas for communication and security errors. By installing a physical hardware tap on part of a network, third parties can monitor all traffic exchanged between any two computers, access points and/or network devices.
The 101: How a Network Tap Works
If network communication between two points or devices is facilitated by a fiber optic or copper cable, a network tap can be installed between sections of cabling or devices, in order to accomplish full network surveillance and network bandwidth monitoring of traffic being exchanged between these two points.
Consisting of multiple ports, the first is tasked with facilitating data transfer just as if a tap wasn’t in place at all. There is little or no data lag or loss; the tap itself will be invisible to the rest of the network, and network traffic will continue uninterrupted even if a tap itself fails due to a power or hardware problem.
In the meantime, the remaining ports will work to mirror all traffic passing between the two points where a tap has been placed. Because every byte of data being exchanged between the two network points is copied to a tap's mirroring port, network administrators are able to monitor bandwidth usage and engage in full, uninterrupted network surveillance.
Why Administrators need to Engage in Network Surveillance
There are an almost unlimited number of reasons why IT professionals and network administrators might need to monitor the traffic between two network points. A lag or error appearing on a network might, for example, be the result of an application or device operating on a specific server or computer terminal which an administrator will then need to isolate.
In like regard, network taps play a vital role in network bandwidth monitoring, as well as in helping detect malicious network intrusions. Even better, taps can help administrators quickly isolate specific pieces of equipment which have facilitated a network intrusion in the first place.
Network Taps & Network Bandwidth Monitoring
Given the cost of high-speed Internet access and the ever-greater need for people and businesses to communicate instantly across computer networks, the last thing that your business needs is to start suffering speed lags. Moreover, while it can be easy for individuals and businesses to blame Internet service providers and faulty routers when their Internet speed starts to suffer, the simple truth is that the majority of system lags come about due to:
- Human users of networks engaging in high bandwidth consumption activities
- Security breaches
Thankfully, network taps allow network administrators to immediately discover who is logged on to a network, what bandwidth heavy applications are running on different devices, and where exactly a bandwidth drain is coming from. Moreover, as well as helping administrators more easily monitor bandwidth usage across networks, network taps facilitate far better overall network visibility and troubleshooting.
In short, network taps provide administrators and IT professionals with strategic and continuous network monitoring which lets organizations know exactly what is happening on a network at any one moment. Once a tap is installed, administrators never have to worry about how to access, analyze or troubleshoot traffic and bandwidth usage problems in the future.
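As a rough illustration - not a substitute for a dedicated monitoring product - the sketch below uses Python with the scapy library to tally the top bandwidth consumers seen on a NIC attached to a tap's mirror port; the interface name is hypothetical and root privileges are required:

from collections import Counter
from scapy.all import sniff, IP  # requires scapy and root privileges

bytes_by_source = Counter()

def account(packet):
    # Tally bytes per source IP for every mirrored packet we see.
    if IP in packet:
        bytes_by_source[packet[IP].src] += len(packet)

# "eth1" is a hypothetical name for the NIC attached to the tap's mirror port.
sniff(iface="eth1", prn=account, store=False, timeout=60)

for src, total in bytes_by_source.most_common(10):
    print(f"{src:<15} {total / 1_000_000:.2f} MB in the last minute")

Because the tap copies every packet rather than sampling, a tally like this reflects actual usage on the link, which is what makes it possible to pin a bandwidth drain on a specific host or application.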
Network Tap Benefits over Other Network Surveillance Systems
Given how integral network surveillance and network bandwidth monitoring is to almost every modern organization, there are naturally a range of different surveillance and monitoring tools available to network administrators. An alternative to using dedicated hardware taps will often, therefore, involve using Span Ports to similarly mirror network traffic.
Span ports (otherwise known as switched port analyzers, or mirror ports) operate very differently to network taps. This is because network switches themselves will mirror network traffic without the need for a separate hardware device. Span Ports are often the preferred choice of network surveillance tool when you need to see backplane traffic on a large core Ethernet switch, or a specific VLAN, and for smaller businesses and organizations with smaller budgets, smaller overall networks, and less sensitive network data.
However, using span ports to mirror network traffic isn't nearly as effective, or as secure, as using network taps to do the same. Span ports are dynamic, and administrators may disable them without realizing their long-term role in network monitoring.
For example, the majority of network taps are completely passive. When in operation, they will not increase network traffic load or even be visible to said network.
Much more significantly, network surveillance and network bandwidth monitoring using span ports depends completely on individual port and switch configurations. Whereas a network tap will mirror all traffic on a network completely unimpeded, span ports may drop specific packet types or strip portions of headers from packets. At the same time, even a properly configured span port may not always transmit an accurate mirror of network traffic. This is because loaded network switches will often prioritize traffic forwarding over traffic mirroring.
Span ports are still an important source for network monitoring, and often a source for traffic, along with network taps. Remember to use SPANs when you need to have permanent, long term visibility to the core of a large network or VLAN, but where you don’t necessarily need to see every single packet.
Location Location Location
The comprehensive network surveillance and network bandwidth monitoring benefits of network taps are inarguable. In fact, many IT professionals are of the opinion that taps themselves should be a standard part of any new network deployment.
However, for the best results, it is important to remember that taps should be placed in accordance with the physical location of a network's most critical resources. At the same time, larger networks will likely want to combine tap and span output together for a comprehensive view of their network.
Are you about to deploy a new network? If so, don't leave your network security or performance to chance. Instead, make sure to incorporate taps into your next deployment and in doing so, better ensure the viability of your network and data integrity. | <urn:uuid:cc4f02c2-f234-4b21-8aaf-e088694afa70> | CC-MAIN-2017-04 | http://www.datacomsystems.com/news-events/news/2016/sep/15/the-advantages-of-using-network-taps-for-network-surveillance-network-bandwidth-monitoring | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00471-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931548 | 1,511 | 2.890625 | 3 |
An authentication server is a system that provides authentication services to other systems on a network. The classic example of this is a Kerberos server. Users and network servers alike authenticate to such a server and receive cryptographic tickets, which they exchange with one another to verify each other's identity.
DHS expands animal disease surveillance project
- By Kathleen Hickey
- Apr 28, 2014
Monitoring the safety of the nation’s food supply begins with early detection of potential disease outbreaks or changes in animal health status. Tracking, analyzing and sharing that information can help ensure the health of animals in the global agricultural market.
Over the next three years, possible animal disease outbreaks in at least 15 states and all major animal industries will be tracked using the Enhanced Passive Surveillance (EPS) system.
Developed by The National Center for Foreign Animal and Zoonotic Disease Defense (FAZD Center), a Department of Homeland Security Center of Excellence, EPS is designed to help those working with animals easily report potential disease outbreaks or changes in animal health.
The system enables users to enter animal health information with iPads, which is then integrated with data from veterinary diagnostic laboratories, wildlife biologists and livestock markets. The data is monitored and analyzed using the AgConnect system, the FAZD Center’s suite of customizable data integration and analysis products for real-time data awareness in the event of emerging, zoonotic and/or high consequence diseases. EPS data can also be analyzed using automated visual, geospatial, and temporal analysis tools within AgConnect.
“EPS leverages veterinarians in the field for reporting on animal health at the time they are observing or treating animals,” said Dr. Lindsey Holmstrom, DVM and FAZD Center research scientist in a January FAZD Center website posting. “This is a unique and critical data source for supporting animal health and disease surveillance that we previously did not have available in real-time. The system also provides information back to veterinarians from others reporting into the system, based on established data sharing protocols, which increases their awareness of the disease status in their geographic area.”
The goal of EPS is to provide surveillance information to emergency managers, state animal health officials and veterinarians during a disease outbreak, including identifying where the outbreaks are located and areas that are disease-free.
“EPS allows us to put mobile technologies in veterinarian’s hands and collect animal health data at local, regional or national levels. This allows the integration of surveillance data into a common display for early detection of emerging and high-consequence disease outbreaks,” said Tammy R. Beckham, FAZD Center director.
EPS was initially piloted in four states —Arizona, Colorado, New Mexico and Texas — with plans to expand the system to at least 15 states over the next three years. The expansion of the testing, Phase II, is funded through $2 million in federal funds from the DHS Science and Technology Directorate. The project has the potential for a nearly $9 million investment over the next three years. All major U.S. animal industries - horses, sheep, goats, beef and dairy cattle, swine and poultry – as well as wildlife (e.g., deer, feral swine, and wild birds) are tracked under EPS in Phase II.
In addition to expanding the types of animals tracked, Phase II increases the user base, adding producers, agriculture company veterinarians and production managers, as well as wildlife sources, such as wildlife biologists and organizations. Both producers and veterinarians can access the real-time data. FAZD also plans to expand the mobile platforms on which EPS can be accessed and add apps customized for specific industries.
“Ultimately, this project will demonstrate the power of data integration and aggregation,” said Dr. Beckham. With EPS, health monitors in the United States “will ultimately have a tool that will allow them to have real time situational awareness and ultimately defend our food supply from disease outbreaks through low-cost technology and real-time reporting.”
Kathleen Hickey is a freelance writer for GCN. | <urn:uuid:8d44ab40-11bd-4354-9afd-1775e06bc105> | CC-MAIN-2017-04 | https://gcn.com/articles/2014/04/28/enhanced-passive-surveillance.aspx?admgarea=TC_STATELOCAL | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00188-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934168 | 785 | 2.71875 | 3 |
Confusion surrounds the taxonomy of VoIP technology and IP telephony. Both refer to using the same IP network to carry voice, but the main difference is this: VoIP connects old-fashioned analog phones to gateway devices that convert the analog voice signal into digital bits and send them across the Internet, bypassing the expensive PSTN telephone networks. With IP telephony, the phones themselves are digital devices that encode the user's voice directly into a digital signal and send it across the IP network, using communication manager devices that enable the technology to work. IP telephony resides on the IP network and natively uses it for communication.
By Gladys Mak, Research Analyst, Environment
It is 7 a.m. and you are all set for a good hot shower. You turn on the mains but not a drop squirts out. Does this sound familiar? If no, consider yourself lucky but if a resounding yes is the answer then blame it on your water utility provider!!
Situations like these are common in most developing nations. About 70-75% of the Earth's surface is covered by water, yet only about 2.5% of that water is fresh. To add to this, the ever-growing global population is making water a much sought after resource in this time and age. For decades, water has always been seen as a public good and most water utilities around the world have been publicly owned. This trend however is changing with countries now embracing the idea of privatization. Will that help? Perhaps.
What is Water Privatization?
Well, water privatization often refers to an increased participation of private companies in the provision of water. Privatization can be Public-Private Partnership (PPP), partial or even total. In Public-Private Partnership (PPP), the idea is that water itself is not deemed as a commodity and is not privatized. Instead, only the services and the job of supplying water are, while the government plays a regulatory role. In other sorts of privatization, depending on the agreement, water can be owned by a private enterprise and be offered for sale.
Some hard facts:
- There are about 1.2 billion people who lack clean and affordable water globally
- The government (Public utilities) have been slow and ineffective in providing access to safe drinking water to all
- In Indonesia, only half of its urban residents have access to safe drinkable water while 80% do not even have it supplied directly to their residential homes
- High incidences of water losses due to leaky pipes and mismanagement of water in most developing countries | <urn:uuid:2952b458-24bd-4f7d-be2c-4ad8ea6bf950> | CC-MAIN-2017-04 | http://www.frost.com/sublib/display-market-insight-top.do?id=49231878 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00306-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957167 | 389 | 3.109375 | 3 |
How many Wi-Fi devices are currently humming along in your home? If it’s more than three, or you like to frequently stream HD movies, a new generation of Wi-Fi is coming soon, and it's worth your attention. The new standard, called 802.11ac, more than doubles the theoretical speed at which broadband signals travel between current Wi-Fi-enabled devices.
Note the word theoretical. The upper data-speed limit of 802.11ac devices is 1.3 Gbps, compared to the 450 Mbps limit of 802.11n, which is the latest Wi-Fi standard before ac. Few 802.11n Wi-Fi users actually get 450 Mbps, and it's unlikely 802.11ac users will see 1.3 Gbps, so why does the new standard make a difference?
When it comes to sheer speed, it really doesn’t. But in terms of capacity, it does, and the new speeds imply more capacity. Wi-Fi networks are like pipes with data flowing inside of them. It takes data a certain amount of time to get from one end of the pipe to another; that’s the speed. The width of the pipe determines how much data can simultaneously flow through a network at a certain speed.
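As a back-of-the-envelope illustration - the throughput figures below are assumptions for the sake of the arithmetic, not measurements - a few lines of Python show why the wider pipe matters once several devices share it:

# Illustrative (not measured) effective throughputs, in megabits per second.
effective_throughput = {"802.11n": 200, "802.11ac": 600}

devices = 6                  # concurrent devices sharing the access point
hd_stream_mbps = 8           # rough bitrate of one HD video stream

for standard, mbps in effective_throughput.items():
    per_device = mbps / devices
    streams = per_device // hd_stream_mbps
    print(f"{standard}: ~{per_device:.0f} Mbps per device, "
          f"enough for about {int(streams)} HD stream(s) each")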
The 802.11ac standard will allow more devices to connect to a network without degrading performance, and bandwidth intensive applications like HD movies will stream with fewer delays and better quality when beamed around a network, according to the Wi-Fi Alliance, the industry group that comes up with new standards and certifies compliant devices.
What does 802.11ac mean for you? To start, unless you're a Wi-Fi speed freak, there's no need to run out and buy a new 802.11ac router or device. If your home network is slow, your broadband provider is probably to blame, not your PC or networking equipment.
It’s also important to understand that your network is only as fast as the devices connected to it. For example, let’s say you own a laptop with an 802.11n adapter and you buy an 802.11ac router. Your laptop will still only get 802.11n speeds. The good news though is that your existing Wi-Fi-compatible devices will work with a new, 802.11ac router, even if they don't get the fastest-possible speeds.
Most laptops and tablets released in the coming months and years, even some smartphones, will be 802.11ac compatible. (The new MacBook Air is already 802.11ac compliant.) So if your current router suddenly dies, you should buy a new one that supports the 802.11ac standard, because your next devices will very likely be 802.11ac compliant. At first, 802.11ac devices will probably cost a bit more than devices that don't support the new standard, but it shouldn't take long for 802.11ac routers to cost the same amount as 802.11n routers do today.
Security Spotlight: Security Tokens
Spend any time flipping through a good infosec textbook or certification study guide nowadays, and you’ll come to two inescapable conclusions:
- Multi-factor authentication—a system that establishes user identity by combining multiple authentication methods such as account/password with a fingerprint scan, voiceprint, Smart Card, or something similar—is much stronger and harder to break than single-factor authentication
- Security tokens make an excellent element in such a multi-factor authentication scheme.
Of course it helps to understand that a security token is a small device that individuals carry with them (often as a key fob on a keychain, or something else they always keep with them). They insert the security token into a reader while active on a computer workstation or system. They’re used in combination with a user personal identification number (a PIN is thus the second factor in the two-factor authentication scheme normally used with security tokens), so that even if somebody steals or finds a security token, they still can’t log in without also providing the correct PIN.
Security tokens generate identification codes that are synchronized with security monitors on a network. These codes are both complex and hard to guess, and they change regularly (often, at 5-minute intervals) and automatically, which makes them nearly impossible to compromise. Security tokens and readers usually add no more than $250 to the cost of computer systems (but there are also other start-up costs associated with their use), so they’re more affordable than you might guess.
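As an illustration of the idea - not of any vendor's proprietary scheme - here is a minimal Python sketch of a time-based one-time code in the style of the open TOTP standard (RFC 6238); the shared secret, the 30-second interval and the 6-digit length are assumptions:

import hmac, hashlib, struct, time

def token_code(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # The token and the network-side monitor derive the same counter from the
    # current time, which is what keeps their codes synchronized.
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

# The server checks the submitted code (plus the user's PIN) against its own copy.
print(token_code(b"shared-secret-provisioned-at-enrollment"))

Because each code is derived from a secret that never crosses the network and expires after one interval, a stolen or shoulder-surfed code is useless moments later - which is what makes tokens such a strong second factor.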
For more information on security tokens, please visit Whatis.com. | <urn:uuid:13da3e4a-12cc-424a-b7f6-a9016b267190> | CC-MAIN-2017-04 | http://certmag.com/security-spotlight-security-tokens/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00114-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913967 | 333 | 3.125 | 3 |
A UK firm has developed a credit card sized, bare bones computer that could potentially revolutionise IT training in schools. The first model went on sale today, and promptly sold out.
The Raspberry Pi computer has been developed by a group of volunteers from the UK technology industry and the IT educational sector, and is aimed at schools and the disadvantaged to get computer programming back on the agenda in UK schools.
The Raspberry Pi is a credit-card sized computer board that plugs into a TV, a keyboard and to broadband. It runs a miniature ARM processor and performs much as a basic desktop PC does, perfect for operating spreadsheets, word-processing and some low-end games. Amazingly, it also plays full 1080P high-definition video.
The hard drive is provided by SD cards, which users can then install open source software and operating systems on, such as Linux.
The main focus of the device will be to provide a low risk computing environment for children in schools to fiddle with the machine and practise programming. Given the low price of the device, it is eminently replaceable without users worrying about breaking a mainstream computer that may cost more than £500.
It also means computing is once again affordable for the masses. £22 means that students could potentially purchase the device for themselves and do as they please, much like the early days of computing where programmers earned their stripes pulling Amstrads and ZX Spectrums to bits.
The Conservative government has made the tech industry a centrepiece of its business and educational reform, supporting the Silicon Roundabout and scrapping the IT curriculum in favour of a more ‘hands-on’ approach.
The Royal Society released disturbing research in January that showed that there has been a 60% decline since 2003 in students achieving A-level Computing. Just 4,002 students achieved the grade last year.
The Model Bs have seen such a large demand from the community that it crashed the Raspberry Pi website, forcing them to restore it in a static layout. The organisation's technology partners, Premier Farnell and RS Components, have had similar pressures placed on their websites and the units are already sold out.
Both websites now have a waiting list until the production can be ramped up to meet demand. Batch orders will be available in a month or so, as well as bundle deals which will include mice, keyboards and other accessories.
This first launch is aimed at software and hardware enthusiasts, makers, teachers and others who want to build exciting things with the Raspberry Pi before the official educational launch, which will happen later in 2012.
The device, while not self powered, draws comparisons with the 'one laptop per child' program, the brainchild of MIT's Nicholas Negroponte. While focused on driving computer technology education in the third world, the goals and aims are the same – to reduce the digital divide.
Full Spec sheet – Raspberry Pi SBC (Model B)
– Broadcom BCM2835 700MHz ARM1176JZFS processor with FPU and Videocore 4 GPU
– GPU provides Open GL ES 2.0, hardware-accelerated OpenVG, and 1080p30 H.264 high-profile decode
– GPU is capable of 1Gpixel/s, 1.5Gtexel/s or 24GFLOPs with texture filtering and DMA infrastructure
– 256MB RAM
– Boots from SD card, running the Fedora version of Linux
– 10/100 BaseT Ethernet socket
– HDMI socket
– USB 2.0 socket
– RCA video socket
– SD card socket
– Powered from microUSB socket
– 3.5mm audio out jack
– Header footprint for camera connection
– Size: 85.6 x 53.98 x 17mm | <urn:uuid:8bcc1553-df64-4d1c-9e9e-cfb174f60e1b> | CC-MAIN-2017-04 | http://www.cbronline.com/news/sold-out-22-raspberry-pi-computer-to-revolutionise-it-education-29-02-12 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94368 | 775 | 2.90625 | 3 |
Security and the Internet of Things
How to Secure Your Internet of Things System
By 2020 there will be 25 billion embedded and intelligent systems interacting with one another over the Internet of Things (IoT) and the security industry will be a large part of this. The biggest issue is not the security of the IoT cloud, but the lack of security on the devices themselves.
Read our white paper and discover where the gaps are in the IoT, the steps to secure your IoT system, and how to plan for a more secure IoT future.
- Discover the Main Security Flaws with the Internet of Things
- Find out the 4 Steps to Securing an IoT System
- Learn How to Plan for a More Secure Future | <urn:uuid:80c6062e-0b8e-417f-bfa6-bc617d620bda> | CC-MAIN-2017-04 | http://www.aimetis.com/Solutions/WhitePaper.aspx?Item=00005 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00564-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.870381 | 146 | 2.65625 | 3 |
In an era where people rely on mobile devices for information, a warning siren may seem like outdated technology. But in Hawaii, a new retrofit program will ensure that the booming outdoor alarms will continue to serve as the state’s primary emergency alert system for years to come.
Hawaii is in the midst of a $25.6 million overhaul of its statewide warning siren network. Once the project is complete, 490 sirens will be spread throughout the state, including 205 on Oahu, the most populous of the Hawaiian islands. The sirens will operate on a state-of-the-art satellite-cellular communications system.
The investment could be seen as a surprise by some, particularly since sirens have increasingly been disregarded by people on the U.S. mainland in favor of other types of alerts.
For example, when a tornado laid waste to Joplin, Mo., in May 2011, resident Jose de Leon told The Joplin Globe that he heard the tornado siren but chose to ignore it, as did many others in the area.
In an interview with Emergency Management in 2011, Jon Martin, professor and chair of the Department of Atmospheric and Oceanic Sciences at the University of Wisconsin-Madison, believed approximately 25 percent of the 535 deaths caused by spring tornadoes in 2011 could have been because people failed to heed warning sirens.
Despite the flippant attitude toward sirens by some people, Tom Simon, systems engineer of Hawaii State Civil Defense, said sirens are absolutely needed in the state. He explained that tourists and residents aren’t always carrying a smartphone to receive geo-located emergency notifications and even if they are, signal strength may be suspect in a mountainous or elevated region.
“Because of the amount of time people here in Hawaii spend outdoors and … the potential for tsunamis, we are still putting a lot of emphasis on our siren system,” Simon said. “If you’re on the beach and you don’t happen to have your cellphone, you still need to know that it’s time to get away from the beach.”
Hawaii won’t solely depend on the sirens for emergency notifications, however. The state’s alert system distributes messages and emergency notifications over radio, TV and cable. Officials also can send text-like messages to cellphones through broadcast technology using the Wireless Emergency Alert service, formerly called the Commercial Mobile Alert System, deployed in April 2012 by the wireless industry, the FCC and FEMA.
The siren network modernization project consists of two parts. The first step is replacing the old radio-based technology at each siren site with the satellite-cellular control system. The second consists of replacing the sirens themselves. The sirens and control system are provided by Federal Signal.
Work began last year on Oahu, which is home to approximately 40 percent of all the sirens in Hawaii. Simon said they installed and tested the new system successfully on eight siren sites in 2012 and then proceeded with retrofit work on 143 sirens on Oahu. Six new siren sites will also be added in the coming months.
Maui County will be next, where 88 sirens need retrofitting or replacing, Simon said. The entire statewide project should be complete in 2014.
Hawaii’s old siren control system ran through VHF wideband radio. All four counties used different types of control systems, which was a drain on state resources. Technicians had to learn how each one worked and how they needed to be maintained.
George Burnett, telecommunications branch chief of Hawaii State Civil Defense, said one of the main reasons the state opted to upgrade satellite-cellular technology was because the separate county control systems were incompatible.
He added that most of the mechanical sirens in operation around Hawaii are 25 to 30 years old and well past their usable life cycle, which was a critical factor in moving forward on the upgrade project.
“The sirens were no longer maintainable,” Burnett said. “We were having to go to extraordinary lengths to get parts, rebuild motors and things like that, which were very difficult to accomplish.”
According to Simon, the radio system’s transmitters also needed constant alignment adjustments to give a clean signal. Technicians were spending an inordinate amount of time on the task. That will no longer be a problem with the new control technology.
The new satellite-cellular system allows the state to standardize siren control and provides redundancy. If the satellite signal has interference, it immediately jumps to the cellular signal as a backup, ensuring that downtime is virtually nonexistent. SkyWave is providing satellite communications, while Verizon will handle the cellular signal.
In addition to increased redundancy, workers can now access informative data on the status of each siren’s condition. In the past, the only time the state would know a siren was malfunctioning was if a resident noticed that a siren didn’t go off during a monthly test and called it in, or during twice yearly preventive maintenance visits.
With two-way communication, technicians can more efficiently track and address maintenance issues as they arise. The sirens are solar powered and each use four deep-cycle batteries. Technicians can now be miles away and check items such as battery voltage, whether the charger is working and even receive notifications from the siren if someone tries to break into it.
“I would say our sirens are in much better condition now based on the information we’ve been able to get through the system and the [technicians] having time to go out and fix them,” Simon said.
The satellite-cellular connection also lets the state test the system without disturbing residents. Simon said “quiet tests” can be conducted during which the siren is given instructions to activate at a frequency that’s too high for anyone to hear, but give the state a reading on the amplifiers’ output. Once complete, the results can be reviewed to determine if the sirens are working properly.
“In the few months we’ve had this working on Oahu, we’ve found this additional information has really helped the technicians go out and get more of the sirens fixed more quickly,” Simon said. “They’re working almost immediately after we find a problem, instead of not knowing.”
While the installation process went fairly well on Oahu, there were some expected bumps along the way. Since the control system was completely new, there was a bit of a learning curve as technicians became familiar with operating it. Federal Signal also had to tweak the system to address minor connectivity issues.
Since the sirens use satellite signals as their primary source of communication, the area around the sirens must be clear of vegetation. That can be a problem in Hawaii. Simon explained that because there’s a lot of rain throughout the islands, certain locations can experience rapid overgrowth.
Although that typically doesn’t affect cellular communications, it can interfere with satellite coverage. So existing siren sites had to be evaluated, and when looking into new locations, officials had to factor vegetation into consideration.
In addition, using satellite communication causes a delay from the time the siren is activated by state personnel until it actually goes off. The state synchronizes its monthly test of the siren system with a radio broadcast. But once the siren is activated, it takes 30 to 45 seconds before it sounds, which could be confusing to the average resident.
The delay shouldn’t matter during an actual emergency, however. Since disasters are usually unexpected, people wouldn’t know when the button was pushed, so an extra 30 to 45 seconds before the siren starts would likely have a negligible impact on safety.
The U.S. military has also taken note of Hawaii’s siren upgrade. The Army, Navy, Air Force and Marines all have bases in Hawaii and depending on the branch, either work in conjunction with the state to issue emergency warnings or use their own system.
Simon said Joint Base Pearl Harbor-Hickam has its own radio control system, but one of the siren sites activates automatically off the state’s signal. The Army, he said, is in the process of doing something similar.
Photo courtesy of Adam DuBrowa/FEMA. This story was originally published by Emergency Management magazine. | <urn:uuid:a49b9924-5509-4302-94b6-527dcd51636a> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Sirens-Remain-Vital-to-Hawaiis-Emergency-Alert-System.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958127 | 1,744 | 2.671875 | 3 |
Question 2) Microsoft Windows XP Operating System
Objective: Configuring and Troubleshooting Hardware Devices and Drivers
SubObjective: Configure and troubleshoot storage devices
Single Answer Multiple Choice
You are a desktop support technician for your company, and you are troubleshooting a hard disk problem on a Windows XP Professional computer. The computer has two hard disks, disk C and disk D, and Windows Support Tools has been installed on the computer. The Windows XP Professional operating system is installed on hard disk C.
You start the computer, and Windows XP Professional loads correctly; however, you notice that hard disk D is not recognized. You suspect that there might be a problem with either the MBR or the partition table on hard disk D. You want to replace the MBR and the partition table on hard disk D.
Which of the following should you implement to accomplish this task?
A. In the Recovery Console, use DiskProbe on hard disk D.
B. In the Recovery Console, issue the fixmbr command on hard disk D.
C. In a Command Prompt window, issue the fixmbr command on hard disk D.
D. In a Run dialog box, issue the dskprobe.exe command, and use DiskProbe on hard disk D.

Answer:

D. In a Run dialog box, issue the dskprobe.exe command, and use DiskProbe on hard disk D.
After starting the computer in the Windows XP Professional operating system, you should use DiskProbe to correct the problems on hard disk D. DiskProbe will correct problems with both the partition table and the master boot record (MBR) on hard disks that are not used to start Windows XP Professional. You must use DiskProbe to make a backup of MBR and partition table information before you can use DiskProbe to restore this information.
DiskProbe does not work from the Recovery Console. DiskProbe is a part of the Windows Support Tools package, and this package should be installed on a Windows XP Professional computer to enable you to use DiskProbe. Windows Support Tools are not installed by default with Windows XP. You must install them separately from the Windows XP installation CD by browsing to the Support folder, opening the Tools subfolder, double-clicking Setup.exe, and then following the installation wizard. Not all tools are installed by default; you must select the Complete check box to have DiskProbe and certain other tools installed. Windows Support Tools are also available from the Microsoft Download Center Web site. DiskProbe is compatible with Windows XP Service Pack 2 (SP2).
In the Recovery Console, you can issue the fixmbr command to restore the MBR on a hard disk. If a hard disk is not recognized by a Windows XP Professional computer, then restoring the MBR will often correct this problem and cause a hard disk to be subsequently recognized. Issuing the fixmbr command on hard disk D will not, however, restore the partition table on hard disk D. The fixmbr command is not available in a Command Prompt window on Windows XP Professional computer.
1. Microsoft Windows XP Professional Resource Kit Documentation – Part VI System Troubleshooting – Ch 28 Troubleshooting Startup – Following a Process for Startup and Recovery – Using Recovery Console
2. Microsoft Windows XP Professional Resource Kit Documentation – Part VI System Troubleshooting – Ch 27 Troubleshooting Disks and File Systems – Repairing Damaged MBRs and Boot Sectors in x86-based Computers – Restoring the MBR
These questions are derived from the Self Test Software Practice Test for Microsoft exam #70-271 – Supporting Users and Troubleshooting a Microsoft Windows XP Operating System | <urn:uuid:251038c1-ea36-4a60-be99-11c2b2d505c3> | CC-MAIN-2017-04 | http://certmag.com/question-2-test-yourself-on-supporting-users-and-troubleshooting-a-microsoft-windows-xp-operating-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00281-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.80712 | 757 | 2.5625 | 3 |
Investment in fundamental science and discovery is integral to U.S. economic growth and development as well as to national security. The innovation engendered by such investment has enabled the U.S. to maintain an intelligence and national security advantage over our adversaries and has played a pivotal role in advancing our nation’s capabilities.
This relationship between scientific investment and national security has long been recognized. In 1945, American engineer, inventor, and science administrator Vannevar Bush espoused the vital relationship between research and national security in a report to the president entitled “Science: The Endless Frontier.” Half a century later, the Hart-Rudman Commission reinforced the essential role of government in funding basic research. The return on scientific investment is clear when one considers that whole industries have been created from such research, which has resulted in the mass production of steel, aviation, nuclear power, GPS and the Internet.
Governments in many parts of the developing world seem to recognize this. They have taken steps to develop their own science and technology, or S&T, infrastructures; stimulate industrial research and development, or R&D; expand their higher education systems; and build indigenous R&D capabilities. In the last decade, global S&T capabilities have grown -- nowhere more so than in Asia. But over this same period, U.S. investment in basic and applied research has declined.
While the United States continues to maintain a position of leadership in terms of broad research and development activities, our position is eroding as other nations -- particularly China, now the second largest investor in R&D -- take steps to develop their research infrastructure and invest in fundamental research.
Decreased emphasis in the United States on fundamental research, particularly in fields likely to enable or enhance national security capabilities, will have long-term negative effects on our nation:
- Our ability to develop revolutionary capabilities that support intelligence and national security objectives will degrade.
- We will be more susceptible to technological surprise -- particularly from those nations that are now investing heavily in the sciences.
- Intelligence and national security agencies and the companies and institutions that support them will find it increasingly difficult to maintain a sufficiently skilled workforce.
As part of a national strategy, government must place greater emphasis on investment in fundamental science and discovery -- basic research. These research areas should be carefully coordinated to maximize the likelihood they will ultimately yield advances in capabilities that will adequately prepare the U.S. to face future adversaries -- both states and non-state actors.
There are many positive outcomes of a robust basic research portfolio with emphasis on intelligence and national security objectives. To increase the emphasis on science, innovation and discovery, the government should:
- Continue funding basic research with the objective of developing science that will revolutionize intelligence and national security capabilities.
- Increase coordination within the nation’s national security research enterprise to maximize the value of research efforts.
- Develop incentives to encourage industry investment in long-term basic research areas relevant to national security.
- Increase outreach and engagement with universities.
- Increase educational outreach to attract and retain students in science, technology, engineering and mathematics with a focus on intelligence and national security objectives.
Despite fiscal challenges in the years ahead, continued advocacy across government for national security focused R&D is essential, particularly with regard to basic research, in order to maintain and enhance our national security and ensure our technological leadership.
The Intelligence and National Security Alliance, through its Council on Technology and Innovation, will continue to focus on the challenge of ensuring our nation’s security through discovery and innovation. Additional information can be found in the recently published INSA paper entitled “Emerging Science and Technologies: Securing the Nation through Discovery and Innovation.” Copies of the paper may be downloaded at www.insaonline.org.
Joseph R. DeTrani is president of the Intelligence and National Security Alliance. He previously was the Director of the National Counterproliferation Center, the North Korea Mission Manager for the Office of the Director of National Intelligence, and the Special Envoy for the Six Party Talks with North Korea.
Allan Sonsteby is an INSA Board Member and INSA Technology and Innovation Council white paper lead. He is currently the Associate Director of the Applied Research Laboratory at Pennsylvania State University. | <urn:uuid:715620d8-b08b-4f28-a126-935a6d13656f> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2013/09/commentary-declines-basic-research-threaten-us-leadership/70030/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00281-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929747 | 877 | 2.515625 | 3 |
The University of Delaware and an energy company are partnering to commercialize technology that would allow electric vehicles to sell power back to the grid.
The university and NRG Energy are working together on the venture and its technology, which will be called eV2g. University of Delaware professor Willett Kempton led development of the patented technology, a "vehicle-to-grid" system that the university said will allow parked and plugged-in electric vehicles to sell extra power stored in their batteries back to the grid when demand is high.
"This technology can be paid well, to provide the short bursts of back-and-forth power that we use to correct imbalances in the electric power grid," Kempton said in a news release from the university. "In the future, this technology will be important for smoothing out the fluctuations in renewable energy production."
As communities consume more renewable energy, one problem is that sources such as wind power aren't constant. Relying on electric vehicles to even out the energy supply could be of help.
EV2g will collect payment from the grid operator and will pay electric vehicle owners for making their vehicles available when charging.
These "grid-integrated vehicles" could generate money for owners of electric vehicles and fleet managers, as well as lead to lower energy prices for consumers.
An NRG Energy official said eV2G is the first technology to offer a true two-way interface between electric vehicles and the electricity grid. | <urn:uuid:51f690d8-a6f1-4100-8aba-bdeccdf47cb0> | CC-MAIN-2017-04 | http://www.govtech.com/transportation/Technology-to-Allow-Electric-Vehicles-to-Sell-Excess-Power.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00189-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959424 | 304 | 2.890625 | 3 |
In today’s competitive market, cloud computing offers an outstanding opportunity not just to innovate, but to do so more quickly and more cost-effectively than ever. It is an exceptionally efficient platform for IT-service delivery. Because you can create new virtual servers in a cloud with unmatched speed and consistency, as well as allocate IT resources like processing power and storage automatically on the basis of policies, clouds can bring new services into production much faster than traditional architectures.
Additionally, costs fall, particularly capital expenditures, because of cloud computing’s utility-style billing, which charges for real use rather than for hardware bought up front. This approach makes it much less risky for organizations to experiment with new services. Unsuccessful services that see little utilization will generate low bills. On the other hand, if service usage is high, the costs will be justified by the fact that the service is successful, end users are receiving the intended value and, in the case of external services, significant new revenues are generated.
And because cloud strengths are well suited to many kinds of services, clouds can also create many different forms of new value. The five examples that follow illustrate how cloud computing has helped organizations accomplish more—more quickly and less expensively—throughout 2015.
Data center backup in a cloud
What if organizations could cost-effectively back up an entire data center—or enough of it to restore critical business functions in the event of an unexpected disaster?
This concept, all but impossible only a few years ago, is now standard operating procedure with many organizations using cloud computing at the close of 2015. In a cloud, virtual servers are created and provisioned on demand by business policies. This means that new virtual servers can be created to replicate any or all production servers already in routine use by an organization for its essential internal and external services. In a cloud, it can happen in very little time.
Today, organizations can use cloud platforms to duplicate some or all data center capabilities in at least two different ways:
(1) They can create policies in a third-party cloud that will in turn generate new virtual servers when needed, then populate them with the appropriate software stack and data required to operate or restore services.
(2) For an even faster response to a disaster (or outage of any kind), they can create the virtual servers themselves in advance, and leave them up and running at all times—a “hot” backup site, complete with all the necessary software pre-provisioned, meaning only the latest data need be moved before services are fully restored.
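To make the contrast between the two scenarios concrete, here is a rough sketch of the difference in code. The CloudProvider interface, its method names and the identifiers are invented purely for illustration; they do not correspond to any particular vendor's SDK.

import java.util.List;

// Hypothetical provider interface, not a real vendor SDK.
interface CloudProvider {
    String createServer(String imageId);                 // returns the new server's id
    void restoreData(String serverId, String backupId);  // loads a backup onto a server
}

public class FailoverSketch {

    // Scenario 1: replacement servers are created only when disaster strikes,
    // then populated with the appropriate software stack and data.
    static void onDemandFailover(CloudProvider cloud, List<String> imageIds, String backupId) {
        for (String imageId : imageIds) {
            String serverId = cloud.createServer(imageId);
            cloud.restoreData(serverId, backupId);
        }
    }

    // Scenario 2: a "hot" standby site already exists, so only the latest
    // data needs to move before services are restored.
    static void hotStandbyFailover(CloudProvider cloud, List<String> standbyServerIds, String backupId) {
        for (String serverId : standbyServerIds) {
            cloud.restoreData(serverId, backupId);
        }
    }
}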
The business implications of both of these scenarios are enormously significant. Business continuity, for instance, improves considerably because the organization no longer relies on a single service-delivery infrastructure—it’s no longer “putting all its IT eggs in one basket.” Should services fail and then involve a long assessment and remediation process, the organization can temporarily fail over to the alternate infrastructure in a cloud. This capability can dramatically reduce the negative business impact that might otherwise have occurred until resolution of the problems in the primary data center.
And for both of these scenarios, the ongoing costs of the cloud-hosted backup site are surprisingly low. That’s because the vast majority of the time, the organization’s utilization of the backup site is minimal or nonexistent, and cloud billing is based on that utilization.
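A rough, purely illustrative calculation shows why a rarely used standby site stays cheap under utilization-based billing. Every figure below is an assumption invented for the example, not a quoted price from any provider.

public class BackupCostSketch {
    public static void main(String[] args) {
        // Assumed figures, for illustration only.
        int    servers           = 20;
        double idleRatePerHour   = 0.05;    // per server-hour while the standby site idles
        double activeRatePerHour = 1.00;    // per server-hour during an actual failover
        double hoursPerYear      = 8760;
        double failoverHoursUsed = 48;      // say, two days of real use in a year

        double cloudStandbyCost = servers * (idleRatePerHour * (hoursPerYear - failoverHoursUsed)
                                             + activeRatePerHour * failoverHoursUsed);

        // A traditional redundant site pays for hardware, power and floor space
        // all year, whether or not it is ever used.
        double ownedStandbyCost = 250_000;  // assumed annual fixed cost

        System.out.printf("Cloud standby site: about $%,.0f per year%n", cloudStandbyCost);
        System.out.printf("Owned standby site: about $%,.0f per year%n", ownedStandbyCost);
    }
}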
As disaster recovery becomes a growing priority for businesses, cloud backup has given them the tools to quickly, easily and affordably prepare for whatever may come. Especially for small and midsize organizations with small IT budgets, cloud computing surpasses every other option by eliminating the costs and management complexity associated with traditional backup solutions, which require a redundant hardware and software infrastructure. Cloud computing enables organizations to simplify the process and significantly reduce capital expenditures.
To read the other 4 ways please click here | <urn:uuid:6ee233ad-a110-42c9-a068-cdfa250d09ac> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/five-ways-cloud-computing-has-created-positive-change-in-2015/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00125-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931074 | 781 | 2.53125 | 3 |
Over the next five years, technology is going to shake up K-12 education. The NMC/CoSN Horizon Report: 2016 K-12 Edition, published by the New Media Consortium and the Consortium for School Networking, shares the technology that politicians, education experts, school administrators, and teachers should be integrating into classrooms.
Time-to-Adoption Horizon: One Year or Less
- Makerspaces–As STEM skills become increasingly more important in the 21st-century job market, schools are focusing on ways to develop that skill set. To address the needs of the future, the report explains that a growing number of classrooms, libraries, and community centers are being transformed into makerspaces, physical environments that offer tools and opportunities for hands-on learning and creation. In addition to helping children develop skills they’ll need to succeed in the job market, makerspaces also allow educators to engage students in active learning, as well as promote creative, higher-order problem-solving through design, construction, and iteration.
- Online Learning–While learning used to be limited to the physical classroom, both formal and informal learning is moving online. However, many schools don’t have the technology or infrastructure they need to make the jump to online learning. Schools need to strengthen their infrastructure and technology offerings because, as the report explains, educators are becoming more comfortable testing various levels of integration in their existing classes and programs, and many believe that online learning can be an effective catalyst for thoughtful discussion on all pedagogical practice. While it is unlikely that education will ever move completely online, when combined with immersive technology such as virtual reality, online learning can enable simulations that strengthen a student’s understanding of a topic and allow students to play out how they would react in real-life situations.
Time-to-Adoption Horizon: Two to Three Years
- Robotics–While a robot maid like Rosie from The Jetsons may still be a couple of decades off, robots are transforming industries such as mining, manufacturing, and transportation–to name only a few. Educators are also turning to robotics in the classroom to engage students and improve student outcomes. K-12 educators can use robotics to engage students in hands-on learning, and learning how to code a robot can strengthen computational thinking skills. Additionally, the report explains that emerging studies also show that interaction with humanoid robots can help learners with spectrum disorders develop better communication and social skills.
- Virtual Reality–Virtual reality in the classroom can allow students to travel to the Great Pyramid of Giza, rather than simply read about it in a book. While common adoption for classroom use is still a few years off, VR is gaining traction in video games. Since the gamification of the classroom is on the rise, the use of VR to improve engagement and make learning more enjoyable, realistic, and hands-on will rise in coming years.
Time-to-Adoption Horizon: Four to Five Years
- Artificial Intelligence–AI in the classroom offers numerous benefits for both teachers and students. Smart chatbots can help students answer questions and complete homework, while taking part of the workload off of teachers. AI programs could also help teachers grade essays and papers, giving teachers more time to tutor students or lesson plan. The report also notes that many students may not be aware of their encounters with AI as it is embedded in adaptive learning platforms, in which intelligent software personalizes learning experiences based on how each student is responding to prompts and progressing through videos and readings in virtual environments.
- Wearable Technology–Wearable technology has impacts in numerous K-12 school subjects. Whether it’s to improve student fitness by encouraging exercise through goal-setting and healthy competition, or making STEM more fun by coding wearables, wearables could be integrated into many parts of a student’s day. Additionally, wearable technology can also be coupled with VR to create truly immersive educational experiences.
The report also highlights key trends in education industry, as well as new technologies that will shake things up over the next five years. | <urn:uuid:a3a5a20d-0867-4909-a031-a32e6301dd52> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/virtual-reality-makerspaces-and-online-learning-on-the-horizon-for-education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00455-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955219 | 823 | 3.421875 | 3 |
XSLT is an excellent tool for performing some kinds of transformations and a rather poor tool for other types of transformations, such as non-trivial string manipulation and iterative processes. However, the Novell XSLT processor implements extension functions that allow the style sheet to call a function implemented in Java, and by extension, any other language that can be accessed through JNI.
For specific examples, see Query Processors, which covers using the query processor, and the following example, which illustrates using Java for string manipulation. To view the complete style sheet, see Extension_Functions.xsl.
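Because the template below leans on two java.lang.String methods, it may help to see what those calls do in plain Java first. This is only a sketch; the sample DN is made up.

public class DnPrefixSketch {
    public static void main(String[] args) {
        String srcDn = "novell\\provo\\user1";    // hypothetical backslash-delimited DN
        int index = srcDn.lastIndexOf('\\');      // position of the last backslash
        if (index != -1) {
            // Everything before the last backslash, i.e., the DN prefix.
            System.out.println(srcDn.substring(0, index));   // prints novell\provo
        }
    }
}

The style sheet below performs the same prefix extraction, but through the jstring namespace binding rather than compiled Java.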
<!-- get-dn-prefix places the part of the passed dn that -->
<!-- precedes the last occurrence of '\' in the passed dn -->
<!-- in a result tree fragment meaning that it can be -->
<!-- used to assign a variable value -->
<xsl:template name="get-dn-prefix" xmlns:jstring="http://www.novell.com/nxsl/java/java.lang.String">
   <xsl:param name="src-dn"/>
   <!-- use java string stuff to make this much easier -->
   <xsl:variable name="dn" select="jstring:new($src-dn)"/>
   <xsl:variable name="index" select="jstring:lastIndexOf($dn,'\')"/>
   <xsl:if test="$index != -1">
      <xsl:value-of select="jstring:substring($dn,0,$index)"/>
   </xsl:if>
</xsl:template> | <urn:uuid:91ed4aac-b615-42e1-9fc0-73532afe13ed> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/idm402/policy/data/policyxsltexfunction.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.650679 | 349 | 2.78125 | 3 |
Google's Project Loon Test Flight in Brazil Shows Promise
A Project Loon test balloon allowed students in a rural classroom in Brazil for the first time ever to connect wirelessly to the Internet right from inside their school.
Google's Project Loon has reached several key milestones in testing being done in Brazil as the innovative Loon experiment moves forward to deliver affordable high-speed Internet access to users in remote locations using connections made through high-altitude balloons. One of the first successes is the connection of a school on the rural outskirts of the town of Campo Maior to the Internet for the first time using the wireless connections through a Loon balloon, according to a June 16 post on the Project Loon Google+ page. "The vast majority of this community doesn't have Internet or cell service—but the locals know of a few very specific spots around town where they might find a weak signal," the post states. "So if you see them sitting in trees, you'll know why. (In fact, they have a word for this—'vaga-lume,' which means 'fireflying' in English—because at night that's what the glow from their mobile phones looks like.) But with the Project Loon team in town and one of our balloons overhead, the students in [a] geography class were able to get to the Internet from their classroom for the first time as they learned about world cultures." The successful test flight also marked a few other significant firsts for Project Loon, the post states. "Launching near the equator taught us to overcome more dramatic temperature profiles, dripping humidity and scorpions. And we tested LTE technology for the first time; this could enable us to provide an Internet signal directly to mobile phones, opening up more options for bringing Internet access to more places."
Project Loon, which was unveiled in June 2013, is being touted as a high-tech way to create Internet connections for two-thirds of the people in the world who currently don't have Internet access due to high costs and the difficulty of stringing connections in rural and far-flung parts of the world. | <urn:uuid:221c3aab-7002-441d-8ea5-fd9e1a3dd6c3> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/googles-project-loon-test-flight-in-brazil-shows-promise.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00261-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965046 | 431 | 2.953125 | 3 |
In 2007, the city of Austin set out to accomplish a lofty goal — become 100 percent carbon neutral by 2020.
With eight years until its self-appointed deadline, the city continues to make strides to not only reduce its energy consumption, but also — and perhaps most importantly — create a culture of sustainability and conservation among its 12,000 city employees and the larger metropolitan area.
“Everything has evolved since we initiated the plan in 2007, which has been an interesting part of the whole process,” said Zach Baumer, Austin’s climate protection program manager. “The City Council and the mayor have changed, but the city remains committed, and has raised the importance level of the resolution. It is becoming ingrained in the culture of the city.”
In 2012, the city checked off one of its milestones, which was to power all city facilities using renewable energy, and is now the No. 2 city on the U.S. Environmental Protection Agency's (EPA) Top 20 Local Government list of green power purchasers — ranked by annual green power usage.
Austin sells more renewable energy through a utility-sponsored, voluntary green-pricing energy program than any of the 850 other programs in the nation. City officials are quick to point out that Houston ranked No. 1 for purchasing 438 million kilowatt-hours (kWh). However, that represents only 34 percent of the city’s energy usage. Austin’s purchase of 406 million kWh represents 100 percent of municipal energy usage.
Making the push to 100 percent renewable energy did come with a price, though: The Austin American-Statesman reported that using renewable energy cost an additional $8.5 million for the city during the first year.
While the fuel charge rose from roughly 3 cents per kWh to 5.7 cents per kWh with the renewable energy program, GreenChoice, the city is locked into that rate for the next 10 years, thus reducing its risk exposure to rising fossil fuel prices.
“It did cost more in the short term,” Baumer said, “but the analysis was done, and we expect it will save us way more over the long term.”
Given the tough budgetary climate for municipal governments, and the price sensitivity many households are facing, making the push to carbon neutral is going to be a delicate balance of sustainability and affordability.
“We’ve set some very ambitious policy goals for our utility as it relates to utilizing renewables,” said Austin Mayor Lee Leffingwell during his 2012 State of the City Address. “As we move toward that goal — and as we transition to using more and more clean energy — we will always, always do so with affordability as a core value.”
Thus far, though, the Austin community seems committed to the plan.
“The community keeps saying, ‘do it’ and we keep saying, ‘OK.’ Our citizens are going to hold us accountable, and there is constant pressure from the community, which is a good thing,” Baumer said. “There’s no question that climate change is real, so why would we not meet our commitments? It’s not just a question of ‘should we do it,’ it’s ‘we have to do it.’”
Convincing all city employees that energy conservation is a priority can be challenging, Baumer said, especially when they have important priorities of their own. For example, the fire department and EMS are going to place saving lives ahead of saving gas.
“We continue working to incorporate it into the organizational culture, because the kinds of things we’re changing have to go all the way through the system to every employee,” Baumer said. “We are constantly working on our outreach within the organization. We always try to tie what we’re doing back to innovation and sustainability.”
In addition to instilling an attitude of sustainability, specific goals for the city include converting its entire fleet of vehicles to renewable or hybrid consumption and achieving 700 MW of new savings through energy efficiency and conservation — both by 2020.
In 2011, the city emitted 183,000 metric tons of CO2, marking the fourth straight year of steady declines since the city’s climate resolution was introduced. In 2007, that number was nearly 300,000 metric tons.
As of 2011, 65 percent of vehicles used alternative fuels or were hybrid vehicles, up from 60 percent the year before.
That includes more than 200 gasoline/electric hybrids, more than 500 flex-fuel ethanol vehicles, more than 200 propane vehicles, more than 30 all-electric vehicles, and more than 1,800 diesel vehicles and equipment using B20 biodiesel blend. While the city doesn’t have as much control over outside contractors, such as the public transportation system, it is working with them to promote and encourage renewable fuels.
Since 2007, the city has increased purchases of E85 (ethanol) fuel to more than 220,000 gallons per year and B20 to more than 1.7 million gallons per year. In 2010, E85 and B20 replaced traditional gasoline and diesel purchases by 300,000 gallons and 2.4 million gallons per year, respectively.
The city also performs a life cycle cost analysis to ensure a certain vehicle makes the most sense over the long term.
“When looking to purchase a vehicle, we typically analyze four vehicle choices and compare them based on lifetime cost of ownership and environmental performance,” Baumer said. “The lifetime cost includes a 10-year time horizon, lifetime maintenance and fuel usage along with the up-front purchase price in our analysis. We also calculate lifetime [nitrogen oxide] and CO2 emissions for each vehicle. These are all compared, and the vehicle with the lowest cost of ownership and the lowest environmental impact wins.”
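A stripped-down version of that comparison looks something like the sketch below. The vehicles, prices and rates are placeholders rather than Austin's actual figures, and a real analysis would also discount future costs and tally lifetime NOx and CO2 for each candidate.

public class FleetLifecycleSketch {
    // Total cost of owning one vehicle over a 10-year horizon.
    static double lifetimeCost(double purchasePrice, double annualMaintenance,
                               double milesPerYear, double fuelCostPerMile) {
        int years = 10;
        return purchasePrice + years * (annualMaintenance + milesPerYear * fuelCostPerMile);
    }

    public static void main(String[] args) {
        // Illustrative candidates only.
        double gasolineTruck = lifetimeCost(30_000, 1_200, 12_000, 0.20);
        double hybridTruck   = lifetimeCost(36_000, 1_000, 12_000, 0.12);

        System.out.printf("Gasoline truck, 10-year cost: $%,.0f%n", gasolineTruck);
        System.out.printf("Hybrid truck, 10-year cost:   $%,.0f%n", hybridTruck);
        // Under the city's rule, the candidate with the lowest lifetime cost and
        // the lowest lifetime emissions wins the comparison.
    }
}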
Though the goal is to become carbon neutral by 2020, part of that push will likely require the purchase of carbon offsets.
“Our approach is that we reduce our positive carbon use as much as we can, and then we offset whatever’s left,” Baumer said.
The purchase of offsets will fund a range of greenhouse-gas-reduction programs that over time will also help improve the environmental quality in central Texas.
There certainly will be infrastructure challenges, such as large construction vehicles, for which there is no viable electric or hybrid option at this point, and other large equipment, such as generators or refrigerants, that will continue to emit some greenhouse gases.
“We’re trying to do everything we can to minimize usage, and switch what we can. We are finding ways to increase efficiency, and all of that will help,” Baumer said.
The push for all city operations to become carbon neutral is just one component of the city’s larger climate protection plan, however. While it’s not necessarily part of achieving its goal of carbon neutrality by 2020, the city is working diligently with citizens and local businesses, as roughly 70 percent of Austin’s electricity is used by homes and businesses.
“We are working to inspire businesses to commit to the plan as well,” said Baumer. “We can only do so much as a city, but if you look at the million people who live and work here, they can make a large impact as well. We are always rethinking our public education programs.”
The city has created the Austin Green Business Leaders program, an ongoing program open to businesses of all sizes and all industries. The program’s central tool is the green business scorecard, which helps businesses assess and implement sustainability practices.
Companies earn points for taking actions listed in the scorecard, and depending on their score, can progress through silver, gold and platinum levels of recognition. After completing the scorecard, businesses are provided recognition and a toolkit to promote their company as an Austin Green Business Leader.
“I’ve often said, ‘What is good for our environment is good for our economy,’” Mayor Leffingwell said. “It speaks to our values as a city that we have such vibrant and socially responsible businesses here in Austin, and these folks are setting an example in sustainability for businesses of all shapes and sizes to follow.”
The 2012 Austin Green Business Leaders include Fortune 500 companies, such as Dell and Whole Foods Market, as well as local independents such as Buenos Aires Café, and House+Earth, a building materials supply company.
“Not only does the Austin Green Business Leaders program provide an effective barometer for measuring the sustainability of a business’ operations, it also serves as an educational and motivational tool for implementing more sustainable actions,” House+Earth Principal Scott Kuryak said. “With the current scorecard, large corporations and small companies alike can participate on a level playing field, which should encourage companies of all types to participate.”
Though there is still much work to do, and there will likely be unforeseen challenges and obstacles, the city will remain committed to its climate plan through 2020 and beyond.
“The bottom line is that if we will proceed carefully at this crossroads,” Leffingwell said, “we can continue to benefit from a utility that’s a national leader on green energy and conservation; that helps attract sustainable economic growth to our area; that helps support our special quality of life; and that delivers reliable and affordable service.” | <urn:uuid:e7a65f88-965d-4bf4-9da8-5150be157f52> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Austin-Goes-Carbon-Neutral.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00261-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957533 | 1,970 | 2.625 | 3 |
The evolving 3-D printing process excites people for several reasons. The technology's capable of making complex shapes quickly, and it's a great way to produce parts to test for form and function in early manufacturing stages. However, most operators currently use plastic or light metal alloys for relatively large objects.
But what if the technology could print human organs and microfibers on a large scale?
Scientists are already printing tiny strips of living tissue, and they hope to print entire human organs as the technology grows in sophistication. In a process called "bioprinting," doctors could use isolated organs and tissue to test vaccines and other biological agents without worrying about harming animals or relying on inaccurate modeling programs. And the process, once perfected, could produce entire body parts for patient transplants.
According to CNN, 3-D bioprinting involves harvesting living cells from biopsies or stem cells before allowing them to multiply in a petri dish. Scientists feed this "biological ink" into a 3D printer that converts the cells into a 3-D shape that may integrate with existing tissue when placed inside of or onto a host body.
Gastroenterologist Dr. Jorge Rakela told CNN that the technology could transform medicine. "This is an exciting new area of medicine," he said. "It has the potential for being a very important breakthrough."
The world's zeal for 3-D printing will increase, as will the medical community's involvement. According to Bloomberg, the market for 3-D printing reached $777 million in 2012, and it may grow to $8.4 billion in 2025 as medical applications come into play.
Current applications hold promise, but some incorporate non-organic material for a cybernetic result. Princeton scientists 3-D printed a bionic ear last year that could hear beyond a regular human's natural ability. They printed human cells and nanoparticles, and bonded them with an antenna and cartilage to create the body part. They created an ear that heard radio frequencies a million times higher than human ears can. Princeton researcher Michael McAlpine told Mashable that it was intended for demonstration purposes rather than actual application.
"The idea of this was: Can you take a normal, healthy, average human and give them [a] superpower that they wouldn't normally have?" he said.
Other researchers also are developing the technology to produce microscopic materials. Harvard scientist Jennifer Lewis and her students have printed microscopic components, including electrodes, that could be used to make lithium-ion batteries. This year, they also manufactured a patch of tissue with blood-vessel-like material inside that can carry actual blood.
She's adapted 3-D printing to make it more sophisticated, with "inks" comprising materials that are more diverse than plastic and metal, and also high-precision printing platforms with fine nozzles.
Lewis told the Wyss Institute last year that her team's approach was "distinct from commercially available 3-D printers because of its materials flexibility, precision and high throughput."
3-D printing's evolution will continue for the foreseeable future, especially when it comes to organic tissue. A huge limitation to the advancement of 3-D printing of organic tissue has been supplying printed tissue with blood throughout the process. Additionally, living tissue is more complex than anything else that's currently being created. But enthusiasts have reason to hope with developments like Lewis's blood vessel work.
Anthony Vicari, an analyst at Boston-based Lux Research, told Bloomberg that 3-D printed organs are possible, but it will be a while before they become reality.
"Organs are foreseeable, but that's a long-term goal," Vicari said. "That requires not just the better printing technology, but much better understanding of tissue engineering." | <urn:uuid:92f8228e-cedb-44ed-969c-0122bd69fedd> | CC-MAIN-2017-04 | http://www.govtech.com/videos/Will-3-D-Printing-Produce-Human-Organs-Nearly-from-Scratch.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957061 | 769 | 3.671875 | 4 |
As I See It: IT, the Early Days
by Victor Rozek
If the name Wilhelm Schickard doesn't mean anything to you, you're probably not alone. After all, the man has been dead for nearly 400 years. But while he was with us, Wilhelm was an accomplished fellow, managing to get his BA at the age of 17, and his MA two years later. An eclectic scholar, Schickard became a Lutheran minister and professor of Aramaic and Hebrew at the University of Tubingen, where his curiosity eventually turned to the study of astronomy, surveying, and mathematics.
In adulthood, he was a severe looking man with a pronounced widow's peak and a countenance that confirms a suspicion I've long held; namely, that if portraits are any indication, people living in the 17th century weren't allowed to be happy. Regardless, Schickard was a man of many aptitudes, but he is perhaps least appreciated for his most important achievement-being the father of IT.
Opinions, of course, vary, and arguably IT can trace its ancestry to the abacus. But the abacus is a beaded hybrid, not an automatic computing machine. For that matter, 30,000 years ago cave dwellers started carving notches into bones. I suppose a purist could argue that if a finger bone was used, this was indeed the world's first "digital" technology.
But I digress.
Many credit Charles Babbage, with assistance from Ada Byron, as being the parents of modern information technology with Babbage's design of an analytical engine, the first programmable machine envisioned two centuries after Schickard's death. Babbage, however, did not live to see his invention constructed. Nonetheless, its design bore striking similarities to the modern computer. Still others, who take a longer view, credit Pascal (the man, not the language) for inventing the first calculator in 1645.
Pascal is probably the front-runner for consideration as being the genesis of all things IT, perhaps because his name still resonates. After all, his contemporary Schickard never had a programming language named after him. But there is a small problem with anointing Pascal as the source of digital technology: He wasn't the first to invent it, Schickard was. In fact, Schickard invented the first-known mechanical calculator in 1623, the same year Pascal was born.
The machine contained six vertical cylinders for addition and subtraction, and eight horizontal graduated rods called Napier's bones, which allowed the user to perform multiplication and division. It had several notable features, among them it could add and subtract six-digit numbers and had a revolutionary carry mechanism to add partial products. It also alerted the user to an overflow of capacity by ringing a bell; the first error message, a feature that would later become synonymous with Windows. But although it could miraculously perform all four basic numeric functions, it remained largely unknown for three centuries.
Perhaps, as was the case with early computers, the average person could not imagine a useful application for such a device. It seems absurd now, but in 1943 even IBM chairman Thomas Watson could only envision a global market for no more than five computers. And Ken Olsen, the founder of the former Digital Equipment Corp, once asked sarcastically, "Who needs a personal computer at home?"
Nor could it have been simple to mass produce such a labor intensive device, much less market it. And unlike Pascal's device, which he modestly called the Pascaline, no examples of Schickard's work survived to inhabit dusty museums.
But his letters survived, and in one of them he describes his invention to a friend. And should the skeptics accuse him of wishful construct, his notes and plans for the invention also survived and more than three hundred years later, in 1960, a working model was finally assembled. Still, so poorly regarded was Schickard's claim that the 1958 edition of the Encyclopedia Britannica (printed two years before Schickard's calculator was first assembled) credits Pascal with the invention. Schickard's design isn't even mentioned. In fact, he doesn't appear in the encyclopedia at all.
Unlike the diverse Schickard, Pascal showed an early fixation with mathematics despite his father's admonitions. As an indication of how timeless parental concerns are, Pascal's father was distrustful of Parisian schools and decided to home-school his boy. For reasons passing understanding, the father forbade the son from studying mathematics before the age of 15, thereby ensuring Pascal's life-long fascination with the subject. While other boys were out sampling the sinful pleasures of Paris, Pascal was covertly studying geometry.
In 1639, Pascal's family moved to Rouen, where his father managed to get himself a government job as a tax collector for Normandy. Ever the dutiful son, Pascal spent three years developing a digital calculator to help his father count the tribute demanded by the French monarchy. According to J.J. O'Connor and E.F. Robertson, who compiled Internet biographies for Schickard and Pascal, when completed "the Pascaline resembled a mechanical calculator of the 1940s." It was not as sophisticated as Schickard's calculator, however, being essentially limited to addition and subtraction. Multiplication and division functions were performed by doing a series of additions and subtractions.
The peculiar nature of French currency at the time made Pascal's job especially tricky. "There were 20 sols in a livre and 12 deniers in a sol." A base-ten system would have been simpler, but nonetheless, over the next decade Pascal produced 50 prototypes. Unfortunately, few actually sold, and manufacturing came to an eventual halt. But because of their numbers, some of the prototypes survived.
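To see why reckoning in that currency was tedious, consider what converting a raw count of deniers back into livres, sols and deniers involves: the carries happen in base 12 and base 20 rather than base 10. The amount below is arbitrary.

public class LivreSolDenier {
    public static void main(String[] args) {
        int totalDeniers  = 4_567;   // an arbitrary sum to convert
        int deniersPerSol = 12;
        int solsPerLivre  = 20;

        int sols    = totalDeniers / deniersPerSol;   // 380 sols ...
        int deniers = totalDeniers % deniersPerSol;   // ... with 7 deniers left over
        int livres  = sols / solsPerLivre;            // 19 livres ...
        sols        = sols % solsPerLivre;            // ... with 0 sols left over

        System.out.printf("%d deniers = %d livres, %d sols, %d deniers%n",
                          totalDeniers, livres, sols, deniers);
    }
}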
Several years later, his notoriety waning, Pascal engineered not a machine but a scandal. Using the name Amos Dettonville, he challenged other notable mathematicians to a contest, and awarded the prize to himself. All is apparently fair in love and math. Einstein once said that imagination is more important than knowledge, and clearly Pascal did not suffer from a failure of imagination.
Although both Schickard's and Pascal's inventions were revolutionary, a true program-controlled machine was still three centuries away. The first binary, floating point, programmable computer was created by Konrad Zuse in 1941. The better known Mark II, ENIAC, Whirlwind, Colossus, and UNIVAC followed several years later. Perhaps Zuse's creation, the Z3, is not as well known as some of its counterparts because it did not survive the bombardment of Germany in 1944. However, as with Schickard's calculator, the system was later reconstructed and a model is on display in the Deutsche Museum in Munich.
The Z3 consisted of 600 relays in the numeric unit, 1600 relays in the storage unit, and 1400 relays in memory. It boasted a frequency of 5.3 Hertz, and required three steps to perform an addition instruction, 16 steps for multiplication, and 18 steps for division. The average calculation speed for multiplication or division was about three seconds. Addition could be performed in under a second. It weighed 1000 kilograms, and will not easily be mistaken for a laptop. Ironically, for all its bulk and sophistication, the calculation speed of the Z3 was only a scant improvement on an abacus in the hands of an expert.
From dinosaur bones to Napier's bones, from calculators to computers; such are the threads that bind innovators across centuries and give rise to global industries. "A tool," said Henry Ward Beecher, "is but the extension of a man's hand, and a machine is but a complex tool. But he that invents a machine augments the power of a man and the well-being of mankind."
And womenkind, too. | <urn:uuid:62956015-46b9-4c2b-ba6d-4e1fe83466fd> | CC-MAIN-2017-04 | http://www.itjungle.com/tlb/tlb053105-story04.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00016-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975168 | 1,616 | 3.109375 | 3 |
The latest intelligence leak from Edward Snowden may be the most science fiction-like revelation yet: the National Security Agency is building a quantum computer, a machine thousands of times faster than the fastest computers on the planet.
The NSA acknowledged in 2009 that its research includes quantum computing. But now we know that such technology could be used to further bypass privacy encryptions on the Web.
"A working quantum computer would open the door to easily breaking the strongest encryption tools in use today," explains The Washington Post, which first reported the leak Thursday.
The NSA is not alone in the arms race for world's most powerful computer. NASA, along with Google and the Universities Space Research Association, purchased a $10 million version of the machine in 2012 from Canadian company D-Wave Systems. The trio seeks to apply the machine in areas ranging from air traffic control and robotics to the search for habitable planets.
Scientists say the value of quantum computers will be making sense of, or "optimizing," a world with increasingly complicated sets of data. The NSA, on the other hand, has bigger fish to fry than most. Its hypothetical large-scale quantum computer could crack not only the digital tools used to protect online shoppers' financial transactions, but state secrets too.
Quantum computers are a complete rethinking of computing. Traditional computers—even the most sophisticated ones—still rely on transistors, electrical circuits that are either switched on or off, producing the lines of ones and zeros that make up computer processing.
A quantum computer isn't limited by ones and zeros. It introduces many more levels of complexity by tapping into the weird physics of electrons, which can operate in several states simultaneously. Quantum computers introduce many shades of one and zero. Or to make your head explode, a qubit (a quantum bit) can be both a one and a zero at the same time.
And here's why that's a game changer: "Dividing or multiplying numbers is fairly easy for any computer, but determining the factors of a really large 500- or 600-digit number is next to impossible for classical computers," explains National Geographic. "But quantum computers can process these numbers easily and simultaneously." And modern-day encryption, explains the University of Waterloo, more or less relies on "math problems that are too tough to solve."
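The asymmetry is easy to demonstrate on an ordinary machine: multiplying two primes is instant, while recovering them from the product by brute force already takes millions of steps for the toy numbers below, and becomes hopeless at the 500- or 600-digit scale used in encryption. The two primes were chosen only to keep the demonstration fast.

import java.math.BigInteger;

public class FactoringSketch {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(15_485_863);   // a known prime
        BigInteger q = BigInteger.valueOf(32_452_843);   // another known prime

        // Multiplication: effectively instantaneous at any size.
        BigInteger n = p.multiply(q);
        System.out.println("n = " + n);

        // Factoring by trial division: already millions of iterations for this
        // toy n, and utterly infeasible for numbers hundreds of digits long.
        for (BigInteger d = BigInteger.valueOf(2);
             d.multiply(d).compareTo(n) <= 0;
             d = d.add(BigInteger.ONE)) {
            if (n.mod(d).equals(BigInteger.ZERO)) {
                System.out.println("found factor: " + d);
                break;
            }
        }
    }
}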
Just explaining how quantum computing works requires talking in multiple universes. It's that crazy. But the machines are finicky, and in the earliest stages of development. The NASA-Google computer needs to be shielded from the Earth's electromagnetic field, and it takes a month to calibrate.
The NSA appears to feel the same time and resource constraints that the space agency is dealing with. "Although the full extent of the agency's research remains unknown, the documents provided by Snowden suggest that the NSA is no closer to success than others in the scientific community," the Post writes.
While some scientists say NASA's machine, which researchers began testing this past fall, is not a true quantum computer, the agency says that "it will be the most powerful in the world." The latest NSA leak may suggest otherwise. | <urn:uuid:9b97fb7b-65d6-4c76-86bd-e9d91708a6eb> | CC-MAIN-2017-04 | http://www.nextgov.com/defense/2014/01/latest-nsa-leak-nears-science-fiction-levels/76196/?oref=ng-dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00318-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922184 | 630 | 2.875 | 3 |
The Three Most Common Mistakes in IT Security
The most common security mistake organizations make is not institutionalizing awareness of computer security and not training employees as to why it’s important. A password’s importance extends beyond telling companies when users have logged on to a machine, and the importance of a hardware inventory goes beyond telling companies who has what laptop and when. Both are important for the entire organization and for regulatory compliance.
For this reason, it’s important to foster proper awareness of the purpose behind computer security — employees frequently misidentify security measures as restrictions rather than protections. The mistake is in valuing hardware over people and not recognizing the significance of the human factor in securing computers and data. Beyond that, the most common mistakes IT departments themselves make are a little more specific.
What follows are the three most common mistakes in IT security:
1. Using Default Configurations for Hardware and Software. Installing software or hardware appliances with the default password configurations or out-of-the-box settings opens an organization’s IT infrastructure to brainless attack.
“If you install a piece of software that has administrative passwords on it, for example, and you don’t change the default passwords and configurations, then anyone who knows what the default configurations or passwords are can access that software or hardware,” said Allen Clarkson, Western Governors University interim program manager and faculty mentor for the IT Program.
Clarkson administers WGU’s security degree, has taught classes on security and has published articles about security.
2. Having a Poorly Implemented or Incoherent Password Policy. Passwords are still the lynchpins of user authentication and user-level access systems.
“We have other technologies — biometrics, smart cards and whatever else that are becoming more realistic solutions for small and medium-sized organizations — but passwords are still the first step in securing systems,” Clarkson said. “It’s how we know who users are, when they log on and that they have the access they’re supposed to have.”
Poorly designed and implemented password policies, however, can end up exposing networks to easy attacks. Potential missteps include neglecting to articulate a policy for password expiration or to enforce best practices for password creation.
“For example, a password policy may require that you have a combination of lowercase and capital letters, symbols, numerals, that you don’t use words — that sort of thing,” Clarkson said. “If you don’t enforce that policy systematically, if you just sort of suggest it to people, most non-IT people will stick with ‘password’ or ‘letmein’ or their dog’s name or their birth date or whatever.”
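Enforcing a rule like that consistently means checking it in software whenever a password is set, not merely suggesting it. The sketch below shows one minimal way to do so; the specific thresholds are an example, not a recommendation for any particular organization.

import java.util.regex.Pattern;

public class PasswordPolicySketch {
    // Example policy: at least 10 characters, with lowercase, uppercase,
    // a digit and a symbol.
    private static final Pattern LOWER  = Pattern.compile("[a-z]");
    private static final Pattern UPPER  = Pattern.compile("[A-Z]");
    private static final Pattern DIGIT  = Pattern.compile("[0-9]");
    private static final Pattern SYMBOL = Pattern.compile("[^A-Za-z0-9]");

    static boolean meetsPolicy(String candidate) {
        return candidate.length() >= 10
                && LOWER.matcher(candidate).find()
                && UPPER.matcher(candidate).find()
                && DIGIT.matcher(candidate).find()
                && SYMBOL.matcher(candidate).find();
    }

    public static void main(String[] args) {
        System.out.println(meetsPolicy("letmein"));           // false
        System.out.println(meetsPolicy("Blue7horse!rain"));   // true
    }
}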
But if an IT department enforces a complex password policy without anything in the way of user training, it can lead to an even more profound vulnerability.
“A strong password policy where users have to have very complex passwords may end up creating an even more dangerous situation or defeating the whole purpose because users write down their passwords on a sticky note and stick them right on their screen,” Clarkson said. “The reason that people do that is not because they don’t care what their password is or don’t care about people being able to access the network but simply because the passwords are too complex for them to memorize.”
The key is to enforce a strong password policy while training users on how to design one they can memorize such as spelling a word they’ll remember with numerals intermingled. | <urn:uuid:1e8c53a9-e24a-414a-8165-02d07087dcd7> | CC-MAIN-2017-04 | http://certmag.com/the-three-most-common-mistakes-in-it-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927574 | 754 | 2.765625 | 3 |