While working on the previous post, there was an accidental discovery. To indent (Wikipedia) a bullet or numbered bullet once the Bullet button has been clicked, click the Quotes button.
To outdent (Wikipedia) the bullet or numbered bullet, just click the Bullet button.
- A single click on the Bullet button gives us a bulleted list.
- The indented bullet list after clicking the Quotes button.
- Indented again, after the Quotes button.
- The outdented bullet after clicking on the Bullets button.
This is good to know since we have been manually changing the HTML code to introduce the needed indents or subsequent outdents.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book
Hardly 15 years old, Wi-Fi technology has become the connective tissue binding our digital lifestyles. It was developed from the scrap heap of wireless spectrum to become one of the most critical technologies of our day. In fact, Wi-Fi carries more data than any other medium – more than 12,000 petabytes per month.
The cable industry has invested heavily in Wi-Fi to extend the consumer broadband experience well beyond the home. We’ve built over 300,000 publicly accessible hotspots and worked hard to push the technical possibilities of home and outdoor broadband. But the frequencies currently used to power today’s Wi-Fi have become overcrowded and need room to expand to keep pace with exploding consumer demand.
That’s why as a nation, we need to make more spectrum available for unlicensed use so that Wi-Fi can grow – and the fastest way to do that is to find new bands where Wi-Fi can share with existing technologies. By increasing the amount of spectrum that Wi-Fi can share, our spectrum supply can keep pace with the tremendous growth in Wi-Fi usage and jump-start a new generation of Wi-Fi that can reach speeds of up to a gigabit per second.
“As a nation, we need to make more spectrum available for unlicensed use so that Wi-Fi can grow.”
Recently, policymakers have taken positive steps forward to encourage efforts that would open up new spectrum for Wi-Fi use. U.S. House Energy and Commerce Committee leaders said they will initiate talks to yield greater spectrum sharing in the 5.9 GHz band, which was allocated over 15 years ago for vehicle communications. While this technology is important, auto manufacturers have yet to develop a commercial product despite millions in government subsidies. That means that today, while there are neither auto nor Wi-Fi technologies in the band, is the perfect moment to rethink how these frequencies can best be utilized and shared, and to develop a true win-win solution.
In a bipartisan joint statement, FCC Commissioners Jessica Rosenworcel and Mike O’Rielly said, “More than a decade and a half after this spectrum was set aside for vehicle and roadside systems, we agree it is time to take a modern look at the possibilities for wireless services in these airwaves, to allow a broader range of uses. We believe by taking steps right now, we can support automobile safety, increase spectrum for Wi-Fi, and grow our wireless economy.”
To move the ball downfield, we believe it is time for both autos and Wi-Fi to roll up our sleeves and do the engineering work needed to make spectrum sharing a reality. With consumer demand and bandwidth-intensive applications skyrocketing, we need to work now to keep Wi-Fi growing for all. And as our technology needs expand, finding ways to maximize this inherently limited resource will be increasingly important. That’s why it’s time for both autos and Wi-Fi to agree on how to share the road. Fortunately, Wi-Fi can do just that.
Every year counterfeit fraud losses continue to decrease all over the world thanks to the security of EMV.
What’s behind the security of an EMV card?
Why did the developers of EMV specify a smart card chip inside of every card? For one reason - security.
A smart card chip is a small computer (or microprocessor) that has its own data storage, processing power, and application software. Unlike a magnetic stripe card, a chip is extremely difficult to crack.
A smart chip offers greater security because it contains a secure vault that holds unique keys specific to each card that protect your transactions.
A Unique Code for Each Transaction
EMV cards generate a unique code that is validated by your bank for each transaction, and the code cannot be re-used. A fraudster couldn’t make a transaction using a fake card with stolen data at an EMV terminal because it wouldn’t be able to generate the proper code.
EMV security is based on strong cryptography which is used to generate the unique transaction codes that allow the terminal to authenticate the card. This cryptography is built on private key infrastructure, meaning that only a chip card that is personalized with the cardholder’s private key during manufacturing can generate a valid transaction.
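To make the idea concrete, here is a simplified sketch of how a per-transaction code can be derived. This is an illustration only: real EMV cards compute their cryptograms with issuer-specified MAC algorithms over precisely defined data fields, and HMAC-SHA256 merely stands in for them here.

```python
# Illustrative sketch only -- not the actual EMV cryptogram algorithm.
import hmac, hashlib, os

card_key = os.urandom(16)   # unique key personalized into this card's chip

def transaction_code(key, counter, amount, currency, terminal_nonce):
    """MAC over the transaction data: different inputs, different code."""
    data = f"{counter}|{amount}|{currency}|{terminal_nonce}".encode()
    return hmac.new(key, data, hashlib.sha256).hexdigest()[:16]

# Card side: the on-card counter increments every transaction, and the
# terminal contributes an unpredictable number, so codes never repeat.
code = transaction_code(card_key, counter=42, amount="19.99",
                        currency="USD", terminal_nonce=os.urandom(4).hex())

# Issuer side: the bank recomputes the code with its copy of the key and
# rejects anything that does not match -- a replayed or forged code fails.
```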
SDA vs. DDA
EMV cards can use either SDA or DDA, which is Static or Dynamic Data Authentication. DDA has become the industry standard because it is much more effective at reducing card fraud. Visa and MasterCard have mandated a migration to DDA on all EMV cards in Europe and Canada, and it is becoming standard in the U.S., too.
How effective is DDA at preventing cloning? Extremely. France's financial authority, the Banque de France, has proudly touted that no fraud cloning cases have been reported since France completed its DDA migration program in 2008.
Contactless and Mobile EMV
Contactless cards and mobile payments are attractive for the convenience they offer accountholders. Fortunately, today’s contactless EMV cards, along with Mobile EMV payments, are fortified with the full protection that EMV affords: two-way authentication of the card and POS, cryptographic verification, and the dynamic code that protects each transaction.
View the Strongest Solar Flare of 2013
April 16, 2013
After 3 a.m. on April 11, this year's strongest solar flare -- a powerful burst of radiation -- peaked.
Harmful radiation from a flare cannot pass through Earth's atmosphere to physically affect humans on the ground, according to NASA. But when they're intense enough, they can disturb the atmospheric layer where GPS and communications signals travel, which disrupts radio signals for as long as the flare is ongoing -- anywhere from minutes to hours.
NASA also says that increased numbers of flares are common at the present time, since the sun's normal 11-year activity cycle is ramping up toward solar maximum, which is expected in late 2013. Humans have tracked this solar cycle continuously since it was discovered, and it is normal for there to be many flares a day during the sun's peak activity.
Why Some Applications Are More Intuitive
Users often run into situations where a certain product or application will be somewhat puzzling. They’ll be forced to ask, "How do I do this?" or "Where do I find this function?" For those who like puzzles, this may be entertaining. But for the majority of people who use such applications to be more efficient, this can be an irritating distraction. The more puzzles they encounter, the faster they abandon the product.
Users expect the product to be designed for them. They understand that some learning may be needed. But in general, the functions have to be engineered in such a way that users can figure them out just by looking at them.
Then, why are there so many confusing products out there?
Recent research reveals that designers may fail to take into account the differences between the automatic response system and the reflective response system in the human brain. The former gives priority to immediate response, the latter to reflection, which is then followed by a premeditated action. For example, the automatic response system allows us to avoid an accident by swerving the steering wheel without thinking, while the reflective response system enables us to deliberately choose our investments.
These two systems are present when we interact with products and applications. A classic example of the importance of the automatic response system is described in the book “The Design of Everyday Things”. Most doors used to have handles on both sides, with signs to indicate whether you should push or pull. However, the majority of people will pull, even if a sign tells them to push. The door handle triggers the automatic response system, before the sign can trigger the reflective response system.
Therefore, in order for a product or service to be truly intuitive, it needs to follow two principles. First, its basic and most popular functions need to be accessible via the automatic response system, without thought and premeditation. Such minimalistic design was achieved in the iPod, where the four function wheel left no room for mistakes by the end user. Second, the design should not put the two systems in conflict. In other words, the door handle sends one signal, while the sign sends a different one. This type of error is the most common cause of user frustration.
A prime example of such conflict can be found in stove designs, where four burners are arranged symmetrically in a two-by-two grid, but the knobs are positioned linearly. How do you know which knob goes with which burner? In software applications, we often see similar conflicts. Icons do not properly represent their related functions or match their hover-over help labels, and dialogs contain functionally unrelated components, leaving the user with countless questions.
Designers and architects of physical goods have been following these principles for some time. We now see many doors that do not have handles if the user needs to push to open it, and stove tops have knobs that are arranged in the same manner as the corresponding burners. The application of these principles has also resulted in more intuitive and responsive UIs, like the ribbon found in Microsoft Office and other applications.
We must strive to apply the same techniques to the design of BI tools and custom web applications, particularly when users want to do more sophisticated types of analysis. Good design will simplify complex tasks, allowing end users to accomplish more with less effort. BI has been somewhat behind this trend, because creation has been delegated to IT professionals who understand the technology, but are not trained in the principles of design. But, this is not rocket science, and adherence to the two principles discussed above can help make applications more intuitive and easy to use.
U.S. data centers are using more electricity than they need. It takes 34 power plants, each capable of generating 500 megawatts (MW) of electricity, to power all the data centers in operation today. By 2020, the nation will need another 17 similarly sized power plants to meet projected data center energy demands as economic activity becomes increasingly digital.
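As a rough sanity check on those figures, the arithmetic works out if one assumes an average plant capacity factor of roughly 60%. That factor is our assumption, not a number quoted from the report:

```python
# Back-of-the-envelope check of the power plant figures above.
plant_mw = 500                    # nameplate capacity per plant
hours_per_year = 24 * 365         # 8,760 hours
capacity_factor = 0.61            # assumed average utilization (not from the report)

kwh_per_plant = plant_mw * 1_000 * hours_per_year * capacity_factor
usage_2013 = 91e9                 # kWh used by U.S. data centers in 2013
usage_2020 = 139e9                # projected kWh in 2020

print(int(usage_2013 / kwh_per_plant))                  # ~34 plants today
print(int((usage_2020 - usage_2013) / kwh_per_plant))   # ~17 more by 2020
```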
Increased electrical generation from fossil fuels means the release of more carbon emissions. But this added pollution doesn't have to be, according to a new report on data center energy efficiency from the Natural Resources Defense Council (NRDC), an environmental action organization.
In terms of national energy use, data centers consumed a total of 91 billion kilowatt-hours (kWh) in 2013 and, by 2020, will be using 139 billion kWh, a 53% increase.
The report argues that improved energy efficiency practices by data centers could cut energy waste by at least 40%. The problems hindering efficiency include comatose or ghost servers, which use power but don't run any workloads; overprovisioned IT resources; lack of virtualization; and procurement models that don't address energy efficiency. The typical computer server operates at no more than 12% to 18% of capacity, and as many as 30% of servers are comatose, the report states.
The paper tallies up the consequences of inattention and neglect on a national scale. It was assembled and reviewed with help from organizations including Microsoft, Google, Dell, Intel, The Green Grid, Uptime Institute and Facebook, which made "technical and substantial contributions."
The NRDC makes a sharp distinction between the large data centers run by big cloud providers, which account for only about 5% of total data center energy usage, and the rest of the industry. Throughout the data center industry, there are "numerous shining examples of ultra-efficient data centers," the study notes. These aren't the problem. It's the thousands of other mainstream business and government data centers, and small, corporate or multi-tenant operations, that are the problem, the paper argues.
The efficiency accomplishments of the big cloud providers, "could lead to the perception that the problem is largely solved," said Pierre Delforge, director of the NRDC's high-tech sector on energy efficiency, but it doesn't fit the reality of most data centers.
Data centers are "one of the few large industrial electricity uses which are growing," Delforge said, and they are a key factor in creating demand for new power plants in some regions.
Businesses that move to co-location, multi-tenant data center facilities don't necessarily make efficiency gains. Customers may be charged on space-based pricing, paying by the rack or square footage, with a limit on how much power they can use before additional charges kick in. But this model offers little incentive to operate equipment as efficiently as possible.
In total, the report says U.S. data centers used 91 billion kilowatt-hours of electricity last year, "enough to power all of New York City's households twice over and growing." By 2020, annual data center energy consumption is expected to reach 140 billion kilowatt hours.
If companies used data center best practices, the report states, the economic benefits would be substantial. A 40% reduction in energy use, which the report says is only half of the technically possible reduction, would equal $3.8 billion in savings for businesses.
The report also finds that energy efficiency progress is slowing. Once the obvious efficiency projects, such as isolating hot and cold aisles, are accomplished, additional investment in energy efficiency becomes harder to justify because of cost or a perception that it may increase risk. IT managers are "extremely cautious" about implementing aggressive energy management because it could introduce more risk to uptime, the report notes.
There are a number of measurements used to determine the efficiency of data centers, and the report recommends development of tools for determining CPU utilization, average server utilization, and average data center utilization. It says that "broad adoption of these simple utilization metrics across the data center industry would provide visibility on the IT efficiency of data centers, thereby creating market incentives for operators to optimize the utilization of their IT assets."
The NRDC isn't the first to look at this issue. In 2007, the U.S. Environmental Protection Agency, working with a broad range of data center operators and industry groups, released a report on data center power usage that found that the energy use of the nation's servers and data centers in 2006 "is estimated to be more than double the electricity that was consumed for this purpose in 2000." It called for energy efficiency improvements.
This story, "Data centers are the new polluters" was originally published by Computerworld.
5.1.1 What is S/MIME?
S/MIME (Secure / Multipurpose Internet Mail Extensions) is a protocol that adds digital signatures and encryption to Internet MIME (Multipurpose Internet Mail Extensions) messages described in RFC 1521. MIME is the official proposed standard format for extended Internet electronic mail. Internet e-mail messages consist of two parts, the header and the body. The header forms a collection of field/value pairs structured to provide information essential for the transmission of the message. The structure of these headers can be found in RFC 822. The body is normally unstructured unless the e-mail is in MIME format. MIME defines how the body of an e-mail message is structured. The MIME format permits e-mail to include enhanced text, graphics, audio, and more in a standardized manner via MIME-compliant mail systems. However, MIME itself does not provide any security services. The purpose of S/MIME is to define such services, following the syntax given in PKCS #7 (see Question 5.3.3) for digital signatures and encryption. The MIME body section carries a PKCS #7 message, which itself is the result of cryptographic processing on other MIME body sections. S/MIME standardization has transitioned into the IETF, and a set of documents describing S/MIME version 3 has been published there.
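As a rough illustration of that layout, here is a sketch using Python's standard email package. The signature bytes themselves would come from a PKCS #7 / CMS implementation such as OpenSSL; the "..." placeholder below stands in for them.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# S/MIME signed mail is an ordinary MIME message: the signed content is one
# body part, and the PKCS #7 signature travels as a sibling part.
msg = MIMEMultipart(
    "signed",
    protocol="application/pkcs7-signature",
    micalg="sha-256",
)
msg.attach(MIMEText("The quarterly figures are attached."))  # signed content

signature = MIMEApplication(
    b"...",                    # DER-encoded PKCS #7 SignedData would go here
    "pkcs7-signature",
    name="smime.p7s",
)
msg.attach(signature)

print(msg["Content-Type"])
# multipart/signed; protocol="application/pkcs7-signature"; micalg="sha-256"; ...
```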
S/MIME has been endorsed by a number of leading networking and messaging vendors, including ConnectSoft, Frontier, FTP Software, Qualcomm, Microsoft, Lotus, Wollongong, Banyan, NCD, SecureWare, VeriSign, Netscape, and Novell. Information on MIME can be found at ftp://ftp.isi.edu/in-notes/rfc1521.txt.
Today in Washington, DC, experts from more than 30 US and international cyber security organizations jointly released the consensus list of the 25 most dangerous programming errors that lead to security bugs and that enable cyber espionage and cyber crime. Shockingly, most of these errors are not well understood by programmers; their avoidance is not widely taught by computer science programs; and their presence is frequently not tested by organizations developing software for sale.
The impact of these errors is far reaching. Just two of them led to more than 1.5 million web site security breaches during 2008 – and those breaches cascaded onto the computers of people who visited those web sites, turning their computers into zombies.
The people and organizations that provided substantive input to the project are among the most respected security experts, and they come from leading organizations ranging from Symantec and Microsoft, to DHS’s National Cyber Security Division and NSA’s Information Assurance Division, to OWASP and the Japanese IPA, to the University of California at Davis and Purdue University. MITRE and the SANS Institute managed the Top 25 Errors initiative, but the impetus for this project came from the National Security Agency, and financial support for MITRE’s project engineers came from the US Department of Homeland Security’s National Cyber Security Division.
Until now, most guidance focused on the ‘vulnerabilities’ that result from programming errors. This is helpful. The Top 25, however, focuses on the actual programming errors, made by developers that create the vulnerabilities.
2009 CWE/SANS Top 25 Most Dangerous Programming Errors
Table of contents:
- Brief Listing of the Top 25
- Construction and Selection of the Top 25
- Organization of the Top 25
- Insecure Interaction Between Components
- Risky Resource Management
- Porous Defenses
- Appendix A: Selection Criteria and Supporting Fields
- Appendix B: Threat Model for the Skilled, Determined Attacker
- Appendix C: Other Resources for the Top 25
Document version: 1.0
December 10, 2013
Part engineer, part sculptor, Wim Noorduin at Harvard University creates microscopic gardens of tulips, roses and violets, in which each bloom is smaller than a strand of human hair.
The tiny flowers form when a glass plate is partially submerged in a beaker containing silicon and minerals, including barium chloride. Noorduin manipulates the environment of the salts by changing the temperature and injecting gases like carbon dioxide in order to create the flowers. According to a report on NPR, simply walking past the beaker will change the growth patterns.
Noorduin likened the process to 3-D printing, on a much smaller scale. Future uses could include microelectronics, medical sensors and optical materials.
Photo credit: Wim Noorduin/Harvard University
Glands of the Human Body
Organs of the human body that manufacture liquid products secreted from their cells are called glands. There are two types of glands in the human body.
Types of Glands in Human Body
- Ducted Glands: Also called exocrine glands, they secrete their products through well-defined ducts, e.g., liver - production of bile; lachrymal - secretes tears in the eyes; salivary - secretes saliva in the mouth; sweat glands in the skin - secrete sweat.
- Ductless Glands: Also called endocrine glands or internally secreting glands, they secrete hormones directly into the blood-stream in response to instructions from the brain.
|Gland|Function|
|---|---|
|Thymus|In early childhood it plays some part in building resistance to diseases and physical development|
|Prostate|Regulates blood pressure and sexual potency|
|Gonads|Relate to the reproductive system and secrete sex hormones|
|Adrenal|Causes acceleration of the breath, heightens emotion and a sudden increase in physical strength during fear or anger|
|Pancreas|Aids in digestion of proteins, carbohydrates and fats; it secretes insulin, and deficiency of insulin causes diabetes|
|Pituitary|Called the master gland as it controls the other ductless glands and influences growth and metabolism|
Many who are new to networking and security wonder what it means to have “ports” open on your computer. Some get rather anxious when an online port scan reveals that something’s open on their system. What follows is a silly, but hopefully memorable way for beginners to remember how network ports work.
Houses, Windows, and Midgets
Imagine a house with many, many windows. And imagine that all these windows are spring-loaded so they slam shut when they aren’t being actively held open. Also within the house are a number of midgets. Each one is able to talk to people outside the house temporarily, but only through an open window.
Well, ports on a computer are the windows on the house, and the applications running on your computer are the midgets. And just like our spring-loaded windows, ports are always closed by default. The second an application stops holding one open, it closes.
So when you find open ports on your system, don’t worry about the port itself. It can’t stay open without help. Instead, focus on the reason it’s open, i.e. the application that’s keeping it that way.
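To see this for yourself, here is a minimal Python sketch (the port number is arbitrary): while it runs, a port scanner will report the port as open; the moment it exits, the window slams shut.

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))   # grab a window...
server.listen()                  # ...and hold it open
print("Port 8080 is open; press Ctrl+C to let it slam shut.")
try:
    server.accept()              # block here, keeping the port open
except KeyboardInterrupt:
    server.close()               # the spring-loaded window closes
```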
For that task you can use a program from Foundstone called Fport. It’ll give you the name of the program so you can find it and shut it down. Once you’ve shut down the program the port will be closed again.
Center of the IT Universe: The Network Administrator
If there’s a single job role that can take responsibility for starting the whole IT certification phenomenon, it would have to be that of the network administrator. Starting in the mid-1980s, the introduction of network operating systems made it clear that trained professionals were key players when it came to installing, configuring and maintaining the systems and networks necessary to make these technologies work.
As networking technologies have grown faster and more capable, and the notion of networking has expanded to embrace Internet access, security, switching and all kinds of services, requirements for network administration knowledge and skills have grown apace. What hasn’t changed is this position’s pivotal importance in designing, implementing, maintaining and evolving workable IT infrastructures of all kinds.
It’s also worth noting that the line between the role of a systems administrator (responsible for installing, configuring, maintaining and troubleshooting computer systems) and that of network administrator (someone responsible for installing, configuring, maintaining and troubleshooting computer, voice or converged data and voice networks) can be hard to draw. Just as most professionals who work under the title of systems administrator work with networks, most professionals who work under the title of network administrator work with systems—particularly network servers and the services they so routinely provide. That’s why you’ll see an interesting mix of subject matter as you explore the systems/network administrator’s organizational role.
A systems/network administrator is an individual who understands local networking and network operating system tools, technologies and services thoroughly. That means he or she not only understands how to install, configure, maintain and troubleshoot a wide range of devices and systems, but also knows how to work with wired and wireless network media and equipment. A systems/network administrator must be able to understand and interpret business or organizational needs, goals and objectives, and help to implement and maintain appropriate information technology to help meet them.
Key Knowledge and Skills
Savvy systems/network administrators need:
- A strong working knowledge of desktop and server PCs, including laptops and desktop end-user PCs, as well as departmental and specialized servers. This includes the hardware components that go into such devices, as well as the common operating systems, applications and services that they run.
- A strong working knowledge of Ethernet networking, including fiber and copper versions of 10, 100 and 1,000 Mbps technologies, plus half- and full-duplex implementations.
- A good understanding of wireless LAN and WAN networking technologies, especially various elements of the 802.11 collection, plus some familiarity with emerging WLAN/wireless broadband options.
- A good understanding of the OSI networking model and the various devices that plug in or operate at (or across) various layers.
- A strong working knowledge of TCP/IP addressing, services, tools and utilities, as appropriate to the systems and networks that one must manage. These days, this encompasses a lot of territory, from basic networking protocols and services to session-oriented services and streaming media, as well as basic Web, FTP and other standard IP-based networking services.
- A strong general knowledge of information security policy, procedures and best practices is absolutely essential.
On the platform-specific side, most people end up specializing in one or two areas, though some exceptional individuals may become expert in three or (rarely) more. Here, the bare minimum for platform- or product-specific knowledge consists of being able to install, configure, maintain and troubleshoot:
- Network devices and components in use.
- Desktop and server operating systems in use.
- Key networking services, such as DNS, DHCP, NAT, directory services, network file systems and so forth.
- Key application services, such as databases, Web-based services, e-mail and so on.
This knowledge becomes more complex and specialized as one climbs into more senior roles.
In practical terms, some professionals tend to concentrate more on the network infrastructure side of things and focus more on routers and gateways, remote access, VPNs and other such elements, while others tend to concentrate more on the systems side of things and focus more on servers and their underlying operating systems and the services they provide. Either way, there’s plenty of room to specialize and narrow one’s focus to dig more deeply into specific areas of expertise as one’s seniority, skills and knowledge base increase over time.
A good general computing background is a great place to start a career as a network administrator, with perhaps an associate’s or bachelor’s degree in computer science, management information systems or information technology. Beyond that foundation, certifications provide a first step to help turn wanna-bes into practicing IT professionals (though nothing means more to HR staff or hiring managers than on-the-job experience):
- A+: Ensures basic knowledge of PC components, operating systems, setup, configuration and troubleshooting.
- Network+: Ensures basic knowledge of networking devices, protocols, services and applications, including coverage of setup, configuration and troubleshooting.
- Security+: Ensures basic familiarity with key information security terms and concepts, as well as basic best practices, processes and procedures.
These credentials also are excellent stepping-stones for entry-level personnel ultimately seeking work as network administrators by way of the help desk or technical support operations.
Discussion of targeted systems/network administrator credentials usually requires following some kind of platform or vendor choice to help individuals focus learning in a particular market niche—preferably one where opportunities are ample, and where employment is growing, rather than otherwise. Today, these include:
- Microsoft: Microsoft’s credentials in this area are the Microsoft Certified Systems Administrator (MCSA) and the Microsoft Certified Systems Engineer (MCSE). As the most popular desktop operating system and a leading server operating system, these credentials remain strong in today’s tough market. In fact, Windows Server 2003 credentials are enjoying a strong uptick this year as a result of Microsoft’s transfer of Windows 2000 from its core product line to “extended support.”
- Novell: Novell’s Certified Linux Professional (CLP) and Certified Linux Engineer (CLE) are tied to Novell’s popular and well-recognized SUSE Linux platform. The Certified Novell Administrator (CNA) leads to the Certified Novell Engineer (CNE). The primary focus of these credentials remains NetWare, which is a strong but steadily shrinking market sector.
- Red Hat: Red Hat’s Linux certifications have been around the longest and continue to enjoy name recognition in this Linux portion of the systems/network administrator certification marketplace. The more junior Red Hat Certified Technician (RHCT) credential is more systems-focused, while the more senior (and highly regarded) Red Hat Certified Engineer (RHCE) credential covers a broad mix of system and networking topics. Red Hat also offers a security certification, as well as a credential for architects.
- Cisco: The Cisco Certified Network Associate (CCNA) credential is a stepping-stone to most of Cisco’s professional- and specialty-level technical certifications and commands a population second only to the entry-level Microsoft Certified Professional (MCP) credential. The Cisco Certified Network Professional (CCNP) i
How to implement basic forensic procedures
This article is about the implementation of basic forensic procedures for the security of the network. Forensic science, commonly known as forensics, is the application of science to matters of interest to the legal profession. This branch of science is not restricted to the inspection of murder scenes only; it is applied to any crime scene. Whenever and wherever a crime is committed, the investigator can turn to forensics. So the following article will discuss the implementation of forensics in technology. Computers are the kernel of communication and the tracking and recording of information, and so they are prone to various kinds of crimes, like deliberate data tampering, deletion of data, and various kinds of technical fraud. Computer forensics therefore uses technology to seek computer evidence of a crime. It even attempts to retrieve information that was erased or altered in order to track down the attacker or criminal. With this much knowledge about forensics, it is now time to walk through a step-by-step process for implementing forensics in an investigation.
Order of volatility
Capturing volatile data is the first job of the forensic team. Volatile data lives in registers, caches, peripheral memory, random access memory (RAM), network state, running processes and so on: places that are not static and can be erased or overwritten at any time. Because of its short "shelf life," this data is difficult to capture. Given the risk of losing or corrupting it, a particular order of volatility must be maintained, securing the most fragile data first. Otherwise data loss may happen, and that loss may cause the collapse of the entire investigation, which is not at all intended. The order should be exactly this: registers, cache and peripheral memory first; random access memory (RAM) second; network state third; and running processes last. Maintaining this sequence while handling the volatile elements of a computer system ensures that no data is lost while those elements are being investigated.
Capture system image
Images proving the evidence are always helpful in establishing what actually happened, so the next challenge for the computer forensic team is to capture the system image. To do so, a mirror image backup programme is applied. A mirror image backup is more accurate than a normal copy because it replicates all sectors of the computer hard drive, including hidden data storage areas. The accuracy of mirror image backup programmes is guaranteed by hashing algorithms. The programme creates a snapshot of the current system based on the content of the drive and stores it in the software memory box. This helps in proving that the evidence retrieved is real and not planted, and it acts as a justification of the evidence, showing that the evidence was not fabricated but was picked up from the actual incident. One has to be properly trained to run such a programme in a controlled fashion to maintain the authenticity of the evidence. Mirror image backups are performed using handheld devices, some of which use the Global Positioning System (GPS) to identify the spot of data capture.
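A hedged sketch of the underlying idea follows: copy every block of a source drive while computing a cryptographic fingerprint. The paths are illustrative, and real acquisitions are done through a write blocker with purpose-built tooling.

```python
import hashlib

def image_and_hash(source_path, dest_path, block_size=4 * 1024 * 1024):
    """Copy every block of the source and return its SHA-256 fingerprint."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)     # all sectors, not just visible files
            digest.update(block)
    return digest.hexdigest()    # record this value in the chain of custody

# fingerprint = image_and_hash("/dev/sdb", "evidence/disk01.img")
```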
Network traffic and logs
Most of the time, an unusual surge in data traffic marks the moment when an intruder enters the system to steal or leak data. Hackers intentionally create virtual network nodes through which they can slip into the system to leak, wipe or delete data. Often the offending IP addresses are identical, or most of the suspect traffic comes from a few IP addresses, which is why traffic monitoring is necessary. Monitoring network traffic and the logs of the system's users is therefore the next step for the forensic team. Using appropriate software, the team can retrieve the history of system usage to collect evidence of the crime being committed. Such software keeps a record of the applications running and monitors network bandwidth to check for intruders and unauthorized transfers of data. One commonly used safeguard, a hardware or software firewall, keeps a log of port usage on the computers in the system and filters that usage.
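As a simple illustration of this kind of triage, the sketch below counts requests per source address in a web server log to surface the repeated IP addresses described above. The log path and field layout are assumptions; adjust the parsing to your log format.

```python
from collections import Counter

hits = Counter()
with open("access.log") as log:           # assumed common log format
    for line in log:
        if not line.strip():
            continue
        source_ip = line.split()[0]       # the source IP is the first field
        hits[source_ip] += 1

# The handful of addresses behind most of the traffic float to the top.
for ip, count in hits.most_common(10):
    print(f"{ip:>15}  {count}")
```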
CCTV cameras are now installed everywhere to capture video footage, and this footage helps a great deal in investigations, especially forensic investigations. Every facility in the networking industry can monitor its employees and operations through video capture. The forensic team analyses and scrutinizes this footage to identify discrepancies, which in turn helps them recognize ambiguous actions and operations before, during, and after the crime. Using video stills can improve the investigation considerably, as the procedure gains standalone proof or can even generate a clue that helps the forensic team reach its conclusions.
Record time offset
Recordings are another source that can act as a clue or even point the investigation in the right direction. Often such recordings are deleted or washed away, but keeping them stored at a remote location can prove helpful. The next step is to check the audio portions of the evidence collected from the video footage. To do this, the team applies record time offset analysis to the audio. Plenty of software exists to facilitate this, and the record time offset authenticates and validates the legitimacy of the evidence.
Mapping the data is an important aspect, as it organizes the findings and makes searching easy. The first phase of an investigation is the search session, where clues and evidence are tracked down. The second phase relates the search results and organizes them to trace the source of the incident; the source is then investigated and the clues arranged again. After organizing those clues, what remains is identifying the motive of the culprit. The next step is to take hashes of the system and analyze them. A hash is a function used to map data of arbitrary size to a fixed-size value. By examining the hashes, the team can atomize the evidence and pinpoint the cause and actual source of the ambiguity.
Screenshots of the system are further analyzed by the team. Screenshots are, as is well known, bits of evidence from the crime scene; they provide a clear record of the exact order of actions performed and crimes executed.
An important part of an investigation is proving the case against the criminal. All the proofs and evidence may be matched and the motive of the criminal may be absolutely clear, yet everything can stall for lack of admissible evidence. Gathering evidence is not especially difficult; the trouble is that evidence is the thing most likely to let you down at the crucial moment. Next in line is to seek out witnesses, if any, of the crime. In doing so, the previous steps play a pivotal role: the team can identify the witnesses and take account of their experience, bringing it another step closer in the pursuit of the attacker or criminal.
Track man hours and expense
This step involves applying software to track man hours and their expense. The software helps in calculating the man hours available, the man hours needed, and at what cost. Calculating man hours keeps the entire investigation within budget. Everything in today's market is cost sensitive, and managing that cost requires proper planning; man hour calculation is the key to doing so. Every extra worker and every non-working hour adds cost, which is unwanted and should be curbed or controlled. Cost effectiveness is the area in which every small company sees a chance to grow or excel.
Chain of custody
A chain of custody must be started and maintained by the team as soon as the investigation begins. The chain of custody documents that the evidence was under strict control at all times and that no unauthorized personnel were permitted to access it, reducing the chances of corrupting the evidence. The chain of custody is the record of the whereabouts of the evidence at all times.
It details the serial numbers of the systems involved, the people who handled and had custody of them, and for what span of time. The chain of custody plays a very important role in the legal authentication of the investigation. It is also the process of deciding, link by link, who is to be inspected: until interviews are scheduled one after another, it is very hard to reach final results. So the team must plan the chain at the very beginning, deciding who is to be interrogated or investigated after whom. This ordering depends on whether the information drawn from one source will help make sense of the next, which is what allows the chain to be finalized.
Big data analysis
Big data analysis is the process of accumulating, categorizing and analyzing large data sets. It helps to discover the patterns, metrics, designs and other functional information in the data. This is the last procedure to be followed by the computer forensic team.
To conclude this discussion, it can be said that if the basic forensic procedures are implemented in an orderly and efficient manner, the security and stability of the network can be assured. One must be aware of the basic strategy and applicability of security and investigative measures. For the best investigative results, one needs to learn and adopt forensic knowledge in telecommunications; with a thorough grounding in it, the entire investigation can be planned out easily to reach a solution or final result.
Hence, forensic procedures can teach us many things: how to detect the culprit when an Internet crime is committed, how traces can be followed, and so on. They also give us guidelines that underline how important it is not to leave oneself open to attack, since attacks are not easy to recover from, and they show how some attacks can be avoided and what can be done in defence.
Wishing you could bring back some of your family’s homemade gravy in your carry on after Thanksgiving? Scientists are working on new airport screening technology that aims to identify dangerous liquids from those that aren’t, just like that secret-recipe gravy you’ve been waiting for all year.
New detection technology developed by Los Alamos National Laboratory scientists may provide a breakthrough for screening liquids at airport security. Combining advanced magnetic resonance imaging (MRI) technology with low-power X-ray data has produced a detection innovation that could benefit both airport security and passengers.
Called MagRay, the system’s goal is to quickly and accurately distinguish between liquids that may be safe on a commercial aircraft versus those that are prohibited. For example, white wine and nitromethane, a liquid that can be used in explosives, may appear identical, yet one is highly dangerous. MagRay could potentially provide the key to discerning between the liquids quickly.
“One of the challenges for the screening of liquids in an airport is that, while traditional X-ray based baggage scanners provide high throughput with good resolution of some threats, there is limited sensitivity and selectivity for liquid discrimination,” said Michelle Espy, a Los Alamos National Laboratory physicist and MagRay project leader, in a press release. “While MRI can differentiate liquids, there are a certain class of explosives — those that are complex, homemade or may have mixes of all kinds of stuff — that are more challenging.”
Funded in part by the U.S. Department of Homeland Security's Science and Technology Directorate, the scientists combined advanced MRI technology with X-ray, which unlocked new information that’s not provided by either tool independently.
“We’re looking for where a liquid lies in a sort of three-dimensional space of MRI, proton content and X-ray density,” said Larry Schultz, a MagRay engineer. “With those measures we find that benign liquids and threat liquids separate real nicely in this space, so we can detect them quickly with a very high level of confidence.”
In the following video from the national laboratory, the team explains how the technology operates as well as what the next steps will be in transitioning the system to the private sector.
Conducting Usability Testing
This is the single most critical aspect of ensuring your software or Website is accessible. Developers can test applications on their own by trying to navigate the application either only by ear or by only using a keyboard. This allows you to notice what the screen reader will pick up or skip over. More important, however, is real-world usability testing. While regulations and checklists provide good guidance, soliciting feedback from visually impaired or otherwise disabled users is critical, as they will be more familiar with what works and what doesn't in a real-world situation.
Benefits of providing IT accessibility
While non-federally funded companies aren't legally required to create accessible Websites or applications, doing so is becoming a common practice that not only can avert risk but also have great rewards for today's organizations.
Some large companies have already faced legal action for not taking accessibility into account. In 2006, the National Federation of the Blind (NFB) filed a class action lawsuit against Target, the national retail chain, for offering online-only discounts through its non-508-compliant Website. Its Website had been noticeably less accessible to the blind and visually impaired than its brick-and-mortar stores. As a result, after two years of litigation, Target settled and was ordered to work with the NFB to make its Website accessible.
In addition to avoiding a negative and potentially damaging lawsuit, companies that ensure accessibility in their software applications or their online presence stand to gain market share by reaching new audiences. The disabled community is a significant market. According to the National Organization on Disability, disabled adults control more than $3 trillion in discretionary income worldwide and this number is expected to increase.
The disabled community is also very loyal and tends to support and evangelize companies that provide equal access, resulting in a significant opportunity for organizations to extend their brands while gaining and retaining customers.
Michelle Bagur is a Senior Developer at EffectiveUI where she specializes in accessibility in Rich Internet Applications (RIAs). Michelle began her programming career in the gaming industry in Dallas and later moved to Denver to develop medical simulators utilizing haptic devices and advanced three-dimensional technology. Michelle earned her Master's degree in Integrated Science (a combination of computer science, physics and biology) from the University of Colorado at Denver before moving into the world of RIA development. She particularly enjoys the synthesis of technologies and platforms that RIA development encourages. She can be reached at firstname.lastname@example.org.
Le Bellec F., Center International Of La Recherche
Damas O., Center International Of La Recherche
Boullenger G., Center International Of La Recherche
Vanniere H., Center International Of La Recherche
And 3 more authors.
Acta Horticulturae | Year: 2012
Weed management is an important point of citrus production in Guadeloupe (French West Indies). Orchards are traditionally planted on sloping ground, and the use of herbicides regularly sprayed on the whole farm is the most common practice. Such a practice impacts the environment and the production benefits. Introduction of cover crops in these orchards is an interesting alternative. The primary aim of cover crops is to control weeds, but benefits in terms of erosion control and other services are also expected. This study consists in quantifying the effects of cover crop introduction in mandarin orchards (Citrus reticulata 'Frémont'). The cover crop system is designed to obtain a perennial association and to limit chemical intervention as much as possible. Over the 3-year course of the study, two different modalities were compared: (i) farmer practice, a mandarin orchard with herbicide (glyphosate) every 2 months, and (ii) a mandarin orchard in association with Neonotonia wightii. Despite some difficulties in setting up the system, N. wightii efficiently suppressed weeds after 6 months. No more herbicide was sprayed on the associated plot, while the bare ground plot received 5 herbicide applications a year. Meanwhile, no significant difference in the predawn water potential of the soil was revealed between the N. wightii and bare ground conditions, indicating that the use of N. wightii does not entail additional water supply. After 3 years of experimentation, no impact on the orchard performance of the field has been observed. Experimentation is ongoing to assess the transfer of this new technique to farmers.
In 2014, over 110 million Americans had their personal information stolen by hackers. Homes, businesses, even the most well-equipped institutions are vulnerable to intrusion. This year, more than 25 million personal records, including eye color and Social Security numbers, were hacked from the U.S. government's Office of Personnel Management. It is becoming increasingly vital to protect our data, and this infographic explores the best ways to prevent an attack.
IT is a growing profession, and one that requires skills that women not only possess but oftentimes excel at. And yet women remain a distinct minority within the profession. Why is that, and how can it be changed?
Computing should be an attractive field of study for anyone. Nonetheless, although recent Bureau of Labor Statistics findings show that computing-related jobs are growing at a rate almost double that of all other fields, fewer students are enrolling in computing majors. That trend led the BLS a few years ago to project that by 2018 approximately half of all jobs requiring extensive computing expertise will go begging for lack of qualified IT professionals.
Sometimes, one crisis can resolve another. The looming shortage of IT professionals could spur an increase in the number of women in the profession and finally bring their participation rate in IT to levels consistent with other professions. The current disparity is striking. According to the BLS, in 2012, women held 57% of all professional occupations, but only 26% of professional computing occupations.
To continue reading this article register now | <urn:uuid:aa17d0b7-c79a-4b28-ba1a-0d22689d8613> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2490526/it-careers/it-s-vanishing-women.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00434-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947712 | 211 | 2.875 | 3 |
How Hadoop WorksBy David F. Carr | Posted 2007-08-20 Email Print
Initiative for distributed data processing may give the No. 2 search service some of the "geek cred" it's been lacking.
The Hadoop runtime environment takes into account the fact that when computing jobs are spread across hundreds or thousands of relatively cheap computers, some of those computers are likely to fail in mid-task. So one of the main things Hadoop tries to automate is the process for detecting and correcting for those failures.
A master server within the grid of computers tracks the handoffs of tasks from one computer to another and reassigns tasks, if necessary, when any one of those computers locks up or fails. The same task can also be assigned to multiple computers, with the one that finishes first contributing to the final result (while the computations produced by the laggards get thrown away). This technique turns out to be a good match for massive data analysis challenges like producing an index of the entire Web.
So far, at least, this style of distributed computing is not as central to Yahoo's day-to-day operations as it is said to be at Google. For example, Hadoop has not been integrated into the process for indexing the Web crawl data that feeds the Yahoo search engine—although "that would be the idea" in the long run, Cutting says.
However, Yahoo is analyzing that same Web crawl data and other log files with Hadoop for other purposes, such as market research and product planning.
Where Hadoop comes into play is for ad-hoc analysis of data—answering a question that wasn't necessarily anticipated when the data gathering system was designed. For example, instead of looking for keywords and links, a market researcher might want to comb through the Web crawl data to see how many sites include a Flickr "badge"—the snippet of code used to display thumbnails of recent images posted to the photo sharing service.
From its first experiments with 20-node clusters, Yahoo has tested the system with as many as 2,000 computers working in tandem. Overall, Yahoo has about 10,000 computers running Hadoop, and the largest cluster in production use is 1,600 machines.
"We're confident at this point that we can get fairly linear scaling to several thousand nodes," Baldeschwieler says. "We ran about 10,000 jobs last week. Now, a good number of those come from a small group of people who run a job every minute. But we do have several hundred users."
Although Yahoo had previously created its own systems for distributing work across a grid of computers for specific applications, Hadoop has given Yahoo a generally useful framework for this type of computing, Baldeschwieler says. And while there is nothing simple about running these large grids, Hadoop helps simplify some of the hardest problems.
By itself, Hadoop does nothing to enhance Yahoo's reputation as a technology innovator, since by definition this project is focused on replicating techniques pioneered at Google. But Cutting says that's beside the point. "What open source tends to be most useful for is giving us commodity systems, as opposed to special sauce systems," he says. "And besides, I'm sure we're doing it differently." | <urn:uuid:f7c6b123-78c9-4e17-89e9-d252da7fd861> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Enterprise-Planning/Yahoo-Challenge-to-Google-Has-Roots-in-Open-Source/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00160-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954458 | 683 | 2.5625 | 3 |
Green Computing refers to the efficient use of computer resources. Some of the green computing practices include, Server Virtualization, Power Management, Secured Network to reduce eWaste, Using Recycled Materials, and so on. Though the ultimate goal is to save the planet and the resources for our future generations, let us limit ourselves with the Power Management here.
The launch of the Energy Star program back in 1992 was one of the first manifestations of the green computing movement. Started by the US Environmental Protection Agency, it served as a kind of voluntary label awarded to the products that minimizes the power usage. Energy Star applied to Computer Monitors, Television Sets, Air Conditioners, Refrigerators, etc.
A typical desktop computer comprises of CPU and a monitor. A computer running Intel P4, 2.4 GHz processor consumes 64.6 Watt per hour. Add another 50-100 Watt for the monitor. While the CPU consumes much litter energy when idle, the monitors on the other hand consume the same energy even when the computer is idle.
Let us do the math to calculate the average power consumption of a computer:
This is just for a single computer. As the number of computers increases, the saving on the electricity cost is huge. Just to give an example, for 2000 computers, the annual saving is $22,800.
Also, EPA estimates that the computers are active only 58% of time during working hours. The rest are used in non-computer activities such as phone calls, meetings, lunch, and refreshments. This idle time can be used to save additional power by adopting stringent power schemes to desktops.
Some of the power management techniques that enterprises can adopt are:
Desktop Central helps to standardize the power settings in all the computers of the network at once. It provides the following power saving options:
The Power Management Configuration in Desktop Central allows you to create multiple power schemes that can be deployed to multiple computers. The advanced options allows you to specify what has to be done when the laptop is closed or when the power button is pressed.
The screensavers were originally designed to save the CRT monitors from getting damaged by automatically changing the images on the screen during periods of user inactivity. With the advent of LCD monitors and technology improvements in mordern CRT monitors, screensavers are no longer required to protect monitors from phosphor burn-in. A graphically more intensive screensaver will require more power than the normal screen. While turing off the monitor from the power schemes will help you save power irrespective of whether screesaver is enabled or not, you can use the Desktop Central Registry Configuration to turn off the screensavers in older models that do not support power schemes.
Considering a normal working hours of 40 hrs/week, shutting down the computers during non-office hours will alone let you save a whooping 76% on your energy consumption. Desktop Central allows you to schedule remote shutdown tasks that can be scheduled to run on all weekdays. This helps you to shutdown all the computers during non-working hours. You can create multiple tasks to shutdown groups of computers at different times based on the employee working hours.
Desktop Central gives you a complete control on the target computers to which the configuration has to be deployed. You can create a custom group of users/computers and apply different power schemes. You can also exclude certain computers based on their OS and types from the selected targets.
Whether the intentions are to save the planet or money, enterprises that adopt these energy saving techniques will be benefited. Desktop Central in addition to Power Management offers various features like Software Deployment, Asset Management, Patch Management, Remote Control, Active Directory Reports, User Logon Reports, and Windows System Tools. | <urn:uuid:2b4cb889-d7cb-45be-b24f-f481157dafbd> | CC-MAIN-2017-04 | https://www.manageengine.com/products/desktop-central/power-management.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00160-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912249 | 757 | 3.46875 | 3 |
0.2.6 Boyer-Moore String Searching
The Boyer-Moore searching algorithm, described in R. S. Boyer and
J. S. Moore's 1977 paper A Fast String Searching Algorithm is
among the best ways known for finding a substring in a search space.
Using their method it is possible to search a data space for a known
pattern without having to examine all the characters in the search
space. Boyer-Moore search algorithms are based on two search
The first of these rule tells us how to search for substrings without
repeats in a data space. Keep a pointer into the data space at the
current search location; initialize this pointer to the start of the
space plus n - 1 characters where n is the number of characters in
the target string.
Compare the character in the data space pointed to by this pointer
with the characters in the target string. If this character does not
occur in the target string, advance the pointer by n places.
If the character does occur in the target string, advance the pointer
by n - p places where p is the position that the character in
question first occurs in the target string.
This process repeats until either a match is found or we have shifted
over past the end of the search space.
The second search heuristic applies to searching for targets with
repeating patterns. Using only the rules set forth in the first
heuristic will work for targets with repeating patterns but the search
will not be as efficient as possible. By examining partial matches
and repeats in the target string, though, it is possible to make more
drastic pointer jumps and arrive at the match more rapidly. This type
of jump is based on a table which is computed before the search | <urn:uuid:42ebe6ec-c9e7-42f5-917e-0e345f3090e2> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/alg/node25.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00270-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902717 | 370 | 3.921875 | 4 |
In 2014, the speed at which a website renders is a huge deal. Web speed can have a direct and profound effect on sales revenue and brand reputation.
It is fascinating (and somewhat daunting!) to consider how fast online browsing and processor speeds will become in the next 50 years, and the effect they will have on the online marketplace.
The Future of Computer Speeds
How fast will computers be in the future, you ask? According to Moore’s Law – really darn fast. If the number of discrete elements on a circuit doubles every two years, then a computer’s processing speed roughly doubles every two years as well. While processor speeds stopped conforming to Moore’s Law and started slowing down in 2010, recent developments in quantum computing could catapult processor speeds beyond even what Moore predicted.
In 2010, the fastest commercial processor was IBM’s z196, which ran at 5.2 GHz (or 5.2 billion cycles per second, an almost unfathomable speed). The fastest supercomputer in the world, on the other hand, belonged to China’s Tianhe-1, which ran at 2.5×1015 FLOPS, or 2.5 quadrillion floating point operations per second.
But what about in 2050?
In less than 2 years, IBM’s Sequoia came only at 20 petaFLOPS, an 8x increase in processing power compared to the Tienhe-1 (remember: Moore’s Law only predicts a doubling every two years). And in 2025, a supercomputer fast enough to simulate the human brain in real-time is expected to run 500x faster than the Sequoia. 13 years will have passed between 2012 and 2025, and according to Moore’s Law, computing speeds should have only increased 90.5x.
In other words, while there may be a lull in commercial processor speeds (which have to account for market needs), cutting edge processor speeds have already begun increasing faster than Moore’s Law accounted for. By 2050, a supercomputer operating at 1025 FLOPS could be a reality.
This is equivalent to a computer with the full processing power of 1 million human brains.
It is a definite future possibility (though many would say reality) that computers will be “smarter” than humans.
The Future of Online Speeds
While it is harder to predict Internet speeds with the same amount of accuracy that we can predict processor speeds, it is safe to say that online speeds will increase at an even faster rate.
It’s important to understand that commercial Internet speeds are artificially reduced by providers so that they may deliver on promised speed to all of their subscribers. Actual peak download speeds are much, much faster – almost ridiculously so.
For example, the average data transmission speed in the US in 2009 was 1.7 Mbps (megabits per second).
At the same time, Bell Labs broke a transmission record with 100 Pbps (petabits per second), which is almost 100 billion times faster. That’s fast enough to download 400 DVDs worth of data each second.
So what will peak transmission speed look like in 2015?
With improved laser technology and an increased worldwide distribution of servers, Bell Labs could feasibly download every website that has ever existed on the Internet in a few seconds (if even).
Fortunately, commercial Internet speeds will remain artificially slower, and website pages will probably continue to grow larger in size. Which, of course, means that website loading time will remain a priority for developers. | <urn:uuid:29d8cc08-0570-4588-bb6f-a5dc97404a56> | CC-MAIN-2017-04 | https://www.apicasystem.com/blog/future-computing-website-speed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934598 | 730 | 2.890625 | 3 |
It’s easy to think geographic-based disasters are the most common issue to guard against. Often, geographic-based disasters like hurricanes and tornados have seasons, and they can be protected against on a schedule. Some even come with advance warning thanks to the National Weather Service. All of this is easy to plan for because with advance warning you can be in control, even when you’re not.
But, how do you guard against human error?
There’s no prediction center for someone pushing an old code that causes a disruption for your main revenue application. There’s no season for someone stealing your employee’s laptop (which wasn’t password protected).
Based on how often companies plan for geographic-based disasters compared to human-error based disasters, which do you think occurs most often?
In fact, it’s not geographic-based disasters, but more often human error that causes a problem. Human error can account for major disasters, too. In a 2011 blog from BetterRiskManagement.com, the article cites this story from 1991 when an AT&T switching center in mid-town Manhattan had a complete outage when the site tried to switch from commercial power to generator power. The switch wasn’t completed properly and the site went dark 8 hours later. These human errors knocked out New York’s air traffic control system, stopped wire transfers, shut down banks and impacted the stock market.
Sometimes human error can combine with technological disasters, too, which cause problems to multiply. In an often-told story about Pixar’s experience with just such an outage, the world nearly lost the movie Toy Story 2. David Spigelman tells the story that could happen to anyone on WorkingNets.com.
In the article Spigelman tells how due to an individual executing incorrect commands on the servers where the movie files were stored, the files were all deleted in a matter of seconds. Now, that’s the human error element to the story. Spigelman also shares the technological failure piece, stating that while Pixar had a backup system in place, they had been failing for a month and no one had noticed. Here, too, is a bit of human error sprinkled in with the technological failures. Pixar was saved because someone had the entire movie stored on their home computer due to the fact they had been working from home recently.
This story illustrates how easy it is for technological and human errors to impact your environment. You won’t get advance warning, so how do you guard against it?
Recovery-as-a-Service provider Bluelock has worked with partners to develop a cloud-based disaster recovery solution that may not be able to prevent human or technological error, but it will help bring you back from one faster and more easily than ever before.
To find out how, including seeing a demo of how to protect your business from the threat of human error, watch the replay this technical deep-dive webinar.
“Deep Dive into Disaster Recovery in the Cloud” is aimed at showcasing the latest cloud-based DR solutions and their use cases. In this one-hour session Bluelock CTO Pat O’Day and Solutions Architect Jake Robinson cover how cloud-based RaaS actually works and run three demonstrations.
Viewers will learn how to seed and migrate data to the cloud, how to set RTO and RPO policies, how to recover an application remotely across or across the country. They will also explain how to test and declare, how to assess workloads and size your project and how to budget for cloud-based DR. | <urn:uuid:51b51ea3-144d-4a65-b697-01ef38d5342e> | CC-MAIN-2017-04 | https://www.bluelock.com/blog/disaster-recovery-for-human-error-whats-your-plan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00508-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950126 | 746 | 2.59375 | 3 |
A program that secretly and maliciously inserts unwanted code into program or data files on a system. It spreads by integrating such code into more files each time an infected program is run.
Once detected, the F-Secure security product will automatically disinfect the suspect file by either deleting it or renaming it.
More scanning & removal options
You can refer to General Removal Instructions for a simple guide on how to remove harmful programs.
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for further assistance.
A malicious program that integrates its own code into a program or file (referred to as the host file) on a computer system, without the knowledge or consent of the user. Viruses spread by infecting other files on a system each time an infected one is run; in extreme cases, after multiple affected files are run, all susceptible files on a system may be infected.
The addition of unwanted virus code into a targeted file usually causes some form of damage to it, leading to instability or total malfunction.
Viruses were once the most common type of malicious program and there are literally hundreds of thousands of viruses in the wild. These viruses are designed to attack various targets on the computer, including:
- Executable or data files, such as applications, games and documents (a file virus)
- The separate, critical boot sector of an operating system, which holds instructions for starting the computer (a boot virus)
- The separate macro scripts used in programs to perform automated functions (a macro virus)
And much more.
A virus almost always arrives on a computer system as an executable file, most popularly as an e-mail attachment. Some viruses are spread as part of a Trojan's payload. Other common ways viruses are spread are through removable media such as floppy disks, CDs or USB thumb drives.
Description Created: 2011-10-18 15:00:00.0
Description Last Modified: 2011-10-18 15:00:00.0 | <urn:uuid:0975350e-e6f5-4d2b-b416-fc7c4da92683> | CC-MAIN-2017-04 | https://www.f-secure.com/v-descs/virus.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00140-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928584 | 433 | 3.15625 | 3 |
As everything around us becomes connected to the Internet, from cars to thermometers to the stuff inside our mobile phones, technologists are confronting a tough new challenge: How does a machine verify the identity of a human being?
In Redwood City, Calif., a start-up called OneID is offering a single sign-on for a variety of Web sites and devices. In a video, an engineer at OneID demonstrated how he used it to open his garage door at home. Jim Fenton, an engineer with OneID, demonstrated how to open a garage door using his company’s technology.
“The Achilles’ heel of the Internet of things is, how do you secure access to all these things?” said the engineer, Jim Fenton. “If you connect all these things to the Internet you need to have good ways — good from a security standpoint and a convenience standpoint — good ways to control access to things. Having user names and passwords is not a good solution for every device.”
Trouble is, not very many things — online or off — have yet adopted the OneID system, which means Mr. Fenton must still use a lot of user names and passwords. He keeps them in a couple of password managers on his computer, along with an encrypted USB stick. “It’s not fun,” he said.
Read the full article at: http://bits.blogs.nytimes.com/2013/09/10/beyond-passwords-new-tools-to-i...Back to all News | <urn:uuid:4e8c70e3-98f5-4940-a362-fd5f5fea58fb> | CC-MAIN-2017-04 | http://www.northbridge.com/beyond-passwords-new-tools-identify-humans | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00445-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943408 | 325 | 3.03125 | 3 |
According to the 2014 Future of the Internet canvassing, Internet analysts and other respondents answered questions in a manner that suggests concerns that in the next decade, there could be changes that alter the way the Internet works.
The Pew Research Center Internet Project and Elon University’s Imagining the Internet Center conducted the online canvassing, inviting more than 12,000 experts identified by Pew, Internet analysts, and interested members of the public to share their opinions of the likely future of the Internet. The 2,551 respondents answered one or more of eight questions on the survey.
According to an article from the Pew Research Internet Project, most who responded during the canvassing say their hope is that in 2025 there will not be significant changes to the way people use the Internet to share and access content — other than new ways to connect through technological advances. But there were trends revealed in the canvassing that have the potential to impact the way the Internet works. The trends and some of the comments received during the canvassing follow:
1. Actions by nation-states to maintain security and political control will lead to more blocking, filtering, segmentation, and balkanization of the Internet.
“The pressures to balkanize the global Internet will continue and create new uncertainties,” said Paul Saffo, managing director at Discern Analytics and consulting associate professor at Stanford University. “Governments will become more skilled at blocking access to unwelcome sites.”
2. Trust will evaporate in the wake of revelations about government and corporate surveillance and likely greater surveillance in the future.
Danah Boyd, research scientist for Microsoft, stated, “Because of governance issues (and the international implications of the NSA reveals), data sharing will get geographically fragmented in challenging ways. The next few years are going to be about control.”
3. Commercial pressures affecting everything from Internet architecture to the flow of information will endanger the open structure of online life.
PJ Rey, a PhD candidate in sociology at the University of Maryland, wrote, “It is very possible we will see the principle of Net neutrality undermined. In a political paradigm where money equals political speech so much hinges on how much ISPs and content providers are willing and able to spend on defending their competing interests. Unfortunately, the interests of everyday users count for very little.”
4. Efforts to fix the TMI (too much information) problem might over-compensate and actually thwart content sharing.
Joel Halpern, a distinguished engineer at Ericsson, wrote, “While there are pressures to constrain information sharing (from governments and from traditional content sources), the trend towards making information more widely and easily reached, consumed, modified, and redistributed is likely to continue in 2025 … The biggest challenge is likely to be the problem of finding interesting and meaningful content when you want it. While this is particularly important when you are looking for scientific or medical information, it is equally applicable when looking for restaurants, music, or other things that are matters of taste. While Big Data analysis has the promise of helping this, there are many limitations and risks (including mismatched incentives) with those tools.”
While it is desirable to subject the Internet to the rule of law, according to an eWeek article, care must be taken to ensure that the free flow of information continues unabated on the Internet. | <urn:uuid:2d2e7ff0-3087-401a-9e94-ab95c837be04> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/pew-online-canvassing-reveals-potential-impacts-to-internet-use-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940476 | 687 | 2.546875 | 3 |
NASA this week rolled out a video game that lets would-be network executives design and build a giant virtual ground and space communications system that would keep astronauts in orbit and scientists on the ground in touch.
Developed by the Information Technology Office at NASA's Ames Research Center NetworKing lets players build fast and efficient communication networks by first setting up command stations around the world and then linking them to orbiting satellites and space telescopes. Resources are earned throughout the game as players continue to acquire more clients.
More space news: 8 surprising hunks of space gear that returned to Earth
According to NASA, players can strategically use accumulated resources to enhance and increase their networks' capabilities. Players with the most integrated communications networks will have the ability to acquire more complex clients, such as the International Space Station, Hubble Space Telescope and the Kepler mission.
NASA said the key objectives to the game include: Building a Near Earth Network; deploying a constellation of geosynchronous relay satellites to support LEO missions that require continuous coverage. Gamers can also build a Deep Space Network that supports interplanetary spacecraft missions with powerful antennas. Players can then manage and improve the networks by monitoring network usage, dealing with disasters and upgrading the system with advanced network technology.
NetworKing is available to the public for play on the NASA 3D Resources website. Players can access the game using an Internet browser. It can be downloaded and run on both a PC and Macintosh operating system. To play the NetworKing game go HERE.
To play the game in your browser, follow the link below or click on the button at the top of the page. If you're trying the game for the first time, you'll see a link to download and install the necessary plug-in for your browser, NASA said.
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:c391fb55-ea2d-445d-8ce5-b91b30392412> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2220856/lan-wan/nasa-video-game-lets-you-build--run-complex-space-network.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921978 | 374 | 2.875 | 3 |
Putting sensitive data in email messages or cloud storage should give you the heebie-jeebies, but a good dose of cryptography can give you peace of mind. Pretty Good Privacy (PGP) or its open-source implementation, OpenPGP, is the gold standard of encryption online, and when used properly, has the potential to thwart even the likes of the NSA.
Encryption solutions like BitLocker and DiskCryptor don't secure email messages or files in the cloud. OpenPGP's industrial-strength encryption can ensure secure delivery of files and messages, as well as provide verification of who created or sent the message using a process called digital signing.
Using OpenPGP for communication requires participation by both the sender and recipient. You can also use OpenPGP to secure sensitive files when they're stored in vulnerable places like mobile devices or in the cloud.
The trade-off for all this protection is that it's a little more complicated to use. Follow these steps to get started.
The OpenPGP-compatible Windows program we'll use is gpg4win (GNU Privacy Guard for Windows).
First, download and run the setup program. When prompted for which components you'd like to install, include the GPA (GNU Privacy Assistant) component in addition to others that are chosen by default. GPA is the program I recommend for managing your encryption keys, which I also cover in this article.
You'll need to install gpg4win on all the computers you think you'll have to encrypt or decrypt your files on.
Creating the OpenPGP keys
To use OpenPGP, you have to generate at least two keys: a public key and a private key. Keys are just very small files containing encrypted text. Your public key can be handed out to anyone to send you an encrypted message or file. Your private key is passphrase-protected, and is required to decrypt the message or file.
To create your keys, open GPA and click Keys > New Key..., enter your name, and click Forward.
Next, enter your email address and click Forward.
If you'd like to back up your key pair (highly recommended), select Create backup copy.
If you lose your private key or forget the passphrase, you'll be toast! You won't be able to decrypt any messages or files that require your private key. Additionally, if your private key and passphrase are compromised, the attacker will have access to everything you've encrypted.
Consider backing up the key pair onto a flash drive, and storing it somewhere safe. Treat your private key file like a digital Social Security card: Never store it in the cloud or on the storage of an internet-connected computer or device.
Once the certificate is created, you can choose a location to back up the key pair.
Finally, you'll be prompted to enter a passphrase for your private key. Use a strong, long and mixed-character passphrase, and never use words that are in a dictionary.
Exporting or distributing your public keys
Once you've generated your key pair, you can export and distribute the public key to receive encrypted messages and files from others. Just right-click the key in GPA, select Export Keys, and save.
You can include your public key in your email signature or publicize it on your blog or website. You can distribute the file or just the plain text that you see when you open the file in a text editor.
If you'd like the public to find and download you public key on a public server, right-click your key and select Send Keys.
Importing PGP keys
You may want to import the public or private keys to another PC or device.
Remember, the private key is very sensitive. Import it only to computers and devices that will need it to decrypt files. Conversely, feel free to load your public key onto any device that you'll need to encrypt files on.
To send encrypted messages or files to friends that use PGP, you'll have to import their public keys onto your desired PCs or devices.
To import a public key in text format, you can copy the entire raw key block--including the beginning and ending labels and dashes--and paste it into the GPA application.
Importing keys to GNU Privacy Assistant (GPA) on Windows
To import a key, open GPA and click Import. Next, browse to and select the desired key, and click Open.
Importing keys to Android Privacy Guard (APG)
To import keys to APG in Android, copy the key file or raw key text onto the device.
When importing your private key, use a secure method, like connecting your device to your computer via USB or using an OTG cable to attach a USB stick with your key pair. Don't email yourself your private key. Just don't do it.
Open the APG app, tap the key icon in the upper left to open the menu, and tap Import Keys. If you're looking for a public key, you can search public servers. Otherwise, select the drop-down menu on top to import a key from a file, QR Code, clipboard, or NFC.
Once the key is loaded, tap Import selected keys.
Now that your keys are ready where you need them, here's how to encrypt and decrypt your messages and files.
Encrypting and decrypting files in Windows
When you install gpg4win, it installs an extension in the Windows Explorer shell that lets you encrypt one or more files or folders on your system with a right-click. Files will be added to a TAR archive file and compressed before they are encrypted.
Encrypting with gpg4win
To begin, right-click your selection and select Sign and Encrypt. The 'sign' part confirms for the recipient that you're the one who encrypted the file. Check Remove unencrypted original file when done if you want the original files to be removed. Click Next to continue.
Select the recipients' public keys, click the Add button to put them on the list, and click Next. You may want to add yourself as well, so you can decrypt the file if needed.
If you selected to sign (or sign and encrypt) the file, next you need to select which private key you'd like to sign the file with, if there's more than one installed on the PC. Click Next and you'll have to enter that private key's passphrase as well.
Once you're done, you'll have an encrypted file with extension .gpg that you can email or send to others.
Decrypting with gpg4win
To decrypt files using gpg4win, right-click the encrypted file and select Decrypt and Verify.
The first two options (related to the signature and archive) should automatically default to the correct configuration. You can also choose to save the decrypted file to another location.
Click the Decrypt/Verify button and enter the passphrase for your private key.
Encrypting and decrypting files on Android
Once you get a hang of encrypting and decrypting on a PC, you'll have no problem doing it on your Android device.
Encrypting with APG
First, open the APG app, tap the key icon in the upper left to open the menu, and tap Encrypt.
For enhanced security, select the Sign option, if you have your private key imported on the device.
Tap the Select button so you can specify the certificates of those whom you've chosen to decrypt the file. From there, you can type a text-based message or tap the arrow to select file encryption. Tap the Show advanced settings to set compression and other settings.
When you're ready, tap Encrypt File (if encrypting a file) or Share with (if encrypting a message) to access the Android's native sharing options. You can tap Clipboard to paste it into another app.
Decrypting with APG
To decrypt a file with APG, tap the key icon in the upper left to open the menu, and tap Decrypt.
If APG detects you've previously copied an encrypted message from any app, it will automatically try to decrypt it. To decrypt a text-based message using a raw block, paste it into the message box.
To decrypt a file, tap the arrow to select File decryption, tap the folder icon to browse for the file. You can choose to delete the encrypted copy when you're done by selecting Delete After Decryption.
Tap the Decrypt button and enter the passphrase for the private key that the file was encrypted for.
Encrypting and decrypting email messages in Windows
There are a few email clients that offer OpenPGP-compatible add-ons. If your email program doesn't have such a feature, you can still encrypt and decrypt messages manually--inside of files, or using the Clipboard feature of GPA or a similar feature on mobile apps like APG.
When using the Clipboard feature of GPA, you can generate encrypted text messages that you can then paste inside emails, instant messages, or other forms of communication.
Open GPA and click the Clipboard button. Type or paste in the text you'd like to encrypt and then click Encrypt. Next, choose the certificates of those whom you'd like to decrypt the file. Then you can distribute the entire raw message block, including the beginning and ending labels and dashes.
To decrypt, paste in the entire raw message block including the beginning and ending labels and dashes. You'll be prompted for the passphrase of the private key associated with the message.
While OpenPGP isn't quite "set it and forget it" technology, it is very effective--so effective, in fact, that instead of trying to crack the encryption, some government agencies have resorted to issuing subpoenas for private keys and passwords.
While this tutorial doesn't provide you with an NSA-defeating level of protection (you still have much to learn, grasshopper), you now have the basics for keeping your information private from most casual attacks.
This story, "How to use OpenPGP to encrypt your email messages and files in the cloud" was originally published by PCWorld. | <urn:uuid:5908f6b5-7f56-4e24-8fdc-52d6ded8dc20> | CC-MAIN-2017-04 | http://www.itworld.com/article/2693923/security/how-to-use-openpgp-to-encrypt-your-email-messages-and-files-in-the-cloud.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00261-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902796 | 2,113 | 2.59375 | 3 |
The Cloudy World of Passwords
"... it is time for on-line service providers to start adopting identity authentication systems that are based on one-time passwords or passcodes.”
- Stephen Howes
Recent reports have highlighted the risks and flaws of static passwords and have suggested practical ways to improve password security and reduce the likelihood of a security breach. Suggestions have included changing passwords on a regular basis (e.g. every 30 days), using combinations of numbers and letters and mixing upper and lower case characters. However, these suggestions are really trying to make the best of a system that is fundamentally flawed, and I would say that such advice is comparable to proposing how to arrange the deckchairs on the Titanic as it sails full-steam towards the iceberg.
Static passwords have increasingly become the subject of a variety of malicious attacks, including shoulder-surfing, key-logging, screen-scraping and brute force ‘dictionary' attacks. The cyber-criminals responsible for these kinds of attacks are constantly adapting and updating their methods and, as the number of users of online services continues to rise, now really is the right time for individuals and organisations to embrace authentication methods that offer better security and improved ease of use. From recent phishing attacks targeting Twitter and Gmail to the news in February 2010 that Cambridge University scientists found a fundamental security flaw with the popular ‘chip and PIN' system, every week seems to throw up yet another story proving that static passwords and PINs are past their sell by date.
With cloud computing-based services becoming the norm in today's online world, and increasing amounts of data moving into the cloud, it is time for on-line service providers to start adopting identity authentication systems that are based on one-time passwords or passcodes. While it may not be possible to completely eradicate all phishing or other hacking attacks with a single solution, one-time password methods are generally more robust and have been proven to dramatically reduce this problem. They can also, depending on the method chosen, be cheaper than legacy password systems and can improve the customer experience of the web site in question. So by making this relatively simple and cost-effective change, organisations can reduce the number of potentially embarrassing security breaches while also saving money and improving customer satisfaction.
Gridsure is exhibiting at Infosecurity Europe 2010, the No. 1 industry event in Europe held on 27th – 29th April in its new venue Earl's Court, London. The event provides an unrivalled free education programme, exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk | <urn:uuid:19e47356-9a18-4f44-9a17-a0b98fdebe9e> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/cloudy-world-passwords | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00041-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942066 | 542 | 2.578125 | 3 |
I often hear people say things like, "standards are slipping" - maybe they are talking about our politicians! But that's not where I'm going today; I just had a few random thoughts on standards, per this definition:
- standard (n): a specification for hardware or software that is both widely used and accepted (de facto) or is sanctioned by a standards organisation (de jure)
We're all familiar with the British Standards' kite mark and the American National Standards Institute (ANSI). Internationally, there is the Internet Engineering Task Force (IETF - maybe you could quibble on that one as its RFCs are more akin to recommendations; but let's not deviate from our standard interpretation), the International Telecommunications Union (ITU-T) or the 3rd Generation Partnership Project (3GPP) or...
Never mind that the list of standards bodies goes on and on, my first thought was, "Isn't it maddening that their collected output is seemingly endless?" The standard response is "yes", by the way.
Don't get me wrong, it's certainly a good idea that the graphics card you buy to upgrade your computer is built to the PCI electro-mechanical standard. You want it to slot in snugly, regardless of whether you bought it on eBay or from your local RadioShack. There might be a bewildering array of options for full-length, half-size, half-height, low-profile, etc. but the point is, it fits, because it's been standardised, which is good.
So much for hardware; then there was software.
The need for a standard specification is suggested by recalling the not too distant past, when everyone queued up to criticise Cisco for its so called 'Skinny' interpretation of SIP. Standards are good; right.
VoiceXML (VXML) is a standard that gets a mixed reception, depending to whom you're talking. Some engineers say things like, "It's not that easy or intuitive for a novice to grasp. It'd be just as quick to use a general purpose, high-level language."
Advocates of VXML as a standard for specifying interactive voice dialogues between (wo)man and interactive voice response (IVR) machines would surely take a different view. And, to be provocative, I suggest that Call Control eXtensible Markup Language (CCXML) was an afterthought - or as Homer said, "Oh, so they have Internet on computers now!"
The problem with such standards is that they are so niche. Nobody writes a business application entirely in VXML (or indeed, in MSML, MSCML, MGCP, MEGACO, ...). Quite likely, such applications are written in a general purpose, high-level language. These are de facto standards, such as Python, Ruby or C#.
That brings me to media servers and the argument for using niche standards to control them. I suggest application portability is a myth. A well structured, high-level API for voice (and video) may be proprietary, however, it is an arguably better option than using a niche standard, particularly if the API library uses the same language in which you write your business application.
Incidentally, the Battle of the Standard was fought on Cowton Moor, near Northallerton, on the 22nd of August, 1138. Historians on either side have since claimed at least a moral victory for their sponsors. So, it seems that standards never change. | <urn:uuid:5b971420-577f-40b8-9e43-d671c3068611> | CC-MAIN-2017-04 | http://blog.aculab.com/2011/05/telecoms-standards-are-they-slipping.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00435-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956144 | 734 | 2.53125 | 3 |
Overview of Malware and its types
Malware is the type of virus which is specifically designed to destroy the data. It can come in various forms and hence it can perform many actions within the data and on it. For example it can completely destroy data, can send the data automatically to some other place, can alter the data or can keep monitoring it till the specified time period. It can come in many forms and one should take some appropriate steps to prevent that data destruction tool. There are numbers of software's now which are available in the market and they can be sued very effectively against these attacks. Following are some of the malware types which one might face in their computing machines;
This software is basically the advertising supporting software. It is a package which comes automatically with the advertisements inside. Hence it can generate some good revenue for the owner and the author. These advertisements which are shown might be the software's user interface or they can even be the screen which would be presented to one while the installation is being done. The built in functions might be designed in some way so that they can analyse that which of the internet sites are being visits by one and this thing mostly leads to the unwanted advertisement displays and many people complaint about it. The software's which legitimated ones are, these functions of the advertising are integrated intentionally and they come with the bundle. It is usually also considered as the way in which developer can cover the cost of creating that software. These days, this term adware has been strongly associated with the malware to describe it as a form of it. It is basically something that shows the unwanted advertisements. The advertisement which are produced by the adware are sometimes also shown as the pop ups. When the term is called in such way, the severity may vary. So, one should use the pop blockers to protect himself from such malware.
The computer virus is the type of malware program which acts in interesting way. This program, which is executed, replicates itself by putting in some of the copies of itself in some other computer programs, boot sector, data files, hard disk etc. when the replication process is done, then the areas which are affected are said to be the infected ones. Normally, the viruses are built to perform some of the harmful activities on the hosts when they get infected. They can steal the CPU time or even the space at the hard disk. They can also corrupt the data and can put some funny messages on the screen of user. Their keystrokes can be recorded and even the contacts can be spammed as well. There is some miss-conception in the mind of many that viruses can only be used for destruction of data, but it is not the case. Actually everything which can replicate itself, and is designed to do so, is the cirrus. Hence one would install the anti-virus which can take care of this problem well.
As the name implies, this software is basically used for the gathering of information about some organization or person. That information is gathered without anyone letting know that the information is being fathered from their system. This software's helps sending the information to some other entity which can gather the data about what one is doing and it is done without the consent. That's why, it is known as the spyware. It is mostly brought down in 4 types. One of them is the system monitors, other is the Trojans, tracking cookie and the fourth one is the adware. The common purpose, for which he spyware is used, is to track the movements of the internet users on the websites. They also serve some pop ups to the internet users.
This malwares is actually the non-self-replicating one. It contains some malicious code which carries out some actions which are determined by the nature of that specific Trojan. That happens upon the execution only. The result of the action is normally the data lose. And it can also harm he system in some ways. This term, has been derived from the story of the troy. They used the wooden horse. The reason why Trojans are given this name is that they often take form of some social engineering and hence can present themselves as if they are so useful and are so interesting. Then they get themselves installed at the victim's computer by manipulating them.
The rootkits are typically malicious ones. They are the stealth type of the software. They are designed in some special way that they can actually hide themselves very well and it is pretty difficult to get them detected in the system. The normal methods of detection don't work on them. Hence they can have some continued access to the computer. When someone hears the word rootkit, it means that it is software which has some negative credibility and it should not be staying on the computer. The bad thing about this is that it can get installed on the computer automatically and the attacker doesn't need any permission for that. Once they have installed it into the system, they can get access to the root or the administrator access. This access is gained through the successful attack. Once it is installed, it becomes quite possible to hide this virus in the system and hence the privileged access can be maintained easily. Having the full fledge control over the system can result in the modification of software, hence, the software might also be used for the detection and that one can be altered as well. The worst thing I, that the rootkit is something difficult thing. The reason is, one might be able to change the rootkit and can get access to the software which is sued for the detection of that rootkit. Detection method which can contain the several techniques like the usage of some trusted OS, the dumping of the memory, the signature scanning etc. can be used. The removal of these rootkits can become pretty much difficult or partially possible. This specially happens in the time when the kernel is the host for those rootkits. So, if that happens, then the reinstalling of that specific OS is the only solution which can help one getting rid of the problem.
Backdoor, actually is the thing which enables the attacker to get passed through the authentication which is normal. Hence one can get some illegal access towards the computer. This can also result into the access to the plain texts. The backdoor may have many forms. One of them can be the form of some installed program. Hence it can also enter and can destroy the system in the form of rootkit. If the user doesn't change some of the default passwords, then it is quite possible that they can act as the function of the backdoors. Also, the debugging features which we use can also be used as the backdoors. The backdoor might take another form of the user which is hard coded. It can give some combination of it and the password which can give away some access towards the system. A famous example which has been used for this is the plot device which was used in the old movies. Another great thing is that one can create the backdoor even without getting the source code of the program changed. Also, it can get modified after it has been completed. This thing is actually done by re writing of that compiler. Hence, it can make the recognition of the code during the compiling which triggers the backdoor's inclusion in the output of compiling.
This thing is basically some piece of the code which has been inserted into the software and the system intentionally. It can help setting off some malicious function. It is done when there are some specified conditions and they are met. Like, a programmer might like to hide some code piece which can start the deleting of files. Those files should never get out of the company's database. There are some of the software's which are actually malicious and it is in their genes. They can be the worms or the viruses. They get executed in this case, after some defined situations are put up there and hence they pop up. This method is actually used by some worm or the virus, which can get some maximum amount of the momentum and can spread really fast. Some attackers can make that attack on some specified date, like on the April 1st. so, the Trojans which can get activated at some dates which are already specified, are known as the time bombs. So one can install some anti-viruses which can enable one to not to get them installed and control the situation if it is going out of the hand. The best possible situation is to reformat the windows.
The botnet is classically an internet program which is connected through internet and it can help one communicate with the other same type of programs so that some takes can be performed. They can be same as the keeping control of some IRC, which is the internet related charts. Also, it can be utilized for sending out some of the spam emails or to participate in some distribution of the denial of services attacks. Botnet is the word which is made up from the two words, the network and robot. This name is normally used with some negative or the malicious connotations.
One might not like this malware. This is the malware which is used for some restricting the access to a computer system. That computer system is normally the infected one. As the name gives a hint, it requires some ransom which has to be paid to its creator so that the restriction can be removed and one can get the control of the computer back. Also, there are the forms of this programs which can encrypt the files which are on hard drive or some of them can end up with simply showing some message about payment of money to the one who has implemented this program.
In the language of computer, the polymorphic code is the thing which uses the polymorphic engine. It helps mutating while maintaining real algorithm in intact. The code is changed itself whenever it is ran through, but, there is the thing that the function which is associated with it would not get changed. Like, the numbers 1+4 and 20-15 would be producing the same type of results but they would be using some different codes. Also, this is the technique which is used by the worms, shell codes and some viruses as well. So, protection from this is very important for a person.
This is one type of the virus. There are many of the codes which are used it in and hence it becomes quite difficult to get it detected. There are many tricks that this virus has under its sleeves. One of them is that the virus can be located at some other location but it would act as if it is located at some other location. It makes it become somehow, difficult to get detected and get removed. Hence it works like some armour. That's why it is given name of the armoured virus. Protection against it can be pretty difficult so one might think about reinstallation of windows.
Hence, there are many of the malware types which can destroy the computer's system and the data. One must keep a close eye on any weird activity and should run and antivirus regularly so he can keep some check on the viruses. Also, if the worst case scenario is seen, one might consider some other alternatives too which can be the reinstallation of windows which can surely guarantee the complete removal of the all malwares and threats. | <urn:uuid:14ab5de0-44c1-4d2e-8ad8-78b46d7c4637> | CC-MAIN-2017-04 | https://www.examcollection.com/certification-training/security-plus-overview-of-malware-and-its-types.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00005-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.971558 | 2,283 | 3.265625 | 3 |
Cisco Switches and Routers running the Internet Operating System (IOS) have many things in common. Configuring these devices of course, is a skill that is sharpened the more you touch the device. During this post, our discussion will primarily focus on the basic commands associated with console and telnet access to routers and switches for out-of-band and in-band management.
First the routers and switches can be configured with an out-of-band interface called the console port. (I’m pretty sure many of you are familiar with the console port and that you have to connect a rollover cable with a DB 9 or 25 pin connector, to that interface to gain access.) The console port is important because system messages are displayed by default and access to this interface is always up (though it may be password protected). The Auxiliary port is another type of out-of-band management interface but it interfaces with a modem. Console and Auxiliary ports are a special type of EIA 232 (RS 232 is the older term for this) that normally uses an RJ 45 connector. However the auxiliary port uses RTS and CTS for flow Control with the modem. These pins (1 and 8) on the console port are disabled.
Configuring the console port and auxiliary ports are normally a basic procedure. The global commands for displaying the default for these ports are line con 0, line aux 0. On these lines it’s possible to add a simple layer of security by adding a line password to these ports. As shown in the example, you can configure a simple password.
The example also show the exec-timeout 0 0 command. This means that on the console port will never timeout. The default for this command is exec-timeout 10 0 which means that the console line will logoff when inactivity at this line reaches 10 minutes.
Other things commands shown with the command above help the display of the terminal access when someone is logged in. Logging synchronous means that system messages will never interrupt your terminal display. This command prevents you from being distracted when many system messages pop up when interfaces come on or off line, or router neighbor relationships are built or torn down. Also the length 40 means that 40 lines will be displayed at a time when a show command is issued. The example below displays the terminal length 5 command which is another form of the length command. Here you can see it only displays five lines at a time. If the terminal length is set to zero, then every show command will be displays from beginning to end without stopping.
Second, configuration can be done through the network via a web based interface like Cisco SDM or though Telnet or SSH. The HTTP and HTTPS must be enabled for SDM to work on a router. For telnet and SSH to access the router or switch must be reachable via and ipv4 or ipv6 address and you must configure the VTY (Virtual Teletype) lines. When configuring line vty 0 4, Login and password commands are necessary for telnet. Trying to configure these commands without the other can result in these messages when trying to telnet or configuring. (In a future post SSH and SDM will be discussed.)
Above you can see that Example 3 and 4 both display a message that says the login process will be disabled until the password is set. Example 5 shows what will happen when someone tries to telnet to that router and it is connection being closed. It is also possible to configure more VTY lines than the default, though this is normally done in environments where terminal access is the overall goal.
These are basic terminal commands and settings. For the CCNA these skills must be well understood and under your belt. Using these commands can help any network administer when managing routers and switches.
Author: Jason Wyatte | <urn:uuid:15399d37-d565-473c-8c7f-dba6b0ef7e50> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/06/02/basic-cisco-terminal-access/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903194 | 782 | 2.765625 | 3 |
When's the last time you looked up from your smartphone and spoke to the person sitting next to you? For many of us, it's been awhile.
Mobile technology is having a huge impact on our lives and our society – along with changing the way we think about data and use the massive amounts of information generated by that technology. Experts have been taking note of the mobile phenomena and the impact on our culture. Recently, I came across an article in the March 2010 issue of Psychology Today about the impact technology is having on our culture and the advent of the iGeneration, or Internet Generation. This is the first generation to have all of the information ever generated by mankind instantly available to them! Our technology is changing the way we think and interact with each other, and it’s also providing new ways to store, retrieve and analyze the massive amount of information generated every year.
Smartphones change the way we interact and access information
In traveling for business, I have been through a lot of different airports and cities all over the U.S. Airports and shuttle buses were once noisy places with excited people having friendly chats. That is changing. I’m noticing that people are not interacting with each other as much as they used to. Most folks traveling today have their noses buried in their smartphones as they access information over the Internet, check emails, read company documents or simply play games. At breakfast in a restaurant in Kentucky, I noticed the same thing. Most of the diners were completely focused on their smartphones, even while they were eating.
We are constantly accessing information from our mobile devices and becoming heavily reliant on the business and personal information we find on the Internet or access via the cloud. These activities generate massive amounts of data that needs to be stored and can be used for big data analysis.
Smartphones generate BIG data to solve BIG problems
In the latest issue of MIT’s Technology Review Magazine, David Talbot has an article about smartphones as the real generator of big data, and how leveraging that data can change our lives. Talbot indicates that there are now more than six BILLION smartphones in use out there generating data. Every text, every search, every phone call, every email and every picture or video you upload or share is stored. If you consider each smartphone user will generate about 60 gigabytes of data each year, times the six billion devices (not counting notebooks, notepads and other devices), we generate and store more than 335 exabytes of information every year with smartphones alone. That’s really BIG data. All that data needs to be stored somewhere, which means the storage industry is in a race to provide higher and higher densities of data storage devices at lower costs, and data deduplication technology is becoming even more important.
The good news is that data we create via smartphones can be put to good use. As Talbot mentions in his article, smartphone usage patterns helped researchers in Africa determine where malaria outbreaks were occurring and where the affected people went. In this manner, researchers could determine where to best distribute medicines more efficiently. This is a great example of how big data analysis can be put to good use and have a positive impact on humanity. Soon, as mobile devices are used more frequently to purchase goods and services, the information generated will be mined to determine where you go to shop, what your interests are and even what brand of coffee you like, so advertisers and others can pinpoint your wants and desires. Now that’s what I call intelligent storage networking – making use of big data storage to mine that information.
I love my smartphone. It helps me with everything I do by enabling instant access to all my work and information, which is stored in various clouds. Our use of personal information devices will continue to increase, and hopefully we’ll find even better ways to put all that mobile data generation to good use.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:6e0879ea-757e-4756-b31d-3f09130b0777> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2473730/smartphones/smartphones--big-data--storage-and-you.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00031-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95277 | 810 | 2.53125 | 3 |
Interrogation Zone Basics
These questions are derived from the Self Test Software Practice Test for CompTIA’s RFID+ exam.
Objective: Interrogation Zone Basics
SubObjective: Given a scenario, solve dense interrogator environment issues
Multiple Answer, Multiple Choice
You are instructed to implement an RFID-based tracking system on all the dock doors in a manufacturing facility.
Which two options should you implement to minimize interference between overlapping interrogation zones? (Choose two.)
- Optimize the power levels of the interrogators.
- Use interrogators in dense-reader mode.
- Use an anti-collision protocol.
- Implement shielding between the dock doors.
A. Optimize the power levels of the interrogators.
D. Implement shielding between the dock doors.
You should optimize the power levels of the interrogators and provide a shielding between dock doors to minimize the interference between overlapping interrogation zones. You must optimize the power levels to generate enough power and backscatter a signal to the interrogator’s receiving antenna to prevent interference between overlapping interrogation zones. You can also provide shielding between the dock doors to prevent radio frequency (RF) waves from passing through and minimizing the interference between overlapping interrogation zones.
While using more than one interrogator in close proximity, dense-reader mode allows interrogators to hop between channels within a certain frequency spectrum to prevent interrogators from interfering with each other. In dense-reader mode, interrogators work at a frequency range of 902 – 928 MHz that limits the read range of interrogators. This will result in lower read rates. Therefore, we should not use interrogators in dense-reader mode to minimize interference between overlapping interrogation zones.
An anti-collision protocol is used by RFID interrogators to prevent collisions when reading more than one tag in the same interrogator’s field. An anti-collision protocol prevents collision within a single interrogation zone. While preventing collision, an anti-collision protocol enables an interrogator to identify tags one by one. This protocol cannot minimize interference between overlapping interrogation zones.
RFID Journal, Why UHF RFID Systems Won’t Scale, http://www.rfidjournal.com/article/articleview/1056/1/82/
RFID Journal, UHF RFID’s Key Constraints, http://www.rfidjournal.com/article/articleview/2244/1/128/ | <urn:uuid:d9f893ae-9107-4e07-8dfa-8c6d32a3ccde> | CC-MAIN-2017-04 | http://certmag.com/interrogation-zone-basics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00151-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.827281 | 507 | 2.765625 | 3 |
VoIP networks are very popular these days. In order to support communication between traditional PBXs, Cisco IP phones, analog PSTN, and the analog telephones, all over IP network, quite a number of protocols are needed. Few protocols are indicating protocols (for instance, MGCP, H.323, SIP, H.248, and SCCP) used to position, sustain, and bring down a call. Other protocols are marked at the real voice packets (for example, SRTP, RTCP, and RTP) relatively indicating information. Few of the most common VoIP protocols are shown and described here.
- SIP – This section is about “Session Initiation Protocol” (SIP), similar to H.323, it is regarded to be a “peer-to-peer” protocol! SIP is a famous, well known and popular protocol that can also be used and applied in a mixed-vendor surrounding, possibly due to its usage of existing and other residing protocols, such as SMTP and HTTP (Hyper text transfer protocol).
- RTP – This section is related to “Real-time Transport Protocol” (RTP) this takes the payload of the voice. Fascinatingly, even though the RTP can be classified as a protocol that is of “Layer 4”, it is summarized in the internals of UDP (and this is also a protocol of Layer 4). Even though the port numbers of this UDP that are used; can have various differences regarding the vendor, particularly in Cisco surroundings, the “RTP” characteristically makes use of UDP ports within the range of 16,384 to 32,767.
- RTCP – The control protocol of RTP (RTCP) supplies and provides with information regarding the flow of RTP (such as, information and data regarding the eminence of that call). According to context; in a Cisco surrounding or environment, RTCP characteristically makes use of odd UDP ports within the range 16,384-32,767.
- SRTP – Secure RTP – protects the broadcast of the voice through RTP. Particularly and specifically, SRTP can add authentication, encryption, anti-replay mechanisms to voice traffic, and integrity.
- H.323 – H.323 is an ITU customary. Instead of being just one protocol, it is a collection of protocols. Ahead of protocols, the H.323 customary also explains various devices, for example VoIP gatekeepers and VoIP gateways. H.323 is regarded as a peer-to-peer protocol; this is because few H.323 devices can create their very own call-routing choices, as contrasting to dependant on an outer catalog database.
- MGCP – An initial development of Cisco, the “Media Gateway Control Protocol” (MGCP) is regarded as a client protocol or a server protocol as well. This server (for instance, has an analog port that is in a router with voice enabled feature) can also interact with a server (Cisco Unified Communications server is a common example in this criteria) through a cycle of signals and events. This particular server can tell the customer that in this event that compiles of a phone going off-hook starts the signal of dialing the tone to that phone.
- H.248 – On the basis of MGCP, that is H.248 customary is also well known as Megaco in the technical nomenclature. Particularly, the new H.248 is actually a joint IETF and ITU customary. Even though H.248 also resembles the features of the MGCP, it is actually quite supple in terms of usage and its sustaining ability/to support for applications and gateways.
- SCCP – This is about Skinny Client Control Protocol (SCCP), and it is frequently known as the skinny protocol, it is actually a proprietary signaling protocol of Cisco. SCCP is frequently and quite often used for the purpose of signaling (indicating) between and Cisco Unified Communications Manager servers Cisco IP Phones. But, various gateways of Cisco also help in sustaining SCCP. SCCP is regarded as a server/client protocol; examples include H.248 and also MGCP. | <urn:uuid:b3fa6594-5111-4f15-b0be-2403f3df6c5b> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2012/voip-protocols | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00151-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941353 | 881 | 3.25 | 3 |
A clustered index is a special type of index that reorders the way records in the table are physically stored.
Therefore table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
Clustered index is more efficient . Rows ordered on clustered index makes load as well as retrieval faster
A DB2 index is a clustering index if the CLUSTER keyword is specified when the index is created.In a clustered index rows are stored contionously in sequence
When you define a clustering index on a DB2 table, you direct DB2 to insert rows into the table in the order of the clustering key values. The first index that you define on the table serves implicitly as the clustering index unless you explicitly specify CLUSTER when you create or alter another index. For example, if you first define a unique index on |the ACCTID column of the TRANS table, DB2 inserts rows into the TRANS table in the order of the customer account number unless you explicitly define another index to be the clustering index | <urn:uuid:c1fe46b1-6c44-403a-a956-ec57e28fac71> | CC-MAIN-2017-04 | http://ibmmainframes.com/about34307.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.757296 | 217 | 3.359375 | 3 |
Modern computer architectures commonly include one or more CPUs, a cache or caches, a few DDR-based memory channels, rotational and/or solid state disks and one or more Ethernet ports.
Figure 1: System block diagram
A high percentage of CPU-based systems use DDR-based DRAM for external memory. DDR-based DRAM currently provides very favorable cost/bit while providing enough bandwidth with low enough latency to meet the application demands. Although process engineers have continued to find ways to cost effectively scale feature size, the CPU power consumed has become prohibitive.
In contrast to the previous decade, CPU clock rates are scaling slower over time due to the power constraints. However, the number of transistors per silicon area continue to increase roughly at the rate of Moore’s Law. Therefore, CPUs are being designed and built with an increasing number of cores, with each core executing one or more threads of instructions.
This puts a new kind of pressure on the memory subsystem. Though the demand for instructions and data per thread is not increasing very quickly, the rapid growth in the number of available threads puts an increasing emphasis on memory bandwidth. This article summarizes the challenges that arise for the memory subsystem associated with these terascale CPUs.
Memory Key Metrics and Fundamentals
The key metrics for examining the memory sub-systems are bandwidth, capacity, latency, power, system volume, and cost.
Bandwidth (Bytes/second, B/s or bits/second, b/s). Bandwidth is the number of Bytes transferred in a given amount of time. Bandwidth is usually the most talked-about performance metric. The bandwidth required for a system is usually market segment and application (working set size, code arrangement, and structure) dependent. Interestingly, bandwidth alone is not a very useful metric for system design decisions. Other factors must be considered such as cost, power and form factor (size/space) constraints in conjunction with bandwidth.
Capacity (Bytes or B). Capacity is the total number of bytes that can be stored in the region of memory.
Latency (seconds, sec or simply s). This is the time it takes to read a word from the region of memory. The focus is usually on read latency. Write latency is often of less interest; the time required to write to a memory is often not a factor for the performance of the application.
Power (Watts or W). Power equals the energy consumed divided by the time in which that energy is consumed.
System volume, Form Factor. This is the volume required for different technologies into a system. This is usually driven by the physical size of components and/or cooling requirements.
Cost ($). Cost usually refers to the money required to use components in a system.
Often metrics are combined. Frequently used metrics include bandwidth/Cost or Watts/bandwidth (J/bit).
Double data rate (DDR) memory has become the dominant memory technology (in terms of number of units sold). DDR-based DRAM products are optimized for high capacity and low cost, not high bandwidth, low latency or low power.
As the CPUs continue to increase in capability toward the terascale level, many of the key metrics are not scaling well and are becoming system design challenges. The metrics being stressed most are bandwidth, power and latency. As potential solutions are investigated, the other metrics of capacity and form factor become challenging as well.
The expression “hitting the memory wall” is often used. Commonly the “memory wall” has the connotation that DDR cannot supply enough bandwidth for CPUs. A more accurate statement is that based on the DDR interface and channel specifications, the bandwidth per pin cannot scale up as quickly as the compute capabilities of CPUs. Simply adding more pins in parallel is not an appealing option due to system cost reasons. The problem becomes acute when CPUs reach the TeraScale performance level. Being more precise, the rate at which bits can be moved between CPUs and DDR devices is limited by the frequency dependent loss, impedance discontinuities, the power available and cost to implement. It will be extremely challenging to push and pull data at rates that exceed 2.4 – 3.2 Gb/s per data signal across DDR channels.
The need to reduce latency and the value of reducing latency is very difficult to assess. Most systems today have put a higher value on bandwidth and choose to use forms of pipelining such as pre-fetching to hide latency. As CPUs approach the terascale range via many threads running in parallel pipeline-based methods to hide memory latency will become less effective. To keep cost and power low, more emphasis will be placed on reducing the latency for the first level of the memory hierarchy that is external to the CPU chip.
Increasing the bandwidth by adding data pins as well as reducing the read latency of DDR devices could be done while maintaining the existing architectures of both the DRAM as well as the interface. However, addressing these bandwidth and latency metrics alone is not enough since one of the greatest challenges to achieving terascale bandwidths is maintaining low power consumption.
DRAM device power is composed of three main components: power consumed by the storage array, power consumed by the datapath and power consumed by the I/O pins. Roughly 50 percent of the power consumed is in the datapath, with the other 50 percent split between I/O circuits and the array. All three areas need to be addressed to create DRAM products suitable for terascale systems.
Evolutionary DRAM Summary
In summary, the key trends for evolutionary memory sub-system scaling are:
•Bandwidth scaling for traditional DDRx-based systems will end at about 2.4 – 3.2 Gb/s per pin (bump).
•To achieve the bit rates above, each channel will likely be limited to one DIMM without extra components, such as buffer on board (motherboard).
•GDDRx gives increased bandwidth but at the cost of capacity. Pin bandwidth will be limited to 5-6 Gb/s for GDDR channels being constructed today.
•Power in the memory sub-system varies from 40-200 mW per Gb/s, translating to hundreds of Watts for a TB/s of bandwidth.
•Adding capacity to evolutionary memory sub-systems is limited to adding channels, buffer on board or other forms of buffered DIMMs.
•Latency improvements for evolutionary systems will be minimal.
Terascale Memory Challenges and Future Memory Technologies
In the following section, we describe some of those challenges facing memory architects and designers and potential solutions.
The first question we need to ask is which memory technology(s) will fill the needs of these systems. DRAM technology has long dominated the market for off-chip memory bandwidth solutions in computing systems. While non-volatile memory technologies such as NAND Flash and Phase Change Memory are vying for a share of this market, they are at a disadvantage with respect to bandwidth, latency, and power.
A holistic approach is needed to achieve the required results. The main factors that will need to be addressed to achieve the optimal solution for increased bandwidth and lower energy per bit of future terascale memory sub-systems are the channel materials, the I/O density, the memory density, and the memory device architecture. We examine the changes required in these areas.
First we look at the materials that could be used to construct channels between CPUs and memory modules.
Figure 2: Data Rate versus Trace Length for different materials
Adding complexity to the I/O circuits in the form of additional equalization, more complex clocking circuits, and possibly data coding can increase the data rate, but also increase the energy per bit moved. More complex interconnects, such as flex cabling, improved board materials, such as Rogers or high-density interconnect (HDI), and eventually, optical solutions, must be considered. The emphasis on higher bandwidth/pin, I/O density and lower energy per bit read/written will lead to selective use of new channel materials.
A DRAM technology that supports a high bandwidth per pin, high capacity and low energy per bit moved will be required. A promising solution to solve these issues is 3-D technology, based on through silicon vias (TSVs). 3-D stacked memory will provide an increase in memory density through stacking, and it will enable a wide datapath from the memory to the external pins, relaxing the per-pin bandwidth requirement in the memory array as shown in Figure 3.
Figure 3: 3-D Stacked Memory Module
This design achieves six objectives:
- A method for further scaling of DRAM density.
- A relatively wide datapath from the memory array to the memory pins, relaxing the speed constraints on the DRAM technology.
- A high density connection from the memory module to the memory controller, which makes for more efficient use of power.
- The elimination of many of the traditional interconnect components from the electrical path.
- It separates the high bandwidth I/O solution from the microprocessor and memory controller power delivery path when using the top of the package for high speed I/O.
- The increased density eliminates the need for the electrically-challenged and energy-inefficient, multi-drop DIMM bus.
A key new challenge is introduced; we need a way to move the data from the wide datapath from the memory array to the memory device pins. The general characteristics necessary for an optimal solution are the ability to efficiently multiplex the data at a rate that matches the data rate of the increased device pins (Gb/s), rather than a rate that matches the slower, wider memory datapath, at an efficient energy level (low pJ per bit) that closely matches the characteristics of the CPU generating the memory requests. The architecture, design and implementation of the data collection function will be dependent on the usage of the 3-D memory module, ranging from specialized DRAM chips to a mix of logic process chips and DRAM process chips.
Given a memory of the type we describe, we must also examine the entire memory hierarchy. For example, it may be advantageous to add a level of memory to the hierarchy.
Analyzing different memory hierarchies is a huge challenge. All the metrics mentioned previously need to be evaluated in the context of the applications of interest (see “Key Metrics”). When considering additional levels of the memory hierarchy, the key decisions are where to add a level or levels in the memory hierarchy and how the levels of memory are managed.
Memory Hierarchy — Where to Add Memory
Earlier, we concluded that to meet the needs of terascale systems, designers should investigate new architectures and manufacturing techniques for DRAM, with an emphasis on 3-D stacking with TSVs. We are confident that these techniques will lead to improved DRAM products, while maintaining a low cost per bit stored. We also realize that when the new technologies are introduced, it will take time for the price per bit to drop. Therefore, early use of 3-D stacked memory as near memory, backed up by DDR-based DRAM or other low cost per bit memory technologies, may be an appealing and cost-effective choice for designers.
The policies of what data (or instructions) are placed, where they are placed as well as what is copied and shared are the key research issues facing system designers. The simple statement that data movement must be minimized will take on additional importance as terascale CPUs are built.
Summary and Conclusions
The demand for bandwidth continues to increase. Terascale CPUs will exacerbate the challenges of the memory subsystem design, including the architecture and design of memory controllers, the memory modules and memory devices themselves. DDR-based memory and interfaces will continue to be used for the markets segments where they can, but the shift to something new will begin in next few years.
To learn more, read the Intel Technology Journal, Volume 13, Issue 4, December 2009, Addressing the Challenges of Tera-scale Computing, ISBN 978-1-934053-23-2 | <urn:uuid:687095e0-a0fc-423c-8d2c-26a547436089> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/12/06/terascale_memory_challenges_and_solutions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00481-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918919 | 2,492 | 3.671875 | 4 |
A network switch basically is acomputer networkinghardware device that links multiple computers together within a single Local Area Network (LAN). Ethernet switch devices were commonly used forhome networks before home routers became popular; broadband routers integrate Ethernet switches directly into the unit as one of their many functions. High-performance network switches are still widely used in corporate networks and data centers for synchronized database.The switching capabilities exist for several types of networks, such as ethernet switches are the most common type. Mainstream Ethernet switches like inside broadband routers support Gigabit Ethernet speeds, and high-performance switches which are found in data centers which generally support 10 Gigabyte per second (Gbps).
Switches may operate at one or more layers of the OSI model, including the data link and network layers. A device that operates simultaneously at more than one of these layers is known as a multilayer switch. Switch is a device used on a computer network to physically connect devices together. Multiple cables can be connected to a switch to enable networked devices to communicate with each other.
The report discusses the major markets by component,traffic monitoring method,configuration options,application, and geography for network switch components market. The components comprise: Physical switch platform, Common software infrastructure, Network management tools and applications, and Storage; it is further segmented by form factor which consists of desktop, rack-mounted, chassis, and DIN rail; and by traffic monitoring method which is mainly classified into managed and unmanaged switches.
This report also presents the market trends which depicts the growth of the Network switch market from 2014 to 2020. The report presents detail analysis of different segments for global market with country wise analysis;which includes;component, traffic monitoring method, configuration options, application, and geography. The analysis of global is done with special focus on high growth application in each vertical and fast growing application market segment. Illustrative segmentation, analysis, and forecast of the major geographical markets give an overall view of the global market.
Some of the key players in this market include:Cisco Systems, Inc. (U.S.), D-Link Corporation (Taiwan), Huawei Technologies Co. Ltd. (China), ECI Telecom Ltd (Israel), Enterasys Networks, Inc. (U.S.), Juniper Networks, Inc. (U.S.), Netgear, Inc. (U.S.),and ZTE Corporation (China), are among others.
Please fill in the form below to receive a free copy of the Summary of this Report
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement | <urn:uuid:9604f0ff-bf41-4133-97c7-9e77cdca8538> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market-report/network-switch-reports-9453715981.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00389-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925363 | 540 | 2.65625 | 3 |
While “cloud computing” has been reverberating as a hot topic in the telecom and IT trade press for years, you know it is reaching critical mass when USA Today writes about it.
In fact, that story about how small businesses are able to leverage a host of services and applications delivered from the cloud is a good example of how confusing this phrase can be. Is the cloud a network-based application? Is it a service creation environment? Or is it the hardware and networks for service delivery? Or is it all of those things?
Today, cloud computing is typically delineated into three main categories—Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Let’s take a quick look at each:
- IaaS is the lowest level of service and offers a customer the ability to consume virtual computing resources on demand. The number of processors, memory and storage can be configured on demand in an elastic fashion to best match the needs of a customer’s specific application. An example IaaS provider is Amazon Web Services.
- PaaS is the next level of cloud capability and offers a full development platform upon which a customer can develop and deploy an application, ranging from Web to business and social applications as well as vertical- or technology-specific ones. A LAMP platform is a good example of a PaaS.
- SaaS is at the highest level and offers a customer access to a full turnkey application as a service. The service provider hosts all hardware, software and data allowing customers seamless access to the service at any time and from any location. Given their customer-facing nature, SaaS players are the most familiar to people from pioneers like Salesforce.com to Google Apps.
To over simplify it from my view – IaaS is actually the cloud itself, the baseline hardware and software for building and delivering applications. PaaS is an application development environment created within that cloud while SaaS takes a finished application and turns it into a turnkey, utility-like service for simple consumption over the Internet.
While they play different roles, all three combine to form the basis for cloud computing and show that as services become more automated and turnkey, the level of control and customization is typically reduced in exchange for greater simplicity in the consumption and use of the service. | <urn:uuid:10ee4f7e-f4dd-44f7-9eed-1148203df005> | CC-MAIN-2017-04 | http://www.internap.com/2010/11/17/cloudy-with-a-chance-of-confusion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00325-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942068 | 500 | 2.65625 | 3 |
The Department of Energy's Pacific Northwest National Laboratory said today the results of two, year-long power grid programs that let customers control their own power usage saved them 10% on electricity bills and reduced peak power requirements by 50% for days on end.
First, in the Olympic Peninsula Project found homeowners were willing to adjust their individual energy use based on price signals -- provided via information technology tools. Then the Grid Friendly Appliance Project demonstrated that everyday household appliances can automatically reduce energy consumption at critical moments when they are fitted with controllers that sense stress on the grid. Both studies helped reduce pressure on the grid during times of peak demand.
The 112 homeowners who participated in the Olympic Peninsula project received new electric meters, as well as thermostats, water heaters and dryers connected via Invensys Controls home gateway devices to IBM software. The software let homeowners customize devices to a desired level of comfort or economy and automatically responded to changing electricity prices in five-minute intervals.
To reduce usage in peak periods, when electricity is most expensive, the software automatically lowered thermostats or shut off the heating element of water heaters to the pre-set response limits established by individual homeowners. Customers received constantly updated pricing information via a Web site. A "virtual" bank account was established for each household and money saved by adjusting home energy consumption in collaboration with needs of the grid was converted into real money kept by the homeowners. With the help of these tools, consumers easily and automatically changed how and when they used electricity, for their own financial benefit and the benefit of the grid, the DOE stated in a release.
Meanwhile, in the second program called the Grid Friendly Appliance project, Grid Friendly Appliance (GFA) controllers were embedded in dryers and water heaters in 150 homes in Washington and Oregon. The GFA controller is a small electronic circuit board developed by researchers at PNNL.
The GFA controller detects and responds to stress on the electricity grid. When stress is detected, the controller automatically turns off specific functions like the heating element in the dryer. This momentary interruption can reduce electricity consumption enough to stabilize the balance between supply and demand on the grid without the need to turn on inefficient gas-turbine generators, according to the researchers. T
he study found that Grid Friendly Appliance controllers have the technical capacity to act as a shock absorber for the grid and can prevent or reduce the impact of power outages. Such events occurred once a day on average, each lasting for up to a few minutes. The appliances responded reliably and participants reported little to no inconvenience. The vast majority of homeowners in the study stated they would be willing to purchase an appliance configured with such grid-responsive controls, the group said.
The PNNL work is similar to other projects going on across the country that seek to get consumers involved in the electricity process. Pennsylvania lawmakers recently moved to make curbing electricity costs easier by requiring utilities to offer real-time pricing to businesses and consumers through high-tech wireless meters so customers or third party companies can control their own use as power prices climb.
Illinois’s ComEd utility recently made a real-time-pricing program permanent and available to all its customers. Its goal is to sign up 110,000 over the next several years. Utilities too have expanded programs that promote energy-saving equipment, using efficient light-emitting diodes, or LEDs, or compact fluorescent lights, variable-speed motors and efficient chillers. They also have been expanding programs that give them greater ability to cycle air conditioners, again reducing overall energy consumption and peak use.
Layer 8 in a box:
Check out these related stories | <urn:uuid:d62edc22-7d36-41dd-9316-0cb09c96cad3> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2350345/security/user-controlled-electricity-saved-money--stress-on-power-grid.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00354-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949952 | 746 | 2.765625 | 3 |
Verizon’s Tony Judd argues that agriculture is one of the first sectors to take advantage of the opportunities offered by the Internet of Things (IoT).
The United Nation’s Food and Agriculture Organization (FAO) estimates that to support an additional 2.3 billion people by 2050, we need to produce 70 percent more food. However, with dwindling resources, climate change and the increasing cost of electricity, this will be a challenge. As a result, the ability to produce more food, more quickly, is rapidly becoming a priority. The Internet of Things (IoT) can contribute to solving this problem.
Smart Farming on the way with IoT
Agriculture presents perhaps the perfect business case for IoT implementation — farmers work across large areas and have their assets in different places, which means they are difficult to manually survey. IoT, combined with big data, provides farmers with a wealth of information they can use to optimise efficiency, maximise productivity, and ensure the quality of food in the supply chain — from field to fork.
Interestingly, all kinds of agriculture, be it crop, dairy or indeed livestock farming, are reliant on maintaining the condition of distributed assets — from cattle and crops, to tractors and irrigation equipment.
IoT solutions help farmers track and monitor these assets. There are already initiatives which have developed IoT systems that enhance livestock welfare these use data collected from a variety of sensors to ensure all operations are being executed within set parameter and alerting farmers of any issues.
For example, the health of livestock can be monitored remotely and farmers can track the animals’ movement to establish grazing patterns and help increase yield. For assets like irrigation systems or farming vehicles, data gathered by IoT sensors gives farmers a holistic view of performance and helps schedule servicing and prevent yield-sapping breakdowns. In areas like precision agriculture, real-time data about soil, weather, air quality and hydration levels can help farmers make better decisions about the planting and harvesting of crops.
You might like to read: John Deere turns to IIoT to make smart farming a reality
Although connected devices have only entered the public consciousness in the past few years, there are signs of this application already being taken seriously. In a recent consultation of how to re-use the VHF spectrum, an OFCOM report cited wirelessly connected ‘smart’ farming as a one of the key opportunities in opening up the spectrum. It was highlighted that using VHF would “allow a range of new opportunities in the IoT sector for M2M applications”. Adding that they thought it could “bring significant benefits to citizens, especially those in remote or rural parts of the UK”.
Given how common the phrase ‘connected car’ has become in the public’s consciousness over the past year, there’s no reasons why we can’t move on to the connected farm; companies need to be committed to helping develop IoT across all sectors, including agriculture.
Given the urgent requirement in the industry to produce more food, businesses should all be looking at how they can improve their operations to ensure they are as successful as possible. IoT could provide a real advantage to those that embrace it by providing better quality information that aids better decision making.
Tony Judd, managing director for Verizon, UK&I and Nordics
You might like to read: Farming and shipping first to power IIoT revolution | <urn:uuid:71259b4b-10e1-4597-a623-fc03edd2a27c> | CC-MAIN-2017-04 | https://internetofbusiness.com/iot-agriculture-sowing-seeds-innovation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00170-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938334 | 700 | 2.9375 | 3 |
The tablet computer has revolutionized personal computing and the way in which we consume media in our leisure time. Its impact is also being felt in the workplace, as adoption of bring-your-own-device (BYOD) policies and flexible working schemes increases, and in schools across the country where it’s making learning more interactive. A few years ago tablets would be a rare sight in the classroom but, as we learned at BETT 2014, today they are a staple of modern education.
It comes as no surprise then that six-year-olds understand digital technology better than adults and learn to use smartphones and tablets before they are able to talk, according to a report from Ofcom. While this may sound like further reassurance that children today are better equipped for a digital world than ever before, not everyone is so positive. Some have drawn attention to the potentially harmful effects on mental and behavioral development that children are being subjected to – especially when some reports suggest that by age seven, kids will have spent one full year in front of a screen. Those in favor of tablets for kids counter with the argument that tablets in fact give young users a window to real-world experience that helps to kick-start the learning process.
One issue that has not received the same level of attention but could have more serious implications for young people today concerns their employment prospects. We’re constantly told that tech-savvy children have a great advantage when entering the job market, but what if their affinity for tablets is actually more of a hindrance than a help?
This may seem a bold claim, but consider that the desktop PC is still the preferred choice of employers and the default tool used by the majority of workers. With school children being raised on tablets, it’s possible that for many of them their first day at the office will also be the first day they have to do any real work with a physical keyboard and mouse. Basic typing skills will be sub-standard to say the least – not to mention the effect that reliance on personal devices has on handwriting development. New apps may launch every day, but Microsoft Office products such as Word and Excel are still the gold standard in office administration: anyone who has tried to manipulate a complex spreadsheet on an iPad knows that this is no mean feat.
Of course in many areas tablets trump desktop PCs as classroom learning aids. And they play an important role in wider educational initiatives such as coding for kids, helping to equip young school-leavers with skills at a level unseen in previous generations. But schools that push these new devices at the expense of more traditional technologies risk failing to equip their pupils with the foundation they need to excel at work. Just as employers should embrace the new; educators must remember not to forget the tried and tested technologies that, after all, still power our economy. Only when our children are confident in traditional computing skills can they grow into well-rounded, employable adults, as capable on a keyboard as they are with a tablet computer. A top-down approach, where tablets are adopted by the employer first, then by universities, colleges, secondary then finally, primary schools, will prevent the tablet from remaining only a supplementary learning device. | <urn:uuid:2bd4a286-6ed1-42f1-852d-42b3b352fc7c> | CC-MAIN-2017-04 | https://www.imperosoftware.com/hard-truth-swallow/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00500-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970055 | 647 | 2.9375 | 3 |
OMAHA, Neb. -- What decisions, policies and practices need to be in place to transform regions and cities into sustainable, interconnected communities? Answering this question was the purpose of the Meeting of the Minds, an invitation-only conference held in mid-June in Omaha. Climate change, transportation, renewable energy, structural retrofits and other issues were discussed and debated among attendees from around the world.
The rapid pace of technological advancement provides local governments the opportunity to tackle problems in new and efficient ways, panelists at the event said. Retrofitting buildings throughout a city, for example, could create jobs and reduce energy expenditures.
Ron Dembo, founder and CEO of Toronto-based Zerofootprint, said "re-skinning" everything from houses to skyscrapers could lead to enormous long-term gains in energy efficiency and cost savings.
Dembo said that more than 70 percent of greenhouse gases emitted in large cities come from buildings. By comparison, SUVs contribute only 3 percent of the pollutants that many fear will lead to a hotter planet.
Though it sounds grotesque, re-skinning is merely the process of layering energy-efficient materials over an existing structure. Dembo cited the example of a dilapidated warehouse in San Francisco which, like most structures, performed dreadfully when it came to efficiently storing heat. By simply replacing the windows and overlaying the exterior with perforated, corrugated zinc panels, the building's energy efficiency increased by 60 percent -- and the exterior was made much more attractive.
The building was a winning entry in Zerofootprint's first global re-skinning competition. Dembo said he hopes the contest, which ended in February, will be the foundation for an "X-Prize-like" competition for transforming post-war, pre-1990 buildings into modern, energy-efficient structures through re-skinning. The X-Prize was a $10 million contest launched in 1996 for the first nongovernment organization to put a manned vehicle in space.
The Meeting of the Minds brought together a number of city government officials. Many came bearing the title of sustainability coordinator, environmental manager or the like. For more than a few cities represented, the position was a relatively new one, reflecting the vast disparity in sustainability policies and practices among cities.
Kristi Wamstad-Evans, for example, became Omaha's first sustainability coordinator just last fall. She's helping the city craft a comprehensive energy management plan and administering $4.3 million the city received from the U.S. Department of Energy's Energy Efficiency and Conservation Block Grant, which is funded by $3.2 billion from the American Recovery and Reinvestment Act.
Wamstad-Evans said the city is experimenting with two parking garages to determine the return on investment after replacing all of the garages' lighting with light-emitting diodes (LEDs). Omaha has already replaced 60 percent of the bulbs in traffic lights with LEDs. The city also plans to roll any savings it sees from improved energy efficiency into additional retrofitting projects.
Matt Naud, environmental coordinator for Ann Arbor, Mich., has been at his post far longer than most of his peers. For 10 years he has been working with residents and businesses to show them how sustainability doesn't have to be about the environment, but instead about the bottom line.
"Most of our focus is on community energy efficiency," he said. "If your city or business doesn't have an energy coordinator, it has nothing to do with climate change; it's a cost center that just happens to have huge environmental benefits."
Naud cited LEDs in traffic signals as a simple example of the business case for sustainability. The LEDs cost more up front but last years longer than incandescent bulbs and end up paying for themselves.
Naud also said Ann Arbor is encouraging businesses to conduct energy audits, which he says will pay for themselves after they reveal where energy waste is rampant.
He said the city is also looking at how it can offer
residents low-interest loans to pay for energy-efficient retrofits. Such loans would be tied to property assessments. The effort is part of the city's attempt to showcase the economic benefits of energy efficiency.
"You have to find out what works in your community to motivate people," Naud said. "In most instances you can make a very strong business case [for sustainability.]"
For Kay Johnson, environmental initiatives manager in Wichita, Kan., making a strong business case for sustainability is her only option due to the region's politics.
"Due to the politically conservative climate, I don't use the terms global warming or greenhouse gases," she said. "Instead I talk about energy efficiency and can still accomplish the same goals."
One of the city's goals was improving air quality. Two simple solutions have helped toward that end. First, Johnson said, Wichita's traffic signals were notorious for rarely providing two green lights in a row. By reprogramming traffic signal timing so traffic flows more steadily, the city cut down on the number of idling cars. Second, locomotives pulling lengthy loads, which resulted in traffic snarls at railroad crossings, also frequent Wichita. By elevating the tracks, the snarls and the pollutants were reduced.
Leslie Strader, assistant environmental steward of Columbus, Ohio, detailed some of the ambitious sustainability initiatives the city is involved with. First, she said the city built a new Web portal for citizens called Get Green Columbus, allowing people to learn more about the city's sustainability efforts.
"We've set some high goals for ourselves," Strader said. We've made the commitment of 40 percent reduction [of greenhouse gases] by 2030. We're going to start retrofitting all crosswalks with LEDs. We've started an energy efficiency fund where businesses can get low-interest loans to do energy efficiency upgrades."
She added that Columbus is looking at ways to provide residents with free energy audits and have their refrigerators replaced if they don't meet energy standards.
Columbus has also launched what it calls the GreenSpot program in which residents and businesses can sign up to make energy efficiency commitments. When the commitments have been fulfilled, a GreenSpot certification is issued and, for businesses, the certification can be displayed in shop windows or reception areas. The program has nearly 1,700 Columbus-area participants so far.
Photo: Omaha skyline by Chad Vander Veen | <urn:uuid:f740a641-837c-4012-827c-3d439aee06c7> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/Sustainability-Solutions-Can-Drive-Economic-Growth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00344-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960787 | 1,320 | 2.671875 | 3 |
Retrofitting a data center is about managing limitations and trade-offs. Decision-makers have to consider physical limits (such as the weight a floor will support and how much cooling equipment can fit into an existing space). Then there’s infrastructure to think about: It would be difficult to swap out an old uninterruptible power supply (UPS) cable for a brand-new one. Such restrictions have an impact on energy efficiency too: Existing UPS cables generally operate at 85 percent efficiency, whereas the newest ones are in the range of 97.5 percent. To reach the highest efficiency numbers, you’d need to change your entire data center architecture, which is impractical for most companies.
Retrofitting a data center to make it more energy efficient has its restrictions, but doing so can be less costly than having to rebuild an entire facility. To weigh the variables—and achieve energy cost savings – you need to know what’s broken. Here are five tips for determining the efficiency of your data center and how to make it green as can be.
1. Get to know your data center.
An energy efficiency assessment from someone who specializes in data centers should be a priority, says Neil Rasmussen, CTO of American Power Conversion (APC), a provider of data center power and cooling equipment. IBM, EYP Mission Critical, Syska Hennessy, APC and Hewlett-Packard offer such services.
HP recently added Thermal Zone Mapping to its assessment offering. This service uses heat sensors and mapping analysis software to pinpoint problem areas in the data center and helps you adjust things as needed, says Brian Brouillette, vice president of HP Mission Critical Network and Education Services. For example, the analysis looks at the organization of equipment racks, how densely the equipment is populated, and the flow of hot and cold air through different areas of the space. It’s important to place air-conditioning vents properly so that cool airflow keeps equipment running properly, without wasting energy, says Brouillette.
2. Manage the AC: Not too cold, not too hot, but just right.
Energy efficiency often starts with the cooling system. “Air conditioners are the most power hungry things in the data center, apart from the IT equipment itself,” says Rasmussen. If your data center is running at 30 percent efficiency, that means for every watt going into the servers, two are being wasted on the power and cooling systems, he says. To reduce wasted energy, one of the simplest and most important things you can do is turn on the AC economizers, which act as temperature sensors in the data center. According to Rasmussen, 80 percent of economizers are not used, just as IT administrators often turn off the power management features in PCs. It’s also important to monitor the effects of multiple air-conditioning systems attached to a data center; sometimes, Rasmussen says, two AC systems can be “out of calibration” one sensing humidity is too high and the other sensing it'stoo low; their competition, like a game of cooling tennis, can waste energy.
Richard Siedzick, director of computer and telecommunications services at Bryant University, uses such features in his data center. “If the temperature rises to a certain level, the AC in that rack will ramp up, and when it decreases, it will ramp down.” The result is a data center climate that few are used to. Instead of being met with an arctic blast at the door, Siedzick says people have told him his data center is too warm. That’s not actually the case: AC economizers help cooling stay where it is needed, rather than where it is not. And that means increased efficiency and monetary savings. “We estimate we've seen a 30 percent reduction in energy [in part, due to more efficient cooling] and that translates into $20,000.” Siedzick says other precision controls, such as humidity sensors, are used in the data center as well. | <urn:uuid:98c988c9-c849-4996-9b94-8e2d2efd1d53> | CC-MAIN-2017-04 | http://www.cio.com/article/2438302/energy-efficiency/five-ways-to-find-data-center-energy-savings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00070-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946917 | 833 | 2.609375 | 3 |
More than half of children in school will be studying Facebook rather than lessons, says a new study of more than 1,000 UK pupils.
Global Secure Systems (GSS), an IT security consultancy, found 52% of the 1,000 children aged between 13 and 17 who participated in the study confessed that they looked at social networking sites during lessons.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
The survey, conducted through Facebook, aimed to discover just how widespread children's use of such sites at inappropriate times was. More than a quarter said they were Facebooking in class for more than 30 minutes a day.
David Hobson, managing director of GSS, made the initial discovery when he spent a day at a local public school speaking to its pupils about internet ethics and behaviour. During his presentation to 13-year-olds, who were all diligently tapping away on their laptops, he asked how many had visited social networking sites during their lessons. He was shocked when they all raised their hands. This ignited his determination to uncover if this was an isolated case or whether it was rife among school children.
"I am disturbed, but not surprised, by the findings," he said. He was concerned for the safety of youngsters on the web and worried by time lost for lessons.
"The time youngsters spend on the internet, and more specifically on social networking sites, is a huge challenge for parents and those of us in education," said Toby Mullins, head of Seaford College.
"Youngsters are not only using lesson time but often quietly continue late into the night, leaving them short of sleep and irritable the next day. I think a study like this to highlight the problem is very timely. We now need to plan for a solution."
Hobson said, "Kids are spending up to 2.5 hours a week of lessons on Facebook. I recognise that there is a place for social networking, with a whole new generation now relying on it to communicate, but not at the expense of an education. Schools could learn a lesson from industry and ensure school children use the internet productively. With the right software it is easy to limit access to inappropriate websites or limit it to break-time."
A separate GSS poll conducted with Infosecurity Europe 2008 discovered that social networking sites, such as Facebook, MySpace and Bebo are costing UK corporations close to £6.5 billion a year in lost productivity.
GSS itself clamped down on social networking during working hours. When asked for more bandwidth, Hobson analysed the company's traffic and discovered that it could save the cost of the upgrade simply by restricting the times people could access social network sites to lunchtimes and after hours. | <urn:uuid:e7eede79-c41c-4268-92a8-5345ce827db0> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240085386/Half-of-schoolchildren-use-Facebook-during-lessons-study-says | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00070-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.977034 | 568 | 2.828125 | 3 |
Verhaegen D.,CIRAD - Agricultural Research for Development |
Assoumane A.,CIRAD - Agricultural Research for Development |
Assoumane A.,University Abdou Moumouni |
Serret J.,CIRAD - Agricultural Research for Development |
And 10 more authors.
Tree Genetics and Genomes | Year: 2013
The dry forests of New Caledonia are an exceptional ecosystem because of their numerous endemic botanical species and their highly diversified fauna of insects, mollusks, reptiles and birds. Unfortunately, the area of the dry forests has been significantly reduced, mainly by human activities. Ecological, phenological and genetic analysis of Ixora margaretae, a symbolic species of the sclerophyll forest, has revealed contrasting traits among natural stands. The division of the natural range and then the separation of forest islands has greatly reduced the existing genetic variability of this species. The genetic diversity is strongly structured in genetic clusters which correspond well to specific ecotypes according to the environmental conditions and the forest types. Furthermore, genetic analysis of the reproductive and non-reproductive trees as well the half-sib families obtained by complete protection of mother trees has revealed substantial genetic drift which has resulted in increased loss of allelic variability. The total consumption of seeds by mainly rats confirms the observed absence of natural regeneration. All these results show that measures taken to protect the stands of dry forests will not be enough to maintain sufficient genetic variability of I. margaretae populations in the long term. Assisted regeneration with control of the increase in variability will be necessary to maintain the biodiversity of the species. The results obtained for I. margaretae must be confirmed with other symbolic species in order to take the necessary measures for the effective preservation of the dry forests in New Caledonia. © 2012 Springer-Verlag Berlin Heidelberg. Source | <urn:uuid:bbec6b32-a9cb-451a-bcd8-a700516385de> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/iac-institute-agronomique-neo-caledonien-bp73-2773169/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00070-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901386 | 384 | 2.890625 | 3 |
Consider three different scenarios that place healthcare patient safety at risk. The first is an individual hazard, the second human behavior, and the third a system issue in the broad sense of "system" as distinct from information technology (IT).
The first consists in placing concentrated potassium alongside diluted solutions of potassium based electrolytes. Now you need to know that intravenous administration of the former (concentrated potassium) results in stopping the heart almost instantaneously. In one tragic case, for example, an individual provider mistakenly selected a vial of potassium chloride instead of furosemide, both of which were kept on nearby shelves just above the floor. A mental slip-erroneous association of potassium on the label with the potassium-excreting diuretic-likely resulted in the failure to recognize the error until she went back to the pharmacy to document removal of the drug. By then it was too late.
Second, a pharmacist supervising a technician, working under stress and
tight time deadlines due to short staffing, does not notice that the sodium
chloride in a chemotherapy solution is not .9% as it should be but is over 23%. After
being administered, the patient, a child, experiences a sever headache and
thirst, lapses into a coma, and dies. This results in legislation in the state
Finally, in a patient quality assurance session, the psychiatric residents on call at a major urban teaching hospital dedicated to community service express concern that patients are being forwarded to the psychiatric ward without proper medical (physical) screening. People with psychiatric symptoms can be sick too with life-threatening physical disorders. In most cases, it was 3 AM and the attending physician was either not responsive or dismissive of the issue. In one instance, the patient had a heart rate of 25 (where 80+ would be expected) and a Code had to be declared. The nurses on the psychiatric unit were not allowed to push a mainline insertion into the artery to administer the atropine and the harried resident had to perform the procedure himself. Fortunately, this individual knew what he way doing and likely saved a life. In another case, the patient was delirious and routine neurological exam - made up on the psychiatric unit, not in the emergency room where it ought to have been done - resulted in his being rushed into the operating room to save his life.
In all three cases, training is more than adequate. The delivery of additional training would not have made a difference. The individual knew concentrated potassium was toxic but grabbed the wrong container, the pharmacist knew the proper mixture, and the emergency room knew how to conduct basic physical(neurological) exams for medical well being. What then is the recommendation?
One timely suggestion is to manage quality and extreme complexity by means of check lists. A checklist of high alert chemicals can be assembled and referenced. Wherever a patient is being delivered a life-saving therapy, sign off on a checklist of steps in preparing the medication can [should] be mandatory. The review of physical status of patients in the emergency room is perhaps the easiest of all to be accommodated, since vital signs and status are readily definable. Note that such an approach should contain a "safe harbor" for the acknowledgment of human and system errors as is routinely performed in the case of failures of airplane safety, including crashes. Otherwise, people will be people are try to hide the problem, making a recurrence inevitable.
The connection with healthcare information technology (HIT) is now at hand. IT professionals have always been friends of check lists. Computer systems are notoriously complex and often are far from intuitive. Hence, the importance of asking the right questions at the right time in the process of trouble shooting the IT system. Healthcare professionals are also longtime friends of checklists for similar reasons, both by training and experience. Sometimes symptoms loudly proclaim what they are; but often they can be misleading or anomalous. The differential diagnosis separates the amateurs from the veterans. Finally, we arrive at a wide area of agreement between these two professions, eager as they are to find some common ground.
Naturally, a three ring binder on a shelf with hard copy is always a handy backup; however, the computer is a ready made medium for delivering advice, including top ten things to watch in the form of a checklist, in real time to a stressed provider. In this case of emergency room and clinics, the hospital information system (HIS) is the choice platform to install, update, and maintain the checklist electronically. However, this means that the performance of the system needs to be consistent with delivery of the information in real time or near real time mode. It also means that the provider should be trained in the expert fast path to the information and need to hunt and peck through too many screens. The latter, of course, would be equivalent to not having a functioning list at all.
And this is where a dose of training in information technology will make a difference. The prognosis is especially favorable if the staff already have a friendly - or at least accepting - relationship with the HIS. It reduces paper work, improves workflow, and allows information sharing to coordinate care of patients.
This is also a rich area for further development and growth as system provide support to the physician in making sure all of the options have been checked. The system does not replace the doctor, but acts like a co-pilot or navigator to perform computationally intense tasks that would otherwise take too much time in situations of high stress and time pressure. Obviously issues of high performance response on the part of the IT system and usability (from the perspective of the professionals staff) loom large here. Look forward to further discussion on these points. Meanwhile, we now add another item to add to the vendor selection checklist in choosing a HIS: must be able to provide templates (and where applicable, content) for clinical checklists by subject matter area.
It should be noted that "the checklist manifesto" is the recommendation in a
book of the same title by the celebrity physician,
"Potassium may no longer be stocked on patient care units, but serious threats still exist" Oct 4 2007, http://www.ismp.org/newsletters/acutecare/articles/20071004.asp
"An Injustice has been done," http://www.ismp.org/pressroom/injustice-jailtime-for-pharmacist.asp
Posted August 16, 2010 1:08 PM
Permalink | 1 Comment | | <urn:uuid:eb36ae05-259e-4e7d-ad62-960437113099> | CC-MAIN-2017-04 | http://www.b-eye-network.com/blogs/agosta/archives/2010/08/healthcare_pati.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00188-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957146 | 1,328 | 2.703125 | 3 |
Black Box Explains...Solid vs. stranded cable
Solid-conductor cable is designed for backbone and horizontal cable runs. Use it for runs between two wiring closets or from the wiring closet to a wallplate. Solid cable shouldn’t be bent, flexed, or twisted repeatedly. Its attenuation is lower than that of stranded-conductor cable.
Stranded cable is for use in shorter runs between network interface cards (NICs) and wallplates or between concentrators and patch panels, hubs, and other rackmounted equipment. Stranded-conductor cable is much more flexible than solid-core cable. However, attenuation is higher in stranded-conductor cable, so the total length of stranded cable in your system should be kept to a minimum to reduce signal degradation. | <urn:uuid:fc0d7c75-c67c-4851-ae75-e15bc976b135> | CC-MAIN-2017-04 | https://www.blackbox.com/en-pr/products/black-box-explains/black-box-explains-solid-vs-stranded-cable | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00006-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910232 | 162 | 2.703125 | 3 |
The fastest computers are going hybrid
During the past decade, the biannual list of the world's fastest supercomputers has become increasingly dominated by systems that use a mix of processors, including commodity processors produced by Intel and Advanced Micro Devices
- By Joab Jackson
- Dec 15, 2008
Automobiles aren’t the only machines taking a hybrid
approach. Judging by the recent SC08 conference in Austin, Texas,
the future of supercomputer design seems to be heading toward using
multiple types of processors in a single system. That approach is a
significant change in the supercomputing field, and like any major
shift in technology, it comes with hidden problems.
In the past decade, systems that use commodity processors
produced by Intel and Advanced Micro Devices have increasingly
dominated the biannual Top500 list of the world’s fastest
supercomputers compiled by laboratories at the Energy Department
and a group of universities.
Although not as powerful as vector processors built specifically
for the high-performance computer market, those chips are much less
expensive and offer more processing power per dollar when bought in
Recently, however, developers began augmenting commodity
processor-based supercomputers with specialty processors, such as
floatingpoint accelerators, field-programmable gate arrays,
repurposed graphics processing units (GPUs) and even IBM’s
Cell Broadband Engine (Cell/BE) processors, which were designed for
video game consoles.
For example, developers of the top computer on the most recent
Top500 list — Los Alamos National Laboratory’s
Roadrunner, a 1.1 petaflop IBM machine — augmented its AMD
Opterons with IBM PowerXCell processors. And on the Green500 list,
which is the Top500 reordered by power efficiency, the top seven
computers all ran on IBM Cell/BE-based BladeCenter QS22
Why the shift? Better power usage.
“Power performance has become a very important metric as
of late — some feel even more important than [simply]
performance,” said Kaushik Datta, a graduate student in
computer science at the University of California, Berkeley. Datta
presented the results of a study he led about the best ways to
design multicore systems at the SC08 conference.
Although the Top500 list ranks machines by how many
floating-point operations/sec (flops) a machine executes, the
Green500 ranks them by how many flops per watt a machine executes.
In that realm, specialized processors rule. One industry expert at
the conference estimated that the Cell/BE can produce about 14
flops for about 97 watts of energy, and a GPU can produce about 2
flops per watt. Meanwhile, a generic x86 processor can produce only
about 1 flops at that wattage.
“As you specialize the chip, you’re able to be much
more efficient with what you are doing with the flops,”
Timothy Mattson, a senior research scientist at Intel, said during
a talk on the company’s experimental 80-core Tera-scale
Of course, new architectures require developers to rework their
code. We hear that the Cell/BE, which is still in its infancy, has
an especially steep learning curve for programmers.
“Are you willing to put in the time to program” for
these environments? Datta asked rhetorically. That is the question
system builders and developers will have to ask themselves while
hungrily eyeing performance gains.
Joab Jackson is the senior technology editor for Government Computer News. | <urn:uuid:2bffa88f-b964-432a-bd32-02fa826d9f69> | CC-MAIN-2017-04 | https://gcn.com/Articles/2008/12/15/The-fastest-computers-are-going-hybrid.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910438 | 771 | 2.703125 | 3 |
Over the last few years, consolidation has assumed new forms, including data center consolidation and server consolidation.
Data center consolidation involves combining many data centers into one. Frequently, the data centers in question are located in different countries. Colgate-Palmolive, for example, has successfully consolidated 50 data centers worldwide into one. (For a detailed report, see www-935.ibm.com/services/us/cio/ciostudy/pdf/colgate01.pdf.)
You can correct x86 server sprawl by consolidating x86 servers. Servers running Windows can be virtualized, thus allowing each Windows application to have its own copy of the operating system. This eliminates the problems that can occur when several applications attempt to share one copy of Windows.
In the past, consolidating x86 servers on the mainframe required that either the workload be converted to z/OS or to Linux. It’s relatively easy to convert UNIX servers to Linux servers and then consolidate them on a mainframe. Consolidating Windows servers onto the mainframe is more difficult because the conversion to Linux may be harder or even impractical.
Now, new technology on the mainframe has come to the rescue. By using the zEnterprise BladeCenter Extension (zBX), Windows servers can be consolidated onto a mainframe with no changes.
Here are some benefits of consolidation on the mainframe:
• Reduced hardware maintenance. Fewer physical servers lower the cost of hardware maintenance. System z also has an outstanding reputation for reliability, so there are fewer hardware-related problems to disrupt normal processing.
• Reduced environmental costs. Fewer servers in the data center lowers power requirements. Also, less heat is generated, lowering the amount of air-conditioning required.
• Reduced software costs. System z software costs are much lower than for distributed systems. Often, the biggest consolidation savings result from reduced software costs.
• Fewer system administrators are needed. Since it’s possible to consolidate hundreds of x86 servers into a single mainframe, fewer system administrators are needed. One customer said that one system administrator can handle 100 virtual servers.
• Better hardware utilization. Hardware utilization increases even before virtualization. In addition, workloads are assigned to the most suitable (efficient) platform.
• Improved Reliability, Ability, and Serviceability (RAS). Mainframes have significantly better RAS characteristics than the typical distributed system.
• Better data center space utilization. This can be the most important savings of all. Having fewer servers reduces the square footage needed for the data center.
• Creates a standard “virtual server.” It’s possible to define a “golden image” of a virtual server, facilitating fast, new server deployment. The case studies included here show the enormous savings that can be achieved in this way as compared with the time needed to deploy a distributed system server.
• zBX. zBX lets you easily consolidate Windows servers that previously couldn’t be consolidated into the mainframe. (For a good introduction to IBM’s zBX, visit http://entsys.me/ybmk1.) IBM POWER7 systems such as AIX can also be consolidated into the mainframe. Now, customers can keep workloads on the platforms best suited for them without changing the workload or the operating system on which it runs.
• Easier automation possibilities. zBX and Unified Resource Manager (zManager) offer a single point of control, making software automation more feasible.
• Availability of private networks. The built-in networks in the System z support simplification and increased security. Firewalls are eliminated and the simplicity leads to greater reliability.
• Storage consolidation. Server consolidation is usually accompanied by consolidating storage into a large shared pool. This allows elimination of private storage in each distributed system, which will provide greater efficiencies in storage allocation.
The following case studies illustrate the savings that can be realized from consolidation. The studies are simplified to focus on consolidation benefits; to learn more, see www.nxtbook.com/nxtbooks/maxpress/realworld_ibm_systemz/.
Baldor Electric of Fort Smith, AR, acquired another company with 200 servers of various types, including Intel and high-end UNIX. Baldor was unhappy with the complexity of maintaining required service levels. They converted to all Linux servers running under z/VM on the mainframe.
Their mainframe system consists of:
• Six central processors
• 70 z/VM partitions
• Three System z Integrated Information Processors (zIIPs)
• 16 Integrated Facility for Linux (IFL).
Baldor reduced the data center size by 50 percent, freeing up 3,000 square feet. Electricity costs were cut by 60 percent, while overall IT costs were reduced by 50 percent as a percentage of sales. Forty percent of the company’s DB2 work is done on the zIIPs. DB2 licensing costs went down more than 90 percent compared with Intel or UNIX.
Bank of New Zealand
The Bank of New Zealand (BNZ) had a z10 and a z9. They consolidated a mix of 131 Sun servers onto Red Hat Linux running under z/VM. Power usage dropped nearly 40 percent. They need only one system administrator for every 100 virtual servers. BNZ reduced the front-end footprint by 30 percent. They believe they will achieve a 20 percent Return on Investment (ROI) over the life of the platform. Finally, deploying a new server takes minutes, not days.
Anyone contemplating or planning for a data center consolidation should consider the benefits of consolidation on the System z mainframe. Naturally, they need to carefully examine the potential benefits to assess how they apply to their situation. For example, the reduction in software licensing can amount to a significant savings. Savings on square footage in the data center may be critical in some cases such as where further expansion of the data center is impractical or too expensive. Often, mainframe consolidation may prove the most attractive solution. | <urn:uuid:4b9a86ee-c275-44f0-ade9-751c9967d28a> | CC-MAIN-2017-04 | http://enterprisesystemsmedia.com/article/cios-see-significant-savings-with-enterprise-system-z-consolidation | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00244-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906032 | 1,250 | 2.9375 | 3 |
Definition: (1) Any function that is a constant times the logarithm of the argument: f(x)=c log x. (2) In complexity theory, when the measure of computation, m(n) (usually execution time or memory space), is bounded by a logarithmic function of the problem size, n. More formally m(n) = O(log n). (3) Sometimes imprecisely used to mean polylogarithmic.
Generalization (I am a kind of ...)
sublinear time algorithm.
See also linear, polynomial, exponential.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "logarithmic", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/logarithmic.html | <urn:uuid:d1e72d98-72fd-45bf-83a7-371a431e7777> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/logarithmic.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.813015 | 245 | 2.703125 | 3 |
As New Jersey and other communities in the Northeast begin the rebuilding process after Hurricane Sandy, they face the question of how to rebuild.
It seems with more and larger natural disasters, the subject of long-term recovery has been getting more attention. Some say it’s long overdue. One of the questions facing the aforementioned communities is: Do you rebuild as before or take into consideration the effects of continued climate change and the continuing trend of more devastating natural disasters?
The answer from researchers is without a doubt, storms like Sandy and Irene could make landfall more often than previously projected. Irene, a Category 3 hurricane, generated storm surges that caused flooding to be considered a 100-year event. But researchers at the Massachusetts Institute of Technology and Princeton University suggested recently that that type of surge could occur every three to 20 years as the climate changes.
They studied four climate models that generated 45,000 synthetic storms within the New York City area, under two different climates. The current climate condition represented the years 1981 through 2000, and the future climate reflected years 2081 through 2100.
The researchers simulated thousands of storms taking place under varied conditions and found that the 500-year floods that we’re used to could occur every 25 to 240 years if what they think about climate change is true.
In New York, a 100-year flood surge would produce a flood of about 2 meters. A 500-year flood surge would be a 3-meter high surge. The researchers found that with increasing greenhouse gas emissions, the 2-meter flood surge would occur every 20 years and the 3-meter surge every 25 to 240 years. Manhattan’s seawalls are 1.5 meters. The suggestion is to rebuild with higher seawalls in mind to prevent a major flood every 20 years.
Flood experts say the rebuilding effort offers an opportunity for better standards that will create more resilience for these communities. There will be pressure to rebuild quickly, but that should be resisted in favor of a smart plan that takes into consideration the dangers of future storms.
Along with calculating new flood surge levels, communities should consider elevating structures or otherwise flood-proofing or relocating them if they’re in areas deemed hazardous. In some cases, structures that have been repeatedly damaged shouldn’t be rebuilt but instead left to nature.
At the same time, it’s important to ensure that residents whose homes are damaged or destroyed don’t go through miles of red tape and bureaucracy like some did and are still doing after Katrina.
In the end, Sandy will have cost more than $50 billion, according to estimates. It would be a shame to rebuild only to have the next storm cause Sandy-type damage to the same areas. | <urn:uuid:4f522b33-ad83-4801-9a4c-2ece31adca05> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Sandy-Long-Term-Recovery-Column.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00510-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952329 | 556 | 3.3125 | 3 |
Researchers from the University of San Diego (Benjamin Laxton, Kai Wang and Stefan Savage) developed Sneakey, a system that correctly decoded keys from an image that was taken from the rooftop of a four floor building. In this case the image was taken from 195 feet. This demonstration shows that a motivated attacker can covertly steal a victim’s keys without fear of detection. The Sneakey system provides a compelling example of how digital computing techniques can breach the security of even physical analog systems in the real-world.
The access control provided by a physical lock is based on the assumption that the information content of the corresponding key is private – that duplication should require either possession of the key or a priori knowledge of how it was cut. However, the ever-increasing capabilities and prevalence of digital imaging technologies present a fundamental challenge to this privacy assumption.
Using modest imaging equipment and standard computer vision algorithms, we demonstrate the effectiveness of physical key teleduplication – extracting a key’s complete and precise bitting code at a distance via optical decoding and then cutting precise duplicates. In this paper, researchers describe their prototype system, Sneakey, and evaluate its effectiveness, in both laboratory and real-world settings, using the most popular residential key types in the U.S. | <urn:uuid:06d7ffc1-10b0-4870-ad45-7d926d6b5b3a> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2008/10/31/reconsidering-physical-key-secrecy-teleduplication-via-optical-decoding/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920115 | 264 | 2.703125 | 3 |
Photo: Carol Broos, music teacher, Sunset Ridge School, Northfield, Ill.
Carol Broos offers no rubrics in the technology classes she teaches at Sunset Ridge School in Northfield, Ill. Her classroom's learning environment is a free-for-all -- students do whatever assignments they want. All her students receive A's at the end of the academic quarter, whether they complete one project or 10.
Broos doesn't even have professional training in technology. She's a music teacher, and frequently doesn't even know how to operate the software she provides students. Her technology class is connected to her music program - each student learns to use music composition software. However, projects go far beyond that realm, and into Web design and graphics programming.
Apparently her unorthodox methods are effective. Many of her students work above their grade level and win national technology awards. Broos recently won the 2008 Golden Apple, a teaching accolade viewed by many as the most prestigious in the Chicago area.
Broos insists a traditional classroom environment in which the teacher gives the same lesson to the entire class can cripple technology education. She begins each class with five minutes of instruction; then students work on whatever projects tickle their interest.
"I don't believe in rubrics because they're too confining for my gifted kids. If I had a rubric, my gifted kids would totally slack [off]," Broos said.
Her approach enables speedy learners to dart straight to projects that match their abilities. High-achieving students typically need little help, which gives Broos extra time to focus on other children. By students working at their own pace, they produce better work, Broos said. The focus on projects that students pick also propels that advancement, she noted.
Broos contends that her teaching approach does a better job of stimulating students' interest in technology careers than a traditional learning environment. This could have implications for the technology work force as a whole, as the Computing Research Association's 2008 Taulbee Survey of Ph.D.-granting computer science (CS) and computer engineering departments reported an 18 percent drop in newly enrolled CS students over the prior two years. This decline will likely hit state and local governments hardest because CS graduates tend to favor private-sector jobs. A larger pool of technology graduates would give state and local governments a better shot at meeting their IT work force needs.
Educators commonly struggle to increase the amount of one-on-one instruction in the classroom. For a teacher, like Broos, who monitors vastly different student projects all at once, overcoming that challenge is mandatory. Broos found a solution using headsets. All of her students wear them, and their computers face toward the classroom walls so she can see all activity from her central station. The students -- fourth- through eighth-graders -- work in pairs.
"Their heads are faced in the direction of the wall. Even the ADHD [attention-deficit hyperactivity disorder] kid tends to be very focused. The only thing they can really look at is their screen," Broos said. The students don't hear the clatter of the classroom, so they sit down and work.
Through her headset, she instructs students who need help by viewing their work from her monitor.
The students' freedom to achieve their academic interests has produced results in Broos' classroom. Seventh-graders Henry Bacon, 13, and Frank O'Meara, 12, won digital education awards at the Center for Digital Government's 2008 Best of the Web awards ceremony. The boys developed Lazertron.net, a Web site that offers games and tutorials to teach their classmates how to program using Adobe Flash.
"We just totally fell in love with Flash, and we wanted other people to have the same experience as
us," Bacon said.
His partner O'Meara agreed.
"There are so many things you can do with Flash. You can do games and animation. The opportunities are endless," O'Meara said.
While the boys learned Flash in Broos' classroom, she didn't teach them a thing about it. "I have no idea how to use Flash. I work on my own programs. We have different go-to people in the classroom. These two guys are the go-to guys for Flash," Broos said. "Since people are working on different things, they can see what all of the other kids are working on. They tend to say, 'I want to learn how to do that.' It sort of spurs them ahead."
If Broos can't answer a student's question, the pupil is instructed to look for the information in online forums. Increasingly technology literacy requires self-directed learning, contends Broos. "Technology is tripling daily. We, as teachers, are not going to be able to know all of it. All of these kids are on Facebook. No one taught them."
Robby Hauldren, 12, another student in Broos' class, learned Flash using the tutorials on Lazertron.net. Now Hauldren is assuming operation of the Web site because Bacon and O'Meara moved on to other applications.
"I'm working on an Apple program called Quartz Composer," Bacon said. "It's what professional graphics designers use to make 3-D compositions. You can make a 3-D game, or just take images and write code to make them do stuff. Our games right now [on Lazartron.net] are more one-dimensional. It's complicated software for somebody our age."
Kendall Starkweather, executive director of the International Technology Education Association (ITEA), cheered Broos' willingness to bring applications she doesn't know how to use into the classroom. "If we limited education to what the teacher knew, we'd limit all of the creativity and brainpower the student might have," Starkweather said.
Broos' teaching method, which Starkweather admiringly called "management of learning," doesn't use a systematic grading system -- every student gets an A. But this atypical grading system concerns Professor Michael Daugherty, the department head of Curriculum and Instruction at the University of Arkansas.
"The student needs to earn the grade. I'd be much more comfortable if she said this was a pass-fail class. You're going to find that students will take advantage of that," Daugherty said. Though he said he approves of Broos' instructional system, her decision to give all students A's makes the strategy perilously similar to discovery learning, a hotly disputed method among education researchers. With minimal instruction, discovery learners problem-solve by drawing on their experience and knowledge. They interact with their environments by exploring and manipulating objects, wrestling with questions and controversies, or performing experiments.
"There are some [discovery learning] classes where kids just go in and play. You have to really monitor what's going on to make sure there's real learning taking place," Daugherty said.
Broos said that isn't a problem in her classes. "Over the years, I've had one slacker. I try to motivate kids to work. My kids work unbelievably. They don't want to be off the computers. If I see they're slacking, I'll tell them to take off their headsets and sit in the middle of the rug. It's almost like taking food away from them. They know they've got a good deal in here," she said.
She does have general expectations. For example, she asks fifth-graders to compose a song for the fifth-grade band using music notation software Sibelius. She also wants them to produce a Web site with blogs, podcasts, pictures and musical compositions. However, a student still receives an A if he
or she just does only parts of those projects.
Guidelines are vaguer in her sixth-grade class, where she requires a "spectacular project."
"They can do anything they want, but it has to be spectacular, and it has to be something someone else has never done. That was where Henry and Frank came up with their [Lazertron.net] Web site. They have to come up with something that totally wows me," Broos said.
Here is another point where some education experts who are generally supportive of Broos' method part company: Starkweather and Daugherty suggest she incorporate more-specific, non-negotiable requirements.
Daugherty said she could easily align her open-ended learning approach with the ITEA Standards for Technological Literacy for K-12. Students could still pursue projects of their choice, but those projects must expose them to a specified minimum of skills. Starkweather said it's likely that Broos' students already satisfy the standards. Both experts cautioned that teachers who use Broos' method could have trouble defending its effectiveness to school administrators without systematic proof of what learning occurs.
If someone inquires about the abilities of Broos' students, she can point to others besides Bacon and O'Meara. One student excels at Adobe Photoshop. Others use Google SketchUp to construct fantasy homes and cities. Two fourth-graders who are interested in hunting are constructing a Web site that details hunting trips they took with their fathers. However, Broos can say little about what students know collectively after leaving her class because the skills required to complete the projects vary. Still, Starkweather and Daugherty acknowledged that while Broos' technology education strategy lacks systematic proof of its efficacy, the anecdotal evidence is difficult to ignore.
Broos believes much of the restrictive culture found in technology education is fueled by laudable, but futile concerns about what children can view on the Internet.
"We have a lot of administrators who are worried about 'the two P's': predators and porn," Broos said. "These kids, even with filters, can get to both. We have to teach them how to deal with that, and not say, 'I'm going to protect you.' We're not going to be able to protect them. People that think we can -- forget it." | <urn:uuid:2f2ee089-a046-47d9-a33f-24aa28d50142> | CC-MAIN-2017-04 | http://www.govtech.com/education/Flash-Kids-Design-Instructional-Web-Site.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00142-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970371 | 2,081 | 2.90625 | 3 |
It looked like something out of a science fiction movie when the first U.S. test of "driver-disengaged transit" occurred last August just north of San Diego. There, along a 7.6 mile stretch of highway closed to midday traffic, drivers simply took their hands off the wheel, removed their feet from the pedals, and cruised along -- guided by a combination of technologies that kept them within designated lanes, at a specified speed, and at a safe distance from other vehicles and obstacles. As their cars moved along the "intelligent" highway, people read the newspaper or did paperwork, but not one of them crashed, broke down, or were involved in one of the 6.5 million highway accidents that occur in the United States each year.
The recent demonstration, conducted by the National Automated Highway System Consortium (NAHSC), was the latest step toward a goal mandated in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). The goal is to get automated highways in use by the general public by 2020, and NAHSC -- a working group of state and federal government agencies, private industry and academia -- has taken the lead in making that a reality.
What's An AHS
Automated highway systems, when launched, are expected to pack as many as three times more cars onto existing roads, sparing the expense of building new roadways while simultaneously decreasing congestion. Dick Bishop, Federal Highway Administration manager for the automated highway system program, said while it now costs between $1 million and $100 million to build a mile of new highway, it would cost less than $10,000 to equip that same mile with automated vehicle technology.
The view from the passenger's perspective inside a vehicle under automated control
Additionally, computer-steered vehicles are expected to eliminate up to 90 percent of the car accidents that occur on U.S. roads. "The solution technology holds is we can make roads safer," said Kyle Nelson, chief information officer at the California Department of Transportation (Caltrans). "Statistics show 90 percent of all accidents are caused by driver error, so anything you can do to give the driver a little more time, alert them they're about to have a collision or have a computer take the controls and assist, is going to save a lot of lives."
NAHSC plans to build special automated vehicle lanes on existing highways. Drivers will then have a choice of whether they want to use the system or not. If they choose to use it, they will enter a special lane that merges them onto the automated system. At that point, the car's technology interacts with the technology on the highway and takes over the driving. The automated vehicles will travel in a closely packed group, allowing them to "draft" quickly along the road, saving fuel at the same time. A driver who wants to get off the highway would enter a "transition" lane, slowly disengage the system and merge off into normal traffic.
NAHSC -- which includes organizations such as the U.S. Department of Transportation, the University of California, Bechtel, Caltrans, General Motors, and Lockheed Martin -- conducted the recent demonstration of the technology that could eventually comprise the $200 million automated system.
NAHSC examined many systems; among them a hybrid system from the American Honda Motor Co. Honda's offering used cameras and radar in rural areas and then switched to under-bumper sensors guided by iron magnets embedded in urban roadways. Another system, from Ohio State University, teamed radar with magnetic strips in the road.
But according to Nelson, the demonstration was more about combining technologies that already exist rather than testing new technologies. "The equipment being used is pretty much off-the-shelf stuff," he said. "What's new are the combinations of different technologies. Some of the vehicles out there were guided by magnets in the pavement, some of them were guided by magnetic tape, others were guided by video systems that literally track the white lines down the side of the road to keep the vehicle positioned. The idea is that we'll start determining which technologies work the best, and which are going to become the standards for the highways of the future."
Some of the components that make automated driving possible are stored in the trunks of the test vehicles.
Jim Rillings, program manager at NAHSC, said the demonstration helped the organization's members learn about what will work and what won't. "Almost everything we tried worked well, which surprised us," he said. "But we did learn that -- at least in the current state of development -- the vision-based technologies are not as reliable as other types of sensing technologies, like magnetic markers."
At the same time, NAHSC was searching for inexpensive technologies that will make automated vehicles affordable to the average consumer. They hope to reduce the costs of the equipment down to the $1,000 range, or about the same as some other high-end options offered on new cars.
Choosing A System
NAHSC won't decide which automated system it will go with until 2000. And, as Nelson points out, it may not be just one system that's chosen. "It may be decided that for highly-congested urban areas, the magnet technology is the way to go; while on long rural stretches of two-way highways, the visual system is the best."
"I think it will end up being a basket of options, with a range of technologies," said Rillings. "Those options will be available to regional transportation agencies to select which best suit their own surface transportation needs. But we do have to maintain national interoperability -- so trucks, for example, that run on an automated system in New York, need to be able to use the same equipment on a system in California." That means NAHSC will have to set certain standards, particularly when it comes to communication between the vehicles and the infrastructure.
Once a system is decided on, NAHSC plans to put together a prototype highway somewhere in the United States by 2002. After that, officials in Washington must decide whether to promote the system for installation by state highway departments.
"Right now, the most important things are to continue the technical development, continue making the systems more capable of operation in a wider variety of conditions, and continue working on ways to make it less expensive," said Rillings.
Will It Fly?
Despite the many advantages automated highway systems afford, the question remains whether Americans -- traditionally viewed as lovers of the automobile and the freedom that goes along with it -- will be willing to use an automated system.
"Yes, because it
will be entirely their choice," said Rillings. "I think that people that want to be chauffeured automatically will do so some of the time and won't other times."
Nelson agreed. "They'll like it once they realize the advantages. And it's not like all of a sudden tomorrow the public is going to find that they're in a fully automated vehicle. We see this as a very evolutionary process. Some of this equipment -- like GPS or smart cruise-control systems that use radar to apply the brakes when you're too close to the vehicle in front of you -- are available now. Every few years there will be new electronic and automated upgrades available to the public in the vehicles they get. That will make for a smoother transition down the line."
As far as California is concerned, Nelson said if even half the population uses the system, it will make an enormous difference. "There are some real problems with congestion on California highways," he said. "We got involved in this because we knew there had to be solutions that are less expensive than adding new lanes to freeways."
So far, NAHSC feels it's on track for success with the project. "We do feel we'll be successful, and a real reason for the success is the joint participation by both the public and the private sector," said Rillings. "We have private-sector companies involved, but we also have both state and federal government agencies involved. That really makes a difference to have everyone working together toward a common goal."
October Table of Contents | <urn:uuid:fe63333a-7a7e-4279-a266-d714155594ed> | CC-MAIN-2017-04 | http://www.govtech.com/featured/Where-the-Research-Meets-the-Road.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00053-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969758 | 1,667 | 2.890625 | 3 |
The Obama administration took a step Friday toward plugging thousands of small methane leaks from oil and gas operations around the country, saying the escaping gas is contributing to climate change. The Interior Department announced proposed regulations that would require energy companies to reduce methane leaks in order to drill anywhere on land owned by the government or Native American tribes. The proposals would affect more than 100,000 oil wells that supply about 10 percent of the nation’s natural gas.

A combination of accidental leaks and deliberate venting or flaring of methane gas from public and Native American lands released about 375 billion cubic feet of methane into the atmosphere between 2009 and 2014, according to government estimates. The leaks waste a valuable resource — the lost methane could have supplied energy for 5.1 million U.S. homes for a year — while also putting more heat-trapping greenhouse gases into the atmosphere, U.S. officials say.

“I think most people would agree that we should be using our nation’s natural gas to power our economy – not wasting it by venting and flaring it into the atmosphere,” Interior Secretary Sally Jewell said in announcing the proposal. “We need to modernize decades-old standards to reflect existing technologies so that we can cut down on harmful methane emissions.”

The proposed regulations — which are opposed by the oil and gas industry — are the latest in a series of initiatives aimed at lowering U.S. emissions of greenhouse gases, which scientists say are contributing to a dangerous warming of the planet. The Environmental Protection Agency is expected to announce similar curbs for other oil and gas operations as part of an administration-wide effort to reduce U.S. emissions of methane gas by at least 40 percent by the year 2025, compared to 2012 levels.

Methane, the main component of natural gas, is about 25 times more potent than carbon dioxide in trapping the sun’s heat in the lower atmosphere, according to EPA estimates. But methane also dissipates relatively quickly — in a few decades, compared to centuries for carbon dioxide. Scientists say rapidly cutting methane pollution can buy the world’s nations more time to tackle the bigger challenge of reducing carbon emissions.

The Interior Department rules, if finalized, would impose new limits on venting and flaring — or burning off — of excess natural gas, a common practice in the oil and gas industry that prevents the buildup of pressure on wellheads. The proposals also set standards for equipment used by energy companies and require more frequent inspections to check for leaks, in the first significant update of the Interior Department’s methane regulations in three decades.

Administration officials estimate that the regulations would prevent the loss of at least $115 million worth of methane a year, more than offsetting the equipment costs. “The gas saved would be enough to supply every household in the cities of Dallas and Denver combined, every year,” said Neil Kornze, director of the Bureau of Land Management, the agency that oversees the bulk of government-owned lands in Western states.

Industry officials criticized the proposals as burdensome and unnecessary, saying energy companies already are adopting voluntary measures to prevent the loss of valuable methane.
“Another duplicative rule at a time when methane emissions are already falling — and on top of an onslaught of other new BLM and EPA regulations — could drive more energy production off federal lands,” said Erik Milito, director of upstream and industry operations for the American Petroleum Institute, the largest trade association for oil and gas companies. “That means less federal revenue, fewer jobs, higher costs for consumers, and less energy security.”

But environmentalists and watchdog groups praised the proposal as a boon for taxpayers and the environment. “For too long, oil and gas companies have been allowed to waste billions of cubic feet in natural gas and avoid paying hundreds of millions in royalties,” said Ryan Alexander, president of Taxpayers for Common Sense, a fiscal watchdog group.

Josh Mantell, carbon management campaign manager for The Wilderness Society, applauded the proposed rule as a significant step in controlling greenhouse gas emissions. “These guidelines would have the added benefit of reducing pollution that causes disease and emissions that contribute to climate change,” Mantell said.
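As a rough sanity check on those figures: the 375 billion cubic feet of lost gas matches the "5.1 million homes for a year" claim if you assume an average U.S. household burns on the order of 73,000 cubic feet of natural gas annually. That consumption figure is our assumption (a commonly cited ballpark), not something stated in the article; the sketch below just shows the arithmetic.

```python
# Back-of-the-envelope check of the article's methane-loss figures.
# ASSUMPTION: an average U.S. home uses ~73,000 cubic feet of natural
# gas per year (a typical published estimate; not from the article).

leaked_gas_cf = 375e9            # cubic feet lost from 2009-2014 (article)
home_use_cf_per_year = 73e3      # assumed annual household consumption

home_years = leaked_gas_cf / home_use_cf_per_year
print(f"Household-years of gas wasted: {home_years / 1e6:.1f} million")
# Prints ~5.1 million, consistent with the government's estimate.
```

The numbers line up, which suggests the government's comparison was likely built on average residential gas consumption rather than total household energy use.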
News Article | March 6, 2016
Last week the BLM kicked off its public comment process for important new rules designed to limit methane waste from oil and gas operations on Federal and Tribal lands. The agency held hearings in Farmington, New Mexico and Oklahoma City seeking input from the public on the proposal.
"The Obama administration on Friday proposed a new rule aimed at curbing emissions of planet-warming methane from oil and gas drilling on public land. It would force companies to use equipment to capture leaked gas and raise the costs they pay for extracting fuel on government property. The draft regulation, proposed by the Interior Department, is the latest step by President Obama to use his executive authority to clamp down on the fossil fuel emissions that contribute to climate change, and to make it more expensive for oil, gas and coal companies to mine and drill on public land. It follows last week’s controversial move by the Interior Department to halt new leases for coal mining on public lands, and to reform the government’s program for leasing federal lands to coal companies with an eye to raising their costs. It also comes as the administration has particularly targeted emissions of methane, a chemical contained in natural gas that is about 25 times more potent than carbon dioxide. The Obama administration wants to cut methane emissions from the oil and gas sector by 40 to 45 percent from 2012 levels by 2025." Coral Davenport reports for the New York Times January 22, 2016. "BLM Clamps Down On Methane Waste On Public Lands" (Greenwire)
News Article | September 9, 2016
On September 6, a U.S. district judge in Los Angeles issued a ruling overturning a federal plan to open vast tracts of public land in central California to oil and gas drilling, including hydraulic fracturing (fracking). U.S. District Judge Michael Fitzgerald ruled that the Bureau of Land Management (BLM) had failed to analyze the risks of fracking and other extreme oil and gas extraction techniques when preparing a resource management plan that would have allowed drilling on more than one million acres of land in California’s Central Valley, the southern Sierra Nevada, and in Santa Barbara, San Luis Obispo, and Ventura counties.

“Fracking raises a number of environmental concerns, including risks of groundwater contamination, seismicity, and chemical leaks,” Fitzgerald wrote in the ruling. He also cited potential threats to endangered wildlife and concluded that the BLM’s environmental impact statement (EIS) was “inadequate.” “The Bureau is therefore obligated to prepare a supplemental EIS to analyze the environmental consequences flowing from the use of hydraulic fracturing,” Fitzgerald wrote.

The ruling came in response to a lawsuit filed by the Center for Biological Diversity (CBD) and Los Padres ForestWatch, which were quick to applaud Judge Fitzgerald’s decision. “This is a huge victory in the fight to protect our water and wildlife from fracking pollution and dangerous drilling,” Brendan Cummings, CBD’s conservation director, said in a statement. “The Obama administration must get the message and end this reckless rush to auction off our public land to oil companies. As California struggles against drought and climate change, we’ve got to end fracking and leave this dirty oil in the ground.”

BLM officials estimated that oil companies would be using hydraulic fracturing on 25 percent of new wells drilled on the public lands in question. The 1,073-page management plan they prepared, however, contained just three mentions of fracking. Furthermore, it lacked any deeper analysis of the threats the controversial drilling technique poses to the environment and public health as it blasts huge amounts of water mixed with toxic chemicals underground to release oil and gas.

Another consideration, which did not factor into Judge Fitzgerald’s decision, is where the oilfield wastewater would be disposed of, should these lands be opened to drilling. The CBD found that, between April 2015 and March 2016, 39 percent of new drill permits issued for wastewater disposal wells were for drill sites within five miles of a fault line. Regulators with California’s Division of Oil, Gas, and Geothermal Resources (DOGGR) were the officials behind these permits.

DOGGR officials have been embroiled in controversy ever since it was revealed last year that they had improperly permitted oil companies to dump toxic waste into protected underground aquifers via thousands of wastewater injection wells, violating both federal and state laws. DOGGR has said it plans to seek exemptions for as many as 60 of those aquifers. In February, DOGGR filed the first application for exemption with the Environmental Protection Agency; the CBD filed suit last month to stop the exemption process from moving forward.

In 2013, a federal judge ruled that the BLM had violated the National Environmental Policy Act (NEPA) when it failed to consider the risks of fracking when issuing oil leases in Monterey County, California.
Since then, the BLM has halted all lease sales in Monterey County as it completes an environmental review of fracking’s risks for that county. A similar outcome is expected following this week’s decision, according to the CBD. “A management plan for BLM land in central California that doesn’t address fracking is like an emergency plan for San Francisco that doesn’t address earthquakes,” Greg Loarie of Earthjustice, which represented the CBD and Los Padres ForestWatch in the suit, said in a statement. “BLM can’t just ignore the most important environmental issue on their plate.” Judge Fitzgerald notes in his decision that the public lands BLM was proposing to open to drilling contain “extraordinary biodiversity” and “numerous groundwater systems that contribute to the annual water supply used by neighboring areas for agricultural and urban purposes.” In fact, of the 130 federally protected animal species classified as threatened or endangered in California, more than one-third can be found in or around the areas under consideration for drilling and fracking, according to the CBD. “This ruling will protect public lands from the crest of the Sierra Nevada to the Central Coast from an influx of oil development and fracking,” ForestWatch executive director Jeff Kuyper said in a statement. “These treasured landscapes provide many benefits to our local communities and are too valuable to sacrifice for a few days’ supply of oil.”
News Article | January 6, 2016
The Bundy militia, the handful of anti-government types currently having a sleepover party in a wildlife refuge visitor's center, wants an end to the "tyranny" of the federal government's oversight of public lands. Their plan mainly seems to be shepherding in a new era of unregulated, unchecked natural resource extraction and exploitation. Their main target is the Bureau of Land Management, a federal agency that has found an unlikely spotlight since rancher Cliven Bundy refused to pay a BLM bill and called it a revolution. In a highly classy and certainly not racist move, they've adopted the #BLM hashtag. The BLM is an interesting target. For the Bundy clan, it happens to make for an especially good foe because it's a relatively unknown agency. Most of its lands are far away from major population centers and consist of deserts and grasslands—not exactly destinations. It's a bit like the US Forest Service but without the forests. The Bundys want us to think that the BLM is, like the National Park Service, tasked with preservation, an arbiter of wilderness (they hate wilderness). They would like us to think that the BLM's mission involves keeping good folk like the Bundys from blindly tearing shit up like true Americans. The truth is closer to the opposite of this. While providing recreation opportunities and protecting open space are part of its mission, the BLM is, of any federal land agency, the one most concerned with facilitating exploitation: mining, drilling, grazing. The BLM lands surrounding my old home in southwest Colorado (by Cortez at the Utah border), for example, even have the additional status of being a "national monument"—sort of like a national park but without the same protections—and yet you'd have a hard time throwing a stone without it clanking against a pipeline or piece of machinery. (The target there is mostly carbon dioxide, which is indeed a thing drilled for.) Across the border in Utah, it just gets worse with the open-pit nightmare of the Lisbon Valley Mine. This occurs on BLM land. In California, BLM land hosts 595 different oil leases, responsible for 15,800,000 barrels of production annually. About 500,000 barrels a day. The federal government, the landowner (you), gets about 12 percent in royalties from oil and gas sales, a rate that hasn't been updated since 1920. Here's an aerial shot of the Kern River Oil Field. It is certainly liberated. And the Bundys demand more. To see the fundamental disconnect between the militia's campaign and reality, we need to look briefly at the origins of the BLM. There was a time when the agency didn't exist and ranchers had their rangeland utopia. Prior to 1934, some 80 million acres of western lands were just there for the taking. This was the homesteading era, and, indeed, ranchers took and took and took. Care to guess how this went? After decades of steady rangeland deterioration, and increasing violence among cattle ranchers, it became clear that the historical system of, well, no system wasn't sustainable; not "unsustainable" in the environmentalist sense (or not directly), but in the sense of the continuation of ranching as a viable economic activity. In the words of BLM historian Marion Clawson, "a large part of the public lands had already suffered serious, accelerated erosion, largely (but not wholly) as a result of uncontrolled grazing." Soon there would be literally nothing to graze at all.
Since cattlemen first began appearing in the West, attracted by the promise of free grazing land, access to that free land was governed mostly by custom. This didn't work out so well, as Wyoming historian Russel L. Tanner writes in "Leasing the Public Range: The Taylor Grazing Act and the BLM." However, the feds still didn't really want anything to do with the whole mess, and, beginning in 1879, a series of proposals were made to offer up the land to either the states or private buyers for a nickel an acre or less—basically giving it away. But, since the land was then free, or at least unmanaged by a formal entity, these proposals had little appeal and so things continued to deteriorate across the West. A peculiar sort of stalemate emerged as ranchers continued to claim public lands via unauthorized and illegal fencing while resisting reciprocal efforts by the federal government to give all of the same land away for next to nothing. In Wyoming, private lands made up only about 16 percent of the entire state in 1919, despite these efforts. For ranchers, the ideal seemed to be something like private stakes on public land. All rewards and no responsibility. This is exactly what the ranchers won, and it's what they continue to enjoy. In the words of Encyclopedia of the Great Plains editor David J. Wishart, the Taylor Grazing Act of 1934 was enacted to "stop injury to the public lands; provide for their orderly use, improvement, and development; and stabilize the livestock industry dependent on the public range." Part of the initial goal, according to a BLM history, was to increase rangeland productivity. More cows in less space. "The act as amended in 1936 established grazing districts on the vacant, unappropriated and unreserved lands of the public domain: fifty-nine districts encompassing 168 million acres of federal land and 97 million acres otherwise owned," Wishart continues. "The act, as amended in 1939, established grazing advisory boards, primarily composed of livestock owners." The Act created what was then known as the Grazing Service, which administered public lands in parcels and collected fees. These fees initially were meant to cover administrative costs, but, as time went on, and the Grazing Service became the BLM, grazing fees essentially came under the control of the ranching industry. Nowadays the BLM takes in around $12 million annually in revenue while spending some $80 million in a role that amounts to being a public caretaker of resources exploited by private entities (ranchers, miners, drillers). The difference is covered by American taxpayers as the BLM continues to spend more on maintaining rangelands than it takes in as income from those who profit from those lands. The map below is of the rangelands surrounding the Malheur National Wildlife Refuge, where the Bundys are making their stand, which has so far mostly been ignored by the feds. The areas with green striping are grazing parcels. They belong to ranchers that are not the Bundys. The lease named "big bird" looks to be the closest to the Bundys' occupation. According to BLM records, it belongs to "Golden Rule Farms" of Christmas Valley, Oregon. Alkali, the next lease over, belongs to Charles, Marjorie, and Darwin Dunten. The rangeland to the west is, ironically enough, allocated to the Confederated Tribes of Warm Springs.
When the militia says things like "freeing the land," what it really means is less about freeing the land from the BLM than about freeing it from other ranchers (the sort that do actual ranching) who own leases on BLM land. | <urn:uuid:aef91853-1bc8-46a7-a722-065a9390c890> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/blm-1334901/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00355-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957594 | 3,708 | 3.34375 | 3 |
Dec 12, 2015 Wired
Wired: For Google, Quantum Computing Is Like Learning to Fly
"AT A NASA lab in Silicon Valley, Google is testing a quantum computer—a machine based on the seemingly magical principles of quantum mechanics, the physics of things like atoms and electrons and photons. This computer, called the D-Wave, carries a $10 million price tag, and the idea is that it can perform certain tasks exponentially faster than computers built according to the laws of classical physics—the physics of the everyday world.
The trouble is that even top quantum computing researchers can’t quite tell whether the D-Wave will provide this exponential leap when applied to tasks that are actually useful, that can improve how the everyday world operates, that are more than experiments in a lab. But after several months with its D-Wave computer, Google believes that this machine can prove quite useful indeed."
Read the article here. | <urn:uuid:2b713081-8ab4-4327-8f7d-ca03b00bc5db> | CC-MAIN-2017-04 | http://www.dwavesys.com/media-coverage/wired-google-quantum-computing-learning-fly | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00079-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922248 | 188 | 3 | 3 |
While the famous Chinese general may not have had hacking techniques in mind when he penned The Art of War some 2500 years ago, there is great merit in knowing your enemy, and the techniques s/he may use against you. If you are a network administrator, a critical part of your job is defending your systems. Knowing what these attacks are, and how to defend against them, will help immensely with the task of protecting your information systems from harm. While there are thousands of potential attacks, and many books and countless websites that cover them to the tiniest detail, the following five general categories can help you defend against the lion’s share of threats facing your systems.
1. Attacking Defaults
These days, essentially every piece of hardware and network application on the market comes with a set of default credentials: a username and a password that grant administrative access to the system. One of the most common ways of gaining unauthorised access to a system is by exploiting the fact that often, admins do not know, or do not care to change, these defaults.
Whether we are talking about a database application, a router, or a printer, defending against these attacks is simple. The first thing you should do when connecting a system to your network or installing an application on a server is to change the default credentials.
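To see how little effort this attack takes (and how easy the defensive audit is), consider a short Python sketch using the popular requests library. The hosts, the /admin path, and the credential list below are made-up examples; real default-credential lists come from vendor manuals and are far longer.

import requests

DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def audit_device(host):
    """Return any factory-default credential pair the device still accepts."""
    hits = []
    for user, pwd in DEFAULT_CREDS:
        try:
            # Assumes the device exposes an HTTP admin page with basic auth.
            r = requests.get("http://%s/admin" % host, auth=(user, pwd), timeout=5)
            if r.status_code == 200:
                hits.append((user, pwd))
        except requests.RequestException:
            break  # host unreachable, stop probing it
    return hits

for host in ["192.168.1.1", "192.168.1.20"]:  # devices you administer
    for user, pwd in audit_device(host):
        print("%s still accepts default login %s/%s. Change it!" % (host, user, pwd))

Run it only against equipment you are responsible for; anything it prints is a finding to fix immediately.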
2. SQL Injection
Arguably one of the most devastating attacks against web based systems is the SQL Injection attack. Today’s dynamic websites often comprise much more than just a web server serving HTML code and graphics files to users. E-commerce sites use database servers to host the backend information that is used to build interactive sites, present product information, and take orders. Even some of the most simplistic-seeming websites may have a database on the backend. If the site provides a way for users to log on, or to submit information, you can bet there is a database behind the scenes.
SQL Injection attacks occur when an attacker inputs SQL commands into fields meant for other information, like usernames or search strings. A properly designed website will examine any data submitted by a user to make sure that the information is valid. A username typically will contain only letters; an email address might have letters and numbers, but only a few metacharacters like @, ., -, and +. If the input contains something as simple as a single quote ' at the end of the username, it could be passed to the database application and interpreted as part of a constructed SQL query. While it may not be a valid query, the database server may return an error that exposes information like the name of the database, its tables, and key fields. Continuing down this path, an attacker could submit SQL commands into the username field that would be executed to return the contents of the database, or to do things like drop tables.
To defend against this attack, your web applications must screen all submitted data for characters outside the expected and allowed set. Whether your application sanitizes user input by removing invalid characters, escapes any SQL-specific characters before passing input to the database, or rejects the input with a message asking the user to try again using only allowed characters, it must act as the first line of defense to ensure that no commands can be passed to the database. Remember, even a command that fails, if executed by the database server, may reveal information to the attacker that will make the next attack more effective.
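A minimal Python sketch using the standard library's sqlite3 module makes the contrast concrete (the table and the attacker-supplied input are invented for the demonstration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "' OR '1'='1"  # attacker-supplied "username"

# VULNERABLE: user input is concatenated straight into the SQL statement.
query = "SELECT * FROM users WHERE username = '" + name + "'"
print(conn.execute(query).fetchall())  # returns every row; the injection succeeded

# SAFER: a parameterized query binds the input strictly as data, never as SQL.
print(conn.execute("SELECT * FROM users WHERE username = ?", (name,)).fetchall())  # []

The placeholder syntax varies by database driver (?, %s, :name), but the principle is the same everywhere: parameterized queries, layered on top of input validation, keep user data from ever being parsed as SQL.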
3. Exploiting Unpatched Services
I have been in the information security field since 1997, and have been a CISSP since 2003. Of all the hundreds of security incidents I have been involved in, whether on behalf of an employer or for a client, I can still count on my two hands the number of intrusions that have not been the result of an attacker taking advantage of an unpatched system. Patching is time consuming, often difficult, and can sometimes introduce problems even as it is trying to prevent others, but the fact remains that you must patch your systems. Every operating system, whether it is installed on a computer or embedded as firmware on a piece of networking equipment, and every application your users run, has flaws. They were all written by humans, and mistakes were made. As these flaws are uncovered, updated code is released by the manufacturer to correct these issues, hopefully before a bad guy uses these flaws to exploit a system.
As an administrator, you must keep up with these patches, testing them as necessary, and deploying them to all networked systems. As operating systems and applications age, and fall out of support, you need to budget the necessary time and resources to update/upgrade these systems. Just because a vendor no longer issues updates for a system does not mean that there are no more security issues to be discovered.
The bad guys may frequently use any or all of the three hacking techniques we just covered, but there are still more you need to be prepared against. In the second part of this series, we’ll look at two more common hacking methods that you will be up against, and summarize some best practices to help you defend against them all.
About the Author: Ed Fisher is an information systems manager and blogger at several sites including his own site, http://retrohack.com. An InfoTech professional, aficionado of capsaicin, and Coffea canephora (but not together), he has been getting his geek on full-time since 1993, and has worked with information technology in some capacity since 1986. Stated simply, if you need to get information securely from point A to B, he’s your guy. He is like “The Transporter,” but for data, and without the car; and with a little more hair. | <urn:uuid:b6a0f8fa-a939-4f4e-bacd-a07668beab12> | CC-MAIN-2017-04 | https://techtalk.gfi.com/5-popular-hacking-techniques-enemies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950563 | 1,182 | 3.046875 | 3 |
An ancient demon of web security skulks amongst all developers. It will live as long as there are people writing software. It is a subtle beast called by many names in many languages. But I call it Inicere, the Concatenator of Strings.
The demon’s sweet whispers of simplicity convince developers to commingle data with code — a mixture that produces insecure apps. Where its words promise effortless programming, its advice leads to flaws like SQL injection and cross-site scripting (aka HTML injection).
We have understood the danger of HTML injection ever since browsers rendered the first web sites decades ago. Developers naively take user-supplied data and write it into form fields, eliciting howls of delight from attackers who enjoy demonstrating how to transform <input value="abc"> into <input value="abc"><script>alert(9)</script><"">
In response to this threat, heedful developers turned to the Litany of Output Transformation, which involved steps like applying HTML encoding and percent encoding to data being written to a web page. Thus, injection attacks become innocuous strings because the litany turns characters like angle brackets and quotation marks into representations like &quot; that have a different semantic identity within HTML.
Here is a link that does not yet reveal the creature’s presence:
Yet in the response to this link, the word “search” has been reflected in a .ready() function block. It’s a common term, and the appearance could easily be a coincidence. But if we experiment with several source values, we confirm that the web app writes the parameter into the page.
A first step in crafting an exploit is to break out of a quoted string. A few probes indicate the site does not enforce any restrictions on the source parameter, possibly because the developers assumed it would not be tampered with — the value is always hard-coded among links within the site’s HTML.
After a few more experiments we come up with a viable exploit.
There’s nothing particularly special about the injection technique for this vuln. It’s a trivial, too-common case of string concatenation. But we were talking about demons. And once you’ve invoked one by its true name it must be appeased. It’s the right thing to do; demons have feelings, too.
Therefore, let’s focus on the exploit this time, instead of the vuln. The site’s developers have already laid out the implements for summoning an injection demon, why don’t we force Selector to do our bidding?
Web hackers should be familiar with jQuery (and its primary DOM manipulation feature, the Selector) for several reasons. Its misuse can be a source of vulns (especially so-called “DOM-based XSS” that delivers HTML injection attacks via DOM properties). jQuery is a powerful, flexible library that provides capabilities you might need for an exploit. And its syntax can be leveraged to bypass weak filters looking for more common payloads that contain things like inline event handlers or explicit <script> tags.
In the previous examples, the exploit terminated the jQuery functions and inserted an alert() pop-up. We can do better than that.
The jQuery Selector is more powerful than the CSS selector syntax. For one thing, it may create an element. The following example creates an <img> tag whose onerror handler fires as soon as its bogus src fails to load:
$("<img src='x' onerror=alert(9)>")
Or, we could create an element and then bind an event to it, producing the same kind of booby-trapped <img> tag. (The indexes may differ depending on the page’s HTML; the technique is sound.)
A filter that strips literal angle brackets can be sidestepped by pulling them out of a RegExp (regular expression) object. Even better, use the slash representation of RegExp, as follows:
/</.source + "img" + />/.source
Or just ask Selector to give us the first <img> that’s already on the page, change its src attribute, and bind an onerror event. In the next example we used the Selector to obtain a collection of elements, then iterated through the collection with the .each() function. Since we specified a :first selector, the collection should only have one entry.
Maybe you wish to booby-trap the page with a function that executes when the user decides to leave. The following example uses a Selector on the window object to bind that handler.
I’ll save additional tricks for the future. For now, read through jQuery’s API documentation. Pay close attention to:
- Selectors, and how to name them.
- Events, and how to bind them.
- DOM nodes, and how to manipulate them.
- Ajax functions, and how to call them.
Selector claims the title of Almighty, but like all demons its vanity belies its weakness. As developers, we harness its power whenever we use jQuery. Yet it yearns to be free of restraint, awaiting the laziness and mistakes that summon Inicere, the Concatenator of Strings, that in turn releases Selector from the confines of its web app.
Oh, what’s that? You came here for instructions to exorcise the demons from your web app? You should already know the Rite of Filtration by heart, and be able to recite from memory lessons from the Codex of Encoding. We’ll review them in a moment. First, I have a ritual of my own to finish. What were those words? Klaatu, bard and a…um…nacho.
p.s. It’s easy to reproduce the vulnerable HTML covered in this article. But remember, this was about leveraging jQuery to craft exploits. If you have a PHP installation handy, use the following code to play around with these ideas. You’ll need to download a local version of jQuery or point to a CDN. Just load the page in a browser, open the browser’s development console, and hack away!
<?php $s = isset($_REQUEST['s']) ? $_REQUEST['s'] : 'defaultWidth'; ?>
<html><head><script src="jquery.min.js"></script>
<script>
/* jQuery Selector Injection Demo
 * Mike Shema, http://deadliestwebattacks.com */
$(document).ready(function() {
  $("#main-panel").addClass("<?php print $s;?>");
});
</script></head><body>
<div id="main-panel">
<a href="#" id="link1" class="foo">a link</a>
<input type="hidden" id="csrf" name="_csrfToken" value="123">
<input type="text" name="q" value=""><br>
<input type="submit" value="Search">
<img id="footer" src="" alt="">
</div></body></html> | <urn:uuid:7185a86a-939a-4f4e-bacd-a07668beab12> | CC-MAIN-2017-04 | https://deadliestwebattacks.com/category/html-injection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00475-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.827276 | 1,417 | 2.78125 | 3 |
The Pew Internet & American Life Project has released its latest report on the Internet's impact on health and health care, "E-patients with a Disability or Chronic Disease."
The new survey data nails down what many people have long suspected -- people with chronic conditions are disproportionately offline, but once they are online, they are just as enthusiastic as other Internet users.
About a fifth of American adults say that a disability, handicap, or chronic disease keeps them from participating fully in work, school, housework, or other activities. Half of those living with a disability or chronic disease go online, compared to 74 percent of those who report no chronic conditions. Fully 86 percent of Internet users living with disability or chronic illness have looked online for information about at least one of 17 health topics, compared with 79 percent of Internet users with no chronic conditions.
Those with chronic conditions are more likely than other e-patients to report that their online searches affected treatment decisions, their interactions with their doctors, their ability to cope with their condition, and their dieting and fitness regimen.
After detailing their general online interests, the report focuses on how this special population uses the Internet to gather health information. Not surprisingly, once they are online, people with chronic conditions are avid e-patients.
The report also shows that e-patients with chronic conditions do not lack for information about their health concerns, but they are frustrated by the process of finding the right information at the moment they need it. | <urn:uuid:2f4c26e5-e83f-4e28-af8d-565e55ed071d> | CC-MAIN-2017-04 | http://www.govtech.com/health/Internet-Use-by-People-with-Disabilities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00383-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959582 | 302 | 2.609375 | 3 |
Pick up any magazine or newspaper, surf to any Internet technology or news site, turn on the TV and listen to the news and it becomes apparent that identity theft is a major problem. And not just individuals are suffering from this new crisis, but public and private organizations are also feeling the effects of this growing problem.
As recent cases in the media involving major organizations such as Bank of America, America Online, Berkeley University, Time Warner, and Ralph Lauren show, ID theft can have severe consequences, such as direct loss of revenue and stock decline. There are also major intangible side effects that can result from a breach of personal data, such as brand damage, loss of customer confidence, and decline in service. The question is, would you do business with an organization that had recently lost thousands or millions of users’ personal data to hackers and scam artists?
For most, the answer would be no. It is clear that something must urgently be done to reassure users, both internal and external, who connect to applications and data from any location using a number of different devices, such as laptops, home PCs, PDAs, and smart phones.
Where to start
It used to be that all attacks and security breaches took place on the edge of the organization’s network, centered on the firewall. So naturally, organizations focused on enhancing security in the network to ensure nobody could break through the outside perimeter.
Recently, this has all changed. A number of different threats have changed the way we think about security and how we protect the information most valuable to us.
- Trojans – programs that get installed on a user’s device and alter the behaviour of that device without the user knowing it.
- Keyloggers – programs that capture the keystrokes of a user and send them back to a third party.
- Screen-scrapers – programs that capture screen information on a user’s device.
- Password sniffers – programs that detect what passwords are being used.
- Viruses – code that infects a user’s device and often destroys data and settings on that device.
All these threats shift the burden away from the network and onto the end-user devices being used to connect to data and applications. Devices need to be assessed before even being allowed to connect to an organization’s network, and only if the device meets the set security policy should the user be allowed to proceed to the authentication stage.
Figure 1. The location of threats and attacks has moved from the edge of the network to user devices.
On the user devices, there are many requirements that organizations should put on devices to ensure ID theft is minimized. Only when the requirements are met should a user gain access to the network. Some of these requirements could include:
- Anti-virus software is installed and up-to-date.
- Spyware and Trojan checking software is installed.
- Latest operating system and patches are installed.
- Device is approved for entry.
Other things that could be checked, depending on the situation, are network configuration settings, domain and registry settings, and open ports on the device.
To minimize ID theft, security needs to start at the endpoint, and only when the endpoint is secure should users be allowed to proceed with entering user names and passwords.
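Conceptually, such a gatekeeper is just a list of checks that must all pass before the user ever reaches a login prompt. A Python sketch follows; the four functions are stand-ins for real posture queries (anti-virus status APIs, patch inventories, device registries), which vary by platform.

def av_running():        return True   # stand-in: query the AV product's status
def av_signatures_ok():  return True   # stand-in: compare signature date to policy
def os_patched():        return False  # stand-in: check patch level against a baseline
def device_approved():   return True   # stand-in: look up the device in an approved list

POLICY = {
    "anti-virus installed and running": av_running,
    "anti-virus signatures up to date": av_signatures_ok,
    "operating system fully patched":   os_patched,
    "device on the approved list":      device_approved,
}

failures = [name for name, check in POLICY.items() if not check()]
if failures:
    print("Access denied before authentication. Failed checks: " + ", ".join(failures))
else:
    print("Posture OK. Proceed to the authentication stage.")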
Preventing others from using your identity
A big step that organizations can take to establish trust between users and applications is to implement stronger authentication methods. The most basic authentication methods are very easy to break, even for novice hackers and password-cracking tools, but enforcing strong one-factor authentication, coupled with two- or three-factor authentication, can virtually guarantee that ID theft is minimized due to the sensitive and unique nature of the authentication methods.
Basic authentication requires users to input just a username and a password, and then everything will be available to them, including sensitive and confidential information. But making distinctions based on how users authenticate is a big step toward stopping sensitive information from being stolen. For example, an organization could set a policy that a user who authenticates with only one factor has access to just the most basic information, such as e-mail, while a user who authenticates at a higher level gets access to more applications and data: e-mail, sales systems, purchasing systems, etc.
A large percentage of stolen data comes from unhappy employees who leave backdoors open to themselves, where they only need the right username and password to get access to everything after they have left the organization. With two-factor authentication, which requires the user to own a unique possession such as a PDA, mobile phone, or hardware token (usually a password-generating device issued by a third party), it would be much harder for disgruntled employees to hack their way back into systems to cause damage.
Highly sensitive systems could even require an extra level of protection by adding biometric authentication, where the user needs to present a physical attribute unique to them, such as an iris or fingerprint.
In March of 2005, Techworld reported on a gang of scam artists who had been sending out millions of emails daily; once a user clicked on the links in the email, a Trojan keylogger was automatically installed on the user’s device, recording everything the user did and sending it back to the gang. The gang was so successful that it managed to profit by more than $37,000,000 before being shut down. Most of the profit came from users accessing online banking systems and making credit card purchases, which could be easily captured and exploited. Two-factor authentication could have ensured that the information stayed safe, because users would have been required to enter a one-time password tied to a device that they own.
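The "unique possession" in such schemes is usually a token computing time-based one-time passwords. The core of the algorithm (RFC 6238, building on RFC 4226) fits in a few lines of Python; the shared secret below is, of course, a made-up example.

import hmac, hashlib, struct, time

def totp(secret, interval=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"example-shared-secret"  # provisioned to both the token and the server
print(totp(secret))  # identical on both ends for ~30 seconds, worthless afterwards

Because each code expires within seconds and well-built servers reject reuse, a keylogger that captures one gains almost nothing.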
Several large financial institutions across the world are now starting to implement two-factor authentication to re-establish trust with their users, fearing that if nothing is done, profits will be lost, customer confidence will drop, and the brand will suffer long-term damage.
The icing on the cake: adding an extra layer of security
Most organizations think that providing users with secure authentication, along with the latest anti-virus and anti-spyware software, should take care of the problem of identity theft. The reality is that, with the changing paradigm of how users connect, security also needs to be present after the user ends the session or connection.
In the 1980s and 1990s it was easy to control what devices users used to connect with, but in today’s world, go into any city around the world and there are bound to be several internet cafés offering easy and cheap access, or walk through the airport and you are guaranteed to find a number of Internet kiosks offering quick and easy access to the Internet. Any time a user uses an “insecure” device, they could be leaving a trail of information behind, stored in cookies, URL history, temporary files, and even downloaded files. For organizations to offer mobile access from any location using any device, automatic clean-up of session data is imperative.
By combining session clean-up with an upfront device assessment, it becomes easy to detect whether the user is on a device approved by the organization. If the device is not approved, then once the user is done using the system to connect to e-mail or any number of other applications, the session data should be erased immediately from the device.
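Reduced to its essence, session clean-up is an exhaustive sweep of everything the session may have written to the device. A Python sketch; the artifact paths are placeholders, since every browser and access client stores its files somewhere different.

import os, shutil, tempfile

def scrub(paths):
    """Best-effort removal of the files and directories a session left behind."""
    for p in paths:
        try:
            shutil.rmtree(p) if os.path.isdir(p) else os.remove(p)
        except FileNotFoundError:
            pass  # nothing was written there during this session

session_artifacts = [
    os.path.join(tempfile.gettempdir(), "webmail_cache"),  # placeholder locations;
    os.path.join(tempfile.gettempdir(), "cookies.dat"),    # real clients document
    os.path.join(tempfile.gettempdir(), "downloads"),      # where they write data
]
scrub(session_artifacts)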
By simply adding this extra layer of protection, most organizations can safe-guard themselves against losing sensitive data to malicious attackers. Session clean-up should be added as part of a larger security strategy aimed at minimizing the loss of personal data, in combination with device assessment and strong authentication.
According to recent surveys, identity theft is seeing the largest increase of any kind of crime worldwide. Depending on what security methods are implemented by the organization you are doing business with and what relationship you have with it, secure access can be achieved. Securing access from any location, using any device, is not an impossible task; it can be accomplished by thinking about how users access applications and data in real-world scenarios. Only then can user trust be re-established and all the benefits of online communications be realized, resulting in maximum customer satisfaction, speedy collaboration, and significant competitive advantage. | <urn:uuid:f92738b3-ee1c-4906-bce5-cae3710e66e7> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2006/05/02/identity-theft---should-you-be-worried/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944435 | 1,705 | 2.625 | 3 |
While RAID-5 is still one of the most popular RAID levels, many people are turning to RAID-6 for their data storage needs. RAID-5 has its advantages over RAID-6, but the latter offers a stronger safeguard against failure. A RAID-6 array is just like any other storage device, though. It is not immune to failure and data loss. If you’ve lost data due to a RAID-6 array failure, our RAID-6 recovery experts can help you.
What Is RAID-6?
RAID-6 is essentially RAID-5 taken a step further. A RAID-5 array stripes its data across multiple disks and includes parity data in case one disk fails. A RAID-6 array does the same thing. But then it adds more parity data. Due to its second layer of parity, RAID-6 is tolerant of up to two drive failures. It does, however, require one more hard drive than RAID-5 to hold the same amount of data, as the parity data take up twice as much space. Whereas a four-drive RAID-5 array would have three drives’ worth of capacity, a four-drive RAID-6 array only has two.
One of the possible ways XOR and Reed-Solomon parity blocks can be distributed throughout a RAID-6 array
RAID-5 and RAID-6 both use XOR, or “exclusive or”, logic to provide parity. For parity calculations, XOR logic works at the bit level. Using the XOR function, all of the bits of one missing hard drive’s data can be reconstructed from the remaining drives. But this only works if one drive is missing. XOR logic can only go so far. Whenever more than one drive is missing, no amount of XOR parity calculation can fill in the rest of the gaps.
In a RAID-5 array with five hard drives, one out of every five blocks contains only parity data. The parity blocks are spread out across the drives to increase efficiency. In a RAID-6 array with the same number of drives, two out of every five blocks contain parity data. The extra parity blocks in a RAID-6 array don’t rely on XOR coding. Instead, they use Reed-Solomon error correction codes.
Reed-Solomon encoding in RAID-6 means that a second missing drive can be replaced by the calculations performed on the remaining drives long after the first drive has failed. When one drive is missing, XOR parity calculations on the other drives fill in the gaps. If another drive goes missing, Reed-Solomon parity calculations can recreate its data based on the parity data of the remaining drives (including the parity blocks filling in for the first missing drive).
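The XOR half of the scheme is simple enough to demonstrate in a few lines of Python. This toy uses three tiny data "drives" of three bytes each plus one parity "drive"; real arrays do the same thing block by block, and the Reed-Solomon half needs Galois-field arithmetic beyond a short sketch.

from functools import reduce

d0, d1, d2 = b"\x10\x20\x30", b"\x0a\x0b\x0c", b"\xff\x00\x7e"
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
parity = reduce(xor, [d0, d1, d2])  # what the array writes to its parity blocks

# Drive 1 fails. XOR of everything that remains rebuilds its contents exactly,
# because each parity bit is the XOR of the corresponding bits of all drives.
rebuilt_d1 = reduce(xor, [d0, d2, parity])
assert rebuilt_d1 == d1

# Lose a second drive, though, and XOR alone has too many unknowns. That is
# exactly where RAID-6's second, Reed-Solomon parity block steps in.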
There are ways in which RAID-6 is less desirable than RAID-5. Because it has to do more parity calculations each time it writes data, RAID-6 is slower than RAID-5. RAID-6 also requires more hard drives to have the same amount of space. But having more fault tolerance to the tune of two drives’ worth of parity data is nothing to sneeze at.
How Can a RAID-6 Array Fail?
Just because a RAID-6 array has more fault tolerance, however, doesn’t make RAID-6 failure impossible. While RAID-6 failure is certainly less likely, no level of RAID array is perfectly insulated against failure. And ironically, the one thing that is meant to prolong a RAID-6 array’s life can also hasten its demise.
When a hard drive inside a RAID-5 or RAID-6 array fails, it can be replaced. The RAID controller takes the fresh drive and begins running its parity checks on the other drives. Using the parity data, the controller turns the new drive into an exact duplicate of the old one. This process is referred to as “rebuilding” the RAID array. A RAID-6 array can be rebuilt if one or two drives have failed. However, there are risks to rebuilding a RAID-6 array.
A RAID-5 or RAID-6 array is in its most vulnerable state when it is being rebuilt. The still-functional drives must pull double duty while the new drive (or drives, in the case that two drives in a RAID-6 array have failed) is being integrated into the array. They are put under much more strain than usual. This can actually cause one or more drives to fail during the rebuild process. Furthermore, the time it takes to rebuild a RAID-6 array depends on the size of the drives in the array. As hard drive capacities increase, rebuild time rises, dramatically increasing a RAID-6 array’s window of vulnerability.
Multiple simultaneous hard drive failures are rare, but can occur. Hard drives can fail under the stress of a RAID rebuild. A sudden power surge or loss of power can cause several drives to crash at once. If the drives in your array came off the assembly line within minutes or days of each other, they could fail within minutes or days of each other too. Natural disasters and freak accidents can cause a RAID-6 crash. And RAID arrays like RAID-6 have no safeguards against data loss due to file deletion or reformatting.
The RAID-6 Recovery Procedure
RAID-6 recovery procedures are performed in our ISO-5 certified cleanroom area.
RAID-6 devices may be well-protected, but no data storage device is 100% failure-proof. A RAID-6 array is no replacement for a secure, off-site backup of your data. Fortunately, there’s no need to panic if you’ve lost data due to a RAID-6 crash. Our RAID-6 recovery engineers can help you.
Free RAID-6 Recovery Evaluation
Our RAID-6 recovery efforts begin with a free evaluation. We send you a free inbound UPS label to cover shipping costs. After one to two business days, our recovery engineers will have a statement of work for you. This statement of work includes a firm price quote, a probability of success, and an estimated time to completion. We will only move forward with the RAID-6 recovery procedure if you are comfortable with our terms.
Independent Analysis of Your RAID-6 Array’s Hard Drives
Our RAID-6 recovery engineers’ first goal is to create as complete a forensic image of all the drives in your array as possible. Any necessary repairs to the failed hard drives from your crashed RAID-6 array are made at our ISO-5 certified cleanroom workbenches. Highly trained and skilled engineers in our data recovery lab carry out the repairs. When rebuilding a RAID array, we never work directly with any of the original drives in the array. Our forensic imaging software is write-blocked, so we never alter any of the information on the original drives.
All of the hard drives in a RAID-6 array have special metadata written to them by the RAID controller. This metadata helps the RAID controller know exactly how the hard drives are arranged. Our RAID-6 recovery experts use this metadata to make sense of the arrangement of the drives in the array. Everything from the order of the drives to the location of the parity blocks and when each drive stopped working can be discerned from this metadata.
If there are any unrecoverable portions of the drives, our RAID-6 recovery experts work around them. Our goal is to provide you with as much functional and uncorrupted data as possible. There is no cost, upfront or otherwise, associated with our RAID-6 recovery process until we recover your critical data. If we are unable to get your important files back, you owe us nothing.
Reuniting You with Your Data
After you pay for our data recovery efforts, we extract your recovered data to a password-protected external hard drive. This hard drive is then shipped to you. You are the only party other than our customer service representatives who knows the password. This keeps your data secure in the event of an error occurring during shipping. We hold onto your data for five business days after your scheduled delivery, to give you enough time to make sure nothing has gone wrong. Once that grace period is over, we erase the data from our system. We make certain that you are reunited with your data as safely and securely as possible.
Ready for Gillware to Assist You with Your RAID-6 Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network. | <urn:uuid:cfd6866a-f2e3-4bdd-9adf-d73e55b7303e> | CC-MAIN-2017-04 | https://www.gillware.com/raid-6-recovery-services/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00437-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925326 | 2,285 | 2.640625 | 3 |
High density 802.11n wireless LAN
The 802.11n standard, ratified by the IEEE in 2009, delivers data throughput at up to 300 Mbps in both the 2.4 GHz and 5 GHz bands, a huge improvement over earlier incarnations of the 802.11 standard such as 802.11b, which delivered 11 Mbps in the 2.4 GHz band only.
The explosion of smartphones, tablets and other wireless devices in the workforce, and the growing number of workplaces adopting “bring your own device” (BYOD) policies, have combined with the capabilities of 802.11n to drive a large uptake in wireless LAN (WLAN) implementation in the enterprise.
As transmission speeds have increased and costs have come down, wireless LANs are often cheaper to install and manage than wired networks. However, the traditional multi-channel “micro-cell” approach still followed by many WLAN vendors can be complex to deploy and manage and places severe restrictions on wireless user densities. Vendors such as Fortinet have applied new approaches to the standard to deliver virtualised (virtual channel, virtual port) solutions that are simpler to deploy and manage, support significantly higher densities of users and deliver a wired-like experience. | <urn:uuid:4659b108-b9ef-4aeb-9d0f-00a95deabec1> | CC-MAIN-2017-04 | http://www.wavelink.com.au/technologies/high-density-80211n-wireless-lan.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00189-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940631 | 251 | 2.6875 | 3 |
Title pun intended.
Long ago, Maxtor was a big player in the hard drive market, until they were acquired by Seagate in 2006. In fact, they were the third largest hard drive manufacturer in the world just before the acquisition.
While they still exist as a subsidiary of Seagate, you’d be hard-pressed to find anyone buying a Maxtor drive these days.
When they were still abundant, Maxtor hard drives had a peculiar and not particularly useful feature that both delighted and frightened consumers. Occasionally, if there was an issue with the drive, it would start to make sudden, rapid polyphonic tones like an old cellphone.
For anyone unaware of this feature, it was startling and unpleasant to hear, probably because most people buy their hard drives for storage rather than as a means to find a 1990s ringtone.
In any case, this odd feature led to the spread of the name ‘musical Maxtor,’ a name that’s not been uttered in the mainstream for nigh on a decade now. Here’s a link to a YouTube video where you can hear a Maxtor making its music.
But… Why? And How?
Honestly, I couldn’t tell you why Maxtor chose to implement this feature. I suppose simply ‘because they could’ was enough justification. It’s hilarious in hindsight. In any case, while we can’t say why, we can certainly explain how.
The answer lies in the mechanism used to move the read/write heads back and forth across the platter. Inside a hard drive is a strong magnet, rigidly attached to the case. This creates a static magnetic field. At the end of the arm that holds the read/write heads, there’s a wound up coil of wire.
When current is applied to this coil, it generates its own magnetic field. This field acts against the static magnetic field created by the magnet. The force moves the arm, which is mounted on a swivel and holds the read/write heads on the end opposite the voice coil.
By varying the current to the arm’s coil of wires, you control the movement of the read/write heads.
The control is very fast and accurate. You can quickly accelerate the arm and very precisely control its location. This device – a loop of wire moving against a magnetic field as it is supplied current – is a voice coil. It got its name because it’s used to generate sound in speakers.
As it turns out, you can control the movement of a voice coil so well that its back-and-forth movements create sound waves when connected to something that vibrates well, such as a speaker cone.
In the case of the musical Maxtor, the drive’s arm is getting a series of electrical pulses through its voice coil. These signals cause the read/write heads to vibrate, and you’re hearing the music.
If you were to have one of these Maxtors running without its cover on and you put your finger on the center of the spindle, you’d hear the same tones. It’s the electrical signal the drive sends to the arm when it’s stuck, and it ends up sending the arm into song.
Other drives typically do not send electrical signals to move the read/write assembly when the platters are not spinning. If you were to experiment with a contemporary Western Digital hard drive, for instance, you’d see that the drive waits to confirm that its platters are spinning at the correct rotational speed before it directs its read/write assembly to move. That’s a design that makes more sense to us.
Unique in Practice, Not Potential
Maxtor hard drives are not unique in their ability to make sounds with the voice coil. Any hard drive has the ability to turn musical. You just have to send an electrical signal to the voice coil that corresponds to whatever sounds you would like to make.
There are plenty of demonstrations on YouTube of people connecting speaker wires from an amplifier to a hard drive’s voice coil, and having the hard drive turn into a speaker. Obviously you shouldn’t try this if you ever hope to use the hard drive again or if you have any valuable data on it.
Apart from all drives being able to make music, to the best of my knowledge, Maxtors ARE the only hard drives that intentionally come with this musical feature. It seems an odd quirk of the tech industry that large corporations are able to get away with ridiculous things like a mainstream musical hard drive, but it’s a quirk I greatly appreciate.
Fortunately, the spirit of ‘let’s do it because we can’ has yet to leave the tech industry. In one example, researchers in Canada recently developed a compact hitchhiking robot named hitchBOT, simply for the purpose of having a cute little robot make its way around the country (and Europe and Canada before that). Unfortunately, hitchBOT was mercilessly destroyed by someone grumpy in Philadelphia, but there are plans to rebuild it.
Another example from 2009: Google rented hundreds of goats to mow the lawn at their headquarters, rather than use lawn mowers.
While it’s vastly misguided to attribute this cavalier, fun-loving spirit in the tech industry to Maxtor, since it was around long before them, it’s still nice to hear these stories about tech companies and enthusiasts that have some fun with their products. Maxtor may not be around in any real capacity these days, but fear not. We still have plenty of techies to carry the torch of tech-related hijinks well into the future.
Oh and P.S. Yes, Gillware is able to perform data recoveries on Maxtor drives (and pretty much all non-Maxtor drives). | <urn:uuid:f42523dd-befc-4535-957f-99b05ffbcbab> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery/a-hard-drives-voice-coil-and-the-musical-maxtor-how-to-make-a-readwrite-arm-sing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961301 | 1,237 | 2.59375 | 3 |
Network Address Translation is the general form of IP Masquerading. A Firewall or Router changes the source IP address and TCP port number on packets for packets that it forwards and that match some set of rules. In turn, packets sent back to the Firewall or Router are re-routed back to the system which originated the session.
While IP Masquerading generally only works for a single "inside" address range, and maps TCP port numbers and IP addresses in that range to a single IP address (its own) and different TCP port numbers, Network Address Translation can map from multiple IP address ranges to multiple IP addresses. | <urn:uuid:dc196230-9653-46e6-8d1d-1af7ae323580> | CC-MAIN-2017-04 | http://hitachi-id.com/concepts/network_address_translation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894496 | 128 | 2.9375 | 3 |
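Conceptually, the state such a device keeps is a two-way table mapping (inside address, inside port) pairs to the (outside address, outside port) pairs substituted for them. A toy Python model, with made-up addresses:

import itertools

public_ip = "203.0.113.5"           # the router's own address (example value)
next_port = itertools.count(40000)  # pool of outside ports to hand out
out_map, in_map = {}, {}            # forward and reverse translation tables

def translate_outbound(src_ip, src_port):
    """Rewrite an outgoing packet's source, remembering how to undo it."""
    if (src_ip, src_port) not in out_map:
        mapped = (public_ip, next(next_port))
        out_map[(src_ip, src_port)] = mapped
        in_map[mapped] = (src_ip, src_port)
    return out_map[(src_ip, src_port)]

def translate_inbound(dst_ip, dst_port):
    """Route a reply back to whichever inside host owns the session."""
    return in_map.get((dst_ip, dst_port))  # None means no session: drop the packet

print(translate_outbound("10.0.0.7", 51515))   # ('203.0.113.5', 40000)
print(translate_inbound("203.0.113.5", 40000)) # ('10.0.0.7', 51515)

Generalizing from IP Masquerading to full Network Address Translation mostly means choosing the outside address and port from rule-dependent pools instead of always substituting the device's own address.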
WANG Z., China University of Petroleum - Beijing
WANG Z., Northeast Petroleum University
LIU C., Northeast Petroleum University
WANG W., Northeast Petroleum University
And 3 more authors.
Mining Science and Technology | Year: 2010
Breccia lava is a type of rock transitional between volcanic clastic rock and lava. Its physical properties are comparatively special, while the nature of its genesis and the formation of reservoirs, deposits and concentrations vary considerably. In order to discover the different geneses of breccia lava reservoirs, we studied the rock constituents, structures, diagenesis and reconstruction of breccia lava, from tectonic movements to mineral depositions. From our analysis, we conclude that the main type of reservoir is a secondary emposieu, where rock composition, structures, diagenesis and tectonic action are the major contributing factors to the deposition of breccia lava and its properties. Diagenesis constrained the extent of conservation of the primary pores of breccia lava and produced secondary porosity. The role of tectonics was played out through the formation of tectonic fissures inside the breccia lava, which affected the permeability of the breccia lava to fluids. The constituents of the lava rock directly determine the type of diagenesis and its response to tectonic action. Given appropriate rock composition and structures, constructive formation occurred in the breccia lava. Breccia lava also responded proactively to tectonic movements, forming the best-quality collective reservoir agglomerations, with vast spaces and strong permeability. © 2010 China University of Mining and Technology.
Liu C., Daqing Petrochemical Company
Wang W., Daqing Petrochemical Company
Yi L., Daqing Yushulin Oilfield Development Co.
Petroleum Refinery Engineering | Year: 2013
PetroChina Daqing Petrochemical Co., Ltd. has successfully disconnected and reconnected the external catalyst cooler of a 1.0 MM TPY RFCC unit, for the first time in the unit's 20 years of operation since it came on stream. As leakage of the discharge valve of the cooling water pump made it impossible to maintain the mechanical seal of the pump, it was planned to disconnect the catalyst cooler and the associated thermal system to replace the valve without an unscheduled shutdown of the RFCC unit, and thereby realize long-term operation. In order to prevent overheating and rupture of the catalyst cooler tubes and to minimize the impact on tube service life, pumped cooling water was used to reduce the catalyst cooler temperature to below 200 °C after the catalyst cooler was disconnected. When the catalyst cooler was reconnected, the temperature difference between the feed water and the catalyst cooler was only dozens of degrees centigrade. The temperature rise and fluidization process were slow, and the service life of the catalyst was ensured. One year of operation shows that this plan is reliable and safe.
Wang Y.-C., Northeast Petroleum University
Wang Y.-C., China University of Petroleum - Beijing
Xu G.-B., Northeast Petroleum University
Liu L.-F., China University of Petroleum - Beijing
And 2 more authors.
Zhongguo Shiyou Daxue Xuebao (Ziran Kexue Ban)/Journal of China University of Petroleum (Edition of Natural Science) | Year: 2011
According to the geochemical characteristics of source rocks and crude oils, the oil-source correlation of the Putaohua and Fuyang oil layers in the South Songzhan area, Songliao Basin, was analyzed. The results show that the oils of the Putaohua and Fuyang oil layers in the South Songzhan area and in the Sanzhao depression have the same source: they are mainly derived from the hydrocarbon source rock of the first member of the Qingshankou formation, and secondarily from the second and third members of the Qingshankou formation. Inside the oil source area of South Songzhan, the oil of the Putaohua layer comes mainly from oil generated in the first member of the Qingshankou formation, and it migrates along the effective faults driven by buoyancy in water. Under the effect of paleo-overpressure, the oil of the Putaohua layer, together with the oil expelled from the Qingshankou formation, can migrate downward into the Fuyang payzone along the T2 faults. The oil from the Sanzhao depression, the large hydrocarbon-generating depression of the basin, can be effectively transported to the South Songzhan area through a migration path of faults and sand bodies.
Ma Z., China National Petroleum Corporation
Xie Y., China National Petroleum Corporation
Liu Y., China National Petroleum Corporation
Wang D., China National Petroleum Corporation
And 4 more authors.
Zhongnan Daxue Xuebao (Ziran Kexue Ban)/Journal of Central South University (Science and Technology) | Year: 2015
In order to fully understand the resource potential of the continental margin basins in eastern South America, through hydrocarbon accumulation factors analysis of the 19 continental basins, the petroleum features of the continental margin basins in eastern were systematically concluded, based on which the resource potential of these basins was evaluated by taking play as basic evaluation unit. The results show that: 1) the eastern margin is through rift-transition-continental structural evolution periods; 2) two major source rocks i.e. the Cretaceous rift lacustrine source rock and continental margin marine source rock are developed; resources mainly concentrate in Cretaceous (69%) and Tertiary (29%) reservoirs; Cretaceous interlayer mudstone and tertiary marine shale are the main seal rocks, and salt develops in the middle basins is another major regional seal; resources mainly distribute in the central salt developed basins (about 96.32% of the total recoverable reserves of continental margin basins); 3) the total undiscovered oil and gas resources are 132 451 Mmbo, mainly in Campos, Santos, Espirito Santo, Guyana and Malvinas in plane and the upper play in vertical; the upper play of Campos basin is the most favorable exploration area, and the middle and bottom plays of Santos are the secondary targets. ©, 2015, Central South University of Technology. All right reserved. Source
Pi Y.,Northeast Petroleum University |
Guo X.,Northeast Petroleum University |
Liu Q.,budget center |
Liu L.,Northeast Petroleum University |
And 2 more authors.
Energy Education Science and Technology Part A: Energy Science and Research | Year: 2014
There is an inevitable relationship between the resistivity and water saturation for sandstone reservoirs. For convenience, in this paper, it is the relationship between resistance and oil saturation that was used to characterize their relationship. Following factors are considered: temperature, permeability and porosity, formation water salinity, electrode spacing and displacing medium, which factors may affect the correspond relationship between resistance and oil saturation to some extent. So the experiments on the influence of above factors were conducted. The experimental results showed that: the greater the permeability and porosity, the less of resistance value; between 25 °C; to 65 °C;, the temperature has little effect on the resistance value of the cores; the influence of displacing medium to the resistance mainly depends on the salinity; the resistance value will increase with the distance between electrodes simultaneously. © Sila Science. All Rights Reserve. Source | <urn:uuid:a4b6f068-4357-4306-86aa-ccde5029040c> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/daqing-yushulin-oilfield-development-co-2454090/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00273-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910758 | 1,568 | 2.515625 | 3 |
Emerging utility technologies point to importance of data
Discussions based around emerging trends in the utility industry, particularly power delivery, tend to center around the rise of the smart grid. However, there is a layer of innovation taking place behind the smart grid that can have an even greater impact on sustainability – renewable resources.
Considering renewable resources
Renewable energy has been around for a while, particularly in the form of wind and solar power, but also with thermal and hydro power as well. These power generation methods, however, have been severely limited for a few key reasons. In the case of wind and solar energy, the issue is that power delivery is intermittent and unpredictable. Smart grid is helping to overcome this, but innovation is curbed by issues with energy storage. At the same time, hydro and thermal power are limited by geographical accessibility. Recent advances in energy harvesting could overcome this problem, but data needs to be central to this progress.
New solutions for hydroelectricity
According to a recent IEEE Spectrum report, several companies are collaborating to develop underwater electric generators. These turbines would be placed in the depths of the ocean, generating power through turbines that are turned by deep-sea currents.
Data implications of the technology
The potential gains offered by underwater turbines are incredible because they would provide almost constant power generation without adversely impacting the environment. However, managing a network of deep-sea turbines can present major challenges. In particular, vast arrays of sensors would likely be necessary to track water pressures, currents and wildlife in the vicinity of the devices. These factors could all contribute to damages to the turbines that nobody can notice by simply checking on the turbines every week or so. The costs would be too great. As a result, utility companies using the new technology would depend heavily on data delivered from sensors and monitoring devices to individuals who are managing the solutions.
BPM solutions for utilities can play a major role in turning the initial data delivered by these systems into actionable information that enables cost-effective maintenance and management. As more advanced utility technologies that take advantage of natural resources in extreme environments emerge, BPM software can function as a strategic asset to utility providers trying to benefit from these solutions.
Director of Corporate Communications | <urn:uuid:52a5a3eb-9c34-491f-bff3-e55c25bf4465> | CC-MAIN-2017-04 | http://www.appian.com/blog/bpm/emerging-utility-technologies-point-to-importance-of-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00575-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948083 | 446 | 2.953125 | 3 |
Let me state right from the start that this article represents a change of mind for me on some aspects of remote storage. In the past, transfer speeds were woefully inadequate for moving large amounts of data. Secondly, before recent economic changes brought about a shakeout, some companies pushing this technology had questionable staying power and capabilities. However, things have changed.
Of course I've always been an advocate of backing up data. We all know that it's not a question of whether our hard drive will fail but a question of when it will fail. Everyone has had a piece of software corrupt a file and there are even rumors that some of us have been heard to mutter, "Wait a minute, what file name did I just save that to?"
Backups are essential, but they're also vulnerable. Computers are a prime target for burglars these days and airports abound with thieves specializing in the theft of notebook computers. If your only recent backup is located next to your computer at home or in the office, or in the same bag you'd been carrying your notebook in before it was stolen, your hardware loss is probably secondary to your data loss. A backup that is not itself safe and accessible in an emergency is of no use.
Add to this the various natural disasters that can befall computing systems from stand alone SOHO computers to server farms and the wisdom of storing your backups at some distance becomes evident.
The remote storage of important data is not something that began in the Twentieth Century with the invention of computers. Storing important records some distance from the place where they originated appears to have begun shortly after writing was invented by the ancient civilizations thousands of years ago. The most famous ancient archive was the library at Alexandria, Egypt. It stored copies of books from all over the world. It was burned, twice, and many ancient books were lost. There are lessons to be learned from this.
Since security of the backup is important the premises where the backup is stored is important as is the medium on which it's stored. A remote storage facility located in a locale that experiences frequent floods or has an unreliable power grid may give you nothing more than a false sense of security.
One of the most troubling aspects of remote storage has always been the expense involved. For remote storage you need to have a place to shelter the backups, and that has been a stumbling block for many companies and most individuals.
Of course large companies have been handling their own remote storage for years. When I administered a network for a very large bank in the eighties we used to mail one of its full 120 MB backup tapes to a storage facility once a month. They were a bank so real estate with air conditioned vaults was no problem. Smaller firms had to hope that someone responsible could be persuaded to take a backup home, and not lose it.
Things have evolved over the past several years. Desktop units that had 30 to 40 MB of storage now have a thousand times that. Server capacity has grown at an even greater rate. The days of mailing 120 MB tapes to a remote vault are over. Remote storage today usually means accessing a Storage Area Network (SAN). Larger firms do regular, enterprise-wide backups to SANs and they do them electronically. In the best of circumstances they have dedicated fiber optic links to their SAN server farm and everything is automated. these large firms also have the personnel needed to oversee and maintain their SAN.
Of course there's a great deal of expense involved in owning your own SAN and making use of it. You need the real estate to base the SAN itself in, and you need to be able to lay that fiber underground -- not an easy accomplishment in some larger cities. Additionally, you need the computer hardware and staff to maintain and operate it. In this expensive scenario you own and control everything associated with your remote storage and the security measures you take are up to you.
Most companies cannot begin to afford this type of solution. But the need for remote storage remains.
Who Are These Guys?
This brings us to the commercial SANs that will provide remote storage of your data enterprise-wide, if necessary, for a fee. This solution brings with it a number of considerations. Foremost among these considerations should be "Who are these guys?"
Anyone who contracts out the remote storage of their critical data to a SAN needs to do some investigating beforehand. Does the SAN operator look like they're going to be around for a while? We all know that shakeouts happen and some smaller companies offering remote storage have already gone under. This consideration may sound odd, but is the price the company is offering too low? If the price looks too good to be true that may mean that the remote storage provider's business model may be flawed and the result could be a big price increase down the road or a sudden notice to get your data off their servers before the company goes down for the final count.
An issue that sparks the biggest concern among many potential users of SANs is actually one of the most easily addressed: What about the security of the remotely stored data in terms of unauthorized access? Reputable SANs offer software that will encrypt the data on your end before sending it to the remote storage facility. If you really want to make sure that you have control of your data's integrity you can always encrypt it on your end with encryption software you're familiar with and fully trust, and then send it on to the SAN.
How you get the data to the SAN is another question, and is frequently a major sticking point. We all know that databases and ancillary documents can be huge these days. Obviously if you wanted to make an enterprise-wide 100 GB upload to your SAN every day you could not use your 56 Kbps modem to do it. In a case like this you'd need to make arrangements with the SAN for uploading your data to them. They're usually experienced in this issue and can help to tailor your backups to fit your bandwidth. Larger companies might just rent fiver optic cables or T3 lines, while others may need to strategize on exactly what gets sent to the SAN on what days.
Eventually, we get down to the scale of a SOHO or individual user. At this level there is unlikely to be a lot of money to throw at the problem. As a result SOHOs are probably going to be dealing with resellers of SAN services. Most resellers have a rate card displayed on the Web which lists how much you can store with them for what price, and other terms.
Whereas large companies spend millions on remote storage services the SOHO crowd can get 10 MB to 100 MB of remote storage for between $6.95 and 19.95 a month. There are quite a few variables among these smaller companies, and here is where I still have some qualms.
Whom Do You Trust?
Whom to trust with your valuable data can be a real question here. Some of the smaller stand-alone SAN companies are themselves vulnerable to economic conditions. If they go belly up one afternoon you could be left either scrambling to move your remotely stored data or out of luck entirely. The advantage is that you can probably get a hold of someone at the facility if you need to. Some of these companies claim to be protecting government secrets, one will also store your wine in their facility. It's good to remember that quirky need not mean unreliable. Some of these stand alone companies also offer additional computer related services, such as digitizing and storing your existing paper documents and the creation of disaster recovery programs.
The resellers are usually reselling a piece of the remote storage space they rent from one of the larger SANs and they're also a varied lot. Besides being resellers, some of these companies also outsource their billing, accounting, support and seemingly everything else. This particular sort of operation tends to be one guy with a Web site selling what he sees as a commodity. While this may not sound great at first blush it may be good enough for the small user. It depends on who the SAN really is.
So the answer to the question of who needs remote storage may be just about everyone. For large businesses that can handle the capital outlay for hardware, maintenance and software it's a necessity and most already do it. For middling companies that can afford vendors to handle remote storage it's an equal no brainer. For the SOHO and the backup conscious individual there are some details that have to be figured out, but as DSL and other broadband technologies spread the prospect of putting significant amounts of important data into remote storage within a reasonable amount of time becomes more attractive. The ROI of not losing days or weeks of work from some sort of disaster can be enormous, and then there's the peace of mind you get from knowing that your most critical data is safe.
R. Paul Martin has been a network administrator for a Fortune 100 company. He works as a freelance writer and as a technology consultant. | <urn:uuid:120432a5-58fb-4845-bc71-d0fc8942b23c> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/785281/Remote-Possibilities.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00235-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970078 | 1,825 | 2.53125 | 3 |
Apparently, the US Department of Energy (DOE) is revising its timetable for deploying its first exaflops-capable supercomputers. According to William Harrod, Research Division Director of the DOE’s Office of Science Advanced Scientific Computing Research (ASCR) program, the agency is now looking at the 2020 to 2022 to reach get its first exascale machines up and running. That effectively means the US is delaying its plans for this next-generation technology by two to four years.
Harrod outlined the impact of the delay at the Supercomputing Conference (SC12) last week in Salt Lake City, Utah. In an article posted today in Computerworld, Harrod described the slippage thusly: “When we started this, [the timetable was] 2018; now it’s become 2020 but really it is 2022.”
The DOE is in the process of writing up a proposal, known as the Exascale Computing Initiative (ECI), which is expected to be presented to Congress in February of next year. Of course, there’s no guarantee that the feds will actually act on the proposal in a way that meets the agency’s needs.
According to the Computerworld report, the effort is expected to cost in the neighborhood of a billion dollars over the next several years. Given the failure of the Obama White House and Congress to come to terms on budgets over the previous four years, that doesn’t bode well. Even at best, funding for the work won’t be put in place until October 2013, as part of the fiscal 2014 budget.
Although the budget stalemate that has gripped Washington for the last four years has not helped, a more fundamental problem is that it’s been difficult to make the case for exascale systems. Despite Obama’s 2011 State of the Union address invoking the Russian Sputnik challenge as a model for lighting a fire under US R&D, there is little public outcry for more federal spending in technology. Scientists insist that exascale machines will enable advancements in an array of fields – biology, energy, physics, material science, national security, and climate research; but such talk has not captured the public imagination to the degree that would force policymakers to act.
Unfortunately, to develop such supercomputers by the end of the decade requires actions now. While the hardware may indeed become available by 2018 – Intel, Cray and others have stated their intentions to supply such hardware in that timeframe – the software models for exascale computing haven’t been developed yet and will require a long lead time.
China is also working on these systems and intends to field an exaflop-capable machine around the same time – perhaps using domestically produced technology. Governments in Japan and Europe have plans to field exascale machines around the end of the decade as well Those nations have the same daunting challenges as the US, but if the Americans dawdle, it’s not inconceivable that the first exaflop machine will be in Europe or Asia.
In fact, if the TOP500 trends are to be believed, a supercomputer that is able to execute a Linpack exaflop will appear somewhere in world by 2019. Whether that machine becomes a platform for exascale computing or just a container for a collection of petascale and terascale applications is another matter. | <urn:uuid:a545f05d-d32b-4cb5-b124-a345216d39a6> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/11/26/doe_pushes_back_plans_for_exascale_supercomputing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00143-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949028 | 702 | 2.578125 | 3 |
Nanotechnology has found its way into an amazingly diverse number of industries. It's used for clothing, sunscreen, rocket propellants, packaging for beer bottles, synthetic bones and many other things. But never to recreate timeless art -- until now. Scientists from the Georgia Institute of Technology have used nanotechnology to recreate Leonardo da Vinci's masterpiece, Mona Lisa. But don't expect to see "Mini Lisa," as the GIT team calls it, anytime soon. The "painting" measures a mere 30 microns wide. How small is that? The head of a pin is about 1,000 microns wide. The researchers weren't trying to make a killing creating essentially invisible knock-offs of classic art, though I hear there's a thriving underground market for that. Rather, they were experimenting with changing surfaces on a molecular level. The scientists say the project shows that nanotechnology someday may be used to manufacture devices. Here's a good description of how the nanotechnology scientists did it, courtesy of National Monitor (where you also can find out who has The Best 6 Breasts of Hollywood; even The Huffington Post doesn't have that range!):
Scientists created the world’s smallest version of the Mona Lisa using an atomic force microscope and a technique called ThermoChemical NanoLithography (TCNL). To create Mini Lisa, the scientists located a heat cantilever at the substrate surface to create a series of limited nanoscale chemical reactions. By altering only the heat at each location, Keith Carroll controlled the number of new molecules that were created.
The GIT team reportedly is working on a nanoscale version of that hilarious "dogs playing poker" painting. Proprietors of nano-sized fleabag motels wait with bated breath. Now read this: | <urn:uuid:8fc8c1fc-5214-4ff7-905e-48066afe8421> | CC-MAIN-2017-04 | http://www.itworld.com/article/2707893/enterprise-software/how-many-mona-lisas-can-fit-on-the-head-of-a-pin-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00264-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950512 | 366 | 3.203125 | 3 |
2.1.6 What is a hash function?
A hash function H is a transformation that takes an input m and returns a fixed-size string, which is called the hash value h (that is, h = H(m)). Hash functions with just this property have a variety of general computational uses, but when employed in cryptography, the hash functions are usually chosen to have some additional properties.
The basic requirements for a cryptographic hash function are as follows.
- The input can be of any length.
- The output has a fixed length.
- H(x) is relatively easy to compute for any given x.
- H(x) is one-way.
- H(x) is collision-free.
A hash function H is said to be one-way if it is hard to invert, where ``hard to invert'' means that given a hash value h, it is computationally infeasible to find some input x such that H(x) = h. If, given a message x, it is computationally infeasible to find a message y not equal to x such that H(x) = H(y), then H is said to be a weakly collision-free hash function. A strongly collision-free hash function H is one for which it is computationally infeasible to find any two messages x and y such that H(x) = H(y).
For more information and a particularly thorough study of hash functions, see Preneel [Pre93].
The hash value represents concisely the longer message or document from which it was computed; this value is called the message digest. One can think of a message digest as a "digital fingerprint" of the larger document. Examples of well known hash functions are MD2 and MD5 (see Question 3.6.6) and SHA (see Question 3.6.5).
Perhaps the main role of a cryptographic hash function is in the provision of message integrity checks and digital signatures. Since hash functions are generally faster than encryption or digital signature algorithms, it is typical to compute the digital signature or integrity check to some document by applying cryptographic processing to the document's hash value, which is small compared to the document itself. Additionally, a digest can be made public without revealing the contents of the document from which it is derived. This is important in digital timestamping (see Question 7.11) where, using hash functions, one can get a document timestamped without revealing its contents to the timestamping service.
Damgård and Merkle [Dam90] [Mer90a] greatly influenced cryptographic hash function design by defining a hash function in terms of what is called a compression function. A compression function takes a fixed-length input and returns a shorter, fixed-length output. Given a compression function, a hash function can be defined by repeated applications of the compression function until the entire message has been processed. In this process, a message of arbitrary length is broken into blocks whose length depends on the compression function, and ``padded'' (for security reasons) so the size of the message is a multiple of the block size. The blocks are then processed sequentially, taking as input the result of the hash so far and the current message block, with the final output being the hash value for the message (see Figure 2.7). | <urn:uuid:763f22d8-5e7b-4a63-a148-8a37b847631e> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-a-hash-function.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00172-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900746 | 690 | 4.46875 | 4 |
Really Simple Syndication (RSS) is an XML format designed for sharing news and other Web content. Think of it as a way to subscribe to websites, and elements of websites, that you'd like to receive on an ongoing basis.
Of course, you'll need an RSS reader to view these feeds. They typically come in two varieties: Desktop and Web-based. Desktop aggregators are applications that reside on your computer and store and present all the feed content locally. Web-based readers are aggregators that you view through your browser and the feeds and content are stored on the systems hosting the application.
If you need more details on RSS, check out this article. | <urn:uuid:6a15ac1f-447d-4eea-a061-26572337e822> | CC-MAIN-2017-04 | https://devcentral.f5.com/syndication | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00108-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958142 | 137 | 2.796875 | 3 |
In the early times, fiber optic signal transmission is one wavelength in one fiber glass. People finally found a way, which is known the widely used WDM(Wavelength Division Multiplexing) technology nowadays, to enable various kinds of fiber optic light to transmit in a single fiber glass in different wavelength.
In early WDM systems, there were two IR channels per fiber. At the destination, the IR channels were demultiplexed by a dichroic (two-wavelength) filter with a cutoff wavelength approximately midway between the wavelengths of the two channels. It soon became clear that more than two multiplexed IR channels could be demultiplexed using cascaded dichroic filters, giving rise to coarse wavelength-division multiplexing (CWDM) and dense wavelength-division multiplexing (DWDM). In CWDM, there are usually eight different IR channels, but there can be up to 18. In DWDM multiplexer, there can be dozens. Because each IR channel carries its own set of multiplexed RF signals, it is theoretically possible to transmit combined data on a single fiber at a total effective speed of several hundred gigabits per second (Gbps).
The first WDM systems were two-channel systems that used 1310nm and 1550nm wavelengths. Shortly afterwards came multi-channel systems that used the 1550nm region – where the fiber attenuation is lowest. These systems used temperature stabilized lasers to provide the needed channels count. Because of the high costs involved, DWDM was only economical for long-haul applications. Therefore, most optical systems vendors competed at providing the highest channel count and the longest distances.
The need for WDM solutions in the metro region became stronger and a new alternative technology emerged. Transmode was in the forefront in introducing a solution based on less expensive transmitters without temperature stabilization and where the wavelengths were more separated in the spectrum, CWDM. Another Transmode solution is based on a patented low-loss DWDM architecture on single-fiber configurations where the expensive Optical Amplifiers can be omitted.
WDM has revolutionized the cost per bit of transport. Thanks to WDM, fiber networks can carry multiple Terabits of data per second over thousands of kilometers – at cost points unimaginable less than a decade ago. WDM technology has the advantages of high capacity, long reach distance and ease of use. WDM is now recognized as the Layer 1 transport technology in all tiers of the network. It offers low-cost transport for all applications and services, scales easily in terms of capacity and reach and provides rapid protection against any fiber plant failure. Fully transparent to any bitrate and protocol, WDM is the natural integration layer for modern networks. This allows networks to become more manageable, operate more efficiently and transport considerably higher bandwidth for high-volume data transmission.
WDM is similar to frequency-division multiplexing (FDM). But instead of taking place at radio frequencies (RF), WDM is done in the IR portion of the electromagnetic spectrum. Each IR channel carries several RF signals combined by means of FDM or time-division multiplexing (TDM). Each multiplexed IR channel is separated, or demultiplexed, into the original signals at the destination. Using FDM or TDM in each IR channel in combination with WDM or several IR channels, data in different formats and at different speeds can be transmitted simultaneously on a single fiber.
The WDM structure looks similar like a common optical fiber coupler, but they are different because optical couplers divide the same wavelength fiber optic signals by different ratios such as 1:99, while WDM divide the two different wavelength fiber optic light as shown on the above picture.
The use of WDM technology can multiply the effective bandwidth of a fiber optic communication system, but its cost must continue to fall or using multiple fibers bundled into a cable. | <urn:uuid:29d69f38-674e-46d1-9b93-a369845dad6a> | CC-MAIN-2017-04 | http://www.fs.com/blog/development-of-wdm-technology.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939224 | 802 | 3.734375 | 4 |
When it comes to ushering in the next-generation of computer chips, Moore’s Law is not dead, it is just evolving, so say some of the more optimistic scientists and engineers cited in a recent New York Times article from science writer John Markoff. Despite numerous proclamations foretelling Moore’s Law’s imminent demise, there are those who remain confident that a new class of nanomaterials will save the day. Materials designers are investigating using metals, ceramics, polymeric and composites that organize via “bottom up” rather than “top down” processes as the substrate for future circuits.
Moore’s Law refers to the observation put forth by Intel cofounder Gordon E. Moore in 1965 that stated that the number of transistors on a silicon chip would double approximately every 24 months. The prediction has lasted through five decades of faster and cheaper CPUs, but it’s run out of steam as silicon-based circuits near the limits of miniaturization. While future process shrinks are possible and 3D stacking will buy some additional time, pundits say these tweaks are not economically feasible past a certain point. In fact, the high cost of building next-generation semiconductor factories has been called “Moore’s Second Law.”
With the advantages of Moore’s Law-type progress hanging in the balance, semiconductor designers have been forced to innovate. A lot of the buzz lately is around “self assembling” circuits. Industry researchers are experimenting with new techniques that combine nanowires with conventional manufacturing processes, setting the stage for a new class of computer chips, that continues the price/performance progression established by Moore’s law. Manufacturers are hopeful that such bottoms-up self-assembly techniques will eliminate the need to invest in costly new lithographic machines.
“The key is self assembly,” said Chandrasekhar Narayan, director of science and technology at IBM’s Almaden Research Center in San Jose, Calif. “You use the forces of nature to do your work for you. Brute force doesn’t work any more; you have to work with nature and let things happen by themselves.”
Moving from silicon-based manufacturing to an era of computational materials will require a concerted effort and a lot of computing power to test candidate materials. Markoff notes that materials researchers in Silicon Valley are using powerful new supercomputers to advance the science. “While semiconductor chips are no longer made here,” says Markoff referring to Silicon Valley, “the new classes of materials being developed in this area are likely to reshape the computing world over the next decade.” | <urn:uuid:cbc9074e-6665-43f1-b953-fbc3d99ef91c> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/10/moores-law-post-silicon-era/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00438-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935885 | 561 | 3.625 | 4 |
It's clear that the next generation of project management will need to be at the heart of this reinvention because projects are the only true mechanism for sustainable change. Knowledge work (work that uses ideas, expertise, information and relationships to achieve tasks) is the central ingredient to today's enterprises and enterprise projects. Unfortunately, the interdependence and changing nature of this work does not respond well to the scientific management methods that helped companies successfully manage projects over the past century.
The implication of this for large organizations and society overall is huge. According to the Project Management Institute, $12 trillionnearly 20 percent of the worlds GDPis invested in projects. And with this work, systematically improving productivity within and across organizations is the most common bottleneck. This bottleneck causes high enterprise project failure rates, which, for large enterprise technology projects, is as high as 70 percent according to Standish Group research.
Whilst Peter Drucker didn't focus his writing specifically on technology projects, there is nothing in large enterprises that exposes the function and dysfunction of "knowledge work productivity" more than these types of projects, given the their ever-changing inter and intra-organizational complications.
Traditional project management was designed for what Peter Drucker termed "manual work" and is based on the scientific management principles developed by Frederick Taylor in the early 1900's. This type of work―like the work required for building an assembly line―was and is visible, specialized and stable. Knowledge work on the other hand is invisible, holistic, and ever changing. Unlike manual workers who mainly use their hands and backs to get work done, knowledge workers use their situational knowledge to accomplish goals in dynamic environments.
Knowledge work needs to be managed differently than manual work because there are so many ways for it to go off track. A few common examples of unproductive knowledge work include:
To productively manage the often invisible and ever changing nature of knowledge work projects better, Drucker advised executives to take a more holistic approach, understanding that large projects, like business itself, is more of a social science. He emphasized our need to remove unproductive work and restructure work as part of an overall system. In this light he believed that knowledge should be organized through teams, with clarity around who is in charge at what time, for what reason, and for how long. The key difference between the traditional approach is the need to facilitate this across organizations as well as within organizations―including the project team, corporate functions and divisions, outside consulting firms, and sometimes the board of directors.
The next frontier of project management, in line with Drucker's thinking, requires that we deliver improvements with greater speed to compete globally. In the 21st Century, large firms wont threaten smaller companies nearly as much as fast companies will threaten slower ones. Does it take your large company a couple of weeks to set up a meeting with key people because their calendars are so busy or because they wont be in the office for awhile? And even then, is it difficult to get contentious tradeoffs made and decisions acted upon? If so, you are either in trouble or headed toward it.
The role of acceleration is to knowledge work projects what quality control is to manual work projects because knowledge work changes so rapidly. With knowledge work, acceleration doesnt imply that the efforts can be shoddy or sloppy. Rather, it means that work needs to be facilitated in real time. It requires ongoing prototyping in the field versus striving for perfection in the office. In today's knowledge age, what matters most is not what you know but how fast you can apply it. In a rapidly changing competitive environment, acceleration is an essential ingredient in achieving high quality and sustainable competitive advantage.
For knowledge work projects to be managed more productively, consistent with Drucker's ideas on knowledge worker productivity, a holistic underlying system is needed. It must get everyone on the same page and properly sequence and accelerate Where-Why-What-When-How-Who. Managers often are clear on many of these things at an individual level. But, collectively, it's very common to have different individual views that dont add up to a shared enterprise picture. With large enterprise projects, this results in unproductive work and high failure rates.
Using a purely objective approach based on scientific management principles to manage the fluid and invisible nature of knowledge work does not work well in practice. When knowledge work is managed like manual work, it tends to get over-engineered, with overly complex governance structures and project designs. Over-engineering knowledge work that is invisible, holistic and ever-changing makes the work take longer and cost more to implement and manage.
Knowledge work productivity often benefits from a just-in-time mindset versus the just-in-case approach. With manual work, taking more time to prepare often improves results and reduces risk because the work is stable and wont change while youre preparing. With the ever-changing nature of knowledge work, just in time is typically more productive and less risky.
Where traditional project management benefits from being very specialized and mechanized, effective enterprise knowledge work projects require a more holistic and socialized approach. It requires a minor amount of initial complexity at the front end to avoid an unworkable amount of complication later on. | <urn:uuid:e455da6e-73b5-46a2-944d-43494b09da3d> | CC-MAIN-2017-04 | http://www.cioupdate.com/insights/article.php/3866441/Reinventing-IT-Project-Management---Peter-Drucker-Style.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00346-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951917 | 1,059 | 2.5625 | 3 |
7.1 What is probabilistic encryption?
Probabilistic encryption, developed by Goldwasser and Micali [GM84], is a design approach for encryption where a message is encrypted into one of many possible ciphertexts (not just a single ciphertext as in deterministic encryption). This is done in such a way that it is provably as hard to obtain partial information about the message from the ciphertext as it is to solve some hard problem. In previous approaches to encryption, even though it was not always known whether one could obtain such partial information, it was not proved that one could not do so.
A particular example of probabilistic encryption given by Goldwasser and Micali operates on ``bits'' rather than ``blocks'' and is based on the quadratic residuosity problem. The problem is to find whether an integer x is a square modulo a composite integer n. (This is easy if the factors of n are known, but presumably hard if they are not.) In their example, a ``0'' bit is encrypted as a random square, and a ``1'' bit as a non-square; thus it is as hard to decrypt as it is to solve the quadratic residuosity problem. The scheme has substantial message expansion due to the bit-by-bit encryption of the message. Blum and Goldwasser later proposed an efficient probabilistic encryption scheme with minimal message expansion [BG85].
- 7.1 What is probabilistic encryption?
- Contribution Agreements: Draft 1
- Contribution Agreements: Draft 2
- 7.2 What are special signature schemes?
- 7.3 What is a blind signature scheme?
- Contribution Agreements: Draft 3
- Contribution Agreements: Final
- 7.4 What is a designated confirmer signature?
- 7.5 What is a fail-stop signature scheme?
- 7.6 What is a group signature?
- 7.7 What is a one-time signature scheme?
- 7.8 What is an undeniable signature scheme?
- 7.9 What are on-line/off-line signatures?
- 7.10 What is OAEP?
- 7.11 What is digital timestamping?
- 7.12 What is key recovery?
- 7.13 What are LEAFs?
- 7.14 What is PSS/PSS-R?
- 7.15 What are covert channels?
- 7.16 What are proactive security techniques?
- 7.17 What is quantum computing?
- 7.18 What is quantum cryptography?
- 7.19 What is DNA computing?
- 7.20 What are biometric techniques?
- 7.21 What is tamper-resistant hardware?
- 7.22 How are hardware devices made tamper-resistant? | <urn:uuid:c159be69-bb8c-4bbb-a164-4b389e1dd5e6> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-probabilistic-encryption.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929799 | 597 | 3.3125 | 3 |
Machine learning is coming to help you distinguish sophisticated attacks from the noise of everyday usage, identify anomalous behavior that may be malicious, and block attacks from your system before you even know they exist. Machine learning works best in the cloud, feeding on large amounts of data from multiple sources, supported by elastic compute resources for analysis, to build sophisticated models of behavior. These models can then be used either locally or in the cloud to identify friend from foe.
Attacks that do not require lateral movement or privilege escalation are harder to detect. These are the sorts of activities that machine learning is being used to catch today, such as data thefts with stolen credentials, or exfiltration by insiders.
In most modern conflicts, tools or weapons are generally available to all sides, and cybersecurity is no exception. Any tool that we can use, they can use. Any defense that we can create, they can try to find some way to evade.
Machine learning has the promise of being a powerful cybersecurity tool, but as with any technology, it’s important for us to think about how the adversary may attempt to circumvent it. This gives us the opportunity to strengthen our technology against obvious circumvention, maximizing its initial effectiveness and maximizing the area under the initial part of the “Grobman Curve” (from The Second Economy: The Race for Trust, Treasure and Time in the Cybersecurity War, by Steve Grobman and Allison Cerra).When it comes to machine learning, there are a few possible ways attackers might attempt to gain the upper hand: They can try to identify the model and find its weak points, legitimize bad behavior, or flood the model to make it unusable.
Learn the model and take advantage of it
For some of the simpler models that are in use today, figuring out what the machine is looking for and delivering it with malicious intent is a popular approach. We have seen examples of this at online retailers, where attackers create digital products, such as eBooks, hype them with fake reviews, generate a large number of downloads to increase their visibility using cloud computing resources, and then trick consumers into buying what appear to be popular editions but are really elaborate fakes.
In cybersecurity, spammers have been doing this for some time. Spam filters were initially based on searches for commonly used words and phrases in spam emails. Data was fed in by users marking emails as junk, providing the model with large volumes of data. Early models were easy to trick by inserting punctuation within words, or using recognizable misspellings. As the models became more sophisticated these tricks became more difficult to figure out, and spammers eventually incorporated social engineering into their messages to appear legit.
Legitimize bad behavior
Another potential path is to corrupt the model, so that it considers the malicious activity to be normal. Machine learning algorithms for cybersecurity work over time, continually reviewing the traffic on the network to establish what is normal, what is suspicious, and what is malicious. How do you position yourself on the green side of that line, and appear to be legitimate? If you slowly feed data to something that you know is learning, you can move the line of what is considered normal. Sophisticated adversaries could use cloud-based systems to attack machine learning model development, gradually moving the model so that their desired behavior will be considered normal or benign.
Flood the model
Finally, attackers could flood the model with random or malicious data, to make the model unusable. Microsoft’s machine learning experiment, a Twitter chatbot named Tay, suffered this fate. Tay’s initial model was built on filtered public data. However, after being fed a large diet of racist, misogynist, hate-filled speech, its responses veered well into the inappropriate range within 24 hours.
We will need to consider the security of our machine learning algorithms and protect them from abuse. After all, a model is just a collection of ranges of behavior. If the data being tested is in between these ranges, then look at other variables, quickly running through the math until a decision is reached. If adversaries can rapidly and repeatedly test against the model with specific values, they can potentially find a way around it.
While we promote continuous learning, the models must also be resistant to tampering. Is it possible to build machine learning algorithms that are resilient to poisoning in some way? As the models become increasingly complex, do they become harder to manipulate? These examples raise some useful questions and areas of future research for machine learning, so that we can continue to rely on this emerging technique in cybersecurity. | <urn:uuid:9c7d4cf2-e396-403b-acda-f3c4167fc07c> | CC-MAIN-2017-04 | http://www.csoonline.com/article/3141912/security/cloud-vs-cloud.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00374-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954704 | 929 | 2.90625 | 3 |
Burn-Down charts are amongst the most common sprint tracking mechanisms used in Agile software development methodologies. This article looks at creating and updating a burn-down chart using the effort-remaining approach. It will also be interpreting burn-down under different scenarios, the advantages of using burn-down charts, and some mistakes to avoid when using them.
A burn-down chart is a graphical representation of work left to do, versus time; i.e. it is a visual representation of the amount of work that still needs to be completed before the end of a project. It displays the effort remaining for a given period of time and is useful for predicting when all of the work will be completed, and whether it will be completed on time.
How to create a Burn-Down Chart using a Spreadsheet
The first step is to break the task into smaller items that cannot be further broken down. This is generally done during the sprint planning meeting. Once the task is broken down, the ideal burn-down chart is plotted.
Many Agile tools (Rally, RTC, Version One, Mingle, etc.) have built-in capability for burn-down charts. However, in its simplest form, a burn-down chart can be maintained in an Excel spreadsheet. The days in the sprint are plotted on the X axis, while the remaining efforts are plotted on the Y axis.
Refer to the example below:
Sprint duration – 3 weeks
Working days - 15 (2/2/1015 – 2/20/2015)
Team size - 7
Hours/Day – 6
Total capacity - 630 hours
On Day 1 of the sprint, once the task is broken down, the ideal burn-down will be plotted as below.
The Y axis depicts the total hours which should be completed by the end of the sprint. The ideal progress is shown in the red line, which assumes that all tasks will be completed by the end of the sprint.
The burn-down chart should contain:
An X axis to display the number of working days
A Y axis to display the remaining effort
A trend line as a guideline
Real progress of effort
How to Update the Burn-Down Chart
Each member picks up the tasks and then works on them. At the end of the day, they update the effort remaining for the task, along with its latest status. Refer to the example below. The total estimated effort for Task 1 is 10 hours. After spending six hours on the task, if the developer believes that it requires another 4 hours to complete, the Effort Remaining column (named as “Left”) should be updated as 4.
As we progress during the sprint, the burn-down will look like this:
Understanding Burn-Down Charts under different scenarios:
There are only two lines drawn in a burn-down chart, but the situation they describe might have different reasons. There are many different situations. Out of this, some common ones are described here:
Type 1: Sprint commitment met
A burn-down chart in which sprint commitments are met and progress has been smooth over the sprint.
Type 2: Sprint commitment not met
The teams are going at a slower pace and may not be able to complete all the commitments on time. The remaining work then becomes a part of the product backlog and is carried forward to subsequent sprints.
Type 3: Sprint commitment met too fast
It shows that we are going at a better rate and may be able to finish earlier. The stories were probably overestimated; therefore, the team finished them earlier.
Type 4: Team stretched for commitment
The team worked at a slow pace in the first few weeks of the sprint. However, this was observed by them and then pushed towards the end of the sprint, to meet the commitment.
Type 5: Team is not consistent
The team's performance has not been consistent, even though the commitment is met in the end.
Type 6: Team is not proactive
The team is probably doing some work, but maybe it does not update its progress accordingly. Another reason might be that the product owner has added the same amount of work that was already completed, therefore the line is straight.
Type 7: Team is non-functional
The team is non-functional on many levels and the product owner does not care about development progress.
Advantages of using Burn-Down Charts
Single planning and tracking tool for the team
The team performs task breakdown, updates the estimated effort, and also updates the effort remaining. The entire team drives planning and tracking using the burn-down tool, which is the biggest advantage of using it.
Risk mitigation by daily visibility
The burn-down chart provides daily feedback on effort and schedule, thereby mitigating risks and raising alarms as soon as something goes wrong, rather than waiting till the end.
Communication tool for customer and other stakeholders
Burn-down charts provide visibility of a project’s progress on a daily basis. In the absence of an online tool, burn-down can be physically represented using a whiteboard/chart paper.
Placeholder to track retrospective action items
It is a good practice to include retrospective action items from the previous sprint as "functional requirements" in the task breakdown for the current sprint. This way, the team keeps a focus on those action items and they are tracked as the sprint progresses.
Common mistakes of using burn-down charts
If the task is too big, then it will make tracking on a daily basis difficult.
People get confused with the effort spent and the effort remaining. If these are wrongly plotted then the report insight will be inaccurate.
Forgetting to update the remaining time for tasks will lead to incorrect data.
The burn-down chart is an essential part of any Agile project. It is a good way for the team to clearly see what is happening and how progress is being made during each sprint. Finally, for updating the burn-down chart, discipline is needed. The chart should be updated at the end on a daily basis. | <urn:uuid:b02202e2-68cd-44cf-86a0-7cb6a906c2ec> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/burn-down-chart-tracking-tool | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00008-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941324 | 1,244 | 2.796875 | 3 |
If you shake a stubborn bottle of catsup hard enough, its contents will loosen up and pour. During an earthquake, the same thing can happen to otherwise solid ground. The process is called "liquefaction," and when it occurs during an earthquake, buildings can shift or sink, and underground storage tanks -- like those at the local gas station -- can float to the surface. Unfortunately, the pipelines attached to those tanks float at a different rate, causing line breaks that may lead to fires or flooding.
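Engineers screen for that risk with a simple comparison of the shaking a quake imposes on the soil against the soil's resistance. Below is a minimal sketch in Python of one widely used approach, the Seed-Idriss simplified procedure; the soil and shaking values are illustrative, not measurements from any real site:

```python
# Liquefaction screening via the Seed-Idriss simplified procedure.
# The cyclic stress ratio (CSR) compares earthquake-induced shear
# stress to the soil's effective overburden stress; if it exceeds the
# soil's cyclic resistance ratio (CRR), liquefaction is expected.

def cyclic_stress_ratio(a_max_g, total_stress, effective_stress, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma'_v) * r_d."""
    # r_d: stress-reduction coefficient, ~1.0 at the surface and
    # decreasing with depth (linear approximation for depths < ~9 m).
    r_d = 1.0 - 0.00765 * depth_m
    return 0.65 * a_max_g * (total_stress / effective_stress) * r_d

def factor_of_safety(crr, csr):
    """FS < 1.0 means liquefaction is likely during the design quake."""
    return crr / csr

# Illustrative numbers: loose sand 5 m down, water table near the surface.
csr = cyclic_stress_ratio(a_max_g=0.3, total_stress=95.0,
                          effective_stress=51.0, depth_m=5.0)
print(f"CSR = {csr:.2f}, FS = {factor_of_safety(0.25, csr):.2f}")
```

With these inputs the factor of safety comes out below 1.0 -- the sort of result that tells a city which gas stations and pipelines sit on ground that may turn to liquid.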
In many cases, collateral damage may be more destructive than the quake itself: fires, flooding, loss of vital services or destruction of key structures -- such as hospitals, highways or fire stations -- can dramatically increase the total losses associated with an earthquake.
The tremendous variety of possible damage and the influence of local circumstances have traditionally made it difficult for community leaders to predict expected damage levels and plan emergency responses to earthquakes. Perhaps more important, the lack of effective risk analysis has made it difficult to bring public attention to bear on legislative, zoning and building code changes that could help mitigate the effects of earthquakes. And, of course, all these local and state impediments to reliable estimates roll up to the federal level, making it all but impossible for federal authorities to realistically estimate the national earthquake risk.
In Search of a Standard
Congress began to address some of these issues in October 1977 with the passage of the Earthquake Hazards Reduction Act. Its purpose is to reduce "the risks to life and property from future earthquakes in the United States through the establishment and maintenance of an effective earthquake hazards reduction program." The act established the National Earthquake Hazards Reduction Program (NEHRP), which coordinates activities between various federal agencies, including the Federal Emergency Management Agency (FEMA). In the early 1990s, under the auspices of NEHRP, FEMA partnered with the National Institute of Building Sciences (NIBS) to develop a standardized methodology for estimating earthquake damage.
"We started in about 1992 to develop an earthquake loss methodology," said Claire Drury, FEMA project officer for what is known as the HAZUS Loss Estimation Methodology Project. "We found that people weren't aware of potential losses and you can't promote seismic building codes unless people realize there is a benefit to doing this. So what we have done is build the model that will be used by regional and local governments to run loss estimates to help them recognize the potential hazards."
The project, directed by NIBS, established an eight-member Project Work Group (PWG) -- consisting of earthquake experts -- and an 18-member project oversight committee (POC), which represented user interests in the earthquake community. Additional assistance was elicited from over 80 corresponding members of the POC, whose views represented user and technical interests.
In 1993, PWG and POC defined the components of the loss estimation methodology, prepared an extensive set of objectives for developing the methodology, and generated a standardized list of earthquake-caused economic and social losses as methodology outputs.
Risk Management Solutions (RMS) of Menlo Park, Calif. -- a company that specializes in providing software and information services to the insurance and financial industry -- was selected to develop the methodology and was also hired to do the software implementation.
"The HAZUS methodology itself is a nationally applicable methodology," said Scott Lawson, an associate vice president at RMS with a Ph.D. in structural engineering who headed RMS's part of HAZUS. "The information that was gathered was on a national scale -- data like building inventory information, census data and the EPA-supplied toxic-site information. We have information on highways, bridges and dams.
"FEMA had collected a whole bunch of information during the civil defense era," Lawson continued, "and we took all that information and put it in there -- all the data you need for dealing with a nuclear disaster, you also need for dealing with an earthquake disaster. There is a wealth of other data, such as FEMA flood maps and U.S. Geological Survey land use and land cover information. We also have probabilistic ground-shaking mappings, information on elevation levels and hurricane data."
What RMS was studying were the mechanics behind how to theorize an earthquake in a particular location, and understand the impacts on the local community, region or state.
In Lawson's opinion, HAZUS' biggest contributions to the art of loss prediction are the way it analyzes and estimates building damage and its ability to estimate indirect economic effects. For example, it can provide economic "snapshots" of a region one to 15 years after an earthquake.
As described in a February 1997 paper by Robert V. Whitman, professor emeritus at MIT, and Henry J. Lagorio, professor emeritus at the University of California at Berkeley, the HAZUS methodology consists of three elements:
* Classification systems for assembling information on the building stock, the components of highway and utility lifelines, and demographic and economic data.
* Methods for evaluating damage and calculating various losses.
* Databases containing information usable for calculations and as default data.
These three components are implemented in the software package developed by RMS, which is based on MapInfo GIS (an ArcView version is in the offing). Part of the development process included two pilot studies -- one in Portland, Ore., the other in Boston. Mei Mei Wang, director of Earthquake Programs for the Oregon Department of Geology and Mineral Industries was involved in supplying information to the Portland pilot and was impressed with what she saw.
"I give it five stars, but you have to use your judgment," said Wang. "The further out you are [the broader the scope of the estimate], the better the results. When I was at the training session, I made the state of Oregon a study area. The program built all the census tracts in the state and figured out the building worth at $160 billion [not including bridges and highways], which is the right order of magnitude. But in a smaller area, my uncertainty gets bigger and bigger."
This uncertainty is particularly acute when only default information is used. While default-based estimates have a place, the full methodology calls for customizing the data to get more accurate estimates. In the Portland pilot, localization of much of the data was done to raise the level of confidence in HAZUS' results. And even though Portland is not generally considered a likely victim of earthquakes, it is subject to an interesting and complex set of geological and environmental circumstances that needed to be customized -- beginning with the buildings.
"In Portland, what was really important was categorizing the building stock, because the national database did not accurately portray the actual stock at all, so it would have underestimated the hazard," said Wang. "The buildings are worse than you would expect with many URM [unreinforced masonry] buildings."
URM buildings are particularly susceptible to earthquakes and are a key factor affecting how well or poorly a metropolitan area will fare during an earthquake. To improve the Portland estimate, a complete building inventory was done using a technique known as the FEMA 154 method. This is an easily learned technique that involves a 10-minute evaluation of each building in a locale. Portland State University Civil Engineering Department students were enlisted to help and spent several months combing the city, evaluating each building. That information was entered into the HAZUS databases and was used to improve the evaluation.
"Another important part of the Portland pilot was soil type," said Wang. "We have the Portland Hills, which tend to have landslide problems, and we have the Williamette and Columbia Rivers. The Williamette runs through downtown Portland, and the areas around the rivers would be subject to liquefaction and lateral spreading during an earthquake."
There are seven bridges in the greater Portland area and although HAZUS does not include bridge model capability, these would be strongly affected by an earthquake. It is common for bridge supports on each shore to move toward the center of a river during an earthquake due to liquefaction of the supporting soil. This would almost certainly be a problem in Portland and, according to Wang, the Portland bridges would be "a wreck."
"When you look at some of these old bridges you say 'that is how you would not design a bridge to withstand earthquakes,'" said Wang. "For example, one of the Portland bridges has big concrete counterweights, and when you lower the counterweights, it raises the bridge. In an earthquake, you don't want structures weighted at the top."
Portland at Fault
Geologically, Portland provides some other challenges. "There is a local fault -- the Portland Hills fault, which runs at the base of a hill very close to downtown," noted Wang. "It's a big fault structure and it's a young fault; and young faults are a concern because they might be triggered. With an earthquake there, you would have landslides in the Portland Hills."
However, that isn't the only fault potentially affecting the city. The Cascadia Subduction Zone exists off the Oregon shore. The Portland Hills fault occurs where two geologic plates are moving sideways past each other. In the Cascadia Subduction Zone, the Juan de Fuca plate in the Pacific is sliding underneath the North American plate. Although subduction zones such as this generate earthquakes less frequently, they have the potential of generating even bigger quakes than large lateral faults, such as California's San Andreas.
Because the Cascadia Subduction Zone is offshore and would tend to produce a vertical motion, tsunamis (large ocean waves) would be a likely effect. This wouldn't affect Portland itself, but could be devastating to the Oregon coast. However, subduction zone faults tend to produce many low frequency waves that travel further than high frequency waves. Such low frequency waves impact buildings 10 stories and higher because the buildings themselves resonate with the low frequency waves. Therefore, despite the distance from the Cascadia Subduction Zone to Portland, the taller Portland buildings are likely to be strongly affected by any major Cascadia earthquake.
Evaluating the Risk
With two major faults, the Portland pilot modeled the effects of both a local earthquake in the Portland Hills -- which would tend to generate higher frequency waves and cause landslides -- and a more remote earthquake in the Cascadia Subduction zone -- which would tend to affect the taller buildings. With the data entered, HAZUS evaluated two scenarios -- one, a magnitude 6.5 earthquake in the Portland Hills, the other, a magnitude 8.5 earthquake in the Cascadia Subduction Zone. The HAZUS estimate determined that the Portland Hills quake would produce the greater damage, although both would have a serious impact.
Because HAZUS' accuracy depends on the quality of the data, as well as the sophistication of the algorithms, it is generally believed that the estimates will improve over time as both data and algorithms improve. Nonetheless, the results to date have been quite positive.
"FEMA is very pleased with it," said Drury. "HAZUS gives us a standardized methodology that is applicable nationwide. Localities have done studies in the past, but they were done only if a community knew it had a risk."
Communities that didn't perceive a risk usually haven't invested the time or money in doing studies. What's more, because there was no standardized methodology, one community's studies weren't generally comparable to others. These factors contributed to the difficulty of making useful national estimates.
HAZUS was designed to overcome these problems and so, once the methodology was out of the pilot stage, FEMA began to distribute the software to state and local communities. The software was sent on CD-ROM, one for the East Coast and one for the West Coast. Each CD contains the default level building inventory data, as well as various national databases, and is available for free to state and local officials through NIBS. FEMA has also provided training seminars to help communities get their estimates started. In all, 45 states have had representatives trained on HAZUS.
Various communities have started their own estimate projects, but it may be some time before their results become generally available. In the meanwhile, FEMA is doing an annualized loss estimate for the United States by running estimates on individual counties and using probabilistic capabilities within the model to make generalized conclusions for the whole country.
Earthquake loss estimation is only the beginning for HAZUS. The intention all along was to expand the program to encompass other types of disasters, and work is already under way on these other components.
"We plan to expand the methodology to include wind and flood," said Drury, "and we have panels of experts similar to the earthquake panel at work on it. It will take a couple of years to complete -- we're still very much at the seminal periods for those projects."
As RMS' Lawson said, "If you build a robust enough data environment, then you can overlay disasters and assess the damages. You can throw an earthquake at it, or a flood, and see how it reacts to the disaster itself."
Such a tool gives planners the opportunity to catch a glimpse in advance of what they may face during a full-blown disaster. By understanding the possible effects, communities and their leaders have the opportunity to mitigate the effects of disasters, and that helps everyone.
October Table of Contents | <urn:uuid:6c86b2af-8961-4ff2-9b0e-b6c7cac6b64a> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Mastering-Disaster----Earthquake-Modeling-Piloted.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00034-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963172 | 2,727 | 4.03125 | 4 |
The race for ever more powerful mobile processors may be keeping Qualcomm, Nvidia, and Samsung occupied for now, but ARM is focusing on another goal – designing ultra-low power processors. After years of research and various internal designs, the microchip company is now developing a new low-power microcontroller core, which will be quite slow compared with the processors that we’re more familiar with.
Low power chips aren’t anything particularly new – a few companies already offer sub 2V microcontrollers for battery powered devices. But in order to take advantage of minute power sources, ARM intends to push the voltage requirements right down to the threshold of where a transistor can be turned on and off. However, there’s a trade-off with much slower performance.
The core will be working down at the bare minimum voltage of traditional transistors, meaning operating voltages of just 0.3-0.6 volts, and will be clocked in in the low kilohertz range, so you’re more likely to see this one ticking over as a 50 kHz chip, rather than a 2 GHz multi-core processor.
Don’t expect to see these chips powering a new range of super battery efficient smartphones, but such a development has interesting implications for low power communication devices and the Internet of Things. Speaking with a group of UK journalists, Mike Muller, chief technology officer at ARM, talked about the strategies required for processing small amounts of data and transmitting these small packets, and how such a device could be powered by energy scavenged from the local environment.
Normally, the best strategy is to do processing as fast as possible and then go to sleep for as long as possible—get in and get out. But for energy scavenging, it can be different.
As these chips could be made to work with limited power supplies, especially if they have to scavenge energy from other devices or sources, there might not always be the energy available to transmit information on demand. For the Internet of Things to become a reality, these microprocessors need to be able to cope with unreliable supplies, and that’s an unexplored area when it comes to processor technology.
Remember the “ambient backscatter” concept we covered a couple of weeks ago, whereby devices can communicate by piggybacking on background radio waves? Well ARM’s new chip seems to be based on the idea that it could potentially be powered by weaker power sources such as this, allowing for some level of computer processing without requiring a large main source or a battery for a power supply. | <urn:uuid:75d965e9-51ff-478f-9fff-d601a982b55f> | CC-MAIN-2017-04 | http://www.machinetomachinemagazine.com/2013/08/27/arm-processors-take-us-closer-to-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942631 | 529 | 3 | 3 |
The U.S. Environmental Protection Agency (EPA) has named the city of Santa Clara a Green Power Community, making the city only one of 14 communities to receive such an honor. Santa Clara ranks as the second largest Green Power Community, both in terms of population and size of its green power commitment-57 million kilowatt-hours annually. Together, the 14 communities are protecting the environment by purchasing nearly 300 million kilowatt-hours (kWh) of green power annually.
"Santa Clara is known for its concern for the environment, so it is wonderful to receive national recognition for our ongoing commitment to renewable energy," says Patricia Mahan, Mayor for the City of Santa Clara. "This is a true community effort spearheaded by our municipal utility, and supported by the thousands of individual residents and businesses participating in the Santa Clara Green Power program."Santa Clara Green Power is a voluntary renewable energy program offered by Silicon Valley Power (SVP), the City of Santa Clara's municipal electric utility.
Power of Green
What makes the Green Power Community status of great importance is that these individual renewable energy purchases by thousands of Santa Clara residents and more than 130 businesses are above and beyond the standard utility electricity mix, creating more demand for clean energy. While SVP already has one of the cleanest and greenest power mixes in the nation, its Santa Clara Green Power program allows customers to match 100 percent of their energy use with renewables. The Santa Clara community has achieved its new Green Power status by purchasing energy credits that have surpassed benchmarks established by the EPA.
"EPA applauds our community partners for protecting our environment by purchasing green power," says EPA Administrator Stephen L. Johnson. "By voluntarily shifting to renewable energy, the community of Santa Clara, California, is proving you don't need to wait for a signal in order to go green."
Committing to More Than One Million kWh of Renewable Energy
It is fitting that the City of Santa Clara's municipal government is leading the charge as a green power champion. The City recently committed to more than a million kilowatt-hours of renewable energy annually for all municipal facilities-more than one-quarter of the output of one large-scale wind turbine. | <urn:uuid:5e435405-b9b0-4e1d-803e-e7bb1d37fa55> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Santa-Clara-Calif-Named-Second-Largest-Green_Power_Community_in_US.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00154-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938 | 446 | 2.609375 | 3 |
Most people think that Apple's Siri is the coolest thing they've ever seen on a smart phone. It certainly is a milestone in practical human-machine interfaces, and will be widely copied. The combination of deep search, voice recognition and natural language processing (NLP) is dynamite.
If you haven't had the pleasure ... Siri is a wondrous function built into the Apple iPhone. It’s the state-of-the-art in Artificial Intelligence and NLP. You speak directly to Siri, ask her questions (yes, she's female) and tell her what to do with many of your other apps. Siri integrates with mail, text messaging, maps, search, weather, calendar and so on. Ask her "Will I need an umbrella in the morning?" and she'll look up the weather for you – after checking your calendar to see what city you’ll be in tomorrow. It's amazing.
Natural Language Processing is a fabulous idea of course. It radically improves the usability of smart phones, and even their safety with much improved hands-free operation.
An important technical detail is that NLP is very demanding on computing power. In fact it's beyond the capability of today's smart phones, even if each of them alone is more powerful than all of NASA's computers in 1969!. So all Siri's hard work is actually done on Apple's mainframe computers scattered around the planet. That is, all your interactions with Siri are sent into the cloud.
Imagine Siri was a human personal assistant. Imagine she's looking after your diary, placing calls for you, booking meetings, planning your travel, taking dictation, sending emails and text messages for you, reminding you of your appointments, even your significant other’s birthday. She's getting to know you all the while, learning your habits, your preferences, your personal and work-a-day networks.
And she's free!
Now, wouldn't the offer of a free human PA strike you as too good to be true?
When you dictate your mails and text messages to Siri, you’re providing Apple with content that's usually off limits to carriers, phone companies and ISPs. Siri is an end run around telecommunicationss intercept laws.
Of course there are many, many examples of where free social media apps mask a commercial bargain. Face recognition is the classic case. It was first made available on photo sharing sites as a neat way to organise one’s albums, but then Facebook went further by inviting photo tags from users and then automatically identifying people in other photos on others' pages. What's happening behind the scenes is that Facebook is running its face recognition templates over the billions of photos in their databases (which were originally uploaded for personal use long before face recognition was deployed). Given their business model and their track record, we can be certain that Facebook is using face recognition to identify everyone they possibly can, and thence working out fresh associations between countless people and situations accidentally caught on camera. Combine this with image processing and visual search technology (like Google "Goggles") and the big social media companies have an incredible new eye in the sky. They can work out what we're doing, when, where and with whom. Nobody will need to like expressly "like" anything anymore when OSNs can literally see what cars we're driving, what brands we're wearing, where we spend our vacations, what we're eating, what makes us laugh, who makes us laugh. Apple, Facebook and others have understandably invested hundreds of millions of dollars in image recognition start-ups and intellectual property; with these tools they convert the hitherto anonymous images into content-addressable PII gold mines. It's the next frontier of Big Data.
Now, there wouldn't be much wrong with these sorts of arrangements if the social media corporations were up-front about them, and exercised some restraint. In their Privacy Policies they should detail what Personal Information they are extracting and collecting from all the voice and image data; they should explain why they collect this information, what they plan to do with it, how long they will retain it, and how they promise to limit secondary usage. They should explain that biometrics technology allows them to generate brand new PII out of members' snapshots and utterances. And they should acknowledge that by rendering data identifiable, they become accountable in many places under privacy and data protection laws for its safekeeping as PII. It's just not good enough to vaguely reserve their rights to "use personal information to help us develop, deliver, and improve our products, services, content, and advertising". They should treat their customers -- and all those innocents about whom they collect PII indirectly -- with proper respect, and stop blandly pretending that 'service improvement' is what they're up to.
Siri along with face recognition herald a radical new type of privatised surveillance, and on a breathtaking scale. While Facebook stealthily "x-ray" photo albums without consent, Apple now has even more intimate access to our daily routines and personal habits. And they don’t even pay as much as a penny for our thoughts.
As cool as Siri may be, I myself will decline to use any natural language processing while the software runs in the cloud, and while the service providers refuse to restrain their use of my voice data. I'll wait for NLP to be done on my device with my data kept private.
And I'd happily pay cold hard cash for that kind of app, instead of having an infomopoly embed itself in my personal affairs. | <urn:uuid:e74eff9a-c9bb-42b5-bab6-892c1c050e67> | CC-MAIN-2017-04 | http://lockstep.com.au/blog/2012/03/12/a-penny-for-your-thoughts | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00274-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95982 | 1,130 | 2.625 | 3 |
Four-digit banking PINs are usually randomly assigned by banks after the issuing of credit and debit cards, but there are still some out there that let its customers choose their own PINs so that they might remember them more easily.
Wondering how easy to guess these self-selected PINs are and failing to find any concrete study about the matter, a team of researchers from the University of Cambridge Computer Laboratory have set up to find the answer to that question for themselves.
“After modeling banking PIN selection using a combination of leaked data from non-banking sources and a massive online survey, we found that people are significantly more careful choosing PINs then online passwords, with a majority using an effectively random sequence of digits,” says one of the researchers, PhD candidate Joseph Bonneau. “Still, the persistence of a few weak choices and birthdates in particular suggests that guessing attacks may be worthwhile for an opportunistic thief.”
To do that, they analyzed passwords and PINs from two existing sources: the 32 million textual passwords leaked following the breach of the RockYou website (they took into consideration only consecutive four-digit sequences found in the passwords), and Daniel Amitay’s research on the 10 most common iPhone passcodes.
In addition to this, they deployed an online survey and asked people to anonymously share answers to questions such as “Do you use the same PIN for multiple cards?”, “Do you use the same PIN for several cards?”, “Have you ever used a PIN from a payment card for something other than making a payment or retrieving money?”, and others, including a number of questions that didn’t require them to share their exact PINs, but allowed the researchers to know whether the PINs were the users’ birth dates or years, dates or years of important events in their lives, the lives of other persons close to them or in history, patterns or other numbers such as the digits of a phone number, a ZIP code or postal code, their bank account number, a non-government identification number, etc.
“In total, 63.7% [of the respondents] use a pseudorandom PIN,” Bonneau shared. “Unfortunately, the final group of 23% of users chose a PIN representing a date, and nearly a third of these used their own birthday. This is a game-changer because over 99% of customers reported that their birth date is listed somewhere in the wallet or purse where they keep their cards. If an attacker knows the cardholder’s date of birth and guesses optimally, the chances of successfully guessing jump to around 9%.”
“A thief can expect to get lucky every 18th wallet — except for those banks which negligently allow their customers to choose really dumb PINs like 1111 and 1234,” Ross Anderson, another researcher on the team, commented for the NYT. “There the thief cashes out once every 11 wallets.”
Blacklisting easy to guess passwords such as those containing the same four digits, a repetition of two digits (also with minimal variations), and similar can help the matter a bit, but unfortunately doesn’t solve the problem when users use their birth dates as PINs.
All in all, the best solution would be for banks not to allow users to chose their own PINs, concluded the researchers.
For more details about the methodology of their research, download the paper. | <urn:uuid:31851c88-d292-4927-ad74-0fe0afb6967b> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/02/21/self-selected-pins-arent-that-hard-to-guess/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00055-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952796 | 719 | 2.734375 | 3 |
There’s a new storyteller in town. Transmedia storytelling, or multiplatform storytelling, is the technique of telling multiple stories across multiple platforms and formats using current digital technologies. With the emergence of transmedia in the past several years, the entertainment industry is embracing the capabilities of multiplatform storytelling—so much so that the industry held its first TransMedia Film Festival in October of 2012. Prominent film schools and media research centers such as UCLA and USC have been hosting symposia to bring together media creators, producers and executives to discuss how transmedia works and what it means.
What Is Transmedia?
From a production standpoint, transmedia involves creating content that engages an audience using techniques that provide an immersive entertainment experience.
Transmedia should not be confused with multimedia, which is the telling of a single story in multiple mediums; rather it is the telling of multiple stories over multiple mediums that fit together to tell one pervasive story. Through this method, creative teams are able to create a powerful cross-marketing effect, capturing the attention of demanding “Generation Y” audiences. By providing access to engaging, interactive content via a variety of platforms, movie and television franchises are able to create multiple points of entry into the story world of a franchise, including comics, web content, music and games.
David Howe, president of the Syfy cable network, was quoted in Business Week as saying that transmedia is the “Holy Grail of storytelling,” a good indication that studio and cable-network executives are taking this immersive technology to heart. In fact, Syfy invested in a project that incorporated the technology into one of its newest television shows, Defiance. The ambitious project included a TV show and a multiplayer online video game that includes plot elements from the weekly show.
As this technology permeates the entertainment industry, production companies, studios and content developers will need to look at how they can keep content in sync across the different platforms on the back end.
Additionally, as content developers spin up activities from around the world and consumers sign up to take advantage of this new convergence of digital data, subscription-management and data-location challenges should be addressed from the outset.
The Challenge of Managing Data
Imagine you are the CIO or CTO of a major studio or cable network. Executive management has decided to incorporate transmedia into several new productions. The marketing teams are gearing up to take advantage of the user data that will be collected as fans engage in the project. The CEO and CFO are anxious to see how this powerful cross-marketing vehicle will enhance the revenue stream of the organization, and all eyes are on you and your department to produce the desired results.
The challenges are clear: employees will need to access up-to-date creative data from anywhere in the world, and a subscription database will be required to allow consumers to access all media quickly and easily while satisfying the needs of the marketing executives on the back end.
Synchronization of Content Creation
Today’s reality is that a multitude of programmers, artists, designers and production teams collaborate from around the world and must be able to access the most current rendition of creative content. That data is often managed using metadata—or the data about data—and in transmedia environments, managing that metadata has become a task of epic proportions.
As content developers generate terabytes of data, they also create metadata on the back end, thus allowing computer systems to match iterations of the project. The associated metadata attached to creative content includes important descriptors of the larger data sets, including file name, path, size, data and encryption keys, the computer that created it, and ownership information.
Metadata is crucial to controlling the larger data sets that comprise photos, music, animation and movie files; without it, managing the workflow of these creative endeavors would be impossible. The industry is ripe for a mechanism to simplify the management of the surging volumes of unstructured data while providing access from anywhere in the world.
Relational databases assist in managing metadata by keeping track of where the data is stored, accessed and protected.
Until now, projects typically had only one point of entry, such as the theater experience, meaning not all potential revenue markets are captured. Film fans are able to enjoy the film at the theater, but comic-book, music and game fans are still a relatively untapped market when it comes to accessing a franchise from the myriad of end points available to consumers today.
When fans take advantage of a franchise’s associated transmedia, such as games or online comic books, they are joining an online community where subscription data is captured. As more and more transmedia projects are spun up, studios and networks must look at the myriad points of entry into the story to capture and use these invaluable revenue sources.
By engaging the fan through multiple end points, data is collected via subscriptions that can be used in a variety of ways, across multiple channels. This user data must be captured, managed, analyzed, stored and secured in such a way that it provides a satisfying experience for the end user while also providing measurable business value.
Management and analysis of this incoming data should be an integral part of the production cycle. Tying into the power of distributed data via the cloud or local servers provides marketing, financial and C-suite executives with access to data that can be analyzed as soon as the project is launched.
Your Audience Is Worldwide—Is Your Data?
Once a transmedia project is ready for launch, its success will be measured through user interaction and accessibility. It is critical to ensure that fans are able to access online databases for registration and login processes seamlessly and without interruption.
To resolve these challenges, media companies need to explore new technologies that will allow them to provide access to the content regardless of location. As user subscriptions grow, bringing data to the edge of the network will not only provide a quick and easy user experience but also protect the project from critical outages and synchronization issues.
Today, synchronization, data accessibility and subscription-management issues can be resolved by storing certain types of data in a geographically-distributed relational-database management system.
Decentralizing Data to Improve Business Resilience and Scalability
Emerging technologies today fundamentally decentralize data to greatly improve business resilience by creating a computing fabric that stays up even if part of it fails. This type of infrastructure automatically stores data across the nodes on the basis of policy, usage and geography, delivering information when and where it is needed. All information is automatically and intelligently replicated across multiple nodes to ensure availability. If a node fails, users reconnect to other nodes so that access to content is continuous and productivity is unaffected. When the original node recovers, it resumes participating in the flow of data, and local users may reconnect to it whenever it’s convenient.
Unlike conventional infrastructures where capacity and performance are increased by “scaling up” (ever larger systems), new approaches provide architecture for “scaling out.” By adding nearly identical database “nodes,” a computing fabric is created. This cluster of geographically distributed independent nodes not only eases bandwidth constraints and places data closer to the user, but it also provides a system that recovers quickly from network outages.
These nodes can be placed in both private and public clouds and can be mixed and matched as needed. The database nodes can span availability zones to ensure adequate response time from any access point, and they can span multiple public clouds to minimize vendor lock-in. Nodes can be easily added when needed, and as production winds down, the number of nodes can be reduced and redeployed when necessary.
Using the Economies of Scale
It is becoming painfully clear that traditional database solutions do not enable content developers, production companies, studios and cable networks to take advantage of the economics and efficiencies of transmedia and the rich data these projects produce.
Transmedia projects, and other content-development collaborations, require a technology infrastructure that provides global support and accessibility. A flexible, robust database infrastructure is quickly becoming an integral part of the creative process. This technology is the next phase in enabling engineers, content developers, designers and marketing teams to stay on the same page, producing measurable results while making sure the end user has an uninterrupted, easy-to-use experience.
Leading article image courtesy of ilovememphis
About the Author
Frank Huerta is CEO and cofounder of TransLattice, where he is responsible for the company’s vision and strategic direction. Before TransLattice, he was cofounder and CEO of Recourse Technologies. Recourse was purchased by Symantec Corporation, where Frank then served as a vice president. Previously, he was the director of business development for Exodus Communications, focusing on mergers and acquisitions. He also held positions at VeriFone, Seagate Software and Hughes Aircraft. | <urn:uuid:2c620e5e-1356-4e0a-bac8-09786f665fd4> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/immersive-storytelling-managing-transmedia-environment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00449-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926446 | 1,818 | 2.6875 | 3 |
We hope this news sinks in to the 100 finalists for the Mars One suicide flight. It's not too late to pull out.
From The Science Times:
A new study funded by NASA shows that the long term effects of space travel could spell trouble for astronauts attempting to fly to Mars. It seems that astronauts undertaking the long voyage to Mars, could arrive at the Red Planet with brain damage.
Researchers at the University of California dosed lab mice with radiation levels equivalent to a trip to the Red Planet. It messed up their brains. According to The Science Times, the radiation "changed the way their neurons fired, make them less efficient at transmitting electrochemical signals. This loss of efficiency hampered both their memory and problem solving skills."
Hey, but who needs to be on top of their game in a pitiless, foreign environment in which the slightest mistake, misstep or miscalculation can mean instant and perhaps immeasurably painful death? Besides, if the worst happens, you'll still have a kick-ass Wikipedia entry.
Bonus science understatement of the day (via The Science Times): "This is not positive news for astronauts deployed on a two- to three-year round trip to Mars." -- Dr. Charles Limoli, professor of radiation oncology in UC Irvine School of Medicine.
There'll be no round trip, Dr. Limoli. Not in the near future.
(Related pessimistic article: Mars emperor is naked to everyone except pandering politicians and space fanboys)
This story, "Yet more evidence that sending humans to Mars is a stupid, suicidal idea" was originally published by Fritterati. | <urn:uuid:d8fa942e-e083-4cdc-911f-ebaca06a5c26> | CC-MAIN-2017-04 | http://www.itnews.com/article/2918170/yet-more-evidence-that-sending-humans-to-mars-is-a-stupid-suicidal-idea.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945826 | 335 | 2.78125 | 3 |
A collaboration between the U.S. Geological Survey (USGS) and the San Diego Zoo Institute for Conservation Research holds new promise for wildlife conservation efforts. The new approach developed by the researchers combines 3D supercomputing and advanced range estimator technologies to track terrestrial, avian, and aquatic wildlife.
A paper detailing the project, called Movement-based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation, was recently published in the PLoS-ONE online science journal. The three species that were selected were pandas (terrestrial tracking), California condors (air tracking), and dugongs, a close relative of the manatee (ocean-based tracking). The project is generating crucial data for conservation efforts.
The team worked with researchers from the San Diego Supercomputing Center (SDSC) to turn their tracking data into detailed visualizations. In order to create the 3D models the team first had to optimize the codes to make the best use of available supercomputing time.
“We were able to speed up their software by several orders of magnitude,” said Robert Sinkovits, SDSC’s Director of the Scientific Applications Group. “In this case, calculations that had formerly taken four days to complete were finished in less than half an hour.”
The project used two of SDSC’s most powerful computing systems, Gordon and Trestles. A key part of the project is making sure it can scale as it grows. This means minimizing data movement and replication, says Amit Chourasia, senior visualization scientist at SDSC. Chourasia explains that the next step is to fuse additional data about topography and climate in order to better understand the habitats of these animals.
The 3D approach to animal tracking is what sets this research effort apart. While, traditionally ecologists have used 2D tracking systems, the team’s 3D approach adds a vertical component, which is especially important for animals that fly, travel on steep terrain or dive into the water.
“Biologists and ecologists are only beginning to recognize the value of incorporating the vertical aspect into analyses, which more realistically represents the space used by an animal,” states Jeff Tracy, an ecologist at the USGS and lead author of the PLoS-ONE study.
“Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation,” according to the study’s authors.
While 3D increases realism, it is much more computationally demanding, and these computing challenges have limited increased adoption of the technique. By optimizing the software, the improved tracking method could be a huge boon to conservation efforts, like those that have helped repopulate the California Condor. The population of this endangered animal now stands at approximately 400 birds – up from only 22 in the mid- 1980s. Despite the increase, the effort has been hampered by a lack of understanding about movement and habitat use.
The 3D tracking system will be used as a “predictive management tool to inform conservation efforts to restore condor populations, particularly with regard to emerging threats such as climate change and wind energy impacts,” states team member James Sheppard, a senior researcher within the Applied Animal Ecology Division of the San Diego Zoo’s Institute for Conservation Research. | <urn:uuid:d698152e-ed33-4d59-b4de-5f17eea83df2> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/07/14/supercomputing-adds-depth-animal-conservation-efforts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941752 | 688 | 3.546875 | 4 |
In my last blog post, I spent some time in a trough of sadness. Over 2 or 3 hours of not making any progress and banging my head against the keyboard (metaphorically), the IP camera project ending in a dud. I took another two week break before turning my Pi back on. Here I was: 1 month in and no real progress to report.
Next I decided to try my luck at learning some Scratch. Chapter 4 of my user’s guide covered the language, so I opened up the Scratch program on my Raspberry Pi and started to follow along with the user’s guide. Take a look at this fun mascot/icon for scratch. His or Her name is apparently scratch cat.
Scratch is a free program developed by the Lifelong Kindergarten Group at the MIT Media lab. Scratch allows you to create a little interactive game or animation. Instead of writing code, scratch has you drag and drop little commands that represent a code principle or item, etc. This is great if you have young kids and want to introduce them to concepts. I read through and tried out the different types of commands in the user manual and started to get a grasp on things. The program has a simple look and feel, which you can see below. If you do not have a Raspberry Pi, you can still download scratch and play along on your computer.
There are 6 main parts that I have arrows pointing out above.
- Cursor Tool Menu: lets a user change what your cursor (mouse) does. It can make it copy, cut, delete, grow, or shrink things in your project.
- Application Menu: This is where you do your typical saving and menu type stuff.
- Script Area: Is where you assemble your code “blocks” that run the project. Most of your learning about coding principles is done here.
- Blocks Palette: This section lets you select different blocks that do different things.
- Sprites List: This area lists all of the sprites in your project. Sprite is the term for any object in your object that is not the background.
- Stage: This is where your sprites move and interact with each other. Essentially this is where the game or animation you are making lives.
There are many things that you can do with Scratch, and I will not start to cover them all here. I spent a few hours working through the basics and I found that a few blocks allowed me to do most of what I wanted.
Whenever you start a new project in Scratch the workspace in scratch is blank except for a sprite of Scratch Cat. You can see this in the picture above. Things start with the Scratch Cat selected, so any blocks you start to use will be set to effect that Scratch Cat sprite. Let’s go through some block types.
To make anything start in Scratch you will need an “Events” block. The main ones I used were:
- “When Flag Clicked”
- “When (space or arrow) key pressed.
When Flag Clicked will start all blocks after it(the block) when the green arrow is clicked.
The other block can be changed to the space bar or the up, down, left, and right arrows. Putting this block before others will cause the following blocks to start only after the selected key is pressed.
The next block type I got used to using where the motion blocks. These blocks need to follow an event block or they will never start.
The motion blocks do what you would expect them to do. They move your sprite. You can
- Move forward (or backward with a negative value)
- Turn left
- Turn right
- Point in a direction
- Point towards something (like another sprite)
- Go to a coordinate on the stage.
- And more!
The next type of block that is the most common is the “Control” block. These blocks control the order in which other blocks are executed and are also used to stop your programming or make it wait.
- Wait “1” secs: will make an operation wait for the selected number of seconds.
- Repeat “10” times: will make anything in between the brackets repeat the selected amount of times.
- Forever: will make anything inside the brackets repeat forever.
- The “If then” and “if then else” blocks will look familiar to anyone who has some programing experience or uses excel formulas. These blocks start the things in between their brackets only if the condition is met, or in the case of “if then else” the block will start the blocks after the “else” if the first condition is not true.
After learning how to string together “Event”, “Motion”, and “Control” blocks and move my spirits across the workspace, I decided that it was time to try and make my own little game. My game would have a background, a sprite for the player, a sprite for the players weapon, and sprite for the “bad guy” which would come across my screen at the player. After experimenting with a lot of different sprite designs I ended up with a game that looks like this.
I call this game Knight Lighting! (exclamation point mandatory) Set on a background of a brick wall with a nicely mowed lawn. The player is a knight with a sweet purple scarf coming out of its helmet. The knight has a sword in his hand, but his weapon is actually a lightning bolt that appears whenever the user presses the space button. The knights nemesis is a large dinosaur who moves from right to left across the screen. If the knight hits the dinosaur with the lightning bolt then his score goes up by 1. But there is a delay after the lightning bolt is use, so if the knight misses he will get hit by the dinosaur and die. I also added the ability for the knight to jump with the up arrow. No big practical use for that.
If you want to take a look at how I built this game then go here for the details. I had a lot of fun making this game. A great part was the combination of learning something new, getting some programing “concept” experience, and actually making something! The scratch game took me about a week to get together. Working an hour every other day. There were a lot of false starts and rabbit holes. I looked into making the NES Double Dragon emulator, but decide that I wanted to move onto using a more “real” programing language. The next chapter in my users guide is on Python. Let’s take a look at that next. | <urn:uuid:383ac7b4-b5f3-4ce5-8a53-c876448f02a5> | CC-MAIN-2017-04 | https://blog.bandwidth.com/actually-using-your-raspberry-pi-part-2a-programming-with-scratch/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00229-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943368 | 1,392 | 3.09375 | 3 |
Communication among first responders post-disaster is essential, but when a city or town is hit by an earthquake or hurricane, for example, it's likely that broadband coverage will also take a hit.
In the same vein as Google's Project Loon, which aims to solve the digital divide using balloon-powered Internet access, mobile 4G LTE provider Oceus Networks has demonstrated that such a technology can be used to provide broadband coverage and communications services to first responders within 72 hours of an emergency or natural disaster.
Last week near Boulder, Colo., the company launched an airborne 4G LTE cellular network into the air that allowed engineers to "collect data and characterize the performance of a high-altitude 4G LTE public safety system," according to a press release.
And the trial was considered a success -- it traveled nearly 200 miles, reached an altitude of 75,000 feet, transmitted an LTE network signal that provided a 100 km radius of coverage, and it supports the FCC Deployable Aerial Communications Architecture initiative, which is exploring the role of High Altitude Platforms in the national public safety network.
To facilitate the test, the First Responder Network Authority (FirstNet), the FCC and the National Telecommunications and Information Administration allowed temporary authorization for Oceus Networks and project partner Space Data to use the public safety broadband spectrum in the 700 MHz band (Band 14) -- the same frequencies that will be used in the forthcoming nationwide public safety broadband network.
On the ground, the team used Band 14 devices – data modems (the VML 700) and smartphones (the LEX 700) – provided by Motorola Solutions, to connect to the mobile airborne LTE network.
Oceus Networks is compiling the test results, and will formally file a report with the FCC in its Deployable Aerial Communications Architecture docket. | <urn:uuid:4f8b4876-b4e5-4014-9a26-fdea66567899> | CC-MAIN-2017-04 | http://www.govtech.com/products/High-Altitude-Balloon-Provides-Broadband-to-First-Responders.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00045-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908438 | 369 | 2.53125 | 3 |
7.22 How are hardware devices made tamper-resistant?
There are many techniques that are used to make hardware tamper-resistant (see Question 7.21). Some of these techniques are intended to thwart direct attempts at opening a device and reading information out of its memory; others offer protection against subtler attacks, such as timing attacks and induced hardware-fault attacks.
At a very high level, a few of the general techniques currently in use to make devices tamper-resistant are:
- Employing sensors of various types (for example, light, temperature, and resistivity sensors) in attempt to detect occurrences of malicious probing.
- Packing device circuitry as densely as possible (dense circuitry makes it difficult for attackers to use a logic probe effectively).
- Using error-correcting memory.
- Making use of non-volatile memory so that the device can tell if it has been reset (or how many times it has been reset).
- Using redundant processors to perform calculations, and ensuring that all the calculated answers agree before outputting a result.
- 7.1 What is probabilistic encryption?
- Contribution Agreements: Draft 1
- Contribution Agreements: Draft 2
- 7.2 What are special signature schemes?
- 7.3 What is a blind signature scheme?
- Contribution Agreements: Draft 3
- Contribution Agreements: Final
- 7.4 What is a designated confirmer signature?
- 7.5 What is a fail-stop signature scheme?
- 7.6 What is a group signature?
- 7.7 What is a one-time signature scheme?
- 7.8 What is an undeniable signature scheme?
- 7.9 What are on-line/off-line signatures?
- 7.10 What is OAEP?
- 7.11 What is digital timestamping?
- 7.12 What is key recovery?
- 7.13 What are LEAFs?
- 7.14 What is PSS/PSS-R?
- 7.15 What are covert channels?
- 7.16 What are proactive security techniques?
- 7.17 What is quantum computing?
- 7.18 What is quantum cryptography?
- 7.19 What is DNA computing?
- 7.20 What are biometric techniques?
- 7.21 What is tamper-resistant hardware?
- 7.22 How are hardware devices made tamper-resistant? | <urn:uuid:0f6937df-5eb5-44d1-a803-a32779be2b79> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/how-are-hardware-devices-made-tamper-resisistan.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913408 | 519 | 3.140625 | 3 |
Everything has security problems, even Linux. An old and obscure problem with the gcc compiler was recently discovered to have left a security hole in essentially every version of Linux that anyone is likely to be running. Here's what you need to know about fixing it.
The problem itself was discovered by Brad Spengler, the hacker behind the open-source network and server security program, grsecurity. What he found was that in some network code, there was a procedure that included a variable that could be set to NULL (no value at all). Now, this didn't appear to be a problem because the programmer also included a test which would return an error-message if the variable turned out to have a NULL value.
So far, so good. Unfortunately, the gcc code optimizer on finding that a variable has been assigned a NULL value removed the test! This left a hole, that didn't exist in the original program. Using this hole, and code provided by Spengler, any cracker with sufficient access to a Linux computer could get into the computer's memory and, from there, get into all kinds of mischief. For more on the down and dirty technical details, turn to Jonathan Corbet's story, "Fun with NULL Pointers."
That was bad. But, then Google Security Team members Tavis Ormandy and Julien Tiennes discovered that this kind of problem existed in numerous, network protocol programs. To be exact, the problem exists in implementations of AppleTalk, IPX, IRDA, X.25, AX.25, Bluetooth, IUCV, INET6 (aka IPV6), PPP over X and ISDN. Except for IPV6 (Internet Protocol Version 6) and Bluetooth, many of you may never even have heard of most of those protocols, never mind used them.
That said, if the code for those protocols is in your Linux kernel, your Linux is vulnerable. Most of you, whether you know it or now, have one or more of those protocols active on your system, so this is not a small problem.
Fortunately, there are fixes. Instead of trying to clean up the protocol implementations one by one--I mean seriously does anyone actually use IUCV (Inter-User Communications Vehicle), an old IBM VM networking protocol?--Linus Torvalds elected to force all these protocols to use kernel_sendpage(), which does the right thing with code having this problem. As Torvalds wrote on the LKML (Linux Kernel Mailing List), "Now, arguably this might be something that we could instead solve by just specifying that all protocols should do it themselves at the protocol level, but we really only care about the common protocols. Does anybody really care about sendpage on something like Appletalk? Not likely."
So, the latest versions of the Linux kernel, 188.8.131.52 and 184.108.40.206, and for those using old Linux versions, Linux kernel 220.127.116.11, include this universal fix. Of course, you have to get that fix into what you're actually running.
Most of the Linux vendors are rapidly pushing out the patch. Ubuntu has released it for its entire family from Ubuntu 6.06 to Ubuntu 9.04. This will also eventually cover Ubuntu downstream distributions like Mint. Tiennes has also written a grsecurity package to protect Debian and Ubuntu users .
For Red Hat Linux users, there is a work-around, but you should know that there are reports that it doesn't cover all the bases. CentOS, which is based on Red Hat Linux, is recommending a similar fix, but it probably has similar problems. There is, however, a fix that's now available for Fedora 11.
Novell's SLE (SUSE Linux Enterprise) and openSUSE just released patches today for all currently supported versions.
If I haven't listed your distribution, check with your vendor or community. Within a few days, at most, you should have a fixed, and secure, Linux system. | <urn:uuid:d7817954-7d43-44b7-ad20-ab1943b4a8b9> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2467470/open-source-tools/fixing-linux.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950521 | 829 | 2.875 | 3 |
What are logical ways for us to discuss what we do? That question alone raises a host of further questions. Who is "we"? What is it we do? Who does what? Can something fall into multiple categories?
Now that IO has a new definition, there is no longer a clear-cut way to divide up the parts. My friend and mentor, Dr. Dan Kuehl, invented a model I like to use, called the Three C model. "What we do" can be divided into Connectivity, Content and Cognitive. I'm going to paraphrase below, probably badly, so please excuse me for not reproducing his highly refined explanation.
- Connectivity is how information is passed from Point A to Point B. This may be a broadcast message over FM radio; it might be via cyber in an email; it might be by fax, telephone, television, or even the spoken word from your mouth to my ear.
- Content is how we put the message together: what is contained within the message, or what is shown, heard, or even felt, tasted or smelled. In Afghanistan there is a low literacy rate, so more pictures are used. Content may also be a narrative; the words we choose can depend on culture, history, religion and a myriad of other factors.
- Cognitive is how the message is received and then internalized by our audience. I prefer to use Measures of Effectiveness as part of my initial planning process, so when planning and then conducting the rest of an information operation we can better measure the efficiency of our campaign. My friend Dr. Lee Rowland uses the principle of “under what conditions will a certain behavior change,” which is more difficult to determine but offers a much more refined approach and ensures that cognition and the efficiency of messaging are both easily measured and determined.
But IO cannot and will not work without including the rest of the government, not in peace, crisis or even war. I recently sat down with some friends and we discussed information operations at a higher level, at the governmental level. In the US the Department of Defense does IO, the Department of State is in charge of Strategic Communication and Public Diplomacy, but I was having problems describing a “whole of government” approach, and I was having even more difficulty explaining how a “whole of nation” effort might be divided. We finally came up with five categories for what I might call government/corporate/private information activities.
- Information Operations. The integrated employment, during military operations, of information-related capabilities in concert with other lines of operation to influence, disrupt, corrupt or usurp the decision-making of adversaries and potential adversaries while protecting our own. This will include the Department of Defense, including Cyber Command, and the CIA. Most important, these are the only entities that may conduct offensive operations; they can break things.
- Strategic Communication & Public Diplomacy: SC: the synchronized coordination of statecraft, public affairs, public diplomacy, military information operations, and other activities, reinforced by political, economic, military, and other actions, to advance U.S. foreign policy objectives. PD: communication with foreign publics to establish a dialogue designed to inform and influence. SC/PD would also include “liberation technologies” or ways to bypass, circumvent and/or thwart blocking, filtering and jamming by authoritarian governments. This will include the Department of State, the BBG and others as identified.
- Information Research and Analysis. Data, information and intelligence collection, reporting by all media, analysis, editing and publishing. This will include reporters, editors, intelligence collection, intelligence analysis and publishing.
- Technical Innovation. How we communicate information. This includes cyber, communication means of all types, and efforts of assuring information and managing risks related to the use, processing, storage, and transmission of information or data and the systems and processes used for those purposes. This will include information assurance, cyber defense and research and development efforts for the storage and transmission of information, broadcast, satellite, telegraph, even semaphores. This includes DISA, corporate and private R&D efforts.
- Information Infrastructure Assurance. Efforts to protect government, corporate and private infrastructure from natural and manmade threats. This would include the Department of Homeland Security and other efforts to protect critical and private infrastructure.
The problem I seem to have is categorizing military public affairs, I might have to change the name to military information activities or some such generic name. Many Public Affairs officers seem to believe they can inform without influencing.
I am also not certain how to include discussions on content, such as a narrative. Cultural, religious and historical considerations also may be discussed. Where would they fit in?
I also can’t forget the methodology of efficiency: how do we determine Measures of Effectiveness? Once again, the voice of my friend and mentor, Dr. Dorothy Denning, reminds me of this important consideration.
If I take a whole of nation approach then I should include marketing, public relations, perception management, reputation management and strategic communications (with an s).
What have I missed? What are your suggestions for better divisions and inclusions?
Cross-posted From To Inform is to Influence | <urn:uuid:a50a6bb8-8e10-4bb6-a09d-cc84560b026b> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/22480-How-Best-to-Discuss-a-Whole-of-Nation-Approach-to-Information-Activities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00183-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943612 | 1,086 | 2.546875 | 3 |
There is really so much junk floating around in space that the government needs help keeping track of it all. This week the Defense Advanced Research Projects Agency announced a program to enlist amateur astronomers to help watch the sky for any dangerous junk that may be threatening satellites, other spacecraft or even the Earth. If you have a telescope, great, but the program will even install equipment if you are in a strategic area the government wants to watch.
DARPA's program, known as SpaceView, is strategically aimed at offering more diverse data to the Space Surveillance Network (SSN), a US Air Force program charged with cataloguing and observing space objects to identify potential near-term collisions.
With SpaceView, "state of the art hardware and relatively minor financial compensation may be provided in exchange for the shared telescope time, site security, and routine maintenance. This allows the SpaceView concept to significantly reduce deployment costs when compared to traditional optical space-surveillance facilities. Equally important, remote observing and the availability of the local SpaceView member for troubleshooting eliminates the need for any paid employees at the site, further decreasing operational costs," DARPA stated.
According to the agency, SpaceView is in its initial development phase, which consists of building the network architecture and demonstrating the ability to remotely and automatically operate a network of sites from a central location. A large part of developing the network architecture consists of determining the needs of the amateur astronomy community so that those needs can be aligned with the space-surveillance needs of SpaceView, DARPA stated.
If you are interested in signing up, you can do so on DARPA's SpaceView site. According to DARPA, by providing contact information and the answers to a few basic questions, volunteers help the agency begin gathering the information it needs to develop the network architecture concept more thoroughly. Once that information has been received by SpaceView, interested parties will most likely receive a link via email to a questionnaire requesting more detailed information regarding their astronomy background, observing habits and other demographic information. SpaceView will use this information to determine the habits and needs of candidate network members.
NASA estimates more than 500,000 pieces of hazardous space debris orbit the earth, threatening satellites that support peacekeeping and combat missions.
Examples of what NASA calls orbital debris include: "Derelict spacecraft and upper stages of launch vehicles, carriers for multiple payloads, debris intentionally released during spacecraft separation from its launch vehicle or during mission operations, debris created as a result of spacecraft or upper stage explosions or collisions, solid rocket motor effluents, and tiny flecks of paint released by thermal stress or small particle impacts. "
According to NASA, the top 10 space-junk-producing missions, with year of breakup, number of debris items created and cause of breakup, are:
- Fengyun-1C (2007): 2,841 debris items, intentional collision
- Cosmos 2251 (2009): 1,267 debris items, accidental collision
- STEP 2 rocket body (1996): 713 debris items, accidental explosion
- Iridium 33 (2009): 521 debris items, accidental collision
- Cosmos 2421 (2008): 509 debris items, cause unknown
- SPOT 1 rocket body (1986): 492 debris items, accidental explosion
- OV2-1 rocket body (1965): 473 debris items, accidental explosion
- Nimbus 4 rocket body (1970): 374 debris items, accidental explosion
- TES rocket body (2001): 370 debris items, accidental explosion
- CBERS 1 rocket body (2000): 343 debris items, accidental explosion
Forty-five percent of legislators and cybersecurity experts representing 27 countries think cybersecurity is just as important as border security, according to a report by McAfee and the Security Defense Agenda that was released this week.
The organizations surveyed 80 professionals from business, academia and government to gauge worldwide opinions of cybersecurity issues and revealed the findings in a report called Cyber-Security: The Vexed Question of Global Rules.
Researchers found differing views on what cybersecurity means and how to approach it. They also analyzed the steps countries are taking to handle national cybersecurity.
Among the findings highlighted in the report's executive summary is an interesting paradox in the area of national cyberhealth: the largest countries with the most sophisticated Internet access are the most at risk, but they are also the most "cyberliterate," and thus the best prepared to react if attacked. Countries with less sophisticated Internet connections generally are less vulnerable to cyberattacks.
Twenty-three countries were ranked on a scale of 1 (lowest) to 5 (highest) on their cyberdefense: Finland, Israel and Sweden scored the highest at 4.5. Eight countries — Denmark, Estonia, France, Germany, Netherlands, Spain, the UK and America — scored 4. Australia, Austria, Canada and Japan scored 3.5; China, Italy, Poland and Russia scored 3; Brazil, Romania and India scored 2.5; and Mexico scored 2.
It’s no secret that nation-states are potential culprits of cybercrime, not just the targets. David Marcus, director of advanced research and threat intelligence at McAfee Labs, feels that the IT community needs to develop a way to prove if a foreign government is behind an infiltration.
“No one has said, ‘Let’s take the 30 or so countries we think have offensive cybercapabilities and grade what they are and how they differ,'” he said. Marcus wants a country-by-country rating methodology for offensive capabilities as well as defensive scores.
Interestingly enough, the report revealed disagreement between cyberexperts on how to view international cyberterrorism. Twenty-six percent of the respondents felt that the term “cyberwar” was inaccurate or scaremongering, but 45 percent felt it was accurate.
Sixty-two percent consider cyberspace a common global field, like sea or space. “The people who pooh-pooh cyberwar do so mainly by saying that no war takes place in cyberspace only. That’s like saying air wars only took place in the air, when air warfare is always part of a larger battle,” said Stewart Baker, former assistant secretary of homeland security under President George W. Bush.
The executive summary offers recommendations as well. Suggestions include improving communication between influential groups, like technology experts, business leaders and legislators, at national and international levels, and setting up bodies to share cybersecurity best practices.
Photo courtesy of Bill Morrow / Flickr CC | <urn:uuid:f1bf723a-3335-4a8e-bf2a-a3fc25db6677> | CC-MAIN-2017-04 | http://www.govtech.com/security/Is-Cybersecurity-as-Important-as-Border-Security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95885 | 607 | 2.90625 | 3 |
The chip maker has produced a 45-nanometer test chip, a sign that it's on track to roll out its next generation of manufacturing technology in the latter half of 2007.
Intel said it has reached an important milestone on the path to rolling out the next generation of chip manufacturing in 2007.
The chip giant said on Jan. 25 that it has completed a test chip using its forthcoming 45-nanometer process, dubbed P1266, which it expects to roll out in the second half of 2007.
The test chip, completed earlier this month, uses the same circuitry that Intel will put into production when it begins 45-nanometer manufacturing, scheduled for the second half of 2007, Intel representatives said.
Because the chip (which contains static RAM memory cells and logic circuits) includes the same circuitry Intel will put into production, its arrival signals the chip maker is keeping pace with its own internal targets (Intel historically rolls out a new manufacturing process every two years) as well as Moore's Law.
The prediction, made by Intel founder Gordon Moore, says the number of transistors inside chips doubles about every two years, thus raising performance.
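As a rough illustration of that cadence (the starting transistor count and time horizon below are made-up numbers, not Intel figures), the doubling compounds quickly:

```c
#include <stdio.h>
#include <math.h>

/* Moore's observation as a formula: count(t) = count(0) * 2^(years/2). */
int main(void)
{
    double base = 100e6; /* hypothetical starting count: 100 million */
    for (int years = 0; years <= 8; years += 2)
        printf("year +%d: ~%.0f million transistors\n",
               years, base * pow(2.0, years / 2.0) / 1e6);
    return 0;
}
```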
"Were pretty excited weve made such a large, dense structure with such a tiny SRAM cell and its working early in the programprobably earlier than what weve done on earlier technology," said Mark Bohr, director of process architecture and integration at Intel in Hillsboro, Ore.
"So its a very encouraging start to our 45-nanometer program. It makes me confident that we are on track for second half 07 [processor] shipments."
Intel and other chip makers generally use SRAM chips to test out new manufacturing processes. The company has added logic circuits to its SRAM process-technology test chips before, but it did not disclose doing so until now.
Intel typically creates its test chip about a year and a half before it aims to begin full manufacturing on a given process.
"So far were on track for doing the same thing on the 45-nanometer node," Bohr said. "This combination is showing not only a fully functional SRAM test chip, but also the logic [inside it] really demonstrates how far ahead we are of our competitors," Bohr said.
"Many of them are still trying to achieve this on their 65-nanometer technology."
Swapping manufacturing processes has become more difficult as the feature sizes inside each chip get smaller with each generation, chip makers said.
However, the move generally yields gains in performance and power consumption and reductions in chip size, meaning chip makers can crank out more processors per wafer than before. The size reduction also cuts chip manufacturing costs, which can help offset the billions of dollars it takes to develop new processes and construct or re-outfit manufacturing plants.
The 45-nanometer transition could also offer Intel, which has been battered by rival AMD in recent months, the ability to use its manufacturing might to fight back in the long term.
During the fourth quarter, for example, AMD increased its market share by almost four points, while Intel admitted to losing share.
Photo: A Centers for Disease Control and Prevention microbiologist reveals an egg’s contents through a translucent shell. This procedure allows microbiologists to determine the viability of eggs used in the isolation of influenza virus.
The lack of H1N1 vaccines is causing some localities to halt or postpone mass vaccination clinics, and even to close the emergency operations centers (EOCs) they opened to organize events to vaccinate their at-risk populations. U.S. Health and Human Services Secretary Kathleen Sebelius said 69 million H1N1 vaccine doses are available or have been administered, but the federal government’s goal was to have 160 million people vaccinated by the first week of December. Sebelius told a meeting of the American Medical Association that technology is one of the impediments to creating new vaccines.
“We were fighting the 2009 H1N1 flu with vaccine technology from the 1950s,” she said. “We could race to begin vaccine production, but there was nothing we could do if vaccine grew slowly in eggs. We could make deals with foreign vaccine producers ahead of time, but we still wouldn’t have as much control over the vaccine as if they were based in the U.S.”
Cities and counties nationwide are trying to vaccinate their at-risk populations — which include pregnant women, people between the ages of 6 months and 24 years old, and people with chronic health disorders or compromised immune systems — but the lack of vaccines is making that task difficult. Washington County, Ore., recently closed its EOC that was opened to plan and coordinate mass vaccination clinics. According to Scott Porter, director of the county’s Office of Consolidated Emergency Management, nine mass vaccination clinics were completed during a two-week period.
Porter said there were two main reasons why the EOC was shut down:
“We decided that doing these mass vaccination clinics where all priority groups were invited was not the best way to get to those people with chronic medical conditions,” he said. “So we decided to stop doing those clinics and move the vaccines to health-care providers who know who their patients are with chronic medical conditions.”
Porter added that this was a position the state took that the county agreed with. Although the Centers for Disease Control and Prevention identified the at-risk populations, the information didn’t match data that showed who was being hospitalized and who was dying from H1N1. “It became clear that there were some populations within those priority groups who were suffering at a much higher rate than the rest of the priority groups,” he said. “And those were primarily people who have chronic medical conditions and also pregnant women.”
Other localities also are changing vaccination plans due to the lack of vaccine. Crawford County, Ark., postponed its mass vaccination clinic that was scheduled for Thursday, according to the Press Argus-Courier. A county health administrator told the newspaper the county is waiting until it has an adequate supply of both H1N1 and seasonal vaccines.
Sebelius told the American Medical Association meeting that the federal government has talked about updating vaccine technology for years. She said action is being taken: in late November, a cell-based vaccine production plant was opened in North Carolina with support from the U.S. Health and Human Services Department.
“When this plant is up and running in 2011, it will be able to produce vaccine for a significant share of our population within six months of the onset of a pandemic,” Sebelius said. “What’s even more important is that this process will end our reliance on egg-based technology. That will allow the plant to produce vaccine faster and with no danger of egg-based allergies.”
The Health and Human Services Department also will review how its policies affect vaccine development and production, and strengthen its surveillance capability to prepare for future public health threats.
[Photo courtesy of James Gathany/CDC.] | <urn:uuid:db282654-1e11-477a-b64c-1d7bbdfd0036> | CC-MAIN-2017-04 | http://www.govtech.com/em/health/Lack-of-H1N1-Vaccines-Causes-Localities-to-Postpone-Mass-Vaccination-Clinics.html?elq=81e59b3baec64c3f874fa7115ed4aeb6 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.981499 | 821 | 2.875 | 3 |
For years we've been obsessed with increasing chip processing power. Intel's i386, launched in 1985, followed by the i486 in 1989, introduced economical multitasking and number crunching to the enterprise.
In the following years, the chips got more powerful still, culminating with today's hundred-dollar smartphone threatening the PC.
It could be argued that we've reached an acceptable level of multitasking and personal computing power for cost. We've found it in small-form-factor smartphones, and it may be all we really need now.
Just as well, then, because processing power as the chip's holy grail is about to be completely displaced by another requirement: battery life.
Internet of Things
The Internet of Things (IoT) is about to reverse a lot of what we've wanted in a chip.
Soon, we won't need vast amounts of calculations per second—just how many instructions does it take for your fridge to send an order to your supermarket? Not that many when you compare it to something complicated that chip design has been working towards, like a Computer Aided Design drawing in 3D, for example.
Size is important. However, the real big issue, when it comes to a ubiquitous IoT where everything is connected, will be battery life.
The reason is that we are not going to want to change the batteries within the base of a dozen bottles of water that we may have sitting around just to discover whether we've drunk their contents or not. Even if your fridge orders fresh stock, it wouldn't be worth it.
Same with a dozen or so planter pots sitting in the yard. Great, they talk to the sprinkler system. Very eco-friendly, and maybe they score a 10 out of 10 on the carbon footprint elimination scale, but it's not so great if you've got to change dozens of batteries, even annually.
That battery has to last the life of the connected object in the IoT. And that could be 10 years away, possibly longer.
Chip-maker Atmel reckons it has a solution. It says its new 32-bit ARM-based chips will last decades. Note the plural.
Atmel says its new chips combine battery-saving low power with flash and SRAM that is big enough to run both the application and the IoT-needed wireless stacks.
In its marketing, Atmel proffers IoT use scenarios such as "fire alarms, healthcare, medical, wearable, and devices placed in rural, agriculture, offshore and other remote areas."
Along with IoT, wearable is a key word here. Atmel says that its chips, called SAM L21, are so low-power that they can be run off energy captured from the body.
Sean Gallagher, writing about the SAM L21 chip for Ars Technica, says the manufacturer demonstrated this kind of human energy-sourcing with the chip at CES in January.
In the article, Gallagher also says that an Atmel marketing person told him that the SAM L21 chips were 50% more efficient than other low-power microcontrollers when comparing microamps per MHz.
In fact, power consumption in these chips is 35µA (microamperes) per MHz in active mode and 200nA (nanoamperes) in deep sleep mode. There are a million microamperes in an amp, and a billion (1,000,000,000) nanoamperes in an amp, to give you some perspective.
A traditional laptop charger ordinarily uses a little over three amps, for comparison.
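Those vendor figures make a back-of-envelope lifetime estimate easy. In the sketch below, the clock speed, duty cycle and coin-cell capacity are my assumptions, and real cells self-discharge, so treat the result as an optimistic upper bound:

```c
#include <stdio.h>

/* Back-of-envelope battery-life estimate using the published figures
 * (35 uA/MHz active, 200 nA deep sleep). The clock speed, duty cycle
 * and battery capacity are illustrative assumptions. */
int main(void)
{
    double active_ua   = 35.0 * 4.0;  /* 35 uA/MHz at an assumed 4 MHz */
    double sleep_ua    = 0.2;         /* 200 nA expressed in uA        */
    double duty        = 0.001;       /* awake 0.1% of the time        */
    double battery_uah = 220000.0;    /* CR2032-class cell: ~220 mAh   */

    double avg_ua = active_ua * duty + sleep_ua * (1.0 - duty);
    double hours  = battery_uah / avg_ua;
    printf("average draw: %.3f uA -> about %.1f years\n",
           avg_ua, hours / (24.0 * 365.0));
    return 0;
}
```

With those assumptions the average draw is about a third of a microamp, which works out to decades on a single coin cell and shows why sleep-mode current, not active current, dominates the math.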
How is Atmel doing it? Mainly by creating efficiencies in sleep modes, where power previously leaked even when it wasn't needed. Plugging that leakage is the main way this chip saves power, and thus provides the battery savings.
The molecule cascade technique, in which individual molecules move across an atomic surface like toppling dominoes, enabled IBM scientists to construct working digital-logic elements 260,000 times smaller than those used in today's most advanced semiconductor chips.
The findings were published today on the Science Web site.
The technique works because carbon monoxide molecules can be arranged on a copper surface in an energetically metastable configuration that can be triggered to cascade into a lower energy configuration, similar to toppling dominoes. The metastability is due to the weak repulsion between carbon monoxide molecules placed only one lattice spacing apart.
IBM scientists compare the technique to placing tennis balls next to each other in an egg carton. Since the tennis balls are slightly larger than the lattice spacing of the carton, they push against each other and can't nestle down into the hollows of the carton as deeply as they could if they were more widely separated.
Just as placing three tennis balls in a row of an egg carton is unstable, Andreas Heinrich, a physicist at IBM's Almaden Research Center in San Jose, Calif., and his colleague Christopher Lutz discovered that a triad of carbon monoxide molecules arranged in a chevron-shaped pattern on the copper surface would spontaneously rearrange by the outward motion of the central molecule. They then designed ways to link pairs of molecules so the rearrangement of an initial chevron formed a new chevron, and so on, in a cascade of molecular motion.
With respect to the computing done by the circuit, think of a cascaded molecular array as a 1 and a non-cascaded molecular array as a 0, the bits that make up all logic computing. The logic AND and OR operations and other features needed for complex circuits are created by intersections of two cascades. Heinrich and Lutz designed molecular arrangements that acted as "crossovers" (allowing two cascade paths to cross over each other) and "fanouts" (splitting one cascade into two or more paths).
The most complex circuit built using the technique was a 12x17-nanometer three-input sorter. It is so small that 190 billion of the circuits could fit atop a standard pencil-top eraser 7 millimeters in diameter.
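Functionally, a three-input sorter over bits can be expressed with nothing but the AND and OR operations those cascade intersections provide. Here is a sketch of the Boolean function such a circuit computes; this is my illustration of the logic, not IBM's molecular layout:

```c
#include <stdio.h>

/* A three-input bit sorter built from AND and OR only: the three
 * outputs are the inputs in descending order. */
static void sort3(int a, int b, int c, int out[3])
{
    out[0] = a | b | c;                   /* max: any input set    */
    out[1] = (a & b) | (b & c) | (a & c); /* median: majority vote */
    out[2] = a & b & c;                   /* min: all inputs set   */
}

int main(void)
{
    int out[3];
    sort3(1, 0, 1, out);
    printf("%d %d %d\n", out[0], out[1], out[2]); /* prints: 1 1 0 */
    return 0;
}
```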
"This is a milestone in the quest for nanometer-scale computer circuitry," Heinrich said. "The molecule cascade is not only a novel way to do computation, but it is also the first time all of the components necessary for nanoscale computation have been constructed, connected and then made to compute. It is way smaller than any operating circuits made to date."
As a sign of how far this technology has to go before it can be used commercially, IBM noted that the molecule cascades are currently assembled by moving one molecule at a time using an ultra-high-vacuum, low-temperature scanning tunneling microscope. That means it takes several hours to set up the most complicated cascades, and because there is no reset mechanism, these molecule cascades can only perform a calculation once. | <urn:uuid:4ee3a416-61fd-4af8-99af-7888e1fc953c> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/1488071/IBM-Claims-Milestone-in-Tiny-Circuitry.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940735 | 638 | 3.78125 | 4 |
File Descriptor: An integer that identifies an open file within a process. The number is created when the file is opened, and anything that reads, writes, or closes the file uses the file descriptor as an input parameter. In Unix, file descriptors 0, 1, and 2 refer to the standard input, standard output, and standard error files respectively.
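A minimal C sketch of those rules (the file path is an arbitrary example): open() hands back the lowest unused integer, which is 3 in a fresh process because 0 through 2 are already taken, and read(), write() and close() all take that integer as their input parameter.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "written via descriptor 1 (stdout)\n";
    write(1, msg, strlen(msg));               /* fd 1 = standard output */

    int fd = open("/etc/hostname", O_RDONLY); /* typically returns 3    */
    if (fd >= 0) {
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            write(2, buf, (size_t)n);         /* fd 2 = standard error  */
        close(fd);                            /* descriptor 3 is freed  */
    }
    return 0;
}
```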
Perl: Practical Extraction and Report Language, created by Larry Wall in 1987.
CGI: Common Gateway Interface. The mechanism by which servers process user input on the server side and return output.
*Always set $PATH and $IFS in all CGI scripts. DON’T trust the preset values of these variables
*VALIDATE USER INPUT. Only allow the characters that are needed for a particular field. Rather than eliminating what shouldn’t be used, disallow everything and allow only certain things (a default-closed configuration). Investigate the possible use of well-known input-validation functions if you don’t want to write your own; a C sketch of this and the environment rule above follows this list.
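A minimal C sketch of both precautions. The PATH value and the allowed character set are illustrative choices, not canon:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Default closed: accept only explicitly allowed characters, here an
 * assumed allowlist for a username-like field. */
static int is_valid(const char *input)
{
    const char *allowed = "abcdefghijklmnopqrstuvwxyz"
                          "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                          "0123456789_-";
    return input[0] != '\0' && strspn(input, allowed) == strlen(input);
}

int main(void)
{
    /* Never trust inherited environment values: set them yourself. */
    setenv("PATH", "/bin:/usr/bin", 1);   /* 1 = overwrite if present */
    setenv("IFS", " \t\n", 1);

    printf("%d\n", is_valid("alice_01"));     /* 1: passes            */
    printf("%d\n", is_valid("alice;rm -rf")); /* 0: rejected outright */
    return 0;
}
```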
PHP: Created by Rasmus Lerdorf, originally as a Perl CGI script called “Personal Home Page”, or simply “PHP”. The original purpose of the script was to log visitors to the resume page on his website. Like Perl, PHP must be used within HTML in order to work over the web.
After the dust settled from the controversial 2000 presidential race, election systems across the country gained substantial notoriety as policy-makers began scrutinizing the technology behind voting and the tracking of registered voters.
Former Presidents Jimmy Carter and Gerald Ford headed the National Commission on Federal Election Reform, which was given the task of recommending how to improve the accuracy and fairness of federal elections. From those recommendations came two pieces of election-reform legislation from Congress that, among other things, targeted statewide voter registration systems. In early October, a congressional conference committee resolved the differences between the House and Senate bills, and Congress passed a single election-reform bill. Though the House bill offered very specific language describing the structure of statewide voter-registration systems, the Senate bill didn't, partly leading to the need for a conference committee to hammer out one piece of legislation.
The good news: The wait for a federal decision on statewide voter-registration systems is finally over. The bad news: States still face uncertainty in deciding what Congress actually meant under the provisions of the election-reform bill referring to statewide voter-registration systems.
Some observers doubt that states can create new voter-registration systems in time for the 2004 presidential election. Along with uncertainty over the federal mandate, natural tension between state and local officials often increases the difficulty of transitioning to statewide voter-registration systems.
Redefining the Relationships
"If you look at Florida 2000, that was a snapshot of the existing relationship between state and local election officials," said Doug Chapin, director of Electionline.org, a project of the University of Richmond, in Virginia. The project, supported by a grant from the Pew Charitable Trusts, serves as a clearinghouse for information on election reform efforts across the country.
State officials typically have responsibility for elections, he said, which includes tasks such as certifying election results, ensuring compliance with state election laws and coordinating among local jurisdictions. On the other hand, local officials decide how votes are counted, which voting machines are used, where polling places are located and how those polling places are staffed, he said.
"What the federal law is going to do is take some of the authority held by local officials and either legally or functionally shift it to state officials," he said. "State officials are going to not only have responsibility to implement the law, but they're going to have to assume some authority to make those things happen, rather than simply persuade local officials to go along.
"Statewide voter registration databases are going to be a prime example of that," he said.
According to Electionline.org's research, only 13 states have implemented a unified voter registration system (in which states and localities share the same database, and changes are made by local or state officials, or by both) that complies with the House election-reform bill - the measure that requires a statewide system that local elections officials can access and use to view the voter lists of other jurisdictions.
"Given the way that elections run in this country, there are lots of states that have the functional equivalent of a statewide database," Chapin said. "It's the difference between a unified database and an accessible compilation database - where everyone has their own database but it's on a common dictionary and they can exchange data between jurisdictions."
Many states, needing to walk a fine line in balancing state and local control of voter data, will point to these "functional equivalents" as proof that they have met federal requirements for statewide voter-registration databases. But ultimately, those functional equivalents may not meet the federal mandate, which specifies state control.
"Local election officials want to keep some hand in their own voter rolls, rather than completely turning over control to the state elections | <urn:uuid:40111732-4892-489e-b3f7-0c5150caf921> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/Waiting-for-Direction.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00082-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956052 | 758 | 2.828125 | 3 |
In an effort to improve security against explosives without further disrupting the flow of foot traffic through an airport, researchers developed a boarding-gate prototype capable of detecting minute quantities of explosives. The boarding gate, which will be on display at the Special Equipment Exhibition & Conference for Anti-Terrorism in Tokyo on Oct. 17, uses high-speed air currents to achieve high sensitivity to certain particles.
The technology, developed by Hitachi, the Nippon Signal and the University of Yamanashi, is expected to be used in places like stadiums, train stations and airports. Passengers simply walk through the device as they would a normal hallway and within one to two seconds, the presence of explosive compounds can be identified. Using this method, it is possible to inspect 1,200 passengers per hour, Phys.org reported.
The device also scans smart cards or portable devices while passengers walk through to ensure they are supposed to be there. Similar high-sensitivity mass-spectrometer technologies are being developed that can reportedly identify the presence of drugs, a change in mood or what a person ate for breakfast by scanning a few molecules from meters away. | <urn:uuid:d9b10eb4-447a-4b49-9edb-61ccb4970c98> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/New-Explosives-Scanner-Keeps-People-Moving.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00386-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950889 | 231 | 2.625 | 3 |