List of Quality Management Standards and Frameworks
The ISO 9000 Family of Standards
The ISO 9000 family of standards has three documents, with one additional supplementary document attached to the family. ISO 9000, ISO 9001 and ISO 9004 compose the family of ISO 9000 documents; ISO 19011, guidelines for auditing management systems, is attached, as it is the auditing requirements document used to audit an ISO 9001 quality management system.
ISO 9000: This is a standard that is referenced in ISO 9001, ISO 9004, AS9100 and many other documents regarding a quality management system. ISO 9000 is the first document in the ISO 9000 family of standards and has two main purposes. Firstly, it is used to define the many terms that are used throughout the quality management system standards. Secondly, it describes the fundamental quality management principles that are behind the ISO 9001 standard for implementing a quality management system. It is not, however, a document containing requirements against which a company can certify its quality management system; this is available through the ISO 9001 standard.
ISO 9001: The most commonly used set of requirements for designing a QMS, it includes requirements for developing and implementing a quality management system based on improving customer satisfaction. The requirements are aligned in a PDCA improvement cycle (Plan-Do-Check-Act cycle) of Planning for the work of the QMS, Doing the work of the QMS, Checking the work of the QMS against requirements and Acting to correct any problems that occur which will feed back into the next round of planning. For more information on how this works, see Plan-Do-Check-Act in the ISO 9001 Standard. ISO 9001 provides the information necessary for a company to implement a quality management system, and a QMS certification against ISO 9001 is recognized worldwide.
ISO 9004: This is a standard that can accompany ISO 9001 for implementing a quality management system, but is not necessary to do so. This document is designed to provide guidance to any organization on ways to make their quality management system more successful. Unlike ISO 9001, ISO 9004 is not intended for certification, regulatory or contractual use. This means that you cannot certify your quality management system to ISO 9004. It also means that the use of ISO 9004 is not intended to be mandated as a legal or contract requirement. The standard is, however, a good reference to turn to for ideas in how to make your implementation of ISO 9001 more effective and successful. For more information on this standard, see ISO 9004, which explains the structure in greater detail.
ISO 19011: This is also a standard published by the International Organization for Standardization, and it provides guidelines for auditing a management system. The standard covers how to establish an audit program as well as how to conduct successful audits. It is used as a resource to train anyone who audits quality and environmental management systems, and the auditors who certify that companies have met the requirements of standards such as ISO 9001, ISO 14001 and the like are trained using this standard.
Other Common Quality Management System Standards
Below are some of the more common quality management standards that are specialized for certain industries. These systems, like ISO 9001, provide requirements that can be used to design and create a quality management system for a company.
AS9100: This is a standard that is based on ISO 9001 and has additions designated for use in the Aerospace Industry. The additions include such main topics as Risk Management and Configuration Management. A QMS can be certified by a third party to comply with this standard. For more, see AS9100: What it is and how it relates to ISO 9001.
ISO 13485: This is a standard published by ISO for use by companies that want to design a QMS for medical devices and meet the regulatory requirements surrounding them. A third party can certify a company’s QMS to this standard.
ISO/TS 16949: This document includes requirements for the application of ISO 9001 to automotive production and service part organizations. The requirements include all of the additional QMS requirements agreed upon by the main automotive manufacturers to accompany ISO 9001. In addition, each main automotive customer that a company works with has its own addendum to the TS 16949 requirements that is specific to that customer. A QMS designed using these requirements can also be certified against them.
MBNQA: The Malcolm Baldrige National Quality Award recognizes U.S. organizations for performance excellence. The award has a set of requirements against which a company could design and assess a QMS built around the criteria for promoting business excellence. Apart from external assessments to attain the award, there is no ongoing certification against these requirements.
Quality Frameworks that support Quality Management
The following items are quality concepts that support an organization in pursuing improvements and quality excellence, but they are not designed as sets of requirements against which to create a quality management system, and a QMS cannot be certified against these guidelines.
Lean: The core idea is to maximize value by eliminating waste. The main concept is that anything that adds cost to a product, but not value, is waste and should be controlled or eliminated. Lean concepts are used to improve processes by removing waste, thus making them more efficient. The concept of lean (also referred to as lean manufacturing, lean enterprise or lean production) was derived in the 1990s mostly from the Toyota Production System, which used a concept of the reduction of “seven wastes” to improve customer value.
Six Sigma: This is a set of tools and techniques used for process improvement by focusing on using the statistical outputs of the process to improve the process. It is used in many organizations to support the QMS by helping to improve processes, but Six Sigma does not define a QMS. The tools of Six Sigma were developed by Motorola in 1986 as a means of improving the quality of processes and their outputs by identifying and eliminating the causes of defects.
TQM: Total Quality Management consists of practices designed to improve the process performance of a company. The techniques help improve efficiency, problem solving and standardization of processes. These techniques are used to aid in quality management, but do not provide a framework for a Quality Management System. The concept of TQM originated in the early 1980s and became widespread near the end of that decade. It was mostly supplanted by ISO 9001, Lean and Six Sigma by the late 1990s; however, many of the concepts are still used in conjunction with these other philosophies.
Most implementations of “trace” will send several probe packets at each TTL and display the round-trip time for each. For example, Cisco IOS and MS Windows do three probes per hop by default. The display then resembles a table with a row for each hop, and the columns are hop number, the round-trip times, and the router address at that hop. If DNS or host table info is available, the trace program can also supply the hostname of the device at each hop. Doing our previous trace from H1 to H2 with the default settings might look something like this:
H1#trace ip 18.104.22.168
Type escape sequence to abort.
Tracing the route to 22.214.171.124
1 R1 (126.96.36.199) 1 msec 2 msec 1 msec
2 R2 (188.8.131.52) 2 msec 2 msec 2 msec
3 R3 (184.108.40.206) 4 msec 3 msec 3 msec
4 H2 (220.127.116.11) 5 msec 4 msec 5 msec
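Output in this tabular form is regular enough to parse programmatically. As an illustration, here is a hypothetical Python helper matched only to the Cisco-style hop lines shown above (the name, pattern, and return shape are my own, not part of any trace tool):

```python
import re

# Matches one hop line of the Cisco-style trace above, e.g.
#   "1 R1 (126.96.36.199) 1 msec 2 msec 1 msec"
HOP_RE = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)\s+(.*)$")

def parse_hop(line):
    """Return (hop, name, ip, [rtt_ms, ...]), or None for non-hop lines."""
    m = HOP_RE.match(line)
    if not m:
        return None
    hop, name, ip, times = m.groups()
    # Pull out every "<n> msec" round-trip time in the remainder of the line.
    rtts = [int(t) for t in re.findall(r"(\d+)\s+msec", times)]
    return int(hop), name, ip, rtts

print(parse_hop("1 R1 (126.96.36.199) 1 msec 2 msec 1 msec"))
# → (1, 'R1', '126.96.36.199', [1, 2, 1])
```

The Windows "tracert" lines further down would need a different pattern, since the columns are reordered and the address isn't parenthesized.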
The Cisco “traceroute” program offers many other options as well, but you have to run it from privileged mode to use them. These include the ability to specify:
- DNS resolution of IP addresses
- Source address or interface
- Number of probe packets at each hop
- UDP port number
- Reply timeout
- Range of TTLs
Now, here’s a “tracert” from a Microsoft Windows 7 host, using the default options:
Tracing route to 18.104.22.168 over a maximum of 30 hops
1 1 ms 1 ms 1 ms 22.214.171.124
2 1 ms 1 ms 1 ms 126.96.36.199
3 2 ms 1 ms 1 ms 188.8.131.52
4 2 ms 1 ms 1 ms 184.108.40.206
As you can see, aside from the ordering of the columns, the Microsoft “tracert” displays pretty much the same information as the Cisco “traceroute”. What’s not apparent from the display is that there is one *BIG* difference between the implementations, which is that the Cisco “traceroute” program uses UDP probe packets, while Microsoft’s “tracert” uses ICMP echo requests (“pings”). This could yield dramatically different results when tracing through firewalls or routers with access control lists.
For this reason, there are also utilities that support tracing using TCP probe packets, such as “tcptraceroute”. With this, you could do your traces using TCP port 80, or some other port that matches that used by an allowed application. By the way, Unix implementations of “traceroute” also use UDP probe packets. There are other related utilities, such as Microsoft’s “pathping” and the open-source “MTR”, which trace to the destination and then ping each hop repeatedly in an effort to gather sufficient timing information to calculate more reliable timing statistics than does “tracert”.
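The richer statistics that "pathping" and "MTR" report are simple aggregates over many repeated per-hop samples. A rough Python sketch of that bookkeeping (illustrative only; neither tool actually works this way internally, and the field names here are invented):

```python
import statistics

def summarize(rtts_ms):
    """Aggregate repeated RTT samples for one hop, MTR-style."""
    return {
        "sent": len(rtts_ms),
        "best": min(rtts_ms),
        "worst": max(rtts_ms),
        "avg": round(statistics.mean(rtts_ms), 1),
        "stdev": round(statistics.stdev(rtts_ms), 1),
    }

print(summarize([1, 2, 1, 3, 2]))
# → {'sent': 5, 'best': 1, 'worst': 3, 'avg': 1.8, 'stdev': 0.8}
```

With enough samples, the standard deviation is what separates a consistently slow hop from one that is merely jittery, which a single three-probe trace can't tell you.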
One more thing … on a Cisco, of course, you can shortcut “traceroute” to “trace” or even “tr”. Annoyingly, typing “tr”, “trace” or “traceroute” on a Microsoft machine will give you an error and likewise for typing “tracert” on a Cisco. How aggravating! On the Cisco, you could set up an alias for “tracert” like this:
Router(config)#alias exec tracert traceroute
Now you could use “tracert” on the Cisco, as well. Maybe a better plan is to keep using “tr” (or whatever) on the Cisco and instead create the alias on the Windows machine. For example, you could create a file containing the lines:
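A minimal version of that file might contain just these two lines (assuming you want every argument passed straight through to "tracert"):

```
@echo off
tracert %*
```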
Save the file with the name “trace.bat” to the C:\Windows\System32 directory (or save it to another directory, and add that directory to the system’s PATH statement). Now you can use “trace” on the Windows machine (you could also create batch files with the same contents named “tr.bat” or “traceroute.bat”, if you like).
Okay, that’s it for now. Next time, we’ll talk about directionality and load sharing when it comes to trace utilities. | <urn:uuid:c233d384-ad52-42f2-9011-d1a66feb1ba4> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2011/01/31/traceroute-part-4/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00530-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.865205 | 984 | 2.765625 | 3 |
There are two basic classifications of fiber optic cable: singlemode versus multimode, which is determined by the size of the fiber core, and simplex versus duplex, which is determined by the number of fibers in the cable.
Singlemode Fiber Optic Cable
Singlemode fiber has a small glass core (typically 9 microns) and just one path for light to travel down. Because only a single wavelength of light passes through the core, the light aligns at the center of the core rather than bouncing around its interior, which is how multimode cable carries multiple light signals. Singlemode fiber optic cable is typically used in long-haul network connections spread out over extended areas (longer than a few miles); telcos, for example, use it for connections between switching offices.
Multimode Fiber Optic Cable
Multimode fiber has a large-diameter core, much larger than the wavelength of light transmitted, so several wavelengths of light can travel through it along multiple pathways, allowing multiple signals and bandwidths to be transmitted simultaneously. Multimode works well for most general fiber applications: bringing fiber to the desktop, adding segments to an existing network, and smaller applications such as alarm systems, audio/video systems and production, and display systems. It comes in two core sizes: 50 micron and 62.5 micron.
Simplex Fiber Optic Cable
Simplex fiber optic cable consists of a single optical fiber and is used in applications that only require one-way data transfer. Analog-to-digital data readouts, interstate highway sensor relays, and automated speed and boundary sensors (for sports applications) are all good uses; for instance, an interstate trucking scale that sends the weight of the truck to a monitoring station, or an oil line monitor that sends data about oil flow to a central location. Because less material is involved, simplex cable can be cheaper than duplex. It is available in singlemode and multimode.
Duplex Fiber Optic Cable
Duplex fiber optic cable consists of two fibers, usually in a zipcord (side-by-side) style, meaning the cables run next to each other. Use duplex multimode or singlemode cable for applications that require simultaneous, bi-directional data transfer: larger workstations, optical switches, network servers, and major networking hardware. Duplex cable can be more expensive than simplex, and is compatible with any extender. It is available in singlemode and multimode.
Four years ago, a friend dropped a Sheeva Plug into the hands of Ronald Luijten, a system designer at IBM Research in Zurich. At the time, neither could have realized the development cycle this simple gift would spark.
If you’re not familiar, Sheeva Plugs are compact devices that look a lot like your laptop power adapter, except instead of an electrical output plug, there’s a handy gigabit Ethernet port. Luijten, whose primary interests lie in data movement and energy management, immediately saw the potential. He put his minimalist inclinations to work, and within a few months, had VNC, an OS and a web server running from a USB-attached hard drive. What struck him the most, however, was when he measured it from the mains and found the whole thing was running at a mere 4.3 watts. “I couldn’t believe this,” he said. “When I thought about it further, I saw it was the beginning of a revolution.”
This discovery coincided with a much larger project Luijten was involved with at IBM Research. In conjunction with ASTRON, a team tapped some of Big Blue’s best minds to help the Square Kilometer Array (SKA) team discover new solutions to the unprecedented power, compute and data movement challenges inherent to measuring the Big Bang. Over the next decade, SKA researchers will be able to look back 13.8 billion years (and over a billion dollars) with 2 million antennae that will pull together a signal at the end of each day based on 10-14 exabytes of data, culminating in a daily condensed dose of info in the petabyte range. Doing this will require well beyond what the exascale machines of the 2020 timeframe will offer, but there’s another problem: the signals are being collected at the most radio-wave-free locations on earth, which happen to be places where there’s no power grid or internet.
This was the perfect set of conditions for IBM and SKA/ASTRON researchers to think outside of the power-hungry boxes required to feed this kind of science, and the perfect opportunity for an ultra low-power approach that recognizes that the compute is easy; it’s the data movement that’s the real power drain. Since altering the speed of light is out of the question, the only answer seems to be integrating as much as possible into a neat whole. While some of that technology still needs to mature (particularly in areas like stacked memory), Luijten was able to demonstrate how big compute and little movement can be lashed together for maximum efficiency and multiple workloads.
But this isn’t all in the name of grand science. In addition to seeing a path to helping SKA with its noble mission, IBM too was able to see a path to meeting the “compute is free but data is not” paradigm. Luijten says their needs were specific; they wanted to see a microserver that could provide an ultra low-power “datacenter in a box” that could leverage commodity parts and condensed packaging. Further, it would have to be true 64-bit to be of commercial value (which meant no ARM since it wasn’t on the near horizon then), and would have to run a server-class operating system.
Building off the lesson learned during his Sheeva Plug jaunt, Luijten set to work with the one and only 64-bit chip on the market. In this case, it was the P5020 chip from Freescale—a product made specifically for the embedded market, thus without any of the software required for doing anything other than powering small devices operating on custom code. He says the Linux that came in the box was limited and he couldn’t even run the compiler. There was certainly no OS to meet IBM’s eventual needs, but with the help of a colleague and folks at Freescale, Luijten was able to get Fedora up and running on the 2.0 GHz Power-based architecture. And so the DOME Microserver was born.
Getting Fedora to sing on the DOME was only the first hurdle; the absence of a software ecosystem was an incredible challenge, requiring multiple iterations of different OS approaches that blended the server and embedded realms. He imagined that finally being able to implement a functional server-class OS would be half of the trouble, and that the real challenges were ahead in being able to build some application functionality around it.
However, to Luijten’s surprise, just two days after the Fedora success, they were able to get IBM’s DB2 up and running on the tiny motherboard. Without compiling. This is indeed the same DB2 that requires ultra-pricey System X datacenters at a much greater up-front and of course, operational/power cost.
Luijten relayed a quick story about how he had a chat with upper management on the development side at IBM about what they were able to do, and the manager flat-out denied it was possible. “He probably still doesn’t believe it to this day,” he laughed. But sure enough, he said, they had a program that ran for weeks on a single node atop DB2, with a PHP app in a web browser that could kick through a basket of workloads on the Freescale-hosted DB2 engine, all at around 55 watts.
The very small team (just Luijten, another comrade and a group of researchers at Freescale) grabbed the chance to take hold of the new incarnation of the chip, which moved them from dual-core to 12 cores, a major leap that didn’t require a recompile to run DB2 again. The newest part, the T4240, runs at 60 watts but comes with some major enhancements for his aims in terms of threading (this is “true threading,” he says, not hyperthreading), a bump to three memory channels, and a move down to 28 nm (versus 45 nm).
The datacenter-in-a-box approach, with 128 of these boards using the newest chip, yields 1,536 cores and 3,072 threads with 3 or 6 TB of DRAM; paired with a novel hot-water cooling installation (ala SuperMUC), it makes a rather compelling idea for cloud datacenters and, of course, for power-aware, poor folks who want their commercial or research applications to run in a lightweight, cheap way. As for HPC, it’s all about potential and possibilities at this point versus anything practical. Again, this is a proof-of-concept project. Benchmark results and scaling capabilities will be forthcoming, but for anyone who wants a firsthand lesson in some of the lessons of a non-existent software ecosystem, the ARM guys aren’t the only ones to look to for war stories.
Just as a side note, while sitting with Luijten at the IDC User Forum this week, we set the little server node motherboard next to my iPhone; it was just a tad longer. Do some mental comparisons for size scale, or take a look below at his part versus a BlueGene board. Setting it next to a Calxeda or Moonshot board offers about the same viewing experience.
Microservers should package the entire server node motherboard into a single microchip, leaving off some elements that wouldn’t make sense (including DRAM, power conversion logic and NOR Flash since they don’t fit), says Luijten. There are many motherboards that have graphics and such, but this is pared down.
And yes, this was from a conversation at an HPC-centric event, which might strike some of you as a bit strange. Luijten says that he definitely does not do HPC, but Earl Joseph believes strongly that the DOME microserver project is a perfect example of the type of technology that could be disruptive to the industry going forward. It’s power constrained, price-aware, and performance-oriented. While specs on the flops front are in short supply (you can do some quick math based on what Freescale has made available; not shabby for the size and power envelope), Joseph is spot-on. This was one of the more compelling presentations during the two days in Santa Fe and, based on sideline conversations, one of the most widely discussed.
It should be noted that these aren’t coming to a rack near you anytime soon. It’s still a research project, but it’s one that Freescale isn’t taking lightly, even if it’s not been as mainstream at IBM as Luijten might like to see one day. This would make a pretty compelling cloud server for Freescale and they’re working with him now to run some benchmarks to get a better baseline on the performance capabilities that will be shared in a press release eventually.
What IBM will do with the eventual success of, or interest in, the concept on the development side remains anyone’s best guess, especially as the first drums of the ARM invasion can be heard beating in the not-so-far distance. “IBM sold off its System x business because the moment a technology becomes commodity, they get out of the game,” Luijten reflected. They can’t sustain a business on driving a commodity market, hence they’re looking now to things like cognitive computing, among other efforts.
He says that while IBM is not incredibly interested in what he’s working on now, at least in any serious product-driven way, he’s found that with research like this, it helps to be more than just a good technical engineer. “Someone said I’m like an entrepreneur,” he laughed. “It’s not enough to develop this technology, it has to be marketed and you have to find interest however you can.”
We’ll close with the most recent development/progress via one of his slides. And of course, we’ll continue to watch this, even if it’s remote from the HPC we’re looking at now. | <urn:uuid:6b33431a-f060-47f3-9b21-7b01cb1be915> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/04/10/dome-ibm-research-microserver-freescale/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961703 | 2,117 | 2.984375 | 3 |
In 1981, hardware giant IBM unveiled the IBM Personal Computer. While the name (along with other terms like “microcomputer” and “home computer”) had already been in use, most machines had limited compatibility and therefore limited value in the work environment. Within a few years of the PC’s release, the popularity of the product caused other companies to release clones of the system, ensuring widespread compatibility and standardization. Within the next two decades, the PC became as ubiquitous as the television, the telephone and the automobile. It was seen as a life necessity, to the degree that experts soon began defining PC ownership as the divide between the “haves” and the “have nots.” Simply put, the PC is everywhere.
So why does Mark Dean, a CTO for IBM Middle East and Africa as well as one of the developers of the first IBM PC, say that we’re moving into a “post-PC era?”
“I […] have moved beyond the PC as well,” he says, explaining that he now uses a tablet device as his primary computer. “While PCs will continue to be much-used devices, they’re no longer on the leading edge of computing. They’re going the way of the vacuum tube, typewriter, vinyl records, CRT and incandescent bulbs.”
Many insiders have been wondering when the tipping point will occur, when sales of mobile devices will outpace those of personal computers. What many don’t realize is that it’s already happened. Smartphones and tablets combined for a whopping 487.7 million units shipped in 2011, compared to 414.6 million PCs. The PC era, it seems, is over.
Or is it? Before we forget, over 400 million PCs were shipped in 2011, a year that saw a weak overall economy and an Asian market that was affected by natural disaster. Microsoft posted revenues of $17.41 billion for Q1 2012. The 6 percent increase over last year’s numbers led Wall Street analyst Josh Olsen to quip, “Perhaps the demise of the PC is not as great as everyone is anticipating here.”
Are PCs still the cutting edge of the technology world? Probably not. At their core, they’re a 30+ year old piece of technology that has already innovated the business world as much as it ever will. But the same could be said of the automobile, which fundamentally hasn’t changed since the first Model T rolled off the assembly line.
Perhaps this is a case of apples and oranges. After all, people use mobile devices far differently than they use PCs. Tablets and smartphones are fundamentally media consumers; PCs are fundamentally media producers. This article, for example, was written on a PC. And as long as the PC continues to do what it does so well, it’s safe to say that it’s here to stay.
– Dan Lothringer
Dan is a contributing writer for VideoConferencingSpot.com | <urn:uuid:7476d3f5-5bbe-466f-b175-2c75e1e5c89a> | CC-MAIN-2017-04 | http://www.lifesize.com/video-conferencing-blog/reports-of-the-pcs-death-are-greatly-exaggerated/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00126-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966473 | 634 | 2.515625 | 3 |
A French translation of this page is also available thanks to Vicky Rotarova.
What’s in a name? Everything. When a good name gets manipulated for bad intentions, it costs money, trust, time and resources. UCAPI detects confusable and visual similarity in strings – tactics used by fraudsters to manipulate everything from Internationalized Domain Name (IDN) and page content to user-interfaces and dialog boxes - and helps protect your good name from bad people.
Contact us now for more information.
As Internet software and service offerings continue to globalize, a better understanding of the threats around visual spoofing is required to help product teams make safe and secure designs. As Web browsers, mobile devices, and applications evolve to support Unicode in all facets, Casaba saw the need to investigate problem areas of string confusability more closely. During our research, we registered several popular domain-name lookalikes with IDN, and reported many vulnerabilities to software vendors including Apple, Google, Microsoft, and Mozilla. Defenses not implemented in major applications (or at the registrar level) create openings for well-planned phishing attacks. In looking beyond IDNs, the attack surface for visual spoofing is even broader.
Clearly, the threat of a widespread visual spoofing attack is still all too real and accessible. To this end, Casaba developed UCAPI - a solution to analyze strings for confusable characters, and compare two strings for visual appearance. Many costly examples in which confusable and visual string similarity can be used include:
To a human reader, some of the following letters are indistinguishable from one another while others closely resemble one another:
To a computer system, however, each of these letters has a very different meaning: the underlying bits that represent each letter differ from one to the next. How then could a software vendor possibly implement a solution that guarantees the expected visual appearance?
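One common answer, and the approach outlined in Unicode TR39, is to fold lookalike characters down to a common "skeleton" before comparing strings. A minimal Python sketch of the idea (the tiny confusables table here is purely illustrative; the real UTS #39 data file maps thousands of characters, and the function names are my own):

```python
import unicodedata

# Illustrative subset of a confusables table; UTS #39 ships a far larger one.
CONFUSABLES = {
    "\u0430": "a",  # CYRILLIC SMALL LETTER A
    "\u0435": "e",  # CYRILLIC SMALL LETTER IE
    "\u043e": "o",  # CYRILLIC SMALL LETTER O
    "\u0440": "p",  # CYRILLIC SMALL LETTER ER
}

def skeleton(s: str) -> str:
    """Normalize, then fold known lookalikes to their Latin counterparts."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in unicodedata.normalize("NFD", s))

def confusable(a: str, b: str) -> bool:
    """Distinct strings whose skeletons collide are visually confusable."""
    return a != b and skeleton(a) == skeleton(b)

print(confusable("paypal", "p\u0430yp\u0430l"))  # Cyrillic 'а' twice
```

A registrar or browser could compute the skeleton once per string and compare (or index) skeletons instead of raw strings.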
Implemented as a cross-platform core library solution developed in C and C++ (with language-specific wrappers), UCAPI hinders attacks like these by recognizing visually confusable characters and similar strings from the wide variation of languages being employed. Partially based on the specification defined by Unicode TR39, UCAPI can provide software vendors with safety options not currently available in Win32 or .NET libraries. UCAPI is not Windows-specific however, as it supports BSD and Linux flavors as well.
Clear Vectors within Reach of Attack
As technology continues to become more and more a part of our consumer and business worlds, people are making important decisions based on what they visually see on their screens. Whether it's an email message on a PC, a Web browser on a Mac, or a social networking URL on a mobile device, we believe that people respond based on what they see. With most major computing platforms and mobile devices supporting Unicode attackers have capability to use lookalike characters to fool people in many different contexts.
The image here depicts just some of the scenarios where 'string confusability' can play an important role in the end-user decision-making process. A well-designed phishing, pharming or spam campaign can exploit the use of lookalike characters in all of these scenarios to fool end-users. To combat these fraudulent attacks on brands and names, Casaba has developed the UCAPI library which can help you minimize the threat of visual attacks in your Internet applications.
Web browsers and email clients are the portals to our ubiquitous information. It’s no wonder they’re the constant target of attack. Phishing attacks continue to evolve, with email spam campaigns that look more and more like they’re coming from authentic sources. As IDNs continue to become more mainstream and new gTLDs emerge, phishers will have renewed ammunition at their disposal to craft fraudulent messages and domain names that look visually identical to their legitimate counterparts. Web browsers, email clients, and anti-phishing platforms could implement confusability detection now to start collecting data on IDN abuse and gaining insight into these attack vectors - before a widespread attack occurs.
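Even short of full confusable detection, a browser or mail client can cheaply flag labels that mix writing systems, a common tell in IDN spoofing. A hedged Python illustration (it derives a letter's script from the first word of its Unicode character name, which is a rough approximation rather than a full UTS #39 mixed-script check; the function names are invented for this sketch):

```python
import unicodedata

def scripts_used(label: str) -> set:
    """Script of each letter, taken from its Unicode name (LATIN, CYRILLIC, ...)."""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

def looks_suspicious(label: str) -> bool:
    """Flag domain labels that mix more than one writing system."""
    return len(scripts_used(label)) > 1

print(looks_suspicious("example"))       # all Latin
print(looks_suspicious("ex\u0430mple"))  # Cyrillic 'а' mixed in
```

A mixed-script flag produces false positives for legitimately multilingual names, which is why production implementations layer it with whitelists and confusable-skeleton comparison.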
Email addresses have long been confined to ASCII, but there will most certainly be a time when they're opened up to UTF-8 and international characters. In preparing for that transition, email client designers need to anticipate and handle the case of visually identical email addresses. If not, end users could easily be fooled. Digital certificates provide good mechanisms for proving authenticity of a message; however, such certificates also support Unicode and are therefore vulnerable to the same attacks.
The Internet registries and registrars are in a unique position to handle the problem of visual spoofing attacks in IDN. We can think of a number of ways this technology could be applied to the problem of attackers maliciously registering lookalike domain names. A registry could work in partnership with registrars to detect potentially confusable domain names during registration. Perhaps more effectively, this partnership could be used to detect when a new domain registration is visually similar or identical to one that already exists. With millions of domains registered, this may not seem like the best use of resources. Instead, a visual spoofing protection service could be offered to domain owners who sign up for it, or as protection for the world's top 10,000 domains. As a registry or registrar, you could assess your capability in this threat area by asking a few questions.
In today’s Internet-connected world, domain names and URLs are real estate, and social networking sites like Facebook, Twitter, and LinkedIn offer some form of a vanity URL to consumers. A social networking service might want to allow the registration of vanity URIs using international characters, but can't risk the security threat posed by the endless ways in which they can be manipulated for visual fraud and confusion. Because Unicode characters are well supported in the path portion of a browser's URI display, a well-crafted vanity URI could easily fool victims and serve as the landing page for a phishing attack. Modern desktop and mobile browsers will display characters after the first '/' in their pure Unicode form, making for good usability but also increasing the opportunities for phishing and spoofing.
For example, a legitimate vanity URL can be spoofed using completely different Unicode characters (illustrated in the original article's figure).
Many online systems and video games featuring instant messaging and user forums employ filtering in order to prevent the use of violent and profane words. There are, however, many simple ways to bypass such filters, such as the use of spacing and punctuation between letters in a word (e.g. c_r_a_p), misspellings that give the same effect (e.g. crrap), and the use of confusable characters, which have no visual side effect (e.g. crap).
Many Internet applications delegate security decisions to their users in the form of dialog boxes. For example, when a user downloads a file through a Web browser, they're asked to confirm their decision. When they launch the file, they may also be presented with a dialog box asking for confirmation (if the file is an untrusted application). However, a clever attack may use special BIDI or other characters that reverse the direction of text to fool end users into executing a harmful file that looks innocent.
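One cheap mitigation, sketched below under the assumption that flagging suspicious names is sufficient, is to scan untrusted display strings (such as filenames shown in download dialogs) for Unicode bidirectional control characters before rendering them:

```python
# Sketch: flag strings containing BIDI control characters, which can
# visually reverse text in many UIs (e.g. "report" + RLO + "gpj.exe"
# can render so the name appears to end in ".jpg"). The character set
# below covers Unicode's explicit bidi controls; treat it as an
# illustrative assumption, not an exhaustive security policy.
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # LRE RLE PDF LRO RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI RLI FSI PDI
}

def has_bidi_controls(name: str) -> bool:
    """True when a display string contains direction-override controls."""
    return any(ch in BIDI_CONTROLS for ch in name)
```

A dialog could then refuse to display such a filename verbatim, or show escaped code points instead.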
Consider an advertising network that needs to mitigate malicious ads, or "malvertisements," by protecting brand-name trademarks from being registered by anyone other than their owner. An attacker could place an ad that bypasses trademark filters by using confusable characters, fooling users into visiting a phishing site: for example, "Download Microsoft Service Pack 1 for Windows 7 here," where the trademarked name 'Microsoft' was crafted using a non-English script.
UCAPI addresses all of these issues with its detection of confusable characters and visually similar strings. Built as a high-performance native library with sub-nanosecond lookups, its scalable, cross-platform design provides coverage for all of these scenarios, and for further attack vectors not mentioned here.
Humans use body characteristics to recognize each other. Some characteristics don’t change over time and some do. What characteristics do we use for identifying people? Are they accurate? Can we depend on them in our daily life?
A biometric system is a pattern recognition system; it operates by acquiring biometric data from a person, extracting a feature set from the acquired data and comparing this feature set against the templates in the database.
An Overview of the Latest Green Data Center Mistakes
Green data centers are all the rage now because of the benefits they bring, but, surprisingly, not everything "green" is good or even recommended, and many companies tend to go overboard with green initiatives. Here is an overview of the top green data center mistakes to avoid.
Building Too Soon
Data centers are in the midst of exponential growth, thanks to big data and the preference among more and more companies to put their data in the cloud. Demand has grown by about 14% a year, whereas supply has grown by only 6%, improving the business prospects of existing data centers. However, demand varies considerably among data centers, and data centers need to plan ahead yet build for the present. Building for future needs, even with fully optimized "green" space, is wasteful: technology changes by the day, and what counts as most efficient and practical today may be irrelevant and obsolete by the time that capacity is actually needed. Make provisions for expansion, but implement only when truly required. When it comes to green architecture and deployment, the best "green" initiative is not to build until absolutely necessary.
Miscalculation of Costs
Most green initiatives focus on improving Power Usage Effectiveness (PUE). This is understandable, as data centers are among the biggest consumers of energy, and energy accounts for up to 80% of all data center costs. However, many data centers miscalculate PUE. One big mistake is the failure to take total cost of ownership into account. Data centers may refrain from taking up an initiative because of its high upfront cost, or, conversely, they may make an investment in order to save in the future without considering its maintenance and operating costs. For instance, there is an ongoing debate in the industry regarding the efficiencies of water-cooled versus air-cooled chillers. The lower PUE resulting from a water-cooled solution may be offset by the cost of the make-up water and the water treatment maintenance it requires, and data centers may not be factoring in this cost.
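To make the total-cost-of-ownership point concrete, here is a toy calculation (every figure in it is an invented assumption, not industry data): the lower-PUE water-cooled option can still cost more per year once make-up water and treatment are counted.

```python
# Toy TCO comparison; every number below is a made-up assumption.
def annual_cost(it_load_kw, pue, price_per_kwh, extra_opex=0.0):
    """Energy cost/year = IT load * PUE * hours per year * price,
    plus non-energy operating costs (e.g. water treatment)."""
    hours_per_year = 8760
    return it_load_kw * pue * hours_per_year * price_per_kwh + extra_opex

# 1 MW of IT load at $0.10/kWh (assumed values)
air   = annual_cost(1000, pue=1.6, price_per_kwh=0.10)
water = annual_cost(1000, pue=1.5, price_per_kwh=0.10,
                    extra_opex=150_000)  # make-up water + treatment
print(air, water)  # the "greener" PUE is not automatically cheaper
```

With these assumed figures, the air-cooled plant costs about $1.40M per year and the water-cooled one about $1.46M, despite its better PUE.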
Ineffective LEED Implementation
LEED certification is the latest "fad" in green data centers. While the benefits of LEED-certified data centers are obvious, the fact remains that the process is often implemented ineffectively. Obtaining LEED certification ideally begins at the design concept and ends with a formal certification after project completion. Many data centers fail to develop a base understanding of the qualifying criteria and pursue LEED certification as an afterthought.
Location, Location, Location
Finally, there is one thing that matters the most for optimizing green initiatives: location, location, location. Not all green initiatives work well everywhere; the best implementations are the ones that work with the environment. A case in point is solar energy. While it makes sense to deploy solar panels in sun-baked regions such as Arizona or Dubai, solar panels may not be a good idea in colder regions that receive less sunlight. There, harnessing wind power may be a better option.
Going green is worthwhile, but only when done right. If implemented incorrectly, it may actually do more harm than good.
If you're looking for a data center that places an importance on compliance and efficient use of energy, schedule a tour of Lifeline Data Centers today. | <urn:uuid:39924b16-827d-418e-981a-c32e62aacf72> | CC-MAIN-2017-04 | http://www.lifelinedatacenters.com/data-center/overview-latest-green-data-center-mistakes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00062-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941162 | 693 | 2.515625 | 3 |
Rahman M.S. (Bangladesh Agricultural University); Her M., Kim J.-Y., Kang S.-I. (Plant and Fisheries Quarantine and Inspection Agency); and 4 more authors. African Journal of Microbiology Research, 2012.
Brucellosis causes great economic loss to the livestock industries through abortion, infertility, birth of weak and dead offspring, increased calving interval and reduced milk yield, and it is endemic in Bangladesh. The present study was performed to determine the seroprevalence of brucellosis in 1000 ruminants (135 buffaloes, 465 cattle, 230 goats and 170 sheep) in five districts of Bangladesh using four conventional serological tests: the Rose Bengal Plate Test (RBT), the tube agglutination test (TAT), the competitive enzyme-linked immunosorbent assay (C-ELISA), and the fluorescence polarization assay (FPA). Sheep had the highest prevalence of brucellosis (8.24%). In buffaloes, cattle, goats and sheep, seroprevalence was significantly higher in animals with a previous abortion record than in those without. C-ELISA may be the most suitable choice for extensive use across many kinds of livestock and for accurate estimation of Brucella antibodies in ruminants in Bangladesh. © 2012 Academic Journals.
Mechanized assistants are becoming more lifelike
Just one word: robots.
That's the next big boom being buzzed about by the world's leading technology visionaries.
"In the last millennium, we came to rely on machines. In the new millennium, we will become our machines," Rodney Brooks, director at the Artificial Intelligence Laboratory and Fujitsu professor of computer science at the Massachusetts Institute of Technology, said at the Association for Computing Machinery's Beyond Cyberspace conference in San Jose last month.
And that seemed to be the consensus of the whole group, which in its last gathering four years ago was all abuzz about the just-exploding Internet.
Indeed, it's no secret that robots are no longer just the stuff of science fiction. From robotic pets to assembly lines and hospitals, humanoid machines are gradually infiltrating everyday life.
Within a decade, robots that answer phones, open mail, deliver documents to different departments, make coffee, tidy up and run the vacuum could occupy every office, experts insist.
Scientists and engineers in laboratories across Europe, Japan and the U.S. are building so-called "robo sapiens" that can navigate the corridors of today's office buildings and perform the tasks of an office assistant.
Already, robots have taken over many tasks that humans once performed. Robots stroll the hallways of many hospitals in Japan and the U.S., carrying medications to nurses stations, and they dominate many manufacturing plant assembly lines. They also assist in some surgeries.
In agriculture, robots spray chemicals, milk cows, and assist in farming and forestry. Industrial service robots help with inspections, cleaning, security, fire fighting, bomb removal, search and rescue, and mining. And throughout manufacturing plants, robots help build parts and assemble everything from computers to automobiles.
In the next few years, they will begin appearing in more industries. A multifunction android capable of almost substituting for a general-purpose waiter is likely five to 10 years away, according to ActivMedia Research. And a food delivery robot in the predefined venue of a fast-food restaurant could be a reality very soon.
ActivMedia projects more than 3,500 percent growth in the number of robots produced and 2,500 percent growth in the dollars spent on robot development worldwide in the next five years. Mobile robot sales are expected to soar from $665 million in 2000 to more than $17 billion by 2005.
Technologies such as artificial intelligence, sensing, navigation, communications and response are beginning to help form practical mobile robots.
"Robots are becoming more human, and humans are becoming more robotic," says Bob Metcalfe, Ethernet inventor, founder of 3Com and vice president at International Data Group.
Helpers or Replacements?
But the idea of a fully automated and intelligent robot has many technology futurists worried about the dangers robots may pose to humans.
Like Frankenstein's monster, robotic creations might one day replicate themselves and contribute to humankind's demise, according to Bill Joy, chief scientist and a co-founder of Sun Microsystems.
Indeed, in the U.S., robots have traditionally been feared as control-freaks out to wreck lives. In movies, robots are often portrayed as having brutish strength, grim personalities and remorseless logic.
Yet the view of robots in Japan is much better. Japan is leading the way in the use of robots in commercial ventures, and many Japanese researchers credit their childhood love of fictional robots, especially Astro Boy, who served as a national poster boy inspiring the development of helper robots following World War II.
Throughout Japan, service robots are functioning as guards in warehouses, delivering trays of food in hospitals and carrying documents from one office to another. Honda Motor is investing heavily in practical humanoid robots that operate household switches, turn doorknobs and perform tasks at tables.
The Japan Robot Association estimates that by next year, some 11,000 service robots will be deployed, with 65 percent of them in hospitals and nursing homes. The association also projects that by 2005, health-care robots will be a $250 million market, with a possibility of growing to a $1 billion market by 2010.
Within 10 years, personal robots are expected to be as common in Japan as personal computers and cellular phones.
One of the first humanoids on the market will be Honda's Asimo, a child-sized android that can walk, climb stairs and negotiate corners. It can turn out the lights and do other small tasks. The humanoid robot is being outfitted with programs and artificial sensors that will make it autonomous.
This fall, Honda plans to start renting Asimo to companies and museums for use as a visitors guide for an undisclosed fee.
In the U.S., consumers have already begun to adopt robots such as Hasbro's My Real Baby, Manley Toy Quest's robotic dog Tekno and Sony's robotic dog Aibo as toys and pets.
Consumers also rely on robots to perform housekeeping duties. The commercial success of the robotic lawnmower and robotic vacuum cleaners suggests most people are very open to single-function robots to handle daily tasks. And this summer Steven Spielberg is expected to help ignite the consumer robotic craze with his movie A.I., which will feature supersensitive, superhuman robots.
At Sony, engineers are developing the next generation of humanoid robots. The company last fall demonstrated its prototype at Japan's Robodex, a new expo for personal robots. Sony's robots, dubbed SDRs for Sony Dream Robots, performed all kinds of acrobatics, jumped, danced and kicked balls. They are expected to hit the market within five years.
To help bring the dream to life, labs around the world are busily working on the robotic parts: feet and knees for walking, hands for grasping, and versions of eyes and ears that will someday be stitched together into a fully functional humanoid robot.
In addition to the jointed metal or plastic frames that serve as a skeleton, robots today also have sophisticated sensing machines, packed with cameras, microphones and even "haptic" sensors that mimic the sense of touch. Big engineering challenges still remain to make the robots human, including finding a practical way to power the energy-hungry machines.
Yet, most researchers believe the physical obstacles can be easily worked out in the near future.
At the MIT Media Lab, researcher Cynthia Breazeal has been working to create robots that are socially savvy. Her creation, dubbed Kismet, is learning to recognize human emotions, and has a primitive face that can express its own moods, from happy to sad to angry. The goal is not just for Kismet to learn to think, but also for it to understand that actions have consequences, just like a child learns how to behave through interaction with other children and adults.
Still, Kismet is far from relying on its own senses. The robot relies on a bank of 15 external computers to control its social abilities and facial expressions.
Robots that walk, talk and think like humans but have extensive memories, computational skills and physical strength can have a lot of applications in the business world, MIT's Brooks says. Heavy industry could use robots for labor in hazardous environments, and the military could use them on the battlefield.
In 1993, Brooks created Cog, a robot with a humanoid torso. Cog's eyes are cameras that track moving people, and the robot has been learning to interact with its surroundings and people. Cog is still in its infancy, but Brooks predicts that with the ever-increasing power of today's computer chips, smart robots are inevitable.
Ray Kurzweil, a pioneer in the field of artificial intelligence, sees a future in which humans and robots are so alike it's difficult to tell them apart. Within 20 years, he says, computers will not just be intelligent, they will be conscious, feeling beings deserving of the same rights, privileges and consideration people give each other. Kurzweil has created his own cyberspace alter ego, named Ramona.
Robotics applications will also start showing up in computers, according to Michael Dertouzos, professor and director at the MIT Laboratory for Computer Science. Dertouzos says that today's computers should act more like robots and adopt more human-like qualities, so people will be able to interact with them more easily. For example, advances in speech software will open up the Internet to an estimated 2 billion people worldwide who cannot read or write, helping to harness the power of the Internet to tap workers in foreign countries, he says. "We are not exploiting this technology revolution," Dertouzos says. "We're hardly scratching the surface."
Brooks says the seeds of a human-centered computing revolution have already been planted in the robotics field. The vaguely human look of the typical robot has encouraged an entire generation of robot engineers who intuitively design and program robots on a human scale.
Even simple consumer robots such as Tiger Electronics Furbys, sophisticated dolls that learn words that are taught to them via repetition, help educate people about the potential of robots.
"We are talking about the emotional coupling between the robot and the human," Brooks says. "It's inevitable."
Ben Wildavsky, senior scholar at the Kauffman Foundation, says technology has to be a big part of the solution to access and affordability, but the key is to do it in a smart way.
More than 160,000 students from more than 190 countries signed up for Udacity's first artificial intelligence course. Udacity was co-founded by a former Stanford professor, and offers high-quality, low-cost classes online.
Michael Staton is co-founder of Inigral, a private Facebook community for colleges and universities. Photo by Jessica Mulholland.
Coursera was also launched by Stanford professors, and offers massively open online courses at no charge.
In 2012, Harvard University and the Massachusetts Institute of Technology teamed up to start the not-for-profit edX.
Though the future of health care is cloudy given the many changes that will take place over the next several years, it's clear that technology will play a vital role in making the system more sustainable. | <urn:uuid:85a2a9f4-737c-4df5-bf58-b8a5e092c926> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Government-Technology-November-2012.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964415 | 213 | 2.65625 | 3 |
There are many ways to secure a voice network. In this article we will look at the first of them, one that should be configured in every serious network: implementing auxiliary VLANs, which makes VoIP networks more secure by carrying voice and data traffic on separate VLANs.
By default, voice and data traffic are transferred the same way, across the same cable and the same switch. That means calls and all other network traffic travel together, and any user on the network can capture that data with a network sniffing tool like Wireshark. This default setting can be abused to capture call packets crossing the network, and an attacker can then reproduce the call as an .mp3 or other sound file. We need to separate the voice network from the data network completely in order to make it impossible to sniff call packets from a user's computer.
You surely want to avoid VoIP calls being transferred mixed in with data traffic. This is a very simple configuration: you implement a separate VLAN and place all voice equipment into that VLAN.
This voice VLAN is normally called an auxiliary VLAN.
Many Cisco IP Phones have an extra Ethernet port so that a PC can be connected to them, as shown in the image at the top of the page. The attached PC sends data through the Cisco IP Phone to the Cisco switch at the access layer. This is where the issue described above becomes important: the PC and the phone transmit all their traffic over the same cable to the switch, but data traffic is kept in a separate VLAN from call traffic. The switch port and the phone's port are configured for trunking so that packets can be tagged for each VLAN. The two devices share a single switch port, yet the traffic is separated; the switch makes it seem as though each device is connected to its own port, even to its own switch. They are effectively on different networks while still connecting through a single Cisco Catalyst switch port.
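As a minimal sketch of this setup (the VLAN numbers and interface name are arbitrary assumptions), a Catalyst access port carrying both a phone and an attached PC can be configured with the voice VLAN feature, which tags phone traffic with 802.1Q while PC traffic stays untagged:

```
! Assumed VLAN IDs: 10 = data, 110 = voice (auxiliary VLAN)
vlan 10
 name DATA
vlan 110
 name VOICE
!
interface FastEthernet0/1
 description IP Phone with attached PC
 switchport mode access
 switchport access vlan 10    ! untagged PC traffic
 switchport voice vlan 110    ! 802.1Q-tagged phone traffic
 spanning-tree portfast
```

With this in place, a sniffer running on the PC sees only data-VLAN traffic; the call packets travel in VLAN 110.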
The Internet of Things (IoT) is arguably the ultimate expression of big data. Conservative estimates anticipate as many as 22 billion connected devices by the end of the decade – all constantly generating semi-structured or unstructured data in real time. Methods of processing, analyzing, and deriving action from that data will be markedly different than conventional methods currently deployed by most centralized data centers.
Traditionally, copious amounts of big data have been best managed in the cloud, as organizations take advantage of the cloud's scalability, abundant storage, flexible pricing, reduced physical infrastructure, and elastic computing capabilities. Nonetheless, the sheer amounts of disparate data produced by the IoT are projected to put immense strain on organizational bandwidth and network availability, which can cause failures and delays for time-sensitive data.
In the wake of such predictions, a new paradigm has emerged to account for the massive amounts of data that mobile computing, the IoT, and big data are producing. By utilizing a decentralized cloud model referred to as fog computing or edge computing, organizations can realize decreased time to action; reduced costs, infrastructure and bandwidth; as well as greater access to data.
The advantages of the decentralized method of fog computing and IoT analytics extend to both the enterprise and end users. Organizations and those operating in data centers benefit because the majority of computations are performed at the edge of the cloud, closer to the mobile device. Instead of the massive amounts of big data produced by an equipment asset in the industrial Internet being transmitted continuously to a data center, which would consume enormous amounts of bandwidth, fog computing enables only the results of data computations or analytics that pertain to asset management to be transmitted to the center.
Thus, 90 percent of the data that is transmitted and indicates that the asset is functioning properly is processed at the source and no longer requires any movement. The 10 percent that reveals that an asset is malfunctioning or in need of preventative maintenance is all that is transmitted. This greatly decreases network strain and time to action, and organizations don’t need to increase their physical infrastructure and network capacity and can maintain sufficient network availability even with analytics for the IoT.
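The 90/10 split described above can be sketched in a few lines. The thresholds, the sample readings, and the idea that "out of range" means "possible malfunction" are all illustrative assumptions:

```python
# Sketch of edge filtering: compute a health check locally at the fog
# node and forward only anomalous readings to the central data center.
# Thresholds and sample values below are invented for illustration.
def edge_filter(readings, low=10.0, high=90.0):
    """Return only the readings a fog node would transmit upstream
    (out-of-range values suggest a malfunctioning asset)."""
    return [r for r in readings if r < low or r > high]

sensor_batch = [42.0, 55.1, 7.2, 61.0, 95.5, 48.8]
to_transmit = edge_filter(sensor_batch)
print(to_transmit)  # [7.2, 95.5] -- the rest is handled at the edge
```

Only the two anomalous values leave the edge; the in-range readings never consume wide-area bandwidth.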
This approach provides a win both for those monitoring data transmitted by the IoT and mobile computing methods, and for those depending on that data. By facilitating computations near the edge of the cloud and closer to the source of the data, fog computing enables the devices and end users that need the results of those calculations to get them much more quickly than they otherwise could. There is no need to wait for untold amounts of data to travel across the country (or even across the world), to perform analytics at a centralized data center, or to hope the network’s availability remains consistently operable. Instead, only the results of analytics undergo that process. And in some instances, everything is processed to produce action at the edge of the cloud by the device itself. The decreased time to action and greater availability of this paradigm can reinforce the trends of mobile computing and the IoT, helping them to gain further traction while satisfying end users in a way that is much more expensive and tenuous to achieve with centralized cloud approaches.
Fog computing, however, is far from a panacea. One of the immediate costs associated with this method pertains to equipping end devices with the necessary hardware to perform calculations remotely and independent of centralized data centers. Some vendors, however, are in the process of perfecting technologies for that purpose. The tradeoff is that by investing in such solutions immediately, organizations will avoid frequently updating their infrastructure and networks to deal with ever increasing data amounts as the IoT expands.
Although cloud security has made considerable strides in recent years, with fog computing organizations and service providers will have to adjust those models to focus more on endpoint devices.
Additionally, there are certain data types that actually benefit from centralized models. Data that carries the utmost security concerns, for example, will require the secure advantages of a centralized approach or one that continues to rely solely on physical infrastructure.
“One of the benefits of centralization is that you can focus your efforts and understand where data is and who has access to it,” said Jack Norris, SVP, data and applications at MapR. “So in a way, it can simplify some of the aspects of protecting that information.” Data that requires a high degree of complexity for its queries also would benefit from the traditional centralized model. In general, data that merely requires network availability and celerity is best suited for the decentralized paradigm.
Resource Allocation for the IoT
The Internet of Things is already a reality. This application of big data is in the process of broadening and continually incorporating new devices to generate ever more amounts of data. Fog computing is a way of accounting for the future of the IoT and the cloud so that organizations take a more prudent approach to resource allocations. The centralized paradigm can still work, yet will require constant additional resources, upgrades, networking investments, etc. The decentralized method, however, is better aligned with the flexibility and agility that tends to characterize the more prevalent data management trends and applications today. Ultimately, the latter is much more sustainable than the former.
Jelani Harper has written extensively about numerous facets of data management for the past several years. His many areas of specialization include semantics, big data, and data governance.
Photo of the Week -- Ice Sculptures that Rival Skyscrapers Found Beneath Greenland Ice Sheet
June 17, 2014
The constant melting and re-freezing occurring in the Greenland Ice Sheet has had an unexpected effect -- blocks of ice as tall as city skyscrapers and as wide as the island of Manhattan have formed at the ice sheet's base (as shown below), a discovery researchers made using ice-penetrating radar, according to the Earth Institute at Columbia University.
Image courtesy of the Earth Institute at Columbia University/Mike Wolovick.
According to the institute, these skyscraper-sized blocks are formed as water beneath the ice refreezes and warps the surrounding ice upward. The researchers estimate that they cover about one-tenth of northern Greenland, and are becoming bigger and more common as the ice sheet narrows into ice streams, or glaciers, headed for the sea. | <urn:uuid:236a58b5-6a3e-4573-aae8-0cbc42f1b118> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Ice-Sculptures-that-Rival-Skyscrapers-Found-Beneath-Greenland-Ice-Sheet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00321-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957941 | 187 | 3.4375 | 3 |
Sardou M. (University Sidi Mohammed Ben Abdellah); Maouche S. (CRAAG); Missoum H. (University Sidi Mohammed Ben Abdellah). Arabian Journal of Geosciences, 2016.
Floods are among the significant natural hazards in Algeria. They cause severe casualties, damage to buildings, and destruction of roads, public works, and infrastructure. Northwestern Algeria has experienced devastating floods in the past that caused considerable damage, e.g., Mohammadia 1881, Mostaganem 1927, and El Asnam 1966. Analysis of historical floods is one of the major tools in flood hazard assessment that enables predicting future flood events. One of the objectives of this work is to address the gap in available information regarding the floods that happened in Algeria, particularly during historical times. The first historical data analysis that we performed on the basis of an intensive survey shows that the study area is subject to flooding. This inventory is the first step towards flood zonation and construction of an atlas of extreme floods in northwestern Algeria. The catalog of historical floods contains more than 127 documented events. As a result, this paper presents a method of compiling historical flooding data and the analysis of the obtained database, which represents a major contribution to flood risk assessment. © 2016, Saudi Society for Geosciences.
Maouche S.,CRAAG |
Maouche S.,University of Science and Technology Houari Boumediene |
Meghraoui M.,Institute Of Physique Du Globe |
Morhange C.,CEREGE |
And 3 more authors.
Tectonophysics | Year: 2011
Major uplifts of late Quaternary marine terraces are visible along the coastline of the Tell Atlas of Algeria located along the Africa-Eurasia convergent plate boundary. The active tectonics of this region is associated with large shallow earthquakes (M ≥ 6.5), numerous thrust mechanisms and surface fault-related folds. We conducted a detailed levelling survey of late Pleistocene and Holocene marine notches in the Algiers region that experienced 0.50 m coastal uplift during the 2003 Zemmouri earthquake (Mw 6.8). East of Algiers, Holocene marine indicators show three pre-2003 main notch levels formed in the last 21.9 ka. West of Algiers on the Sahel anticline, the levelling of uplifted marine terraces shows a distinct staircase morphology with successive notches that document the incremental folding uplift during the late Pleistocene and Holocene. The timing of successive uplifts related to past coseismic movements along this coastal region indicates episodic activity during the late Holocene. Modelling of surface deformation in the Zemmouri earthquake area implies a 50-km-long, 20-km-wide, NE-SW trending, SE dipping fault rupture and an average 1.3 m coseismic slip at depth. Further west, the 70-km-long Sahel fold is subdivided in 3 sub-segments and shows ~0.84-1.2 mm/yr uplift rate in the last 120-140 ka. The homogeneous Holocene uplift of marine terraces and the anticline dimensions imply the possible occurrence of large earthquakes with Mw ≥ 7 in the past. The surface deformation and related successive uplifts are modelled to infer the size and characteristics of probable future earthquakes and their seismic hazard implications for the Algiers region. © 2011 Elsevier B.V.
Giresse P.,CNRS Training and Research Center on Mediterranean Environments |
Bassetti M.-A.,CNRS Training and Research Center on Mediterranean Environments |
Pauc H.,CNRS Training and Research Center on Mediterranean Environments |
Gaullier V.,CNRS Training and Research Center on Mediterranean Environments |
And 3 more authors.
Sedimentary Geology | Year: 2013
From the analysis of seven new sediment piston-cores sampled in 2005 (MARADJA-2 French-Algerian cruise), this study aims to identify for the first time possible late Pleistocene to recent sedimentary instabilities controlled by seismicity off or close to the eastern coast of Algeria. The detailed lithologic study allows us to identify the frequency of the gravity events (turbidites, debrites) and to discuss their geographical sources and triggering mechanisms. Based on a chronostratigraphy of 24 14C AMS dates, we discuss sediment accumulation rates in zones extending off Bejaia and Annaba and, in particular, a semi-quantitative analysis of the microfossils and lithogenic tracers of the origin of the gravity flows. Two sediment cores, here considered as reference cores, enabled the estimation of palaeoenvironmental parameters that controlled sedimentation: (1) in the prodelta of Soummam Oued, after 2215 cal yr BP, floods were less frequent and sediment accumulation rates decreased because of a drier climate; (2) in the middle slope to the NE of Annaba, a location shielded from gravity flows, an increased sedimentation rate coincided with the passage of warmer waters leading to maxima of carbonate biogenic fluxes (particularly pteropods). Off Bejaia, two deep sediment cores show a spectacular increase in sediment accumulation rate between 2200 and 1000 cal yr BP while turbidites become more frequent. Given the eustatic and climatic stability of this interval, an episode of strong instability of the slope is suggested. Both sediment cores on the slope of Annaba indicate an increase in gravity flows during the same last thousand years, which is tentatively related to a regional increase of seismicity during this interval. This spatial distribution of gravity events is clearly different to that of the western margin where the sedimentation is less perturbed. © 2013 Elsevier B.V.
Naitamor S.,CRAAG |
Cohen M.B.,Stanford University |
Cotts B.R.T.,Exponent, Inc. |
Ghalila H.,University of Tunis |
And 2 more authors.
Journal of Geophysical Research: Space Physics | Year: 2013
Lightning strokes are capable of initiating disturbances in the lower ionosphere, whose recoveries persist for many minutes. These events are remotely sensed via monitoring subionospherically propagating very low frequency (VLF) transmitter signals, which are perturbed as they pass through the region above the lightning stroke. In this paper we describe the properties and characteristics of the early VLF signal perturbations, which exhibit long recovery times, using subionospheric VLF transmitter data from three identical receivers located at Algiers (Algeria), Tunis (Tunisia), and Sebha (Libya). The results indicate that the observation of long recovery events depends strongly on the modal structure of the signal electromagnetic field and the distance from the disturbed region and the receiver or transmitter locations. Comparison of simultaneously collected data at the three sites indicates that the role of the causative lightning stroke properties (e.g., peak current and polarity), or that of transient luminous events, may be much less important. The dominant parameter which determines the duration of the recovery time and amplitude appears to be the modal structure of the subionospheric VLF probe signal at the ionospheric disturbance, where scattering occurs, and the subsequent modal structure that propagates to the receiver location. Key points: signal mode composition; long recovery early events; role of lightning peak current. ©2013. American Geophysical Union. All Rights Reserved.
Sawires R.,Assiut University |
Sawires R.,University of Jaen |
Pelaez J.A.,University of Jaen |
Ibrahim H.A.,Assiut University |
And 3 more authors.
Natural Hazards | Year: 2016
In the present study, a new seismic source model for the Egyptian territory and its surroundings is proposed. This model can be readily used for seismic hazard assessment and seismic forecasting studies. Seismicity data, focal mechanism solutions, as well as all available geological and tectonic information (e.g. active faults) were taken into account during the definition of this model, in an attempt to define zones which do not show only a rather homogeneous seismicity release, but also exhibit similar seismotectonic characteristics. This work presents a comprehensive description of the different tectonic features and their associated seismicity to define the possible seismic sources in and around Egypt. The proposed seismic source model comprises 28 seismic sources covering the shallow seismicity (h ≤ 35 km) for the Egyptian territory and its surroundings. In addition, for the Eastern Mediterranean region, we considered the shallow seismic source zones (h ≤ 20 km), used in the SHARE project for estimating the seismic hazard for Europe. Furthermore, to cover the intermediate-depth seismicity (20 ≤ h ≤ 100 km), seven intermediate seismic source zones were delineated in the Eastern Mediterranean region. Following the determination of zone boundaries, a separate earthquake and focal mechanism sub-catalogue for each seismic zone was created. Seismicity parameters (b-value, activity “a-value” and maximum expected magnitude) have been computed for each source. In addition, the predominant focal mechanism solution was assigned for each source zone using the stress field inversion approach. The proposed seismic source model and its related seismicity parameters can be employed directly in seismic hazard assessment studies for Egypt. © 2015, Springer Science+Business Media Dordrecht. 
Variable-length deduplication is an advanced method for breaking up a data stream via context-aware anchor points. This subfile intelligent segmentation method provides greater storage efficiency for redundant data regardless of where new data has been inserted. As the name suggests, the length of segments vary, thus achieving higher deduplication ratios.
Who chooses variable-length deduplication, and why
Organizations with fast data growth, highly virtualized environments, and remote offices greatly benefit from variable-length deduplication over a fixed-block approach. Variable-length deduplication reduces backup storage and, when performed at the client, also reduces network traffic, making it ideal for remote backup.
How variable-length deduplication works
Client software examines the file system and applies the SHA-1 secure hash algorithm to variable-length data segments. Each data segment is assigned a unique identifier (ID). The client software then determines whether this unique ID has already been stored. If the object already exists, the backup simply references the stored object. In this way, the same segment is never backed up twice.
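The segmentation-and-lookup flow described above can be sketched in a few lines. The anchor rule below (a simple byte-pattern match) is a stand-in for the rolling-hash anchoring real products use, and the in-memory dictionary stands in for the backup store; both are illustrative assumptions, not EMC's implementation:

```python
import hashlib

def chunk_variable(data, anchor_mask=0x0F, min_size=4, max_size=64):
    """Split data into variable-length segments at content-defined anchors.

    A byte whose low bits all match anchor_mask ends a segment; this is a
    toy stand-in for a real rolling-hash anchor (e.g. Rabin fingerprints).
    """
    chunks, start = [], 0
    for i, byte in enumerate(data):
        length = i - start + 1
        at_anchor = (byte & anchor_mask) == anchor_mask
        if (length >= min_size and at_anchor) or length >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def deduplicate(data, store):
    """Back up data into store; each unique segment is stored exactly once."""
    ids = []
    for chunk in chunk_variable(data):
        chunk_id = hashlib.sha1(chunk).hexdigest()  # the segment's unique ID
        if chunk_id not in store:                   # never back up a segment twice
            store[chunk_id] = chunk
        ids.append(chunk_id)
    return ids

store = {}
original = b"hello world, hello world, hello world!" * 4
ids = deduplicate(original, store)
# Prepending bytes shifts every offset, yet the anchors realign quickly,
# so most segments after the first are already present in the store.
ids2 = deduplicate(b"XX" + original, store)
```

Because the anchors depend on content rather than fixed offsets, the inserted bytes only disturb the first segment, which is exactly why variable-length approaches achieve higher deduplication ratios than fixed-block ones.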
Benefits of variable-length deduplication
By providing the highest possible level of deduplication, variable-length deduplication reduces backup storage, improves backup times, and lowers costs. When deployed at the client, it enables organizations to leverage existing bandwidth while reducing resource contention in highly virtualized environments. | <urn:uuid:84e5d4b2-9946-43bd-8633-a3f221f6cdc9> | CC-MAIN-2017-04 | https://www.emc.com/corporate/glossary/variable-length-deduplication.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00531-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.865065 | 298 | 2.5625 | 3 |
In most industries today, (whether it is financial services, manufacturing, academic research, healthcare and life sciences, or energy exploration) data analysis, modeling, and visualization efforts are critical to success.
To gain a competitive edge, most organizations are incorporating ever-larger data sets and more variable data formats into these computational workflows to help derive better information upon which to make smarter decisions.
These big data applications are placing new attention on the high performance computing (HPC) solutions used to run the algorithms and process the raw data. Because of the larger volumes and greater variety of data types, as well as the desire to use more robust analysis, modeling, and visualization routines, HPC solutions must provide high sustained I/O and throughput while being optimized to cost-effectively handle highly variable workflows.
The essential element in all of this work is a need for speed. Organizations need fast time-to-results so that they can make the right decisions (which well to drill, which new drug candidate to develop, which product design to produce, which customer to award a lower rate loan to) before their competitors.
Complications and challenges that can impede HPC workflows
When looking to accelerate HPC workloads, there are several factors that can play a major role in overall performance.
To start, today's analysis, modeling, and visualization efforts are carried out using much more sophisticated algorithms in order to derive more detailed and realistic results. The output from these routines offers finer spatial or temporal resolution and consequently results in much larger output data sets. In a typical workflow, those output files might be used as input to another analysis, modeling, or visualization application.
These operations can impact HPC workflows since the great volumes of data produced by the initial run must be written to disk and saved and then the data must be ingested by yet another routine. Both operations can generate high I/O and throughput demands on an infrastructure. And if the infrastructure is not capable of sustaining these data transfers, the computational workflows can slow significantly.
Another factor has to do with the data being used in today's analysis, modeling, and visualization efforts. Nearly every industry is now making use of much larger, richer data sets (such as those produced by newer seismic imaging tools or next-generation sequencers) and many more types of data. However, most users, even those who primarily have large data sets, also have large numbers of small files, even if those files consume a relatively small percentage of the total capacity.
Big data and HPC solutions must therefore not only be capable of quickly accessing the large volumes of data required for the computations, they also must intelligently stage the different types of data, which comes in varying file formats and sizes, on suitably high performance storage.
Required storage solution characteristics
Organizations continually deploy new servers with more powerful CPUs to improve and speed up their analysis, modeling, and visualization efforts. To make the best use of such computing resources, an HPC solution must have a suitable storage solution to sustain HPC workflows.
A storage solution for today’s big data and HPC environments must be able to easily scale. Some solutions offer help meeting the growing data volume demands, but fall short when trying to keep CPUs satiated. To help accelerate HPC workflows, a storage solution must also scale in performance so that as the data volumes grow, the system supports the higher I/O and throughput required to get faster results.
Finally, a storage solution must be optimized to handle today’s HPC big data workflows consisting of data sets of files of all sizes. If all data used were in the same format – a structured database, for example – or of the same relative file size, a solution could be highly optimized to handle the specific data. Working with the mixed data sets used today requires a storage solution that optimizes workflow performance for each data type.
Panasas introduces an integrated SSD/SATA approach
Panasas ActiveStor storage systems have a modular blade architecture integrated with its PanFS parallel file system. The design eliminates the bottleneck of a single RAID controller to deliver high-performance, scalable storage. Prior generations of ActiveStor have been based solely on SATA drives and were well-tuned for high throughput.
With the fifth-generation ActiveStor 14, Panasas has taken a unique approach, leveraging lightning fast SSDs integrated with high capacity SATA disk to improve storage performance while keeping costs down. Rather than use SSD for caching or for “most recent” file access as many other vendors have done, ActiveStor 14 stores all metadata and small files (less than 60KB) on the SSDs and larger files on SATA drives.
Metadata is accessed frequently so fast metadata access benefits all types of workloads. All file operations, including reads and writes, require access to metadata. In many cases, such as directory listings, access to the metadata is all that is required to satisfy an I/O request. Storing metadata on SSD boosts performance for all storage operations, especially for directory functions (listing, searches, etc.) and RAID rebuilds in the event of a drive error. Rebuild performance has been improved so that the new 4TB drives can be rebuilt in the same amount of time as the 3TB drives in the prior generation ActiveStor 12, maintaining a high level of data integrity and system reliability.
Small file access can be disproportionately slow when reading from, or writing to, standard hard disk drives. Accesses of less than a full sector are inefficient, particularly for random I/O. Furthermore, reads and writes of small files can conflict with streaming reads or writes of large files on the same disk. By maintaining small files on SSD, such conflicts are eliminated. In addition, ActiveStor 14 stores the first 12KB of all files inside the file system metadata, improving SSD efficiency while increasing small file performance. This efficient storage of small files on SSD dramatically improves response time and IOPS, as evidenced by the very impressive SPEC sfs2008 NFS IOPS results that Panasas has published.
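The placement policy described above can be condensed into a toy routing rule. The 60KB and 12KB figures come from the text; the function itself is a simplification for illustration, not Panasas code:

```python
SMALL_FILE_LIMIT = 60 * 1024   # files under ~60KB live entirely on SSD
INLINE_LIMIT = 12 * 1024       # first 12KB of any file rides inside the metadata

def place(file_size):
    """Return (tier, inline_bytes) for a file under the policy described above."""
    inline = min(file_size, INLINE_LIMIT)          # stored with metadata on SSD
    tier = "ssd" if file_size < SMALL_FILE_LIMIT else "sata"
    return tier, inline

print(place(4 * 1024))    # ('ssd', 4096): a small file, wholly on flash
print(place(1 << 20))     # ('sata', 12288): bulk on SATA, first 12KB inline
```

The point of the split is that every file, large or small, gets its hottest bytes (metadata plus the first 12KB) on flash, while capacity-dominated bytes stay on cheap SATA.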
ActiveStor 14 is available in three configurations with varying sizes of SSD, SATA and cache. The amount of SSD for acceleration ranges from 1.5 percent up to 10.7 percent of total storage capacity. The bulk of the storage capacity, however, is on cost-effective SATA drives, keeping the overall cost per terabyte lower than the prior generation, and very competitive in the market today.
The Importance of Ease of Use and Management
Equally important to the performance and reliability of any storage system is the ease of use and management of the product. With ActiveStor, organizations can simply add blade enclosures to non-disruptively increase capacity and performance of the global file system as storage requirements grow. Parallel access to data and automated load balancing ensure that performance is optimized. This makes it easy to linearly scale capacity to over eight petabytes and performance to 150GB/s or 1.4M IOPS.
The end result is a high-performance storage system that delivers high throughput and IOPS, ideal for the most demanding HPC and big data workloads and accelerates time-to-results. ActiveStor delivers unmatched scale-out NAS performance in addition to the manageability, reliability, and value required by demanding computing organizations in the biosciences, energy, finance, government, manufacturing, media, and other research sectors.
To learn more about how the Panasas ActiveStor 14 can help your organization, register for the live webinar: http://www.panasas.com/news/webinars | <urn:uuid:332fbd11-ff07-41d2-b784-16f78a46d8da> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/10/01/intelligent_application_of_ssds_to_accelerate_hpc_workloads/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927626 | 1,573 | 2.75 | 3 |
A network interface card (NIC), also called a network card or network adapter, is a basic component of LAN networking that connects a computer to the network hardware. Whether the link uses twisted pair, coaxial cable, or fiber, data communication depends on the network card. The connection type of the network interface card can be either optical or electrical. Optical interfaces generally transmit data over fiber optic cable; the transceiver module is usually a GBIC or SFP module with an LC, MTRJ, or SC connector.
Fiber Ethernet cards are mainly used in fiber optic Ethernet communications. A fiber optic Ethernet network card provides a fast and reliable Ethernet connection for the user's computer, and is especially suitable when the transmission distance exceeds the Cat5 cable reach (100 m). It can completely replace the common arrangement of an RJ45 Ethernet card connected to an external media converter. The network interface card provides a reliable fiber-to-the-home and fiber-to-the-desktop solution. Users can choose its parameters according to the application, including connector type, single-mode or multimode fiber, working distance, etc.
Correctly choosing, connecting, and setting up the network interface card is essential for a good network connection. So let's discuss what should be taken into consideration when choosing the right fiber optic network card.
First, you should know what type of network you are using. Common types include Ethernet, Token Ring, FDDI, and more. Select the card that corresponds to your network type.
Second, take the transmission rate into consideration. Select the card's transmission rate based on the bandwidth requirements of the server or workstation, combined with the maximum transfer rate the physical transmission medium provides. Take Ethernet for example: the speed options are varied, including 10Mbps, 10/100Mbps, 1000Mbps, and even 10Gbps. Higher is not always more appropriate. For example, it is a waste to configure a computer linked to a 100M twisted-pair network with a 1000M card, which can achieve a transmission rate of at most 100M.
Third, pay attention to the bus type. Servers and workstations typically use PCI, PCI-X, or PCI-E bus cards; PCs basically no longer support the ISA connector. So when you purchase a network card for your PC, do not buy an outdated ISA network card; choose a PCI, PCI-X, or PCI-E card instead.
Fourth, consider the connector type the NIC supports. The network card ultimately needs to be connected to the network, so it must have a suitable connector to link with other network equipment. Different interfaces suit different network types. Commonly used connector types are the Ethernet RJ45 connector and the LC, FC, and SC fiber connectors.
Finally, take cost and brand into consideration, because prices differ greatly across rates and brands of NIC cards.
Choosing the right network interface card is important for a good network connection. With the above steps, you can easily find your desired network card. Fiberstore also provides different types of fiber optic network interface cards. If you want detailed information on PCI cards, you can visit the Fiberstore official website.
As you’ve probably noticed in several past posts, we love to display the ruggedness of mobile devices that can withstand use in extreme environments and put up with daily use and accidental abuse (or abuse caused just for a video). Rugged devices are designed for field workers and provide a longer life span than commercial grade devices, which is money well spent in the long run. Today I’m going to provide an overview of what a mobile device must withstand to be labeled rugged. A variety of standardized tests have been developed to test the ruggedness of devices, including:
Drop Rating-is a measure of a device's ability to withstand repeated falls from a specified height to concrete. Standard testing states that a drop height of 4-5 ft. to concrete is considered rugged, but as you've seen in past videos, numerous Motorola devices exceed this height.
IP Rating-the Ingress Protection Rating (IP Rating) is the standard used to identify how well the device can withstand dust, dirt, liquids, and other foreign particles. IP ratings are an important consideration for devices that will be used outdoors. Learn more about 2-digit IP Ratings and view a comparison chart in our Feb. 22nd post.
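A two-digit IP code is mechanical to decode: the first digit describes protection against solids, the second against liquids. The sketch below covers only a handful of common levels, with wording paraphrased rather than quoted from the official tables, so treat it as illustrative:

```python
# Abbreviated ingress-protection tables: only levels commonly quoted for
# rugged handhelds are listed, and the wording is paraphrased, not official.
SOLIDS = {
    "5": "dust protected (limited ingress, no harmful deposit)",
    "6": "dust tight",
}
LIQUIDS = {
    "4": "splashing water from any direction",
    "5": "low-pressure water jets",
    "7": "temporary immersion up to 1 m",
}

def decode_ip(code):
    """Split an 'IPxy' rating into its solid- and liquid-ingress meanings."""
    code = code.upper()
    if not (code.startswith("IP") and len(code) == 4):
        raise ValueError("expected a rating like 'IP65'")
    return SOLIDS.get(code[2], "unknown"), LIQUIDS.get(code[3], "unknown")

print(decode_ip("IP65"))  # ('dust tight', 'low-pressure water jets')
```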
Operating Temperature-rugged devices are designed to withstand temperatures that aren’t considered “normal” and can work in a variety of temperature ranges that even include temperatures below freezing. This too is an important consideration for devices that will be used in the field.
Download the Motorola white paper, “What Does It Mean To Be Rugged?” for additional information about the stringent testing involved. Be sure to visit us again next week when we compare rugged and consumer devices. | <urn:uuid:c42734d7-73e8-48ad-8049-93ce01938187> | CC-MAIN-2017-04 | http://blog.decisionpt.com/ruggedness-test-overview | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928161 | 345 | 2.625 | 3 |
Port Numbers – How the Transport Layer Identifies Conversations
Computers today are equipped with a whole range of applications. Almost all of these applications can communicate across the network in some way, using the Internet to send and receive information, fetch updates, or verify a user's purchase. Consider that all these applications may, in some cases, be simultaneously receiving and sending e-mail, instant messages, web pages, and VoIP phone calls. In this situation the computer is using one network connection to carry all of this communication. But how is it possible that this computer is never confused about choosing the right application to receive a particular packet? We are talking about a computer that processes two or more communications at the same time for two or more running applications.
Services based on the TCP and UDP transport layer protocols can keep track of the applications that are communicating in real time. To differentiate the segments and datagrams of each separate application using the connection at the same time, TCP and UDP have header fields that identify these applications. These unique identifiers are the port numbers.
The header of each segment or datagram contains several fields; two of these are the source and destination ports. The source port number identifies one particular communication and is associated with the originating application on the local computer. The destination port number identifies the same communication but is associated with the destination application on the remote host that receives it. It can be, for example, port 80 on a web server, opened by a daemon process that waits for GET requests for HTML web pages.
Port numbers are assigned in several ways, depending on whether the message is a request from the local host or a response from the remote host. While server processes have static port numbers assigned to them, clients dynamically choose a port number for each conversation, making sure it is not a port that is already in use.
When a client application sends a request to a server, the destination port in the header of the request is the port number assigned to the service daemon running on the remote host. The client application must be configured to know what port number is associated with the server process on the remote host. This destination port number is usually configured by default but can also be changed manually. Let's take an example in which a user wishes to open a web page. The web browser makes a request to a web server using the TCP protocol and port number 80, because port 80 is the well-known port number used by the Hypertext Transfer Protocol (HTTP). Because TCP port 80 is the default port assigned to web-serving applications, the server receiving the request from the web browser knows it is a web page request. Many common applications have default port assignments.
The second port number in the client's request segment or datagram header is the source port. This is a randomly generated port number greater than 1023. As long as it does not conflict with other ports in use on the system at that moment, the client can choose any port number from the range of dynamic port numbers allowed by the operating system. This port number acts like a return address for the requesting application. The Transport layer keeps track of this port and the application that initiated the request so that when a response is returned, it can be forwarded to the correct application. The requesting application's port number is used as the destination port number in the response coming back from the server.
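Dynamic source-port selection is easy to observe first-hand: binding a socket to port 0 asks the operating system to pick an unused ephemeral port, just as a client application's network stack does (loopback is used here so the sketch runs without a network):

```python
import socket

# Binding to port 0 asks the operating system to choose any free
# dynamic port -- the same mechanism a client uses for its source port.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
s2.bind(("127.0.0.1", 0))

port1 = s1.getsockname()[1]   # e.g. 49732 -- varies per run
port2 = s2.getsockname()[1]
# Two conversations always get distinct ports, so replies cannot be confused.
s1.close()
s2.close()
```

On typical systems these ports land in the dynamic range above 1023, matching the behaviour described above.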
The combination of the Transport layer port number and the Network layer IP address assigned to the host uniquely identifies a particular process running on a specific host device and is called a socket. The term socket refers to this unique combination of IP address and port number. A socket pair, consisting of the source and destination IP addresses and port numbers, is also unique and identifies the conversation between the two hosts.
For example, if we want to open a web page from the server at address 10.0.0.5, an HTTP request is sent to that web server with destination port 80. The request for the web page is thus destined for socket 10.0.0.5:80. Let's say that our computer has a Layer 3 IPv4 address of 192.168.1.100. At the moment the web browser requests the web page, the computer also generates a dynamic port number, say 49152, assigned to that web browser instance (there can be one for every open tab). The dynamically generated port will be used by the server to uniquely identify the web browser instance, so that it can respond with the web page content to socket 192.168.1.100:49152.
So when our computer receives the page, the server has addressed it to our host computer's socket 192.168.1.100:49152.
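The whole exchange can be reproduced on one machine, with a loopback listener standing in for the web server. The addresses differ from the example above (127.0.0.1 replaces the routable addresses, and the OS picks the listening port instead of 80), but the socket pair behaves identically:

```python
import socket
import threading

# A stand-in "server": binding to port 0 lets the OS pick a free
# listening port for this demo (a real web server would bind port 80).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]

seen = {}

def accept_one():
    conn, peer = server.accept()
    seen["peer"] = peer              # the client's (IP, ephemeral port)
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
src = client.getsockname()           # our host's side of the socket pair
dst = client.getpeername()           # the server's side of the socket pair
t.join()
client.close()
server.close()

# The socket pair (src, dst) uniquely identifies this conversation, and
# the server recorded the very same ephemeral source port in seen["peer"].
```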
This is how port numbers are used.
Boil it all down and last week's Black Hat conference in Las Vegas discussed just two things - identity and privacy in cyberspace. Both are at risk as the internet enters a period of massive expansion.
IT managers need to deal with these issues in the light of the increasing volume and subtlety of attacks by ill-intentioned people.
Identity and privacy are two sides of the same coin. For the internet to work, everything connected to it requires a unique identifier, known as an internet address or uniform resource locator (URL). This allows network routers, which act as postmasters, to direct messages to the right address.
The internet was designed to be flexible. This makes it possible for people to pretend to own someone else's address and thus to divert traffic elsewhere, or even to take over the address.
In addition, many people want to hide their identities and activities on the internet for both legitimate and illegitimate reasons.
Bob Lentz, the US Department of Defense's chief security officer, says the internet is now a "global commons". This means everyone has a right to access and share in the benefits it can bring.
But it is a fragile ecosystem, he says. Far too many are abusing the right. This abuse includes hacking, criminal acts and borderline legal acts, such as spamming.
Lentz says it is impossible for the Department of Defense, which paid for the original development of the internet, to take back control, clean it up and lock it down to make it safe.
Instead, he proposes a six step plan to increase the resilience of the network. This, he says, will allow people to use the internet safely despite the hazards.
|The US Department of Defense's chief security officer has set out a six-step plan to improve the resilience of the internet and make it safe to use despite the hazards posed by spammers, web site hackers, identity thieves, spies and other criminals.|
|1. To strengthen the network's physical and logical underpinnings so that commercial off the shelf applications could run safely|
|2. To ensure that software and systems were written and ran securely|
|3. To reduce the "attack surface", meaning leaving fewer opportunities to compromise network elements and applications|
|4. To reduce anonymity (but not necessarily the privacy) of network elements, including users, so that bad behaviour could be isolated and removed|
|5. To build security into the network from scratch|
|6. To build ad hoc IT architectures that could serve their purpose and disappear as soon as their mission was over.|
IT managers' role is essentially to practice safe computing. Use firewalls, anti-virus and intrusion detection systems. Use the latest software patches. Make sure that networks are properly configured. Delete or change default passwords. Identify properly everyone, and increasingly everything, to the network. Define their resulting privileges. Monitor them for transgressions. Revoke their privileges instantly when they are no longer needed.
This will become even more critical as the internet migrates from the IPv4 addressing scheme to the IPv6 scheme. IPv6 will create a possible 2 to the power 128 IP addresses.
Many of the new addresses will identify machines such as CCTV cameras, mobile phones, package labels, even GPS-tagged cows and killer whales, as machine-to-machine communication moves from closed proprietary networks onto the internet.
Many things will need only temporary addresses. This will create a headache for the people who have to ensure that they are taken out of circulation at the right time and that, despite the huge number of available URLs, that they can be reused and still keep their uniqueness.
The organisation that has to do this and so protect the owners' right to their unique addresses is the Internet Corporation for Assigned Names and Numbers (Icann).
Icann works through licensed national domain name registrars. They are responsible for keeping the register of who owns which domain names, now 200 million, and their associated URLs. The national registrars also resolve disputes and run the domain name servers, the internet's post offices. Nominet is the UK's domain name registrar.
This set-up has worked for 40 years. There are proposals to change it. They would see control of the domain registries and possibly its technical development centralised.
Rod Beckstrom, the new head of Icann, argues that the internet is already too big for centralised control. "If you chop off a spider's leg, the spider loses the leg entirely," he says. "But if you chop off a starfish's leg, it grows a new one, and the chopped off leg grows into a new starfish."
The proposals would hand control to a central body such as the International Telecommunications Union. The ITU has little experience in resolving business issues quickly. This would lead to delays in resolving disputes over who owns a domain name or a URL. It would also create a vast new bureaucracy and raise costs.
Icann uses the starfish model, which the Department of Defense's Lentz approves of, because it provides resilience against a catastrophic central failure. IT managers should therefore resist efforts to centralise control of the internet.
Nominet is presently consulting on the future of the domain name business in the UK. It is looking for comment from CIOs and IT managers that will help it shape the future of the internet here and elsewhere. | <urn:uuid:7ed696a0-d887-4e03-92df-3d41e91ea6c3> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280096976/Identity-and-privacy-at-risk-on-new-internet | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947928 | 1,143 | 2.8125 | 3 |
A look at virtualization for CompTIA’s Cloud+
Last month, we looked at the first of the seven domains that are on the CompTIA Cloud+ certification entry-level exam (number CV0-001). This month, the focus turns to the second domain — Virtualization — and the five topic areas beneath it:
● Explain the differences between hypervisor types
● Install, configure, and manage virtual machines and devices
● Given a scenario, perform virtual resource migration
● Explain the benefits of virtualization in a cloud environment
● Compare and contrast virtual components used to construct a cloud environment
Virtualization is an important technology in cloud computing because it removes the barrier of needing one-to-one relationships between the physical computer and the operating system. Signifying that importance, these topics (some of which appear again in other domains) make up 19 percent of the exam questions — the second highest percentage of the seven domains. Being an entry-level exam, there is a heavy focus on definitions and knowledge as opposed to actual implementation. That said, each of the five topic areas are examined in order below.
In essence, virtualization means that there is not necessarily a one-to-one relationship between a physical server and a logical (or virtual) server. There could be one physical server that virtually hosts cloud servers for a dozen companies, or there could be several physical servers working together as one logical server. From the end user’s side, they have no concept of whether they are interacting with a physical machine or a virtual machine: it is all handled behind the scenes. At the risk of redundancy, virtualization can be thought of as creating virtual (rather than actual) versions of something (a desktop, a server, etc.).
The purpose of going virtual, in almost every instance, is to save money. Providers can achieve economies of scale, because adding additional clients doesn’t always require the purchase of additional hardware. Clients can pay only for the services they use and don’t have to pay for hardware (or the utilities needed to keep the hardware cool). Developers and other end users can have multiple environments to use without needing to buy additional hardware as well.
To implement virtualization, there needs to be a hypervisor (also known as a virtual machine manager or VMM). The hypervisor allows multiple operating systems to share the same host, and it manages the physical resource allocation to those virtual OSs. There are two types of hypervisors: bare metal (Type I) and operating system dependent (Type II).
With Type I, the hypervisor runs independent of the operating system — booting up before the OS and it is basically the operating system for the physical machine. This setup is most commonly used for server-side virtualization, because the hypervisor itself typically has very low hardware requirements to support its own functions.
Type 1 is generally considered to have better performance than Type 2, simply because there is no host OS involved and the system is dedicated to supporting virtualization. Virtual OSs are run within the hypervisor, and the virtual (guest) OSs are completely independent of each other.
Type II is dependent on the operating system — it cannot boot until the OS is up AND it needs the OS running in order to stay up. Because of this, it is also called the “host OS” and it is commonly used in client-side virtualization environments where multiple OSs are managed on the client machine as opposed to on a server.
An example of this would be a Windows user who wants to run Linux at the same time as Windows. The user could install a hypervisor and then install Linux in the hypervisor and run both OSs concurrently and independently. The downsides of Type 2 are that the host OS consumes resources such as processor time and memory and a host OS failure means that the guest OSs fail as well.
For the exam, remember that when it comes to performance and scalability, Type I is superior to Type II.
When it comes to hypervisor-specific system requirements, bear in mind that running multiple OSs on one physical workstation can require more resources than running a single OS, so the system(s) should be well equipped (CPU, RAM, hard drive space, and network performance). This is especially true for systems running a Type 2 hypervisor, which sits on top of a host OS.
The host OS will need resources too, and it will compete with the VMs for those resources. Additionally, all hosts within a cluster need to be homogenous. Statically allocated IP addresses are recommended and it is important to have sufficient memory, lots of hard drive space, and systems that have current patches installed when issued.
Both proprietary and open source solutions are widely available. Some, like Xen, fork into both proprietary and open source solutions. VMware ESX is offered for free, but you pay for features. Xen is free and open source; ESX is free but not open source (proprietary). KVM is free and open source; Microsoft’s Hyper-V is usually free, but not open source (proprietary).
Both consumer and enterprise solutions can be used. Consumer implementations can include embedded deployments, but consumer implementations should not use enterprise applications due to the excessive overhead. As a general rule, “workstation” implementations can be equated to “desktop” use and “cloud” can be equated to the “infrastructure” utilization. Note that Type I hypervisors are more likely to be used in enterprises and Type II by consumers.
Virtual Machines and Devices
The actual option choices for creating, importing, and exporting templates depend on the software being used, but most (Xen, VMware, etc.) have similar options:
● To export, you choose File, Export Template.
● To import, you choose File, Import.
Depending on the importing/exporting that are doing, you often have to agree to the terms of a EULA.
Know that each virtual desktop (often called a virtual desktop interface or VDI) will usually need full network access, and configuring the permissions for each can be time consuming to configure without templates. The virtual machine create a virtual NIC and allows you to manage the resources of that NIC appropriately.
Theoretically, the virtual NIC would not have to be connected to the physical NIC — an administrator could create an entire virtual network within the virtual environment where the virtual machines just talk to each other — but that is not normally practical in the real world.
In most situations, the virtual NIC will be connected to the physical NIC and configuring a virtual switch within the hypervisor normally does this. That virtual switch manages the traffic to and from the virtual NICs and logically attaches to the physical NIC. Because of this, network bandwidth is often the biggest bottleneck when running multiple virtual OSs
Guest Tools are helpers added after the VM/OS has been installed. With VMware, for example, install VMware Tools on workstation menu (this is available as an ISO file). A clone is a copy of an existing virtual machine. Changes made to a clone do not affect the parent virtual machine. Changes made to the parent virtual machine do not appear in a clone.
A snapshot is a point-in-time copy of the virtual machine. File-Level backups are incremental backups of virtual machines. An image backup is an online backup of the virtual machine(s). Virtual disk limits are based on the virtual machines used. The following table shows the maximums for VMware vSphere 5.1:
A VLAN makes it possible for VNIC to communicate with other network devices. The Virtual NIC needs an IP address, subnet mask, and default gateway values the same any physical NIC would.
Virtual Resource Migration
Before migrating to a virtual platform, it is important to plan carefully. Not every physical server is a perfect candidate for migration. You need to create a baseline. Try not to overprovision or underprovision. Know that migration will entail some downtime and plan for the least disruption possible. With online migration, the source computer stays up during migration. With offline migration, the source computer taken offline during migration.
Reasons for migrating can include performance issues, testing needs, upgrading existing systems, and better resource utilization. The three possible migration types are:
● Physical to Virtual (P2V) — from a physical to virtual
● Virtual to Virtual (V2V) — from one virtual to another
● Virtual to Physical (V2P) — from a virtual to a physical
Virtualization in a Cloud Environment
Virtualization simplifies the sharing of resources and it is possible to share almost any resource: the processor, the disk, network, memory, etc.
Elasticity offers the ability to scale up resources as needed and it has a number of other benefits that go along with it: the time to service — the mean time to implement — is quicker inside rather than outside the virtual model, resource pooling is possible as are multitenant models, it is scalable not only up but also down, and applications are both available and portable.
Using network and application isolation it is possible to increase security and control resources. When planning isolation, think of security, chargeback, etc. Infrastructure consolidation can range from SaaS to IaaS and allow multiple machines to run on the same host.
A virtual datacenter appears the same as a physical datacenter from an administration standpoint and features elasticity, scalability, etc. A big benefit of the virtual center is that it can employ a pay-as-you-go model.
There are a number of terms to know related to virtualization for this topic:
Virtual NIC: While software only, it allows interaction with other devices on network and has MAC/IP address, network configuration settings, etc.
Virtual HBA: Enables a single physical Fibre Channel HBA port to function as multiple logical ports, each with its own identity.
Virtual Router: Software only, but acts like hardware router.
Shared Memory: Virtual memory settings can be changed as needed. This can usually be configured as a static value or dynamically.
Virtual CPU: It is installed on the guest virtual machine and appears the same as a physical CPU. A vCPU is also known as a virtual processor.
Shared storage: Can be done on SAN, NAS, etc. Virtual machine sees only “physical disk.”
Clustered storage: Using multiple devices can increase performance. Microsoft Clustering Services would be an alternative to this.
NPIV(N_port ID Virtualization): Multiple hosts share the same physical fibre channel port ID. This is used for High Availability with a SAN.
While not appearing in the CompTIA list, know that the word emulator is tossed around often. The word is often used synonymously with hypervisor but they aren’t exactly the same. While the hypervisor can support multiple OSs, technically an emulator appears to work the same as one specific OS. It is helpful to keep this difference in mind.
Summing It Up
There are seven domains on the CompTIA Cloud+ certification exam (CV0-001) and this month we walked through the topics covered by the second one. Next month, the focus will move to the third domain, Infrastructure, and what you should know about it as you study for the exam. | <urn:uuid:e3d0757a-97a2-4f85-86b8-5debc0e0dd5f> | CC-MAIN-2017-04 | http://certmag.com/look-virtualization-comptias-cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921871 | 2,353 | 2.890625 | 3 |
News Article | April 18, 2016
Thanks in part to the efforts of one dedicated mother, who took to Facebook to document her son’s mysterious developmental disability, an international team of researchers led by scientists at UC San Francisco and Baylor College of Medicine in Houston has now identified a new genetic syndrome that could help illuminate the biological causes of one of the most common forms of intellectual disability. In a study of 10 children published online in the American Journal of Human Genetics on April 14, the researchers linked a constellation of birth defects affecting the brain, eye, ear, heart and kidney to mutations in a single gene, called RERE. The discovery is likely to aid researchers striving to understand the cause of birth defects more broadly, the study’s authors said, but it is also a boon to families who know for the first time the reason their children share this group of developmental disabilities. “Just having an answer can be hugely beneficial for families,” said co-senior author Elliott Sherr, M.D., Ph.D., a UCSF pediatric neurologist who directs the Brain Development Research Program and the Comprehensive Center for Brain Development at UCSF. “Of course, getting a genetic answer is just the first step, but for the longest time we didn’t even have that much. It gives these families hope that we can move forward.” Finding could speed search for answers in more common genetic syndrome In their paper, the researchers demonstrate that the developmental disabilities suffered by children with RERE mutations correspond almost perfectly to the well-known pattern of intellectual disabilities, heart defects, craniofacial abnormalities, and hearing and vision problems seen in 1p36 deletion syndrome, one of the most common sources of intellectual disability in children. This syndrome occurs in approximately 1 in 5,000 newborns, and is caused by a much larger (and harder to study) pattern of genetic damage in the so-called 1p36 region at the tip of human chromosome 1. 
The research group of Daryl Scott, MD, PHD, an associate professor of molecular and human genetics at Baylor College of Medicine, has been working for many years to identify the specific genes that cause the medical problems in children with 1p36 deletion syndrome. “Previous research had narrowed it down to two smaller ‘critical regions’ within the 1p36 region, but even these smaller regions contain dozens of different genes,” said Scott, who was co-senior author on the new study. Scott’s group had focused on the RERE gene, which lies within one of these 1p36 critical regions, because it plays a role in retinoic acid (vitamin A) signaling, an important pathway regulating the development of many organs, including the brain, eye and heart. The Baylor researchers found that mice with Rere mutations had birth defects that were very similar to the children with 1p36 deletions, but had initially been unable to prove that damage to this gene was sufficient produce the same developmental problems in humans. Sherr and Scott credit the genesis of their collaboration to Chauntelle Trefz, the mother of one of Sherr’s patients who connected the two researchers after discovering Scott’s work on mice with Rere mutations online and whose Facebook page about her son, Harrison, became a hub for identifying other children with the same condition. Trefz says that getting the whole exome sequencing results from Sherr and learning that a single gene mutation was responsible for her son’s dizzying array of symptoms — which include global developmental delay, vision problems, hearing problems, weak muscles, and constant acid reflux — was “a game-changer.” “Learning about the mutation was like a huge weight had been lifted,” she said. “When you bring a child with special needs into the world you feel so guilty, like you’ve done something wrong. Hope can be a hard thing to find. Dr. 
Sherr gave us hope.” Trefz started a Facebook page documenting the joys and challenges of raising and caring for Harrison, who is now 4, hoping to find other families whose children had the same condition. “Harrison is such a happy kid, and he seems normal in many ways, but he’s really not,” she said. “I could see a lot of kids falling through the cracks without the right diagnosis. I wanted other parents to see this and say, ‘that sounds like my son.’” Soon Sherr and Scott had identified 10 children with RERE mutations through collaborators around the US, as well as several from the Netherlands who had found the researchers through Trefz’s Facebook page. The researchers began a thorough comparison of these 10 children with a cohort of 31 patients with the more common 1p36 deletion syndrome, and found that RERE mutations alone produced almost exactly the same pattern of symptoms as 1p36 syndrome, with the exception of a few of the craniofacial abnormalities and cardiomyopathies often seen in that more common syndrome. Additional experiments showed that unique brain and eye problems first observed in human patients were also seen in the mice with Rere mutations. “It’s still a shock that [a mutation in] one gene is capable of causing all these different problems,” Scott said. “But this finding really brings everything together, from molecular studies to mouse experiments and all the way to human patients. We’ve finally proved what we’ve been talking about for all these years.” Though much more study is needed to understand the syndrome fully, Scott said, RERE mutations may be capable of inducing a diverse set of developmental problems because the protein encoded by the gene interacts with important developmental processes in many organs throughout the body, such as the retinoic acid signaling crucial for proper eye and heart development. When RERE doesn’t function properly, the development of all of these organs is affected. 
Sherr acknowledges that the current sample of just 10 patients with RERE mutations, who each experience slightly different symptoms with notably different levels of severity, is too small to give a complete portrait of the new syndrome. “Now that we’ve seen the first 10 cases, we want to know what the next 10, the next 20 look like,” he said. “That may not take very long. Before we’d even published the paper, we’d already gotten calls from more clinics around the country whose patients have similar mutations. We suspect this syndrome may be significantly more common than we previously appreciated.” The empowerment of families through social media and the plummeting cost of of gene sequencing technologies have produced a revolution in the pace of discovery about rare genetic conditions, Sherr said. “In the last five years alone there’s been a huge explosion in the number of conditions we can decipher genetically – we can take a few kids with developmental disabilities, come up with a coherent genetic explanation for what has happened and use that as first step for how to move forward” he said. “When I started working in child neurology as a fellow back in the late ‘90s, we understood just a few of these super-rare genetic disorders but now there are hundreds. And we’re just getting started.” The authors acknowledge the following industry ties: Sherr is a member of the clinical advisory board of genetic testing company InVitae and consults for Personalis. Four of the authors are employees of GeneDx, which provides exome sequencing on a clinical basis. The Department of Molecular and Human Genetics at Baylor College of Medicine derives revenue from clinical laboratory testing conducted at Baylor Miraca Genetics Laboratories, which provides exome sequencing on a clinical basis.
Yuan B.,Baylor College of Medicine |
Yuan B.,Baylor Miraca Genetics Laboratories |
Liu P.,Baylor College of Medicine |
Liu P.,Baylor Miraca Genetics Laboratories |
And 2 more authors.
Genomics Data | Year: 2016
Array comparative genomic hybridization (aCGH) has been widely used to detect copy number variants (CNVs) in both research and clinical settings. A customizable aCGH platform may greatly facilitate copy number analyses in genomic regions with higher-order complexity, such as low-copy repeats (LCRs). Here we present the aCGH analyses focusing on the 45 kb LCRs at the NPHP1 region with diverse copy numbers in humans. Also, the interspecies aCGH analysis comparing human and nonhuman primates revealed dynamic copy number transitions of the human 45 kb LCR orthologues during primate evolution and therefore shed light on the origin of complexity at this locus. The original aCGH data are available at GEO under GSE73962. © 2016 The Authors. Source
Westerfield L.E.,Baylor College of Medicine |
Stover S.R.,Baylor College of Medicine |
Mathur V.S.,Baylor College of Medicine |
Nassef S.A.,Baylor College of Medicine |
And 6 more authors.
Prenatal Diagnosis | Year: 2015
Objective: Diagnostic whole exome sequencing (WES) is rapidly entering clinical genetics, but experience with reproductive genetic counseling aspects is limited. The purpose of this study was to retrospectively review and report on our experience with preconception and prenatal genetic counseling for diagnostic WES. Method: We performed a retrospective chart review over 34months in a large private prenatal genetic counseling practice and analyzed data for referral indications, findings, and results of genetic counseling related to diagnostic WES. Results: Ten of 14 patients counseled about diagnostic WES for ongoing pregnancies pursued the test, resulting in identification of three pathogenic variants (30%). Five of 15 patients seeking counseling about familial WES results in an affected proband pursued prenatal diagnosis, resulting in identification of one affected fetus and five unaffected fetuses. We experienced challenges related to complexity and uncertainty of results, turnaround time, cost and insurance overage, and multidisciplinary fetal care coordination. Conclusion: Despite having experienced complexity and identified challenges of the reproductive genetic counseling, availability of diagnostic WES contributed important information that aided in prenatal care planning and decision-making. Future enhanced provider education and larger studies to systematically study the integration of WES in reproductive genetic counseling and prenatal care will be important. © 2015 John Wiley & Sons, Ltd. Source
Zhang J.,Mount Sinai School of Medicine |
Zhang J.,Baylor College of Medicine |
Lachance V.,Mount Sinai School of Medicine |
Schaffner A.,Mount Sinai School of Medicine |
And 27 more authors.
PLoS Genetics | Year: 2016
Genetic leukoencephalopathies (gLEs) are a group of heterogeneous disorders with white matter abnormalities affecting the central nervous system (CNS). The causative mutation in ~50% of gLEs is unknown. Using whole exome sequencing (WES), we identified homozygosity for a missense variant, VPS11: c.2536T>G (p.C846G), as the genetic cause of a leukoencephalopathy syndrome in five individuals from three unrelated Ashkenazi Jewish (AJ) families. All five patients exhibited highly concordant disease progression characterized by infantile onset leukoencephalopathy with brain white matter abnormalities, severe motor impairment, cortical blindness, intellectual disability, and seizures. The carrier frequency of the VPS11: c.2536T>G variant is 1:250 in the AJ population (n = 2,026). VPS11 protein is a core component of HOPS (homotypic fusion and protein sorting) and CORVET (class C core vacuole/endosome tethering) protein complexes involved in membrane trafficking and fusion of the lysosomes and endosomes. The cysteine 846 resides in an evolutionarily conserved cysteine-rich RING-H2 domain in carboxyl terminal regions of VPS11 proteins. Our data shows that the C846G mutation causes aberrant ubiquitination and accelerated turnover of VPS11 protein as well as compromised VPS11-VPS18 complex assembly, suggesting a loss of function in the mutant protein. Reduced VPS11 expression leads to an impaired autophagic activity in human cells. Importantly, zebrafish harboring a vps11 mutation with truncated RING-H2 domain demonstrated a significant reduction in CNS myelination following extensive neuronal death in the hindbrain and midbrain. Thus, our study reveals a defect in VPS11 as the underlying etiology for an autosomal recessive leukoencephalopathy disorder associated with a dysfunctional autophagy-lysosome trafficking pathway. © 2016 Zhang et al. Source
Pupavac M.,McGill University |
Tian X.,Baylor Miraca Genetics Laboratories |
Chu J.,McGill University |
Wang G.,Baylor Miraca Genetics Laboratories |
And 11 more authors.
Molecular Genetics and Metabolism | Year: 2016
Next generation sequencing (NGS) based gene panel testing is increasingly available as a molecular diagnostic approach for inborn errors of metabolism. Over the past 40 years patients have been referred to the Vitamin B12 Clinical Research Laboratory at McGill University for diagnosis of inborn errors of cobalamin metabolism by functional studies in cultured fibroblasts. DNA samples from patients in which no diagnosis was made by these studies were tested by a NGS gene panel to determine whether any molecular diagnoses could be made. 131 DNA samples from patients with elevated methylmalonic acid and no diagnosis following functional studies of cobalamin metabolism were analyzed using the 24 gene extended cobalamin metabolism NGS based panel developed by Baylor Miraca Genetics Laboratories. Gene panel testing identified two or more variants in a single gene in 16/131 patients. Eight patients had pathogenic findings, one had a finding of uncertain significance, and seven had benign findings. Of the patients with pathogenic findings, five had mutations in ACSF3, two in SUCLG1 and one in TCN2. Thus, the NGS gene panel allowed for the presumptive diagnosis of 8 additional patients for which a diagnosis was not made by the functional assays. © 2016 Elsevier Inc.. Source | <urn:uuid:6d9bf1a3-7335-4618-aa4a-eb2b7245babc> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/baylor-miraca-genetics-laboratories-141437/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00301-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936203 | 2,957 | 2.671875 | 3 |
System Restore is a recovery feature in Windows 8 that allows you to restore your computer to a previous state. This is useful if your computer starts to function poorly or crashes and you cannot determine what the cause is. To resolve these types of issues, you can use System Restore to restore your computer back to a previous state that was saved before your problems started occurring. This will allow your computer to start operating correctly again.
When System Restore is enabled in Windows, it will automatically create snapshots called restore points that contain a backup of your Windows Registry, system configuration, program files, and system drivers and executables. These restore points are created automatically every day and before a significant event such as installing a program or adding hardware drivers to your computer. It is also possible to manually create a restore point at any time you wish. As noted above, when a restore point is created it only backs up your system files, program files, and the Windows configuration. It does not back up your personal data such as email, pictures, documents, videos, saved games, and music. Therefore, you should not use System Restore as a method of backing up and restoring these types of files.
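If you prefer the command line, you can inspect the restore points currently stored on your computer from an elevated Windows PowerShell prompt. This is a minimal sketch using the built-in Get-ComputerRestorePoint cmdlet (available in Windows PowerShell on client versions of Windows, including Windows 8); the descriptions and times shown will, of course, vary by machine.

```shell
# List all restore points on this computer.
# Run from an elevated (Run as administrator) Windows PowerShell prompt.
# Each entry shows its sequence number, description, type, and creation time.
Get-ComputerRestorePoint |
    Format-Table SequenceNumber, Description, RestorePointType, CreationTime
```

The sequence number shown in the first column is how PowerShell identifies an individual restore point, which is useful if you later restore from the command line instead of the graphical wizard.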
If used properly, System Restore can be an incredibly useful tool for the well-being of your computer. Since System Restore creates new restore points every day and every time a program is installed, you always have a way to fall back to a working Windows configuration in the event that something causes a problem on your computer. For example, if you install a new program or hardware and find your computer is no longer working properly, you can simply restore back to a restore point that was created before you made the changes. This allows you to save considerable time and money by being able to quickly and easily resolve these issues by yourself.
Another powerful feature of System Restore is that you can use it from the Windows Recovery Environment in the event that you are unable to start Windows. This allows you to easily resolve an issue where Windows does not start by restoring to a time and date when you know Windows was working properly. More information on using System Restore from the Windows Recovery Environment can be found here:
System Restore points are created when the following events occur in Windows:
System Restore does have some requirements to operate properly. These are:
This tutorial will guide you through using System Restore in Windows 8 to protect and restore your PC to a working configuration when it is necessary.
If you have a problem on your computer and you cannot fix it by normal means, then you can use System Restore to restore your computer to a previous state when your computer was working properly. To restore your computer to a previously created restore point please go to the Windows 8 Start Screen and type restore point. When the search results appear click on the Settings category as shown below.
Now click on the option labeled Create a restore point and you will be brought to the System Protection tab of the System Properties control panel.
To restore your computer, click on the System Restore button and you will be presented with the main screen for System Restore. Now click on the Next button and you will be shown a list of available restore points that you can restore.
Select the restore point you wish to restore by left-clicking on the entry once. This will then make the Scan for affected programs button available. If you click on that button you will be shown a list of programs that will be removed when you perform a restore.
If you are okay with the programs that will be deleted, please click on the Close button and then click on the Next button at the restore point selection screen. You will now be at a screen asking if you are sure you wish to perform the restore.
If you are sure you wish to continue, please click on the Finish button. System Restore will once again ask if you are sure you wish to continue. If you are sure, please click on the Yes button.
System Restore will now reboot your computer and begin the restore process. Please be patient as this can take quite some time.
When the restore point has finished being restored, Windows will start back up and you will be at your login screen or desktop. You will then be shown a confirmation box as seen below.
Your computer has now been restored back to the selected point in time.
If you restore a restore point and find that your system has become more unstable or you need the applications that have been deleted, you can undo a restore point. As System Restore creates a restore point right before it restores another one, you can revert back to the exact configuration you were using before you ran System Restore.
To undo a system restore, please go to the Windows 8 Start Screen and type restore point. When the search results appear click on the Settings category. Now click on the option labeled Create a restore point and you will be brought to the System Protection tab of the System Properties control panel. Now click on the System Restore button and you will be presented with the main screen for System Restore.
To undo a system restore, select the Undo System Restore option and then click on the Next button. Windows will now ask you to confirm whether or not you wish to perform the Undo: Restore Operation. If you wish to, please click on the Finish button. Once again, Windows will ask if you are sure you wish to continue and you should now click on the Yes button.
Windows will now restore your computer and begin to undo your previous system restore. When it has finished, you will be brought back to the Windows login screen. Once you login, you will see a confirmation box on the classic desktop stating that the restore was successful.
Your previous system restore has now been undone.
It is possible to manually create a new restore point when you wish rather than waiting for the daily interval. If you wish to create a manual restore point, you need to go to the Windows 8 Start Screen and type restore point. When the search results appear, click on the Settings category. Now click on the option labeled Create a restore point and you will be brought to the System Protection tab of the System Properties control panel.
Please click on the Create button and you will be shown a prompt asking you what you would like to name the new restore point.
Enter a descriptive name and then click on the Create button. The restore point will now be created.
When the restore point is finished, you will be shown a dialog box where you can click on the Close button.
It is advised that you do not disable System Restore as your computer will no longer be protected and all the previous restore points will be deleted. If you still wish to disable System Restore, please go to the Windows 8 Start Screen and type restore point. When the search results appear click on the Settings category. Now click on the option labeled Create a restore point and you will be brought to the System Protection tab of the System Properties control panel.
To disable System Restore you need to disable it for each drive that is currently protected. To do this left-click on each drive listed in the Protection Settings box so that it becomes highlighted. Then click on the Configure button. This will open up the System Protection properties for the selected drive.
To disable System Restore, select the Disable system protection option and then click on the Apply button followed by the OK button. System Restore will now be disabled for that particular drive.
Now go through all of the other drives and disable System Restore for those drives as well. Once all drives have been disabled, System Restore will be disabled.
If System Restore was previously disabled, whether by you or by a computer infection, you should enable it again so that your computer is protected. To do this, follow the steps in the previous section until you are at the System Protection properties for a particular drive. This time you should select the Turn on system protection option and then press the Apply button followed by the OK button.
You need to perform this step for each drive on your computer for your system to be fully protected. Once you have enabled System Restore on each drive, you should see that protection is On for each of the drives on your computer.
Using System Restore is an important step to keeping your system safe and secure. In the event that you have an issue in the future, you can use System Restore to easily revert your computer's configuration back to a point where the computer was working normally. This makes it much easier to manage your computer and make sure it continues to run efficiently.
As always if you have any comments, questions or suggestions about this tutorial please do not hesitate to tell us in the Windows 8 Forum.
Windows 8 includes a recovery feature called Automatic Repair that attempts to automatically diagnose and fix common issues that may cause Windows 8 to not start properly. Automatic Repair will start automatically when Windows is unable to start properly. Once started, it will scan various settings, configuration options, and system files for corrupt files and settings. If it detects ...
When Windows is no longer able to start it is typically because of a problem in the Windows Registry, a driver conflict, or malware crashing the computer. Windows startup issues can be one of the most frustrating issues to deal with because you do not have easy access to the files and data needed to fix these problems. Thankfully, we can use the Windows 8 Recovery Environment Command Prompt to ...
System Restore is a Windows service that runs in the background and creates restore points, or snapshots, of your operating system every day and at other times. If Windows 8 starts displaying problems that you are unable to repair, you can restore your computer to a restore point at which you know your computer was working properly. This guide will walk you through using System Restore from the ...
Windows 8 introduced a new boot loader that decreased the time that it takes Windows 8 to start. Unfortunately, in order to do this Microsoft needed to remove the ability to access the Advanced Boot Options screen when you press the F8 key when Windows starts. This meant that there was no easy and quick way to access Safe Mode anymore by simply pressing the F8 key while Windows starts. Instead in ...
Safe Mode is a Windows mode that uses only the most basic drivers and programs that are required to start Windows. This mode will also not launch any programs that are set to start automatically when Windows starts. This makes Safe Mode very useful for diagnosing hardware driver problems and computer infections in Windows 8. It can also be useful when you want to uninstall a program or delete a ...
People who are deaf or have a hearing impairment can have problems accessing websites and other ICT systems.
Guideline 1 of the Web Content Accessibility Guidelines (WCAG) states "Provide equivalent alternatives to auditory and visual content". The guidelines suggest that text is a suitable equivalent for auditory content and should include text of any audio files and captioning of any audio-video file.
Although text is an important aid to people who cannot hear, it is not necessarily a complete or optimum solution. The reason is that sign language is their first language and written English is a second language. As I am based in the UK, I will write about British Sign Language (BSL) rather than International Sign Language (ISL) or American Sign Language (ASL), although the basic message is the same.
BSL is different from spoken English in syntax, semantics and cultural nuances. To understand this a little consider BSL poetry which is judged not just by the meaning of the signs but how the gestures are put together to provide a flowing aesthetic performance.
Many in the deaf community are forceful proponents of BSL as an independent language from English. They believe that information should be available to them in BSL. Because written English is their second language they find it much more difficult to understand.
The rest of this article will discuss the pros and cons of including BSL on websites and then suggest some guidelines. I hope this article will be used to discuss this issue in more detail. Comments from all interested parties are very welcome.
- BSL is easier to understand than written English. People who use BSL will appreciate the effort put in to provide BSL and are likely to return to the site and recommend it.
- However BSL is a ‘spoken’ language, in the sense that it is linear and you can only see (hear) and comprehend the current phrase. Written English can be and is accessed and comprehended in a more dynamic way with a lot of skipping, skimming and rereading taking place.
- Producing BSL is time consuming and expensive. It is difficult to see how the vast majority of text on the web could be translated into BSL.
- If there is spoken English in an audio or audio-video file on a website, BSL will be a more natural medium for the deaf listener than captioning. This is especially true of audio-video where it is easier for a deaf person to listen to the signing whilst watching the video than it is to read the captioning at the same time as watching the video.
- Signing, and even captioning, can be disconcerting to people who are trying to listen to the speech. Making sites more accessible to one group should not unduly interfere with the usability of the site by other users.
- Seeing BSL whilst reading written English can help a deaf person improve their ability to read written English; it also helps hearing people learn BSL. A powerful example of the benefits is a set of multi-modal books for children developed by ITV SignPost BSL. The books have written English, spoken English, BSL, and story-pictures all synchronised. The book enables deaf children to share a story with hearing parents, siblings and even friends with vision impairments; this significantly improves the understanding of all participants, hence improving communication between the different groups.
- BSL has to be distributed as a video file, which, by its nature, will be considerably bigger than the equivalent text. There is likely to be a delay between the text being visible and the BSL being available. Thus the benefit of the BSL over the written English has to be worth the wait.
- As yet there is no best practice guidance for implementing BSL on a website and there are no good technology solutions that are easier to implement and effective for the user.
- There are specialist companies for creating BSL on websites such as ITV SignPost BSL and see-bsl that can help you add BSL to your site.
So with these conflicting pros and cons what guidance should be given to website providers:
- It is unlikely that you can justify signing the whole site so you have to decide what areas are most important.
- If the site is aimed at the deaf community then more signing can be justified. Having said that, I have looked at the RNID and a few other deaf association sites, and they do not have any signing at all.
- Any spoken English on the site needs to be available in another format. Captioning is the most inclusive so needs to be provided, however BSL should be strongly considered as well, especially for audio-video.
- An initial introduction to the site in BSL that explains briefly who the site is for, what it provides and a little of the structure should be included. The idea is that the deaf person can decide quickly and effortlessly if the site is of interest. In fact I often think that a text section with that information is missing from many sites.
- Any pages that include safety, security or emergency information where it is essential that there is no misunderstanding should be in BSL.
- If there are pages available in other languages (Welsh, Urdu, Polish for example) then that is a strong indication that the information should also be available in BSL.
- The web is an excellent medium to include BSL and written English side by side as a way to improve comprehension of written English. If your site has a level of public social responsibility then some of the site should have BSL, daily news bulletins may be the ideal area to concentrate on as they are short and include topical vocabulary.
Finally I would suggest that there is a need for improved technology to make it much easier to integrate BSL in a seamless, flexible and usable way. I contend that extending the DAISY (talking book format) to incorporate BSL could be a way forward. The format has the ability to provide complex navigation and the synchronisation of multiple modes of presentation. Providing paragraph chunks of BSL would enable them to be synchronised with the text.
Please give me your thoughts by adding comments to this article. | <urn:uuid:f2949628-48d7-43f7-b7bf-20a62b5ac37f> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/should-websites-include-sign-language/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955384 | 1,257 | 3.03125 | 3 |
Traditional aid to developing countries has focused on fighting poverty, building infrastructure, protecting the environment, and strengthening democratic institutions including human rights. This formula has served development organisations well in the past, but going forward it may not be adequate to support the rapidly changing technological needs of developing countries.
Developing countries have nascent digital economies with critical infrastructure that relies on sophisticated technologies. In addition, citizens’ access to information, privacy, and freedom of expression—human rights according to the Universal Declaration of Human Rights—are fundamentally at risk without adequate information security protections.
The view of the populations in developing countries being technologically illiterate is now obsolete. This is especially true in areas of Africa where mobile technologies have leap-frogged an entire generation of telecommunications that rely upon physical infrastructure. A recent survey conducted by the Pew Research Center in 24 emerging and developing economies showed that in regions where mobile money is prevalent, people now regularly use their mobile phones for e-commerce. For example, in Kenya, 82% of the population owns mobile phones and of that number, 68% regularly use them to make or receive electronic payments.
These emerging digital economies can be at systemic risk from cyber-criminals. Rampant fraud or hacking attacks, for example, could stymie or crash a developing nation’s nascent digital economy. Widespread fraud could deter legitimate participants from using e-commerce and thus prevent nations’ macro-economies from benefitting from the increased speed and reduced costs of digital commerce. Without sufficient protection against cybercrime, economies in emerging nations may never make the jump to fully developed nation status.
Developing nations also face risks to their critical infrastructure from more advanced nation-state actors who, in times of crisis, might use their superior cyber-attack capabilities as a means of intimidation. For example, it was recently revealed by security researchers at Cylance that for the past two years, Iranian hackers have infiltrated the networks of many nations’ airports, defense industries, universities, hospitals, telecommunications firms, government agencies, and energy companies. The Iranian group purportedly used internally developed software to hack other nations' critical infrastructure and obtain sensitive information. It does not take much imagination to understand how this clandestine information access might be used to intimidate or coerce nations in times of crisis.
Finally, protecting personal data, freedom of expression, and access to public resources for citizens in developing nations is fundamental to preserving human rights in the digital age. A US-based NGO, Freedom House, recently published a Freedom of the Net study showing that 41 out of the 65 countries assessed in the report have either passed or proposed legislation to penalize legitimate forms of online speech, increase government powers to control content, or expand government surveillance capabilities. In addition, governments, cyber criminals and state-backed intelligence entities have been known to hack citizens' personal computers for fraudulent purposes or to gain information of intelligence value. Protecting human rights in the digital realm translates to introducing full-fledged cyber security capabilities and information security awareness.
Developing countries may lack expertise or awareness of cyber risks to their critical infrastructure, their emerging digital economies, and security threats that affect basic human rights. One way this may be overcome is by enhancing programmatic assistance to developing countries through cyber security expertise.
The United Nations Development Programme, for example, is now providing information security assistance to developing countries by facilitating the development of local networks of experts to provide technical expertise; by preparing briefing notes that raise awareness about information security issues to policy- and decision-makers at high-level meetings; and by sharing case studies, success stories, and lessons learned.
As just one example, the highly successful Access 2 Information project, a joint programme of UNDP and USAID with the Prime Minister's Office in Bangladesh, is adding, with UNDP's help, an information security programme to its arsenal of technological assistance for a digital Bangladesh. This kind of innovative new programmatic assistance might serve as a blueprint for effective global development aid in our digital age.
--The views expressed herein are those of the author and do not necessarily reflect the views of the United Nations Development Programme.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:8363301c-8a65-4d13-9a80-14fd64ea32e3> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2878566/cyber-attacks-espionage/re-thinking-development-aid-in-the-digital-age.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917881 | 855 | 3.03125 | 3 |
With a GDP of US$ 81.79 billion and a population of 3.93 million people, Oman is one of the largest producers and consumers of fish in the GCC. Given the country's high unemployment rate, the government has also chosen this sector as a key pillar of economic diversification, moving away from over-dependence on earnings from hydrocarbons exports and encouraging artisanal fishermen's participation to boost employment. The market for fisheries and aquaculture in Oman was worth US$ XX million as of 2015, and is expected to grow at a CAGR of X% per annum over 2016-2022.
Oman’s per capita consumption of fish is the highest in the GCC after the UAE, at 27 kg per capita per annum. X% of this is imported, as local demand outstrips local supply for many varieties of fish. Oman, under its national food security program, plans to double production from about 200,000 tons in 2014 to 480,000 tons in 2020.
Demand is driven by an increase in population, rising affluence, a preference for protein-rich food, and government impetus to increase self-sufficiency in terms of food. The country has also had a historically strong research wing for fisheries, active since 1997, whose approaches include experimentation with different varieties of fish (including oysters, shrimp and shellfish) using cages. Of the five currently ongoing pilot programs, the one in Musandam Governorate employs marine cage technology, with GIS used for site selection. Additionally, feed research and culturing tilapia are vital research projects for the country. Despite these efforts, aquaculture production has contributed very insignificantly to the total output of fish in Oman. Overfishing in common areas, together with a lack of regulation, consolidation and knowledge sharing in the informal sector, has led to the fishing of endangered species; in conjunction with high demand, this has made the importance of aquaculture conspicuous. Oman’s fisheries, aquaculture and fish processing sector is set only to expand over the coming years.
Two types of aquaculture farming systems are prevalent in Oman: shrimp farming in Al-Wusta Governorate in Wilyat Mahout, producing between 250 and 350 tons of shrimp annually, and integrated farming systems, wherein species like tilapia are cultured in small farms which are unfit for typical agricultural activities. Types of species which can be grown include those endemic to the Omani waters, as well as exotic species. Some known species are sea cucumbers, freshwater carp, seabass, and other high-value fish. The government has instated the Regional Aquaculture Information System (RAIS), which partakes in capacity building, research, and evaluating projects submitted by private players.
Companies offering fish processing services and fish rearing management can conduct business in Oman under PPPs. The process of bidding for tenders is highly competitive in Oman, attracting large multinational companies that partner with local companies to bid. A downside of tendering in Oman is its relatively low number of prerequisites, which attracts a high number of bids and enables companies with poor standards to drag down bid prices. Businesses can incorporate as joint stock companies, LLCs, general or limited partnerships, foreign branches, joint ventures or sole proprietorships in Oman.
Oman’s largest fish importers are the UAE and Saudi Arabia, accounting for 50% and 16% of Oman's total exports, respectively. Key players include Oman Fisheries Company, National Prawn Company, Quriyat Aquaculture Company, Hesy, and others.
KEY DELIVERABLES IN THE STUDY | <urn:uuid:d34b7632-00bc-4fbd-8b55-6ba5a0d2f5aa> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/aquaculture-in-oman-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00017-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943503 | 725 | 2.921875 | 3 |
TMI Syndrome in Web Applications
By design, Web applications are set up to process and present information, generally without the installation of any custom software on the user’s system. All the user needs to view information is a browser such as Microsoft Internet Explorer or Mozilla Firefox. With the current generation of Web application technologies (Web 2.0), Web pages are no longer plain and static but are instead rich, dynamic user interfaces. Information is no longer merely presented but can now be interacted with per the user’s personal interests.
Herein lies the problem. If the presentation of information is not properly protected, Web applications can suffer from TMI Syndrome (TMIS). TMI, an abbreviation for Too Much Information, is, according to Wikipedia, a slang expression indicating that someone has divulged too much personal information and made the listener uncomfortable. When Web applications suffer from TMI Syndrome, they divulge more information than is necessary, unsolicited or otherwise.
TMI Syndromes in Web applications can be categorized into two types: passive and active.
Passive TMIS: Web sites with Passive TMI Syndrome are those that divulge more information than is necessary, unsolicited or without any effort. Non-private MySpace and YouTube profile Web pages are a few classic examples of this.
Active TMIS: Web sites with Active TMI Syndrome are those that require additional effort to glean out information that is meant to be private or protected. Just use your favorite search engine and search for “data breaches in Web sites” for a plethora of examples of Web sites that have suffered from Active TMI Syndrome. A Chronology of Data Breaches maintained by Privacy Rights Clearinghouse states that, to date, more than 218 million data records of U.S. residents have been exposed due to security breaches since January 2005. A significant portion of this has been due to Web site mistakes.
So what are some of the sources for information leakage in Web applications? Recon efforts in which an attacker seeks to gather information about the Web application generally are comprised of, but are not limited to, the following sources:
- Unsolicited and non-private personal Web sites.
- See aforementioned section on Passive TMI Syndrome.
- Browser history and cache.
URI (Uniform Resource Identifier) is essentially a Web address with syntax rules. URIs of Web sites you visit using a browser are cached and recorded in history unless explicitly set not to be. This is mainly to enhance performance. If users’ browsers record Web site URIs, and they visit your Web site, chances are that your Web site’s URI is cached and recorded in their browser history. This information can be stolen; URIs may pass sensitive information such as session IDs, log-in information and, in one case, even pricing and quantity ordered. When the browser’s cache and history are stolen and sensitive information is disclosed, the attacker has the ability to replay the session, find out the log-in information and possibly submit orders, changing the price. TMI!
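The practical rule is that anything placed in a query string should be assumed recorded. A quick audit pass can flag URLs that carry sensitive parameters before they ship. The sketch below uses Python's standard library; the list of parameter names is an illustrative assumption, not an exhaustive one.

```python
from urllib.parse import urlparse, parse_qs

# Parameter names that commonly carry sensitive values (illustrative list).
SENSITIVE_PARAMS = {"sessionid", "session_id", "password", "token", "price"}

def find_sensitive_params(url):
    """Return the query parameters in `url` whose names look sensitive.

    URLs are recorded in browser history and caches, so anything they
    carry should be assumed readable by whoever steals either.
    """
    query = parse_qs(urlparse(url).query)
    return sorted(name for name in query if name.lower() in SENSITIVE_PARAMS)

# A URL like this leaks both the session and the price into history files:
risky = "https://shop.example.com/cart?item=42&price=9.99&sessionid=abc123"
print(find_sensitive_params(risky))  # ['price', 'sessionid']
```

In practice such values belong in cookies or the request body over HTTPS, not in the URL itself.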
HTML Comments and Client Side Scripts
Using your browser and viewing the source can reveal a lot of information about the Web page. Developers that tend to instrument their code sometimes inadvertently put in comments that have sensitive information such as validation checks, connection strings to databases, production data for test purposes and steps to debug the business logic the Web page processes. Any sensitive information in client side scripts also is readable upon viewing source. Again, TMI.
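An attacker's "view source" pass is easy to simulate in a review. The sketch below scans a page for HTML comments containing risky keywords; the keyword list is a hypothetical starting point, and a real review would be broader.

```python
import re

# Keywords that, inside an HTML comment, often signal leaked internals.
RISK_WORDS = ("password", "connection", "debug", "todo", "server=")

def risky_comments(html):
    """Return HTML comments that contain any of the risk keywords.

    Comments are delivered to every visitor, so they must never
    hold credentials, connection strings, or debugging notes.
    """
    comments = re.findall(r"<!--(.*?)-->", html, flags=re.DOTALL)
    return [c.strip() for c in comments
            if any(word in c.lower() for word in RISK_WORDS)]

page = """
<html><body>
<!-- TODO: remove test login admin/password123 before launch -->
<!-- page footer -->
</body></html>
"""
print(risky_comments(page))
# ['TODO: remove test login admin/password123 before launch']
```

Running a check like this before deployment catches comments that instrumentation left behind.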
Backup and Unreferenced Files
Due to versioning of Web applications, more often than not, older versions of a Web page are renamed with extensions like .bak or .old and deployed in production environments without reference, along with the newer updated versions. Source control solutions have significantly reduced this; however, it is still prevalent in legacy applications. An attacker can guess these file names and request the files, and without proper access controls and handling of file types in place, TMI can potentially be revealed.
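Guessing backup names is mechanical, which is exactly why attackers automate it. As a hypothetical illustration, the suffix list below is the kind of dictionary a scanner would probe for each known page.

```python
# Suffixes commonly left behind by ad-hoc versioning (illustrative list).
BACKUP_SUFFIXES = (".bak", ".old", ".orig", "~")

def backup_candidates(page):
    """List the filenames an attacker would probe for a given page.

    If 'checkout.php.bak' is deployed but unreferenced, a correct
    guess can return the raw source instead of the executed page.
    """
    return [page + suffix for suffix in BACKUP_SUFFIXES]

print(backup_candidates("checkout.php"))
# ['checkout.php.bak', 'checkout.php.old', 'checkout.php.orig', 'checkout.php~']
```

Running the same enumeration against your own site, before an attacker does, is a cheap defensive check.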
Include and Configuration Files
Include and configuration files are used to support a “Write Once, Use Anywhere” mode of programming, wherein the developer writes common logic or settings that are to be used in various pages of the Web application in a file, which is then included in all the pages that need to use it. While this is good from a modular programming perspective, if these files contain sensitive information such as database connection strings and cryptographic routines in clear text (humanly readable form), it poses a serious threat of complete compromise by TMI disclosure.
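One common mitigation, not prescribed by the article but widely used, is to keep the secret out of the file entirely and read it from the environment at runtime. The variable name `DATABASE_URL` below is a hypothetical example.

```python
import os

def database_url():
    """Fetch the connection string from the environment at runtime.

    Because the secret never appears in an include or configuration
    file, a leaked or unreferenced copy of that file reveals nothing.
    """
    url = os.environ.get("DATABASE_URL")  # hypothetical variable name
    if url is None:
        raise RuntimeError("DATABASE_URL is not set")
    return url

# Deployment tooling, not source control, supplies the value:
os.environ["DATABASE_URL"] = "postgres://app:secret@db.internal/orders"
print(database_url())
```

If configuration files must hold secrets, encrypting the values (as the article later recommends) limits the damage of disclosure.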
Error Messages

Error messages are one of the first sources attackers will look at to determine vulnerabilities in a Web application because this requires little effort. For example, if a Web page requires you to specify the quantity of an item ordered in the shopping cart, the attacker can pass in a text string to see how the Web application handles the text input when expecting a numeric value. Without proper handling of the invalid input and the error message that is returned to the user, business logic flaws and underlying architecture of the Web application and back-end systems can be revealed. Clearly TMI.
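The quantity check can be handled so that the detailed failure stays server-side and the user sees only a generic message. A minimal sketch, with an invented message string:

```python
def parse_quantity(raw):
    """Validate a quantity field without leaking internals on failure.

    Returns (quantity, None) on success, or (None, generic_message)
    on any failure; the specific exception would be logged server-side.
    """
    try:
        qty = int(raw)
        if qty < 1:
            raise ValueError("quantity below minimum")
        return qty, None
    except ValueError:
        # Log the specifics internally; tell the user only this:
        return None, "Invalid input. Please check your order and try again."

print(parse_quantity("3"))     # (3, None)
print(parse_quantity("DROP"))  # generic message, no stack trace
```

The same pattern applies to log-in failures: report "Log-in invalid" rather than which part of the credentials was wrong.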
The Risk of TMIS
So, you may ask, should I be concerned about this? On a personal front, Web applications that suffer from TMI Syndrome can result in serious maladies such as identity theft and cyberstalking.
On the corporate front, revelation of TMI in Web application could be disastrous, with results ranging from financial loss, levied fines, imposed regulations, auditors on-site, loss of the corporate brand and, more seriously, loss of customer trust.
It is critically important that precautionary measures are taken to prevent TMI Syndrome. Due diligence and secure coding practices help mitigate it. Some due diligence measures include divulging only need-to-know, publicly available information on personal Web communities and clearing browser cache and history upon closing of the browser window (manually or automatically).
Some secure coding measures include commenting HTML code and client side scripts wisely without revealing too much information, removing any old backup or unreferenced files from production environments, using include-and-configuration files only as needed and when used, encrypting sensitive information, handling error messages generically — instead of reporting “Password does not match” in the case of a failed log-in attempt, simply state “Log-in Invalid.” Do not allow the system to handle the error.
Avoiding Root From Recon
Failing to take necessary due diligence and secure coding control measures can lead to TMI being revealed and could lead to an attacker on a simple recon session gaining root access to your system, your company and maybe even your life.
Mano Paul is a founder and president of Express Certifications. He can be reached at editor (at) certmag (dot) com. | <urn:uuid:ce917102-3805-4946-b97d-aba57c6bd162> | CC-MAIN-2017-04 | http://certmag.com/tmi-syndrome-in-web-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905254 | 1,396 | 2.828125 | 3 |
New to Linux Online Training Courses
- 18th February 2016
- Posted by: Juan van Niekerk
- Category: Technology
What is Linux?
The Linux operating system, which was developed by the Finnish Software Engineer Linus Torvalds, is best known for the fact that it is used by the New York Stock Exchange to power the applications used for trading. But there are a few other factors that have contributed to its popularity. It is an open-source operating system, which means that it may be modified, enhanced and even shared by anyone, since it is publicly accessible.
It also runs very well on just about any system, from outdated PCs and laptops to extremely advanced supercomputers. Other factors contributing to its popularity are its reputation for being exceptionally secure — it features robust encryption — and the fact that users generally find it runs faster than other operating systems. It comes as no surprise, then, that there is a huge and growing interest in Linux online training courses.
Popular Linux distributions
A Linux distribution (distro for short) is created when the Linux kernel, which is the core of Linux, is used to create an operating system. This is done by adding a desktop environment, applications, browsers etc. and ensuring that they all work together as they should. Some users prefer to create these themselves, undertaking Linux online training courses to gain the knowledge to do so, but most find it much more convenient to acquire a distribution that is ready to use.
Some of the more popular distributions are:
• Ubuntu – The most popular distribution which concentrates on desktop and Linux server use.
• Linux Mint – Built using Ubuntu as a base with added codecs (programs that code and decode data) and software.
• Debian – Built using only software that is freely available and completely open-source.
• Fedora – Concentrates strongly on free software.
• Red Hat Enterprise Linux – Based on Fedora, but much more stable and offers support.
Who uses Linux?
A multitude of large companies use Linux to run their operations, many of which may come as a surprise. For example, governments and government institutions that use Linux include the U.S. Department of Defence, the U.S. Navy submarine fleet, the U.S. Federal Aviation Administration, the French Parliament, the Industrial and Commercial Bank of China, Macedonia’s Ministry of Education and Science, the Government of Mexico City and the Czechoslovakian Postal Service.
Many governments have implemented Linux for use in their schools in order to provide their students with proper computer training. These include Pakistani schools and colleges, Russian schools, German universities and schools in Switzerland. The “One Laptop per Child” program, which aims to distribute laptops to children in developing countries, chose Linux as its operating system based on its low cost.
It is also utilised by Google, Panasonic, Amazon, Peugeot, Wikipedia, Tommy Hilfiger and Cisco, to name but a few.
Internationally Recognised Linux Online Training Courses
Linux continues to grow in popularity, as is evident by the number of students who choose to study Linux online training courses such as those offered by ITonlinelearning. Undertaking and completing Linux online training courses will see you receive a Linux certification, which will open many doors when looking to start a career based on the training that you have undergone.
The fact that Linux certification is recognised globally lends credence to Linux's reputation as a reliable and secure operating system from which to run an organisation's operations — something that is highly valued in the business world. Given that Linux-certified professionals have in-depth technical knowledge and advanced technical skills, they are very much sought after: they deliver a high level of productivity, which in turn gives their organisation a competitive advantage.
Which careers follow a Linux Certification?
There are many companies that use Linux as their operating system of choice, giving those that have completed their Linux online training courses a decided edge when applying for positions in these organisations. Some of the careers that you will be able to aspire to after gaining your Linux certification are:
• Linux System Engineer
• Linux System Administrator
• Linux Software Engineer
• Secure Systems Platform Engineer
• Linux Development Engineer
It is easy to see why so many people have a vested interest in Linux, Linux online training courses and everything they encompass. Linux always keeps the user firmly in mind as far as ease of use and freedom of choice are concerned, and there are many versions to choose from, depending on exactly what you will be using it for.
As for Linux certification, the options are just as numerous. Whether you are just starting out or looking forward to a career in system engineering, development or support, completing your Linux online training courses will put you on your way to becoming an in-demand professional with skills that make you an asset to your organisation.
General Security Concepts
Questions derived from the CompTIA SY0-101 – Security+ Self Test Software Practice Test.
Objective: General Security Concepts
SubObjective: Recognize the following attacks and specify the appropriate actions to take to mitigate vulnerability and risk: Dos/DDoS, Back Door, Spoofing, Man in the Middle, Replay, TCP/IP Hijacking, Weak Keys, Mathematical, Birthday, Password Guessing (Brute Force, Dictionary), Software Exploitation
Item Number: SY0-220.127.116.11
Multiple Answer, Multiple Choice
Which attacks are considered common access control attacks? (Choose all that apply.)
A. Spoofing
B. SYN flood
C. Phreaking
D. Dictionary attacks
E. Brute force attacks

Answer:
A. Spoofing
D. Dictionary attacks
E. Brute force attacks
Spoofing, dictionary attacks, and brute force attacks are common access control attacks. Spoofing occurs when an attacker implements a fake program that steals user credentials. A dictionary attack is a method where the attacker attempts to identify user credentials by feeding lists of commonly used words or phrases. A brute force attack is one in which the attacker tries all possible input combinations to gain access to resources.
Phreaking refers to telephone fraud carried out by hackers who specialize in manipulating telephone systems. It is considered a telecommunications and network security attack, not an access control attack.
A SYN flood occurs when a network is flooded with TCP synchronization (SYN) packets. As a result, the system is overloaded and performance suffers; legitimate users are often denied access. A SYN flood is usually considered an application or system attack.
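For illustration, the difference between the two password-guessing strategies explained above can be sketched in a few lines of Python. The target hash and the wordlist here are invented for the example:

```python
import hashlib
import string
from itertools import product

# Pretend this MD5 digest of an unknown password was captured.
target = hashlib.md5(b"cab").hexdigest()

def dictionary_attack(wordlist):
    # Dictionary attack: try a curated list of likely passwords.
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target:
            return word
    return None

def brute_force(max_len=3):
    # Brute force: exhaustively try every lowercase combination.
    for length in range(1, max_len + 1):
        for combo in product(string.ascii_lowercase, repeat=length):
            guess = "".join(combo)
            if hashlib.md5(guess.encode()).hexdigest() == target:
                return guess
    return None

print(dictionary_attack(["password", "letmein", "cab"]))  # cab
print(brute_force())                                      # cab
```

The dictionary attack succeeds only if the password happens to be on the list; the brute force attack always succeeds eventually, but the search space grows exponentially with password length — which is why long, random passwords resist both.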
Wikipedia.org, Spoofing attack, http://en.wikipedia.org/wiki/Spoofing_attack
Wikipedia.org, Dictionary attack, http://en.wikipedia.org/wiki/Dictionary_attack
Wikipedia.org, Brute force attack, http://en.wikipedia.org/wiki/Brute_force_attack | <urn:uuid:6dd8d456-105b-4fad-ab70-9d5b89561064> | CC-MAIN-2017-04 | http://certmag.com/general-security-concepts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00010-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.828172 | 396 | 3.0625 | 3 |
In both the home and the business, security cameras are becoming more and more commonplace as a means of preserving security. However, some malware can turn these devices, and others, into cyber security threats.
In October, 2016 the Mirai malware made headlines for doing just that. Utilized in the attack on Dyn, a company that hosts, manages, and maintains a substantial part of the Internet’s infrastructure, Mirai operates by attacking Internet of Things devices, gradually forming a botnet of zombified smartwatches, printers, and other Internet-connected “smart” devices to fuel a Distributed Denial of Service attack. These attacks essentially function by assaulting their target with so much traffic that the target shuts down. This brought down dozens of sites including Twitter, Netflix, Reddit, CNN, and many more in one of the largest-scale cyber attacks to date.
These DDoS attacks were once primarily powered by the familiar desktop computer, but with the boom in popularity of IoT devices, these devices are becoming a much more popular vehicle for the attacks.
This rise in popularity is due to a few factors. Firstly, as was mentioned above, the use of IoT devices has been spreading both in popularity and in implementation. Zombifying them to be part of a botnet therefore boils down to basic tactics — there's strength in numbers, so it makes sense to utilize as many devices as possible. If there are seven IoT devices in a household that share one laptop, a botnet that recruits the IoT devices will have six more machines at its disposal than one that compromised the laptop alone.
Secondly, there’s the matter of the security built into the devices themselves. How much thought would you think a manufacturer would put into the cyber security of a refrigerator? However, with refrigerators that now have “smart” features through Wi-Fi connectivity, cyber security is something that needs to be considered, and too often isn’t.
As an example that’s tinged with just a bit of irony, a security researcher decided to put the security of a particular IoT device to the test by monitoring a newly-purchased security camera. It took less than two minutes (closer to a minute and a half) for Mirai to infect the camera, despite the researcher’s precautions.
Unfortunately, there's little that a user can do to protect their IoT device from infection. However, the industry is gradually catching on and taking steps toward protecting these devices from external threats, so hopefully the trend of IoT botnets will be relatively short-lived.
How many IoT devices do you own; and, what precautions do you take to keep them from being a hindrance to your network security? Share your story with us in the comments. | <urn:uuid:745db666-53de-434d-99fe-ae27cd7c8c7d> | CC-MAIN-2017-04 | https://nerdsthatcare.com/nerd-alerts/entry/too-many-smart-devices-have-dumbed-down-security-protocols | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00036-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974635 | 573 | 2.578125 | 3 |
What’s Your Learning Personality?
When it comes to learning and study, students and teachers alike are well advised to take note of the impact of personal preferences and proclivities that influence each individual’s learning experience. In fact, this is the stuff of which educational psychology and learning theory are made. Each individual certification candidate must normally study long and hard to earn various credentials that may be of interest—or even required for certain positions. Some knowledge of the concepts and terminology involved, as well as a sense of what your preferred learning styles might be, can be a real godsend when considering or selecting study materials, classes and so forth to help the certification preparation process along.
Without going into too much detail just yet, it suffices for now to say that individuals who stick to their preferred learning styles will usually do better on certification exams than those who overlook or ignore this aspect of the learning process. It’s also broadly observed, both anecdotally and in academic literature, that those who cater to their personal learning styles usually learn more easily, retain learning better and longer, and enjoy the learning experience more than those who don’t.
About Learning Styles
Although types and taxonomies of learning styles come in many shapes and sizes (see, for example, Richard Felder’s “Matters of Style” paper, which documents no less than four different models for learning style), there’s wide agreement among educational psychologists and professional instructors that various students tend to learn in multiple, generally recognizable ways. Because Felder’s work stresses teaching and learning on technical matters (he’s based in a chemical engineering program, rather than in an education or psychology program, as you might have expected), I’ll use his model as an example here, but it’s important to recognize that there are numerous ways to flesh out the notion that different people learn in different ways and respond best to certain kinds of input and activity. This model represents only one set of choices out of a number of well-documented and -recognized sets defined by other educators and researchers.
Felder’s model is formally known as the Felder-Silverman learning style model, because he collaborated with educational psychologist Linda K. Silverman in its research and development. This model classifies students by learning style into five distinct categories (the list that follows is taken from Felder’s work):
- Sensing vs. Intuitive: Sensing learners (concrete, practical, oriented toward facts and procedures) or intuitive learners (conceptual, innovative, oriented toward theories and meanings).
- Visual vs. Verbal: Visual learners (prefer visual representations of presented material—pictures, diagrams, flow charts) or verbal learners (prefer written and spoken explanations).
- Inductive vs. Deductive: Inductive learners (prefer presentations that proceed from the specific to the general) or deductive learners (prefer presentations that go from the general to the specific).
- Active vs. Reflective: Active learners (learn by trying things out, working with others) or reflective learners (learn by thinking things through, working alone).
- Sequential vs. Global: Sequential learners (linear, orderly, learn in small incremental steps) or global learners (holistic, systems thinkers, learn in large leaps).
The guiding principle behind this system is one that recognizes that different people learn best in different ways. Some people are action-, fact- or experience-oriented and work by collecting lots of close-ups to develop a big picture (sensing, inductive, active and sequential learners). Others prefer to understand principles, theories and concepts and start from the big picture to give structure to individual close-ups and details (intuitive, deductive, reflective and global learners). Some people do better with pictures, diagrams and flow charts (visual learners), while others do better with words or speech (verbal learners).
It’s important to understand that each of the Felder-Silverman categories represents a continuum, rather than a pair of discrete tendencies. That is, individuals can be placed on a line that goes between one pole and the other, rather than simply being either one or the other. Thus, for example, a learner might be more visual than verbal, but this is not to say that their preference is completely visual and not at all verbal.
What’s important about this scheme is to understand where one fits into the various categories. This helps individuals decide what kinds of training approaches or materials are likely to work better or worse for them, and to seek specific kinds of input that will help them learn–and review–when preparing for exams. Thus, someone who’s more visually than verbally oriented will probably benefit from using flash cards to prepare for an exam, while someone who’s more verbally than visually oriented might do better by recording a tape of important points to recall or consider to listen to during commute times to and from school or work. The same kinds of observations apply to each of the various categories and help define what’s desirable in classroom or e-learning courses, study guides, exam crams, practice tests and other forms of certification prep materials.
To help individuals understand their learning styles better, Felder has even created a self-scoring assessment tool that reports on four categories in the Felder-Silverman model. Interested individuals can visit and use this tool to help them understand where they fit on the continuum. (See the list of “Resources” for more information.)
Learning Styles in Action
Some of the most interesting material in Felder’s “Matters of Style” paper reports on the results of applying any of the four learning style models. Although methods of application and approaches to structuring learning materials based on those models differ in the 10 specific case studies cited, all show some evidence that students who understand which learning styles work best for them tend to experience improved learning and academic performance as a result. Certainly, certification candidates can benefit from the same kinds of self-analysis and understanding as they learn and test their way into various IT credentials.
Accommodating Learning Styles
Felder also makes some interesting observations about how training materials or courses can best be structured to deliver information that appeals to various learning styles. Though not all of these will apply to each certification candidate, this information is extremely useful because it essentially defines a “wish list” that can help candidates examine and analyze courses and training materials to see if they fit their particular learning styles. It’s a pretty good set of metrics that can help would-be buyers distinguish the “good stuff” from the merely adequate. Here are some suggestions for accommodating specific learning styles:
- Teach theory by presenting real-world cases and problems, then provide theory in the form of concepts, tools and ideas to help learners better understand, interpret and solve those problems. This appeals to sensing, inductive and global learners, while also providing opportunities for intuition, induction and sequence.
- Seek a balance between concepts and concrete information so that intuitive and sensing learners can benefit equally from materials. This means moving from ideas, formulas and theoretical models to actual data, practical applications and problems to solve in a predictable rhythm.
- Use visual means of information delivery and organization (sketches, models, diagrams, flow charts, graphical plots and displays, and demonstration) coupled with verbal discussions and a | <urn:uuid:1b29736a-b81f-4b35-b5f4-2367f8a8e519> | CC-MAIN-2017-04 | http://certmag.com/learning-with-style-whats-your-learning-personality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934109 | 1,523 | 2.921875 | 3 |
Think about your current information technology job – would it even have existed 15 years ago? Even if it did, it’s likely to have either dramatically grown or shrunk, depending on your job description.
That’s according to the Pew Research Center, which analyzed data from the joint federal-state Occupational Employment Statistics program that sorts wage and salary workers into more than 800 different occupations. The program’s most recent estimates, which are based on data collected from November 2009 to May 2012, show that around 3.9 million workers – or 3 percent of the nation’s wage and salaried workforce – work in core IT jobs.
How have IT jobs changed in the past 15 years? According to Pew’s analysis, some IT jobs, namely information security analysts and Web developers, simply didn’t exist, or at least did not fall under those titles. Other jobs, such as database administrators, software developers and computer support specialists, have expanded dramatically, while occupations like computer programming and computer operating have shrunk.
“Since the World Wide Web was conceived 25 years ago, it’s become a major reason why computers, smartphones and other data/communication technologies are integral parts of most everyone’s daily lives,” Pew’s Drew DeSilva writes on Fact Tank. “Among other things, that means many more Americans are employed in developing, maintaining and improving those devices and the communications networks they use.”
How has your view of the IT field, particularly in the federal space, changed over the past 15 years? | <urn:uuid:51b54364-517f-45d8-a287-2b31f23c1a63> | CC-MAIN-2017-04 | http://www.nextgov.com/cio-briefing/wired-workplace/2014/03/how-it-jobs-have-changed-15-years/80659/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941144 | 324 | 2.734375 | 3 |
Massachusetts Institute of Technology researchers backed by funding from government contractor Northrop Grumman Corp. have developed a tool that serves as an undo button to restore computers after they are infected by viruses, a computer scientist leading the effort said.
The so-called intrusion recovery system is one of about a dozen research projects under way at MIT, as well as Purdue and Carnegie Mellon universities, sponsored by the Northrop Grumman Cybersecurity Research Consortium for possible deployment at government agencies. The industry-academia partnership, which was established in late 2009, shared some of its progress with reporters Wednesday.
For its part, Northrop Grumman has contributed a giant database comprised of tens of thousands of viruses and other malicious software that the researchers are using to test their work. One finding: The "Stuxnet" malware that apparently dented Iran's nuclear program by sabotaging the systems that operate reactors "was obviously written by a team of experts as opposed to a single person," said Robert Brammer, the company's information systems chief technology officer.
The worm -- about a million and a half lines of code -- is far larger and more sophisticated than the majority of viruses and reflects tremendous expertise in industrial control systems, he explained.
Computers overtaken by viruses far less vicious than Stuxnet — or perhaps more so in the future — can take days of wasted effort to fix. Often, employees inadvertently install such malware simply by downloading corrupted screen savers or greeting cards off the Internet.
"Many machines are compromised daily with backdoors for attackers to remotely log in to machines," said MIT computer science professor Ronald L. Rivest, adding that another big pest are botnets that hijack computers to distribute spam or inundate websites with useless traffic to halt service.
The goal of the MIT team's undo project is to automate the job of restoring systems after a breach.
"When an intrusion is detected, our system rolls back any files affected by the attack . . . and re-executes any legitimate computations -- of course skipping the attack itself," he said. "This both reverts the attack and preserves changes made by legitimate users in the meantime."
The apparatus works by, first, recording a history of all computations performed by a user and then retracing the actions to pinpoint when and where a botnet or backdoor penetrated the system, he said.
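As a rough illustration of that record-and-replay idea — emphatically a toy sketch under invented names, not MIT's actual system — one can log every write along with its author, then rebuild state by re-executing only the legitimate writes and skipping the attacker's:

```python
class FileStore:
    """Toy file store that logs every write so it can be rolled back."""

    def __init__(self):
        self.files = {}
        self.log = []  # (actor, filename, old_value, new_value)

    def write(self, actor, name, value):
        self.log.append((actor, name, self.files.get(name), value))
        self.files[name] = value

    def recover(self, attacker):
        # Roll back to empty state, then re-execute every legitimate
        # write in order, skipping anything attributed to the attacker.
        history, self.log, self.files = self.log, [], {}
        for actor, name, _old, new in history:
            if actor != attacker:
                self.write(actor, name, new)

fs = FileStore()
fs.write("alice", "report.txt", "draft 1")
fs.write("mallory", "report.txt", "pwned")   # intrusion, detected later
fs.write("alice", "notes.txt", "todo list")
fs.recover("mallory")
print(fs.files)  # {'report.txt': 'draft 1', 'notes.txt': 'todo list'}
```

The hard parts the real research tackles — tracking which later computations were tainted by the attack, and re-executing them safely — are exactly what this sketch glosses over.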
Northrop Grumman officials said some of consortium's initiatives would be ready for the federal government to use within the next two years, but the timeline for agency acquisitions is out of the consortium's control.
One concern that researchers are grappling with is the unintended consequences of their security innovations -- such as filters that oppressive regimes modify to cut off Internet access or track dissidents online.
This is not a new stressor for academics. Alfred Nobel, who invented dynamite, suffered the same cognitive dissonance and went on to found the Nobel Peace Prize, said Eugene H. Spafford, executive director of Purdue's Center for Education and Research in Information Assurance and Security. "He was horrified by some of the uses in warfare," Spafford said.
Purdue addresses the issue of nefarious applications of research by requiring students to take ethics courses. "We have deep discussions about privacy and about the appropriate use of technology and we try to ensure that as we look at how the technology is developed, there is broad discussion both of where the technologies can be used and how the people developing them should ensure that there is some attention paid" to civil liberties, he said.
On Monday, a separate group of researchers assembled by Washington think tank Center for a New American Security, issued cybersecurity recommendations -- one of which is a White House commission on the future of Internet security.
The task force, comprised of government, industry and academic experts, would grapple with how to change the underpinnings of the Internet to make the architecture more secure. Robert Kahn, who co-invented today's Internet infrastructure, devoted a chapter of the roughly 300-page report to the idea of defending systems by assigning and inserting trusted identity codes for every user and device. | <urn:uuid:ccf9f487-b50d-44af-a447-82b0e334fe2d> | CC-MAIN-2017-04 | http://www.nextgov.com/cybersecurity/2011/06/new-recovery-system-restores-virus-infected-computers-could-be-used-by-agencies/49163/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953786 | 836 | 2.65625 | 3 |
The cyber-espionage worm “Flame” first came to the attention of experts at Kaspersky Lab when the International Telecommunication Union — the specialized UN agency responsible for information and communication technologies — approached them for help in identifying an unfamiliar and mysterious piece of malware. Flame is a powerful computer virus that has harvested highly sensitive information all over the Middle East, with Iran the most heavily affected country. How it spreads remains mysterious: the malware is sophisticated enough to compromise systems rapidly without ever coming to the notice of their users, and nobody is yet entirely sure how it propagates. The list of affected countries includes Iran, Israel, Palestine, Sudan, Syria and Lebanon.
No definite creation date can be given for Flame, but experts estimate it is no earlier than 2010. Flame has been described as among the most dangerous and capable malware ever discovered. According to some experts, it operates along the same lines as the Stuxnet virus, which previously sabotaged Iran's nuclear facilities.
Flame’s Way to Attack:
The Flame virus is considered especially damaging because of the unusual breadth of its data-gathering capabilities. Its power is illustrated by its ability to take screenshots and to record audio using the system microphone. It can also reach nearby peripherals and devices with the help of Bluetooth. Other malware, by contrast, typically arrives via email or is used simply to steal money. This sophisticated attack toolkit, with its many dangerous capabilities, poses a serious and complex threat to the security of business computer networks — and even to the secrets of entire countries.
Flame is a large package of program modules, amounting to about 20 MB when fully installed on a system. Part of that bulk comes from the libraries it bundles (zlib, ppmd, libbz2, sqlite3 and others). Many of Flame's modules are written in Lua, a scripting language rarely seen in malware. The reason Lua is uncommon is code size: most current malware is written in compact languages, because small code is easier to hide. Flame's size and its novel features — such as its ability to record audio through the computer's internal microphone — make it one of the biggest threats ever discovered. It can also gather data about discoverable devices near the infected system whenever Bluetooth is on.
Flame also contains many different built-in timers. Their function is to schedule connections to the command-and-control (C&C) servers and to govern how frequently particular data-theft operations are carried out.
NAT444 (CGN/LSN) and What it Breaks
Before we look at what breaks, I should probably make sure that you know what it is that I’m talking about here. If you already know all about traditional NAT and address overloading, skip to the NAT444 section. If you are familiar with that as well, feel free to skip right to the list of what it breaks. In any case; enjoy! =)
When people talk about Carrier Grade NAT (CGN) or Large Scale NAT (LSN) they are talking primarily about NAT444. The NAT part of those acronyms stands for Network Address Translation and NAT is already very common in IPv4 networks, particularly on LAN/WAN gateway devices. The basic idea is to use non-globally-unique (“private”) addresses on the LAN (Local Area Network) and only use globally-unique (“public”) addresses on the WAN (Wide Area Network) / Internet facing interfaces. When deployed in this way, NAT is usually combined with the less-often-spoken-of PAT (Port Address Translation) to allow address overloading on the WAN side.
Whoa – we just went into geek speak, I know. It’s really not too scary though; overloading just refers to a many to one relationship of inside (local/private) addresses to outside (global/public) addresses. This is sometimes called NAPT (Network Address Port Translation) or NAT overloading and it allows you to place a very large number of hosts (all with ‘private’ addresses) behind a NAT device with a much smaller number of globally unique (public) addresses. This is done by using port numbers as unique identifiers. Let’s walk through a quick example to see how this works (as always, click pics for full size).
On the far left of this diagram we have two hosts in our local network (a laptop and a tablet), the Internet is on the right, and there’s a NAT overload (NAPT) router in the middle connecting them. You can see that both of our local network hosts have locally unique (“private”) inside addresses (192.168.1.23 and .42) and the router has a single globally unique (“public”) outside address (203.0.113.57).
When the laptop wants to connect to another host out on the Internet, it sends its traffic to the router (its default gateway). The router takes that traffic and swaps its outside (global/public) address in place of the laptop's inside (local/private) address and sends the traffic on, to the Internet. Now all Internet devices see this traffic as if it came from the router itself, so they send return traffic directly to the router. When the router receives that return traffic, it swaps the laptop's inside address back in place of its own outside address and sends the traffic on, to the laptop. Simple enough.
Things get more interesting when both the laptop and the tablet need to talk to the Internet at the same time though. Now the router needs to know what return traffic goes to the tablet and which traffic is supposed to go to the laptop. This is where those port numbers I mentioned earlier come into play. Each two-way connection between network hosts is called a session (or sometimes a flow) and to keep these sessions from getting mixed up, the router uses a unique port number to identify each one.
Here you can see this in action. In this diagram, both the laptop and the tablet are sending traffic to the Internet (initiating one session each). When the router does the address translation (the swap), it not only changes the address but also adds a unique port number when sending the traffic out to the Internet (:2001 and :2002 in the diagram). In this way, when the router receives return traffic that is addressed to 203.0.113.57:2001 it knows that this should be swapped for 192.168.1.42 (which is sent to the laptop) and that 203.0.113.57:2002 translates to 192.168.1.23 and goes to the tablet. Easy peasy. It’s kind of like having a post office box. All the mail comes to the same physical street address (the outside IP address) but each letter is addressed to a unique box number (the port number), so everybody gets the right mail.
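That translation table can be sketched in a few lines of Python. This is a toy model of the behavior just described, not real router code; the addresses and starting port number are taken from the diagram, and assigning ports sequentially is a simplification of what real NAT devices do:

```python
# Toy model of a NAPT (NAT overload) router's translation table.
class NaptRouter:
    def __init__(self, outside_ip, first_port=2001):
        self.outside_ip = outside_ip
        self.next_port = first_port
        self.sessions = {}  # outside port -> (inside ip, inside port)

    def outbound(self, inside_ip, inside_port, destination):
        """Swap the outside address (plus a unique port) in place of the
        inside address, and remember the session."""
        port = self.next_port
        self.next_port += 1
        self.sessions[port] = (inside_ip, inside_port)
        return (self.outside_ip, port, destination)

    def inbound(self, outside_port):
        """Swap the inside address back in for return traffic, or drop
        the packet (None) if no session matches."""
        return self.sessions.get(outside_port)

router = NaptRouter("203.0.113.57")

# The laptop and the tablet each initiate one session.
print(router.outbound("192.168.1.42", 51000, "198.51.100.10"))
# -> ('203.0.113.57', 2001, '198.51.100.10')
print(router.outbound("192.168.1.23", 51001, "198.51.100.10"))
# -> ('203.0.113.57', 2002, '198.51.100.10')

# Return traffic to :2001 goes back to the laptop, :2002 to the tablet.
print(router.inbound(2001))  # -> ('192.168.1.42', 51000)
print(router.inbound(2002))  # -> ('192.168.1.23', 51001)
```

Note that return traffic to a port with no matching session has nowhere to go; that detail matters a great deal, as we will see shortly.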
Of course, this example is very simple. In real life most networks have more than two hosts, and almost all hosts open up many more than one session. Since each IP address has about 65,000 ports available (65,535, to be exact, per transport protocol), that is the absolute maximum number of concurrent sessions one address can handle. This is plenty for most small networks, and larger ones can typically add a few outside addresses to what is called a NAT pool, so this scales pretty well at the LAN level. There are problems with this approach, however.
Problems with NAT
The primary issue with NAT (we are still talking about NAPT/NAT overload, but just saying NAT is easier; it’s what most folks mean when they say NAT anyway) is that it breaks the end-to-end principle of networking. WTF does that mean? Well, basically, the end-to-end principle states that network communication should happen between the devices at the very edges of the network and that all devices in the middle should just pass that traffic along. This is a primary design goal of the Internet, and for good reason. Without delving into all the gory philosophical details, we can over-simplify again and say that, in practical terms, the end-to-end principle allows network hosts to talk to each other unobstructed.
Aha! Now we start to see the problem with NAT: it introduces an obstruction into the communication path. In order for NAT to work, the device doing the NATing has to mangle each and every packet of data being sent and received. It must dig into those packets’ headers and change the source or destination address (and port number). The primary effect of this is that network hosts behind an overloaded NAT must initiate all communication. That is, inbound communication attempts are impossible because there is no way for a host outside of the local network to know or understand the inside (local/private) address of that host.
If we look at our laptop in the examples above, it has an RFC 1918 address of 192.168.1.42. Because this is a “private” address from a shared pool, it can’t be advertised on the public Internet. This means that other hosts on the Internet have no way to know the laptop’s address (and even if they did, there would be no route to get to it). We need that outside address and port combination set up on the router before any traffic can get in to the laptop, so that the router knows where the traffic is supposed to go. This is fine for web browsing and many other client-side applications, but it is a major problem for server and peer-to-peer applications (VoIP, gaming, webcams, VPNs, BitTorrent, video streaming, chat, etc.) where communication needs to be initiated from the outside in. “Major” meaning they don’t work.
UPnP and NAT Traversal
Because there are simply not enough IPv4 addresses to provide a globally unique (public) address to all of the devices already connected to the Internet, network operators have been forced to deploy NAT in the LAN. As such, tools have been developed to help combat the problems it causes.
One notable hack is enabled through UPnP (Universal Plug and Play) and is called Internet Gateway Device (IGD) Protocol. UPnP-IGD is a protocol that gives the hosts behind a NAT some control over the NAT device. Things like discovering the router’s outside IP address and setting up static port mappings. This allows many of the applications that would otherwise be broken by the NAT to work in spite of it. NAT Port Mapping Protocol (NAT-PMP) offers similar functionality.
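To illustrate the effect (this sketch is not the actual UPnP SOAP exchange or the NAT-PMP packet format; it only shows what a successful port-mapping request does to the NAT device's table), a static mapping pre-populates the translation table so that outside hosts can initiate connections to an inside host:

```python
# Effect of a UPnP-IGD / NAT-PMP style port mapping on a NAT device,
# reduced to its essence: an entry added to the translation table
# at the request of a host behind the NAT.
class Nat:
    def __init__(self, outside_ip):
        self.outside_ip = outside_ip
        self.mappings = {}  # outside port -> (inside ip, inside port)

    def add_port_mapping(self, outside_port, inside_ip, inside_port):
        # Roughly what UPnP-IGD's AddPortMapping action accomplishes.
        self.mappings[outside_port] = (inside_ip, inside_port)

    def inbound(self, outside_port):
        return self.mappings.get(outside_port)  # None -> dropped

nat = Nat("203.0.113.57")
print(nat.inbound(8080))  # -> None: unsolicited inbound traffic is dropped

# A server or P2P app behind the NAT requests a static mapping...
nat.add_port_mapping(8080, "192.168.1.42", 8080)

# ...and inbound connections to 203.0.113.57:8080 now reach the laptop.
print(nat.inbound(8080))  # -> ('192.168.1.42', 8080)
```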
There are several other NAT traversal techniques and protocols that have been developed to get around the problems that NAT causes. Together, these various tools are a standing testament to two things:
- Human ingenuity and our ability to overcome obstacles.
- The brokenness that NAT introduces into inter-networking.
While it pains me to do so, I feel compelled to address the biggest myth about NAT: Network Address Translation is NOT a security technique!
Some folks will tell you that because hosts outside of a NAT’d network cannot initiate communication with hosts inside of it, NAT is providing security. This is a lot like saying that a key snapped off in the door lock provides added security because it makes it harder to get into my car. While it may be true that jamming a door lock will make it harder to get in the car, I would have a hard time recommending that as an anti-theft method. There is a difference between broken access and access security.
Stateful packet inspection is what you want for network access security, not NAT brokenness.
So far we have discussed traditional NAT (mainly NAPT/NAT overload), which is starting to be called NAT44 because it translates one IPv4 address for another (4 to 4). Now let’s explore what I (still) call NAT444, which was also called Carrier Grade NAT (CGN) at one point and is currently called Large Scale NAT (LSN) by all the cool geeks. I like NAT444 because it explains what is really going to occur in most places when LSN is implemented: a triple NAT (IPv4 to IPv4 to IPv4). Sounds like a nightmare already, doesn’t it? We just doubled the NAT, which means doubling the interference with network traffic and further impeding the end-to-end principle. Let’s see what that looks like.
This diagram illustrates the NAT444 model as envisioned in the IETF NAT444 draft and as it will most likely be deployed (unless we can figure out a way around it altogether). In this diagram you should notice two things right off the bat:
- Large Scale NAT for IPv4 will be deployed dual-stacked with global (“public”) IPv6.
- Large Scale NAT adds a second layer of NAT and thus a second area of “private”/inside addressing.
As you can guess from point two, NAT444 exacerbates all the problems that traditional NAT44 introduced. What may not be obvious at first is that LSN/NAT444 aggravates those issues in new ways as well.
In addition to adding a second layer of NAT that creates major problems with law enforcement and abuse logging as well as geolocation and others (all because many distinct customers are behind one provider address), we have to deal with the fact that the second layer of NAT is not going to participate in UPnP, NAT-PMP or other LAN-based NAT traversal protocols. No ISP (Internet Service Provider) in their right mind is going to open up their own routers (or other network devices) to customer control – which is exactly what these protocols require. They are simply not secure and the risk of one customer being able to impact other customers’ service is too great. So where does that leave us?
We are left with a number of applications (and application types) that currently break when Large Scale NAT is introduced. To avoid the doom and gloom feeling that is sure to follow a list of just the broken stuff, let’s start with a list of what isn’t broken by NAT444/LSN:
- Web browsing
- FTP download
- Small files
- BitTorrent and Limewire
- Leeching (download)
- Skype video and voice calls
- Instant messaging
- Facebook and Twitter chat
Not too shabby really, all things considered. That is quite a bit of functionality for being behind a fairly large kludge. If that were the end of the story I wouldn’t have written this article though. So, without further ado, here is the list you’ve been waiting for: what NAT444 breaks.
- FTP download
- Large files
- BitTorrent and Limewire
- Seeding (upload)
- On-line gaming
- Video streaming
- Remote viewing
- VPN & Encryption
- Limited ALG/SIP support
- All custom applications with the IP embedded
- Lack of ALGs
Wow, is it just me or is that list a bit longer? There’s that doom and gloom feeling creeping up.
For our purposes here, “breaks” means that the service was degraded or completely failed. The data behind this list primarily comes from Assessing the Impact of NAT444 on Network Applications, an IETF draft which documents testing that was done by CableLabs, Time Warner Cable, and Rogers Communications on “many popular Internet services using a variety of test scenarios, network topologies, and vendor equipment.” If this kind of thing interests you at all, I highly recommend checking out the full draft; it’s a quick and informative read. I also have a bit of experience dealing with NAT444 myself, but that’s a story for another day.
Port Control Protocol
I would be remiss if I didn’t at least mention Port Control Protocol (PCP) in this discussion. The basic goal of PCP is to create a new, more advanced, technique to control port forwarding on NAT devices so that the brokenness fixed with protocols like UPnP-IGD and NAT-PMP in the LAN can be solved in a NAT444 (or other LSN) environment. I have not, as of yet, dug very deep into the work being done but I do see some challenges:
- New protocols require new equipment (or at least new code).
- Will providers sign on to allow customer application control of their network devices?
- Time. Is there enough? We will hit RIR exhaustion in at least 3 out of 5 regions before the proposed standards are published.
You knew I was going to say it eventually: The only true solution is to deploy IPv6. This is why the NAT444 model includes global IPv6: the only way to get around the brokenness introduced by NAT is to eliminate it. Luckily we have the means to do so: Internet Protocol version 6 (IPv6). So, dual-stack today, with NAT444 if you must, and then do everything you can to get everyone you do business with to do the same.
It has been hard to miss the buzz circulating that the so-called "driverless car" is just around the corner. Futurists have even speculated that one day humans won't be allowed to drive cars—it's just too dangerous, and computers will do a better job.
Well, the driverless car is clearly on its way, and I, for one, am looking forward to creeping along in one. But in the meantime, we're just going to have to make do with "connected cars" for our entertainment.
And connected cars have a lot to bring to the future road party. Tangled traffic at intersections will be on its way out if scientists at the Urban Dynamics Institute (UDI) at the Department of Energy's Oak Ridge National Laboratory get their algorithms correct.
The researchers are trying to figure out how to reduce travel time and fuel consumption by getting vehicles to talk to each other and communicate with traffic controls, such as traffic lights.
The idea is that vehicles would exchange their location, speed, and destination with traffic controls and other road users, and that information would be used to create instructions for drivers. The researchers said this can eliminate stop-and-go driving.
Now, you might think you've heard this before. And, in fact, you have. Many parts of the world have experimented with traffic management through road sensors, cameras, and so on over the years. I can remember hearing about this stuff when I was a teenager.
The difference, though, is that in those days we didn't have the sophisticated algorithms that we have today, and we also didn't have the connected cars. Drivers have had cellphones for a long time, but they haven't been hooked up to the workings and sensors of the automobile.
Cars are about to become even more connected. European regulators will likely force automakers to build cars with emergency call technology by 2018.
European parliamentarians want cars to dial a Euro-wide 112 emergency number automatically in the event of a collision. It will work a bit like GM's OnStar system, which provides location through GPS to an operator.
So all cars are ultimately going to have SIM cards and cellular radios in them anyway.
Future connected cars will tell drivers the "optimal speed, the best lane to drive in, or the best route to take," Andreas Malikopoulos, UDI deputy director, said in an Oak Ridge National Laboratory website article.
The first step is building the decentralized control algorithms, which determine how vehicles will interact.
No central control center
Oak Ridge's system intends to abandon the idea of the central control center—something usually seen in city-wide traffic management. It says that these centers generate too much data, and that it's not realistic to expect all vehicles in a city to communicate that much information all at once.
The system suggests that vehicles communicate among each other locally and across a city.
In addition to communications, the algorithms will also include analysis of traffic patterns and conditions. Large-scale data compiled through simulations in real urban areas will be used to make predictions, like when school zones are busiest—in the morning and middle of the afternoon, for example. That kind of information will be pumped into the system.
And unlike the "driverless car" —of which this kind of algorithmic traffic management is a precursor—in this case individualized instructions will be displayed for the driver to act upon.
Which begs the question—what happens if the drivers don't drive how the algorithms tell them to? Chaos, right? Well, the UDI scientists have thought of that, because one element that they're looking at including is digital ticketing.
This article is published as part of the IDG Contributor Network.
Apache Hadoop is 100% open source, and pioneered a fundamentally new way of storing and processing data. Instead of relying on expensive, proprietary hardware and different systems to store and process data, Hadoop enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard servers that both store and process the data, and can scale without limits. With Hadoop, no data is too big. And in today’s hyper-connected world where more and more data is being created every day, Hadoop’s breakthrough advantages mean that businesses and organizations can now find value in data that was recently considered useless.
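As a concrete (if well-worn) illustration of the processing model, here is word count in the Hadoop Streaming style: a mapper emits key/value pairs and a reducer aggregates them per key. Hadoop would run many copies of each across the cluster and handle the sort between phases; this sketch simulates both phases locally in plain Python:

```python
from itertools import groupby

def mapper(lines):
    # Map phase: emit (word, 1) for every word seen.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Hadoop sorts mapper output by key before the reduce phase;
    # we do the same here, then sum the counts per word.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

data = ["Hadoop stores data", "Hadoop processes data"]
print(dict(reducer(mapper(data))))
# -> {'data': 2, 'hadoop': 2, 'processes': 1, 'stores': 1}
```

Because the mapper and reducer only see a stream of records, the same two functions scale from one machine to thousands; that independence between records is what lets Hadoop parallelize across inexpensive servers.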
Hadoop can handle all types of data from disparate systems: structured, unstructured, log files, pictures, audio files, communications records, email– just about anything you can think of, regardless of its native format. Even when different types of data have been stored in unrelated systems, you can dump it all into your Hadoop cluster with no prior need for a schema. In other words, you don’t need to know how you intend to query your data before you store it; Hadoop lets you decide later and over time can reveal questions you never even thought to ask.
By making all of your data usable, not just what’s in your databases, Hadoop lets you see relationships that were hidden before and reveal answers that have always been just out of reach. You can start making more decisions based on hard data instead of hunches and look at complete data sets, not just samples.
There is something thrilling about the very term “supernova factory” in that it invokes startling mental images culled from science fiction and our own imaginations. However, the real factory in question here is at the heart of an international research collaboration, although not one in the business of mass-producing supernovas in some kind of cosmic warehouse. It is instead examining the nature of dark energy to understand a “simple” concept — the expanding universe.
The universe’s expansion is accelerating due to what physicists have dubbed dark energy, a finding that was made possible by comparing the relative brightness of “close” supernovae to the brightness of those much farther away (a difference of several billion years in light travel time). The comparison is not possible without understanding the underlying physics of the nearest supernovae, which is where the Nearby Supernova Factory (SNfactory) enters the picture. The project relies on a complicated “pipeline of serial processes that execute various image processing algorithms on approximately 10Tbs of data” to step closer to understanding dark energy and its role in the universe’s constant expansion.
While all of this is interesting enough on its own, the project has a particularly unique HPC and cloud slant due to the efforts of Berkeley researcher Lavanya Ramakrishnan and her team. They have been able to shed light on how a public cloud like EC2 can (and cannot) be used for some scientific computing applications by bringing SNfactory’s pipeline to the cloud. During a recent chat with Ramakrishnan, it became clear that while there are attractive features of clouds, there are some hurdles that relate to just the issues that most concern scientific users, including performance, reliability, as well as ease of use and configuration.
In her research that spans beyond this particular project’s scope, Lavanya Ramakrishnan focuses directly on topics related to finding ways to handle scientific workloads that are reliant on high performance and distributed systems. Accordingly, she has looked extensively at the possibilities of deploying clouds to handle scientific workloads, as well as considering grid technologies and their relevant role in the area.
The SNfactory cloud computing evaluation project in question is important as it provides not only a case study of using HPC in a public cloud, but also because of the specificity of design tests to maximize performance outside of the physical infrastructure. The paper presenting their findings, entitled “Seeking Supernovae in the Clouds: A Performance Study,” won the top honor at the First Workshop on Scientific Cloud Computing this summer. This is not a surprise as the paper provides an in-depth examination of the benefits and drawbacks of public clouds in specific context along with detailed descriptions of the various configurations that produced their conclusions.
Getting Scientific Computing Off the Ground
Until just recently, the Supernova Factory’s complex pipeline was fed into a local cluster. With the oversight and alterations on the part of Berkeley researchers to refine the environment from the application level up, the pipeline was fed into Amazon’s EC2 after significant experimentation, all of which is discussed at length in the paper. These experimental designs were for the specific purpose of determining what options were available on a design level to suit application data placement and, more generally, to provide a distinct view of the performance results in a virtualized cluster environment.
Overall, the authors concluded that “cloud computing offers many features that make it an attractive alternative. The ability to completely control the software environment in a cloud is appealing when dealing with a community-developed science pipeline with many unique library and platform requirements.” While this is a bright statement about the use of the cloud for a project like this, according to Lavanya Ramakrishnan, who spoke with HPC in the Cloud recently about the results of the Berkeley team’s work, the cloud, at least as offered by EC2 is not an out of the box solution for scientific computing users and there were a number of challenges along the way that present some meaty discussion bits for those who debate that the cloud is not ready for HPC.
Ramakrishnan is not the first scientific HPC user to comment on the complexity that is involved when first preparing to send applications into the cloud and setting up the environment. She noted that while it was difficult to determine how long it took them to get started since their purpose was to test multiple designs and models, she advised that it was not a quick or easy process. Before even getting to the point where one would be ready to make the leap, there would have to be exhaustive research about how to best tailor their environment to the specific applications.
In addition to being a complex task to undertake, once the ideal environment is created and the applications and virtual machines have been synched into what might appear to be the best configuration, there are also some troubles with the predictable enemies of HPC and cloud — performance and reliability. The authors of the study encountered a number of failures throughout their experiments with EC2 that would not have been matters of concern with the traditional environment. As Ramakrishnan stated, “A lot of these [scientific] applications have not been designed with these commodity clusters in mind so the reliability issue, which wasn’t a major problem before, is now important.”
The Big Picture for Scientific Computing in the Public Cloud
The full paper provides deep specifics for those looking to design their cloud environment for scientific computing that can be of immense value and save a great deal of time and frustration. It is critical reading for anyone looking to use the cloud for similar (although chances are, on much smaller-scale) workloads.
What is important here in the scientific computing sense bears repeating. There are many questions about the suitability of public clouds for HPC-type applications and while there are many favorable experiences that bode well for the future of this area, some of the barriers and problems need to be addressed in a major way before the clouds will be a paradigm shift for scientific computing.
Ramakrishnan, who as it was noted earlier, spends much of her research time investigating alternatives to traditional HPC, sees how cloud computing is a promising technology in theory for researchers. For instance, as she noted, in physical environments “applications suffer because the people running the machines need to upgrade their packages and software to run in these environments. Sometimes there are compatibility issues and this gets even more complicated when they have collaborations across groups because everyone needs to upgrade to a different version. Software maintenance becomes a big challenge. Cloud has therefore become attractive to a lot of scientific computing users, including the Supernova Factory — cloud lets them maintain this entire stack they need and this alone is very attractive.”
Based on her experiences using a number of different configurations and models for cloud in scientific computing, Ramakrishnan indicated that while there is a class of scientific applications that are well-suited to the cloud, there are indeed many challenges. Furthermore, the important point is that researchers understand that this solution, even if the applications fit well with clouds, cannot be undertaken lightly. A great deal of preparation is required, especially if one is operating on the large scale, before making the leap into the cloud.
Scientific computing and cloud computing are not at odds; they live on the same planet but there is a vast ocean that separates the two at this point — at least if we are talking about public clouds. Performance and reliability — two keys to successfully running applications on bare metal systems — are in question in the public cloud and until ideal configurations can be presented across a wide range of application types more research like that being performed by Ramakrishnan and her colleagues is critical.
Many of the points that Ramakrishnan made about the suitability of the public cloud at large for this kind of workload correspond with what Kathy Yelick discussed in an overview of current progress at the Magellan Testbed, another research endeavor out of Berkeley. The consensus is that there is promise — but only for certain types of applications — at least until more development on the application and cloud levels takes place.
Still, Amazon insists with great ferocity that the future of scientific computing lies in their cloud offering, and this is echoed by Microsoft and others with Azure and EC2-like services. Until the scientific computing community fully experiments with the public cloud to determine how best to configure the environment for their applications, we will probably hear a great deal more conflicting information about the suitability of the public clouds for large-scale scientific workloads.
Under the category of “Grand Challenge” applications, perhaps none is grander than simulation of the human brain. Reflecting the complexity and scale of the brain with current computer technology is truly a daunting task. But a group of researchers and computer scientists at a number of UK universities are attempting to do just that under a project named SpiNNaker.
SpiNNaker, which stands for Spiking Neural Network architecture, aims to map the brain’s functions for the purpose of helping neuroscientists, psychologists and doctors understand brain injuries, diseases and other neurological conditions. The project is being run out of a group at University of Manchester, which designed the system architecture, and is being funded by a £5m grant from the Engineering and Physical Sciences Research Council (EPSRC). Other elements of the SpiNNaker system are being developed at the universities of Southampton, Cambridge and Sheffield.
For the casual observer, constructing a facsimile of the most complex organ in the human body from digital technology may seem like a natural fit for computers. The view of the brain as a biological processor (and the processor as a digital brain) is well entrenched in popular culture. But the designs are fundamentally different.
Operationally, computers are precise, extremely fast and deterministic; brains are imprecise, slow, and non-deterministic. And, of course, the underlying architectures are completely different. Computers rely on digital electronics, while the brain employs a complex mix of biomolecular structures and processes.
The SpiNNaker design meets the architecture of the brain halfway by going for lots of simple, low-power computing units, in this case, ARM968 processors. The initial Manchester-designed SpiNNaker multi-processor is a custom SoC with 18 of these processors integrated on-chip. (The original spec called for 20 processors per chip.) The multi-processor also incorporates a local bus, called Network-on-Chip or NoC, which links up the individual processors and off-chip memory. Each SpiNNaker node is reported to draw less than one watt of power, while delivering the computational throughput of a typical PC.
The design is purpose-built to simulate the action of spiking neurons. Spiking in this context refers to a neuron being stimulated above a certain threshold level, generating an event that can be propagated across a neural net. But instead of using neurotransmitters to do this, the computer is just passing data packets around.
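As a rough sketch of the idea (SpiNNaker supports several neuron models; this leaky integrate-and-fire loop with made-up constants is just one common abstraction, not SpiNNaker's actual implementation):

```python
def simulate(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate stimulus, leak a little
    each step, and emit a spike event when the threshold is crossed."""
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus
        if potential >= threshold:   # stimulated above the threshold...
            spikes.append(t)         # ...generate a spike event
            potential = 0.0          # and reset
    return spikes

# Sustained input accumulates until a spike fires at step 3;
# weaker input would simply leak away without ever spiking.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.3]))  # -> [3]
```

On SpiNNaker, the spike itself is just a small data packet routed to whichever neurons are connected downstream.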
To be truly useful, the spiking needs to happen in real-time. Fortunately, this is where computer technology shines. Electrical communication is actually more efficient than the biochemical version, so nothing exotic needs to be done in the hardware to make all this magical neural spiking a virtual reality.
And that may happen soon. The design phase of the project is coming to a close and the SpiNNaker team is starting to gather the pieces together. According to a news release this week, SpiNNaker chips were delivered in June (from Taiwan — presumable TSMC), and have passed their functionality tests. The plan is to build a 50,000-node machine with up to one million ARM processors.
While that seems like a lot, researchers estimate that it will only be enough to represent about one percent of the real deal. A human brain contains around 100 billion neurons with on the order of 1,000 trillion connections between them, and a single ARM processor in the SpiNNaker chip can only handle 1,000 neurons. The good news is that one percent may be enough to answer a lot of questions about the functional operation of the brain.
Even at one percent, the scale of the machine is probably the trickiest part of the project. With so many processors in the mix, there are bound to be individual failures at fairly regular intervals. To deal with the inevitable, the designers made SpiNNaker fault tolerant at multiple levels. For example, each of the ARM processors can be disabled if they fail at start-up and a chip can remain functional even if “several processors fail.” If an entire chip goes south, data can be rerouted to neighboring chips thanks to redundant inter-chip links.
The other challenge to scaling out is power, but here is where the ARM architecture pays dividends. The initial system of 50,000 nodes is estimated to draw just 23 kW to 36 kW of power. By supercomputing standards, that’s just a pittance. Of course, judged against the 20 watt version in our heads, SpiNNaker has a ways to go.
The power profile suggests that if there are no inherent scaling limitations in the hardware or software, the design could conceivably be used to build a machine that would support a “complete” human brain simulation for just a few megawatts. With improved process technology, that could easily slip into the sub-megawatt level.
For all that, SpiNNaker isn’t designed to simulate higher level cognitive features — the most interesting function of the brain. Inevitably that will require more complex hardware and software. So even if someone builds a super-sized SpiNNaker, it won’t come close to the functionality of the 100 percent organic version anytime soon. | <urn:uuid:09097dd0-ca79-4215-9936-e4dbe762c56e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/07/07/researchers_spin_up_supercomputer_for_brain_simulation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932898 | 1,077 | 3.5 | 4 |
Hardersen S., Toni I., Cornacchia P. (Cent. National per lo Studio e la Conservazione della Biodiversita Forestale Bosco Fontana di Verona); Curletti G. (Museo Civico di Storia Naturale, Entomologia); and 2 more authors.
Bulletin of Insectology | Year: 2012
The highly fragmented floodplain forest remnants of the river Po (Italy) are protected at the European level, but surprisingly little is known about their ecology and in particular their invertebrate fauna. The present work investigates 11 selected beetle families sampled in the reserve of Isola Boscone (Lombardy Region, Mantua Province), which is situated inside the embankments of the Po. Twelve window traps were attached to dead trees, either in open and sun-exposed situations (n = 6) or in the understorey of small forest patches (n = 6), and were active from 16 June to 3 November 2009. The following 11 beetle families were studied: Histeridae, Lucanidae, Scarabaeidae, Lissomidae, Elateridae, Buprestidae, Cleridae, Aderidae, Tenebrionidae, Cerambycidae, Anthribidae. A total of 495 individuals belonging to 53 species were collected, including five species of particular faunistic interest. The species Aegosoma scabricorne (Scopoli) and Dissoleucas niveirostris (F.) were associated with the forest habitat, while Chlorophorus varius (Muller), Dorcus parallelipipedus (L.) and Nalassus dryadophilus (Mulsant) were associated with the open habitat. Analyses of the abundance data revealed that the traps from the two habitat types differed in their community composition and that more species were caught in the open habitat. However, individual-based rarefaction curves showed that species richness did not differ when the number of species was plotted in relation to the number of individuals caught. This finding shows that richness estimates need to be interpreted with caution. The study also highlights that monitoring of beetles in floodplain forest remnants is complicated by recurrent floods. 
Hear Ye, Hear Ye. I’m about to reveal one of the great mysteries of alerting. I’m going to disclose how geo-targeting of Wireless Emergency Alerts (WEA) really works. (OK, I embellish a bit.)
Even with success stories of lives saved through weather alerts and children found through AMBER Alerts, there still seems to be hesitation about WEA among local practitioners. We think some local public safety officials are reluctant to use WEA for “imminent threats” because they don’t understand how it delivers alerts to targeted geographic areas. No wonder. As it turns out, it's complicated and not an exact science.
At one point, I thought the cell carriers didn’t want us to understand it so they could protect proprietary information. I was wrong. In fact, there’s an organization made up of the carriers and others in the information and communications technology sector that recently published an explanation of how WEA geo-targeting works. In a 56-page document found here, the Alliance for Telecommunications Industry Solutions (ATIS) lays it out thoroughly.
At the heart is the Common Alerting Protocol (CAP). The standard means everyone is essentially speaking the same language, but it doesn’t mean they’re doing the same thing with what they understand.
First, carriers are “broadcasting” alerts. They are sending them into the airwaves to be received by properly tuned cell devices. Stating the obvious here, but airwaves aren’t always predictable; they are influenced by topography and atmospheric conditions.
Second, even though alert originators use a relatively precise method to designate where they would like to deliver alerts, the carriers’ distribution systems aren't quite so precise.
Here’s how it works: The carriers refer to an alert originator’s desired area as an “alert area” and the area they actually broadcast the alerts as the, well, “broadcast area.” (The simplicity stops at the names.) An alert area is determined by the same type of GIS polygon that public safety practitioners commonly use for other types of notifications, analysis and dispatching. It’s pretty straightforward.
The broadcast area, however, is more complicated. It’s based on a labyrinth of cell towers and a honeycomb of tower signal “sectors.” Then, other factors enter the picture like RF engineering, traffic load distribution, and even local municipal policy. The area covered by the sectors may not, probably won’t, correspond precisely to the alert area polygon. This means any WEA alert will likely overshoot or undershoot the desired alert area. Overshooting occurs when towers are within the polygon, but their coverage sectors span beyond. Undershooting occurs when the towers within the polygon do not cover the full area of the polygon.
The figure below illustrates an example of overshooting and undershooting.
The figure was reproduced from ATIS-0700027, Feasibility Study for WEA Cell Broadcast Geo-Targeting, with permission from ATIS. A copy of the full report can be obtained here.
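To make the overshoot/undershoot idea concrete, here is a toy simulation. The geometry is entirely hypothetical: the alert area is a rectangle and each tower's coverage is a circle, which is far simpler than real RF sectors. It grid-samples a region and tallies where broadcast coverage and the alert polygon disagree.

```python
import math

# Toy model of WEA geo-targeting (illustrative only).
ALERT = (0.0, 0.0, 10.0, 10.0)               # alert polygon: xmin, ymin, xmax, ymax
TOWERS = [(3.0, 3.0, 4.0), (8.0, 7.0, 4.0)]  # towers as (x, y, coverage radius)

def in_alert(x, y):
    xmin, ymin, xmax, ymax = ALERT
    return xmin <= x <= xmax and ymin <= y <= ymax

def in_broadcast(x, y):
    return any(math.hypot(x - tx, y - ty) <= r for tx, ty, r in TOWERS)

def survey(step=0.25):
    """Grid-sample a bounding box and tally the coverage cases."""
    overshoot = undershoot = matched = 0
    n = int(18 / step)
    for i in range(n):
        for j in range(n):
            x, y = -4 + i * step, -4 + j * step
            alert, bcast = in_alert(x, y), in_broadcast(x, y)
            if bcast and not alert:
                overshoot += 1   # alerted outside the polygon
            elif alert and not bcast:
                undershoot += 1  # inside the polygon but never reached
            elif alert and bcast:
                matched += 1
    return overshoot, undershoot, matched

over, under, match = survey()
print(f"overshoot cells: {over}, undershoot cells: {under}, matched: {match}")
```

Even in this idealized setup, both overshoot and undershoot show up at once, which is exactly the trade-off alert originators face.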
Adding to the complexity is the fact that the factors that determine the broadcast area are not the same for all carriers. So, even using the same polygon from an alert originator, the broadcast area for different carriers will likely look different.
WEA will probably be tweaked. In fact, that’s the purpose of the ATIS report and the FCC’s solicitation of recommendations from a working group called CSRIC (CIZZ-rick), the Communications Security, Reliability and Interoperability Council. Even with WEA tweaking, overshooting and undershooting will remain a reality. (Full disclosure, I’m a member of a CSRIC working group, focused on complementary alerting strategies to WEA and EAS.)
So as a practitioner planning to issue an alert, you’re faced with deciding whether to risk a bit of undershooting/overshooting and delivery variance among carriers. Or do you risk missing an opportunity to alert folks through one of the most accessible means of alerting we’ve seen so far, an individual’s personal device, even if you under-alert or over-alert a bit?
As you can probably tell through wording of the question, and if you’ve read any of my other stuff, you know my opinion. (See short video here.) When there’s a serious event, you have to accept realities that no single alerting channel is perfect. It takes a combination of alerting channels, working together, to be effective. In today’s world, it’s difficult to predict how to best reach people when there are so many communication options available. And, we know from social science study that people need to get alerts from at least two sources before they will actually take action. (Sad, but true.)
I think many public safety officials are missing an opportunity to use WEA for imminent threats. I hope it’s because of a lack of understanding of how it works, not concerns about an ability to precisely geo-target. WEA may not represent exact geo-targeting, but as the diagrams and ATIS report show, it gets close. The risk/return argument is strong.
Last October I wrote a number of posts discussing the basics of wireless LANs, along with the associated frequency spectrums, security standards, and security vulnerabilities. The wireless networking process, defined through the IEEE 802.11 set of standards, has become common in private homes and has a significant and growing role in corporate and business settings. However, most of the existing standards are very rapidly being seen as inadequate as applications and business plans become more complex and require more bandwidth.
For instance, streaming voice and video, whether it’s a feature-length movie downloaded to your flat screen at home or a videoconferencing session at your office, have become an increasingly less-than-satisfactory proposition with the existing IEEE 802.11a, b, and g standards. One of the significant issues is the delay of packets passing through a congested wireless network. The worse the congestion, the more significant the delay. Many applications, such as Voice over IP (VoIP), are very negatively affected by delay in a network and can deliver choppy or garbled audio.
However, the good news is that a new standard, IEEE 802.11n, should eventually provide significantly higher speeds and range. To better appreciate where we are headed with IEEE 802.11n, it is good to relook at where we are now as a starting point.
802.11 is a set of IEEE standards that govern wireless networking transmission methods. They are commonly used today in their 802.11a, 802.11b, and 802.11g versions to provide wireless connectivity in the home, office, and many commercial establishments such as coffee shops and bookstores.
- 802.11a – The 802.11a standard operates in the 5 GHz band and uses a 52-subcarrier orthogonal frequency-division multiplexing (OFDM) scheme with a maximum raw data rate of 54 Mbit/s. The advantages of using OFDM include reduced multipath effects in reception and increased spectral efficiency. However, in the real world, 802.11a usually yields a more realistic net achievable throughput in the mid-20 Mbit/s range. Using the 5 GHz band gives 802.11a a significant advantage over other standards because of low utilization of that band of frequencies. However, this high carrier frequency also brings a slight disadvantage. The effective overall range of an 802.11a signal is slightly less than that of 802.11b/g. And 802.11a signals cannot penetrate as far as those for 802.11b because they are absorbed more readily by walls and other solid objects in their path. In addition, 802.11a products are very expensive to manufacture because of the difficulty of manufacturing a solid-state device that can provide a useful power output in the 5 GHz range.
- 802.11b – This standard has a maximum raw data rate of 11 Mbit/s and uses the same Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) media access method. Due to the CSMA/CA protocol overhead, in practice the maximum 802.11b throughput that an application can achieve is about 5.9 Mbit/s using Transmission Control Protocol (TCP) and 7.1 Mbit/s using User Datagram Protocol (UDP). 802.11b products appeared on the market in mid-1999 and are a direct extension of the Direct-sequence spread spectrum (DSSS) modulation technique. The dramatic increase in throughput of 802.11b, along with simultaneous substantial price reductions, led to the rapid acceptance of 802.11b as the definitive wireless LAN technology. 802.11b devices suffer interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include: microwave ovens, Bluetooth devices, baby monitors, cordless telephones and, very importantly, your children’s wireless game controllers. Interference issues and user density problems within the 2.4 GHz band have become a major concern and frustration for users.
- 802.11g – This standard is the third modulation standard for wireless LANs. Like 802.11b, this standard operates in the 2.4 GHz band. However, it operates at a maximum raw data rate of 54 Mbit/s. This equates to about 19 Mbit/s net throughput. This speed is identical to an 802.11a core, except for some additional legacy overhead for backward compatibility. The 802.11g hardware is fully backwards compatible with 802.11b hardware. In an 802.11g network the presence of a legacy 802.11b participant will significantly reduce the speed of the overall 802.11g network. The modulation scheme used in 802.11g is also the orthogonal frequency-division multiplexing (OFDM) process, which is copied from 802.11a. It offers data rates of 6, 9, 12, 18, 24, 36, 48, and 54 Mbit/s. And it reverts to CCK, like the 802.11b standard, for 5.5 and 11 Mbit/s, and DBPSK/DQPSK+DSSS for 1 and 2 Mbit/s. Even though 802.11g operates in the same frequency band as 802.11b, it can achieve higher data rates because of its 802.11a heritage.
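The gap between raw and net rates quoted above can be summarized in a few lines. The figures are the article's own; real-world throughput varies with distance, interference, and the mix of clients on the network.

```python
# Raw vs. practical throughput for the 802.11 figures quoted above.
standards = {
    # name: (raw Mbit/s, typical net Mbit/s)
    "802.11a": (54, 25),    # "mid-20s" realistic throughput
    "802.11b": (11, 5.9),   # TCP throughput under CSMA/CA overhead
    "802.11g": (54, 19),
}

for name, (raw, net) in standards.items():
    efficiency = net / raw * 100
    print(f"{name}: {raw} Mbit/s raw -> ~{net} Mbit/s net ({efficiency:.0f}% efficient)")
```

Note that no variant delivers much more than half its raw rate to applications; protocol overhead is a constant of the family.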
In my next post I will examine the specifics of the new IEEE 802.11n standard and describe how it uses the new Multiple Input Multiple Output (MIMO) process.
Author: David Stahl
The evolution of DWDM systems stemmed from a need to increase the capacity of a single fiber, and thus the entire network, in an inexpensive way. While the idea behind using a single fiber to carry multiple channels seems simple, in reality it is a complex endeavour. By converting incoming optical signals into the precise ITU-standard wavelengths to be multiplexed, transponders are currently a key determinant of the openness of DWDM systems. This article mainly introduces the transponder based DWDM system.
Generally, a transponder based DWDM system includes terminal multiplexer, terminal demultiplexer, intermediate line repeater or Optical Add-Drop Multiplexer (OADM), and optical amplifier or optical supervisory channel.
Terminal multiplexer consists of transponders and optical multiplexers (DWDM MUX). For each signal in the fiber, there is a corresponding transponder. The transponder takes the signal and transmits it in the C-Band through the use of a laser. The optical multiplexer transmits these signals in the C-Band through one fiber. In the course of ten years, DWDM system capacity grew from 4 signals to 128 signals. (Here is a picture of a transponder used in DWDM systems.)
DWDM systems also employ terminal demultiplexers which consist of transponders and optical demultiplexers. In their first incarnations, terminal demultiplexers were passive systems. As the complexity of DWDM systems increased, the need for an active approach did, too. Terminal demultiplexers take the signal, which is composed of several wavelengths by this point, and break it down to its constituent signals. These signals are then sent through individual fibers to their destinations. In active terminal demultiplexers, the signals first go through an output transponder before they are transmitted, which can also apply an error correction procedure. These transponders can also be placed alongside the input transponders.
Note: For a bi-directional transponder based DWDM system, the terminals contain both multiplexers and demultiplexers.
Intermediate line repeaters are placed between 80 and 100 km apart along the path of the fiber. If the optical signal has travelled more than 140 km before arriving at its destination, an OADM integrated with optical amplifiers (aka an intermediate optical terminal) is placed. It serves not only to amplify the signal, but also as a diagnostic point. If locations further down the path of the fiber are having issues with the signal, these sites can be used to determine if the fiber has been damaged or otherwise impaired.
To counteract the losses incurred by the signal, optical amplifiers are needed. For example, an Erbium-Doped Fiber Amplifier (EDFA) is used to amplify the optical signal in the intermediate line repeater. An EDFA can also be placed in the terminal multiplexer as a pre-amplifier to amplify the signal before it is transmitted. (An EDFA is shown in the figure below.)
When an EDFA cannot be used, an optical supervisory channel is. This occurs when the signal occurs outside of the C-Band. Here is a table that shows the wavelength ranges for each optical wavelength band.
|Optical Bands||Wavelength Range (nm)|
|O-Band||1260 to 1360|
|E-Band||1360 to 1460|
|S-Band||1460 to 1530|
|C-Band||1530 to 1565|
|L-Band||1565 to 1625|
|U-Band||1625 to 1675|
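The C-band is where DWDM transponders do their work, tuning each client signal to a channel on the ITU grid. As a sketch (assuming the common ITU-T G.694.1 100 GHz grid anchored at 193.1 THz; channel counts and spacing vary between deployments), the channel wavelengths can be computed and checked against the C-band range from the table:

```python
# Channels on the ITU-T G.694.1 100 GHz grid, anchored at 193.1 THz,
# converted to wavelength to confirm they fall inside the C-band.
C = 299_792_458  # speed of light, m/s

def itu_channel_nm(n, spacing_ghz=100):
    """Wavelength (nm) of ITU grid channel n relative to the 193.1 THz anchor."""
    freq_thz = 193.1 + n * spacing_ghz / 1000
    return C / (freq_thz * 1e12) * 1e9

channels = [itu_channel_nm(n) for n in range(-15, 25)]  # a 40-channel span
print(f"{min(channels):.2f} nm .. {max(channels):.2f} nm")
```

The anchor channel at 193.1 THz lands at roughly 1552.52 nm, comfortably inside the 1530 to 1565 nm C-band that EDFAs amplify.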
Signal regeneration was not initially implemented in transponders. At first, these transponders were only used to convert the wavelengths of incoming external signals into wavelengths that worked with the DWDM systems: namely, those in the C-Band. This conversion also serves to stabilize the frequencies and amplify the power of these signals into something compatible with the EDFA in the DWDM system. The sophistication of the signal regeneration components in transponders grew as they progressed from 1R (re-amplification) through 2R (re-amplification and re-shaping) to 3R (re-amplification, re-shaping, and re-timing).
Within the DWDM system a transponder converts the client optical signal back to an electrical signal and performs the 3R functions (the figure below shows the 3R functions of transponders in the terminal). This electrical signal is then used to drive the WDM laser. Each transponder within the system converts its client’s signal to a slightly different wavelength. The wavelengths from all of the transponders in the system are then optically multiplexed. In the receive direction of the DWDM system, the reverse process takes place. Individual wavelengths are filtered from the multiplexed fiber and fed to individual transponders, which convert the signal to electrical and drive a standard interface to the client. (Future designs include passive interfaces, which accept the ITU-compliant light directly from an attached switch or router with an optical interface.)
The figure above shows the end-to-end operation of a unidirectional DWDM system using transponders: client signals are converted to ITU-grid wavelengths, multiplexed onto a single fiber, amplified along the path, and then demultiplexed and converted back to the client interfaces at the far end.
Transponders in DWDM systems facilitate a wide variety of applications, some of which include broadcasters and cable operations, data networks, and satellite and wireless communications. Transponder based DWDM systems can be implemented as a replacement for any existing WDM systems if the advantage of doing so justifies the cost. If a company has already invested in laying down fiber, that initial investment can be protected by using such a DWDM system. Using this system multiplies the capacity of the existing fiber by up to 10 or more times. This type of system is necessary for internet providers because of the rapid expansion of internet subscribers. If DWDM systems did not exist, the only way for these companies to meet the demand of internet users would be to lay new fiber. It is much more cost-effective for them to implement DWDM systems and thus alleviate the bandwidth concern. DWDMs also allow for more flexibility in the design of networks.
DWDM systems are continually being improved. Research is advancing the technology to the point where 800 wavelengths on a single fiber could be feasible. The amount of data that modern applications require continues to grow. Where bit rates of a few Gbps were once sufficient, modern consumer and corporate needs necessitate Tbps. This type of growth could not have been anticipated when the first WDM systems were introduced, but the transponder based DWDM systems are capable of meeting modern demands.
VoDSL takes the existing copper infrastructure to the next level.
First there was voice over copper. Then there was data over copper in the form of DSL. Now, finally, there's voice and data over copper: It's called Voice over DSL (VoDSL).
Simply put, VoDSL is the packetizing of voice via a DSL line, and, thus, a copper wire. The packets can be either of the IP or TCP/IP variety (VoIP) or, more commonly, the 53-byte ATM cells (VoATM).
Why the hubbub? It's the all-important copper lines, which are already deployed just about everywhere, especially to the small- to midsize-business segment. A single copper line can be leveraged to deploy data and multiple voice circuits simultaneously. VoDSL is less compelling in the residential market where users may have only one or two phone lines, which doesn't justify the cost of the equipment. And typical ADSL lines can satisfy customers with only one phone line requirement.
VoDSL allows deployment of converged services at significantly less cost than a channelized T1 line, or the typical scenario in which data is provided over DSL with extra lines for voice. Another bonus is that while a T1 line must dedicate each of its 24 channels to either voice or data, VoDSL is dynamic, so the bandwidth can be used to its full extent whether the packets contain data or voice.
ATM Now, IP Later

While most VoDSL applications are currently ATM-based, many vendors and analysts believe that IP will become the method of choice for packetizing voice in approximately five years. The holdup is quality of service (QoS), a guaranteed throughput and delivery level.
VoIP, at least today, can't reliably deliver carrier-class service, the kind of quality we expect when we talk over the phone. IP's QoS standard, Multiprotocol Label Switching, has not been defined fully and there's no widespread implementation. On the other hand, ATM has well-established and standard QoS mechanisms.
One of the most significant ways ATM outshines IP in delivering real-time service such as voice, says Greg Wetzel of the VoDSL Working Group, is simply that IP's packets are large and inconsistently sized. That causes a delay of up to 30ms per packet on a 384Kbps line, because many packets must be assembled and then disassembled before being translated into an understandable voice call. The variation in delay time it takes to assemble and disassemble the inconsistently sized packets is known as "jitter." ATM reduces jitter because its packet size is a constant 53 bytes.
Squeeze the Calls

A typical uncompressed, packetized voice call uses 64Kbps of bandwidth. That works out to a maximum of six voice circuits on a single 384Kbps line. Currently, adaptive differential pulse-code modulation (ADPCM), the standard voice-compression technology, can't deliver carrier-class voice quality at bit rates lower than 32Kbps, but Wetzel believes that compression technology will improve, and that carrier-class quality will be attainable at 16Kbps or even 8Kbps in the next few years.
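The circuit math above is worth making explicit. This sketch ignores packet-header overhead, so the counts are upper bounds rather than field-tested figures:

```python
# Voice circuits per 384 Kbps DSL line at the codec rates discussed above.
LINE_KBPS = 384

for codec_kbps in (64, 32, 16, 8):  # uncompressed PCM, ADPCM, projected rates
    circuits = LINE_KBPS // codec_kbps
    print(f"{codec_kbps:2d} Kbps per call -> up to {circuits} circuits")
```

Halving the codec rate doubles the circuit count, which is why the compression roadmap matters so much to VoDSL economics.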
Wake-Up Call

So far, there's only one national service provider, mPower (www.mpower.com), deploying VoDSL. Several regional companies are conducting tests, and the incumbent local exchange carriers to date have been slow to adopt the technology. A Qwest representative would state only that the company is "conducting trials."
Expect that to change. Vendors and service providers are predicting that by the end of the year, VoDSL will take off. But TeleChoice Inc.'s Adam Guglielmo says that a major carrier is going to have to sign on before VoDSL gains acceptance in the marketplace.
While most vendors believe that VoDSL is an interim technology, smart service providers could make a tidy sum over the next few years while VoIP gets its QoS issues resolved. VoDSL can squeeze more circuits out of a single copper wire at lower costs than analog technology. For that reason alone, it's worth looking into.
Optical fibers with fiber counts ranging from 2 to 144 or more are usually packaged together inside a single fiber optic cable for better protection and cabling. Multi-fiber optic cables are usually required to pass a lot of distribution points. And each individual optical fiber should connect only one specific optical interface via splicing or termination with connectors. Thus, fiber optic cables used for distribution should be durable and easy to terminate. Tight-buffered fiber distribution cables, which meet these demands, are widely used in today’s indoor and outdoor applications, like data centers and FTTH projects. This post will introduce tight-buffered fiber distribution cables.
Most tight-buffered fiber distribution cables are designed with 900um tight-buffered fibers. This is dictated by their applications. As mentioned above, the distribution cable should be durable and easy to terminate. The following picture shows the difference between 250um bare fiber and 900um tight-buffered fiber. They are alike, but the tight-buffered fiber has an additional buffer layer. Compared with bare fibers, 900um tight-buffered fibers can provide better protection for the fiber cores. 900um tight-buffered fibers are easy to strip for splicing and termination. In addition, tight-buffered fiber cables are usually small in package and flexible during cabling. These are the main reasons why a lot of fiber optic distribution cables use the tight-buffered design.
900um tight-buffered distribution fiber cables also come in a variety of types. Tight-buffered distribution fiber cables used for different environments and applications might have different fiber types, outer jackets and cable structures. The following will introduce several tight-buffered distribution fiber cables for your reference.
Indoor Tight-Buffered Distribution Fiber Cable
Tight-buffered distribution fiber cables used for indoor applications are usually used for intra building backbones and routing between telecommunication rooms. Large tight-buffered fiber cables with fiber counts of more than 36 fibers generally have a “sub-unit” (unitized) design (shown in the above figure). While smaller tight-buffered distribution cables, with fiber counts of 6, 12 or 24, usually have “single-jacket” (non-unitized) designs, which are more flexible in cabling and have much smaller packages and cost advantages. The lower count tight-buffered distribution fiber cables with color coded 12 fibers and 24 fibers are very popular. The following picture shows a 24-fiber indoor tight-buffered distribution fiber cable with single-jacket design.
During practical use, these 6, 12 or 24-fiber indoor tight-buffered distribution fiber cables can be spliced with other fibers or terminated with fiber optic connectors. And they can be made into multi-fiber optic pigtails or fiber patch cables after being terminated with fiber optic connectors on one end or both ends. The color coded fibers can also ease fiber cabling.
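Color coding in such cables commonly follows the 12-position sequence from the TIA/EIA-598 standard. The sketch below assumes the common convention of repeating the sequence with a stripe or tracer for fibers 13 to 24; individual manufacturers may differ.

```python
# The 12-position fiber color sequence per TIA/EIA-598, as commonly used
# in color-coded distribution cables.
TIA_598_COLORS = [
    "blue", "orange", "green", "brown", "slate", "white",
    "red", "black", "yellow", "violet", "rose", "aqua",
]

def fiber_color(n):
    """Return the color for fiber number n (1-based) in a 24-fiber cable."""
    if not 1 <= n <= 24:
        raise ValueError("expected a fiber number from 1 to 24")
    base = TIA_598_COLORS[(n - 1) % 12]
    # Fibers 13-24 repeat the sequence, typically marked with a stripe.
    return base if n <= 12 else f"{base} (striped)"

print(fiber_color(1), fiber_color(13))
```

A lookup like this is handy when splicing a 24-fiber distribution cable to pigtails, where fiber-to-port mapping mistakes are easy to make.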
Indoor/Outdoor Armored Tight-Buffered Distribution Fiber Cable
Although tight-buffered distribution fiber cables are usually used for indoor applications, there is still a place for them in outdoor applications after a layer of metal armored tube is added inside the cable. Armored fiber cables are durable, rodent-proof, waterproof and can be directly buried underground during installation, which saves a lot of time and money.
Here we strongly recommend a low fiber count armored tight-buffered distribution fiber cable which can be used for both indoor and outdoor applications (shown in the above picture). This low fiber count armored tight-buffered cable has a single-jacket design with a steel armored tape inside the cable. It can be used for both backbone cabling and horizontal cabling in indoor environments. And it can also be used for direct-buried and aerial applications in outdoor environments.
During the purchasing of fiber optic cables, one of the most important things is the shipment of the fiber cables. Many bulk fiber cables are delivered by sea, which might take a long time. Now FS.COM customers in the USA can enjoy same day shipping for tight-buffered distribution fiber cables for both indoor and outdoor applications. Details are shown in the following table. Kindly contact firstname.lastname@example.org for more details, if you are interested.
|31909||12 Fibers OM3 Plenum, FRP Strength Member, Non-unitized, Tight-Buffered Distribution Indoor Fiber Optical Cable GJPFJV|
|31922||12 Fibers OM4 Plenum, FRP Strength Member, Non-unitized, Tight-Buffered Distribution Indoor Fiber Optical Cable GJPFJV|
|31866||24 Fibers OM4 Riser, FRP Strength Member, Non-unitized, Tight-Buffered Distribution Indoor Fiber Optical Cable GJPFJV|
|51308||24 Fibers OS2, LSZH, Single-Armored Double-Jacket, Tight-Buffered Distribution Waterproof Indoor/Outdoor Cable GJFZY53|
With all the recent hoopla about GPGPU acceleration in high performance computing, it’s easy to forget that Roadrunner, the most powerful supercomputer in the world, is based on a different brand of accelerator. The machine at Los Alamos National Laboratory uses 12,960 IBM PowerXCell 8i CPUs hooked up to 6,480 AMD Opteron dual-core processors to deliver 1.1 petaflop performance on Linpack.
Because of the wide disparity in floating point performance between the PowerXCell 8i processor and the Opteron, the vast majority of Roadrunner’s floating point capability resides with the Cell processors. Each PowerXCell 8i delivers over 100 double precision gigaflops per chip, which means the Opteron only contributes about 3 percent of the FLOPS of the hybrid supercomputer.
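That 3 percent figure is easy to sanity-check. The sketch below uses approximate per-chip double-precision peaks: the ~102.4 gigaflop Cell figure matches the "over 100 gigaflops" above, while the 7.2 gigaflop Opteron figure assumes 1.8 GHz dual-core parts at 2 flops per cycle per core, which is my assumption rather than something stated in this article.

```python
# Approximate split of Roadrunner's peak double-precision capability.
CELLS, CELL_GF = 12_960, 102.4     # PowerXCell 8i DP peak per chip (approx.)
OPTERONS, OPTERON_GF = 6_480, 7.2  # assumed 1.8 GHz dual-core Opteron peak

cell_tf = CELLS * CELL_GF / 1000
opteron_tf = OPTERONS * OPTERON_GF / 1000
share = opteron_tf / (cell_tf + opteron_tf) * 100
print(f"Cell: {cell_tf:.0f} TF, Opteron: {opteron_tf:.0f} TF "
      f"({share:.1f}% of peak from the Opterons)")
```

Under these assumptions the Opterons contribute roughly 3.4 percent of peak, consistent with the figure quoted in the text.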
Some of those FLOPS are already being put to good use, though. This week, Los Alamos announced that the lab had completed its “shakedown” phase for Roadrunner. Since the machine was installed in May 2008, researchers have had over a year to experiment with some big science applications.
These unclassified science codes included a simulation of the expanding universe, a phylogenic exploration of the evolution of the Human Immunodeficiency Virus (HIV), a simulation of laser plasma interactions for nuclear fusion, an atomic-level model of nanowires, a model of “magnetic reconnection,” and a molecular dynamics simulation of how materials behave under extreme stress. All of these codes were able to make good use of the petascale performance of the Roadrunner.
Now that the shakedown period has concluded, the NNSA will move in to claim those FLOPS for nuclear weapons simulations. Since these applications are obviously of a classified nature, we’re not likely to hear much about their specific outcomes. Open science codes will still get a crack at the machine, but since Roadrunner’s primary mission is to support US nuclear deterrence, the unclassified workloads will presumably get pushed to the back of the line.
The bigger question is what are the longer-term prospects of a hybrid x86-Cell system architecture and the Cell processor, in general, for the high performance computing realm? Unlike GPUs or FPGAs, Cell processors contain their own CPU core (a PowerPC) along with eight SIMD coprocessing units, called Synergistic Processing Elements (SPE), so the chip represents a more fully functional architecture than its competition. Despite that advantage, the Cell’s penetration into general-purpose computing has remained somewhat limited. Although the original Cell processor was the basis for the PlayStation3 gaming console and the double-precision-enhanced PowerXCell variant has found a home in HPC blades, neither version is a commodity chip in the same sense as the x86 CPU or general-purpose GPUs. The result is that Cell-based solutions are strewn rather haphazardly across the HPC landscape.
Besides the high-profile Roadrunner system, IBM also offers a standalone QS22 Cell blade, which is deployed at a handful of sites, including the Interdisciplinary Centre for Mathematical and Computational Modeling at the University of Warsaw and Repsol YPF, a Spanish oil and gas company. As it turns out, these systems are among the most energy efficient, with the Warsaw system currently sitting atop the Green500 list. Other Cell accelerator boards are available from Mercury Computer Systems, Fixstars, and Sony, but I’ve yet to hear of any notable HPC deployments resulting from these products.
Cell processor developer tools certainly exist, but no standard environment has come to the fore. This is rather important since the heterogeneous nature of the Cell architecture means programming is inherently more difficult. IBM, of course, provides its own software development kit for the architecture. Outside of Big Blue, Mercury Computer Systems has a Cell-friendly Multicore Plus SDK, and software vendor Gedae sells a compiler. RapidMind offers Cell support in its multicore development platform, but since the company was acquired by Intel, its Cell-loving days are likely coming to a close. French software maker CAPS was planning to offer Cell support in its HMPP manycore development suite sometime this year, but that hasn’t come to pass.
With NVIDIA’s Fermi GPU architecture poised to make a big entrance into high performance computing in 2010, IBM will have to make a decision about adding GPU acceleration to its existing HPC server lineup. Server rival HP has apparently already committed to including Fermi hardware in its offerings. Last week Georgia Tech announced HP and NVIDIA would be delivering a sub-petaflop supercomputer to the institute in early 2010. That system will be based on Intel Xeon servers accelerated by Fermi processors. Other HPC vendors, including Cray, have announced plans to bring Fermi into their product lines. If GPUs become the mainstream accelerator for HPC servers, IBM will be forced to follow suit.
That’s not to say IBM will give up on its home-grown Cell chip. Big Blue has a tradition of offering a smorgasbord of architectures to its customers, especially in the HPC market. Today the company has high-end server products based on x86 CPUs, Blue Gene (PowerPC-based) SoCs, Power CPUs, and the Cell processor. Adding GPU-accelerated hardware wouldn’t necessarily mean ditching the Cell.
On the other hand, IBM has to consider if it wants to reinvest in the architecture to keep up with the latest GPU performance numbers from NVIDIA and AMD, which would mean getting a single Cell processor to deliver hundreds of gigaflops of double-precision performance. IBM is certainly capable of building such a chip, but there’s little motivation to do so. With no established base of customers clamoring for Cell-equipped supercomputers and with a relatively small volume of Cell chips from which to leverage high-end parts, it’s hard to imagine that Big Blue will be doubling down on its Cell bet. | <urn:uuid:50c54521-8958-44bc-8ddb-468c672a2b91> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/10/27/will_roadrunner_be_the_cells_last_hurrah/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937892 | 1,260 | 2.609375 | 3 |
While it may seem a contradiction in terms, digital automation tools may make possible a new level of personalization in medical care.
Over the past several years, there has been a growing movement toward customizing medical care for the individual. This is not just a "feel good" social movement; it is a scientific realization that individualized care can be more effective than care designed for the masses.
From N=many to N=1
Before I explain about the automation, let me share a little history. The research of disease processes has, in the past, focused on studying large populations of people with common symptoms in an effort to find common denominators that could explain the underlying disease process. For example, 19th-century physician John Snow studied a population of cholera epidemic victims and discovered that contaminated drinking water was the source of contagion. Similarly, the "gold standard" in medical research has been the "randomized clinical trial." In this study, a large number of patients are split into two equal groups, each of which receives a different therapy. The group with the best overall outcome or response determines which therapy becomes the accepted treatment.
This N=many approach (where N represents the size of the population in the study) has worked well, but it has its limitations. Human physiology is complex, with an enormous number of variations which can affect the disease process and response to treatment in an individual. Not all members of a group of patients with a given disease are alike.
And, too, a patient's environment -- including social network, education, personality and access to money and other resources -- can affect his or her ability to follow through with recommended treatments.
As our knowledge of human physiology has grown, we've realized that we need an approach that accounts for these variations and designs treatments at the individual level. An N=1 approach, if you will.
After 23 years of searching, N=1 finds a cure
Eric Dishman of Intel is the perfect example of how the variation in human physiology can confound medical science. Dishman was diagnosed in college with what was thought to be a rare form of kidney cancer. Over the next 23 years he received a wide variety of treatments that had proved effective in treating kidney cancer (an N=many approach). While the treatments kept him alive, they were devastating to his system and didn't cure his cancer.
But Dishman got lucky. As part of his job as Intel's general manager for health and life sciences, he visited a number of enterprises involved in next generation genome sequencing. On one of those visits, on a whim, he had his entire genome sequenced and sent the results to his physicians. The genomic data revealed that the cancer in his kidneys was genetically much closer to a type of cancer commonly found in the pancreas. His treatment was changed to address the specific mutations of his cancer cells (an N=1 approach) and, 18 months later, he was free of cancer and healthy again.
Dishman, who has been an innovator in the field of patient-centered care and an advocate of the personalized approach to medicine, turned out to be the poster-child for why this matters. At this week's HIMSS14 conference, he is speaking on "N=1: Customizing Care for My Life and My DNA." I am privileged to join him during that talk to share an update on an "N=1" childhood neuroblastoma project that uses high-performance computing to speed up the process of whole genome sequencing. The genomic data pinpoints the best treatment for each young patient based on the specific mutations in that child's tumor.
As the title of Dishman's presentation suggests, our genomic data are clearly important, but make up only one piece of the N=1 approach. Creating a treatment plan that takes into account other physiologic data (proteins for example) as well as individual differences in lifestyle, environment and resources matters just as much as genomics.
Automation makes individualization possible
That's where automation can help clinicians provide personalized care. On a very simple level, clinicians can use customized text messages, sent automatically on a pre-set schedule, to remind patients to take medications on time, check their blood pressure or do other important health-related tasks.
Often, a patient's ability to comply with a treatment plan can be bolstered by having the right information at the right time. For example an automated text message could say, "7 a.m. Time to take your blood pressure medicine and your statin. That's one of the round blue pills and one of the pink oval pills. Take those pills with a full glass of water."
For older patients with multiple medications, having a reminder offered at the moment it's needed, complete with personalized instructions, could mean the difference between health and hospitalization. Without automation, caregivers couldn't offer that level of specificity and support to patients.
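To make the mechanics concrete, here is a minimal sketch of how such scheduled, personalized reminders could be wired up with Python's standard-library scheduler. The patient record, times and message text are invented for illustration, and `send_text` stands in for a real SMS gateway call:

```python
import sched
import time

# Hypothetical patient record: each reminder is (time of day, personalized text)
PATIENT = {
    "name": "A. Jones",
    "reminders": [
        ("07:00", "Time to take your blood pressure medicine and your statin: "
                  "one round blue pill and one pink oval pill, taken with a "
                  "full glass of water."),
        ("19:00", "Please check your blood pressure and record the reading."),
    ],
}

def send_text(name, message):
    # Stand-in for a call to an SMS gateway
    print(f"SMS to {name}: {message}")

def schedule_day(scheduler, seconds_since_midnight):
    """Queue every reminder still due today."""
    for hhmm, msg in PATIENT["reminders"]:
        h, m = map(int, hhmm.split(":"))
        delay = max(0, h * 3600 + m * 60 - seconds_since_midnight)
        scheduler.enter(delay, 1, send_text, (PATIENT["name"], msg))

scheduler = sched.scheduler(time.time, time.sleep)
# schedule_day(scheduler, <seconds into today>); scheduler.run() would then
# block, firing each reminder at its scheduled time.
```

A production service would layer on delivery confirmation, escalation to a caregiver when a reminder is ignored, and per-patient message templates, but the core loop is no more complicated than this.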
In the area of health coaching, which turns out to be pretty effective in helping prevent chronic disease complications, automated feedback messages can be very useful, offering multiple opportunities to guide patients to appropriate diet and exercise choices, timed to arrive at the moment when the advice is most needed. They can also offer positive reinforcement, which can be a powerful motivator.
Better care, lowered costs, will earn support for N=1
While these may seem like simple tools, there is growing data that they can be very effective at keeping patients out of the hospital, saving money and improving the quality of life. And there are thousands of developers working on many more digital tools that will be customized to individual needs. Medicare and private health plans are starting to take note of the effectiveness of digital tools and have begun reimbursing for some of them.
As the N=1 approach proves its effectiveness in reducing costs by reducing hospitalizations and expensive complications, payers will not only support it, they will insist on it. And automated digital tools will quickly become ubiquitous in healthcare. | <urn:uuid:5b760a66-5274-4238-bbde-d9df096d361a> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2475895/healthcare-it/n-1--how-automated-tools-will-help-lower-costs-by-personalizing-healthcare.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00267-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966899 | 1,213 | 2.796875 | 3 |
VoIP and CTP Certification: Converging on a Solution
Revolutionary advancements in telephone technology are causing corresponding changes in the telephony and data network industries. Traditionally, voice was transferred over circuit-based networks. These extremely advanced systems consist of wired and wireless networks. With the rise of Voice over Internet Protocol (VoIP), both voice and video are increasingly carried on packet-based networks—the same ones that carry e-mail messages—rather than on traditional telephone networks.
This integration or convergence of circuit-based and packet-based technologies requires specialized skills and knowledge. Today’s telecommunications workers cannot afford to be only telephony or data network experts. They must become convergence technology professionals focusing on total voice, video and data solutions across different networks, rather than just one network type. These professionals will be the front line in addressing the needs and challenges of the convergence industry.
What Are the Challenges?
The primary challenge facing convergence is that although all of the major protocols are in place, several practices and technologies still need to be resolved:
- Integration with common networking practices: Network address translation (NAT) allows a router or firewall to hide the internal topology of the network and conserve IP addresses. The industry must agree on a common standard allowing VoIP-related protocols to easily traverse NAT-enabled firewalls and routers.
- Emergency services: VoIP technologies have not caught up with the 911 emergency service in the United States. Homes and businesses that rely on VoIP may not be able to contact 911 easily.
- Security controls: Government agencies with the proper warrants can create network wiretaps. How will calls be properly monitored across packet-based networks?
- Internetwork support: VoIP providers have yet to agree on how to share their resources so that provider networks can work and play well together.
- Availability: Residential consumers in some larger markets, such as the greater New York City or Los Angeles areas, can choose VoIP for their homes. But small and medium-sized markets have no choice.
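To make the NAT problem in the first bullet concrete: helper protocols such as STUN let a VoIP client ask a public server what address and port its NAT assigned to it, so signaling and media can be set up across the firewall. The sketch below builds and parses STUN messages in the modern RFC 5389 wire format; the address in the test is a documentation address, and this is an illustration of the mechanism rather than production code:

```python
import os
import struct

MAGIC = 0x2112A442  # STUN magic cookie (RFC 5389)

def build_binding_request():
    """Build a 20-byte STUN Binding Request plus its transaction ID."""
    txn_id = os.urandom(12)
    header = struct.pack("!HHI", 0x0001, 0, MAGIC)  # type, length, cookie
    return header + txn_id, txn_id

def parse_xor_mapped_address(response):
    """Walk the attributes of a STUN response for XOR-MAPPED-ADDRESS."""
    pos = 20  # skip the fixed-size header
    while pos + 4 <= len(response):
        attr_type, attr_len = struct.unpack_from("!HH", response, pos)
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            _, family, xport = struct.unpack_from("!BBH", response, pos + 4)
            port = xport ^ (MAGIC >> 16)
            raw_ip = struct.unpack_from("!I", response, pos + 8)[0] ^ MAGIC
            ip = ".".join(str((raw_ip >> s) & 0xFF) for s in (24, 16, 8, 0))
            return ip, port
        pos += 4 + attr_len + (-attr_len) % 4  # attributes are 32-bit aligned
    return None
```

A real client sends the request over UDP to a public STUN server, then hands the discovered public address to its SIP or RTP stack.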
While these factors are getting worked out, a secondary challenge needs to be addressed. The rate at which convergence technology is evolving has created a shortage of trained, certified professionals who understand the issues facing convergence and can implement workable solutions.
The Telecommunications Industry Association (TIA) and the Internet Engineering Task Force (IETF) have published standards that address the industry’s needs, but simply publishing such standards is not enough. Workers need to demonstrate their skills in an efficient way, so TIA has taken the initiative to create a vendor-neutral certification called Convergence Technologies Professional (CTP). This certification verifies a person’s knowledge in three areas vital to modern voice and video communication:
- Traditional networking involving TCP/IP, security and common network services.
- Traditional telephony, such as planning and troubleshooting circuit-based telephone networks as well as high-speed network infrastructures.
- VoIP’s essential protocols, codecs and procedures that ensure voice and video are properly processed.
As with the networking standards it has created, TIA worked closely with leading companies, including Avaya, Cisco Systems, Nortel Networks and Toshiba, to create the CTP certification. TIA also felt it was important to work with an organization that knew how to manage certifications, so it enlisted Prosoft Learning Corp., purveyors of the CIW series of professional certifications.
CTP certification provides a standard by which to validate an individual’s proficiencies in convergence technologies. To be effective, today’s convergence technology workers require thorough skills and knowledge in networking, telephony and VoIP. Validation of these skills enables career advancement and job security.
In addition, CTP certification requires technicians to choose the right tool for a given job, rather than using only a single vendor’s solution. Sometimes, the right tool is a well-known vendor product. As open-source solutions are becoming increasingly important, however, vendors must be prepared for their products to be compared to those of their competitors. In fact, savvy vendors support CTP to enable technicians to better understand how to create a solution for the customer, regardless of specific vendor products.
Further, the CTP certification measures industry-wide skills and job-role knowledge. This prepares candidates for a variety of careers in technical or sales roles. So, whereas the telephony industry tends to be product-oriented and thus focused on vendor-specific product features, CTP workers understand product commonalities and ways that technologies work together.
Needing the Right Talent
Convergence technology workers must master not only traditional networking and server maintenance, but also traditional telephony maintenance, VoIP, video and troubleshooting. So how do you find an experienced convergence worker?
One answer is multiple certifications, which can be costly and time-consuming. For those seeking solid proof of their convergence knowledge in a single certification, CTP is a sound choice. Developed by subject-matter experts and psychometrically reviewed, CTP is a respected, high-stakes certification accepted or endorsed by companies such as Avaya, Cisco and Toshiba.
CTP is designed for professionals with experience in data networking, telephony networking and convergence implementation. Typically, an individual has strength in one area and takes instruction in the others. The exam consists of 65 multiple-choice questions, 49 of which must be answered correctly to pass.
Prosoft Learning Corp. and ComputerPREP provide official CTP course materials so that qualified candidates can prepare for the exam efficiently. This courseware was developed with input from subject-matter experts, including those from companies such as Avaya and from TIA itself. Candidates can take the exam at Prometric or Pearson VUE testing centers. Alternatively, they can take the exam in a CTP Certified Testing Center (CTC) environment. Created by Prosoft Learning Corp., the CTC is a monitored, secure proctor network that uses Web-based delivery of the same CTP exams available through traditional Prometric and VUE centers. As a result, corporations and academic institutions can deliver the exam right in the classroom.
Maturation of the Industry
The traditional telephone company employee wore a hard hat, drove around in a truck with a flashing light and worked only on your home or business telephone. Much like the proverbial housekeeper who says, “I don’t do windows,” the traditional telephony worker didn’t do data. Even five years ago, the traditional telephony worker didn’t do Windows, Linux or anything that was carried across a packet-based network. Now, anyone who wants to work with voice and video needs to know about traditional telephony, as well as data concepts such as TCP/IP, routing and ways to convert data to voice and back. Convergence knowledge is now essential to job security.
The advent of certifications such as CTP demonstrates the continuing maturation of the industry. CTP’s very existence shows that the industry is aware of its own potential and the challenges it faces. Customers are not content with only one technology or vendor. The industry has responded with multiple vendors and robust technologies. And now, the certification industry has found a way to train, test and certify convergence workers who understand t | <urn:uuid:36ca71f0-bf69-4dc1-a40a-f0f61ecc0dca> | CC-MAIN-2017-04 | http://certmag.com/voip-and-ctp-certification-converging-on-a-solution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00111-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934099 | 1,493 | 2.625 | 3 |
The micro-airplane is powered by a special propeller that folds during gliding to minimize air drag. RoboSwift steers by sweeping back one wing more than the other. The difference in wing position lets RoboSwift make very sharp turns.
Resembling the common swift, RoboSwift will be able to go undetected while using its three micro cameras to perform surveillance on vehicles and people on the ground.
For example, the airplane could slip in unnoticed among a flock of real birds and provide unique access to observe their flight and migration behavior.
While its creators don't list a military application for the RoboSwift, you could easily see the Pentagon knocking on their door for a look-see.
In fact, the US Army has been looking at such aircraft for use in a variety of applications, such as surveillance, reconnaissance and intelligence gathering.
In fact, the Dutch aerospace engineering students at the Delft University of Technology who, together with the Department of Experimental Zoology of Wageningen University, designed the RoboSwift also designed the DelFly, a micro-aircraft that uses flapping wings to hover motionless in one spot.
Such unmanned aircraft were in the news this week as the first unmanned attack squadron in aviation history arrives in Iraq today, looking to deliver 500-pound bombs and Hellfire missiles to the enemy.

RoboSwift, which is expected to fly in January 2008, will have a wingspan of almost two feet and weigh less than a pound.
The RoboSwift can fly one hour with its lithium-polymer batteries that power the electromotor, which drives the propeller. The propeller folds back during gliding to minimize air drag. The unique morphing-wing design features are taken from the swift. Morphing means the wings can be swept back in flight by folding feathers over each other, thus changing the wing shape and reducing the wing surface area, researchers said.
RoboSwift also steers by morphing its wings. Doing so, the micro airplane can perform optimally, flying efficiently and highly maneuverable at very high and very low speeds, just like the swift.
The students found out that using only four feathers, far fewer than the bird uses, already provides the wing with sufficient morphing capacity; this feature makes actual production of the design feasible. Steering RoboSwift is done by asymmetrically morphing the wings.
The team based the RoboSwift on research performed by David Lentink, who published a study into the swift's flight characteristics in this year's April issue of Nature.
During its life, a common swift flies a distance that goes up to five times the distance to the Moon and back. Lentink said the swift is such an able flyer because it continuously morphs its wings to the prevailing flight conditions to fly more efficient and more maneuverable. | <urn:uuid:dd60b6b8-4723-4e58-8c30-ad01abe35ead> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2348124/security/bird-sized-airplane-could-revolutionize-intelligence-gathering-efforts.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00378-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951262 | 580 | 3.75 | 4 |
Technology has always played a critical role in cyber security, but effective solutions always require human involvement. Scanning, monitoring and assessment programs are of limited use without an analyst to qualify results and turn them into fixes.
Finding unique, advanced exploit methods remains a job for a penetration tester.
The relationship between security technologies and security professionals may be best described as a partnership. New developments from MIT’s Computer Science and Artificial Intelligence Laboratory reaffirm this relationship and offer insights into how the partnership can be augmented by leveraging AI.
MIT’s cyber security system, called AI2, combs through millions of logs per day to find suspicious events. A security professional then reviews the data for signs of a breach, weeding out false positives in the process. The work required to keep up with such a system could easily become cumbersome – but this is where artificial intelligence starts to make an impact.
As an analyst flags abnormalities that appear to be intrusions, the AI2 system gathers the contextual information it needs to make modifications in its routines. Thus, the more human input the system receives, the smarter it gets, narrowing the output of potential threat indicators it finds.
In testing the AI2 system with analyst input, MIT researchers observed an 85 percent detection rate over 90 days. Research lead Kalyan Veeramachaneni claims that, without the AI refinements of the system, analysts would need to review thousands of entries per day to achieve a similar result. And the system’s machine learning processes, without human input, would result in a success rate of 7.9 percent.
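MIT has not released AI2 itself, but the human-in-the-loop pattern it describes can be sketched in a few lines: rank events with an unsupervised score, show an analyst only the most suspicious ones, and fold the analyst's labels back into the detector. Everything below (the synthetic log data and the simple z-score detector) is invented for illustration:

```python
import random
import statistics

random.seed(42)

# Synthetic log events: one numeric feature each (say, bytes transferred),
# with a small number of genuinely malicious outliers mixed in.
normal = [(random.gauss(100, 10), False) for _ in range(500)]
attacks = [(random.gauss(180, 10), True) for _ in range(20)]
events = normal + attacks

# Unsupervised pass: rank events by distance from the population mean.
values = [v for v, _ in events]
mu, sd = statistics.mean(values), statistics.pstdev(values)
ranked = sorted(events, key=lambda e: abs(e[0] - mu) / sd, reverse=True)

# The analyst reviews only the top-k events and labels each one.
top_k = ranked[:30]
confirmed = [v for v, is_attack in top_k if is_attack]

# Supervised refinement: future alerts use a threshold learned from the
# analyst's labels, so the next pass produces fewer false positives.
threshold = min(confirmed) if confirmed else float("inf")
alerts = [v for v, _ in events if v >= threshold]
```

The real system iterates this loop daily over millions of events and far richer features, but the division of labor is the same: the machine ranks, the human labels, and the labels sharpen the next round of ranking.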
MIT’s research emphasizes that machines may have the capability to assist in dealing with cyber security’s growing talent gap. However, a healthy population of skilled professionals will remain critical for real success. Well-staffed teams can use such technologies to streamline the maintenance tasks involved in their daily work and dedicate their energies to higher-value pursuits.
Lunarline has developed a number of intelligent solutions designed to augment and simplify the work of security professionals. Our malware assessment program and Ground Station, our cyber intelligence platform, leverage contextual information and analyst input to enhance their capabilities. Sniper, our penetration testing platform, automates and standardizes advanced testing routines, and our in-house SOC can bring all these tools together to improve your security posture.
You can learn more about these innovative solutions and others by visiting Lunarline.com. If you’re ready to deploy intelligent cyber security capabilities at your organization, contact one of our experts now. | <urn:uuid:3173b803-cd85-4d7b-bf8a-c129f3752e8b> | CC-MAIN-2017-04 | https://lunarline.com/blog/2016/05/humans-better-cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00012-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921962 | 518 | 2.515625 | 3 |
What is networking?
What are networking protocols?
How to configure Cisco or Juniper devices?
How to secure your network?
If you have similar questions on your mind, you are in the right place. At first, only a few articles were published here about basic networking terms, networking protocols and simple device configuration, but the site has grown into a good resource even for experts. It will keep expanding with even more information about cutting-edge networking technology.
Articles on this blog will mostly mention Cisco and Juniper as vendors, but the focus is on technologies and protocols, not vendors. If someone else comes out with great new networking solutions, we are happy to talk about those solutions too, of course. We try to be vendor-independent whenever possible.
Although the focus is networking technologies and methodologies, you can expect some casual off-topic posts about things like Python. You may think those are not related to networking; they are perhaps just not yet totally related, but they are getting there.
When talking about device configuration, we will first show how to do it on Cisco. Different network devices have different ways to configure the same things, so, when possible, we include Juniper and other CLI examples.
Enjoy your stay at howdoesinternetwork.com and check out our BLOG timeline to read newest stuff.
This project and its author strongly value transparency and the journalistic right to free speech, but also the journalistic obligation of objectivity.
A Caterham F1 car at the Hungarian Grand Prix. How is Dell helping the team keep up with the competition?
Time is of the essence for all businesses, particularly when milliseconds can mark success. In Formula 1, the pressure for saving time is not just confined to the race track. IT also plays a vital role in a team’s performance and is instrumental to help shave tenths of a second off lap times. Understanding aerodynamics is the key to unlocking speed, and that’s where Caterham F1 Team’s Dell HPC comes in.
Any team who tried to make a car without a HPC solution would be miles off the pace. The driver’s sheer speed and skill is not the only key element that comes into play for on-track success, and HPC technology is fundamental to climbing up the F1 grid.
Sitting in a strictly temperature-controlled room at the Leafield Technology Centre, the Dell HPC, powered by Intel, is the powerful beast behind the Caterham F1 Team, and it seems even more significant in the context of how this type of technology influences Formula 1 racing today.
F1 teams have long relied on wind tunnels for testing parts to find out if they are going to add performance to the car or not, yet the relentless advance of technology has seen those teams make use of Computational Fluid Dynamics (CFD) in tandem with wind tunnel work. CFD is often called ‘a wind tunnel in a computer’ and F1 teams rate this technology highly as it allows them to virtually test how modifications to a car could have an impact on the car’s aerodynamic performance and ultimately its speed.
CFD starts with a process called meshing, which means virtually segmenting the car into millions of tiny cells. These cells are in turn divided into thousands and thousands of tiny triangles. At Caterham F1 Team, the team’s engineers use CFD to draw detailed information on what is happening with the temperature, pressure, turbulence and velocity inside each triangle that covers the surface of the virtual car. William Morrison, Caterham F1 Team’s IT Infrastructure Manager, explains why this is so important:
"CFD means you can look at the airflow over and through various components of the car, and this simulation allows us to filter so that we can test theoretical developments and match them up to wind tunnel results, before coordinating the two sets of results between each other."
"By trying out a number of different ideas and testing whether they will work or not, it means we are able to carry out development work without actually having to make parts," Morrison explained.
"To purely do development work in a wind tunnel would be very costly and time-consuming because of how long it takes to produce the wind tunnel models; the HPC allows a very quick turnaround of theoretical models before you decide whether to physically make them or not. It’s vital for a modern F1 team to have HPC simulation capability."
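Production CFD codes are enormous, but the numerical idea at their core (sweep over every cell of the mesh, updating each value from its neighbors until the field converges) fits in a few lines. This toy example relaxes a 2D Laplace field, a heavily simplified stand-in for the per-cell solves a real aerodynamics package performs:

```python
N = 30                              # a tiny 30 x 30 "mesh"
grid = [[0.0] * N for _ in range(N)]
grid[0] = [1.0] * N                 # boundary condition: fixed value on top edge

for _ in range(300):                # Jacobi iteration: sweep until settled
    new = [row[:] for row in grid]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # Each interior cell becomes the average of its four neighbors
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    grid = new
```

A real solver does this in three dimensions, with velocity, pressure and turbulence coupled in every cell, which is why such jobs run to billions of calculations.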
The primary function of the supercomputer at Caterham F1 Team is to handle a huge number of operations at once.
"The HPC typically has about six ‘jobs’ going through it at any one time, which could last anything from six hours to two days each," says Morrison.
"For an average 17-hour job the HPC will do approximately ten billion calculations."
To put that in context, the average PC you might have at home to surf the internet would take between four and five months to get through that amount of maths.
The calculations the HPC does are not simple either. Partial differential equations are the norm and they reveal everything about how air is flowing in and around the part being tested. Once the HPC has performed its ten billion calculations, it will spend the final two hours of a 17-hour job streamlining these to about 800 million pieces of individual data, which will then be presented to the CFD analysts in video, graph and picture form.
"What we get from this is a different insight into the aerodynamics, because you’re getting a full 3D flow simulation," explains Morrison.
"You’re seeing the complete flow structures of air off the body of the car."
"It used to be that a typical job may be 17 hours," explains Morrison, "but this has recently come down to about 12. This has been done through enhancements to the model set-up, and optimising the way the model solves. There is constant work being done to improve the performance of the calculations so it’s quicker and more reliable. We have a little group of about three people dedicated to improving this all the time. We’re always looking to reduce the solve time because that means we can get more jobs through it per day."
So given the importance of the work constantly going through the HPC, what happens in the event of an unwanted event like a power cut?
"We have battery conditioning units so that there’s always conditioned power coming into the HPC, and if there’s a disruption to the external power supply we’ve got a generator outside which kicks in automatically," says Morrison.
"If the HPC was running at full power and the electricity suddenly went off, the generator would be able to keep it going for a few hours."
Saving time is a constant battle for any team in the F1 circuit and for Caterham F1 Team the Dell HPC is the nerve centre of the entire operation. Without it their car simply could not be designed or developed, which is why the HPC runs 24 hours a day, 365 days a year – always with a queue of between ten and twenty jobs waiting to be solved by powerful calculating prowess.
"December is one of the busiest times of the year in Formula 1, so the HPC is still working flat out even on Christmas Day," laughs Morrison.
"It’s not allowed any time off." | <urn:uuid:21a5b94f-56ad-4634-ad38-a0d79bbbebb7> | CC-MAIN-2017-04 | http://www.cbronline.com/news/enterprise-it/server/case-study-how-a-dell-supercomputer-is-helping-caterham-f1-team-maximise-performance | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00552-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960739 | 1,232 | 2.734375 | 3 |
The immensely powerful supercomputers of the not too distant future will need some serious fault tolerance technology if they are to fulfill their promise of ingenious research.
That’s why the U.S. Department of Energy’s Office of Advanced Scientific Computing Research this week said it is looking for “basic research that significantly improves the resiliency of scientific applications in the context of emerging architectures for extreme scale computing platforms. Extreme scale is defined as approximately 1,000 times the capability available today. The next-generation of scientific discovery will be enabled by research developments that can effectively harness significant or disruptive advances in computing technology.”
According to the DOE, applications running on extreme computing systems will generate results with orders of magnitude higher resolution.
“However, indications are that these new systems will experience hard and soft errors with increasing frequency, necessitating research to develop new approaches to resilience that enable applications to run efficiently to completion in a timely manner and achieve correct results,” the agency stated.
Today, the DOE says that 20% or more of the computing capacity in a large high performance computing facility is wasted due to failures and recoveries. The situation is expected to worsen sharply as systems increase in size and complexity, wasting even more capacity. Research is required to improve the resilience of the systems and the applications that run on them, the agency stated.
The DOE says the research it is looking for focuses on three things:
The DOE states that a variety of factors will contribute to increased rates of faults and/or errors on extreme scale systems, a few of which are:
- The number of components with both memory and processors will increase by an order of magnitude, and the number of system components is increasing faster than component reliability, resulting in an increase in hard and soft errors;
- Constraining hardware and software to a power envelope of 20 Megawatts will necessitate operating components at near-threshold power levels and power levels may vary over time, making errors more likely; and
- Use of the machines will require managing unprecedented parallelism and complexity, especially at the node level of extreme scale systems, increasing the likelihood of programmer errors.
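To see why failures waste so much capacity, consider a toy checkpoint/restart loop, the classic recovery technique whose overhead resilience research aims to reduce. This is an illustrative sketch, not DOE code; the failure probability and checkpoint interval are invented parameters.

```python
import pickle
import random

def run_with_checkpoints(total_steps, checkpoint_every, fail_prob=0.01):
    """Toy checkpoint/restart loop: on a simulated fault, roll back to the
    last checkpoint instead of restarting the whole run from step 0."""
    state = {"step": 0, "value": 0}
    checkpoint = pickle.dumps(state)          # initial checkpoint
    wasted_steps = 0                          # work that must be redone after faults
    while state["step"] < total_steps:
        if random.random() < fail_prob:       # simulated hard/soft error
            wasted_steps += state["step"] - pickle.loads(checkpoint)["step"]
            state = pickle.loads(checkpoint)  # recover from the last checkpoint
            continue
        state["step"] += 1
        state["value"] += state["step"]       # stand-in for real computation
        if state["step"] % checkpoint_every == 0:
            checkpoint = pickle.dumps(state)
    return state, wasted_steps

random.seed(1)
final, wasted = run_with_checkpoints(1000, 50)
print(final["step"], wasted)  # completes all 1000 steps; 'wasted' depends on the seed
```

The fraction `wasted / (total_steps + wasted)` is a crude analogue of the "20% or more" of capacity the DOE says is lost to failures and recoveries today.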
Choumet V. (Institute Pasteur Paris), Attout T. (CNRS Systematics, Biodiversity and Evolution Institute), Chartier L. (Institute Pasteur Paris), Khun H. (Institute Pasteur Paris), and 6 more authors. PLoS ONE, 2012.
Background: Anopheles gambiae is a major vector of malaria and lymphatic filariasis. The arthropod-host interactions occurring at the skin interface are complex and dynamic. We used a global approach to describe the interaction between the mosquito (infected or uninfected) and the skin of mammals during blood feeding. Methods: Intravital video microscopy was used to characterize several features during blood feeding. The deposition and movement of Plasmodium berghei sporozoites in the dermis were also observed. We also used histological techniques to analyze the impact of infected and uninfected feedings on the skin cell response in naive mice. Results: The mouthparts were highly mobile within the skin during the probing phase. Probing time increased with mosquito age, with possible effects on pathogen transmission. Repletion was achieved by capillary feeding. The presence of sporozoites in the salivary glands modified the behavior of the mosquitoes, with infected females tending to probe more than uninfected females (86% versus 44%). A white area around the tip of the proboscis was observed when the mosquitoes fed on blood from the vessels of mice immunized with saliva. Mosquito feedings elicited an acute inflammatory response in naive mice that peaked three hours after the bite. Polynuclear and mast cells were associated with saliva deposits. We describe the first visualization of saliva in the skin by immunohistochemistry (IHC) with antibodies directed against saliva. Both saliva deposits and sporozoites were detected in the skin for up to 18 h after the bite. Conclusion: This study, in which we visualized the probing and engorgement phases of Anopheles gambiae blood meals, provides precise information about the behavior of the insect as a function of its infection status and the presence or absence of anti-saliva antibodies. It also provides insight into the possible consequences of the inflammatory reaction for blood feeding and pathogen transmission. 
© 2012 Choumet et al.
Borthwick N. (University of Oxford; Weatherall Institute of Molecular Medicine), Ahmed T. (University of Oxford; Weatherall Institute of Molecular Medicine), and 24 more authors. Molecular Therapy, 2014.
Virus diversity and escape from immune responses are the biggest challenges to the development of an effective vaccine against HIV-1. We hypothesized that T-cell vaccines targeting the most conserved regions of the HIV-1 proteome, which are common to most variants and bear fitness costs when mutated, will generate effectors that efficiently recognize and kill virus-infected cells early enough after transmission to potentially impact on HIV-1 replication and will do so more efficiently than whole protein-based T-cell vaccines. Here, we describe the first-ever administration of conserved immunogen vaccines vectored using prime-boost regimens of DNA, simian adenovirus and modified vaccinia virus Ankara to uninfected UK volunteers. The vaccine induced high levels of effector T cells that recognized virus-infected autologous CD4 + cells and inhibited HIV-1 replication by up to 5.79 log 10. The virus inhibition was mediated by both Gag- and Pol- specific effector CD8 + T cells targeting epitopes that are typically subdominant in natural infection. These results provide proof of concept for using a vaccine to target T cells at conserved epitopes, showing that these T cells can control HIV-1 replication in vitro. © The American Society of Gene & Cell Therapy. Source
Demanou M. (A+ Network), Pouillot R. (A+ Network), Grandadam M. (Institute Pasteur du Laos), Boisier P. (A+ Network), and 7 more authors. PLoS Neglected Tropical Diseases, 2014.
Background:Dengue is not well documented in Africa. In Cameroon, data are scarce, but dengue infection has been confirmed in humans. We conducted a study to document risk factors associated with anti-dengue virus Immunoglobulin G seropositivity in humans in three major towns in Cameroon.Methodology/Principal Findings:A cross sectional survey was conducted in Douala, Garoua and Yaounde, using a random cluster sampling design. Participants underwent a standardized interview and were blood sampled. Environmental and housing characteristics were recorded. Randomized houses were prospected to record all water containers, and immature stages of Aedes mosquitoes were collected. Sera were screened for anti-dengue virus IgG and IgM antibodies. Risk factors of seropositivity were tested using logistic regression methods with random effects.Anti-dengue IgG were found from 61.4% of sera in Douala (n = 699), 24.2% in Garoua (n = 728) and 9.8% in Yaounde (n = 603). IgM were found from 0.3% of Douala samples, 0.1% of Garoua samples and 0.0% of Yaounde samples. Seroneutralization on randomly selected IgG positive sera showed that 72% (n = 100) in Douala, 80% (n = 94) in Garoua and 77% (n = 66) in Yaounde had antibodies specific for dengue virus serotype 2 (DENV-2).Age, temporary house walls materials, having water-storage containers, old tires or toilets in the yard, having no TV, having no air conditioning and having travelled at least once outside the city were independently associated with anti-dengue IgG positivity in Douala. Age, having uncovered water containers, having no TV, not being born in Garoua and not breeding pigs were significant risk factors in Garoua. Recent history of malaria, having banana trees and stagnant water in the yard were independent risk factors in Yaounde.Conclusion/Significance:In this survey, most identified risk factors of dengue were related to housing conditions. Poverty and underdevelopment are central to the dengue epidemiology in Cameroon. 
© 2014 Demanou et al.
Jutavijittum P. (Chiang Mai University), Andernach I.E. (Institute of Immunology), Yousukh A. (Chiang Mai University), Samountry B. (Health Science University), and 6 more authors. Vox Sanguinis, 2014.
Background and Objectives: In Lao People's Democratic Republic, hepatitis B virus is highly endemic. However, blood donations are only screened for HBsAg, leaving a risk of transmission by HBsAg-negative occult infected donors. Here, we characterized first-time blood donors to assess prevalence of hepatitis B virus infections and occult infected donors. Materials and Methods: Sera were screened for HBsAg, HBeAg and anti-HBs, anti-HBc and anti-HBe antibodies. Occult HBV infections (OBIs) were assessed in HBsAg-negative sera by PCR, and sera of HBsAg positive and occult infected donors were phylogenetically characterized. Results: 9·6% of the donors were HBsAg positive, and 45.5% were positive for at least one of the hepatitis B virus serum markers. More than 40% HBsAg carriers were HBeAg positive, with HBeAg seroconversion occurring around 30 years of age. Furthermore, 10·9% of HBsAg-negative, anti-HBc and/or anti-HBs-positive donors were occult infected with hepatitis B virus. Thus, at least 3·9% of blood donations would potentially be unsafe, but hepatitis B virus DNA copy numbers greatly varied between donors. Conclusion: In Lao People's Democratic Republic, a sizable proportion of HBsAg-negative and anti-HBc antibody-positive blood donations are potentially DNA positive and infective for hepatitis B. © 2013 International Society of Blood Transfusion. Source
Lee W.-J. (Seoul National University), Brey P.T. (Institute Pasteur du Laos). Annual Review of Cell and Developmental Biology, 2013.
Since Metchnikoff developed his views on the intestinal microflora, much effort has been devoted to understanding the role of gut microbiomes in metazoan physiology. Despite impressive data sets that have been generated by associating a phenotype-causing commensal community with its corresponding host phenotype, the field continues to suffer from descriptive and often contradictory reports. Hence, we cannot yet draw clear conclusions as to how the modifications of microbiomes cause physiological changes in metazoans. Unbiased, large-scale genetic screens to identify key genes, on both microbial and host sides, will be essential to gain mechanistic insights into gut-microbe interactions. The Drosophila genome-commensal microbiome genetic model has proven to be well suited to dissect the complex reciprocal cross talk between the host and its microbiota. In this review, we present a historical account, current views, and novel perspectives for future research directions based on the insights gleaned from the Drosophila gut-microbe interaction model. © 2013 by Annual Reviews. All rights reserved. Source | <urn:uuid:5ac33827-fcda-4ec9-96c6-5a6685a134e7> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/institute-pasteur-du-laos-51481/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00544-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92781 | 1,977 | 2.609375 | 3 |
Researchers from Stanford and a defence research group at Rafael will demonstrate a way to spy on smartphones using gyroscopes at the Usenix Security event on August 22, 2014.
According to the ‘Gyrophone: Recognizing Speech From Gyroscope Signals’ study, the gyroscopes integrated into smartphones were sensitive enough to enable some sound waves to be picked up, transforming them into crude microphones.
Researchers noted: "We show that we can use gyroscopes to eavesdrop on speech without using the microphone at all, which can potentially risk private information such as identity, social security and credit card numbers.
"We show that the acoustic signal measured by the gyroscope can reveal private information about the phone’s environment such as who is speaking in the room and, to some extent, what is being said.
"We use signal processing and machine learning to analyse speech from very low frequency samples."
With further development of such low-frequency signal processing, the researchers claim, it should be possible to raise the quality of the information pulled from the gyroscope.
The researchers added: "We achieve about 50% success rate for speaker identification from a set of 10 speakers.
"We also show that while limiting ourselves to a small vocabulary consisting solely of digit pronunciations ("one", "two", "three", …), we achieve a speech recognition success rate of 65% for the speaker-dependent case and up to 26% for the speaker-independent case."
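The paper's pipeline is far more sophisticated, but the basic idea, recovering frequency content from low-rate gyroscope samples, can be sketched as follows. This uses synthetic data; the 200 Hz sampling rate and the signal amplitudes are assumptions for illustration, not values from the study.

```python
import numpy as np

FS = 200  # Hz; a typical smartphone gyroscope sampling rate (assumption)

def dominant_frequency(samples, fs=FS):
    """Return the strongest frequency (Hz) in a 1-D gyroscope trace using a
    plain FFT; a crude stand-in for the paper's signal-processing pipeline."""
    samples = samples - np.mean(samples)           # remove DC offset
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic test: a faint 90 Hz "speech-band" vibration buried in sensor noise
np.random.seed(0)
t = np.arange(0, 1, 1.0 / FS)
signal = 0.01 * np.sin(2 * np.pi * 90 * t) + 0.001 * np.random.randn(len(t))
print(dominant_frequency(signal))  # → 90.0
```

Because the gyroscope samples so slowly, only frequencies below half the sampling rate (here 100 Hz) are directly observable, which is exactly why the researchers needed machine learning to squeeze speech information out of such narrow-band data.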
Few things can mess up a highly technical system and threaten lives like a counterfeit electronic component, yet the use of such bogus gear is said to be widespread.
A new Defense Advanced Research Projects Agency (DARPA) program will target these phony products and develop a tool to "verify, without disrupting or harming the system, the trustworthiness of a protected electronic component."
DARPA said in March it will detail a program called Supply Chain Hardware Integrity for Electronics Defense (SHIELD) that will develop a small (100 micron x 100 micron) component, or dielet, that authenticates the provenance of electronics components. Proposed dielets should contain a full encryption engine, sensors to detect tampering and would readily affix to today's electronic components such as microchips, the agency said.
DARPA said it envisions this dielet will be inserted into the electronic component's package at the manufacturing site or affixed to existing trusted components, without any alteration of the host component's design or reliability. There is no electrical connection between the dielet and the host component. Authenticity testing could be done anywhere with a handheld probe or with an automated one for larger volumes. Probes need to be close to the dielet for scanning. After a scan, an inexpensive appliance (perhaps a smartphone) uploads a serial number to a central, industry-owned server. The server sends an unencrypted challenge to the dielet, which sends back an encrypted answer and data from passive sensors (like light exposure) that could indicate tampering, DARPA said.
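The challenge-response exchange described above might look something like the following sketch, which substitutes an HMAC for the dielet's onboard encryption engine. All names and key values here are hypothetical illustrations, not DARPA's actual design.

```python
import hmac, hashlib, os

# Shared secret provisioned into the dielet's hardened key storage
# (a made-up value for illustration).
DIELET_KEY = b"per-dielet-secret-key"

def dielet_respond(challenge: bytes, tamper_detected: bool) -> bytes:
    """Dielet side: answer the server's challenge with a keyed MAC over the
    challenge plus one byte of passive-sensor data."""
    sensor_byte = b"\x01" if tamper_detected else b"\x00"
    mac = hmac.new(DIELET_KEY, challenge + sensor_byte, hashlib.sha256).digest()
    return mac + sensor_byte

def server_verify(challenge: bytes, response: bytes) -> tuple[bool, bool]:
    """Server side: check the MAC and extract the tamper indicator."""
    mac, sensor_byte = response[:-1], response[-1:]
    expected = hmac.new(DIELET_KEY, challenge + sensor_byte, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected), sensor_byte == b"\x01"

challenge = os.urandom(16)                 # the server's unencrypted challenge
ok, tampered = server_verify(challenge, dielet_respond(challenge, False))
print(ok, tampered)  # → True False
```

A fresh random challenge each time prevents replay of old answers, which is the essential property a supply-chain authenticator like SHIELD needs.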
"SHIELD demands a tool that costs less than a penny per unit, yet makes counterfeiting too expensive and technically difficult to do," said Kerry Bernstein, DARPA program manager. "The dielet will be designed to be robust in operation, yet fragile in the face of tampering. What SHIELD is seeking is a very advanced piece of hardware that will offer an on-demand authentication method never before available to the supply chain."
The idea behind SHIELD will be to develop what DARPA calls a "hardware root-of-trust" comprising full onboard encryption, intrusion sensors, wireless communication and power, and hardened cipher key storage.
Technical areas DARPA says the program will look to develop include new on-chip hardware-root-of-trust secret key containers, passive sensors that detect potential compromises, ID chip self-destruct mechanisms to counter attempted reverse engineering, new manufacturing process technologies to fabricate, personalize, and place these devices, and the integration and design of the small ID chips comprising these features.
In the end, DARPA says a system that can successfully protect key core systems would be:
- Extremely low cost, with minimal impact to the component manufacturer, distributor, or end-user, as well as to the host component itself;
- Effective at mitigating most supply chain security threats;
- Be simple, very fast, and executable by untrained operators;
- Trustworthy, reliable, and prohibitively difficult to spoof;
- Executable at any place and at any time along the supply chain, providing instant results on‐site;
- Performed using a minimum of specialized, inexpensive interrogation equipment;
- Standardized and widely adoptable by government and industry;
- Manufacturable in high volume using standard foundry processes; and
- A value‐add to the end‐product, recognized and requested by the component consumer.
DARPA will host a Proposers' Day Workshop in support of the SHIELD program on March 14, 2014. More information is available here.
The world of digital forensics involves a very diverse array of tools, some highly specialized and technical, others fairly simple. These tools are constantly evolving as the digital landscape itself changes and becomes more complex (and more defensive, when it comes to those who try to cause harm or conceal their digital footprints). One of the latest tools to enter the market and prove useful to analysts, educators and students of the computer forensics industry is none other than the now famous Google Glass.
These voice controlled glasses, which basically act as a sort of very sophisticated wearable computer, with its own applications and OS interface, can be worn and used in any place with a wireless connection of some kind. Users can integrate their glass with their personal preferences in their Google accounts, use them to find directions, look at interactive maps and access a wide assortment of online information about the physical world that’s actually around them at any time.
In essence, the vast amount of data access available through the Google Glass interface makes it an excellent tool for technical work of any kind, and an especially powerful one for those working or studying in the STEM fields (Science, Technology, Engineering and Mathematics). This same quality, among others, also makes Google Glass a potentially powerful tool for digital security in the private, government and corporate sectors. Here are a few reasons why.
Prospective students of forensics face the moderate but constant dilemma of absorbing theory and technical educational materials on data protection and recovery, and then having to apply them in the real world in a way that flows fluidly from what they have learned.
Google Glass by its very nature makes this process capable of being run much more smoothly and efficiently than ever before.
A student of computer science and forensics in particular can perform field training on damaged or compromised machines and breached corporate networks while simultaneously being able to capture photos of everything he does, take screen shots of his investigative probing work and then share all of this information with colleagues and instructors in real time over social media and cloud sharing platforms.
Furthermore, if stumped on a certain aspect of field training or data analytics, the ability to access online resources and previously downloaded instruction materials would let someone in training much more quickly resolve their problem.
As a basic example: Imagine a police forensics trainee with low level experience sent in to capture as much data from some captured laptops that have just been shut down: the drives themselves are covered by full AES encryption systems but there is a chance of recovering the passwords and other crucial data by performing techniques such as those explained here. By being armed with Google Glass, the trainee could deal with this odd sort of scenario much more effectively while keeping their hands free to work; they could examine research material such as the content of the above article, contact a more experienced instructor and directly get instructions on techniques such as flash freezing the RAM card with compressed air as soon as possible, and at the same time they could contact their lab and notify that a memory that needs to be kept cool until analyzed will be coming in.
General Forensic Documentation
We’ve just gone over the benefits of on the fly advice consultation and documentation of work for digital forensics students, but the same capacities apply to any other IT and digital security professional moving through a complex investigative or work environment.
Imagine being able to walk into a data security crime scene at a corporate office or some other large space with numerous pieces of evidence which need to be collected for collation and later scrutiny. The hassle of multiple pieces of equipment, such as scanning devices and disk imagers will be an inevitable part of your work, yes, but with voice and eye operated power of Google Glass on your face, a lot of your image and video capture needs will be enormously simplified. While imaging a drive or running queries on network servers’ code, you can capture constant video or photo evidence as you work in real time and without interrupting anything you’re doing with your hands. This is where Google Glass has a lot of potential as a major forensic workplace stress reliever.
Stephan Jukic writes for LWGConsulting, a global leader in forensic engineering & recovery solutions. | <urn:uuid:f3e51d09-f64d-4a37-82ea-64b1ff55fa8b> | CC-MAIN-2017-04 | https://articles.forensicfocus.com/2013/09/30/streamlining-digital-forensics-through-google-glass-eyes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956671 | 860 | 2.96875 | 3 |
The ICMP protocol is a set of error, query and response messages that help us troubleshoot and manage our networks every day, at least for those of us in a network engineering role.
ICMP is known as a control protocol because it is used for administration and management within an IP network. Described in RFC 792, ICMP is a vital part of Internet Protocol implementations, but it does not carry application data; it carries network status information. The protocol provides details of:
- issues arising during the core communications and interactions of applications within a network
- network obstacles and congestion
- accessibility of remote, hard-to-reach hosts
The PING utility, for example, uses the Internet Control Message Protocol to check whether a distant host is reachable, and it also generates information about round-trip times. Moreover, TRACEROUTE is a feature supported by ICMP: it can spot the intermediate hops between a specified source machine and an end machine, and it gives us a way to find which hop in the middle of the network is blocking the path of the packet being delivered.
ICMP header organization
Every ICMP packet carries an 8-byte header along with a variable-sized data section. The first 4 bytes of the header are fixed and consistent: the opening byte is reserved for the ICMP type, the second byte stores the ICMP code, and the third and fourth bytes hold the checksum of the whole message. The remaining 4 bytes of the header vary depending on the ICMP type and code. ICMPv4 is the version introduced for IP version 4.
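The fixed 8-byte layout can be unpacked with a few lines of Python's `struct` module. This is a sketch for illustration: the checksum below is a dummy placeholder, not a properly computed Internet checksum.

```python
import struct

def parse_icmp_header(packet: bytes) -> dict:
    """Split a raw ICMPv4 message into the fixed 8-byte header and data."""
    icmp_type, code, checksum, rest = struct.unpack("!BBH4s", packet[:8])
    return {"type": icmp_type, "code": code, "checksum": checksum,
            "rest_of_header": rest, "data": packet[8:]}

# Build an Echo Request (type 8, code 0); for this message type the
# rest-of-header field carries an identifier and a sequence number.
echo = struct.pack("!BBHHH", 8, 0, 0xF7FF, 0x1234, 1) + b"ping-payload"
hdr = parse_icmp_header(echo)
print(hdr["type"], hdr["code"], hdr["data"])  # → 8 0 b'ping-payload'
```

The `!` in the format string selects network (big-endian) byte order, which is how ICMP fields appear on the wire.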
The Internet Relay Chat protocol was introduced for chat: it lets online users exchange synchronized text messages. In practice, users involved in chatting need software at both the sender and receiver sides that decodes and displays the data carried over the IRC protocol, for example Pidgin.
As a synchronous conferencing tool, Internet Relay Chat (IRC) mainly offers group communication in conversation forums (channels), but one-to-one talk is also possible through private messages, as is file sharing.
The Internet information standard RFC (Request for Comments) is defined by FOLDOC as a description of a set of rules for new or customized internet or network protocols. The RFCs relating to IRC provide the technical information about the Internet Relay Chat protocol, but before you dive into these RFCs you should already be familiar with the general help files.
RFC 1459 is the original IRC RFC, published in 1993. In 2000, four more RFCs appeared that dealt with changes to the original document: architecture (RFC 2810), channel management (RFC 2811), client protocol (RFC 2812) and server protocol (RFC 2813).
The open IRC protocol can run over TCP and TLS. An IRC network can be expanded by connecting an IRC server to other Internet Relay Chat servers. Client implementations include mIRC and XChat, while server implementations include IRCd. The IRC protocol is based on a line-oriented format: the user sends single-line messages to the server and receives replies to each individual message, and copies of certain received messages are relayed to other clients. Clients issue the permitted commands by prefixing them with a '/' symbol.
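The line-based exchange just described can be sketched with a couple of helper functions, loosely following RFC 1459. These are illustrative helpers, not part of any IRC library.

```python
def format_command(command: str, *params: str) -> str:
    """Build a single-line client message; a trailing parameter containing
    spaces gets the ':' prefix, per the RFC 1459 message format."""
    if params and " " in params[-1]:
        params = params[:-1] + (":" + params[-1],)
    return f"{command} {' '.join(params)}\r\n"

def parse_line(line: str):
    """Split a server line into (prefix, command, params)."""
    line = line.rstrip("\r\n")
    prefix = None
    if line.startswith(":"):                  # optional server prefix
        prefix, line = line.split(" ", 1)
        prefix = prefix[1:]
    if " :" in line:                          # trailing parameter may hold spaces
        line, trailing = line.split(" :", 1)
        params = line.split() + [trailing]
    else:
        params = line.split()
    return prefix, params[0], params[1:]

print(format_command("PRIVMSG", "#channel", "hello there"), end="")
print(parse_line(":irc.example.net PING :12345"))  # → ('irc.example.net', 'PING', ['12345'])
```

Note that the '/' prefix users type is a client-side convention: what actually crosses the wire are bare command lines like the `PRIVMSG` one above.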
One could say the main reason for IRC's fame is its everyday use on the internet from around the mid-1990s. Users can learn its syntax fairly easily. No single party holds rights over this protocol; it is not a copyrighted, closed approach, so anyone with programming talent can design a program that makes use of it. But if you wish to write an IRC client, you must understand the protocol thoroughly before writing the program.
Thousands of IRC networks run throughout the world on a range of IRC server implementations, and many of them are controlled by groups of IRC operators. The protocol looks the same to IRC users everywhere, so every IRC network can be accessed with the same client software. Where server implementations differ, minor mismatches or restricted functionality are possible.
The client-to-server Internet Relay Chat protocols currently in use are derived from the protocols of IRC version 2.4.0.
Ceacero F. (Institute of Animal Science), Bartosova J. (Institute of Animal Science), Bartos L. (Institute of Animal Science), Komarkova M. (Institute of Animal Science), and 6 more authors. Journal of Animal Science, 2014.
The preorbital gland plays not only an olfactory role in cervids but also a visual one. Opening this gland is an easy way for the calf to communicate with the mother, indicating hunger/satiety, stress, pain, fear, or excitement. This information can be also useful for farm operators to assess how fast the calves habituate to handling routines and to detect those calves that do not habituate and may suffer chronic stress in the future. Thirty-one calves were subjected to 2 consecutive experiments to clarify if observing preorbital gland opening is related to habituation to handling in red deer calves (Cervus elaphus). Calves were born in 3 different paddocks, handled as newborns (Exp. 1), and then subjected to the same routine handling but with different periodicity: every 1, 2, or 3 wk (Exp. 2). In Exp. 1, preorbital gland opening was recorded in newborns during an initial handling (including weighing, ear tagging, and sex determination). Preorbital gland opening occurred in 93% of calves during this procedure and was not affected by sex, time since birth, or birth weight. Experiment 2 consisted of measuring preorbital opening during the same routine handling (weighing, blood sampling, and rump touching to assess body condition) when calves were 1, 3, and 5 mo old. Binary logistic regression showed that gland opening was associated with habituation to handling, since at 1 and 3 mo the probability of opening the gland decreased with the number of handlings that a calf experienced before (P = 0.008 and P = 0.028, respectively). However, there were no further changes in preorbital gland opening rate in the 5-mo-old calves (P = 0.182). The significant influence of the number of previous handlings on the probability of opening the preorbital gland was confirmed through generalized linear model with repeated measures (P = 0.007). Preorbital gland opening decreased along the phases of the study. 
Nevertheless, we found a significant trend in individuals to keep similar opening patterns (intraclass correlation coefficient = 0.807, P < 0.001), which suggests that the more stressed individuals can be detected with this method. Therefore, we conclude that preorbital gland opening during routine handlings is related to the number of previous handlings, and thus it can be used as an indicator of lack of habituation to handling in farmed cervids. © 2014 American Society of Animal Science. All rights reserved.
Many privacy experts, particularly in European countries, have raised concerns about the potential uses of Facebook’s facial recognition technologies. However, an expert in the technology recently told NPR that he thinks Facebook’s facial recognition abilities are not close to being fully implemented.
“Each time you do a comparison, there’s 5 percent chance that it’s wrong,” said Neeraj Kumar, a computer vision expert at the University of Washington. “And that adds up. In fact, it multiplies up. Very quickly, you find that a 95 percent accuracy leads to pretty terrible results when you’re actually trying to answer the question of, ‘Who is this person?’ ”
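Kumar's point about errors "multiplying up" is simple arithmetic: with 95% per-comparison accuracy, the chance that at least one of N independent comparisons goes wrong grows quickly. A toy calculation (assuming each comparison is independent):

```python
def prob_at_least_one_error(accuracy: float, comparisons: int) -> float:
    """Chance that at least one of N independent comparisons is wrong,
    given the per-comparison accuracy."""
    return 1 - accuracy ** comparisons

for n in (1, 10, 50):
    print(n, round(prob_at_least_one_error(0.95, n), 3))
# → 1 0.05
#   10 0.401
#   50 0.923
```

So by the time a system has made 50 comparisons against a photo database, the odds that it has made at least one misidentification exceed 90%, which is why Kumar calls the end results "pretty terrible."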
However, despite the system’s flaws, there is no doubt that Facebook knows it’s sitting on a potential goldmine of user information. Last year, the site bought Face.com, a website whose founders published a paper called “Leveraging Billions of Faces to Overcome Performance Barriers in Unconstrained Facial Recognition.” Despite the current technology’s flaws, it’s clear that facial recognition holds a great untapped promise for any group looking to find individuals online – including the government.
“As we’re seeing specifically over the past few months, no matter how much a company attempts to protect your privacy, if they’re collecting information about you, that information is vulnerable to government search,” said Amie Stepanovich, director of the domestic surveillance project at the Electronic Privacy Information Center.
Every time you tag a photo on Facebook, you’re making it easier to share and connect with your friends. However, you’re also creating a trail of data that facial recognition technology can process. It’s a trade-off, and it’s one that has increasingly come to define the user experience on Facebook.
What is an Optical Attenuator?
An optical attenuator is a device that reduces optical power. It is mainly used for measurement in fiber optic systems, signal attenuation in short-distance communication systems, and system testing. Optical attenuators should be lightweight and compact and offer high precision, good stability and convenient operation. They can be classified as fixed, step-wise variable or continuously adjustable.
Sometimes in an optical network, the optical power level must be reduced between one device and another; a fiber optic attenuator is the device that performs this function. Attenuation is measured in decibels (dB), with fixed fiber optic attenuators typically available from 1 dB to 20 dB. These are called fixed-value attenuators because each piece provides a fixed amount of attenuation.
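Those dB figures translate into power ratios by a standard formula: an attenuation of A dB passes a fraction 10^(−A/10) of the input power. A quick illustrative sketch in Python:

```python
def attenuated_power_ratio(attenuation_db):
    """Fraction of input optical power that survives a given attenuation."""
    return 10 ** (-attenuation_db / 10)

for db in (1, 3, 10, 20):
    print(f"{db:2d} dB -> {attenuated_power_ratio(db):.1%} of input power passes")
```

So a 3 dB attenuator passes roughly half the input power, a 10 dB attenuator one tenth, and a 20 dB attenuator one hundredth.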
Fiber optic attenuators can be designed for use with various kinds of fiber optic connectors. An attenuator can be female-to-female, which is called a bulkhead fiber optic attenuator, or male-to-female, which is called a plug fiber optic attenuator. Bulkhead and plug types are designed without cables; a third type, the inline fiber optic attenuator, is built into a piece of fiber optic cable.
The three basic types of fiber optic attenuators are step-wise variable, continuously variable and fixed. Fiberstore's best-selling fiber optic attenuators are variable attenuators, which offer a range of attenuation values. They are used for testing and measurement, or when you need to equalize the power between different signals. A variable fiber attenuator lets the user vary the light power injected from a light source into the optical fiber. Important parameters of a variable fiber attenuator include its insertion loss, return loss and attenuation range. We supply ST, FC, SC and LC variable optical attenuators in APC and UPC types, with attenuation ranges from 1 dB to 30 dB.
Wide range variable & inline fiber optic attenuator
Inline fiber optic attenuators provide more accurate attenuation than traditional connector-type fiber optic attenuators. What is more, this type of attenuator includes a precision screw set: by turning it, the attenuation can be varied. An inline fiber optic attenuator can also be supplied with various terminations on each side of the cable.
Fiber optic attenuator technical data
You can buy fiber optic attenuator products in our store with confidence. Welcome to Fiberstore.com to choose your fiber optic products. We supply fiber optic products of high quality at low prices.
MPLS-VPNs, Internet Service Providers and IPv6
When you think about MPLS-VPN and the IPv4 or IPv6 Internet, there are two basic problems you want to solve, two important questions you need to answer.
1) How are you providing Internet Access to the MPLS-VPN customers?
2) What if your MPLS-VPN customers are Internet Service Providers who need your MPLS-VPN service to interconnect their remote sites (POPs)?
Why is this a problem? Because MPLS-VPN (and later 6VPE, which added IPv6 support to MPLS-VPN) was not designed to carry the full Internet routing tables in the VPN Routing and Forwarding (VRF) tables. VRFs were designed to connect private networks, not networks the size of the Internet. This is an obvious memory and CPU resource issue: how could a PE manage multiple copies of the Internet routing table (RIB) and their associated forwarding tables (FIB)?
Let’s start with the first issue:
1) Providing Internet Access to your MPLS-VPN customers.
If a PE accommodates 20 VRFs, which is a very reasonable number, and these 20 VRFs must all have access to the full Internet routing table, you cannot copy the table 20 times, once into each VRF. Instead, the Internet routing table must be stored in the PE's global routing table. That way, all the VRF clients hosted on this PE can reach the Internet using the same routing table. Internet access from the CE is then provided by one of two methods:
a) The most simple is for the CE to have two separate (sub-)interfaces to the PE. An interface in the VRF for VRF access. The other is connected to the Global Table space at the PE for Internet Access. Very simple as no specific feature is required. One interface is used for VRF connectivity, the other for Internet access. Sub-interfaces can be used.
b) If this is too expensive and only one interface must be used for both accesses, we must use a few static routes at the PE and the Internet Gateway. At the PE, one static route in the VRF with a next-hop in the Global Routing table to get out from the VRF to the Internet. A static route from the Global Routing table with the outgoing interface in the VRF to the CE to get back from the Internet into the VRF. At the Internet Gateway there must also be a static route for the return traffic coming back from the Internet to the VRF network.
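The memory argument above can be made concrete with rough numbers. The route count and per-route overhead below are purely illustrative assumptions, not platform measurements:

```python
# Illustrative numbers only: the real table size and per-route overhead
# depend on platform and year (the full IPv4 table was on the order of
# 400,000 prefixes around the time this post was written).
full_table_routes = 400_000
bytes_per_route = 200          # rough guess at RIB + FIB state per prefix
vrf_count = 20

one_copy_mb = full_table_routes * bytes_per_route / 1e6
print(f"one copy of the table:  {one_copy_mb:.0f} MB")
print(f"{vrf_count} per-VRF copies: {vrf_count * one_copy_mb / 1e3:.1f} GB")
```

Even with these conservative guesses, per-VRF copies of the Internet table would consume gigabytes of control-plane memory on a single PE, which is why the table lives once in the global routing table instead.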
2) The MPLS-VPN is used to interconnect Internet Service Providers customers.
The customers of the MPLS-VPN have Internet Gateways which receive the full Internet routing tables, and these tables must be announced to the customers' remote sites.
In this case, the MPLS-VPN infrastructure must support Internet Service Provider customers, and we must consider a different design from the vanilla MPLS-VPN. What we need is a design where the ISP customers can exchange their full routing tables directly among themselves, just as VRF routes are exchanged between PEs across the P routers in MPLS-VPN.
This is where the hierarchical Carrier's Carrier (CsC) design comes into play.
To achieve this, the CE, called the CSC-CE, which has received the full Internet routing table from an Internet Gateway, must be able to perform label imposition and disposition. To do this, the CSC-CEs must exchange labeled routes with the CSC-PEs using eBGP. The PE, called the CSC-PE, can then do label switching without any knowledge of the Internet routing table, exactly as P routers label-switch MPLS-VPN VRF packets without any knowledge of the VRF routes in the vanilla MPLS-VPN design.
With CsC, the full Internet routing tables are not learned by the MPLS-VPN backbone (CSC-PE and P routers), but simply carried over a BGP session across the MPLS-VPN backbone. The VRF carries only the routes of the Internet Service Provider's infrastructure; it does not know the customer's Internet routing table.
Because the packets are received with a valid label, the CSC-PEs, and later the Ps, don't need to know the customers' Internet routing tables in order to forward them.
All the features discussed in this post are supported by Cisco IOS for IPv4 and IPv6, running separately or concurrently in a dual-stack configuration.
Fred BOVY, CCIE #3013
Fast Lane’s Resident IPv6 Guru | <urn:uuid:bc0e8bcc-ae07-4210-8989-cd95f1a15723> | CC-MAIN-2017-04 | http://www.fastlaneus.com/blog/2011/08/18/mpls-vpns-internet-service-providers-and-ipv6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00187-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897522 | 984 | 2.53125 | 3 |
Jun 23, 2012 is the 100th birthday of Alan Turing. 76 years ago, Turing, just 24 years old, designed an imaginary machine to solve an important question: are all numbers computable? In doing so, he actually designed a simple but remarkably powerful computing model, among the most powerful known to computer scientists. To honor Turing, two scientists, Jeroen van den Bos and Davy Landman, constructed a working Turing machine. It is not the first time such a machine has been built. The interesting thing this time is that the machine was built entirely from a single LEGO Mindstorms NXT set.
The modern brick design of LEGO was developed in 1958. It was a revolutionary concept. The first LEGO brick built 54 years ago still interlocks with those made in the current time to construct toys and even the Turing machine. When you want to build a LEGO toy or machine, you don’t need to worry about when and where the bricks are manufactured. You focus on the thing you are building and what standard shapes and how many of LEGO bricks you need. And you can get them in any of those LEGO store no matter what you are building.
Sounds familiar? This is very similar to how one would build a cloud service using resources in a shared fabric pool. You don’t care which or what clusters or storage arrays these resources are hosted. All you care is types (e.g. 4cpu vs 8cpu VM) and service levels (e.g. platinum vs. gold) these resources need to support. Instead of taking each element devices, such as computer hosts or storage arrays, as key building blocks, IT now needs to focus on the logic layer that provides computing power to everything running inside the cloud – VMs, storage, databases, and application services. This new way to build services changed everything on how to measure, analyze, remediate and optimize resources shared within the fabric pool in the cloud.
To understand why we need to shift our focus to pools and away from element devices, let's talk about another popular toy – the jigsaw puzzle. Last year, I bought a 3D earth jigsaw puzzle set for my son, who was 3 years old at that time. He was very excited, as he had just taken a trip to Shanghai and was expecting a trip to Disney World. He was eager to learn all the places he had been and would be visiting. So he and I (well, mostly I) built the earth using all those puzzle pieces. The final product was a great sphere constructed with 240 pieces. We enjoyed it for two weeks, until one of the pieces went missing. How can you blame a 3-year-old boy who wanted to redo the whole thing by himself? Now here is the problem: unlike those two scientists who used LEGO bricks to build the Turing machine, I can't simply go to a store and buy just the missing piece. I need to somehow find that missing piece or call the manufacturer to send me a replacement. In IT, this is called incident-based management. When all your applications are built using dedicated infrastructure devices, you have a way to customize those devices, and the way they are put together, to tailor them to the particular needs of each application. If one of those devices has an issue, it impacts the overall health of that application. So you file a ticket and the operations team does triage, isolation and remediation.
In a cloud environment with shared resource pools, things happen differently. Since the pool is built with standard blocks and is shared by applications, you have the ability, through the cloud management system, to set policy that moves VMs or logical disks around if their underlying infrastructure blocks get hit by issues. So a small percentage of unhealthy infrastructure blocks doesn't necessarily need immediate triage and repair. If you monitor only the infrastructure blocks themselves, you will be overwhelmed by alerts that don't necessarily impact your cloud services. Responding to all these alerts immediately increases your maintenance costs without necessarily improving your service quality. Google did a study on the failure rate of their storage devices. They found that the AFR (annual failure rate) of those storage devices is 8%. Assuming Google has 200,000 storage devices (in reality, it may have more than that), every half hour you will have a storage alert somewhere in your environment. How expensive is it to have a dedicated team that keeps doing triage and fixing those problems?
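The arithmetic behind that "every half hour" figure is easy to verify. In the sketch below, the 8 percent AFR comes from the Google study mentioned above, and the fleet size is the text's own assumption:

```python
afr = 0.08           # annual failure rate per device, from the Google study
devices = 200_000    # fleet size assumed in the text

failures_per_year = afr * devices
minutes_between_failures = 8760 / failures_per_year * 60   # 8760 hours/year
print(f"{failures_per_year:.0f} failures/year -> "
      f"one storage alert every {minutes_between_failures:.0f} minutes")
```

Sixteen thousand failures a year works out to one alert roughly every 33 minutes, around the clock.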
So how do we know when services hosted in a pool will be impacted? We give a name to this problem – pool decay. You need to measure the decay state – the combination of the performance behavior of the pool itself and the distribution of the unhealthy building blocks underneath it. In this way, you will be able to tell how the pool, as a single unit, performs and how much capacity it has to provide computing power to hosted services. When you go out to look for a solution that can truly understand the cloud, you need to check whether it has the ability to detect pool decay without giving you excessive false positives. Otherwise, you will just get a solution that is cloudwashing.
Back to my missing piece in the 3D jigsaw set, I finally found it under the sofa. But the lesson learned, I now bought my boy LEGO sets instead.
Next week, we will examine how the resource pool with the automation introduces another well known challenge – outage storm. Stay tuned. | <urn:uuid:78c6a198-e9d1-4f98-985d-1b82d429130c> | CC-MAIN-2017-04 | http://www.bmc.com/blogs/puzzle-pieces-vs-lego-bricks-how-shared-resource-pools-changed-everything-5/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00003-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963883 | 1,098 | 2.875 | 3 |
Researchers from the U.S. Army Armament Research, Development and Engineering Center recently patented a new type of bullet capable of self-destructing after traveling over a predetermined distance.
The idea behind the new and advanced projectile is that it might help limit the extent of collateral damage (read: innocents dying) during battle or in other operational settings and environments.
As for how it all works, the U.S. Army explains that when one of these limited-range projectiles is fired, a pyrotechnical material is ignited at the same time and reacts with a special coating on the bullet.
The pyrotechnic material ignites the reactive material, and if the projectile reaches a maximum desired range prior to impact with a target, the ignited reactive material transforms the projectile into an aerodynamically unstable object.
The transformation into an aerodynamically unstable object renders the projectile incapable of continued flight.
The researchers add that the desired range of its limited-range projectile can be adjusted by switching up the reactive materials used. Put simply, the Army has come up with what effectively amounts to a self-destructing bullet that is rendered ineffective over certain distances.
Currently, the invention is nothing more than a proof of concept, but the Army researchers involved are confident that they're onto something transformative.
"The biggest advantage is reduced risk of collateral damage," researcher Stephen McFarlane said. "In today's urban environments others could become significantly hurt or killed, especially by a round the size of a .50 caliber, if it goes too far."
The Army notes that the project currently lacks any funding from the U.S. Government, so it may be a while before this proof of concept becomes a working prototype, let alone an actual tool used in a combat setting. | <urn:uuid:d741e487-ac21-4b8a-8a32-469b12bb6765> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3037405/hardware/army-researchers-patent-self-destructing-bullet-designed-to-save-lives.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929049 | 360 | 3.25 | 3 |
Definition: A point access method which splits space into a nonperiodic grid. Each spatial dimension is divided by a linear hash. Small sets of points are referred to by one or more cells of the grid.
Note: After [GG98].
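A minimal sketch of the idea in Python. This is purely illustrative: a real grid file splits cells dynamically as buckets overflow and keeps the buckets on disk behind a directory, both of which this toy omits.

```python
from bisect import bisect_right
from collections import defaultdict

class GridFile:
    """Toy 2-D grid file: each dimension has its own (nonperiodic) linear
    scale of split points; a small set of points lives in each grid cell."""

    def __init__(self, x_splits, y_splits):
        # Sorted split points partition each axis into intervals.
        self.x_splits = sorted(x_splits)
        self.y_splits = sorted(y_splits)
        self.cells = defaultdict(list)   # (x_cell, y_cell) -> list of points

    def _cell(self, x, y):
        # The "linear hash": which interval of each axis the point falls in.
        return (bisect_right(self.x_splits, x),
                bisect_right(self.y_splits, y))

    def insert(self, x, y):
        self.cells[self._cell(x, y)].append((x, y))

    def query_cell(self, x, y):
        """All points in the same grid cell as (x, y): exact-match and
        near-neighbour probes touch only this small set."""
        return self.cells[self._cell(x, y)]

g = GridFile(x_splits=[10, 20], y_splits=[50])
g.insert(5, 40); g.insert(7, 45); g.insert(15, 60)
print(g.query_cell(6, 42))   # both points in cell (0, 0)
```

Because each lookup touches only one small cell, a point query costs a couple of directory probes regardless of how many points the structure holds.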
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
Cite this as:
Paul E. Black, "grid file", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/gridfile.html | <urn:uuid:d7bea387-e278-41d2-9bae-6f7c762c44e6> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/gridfile.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.860519 | 164 | 2.53125 | 3 |
Rocket Launches into Northern Lights
February 22, 2012
This past Saturday, a team of scientists launched a small rocket into a northern lights display in an attempt to discover what makes auroras tick, Space.com reported.
The two-stage suborbital rocket blasted off from the Poker Flat Research Range just north of Fairbanks, Alaska, according to the report, and reached a height of about 217 miles as part of a NASA-funded study into how the northern lights can affect signals from global positioning system (GPS) satellites and other spacecraft.
The fisheye photo above was taken by an automated camera near the Poker Flat Research Range entrance gate in Fairbanks, Alaska (photo by Donald Hampton).
Shown below is the two-stage Terrier-Black Brant rocket as it arcs through the northern lights about 200 miles above Earth. Stage one of the rocket had just separated and is seen falling back to Earth in this photo taken by NASA's Terry E. Zaperach. | <urn:uuid:b76b9a55-43d6-4520-9fa9-8081cb0cf8b3> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Rocket-Launches-into-Northern-Lights-02222012.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936926 | 204 | 3.3125 | 3 |
With more diversity in power generation assets and more sophisticated control capabilities, the increasing complexity of the smart grid has heightened the importance of a range of ancillary services for grid operators, including spinning reserves.
Using information and communications technology to gather data, such as information about the behaviors of suppliers and consumers, a smart grid improves the efficiency, reliability, economics and sustainability of the production and distribution of electricity in an automated fashion.
Spinning reserves are storage assets that can come online quickly to serve as bridge power for the grid. Over the coming decade, according to a report from Pike Research, part of Navigant's Energy Practice, the capacity of such systems will grow steadily, as will their value.
By 2022, the study concludes that the total capacity of worldwide spinning reserves for the grid will rise by 40 percent. During that period, the global spinning reserves market will more than double in size in terms of revenue, from $261 million in 2012 to $578 million by 2022.
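Those two revenue figures imply a compound annual growth rate of roughly 8 percent, as a quick check shows (Python; the figures come from the forecast above):

```python
start_revenue = 261e6    # 2012 global revenue forecast, USD
end_revenue = 578e6      # 2022 global revenue forecast, USD
years = 10

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")
```

An 8.3 percent compound rate is what "more than double in ten years" works out to.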
"The traditional technologies that deliver spinning reserves, including all types of dispatchable power plants, are mature and well-understood. However, energy storage technologies, including pumped storage and newer forms of energy storage, are playing a larger role in this market," says Research Analyst, Anissa Dehamna, in a statement.
She added that in the spinning reserves market, energy storage technologies will gain market share as markets begin to differentiate between technologies such as with pay-for-performance regulations.
According to the report, natural gas and coal plants will still account for 93 percent of total spinning reserves capacity by 2022, but energy storage will grow to seven percent of capacity in the same time frame. Of particular interest to grid operators are advances in battery technology, particularly lithium-ion (Li-ion) batteries.
Compared to other forms of energy storage, Li-ion batteries can achieve up to 95 percent efficiency, but they remain relatively expensive. The cost of Li-ion for grid applications can be reduced through economies of scale. The report, "Spinning Reserves for the Grid", details a scenario-based forecast and global market analysis of the capacity and revenue associated with this rapidly changing market.
Edited by Amanda Ciccatelli | <urn:uuid:99a57c39-f4b1-4303-80fb-043c879e8a9c> | CC-MAIN-2017-04 | http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/11/12/315471-increasing-complexity-the-smart-grid-heightens-spinning-reserves.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944403 | 459 | 2.703125 | 3 |
October has been a deadly month for computer science types. On October 5, Apple legend Steve Jobs passed away; a week later C language creator Dennis Ritchie was found dead; and this week artificial intelligence guru John McCarthy died at the age of 84.
McCarthy had his hand in many computer science advances during his long career, but is perhaps best known for creating Lisp, a programming language that is still in wide use today. Lisp also became the premier language for exploring AI, the computer science domain that McCarthy pioneered during the latter half of the 20th century.
Lisp refers to the language’s focus on LISt Processing, in which both the data and instructions are represented as linked lists. By being able to manipulate the code as data, Lisp provided the capability to create new custom syntaxes within the language. This feature was advertised as a way for programmers to design and implement “intelligent” computation systems. McCarthy said he designed Lisp so programmers could create Turing machines — software that reflects a level of automated intelligence bounded by a set of rules.
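The "code as data" idea is easier to see in miniature. The toy evaluator below is written in Python, with nested Python lists standing in for Lisp's s-expressions; it illustrates the principle and is not McCarthy's original design:

```python
import operator

# Toy illustration of "code as data": programs are plain nested lists,
# so they can be built, inspected and rewritten like any other data.
ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, (int, float)):   # numbers evaluate to themselves
        return expr
    if isinstance(expr, str):            # symbols are looked up in ENV
        return ENV[expr]
    if expr[0] == "quote":               # (quote x) returns x unevaluated
        return expr[1]
    fn, *args = [evaluate(e) for e in expr]
    return fn(*args)

program = ["*", ["+", 1, 2], 4]          # the s-expression (* (+ 1 2) 4)
print(evaluate(program))                 # -> 12
print(evaluate(["quote", program]))      # the program itself, as data
```

Because `program` is an ordinary list, a running program can build, inspect or rewrite it before evaluating it, which is the property that made Lisp so attractive for AI work.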
As a big proponent of AI, McCarthy is credited with coining the term “artificial intelligence” in 1955. A year later, he organized the first international conference on AI, bringing together the early adherents to the field, including Marvin Minsky. In these early days, artificial intelligence was oversold, as even McCarthy realized, admitting that his 1958 paper, Programs with Common Sense, “made projections that no one has yet fulfilled.”
The other area of computer science that McCarthy is less well-known for is that of utility computing and time sharing. The idea of offering computing as a utility like electricity or water gained popularity in the 1960s, but faded, mostly due to lack of enabling technologies like fast networks and cheap computers. By the 21st century, both networks and compute capacity became commodities, leading to grid computing, and more recently, of course, cloud computing. Some attribute McCarthy’s early work in this area as the foundation of the public and private cloud models in use today.
Compared to Jobs and Ritchie, McCarthy’s work was much more theoretical, but it may turn out to have even broader impact on the industry. Although AI and utility computer were mostly confined to computer science research projects during most of his career, he managed to live long enough to see IBM computers beat humans at chess and then Jeopardy, and individuals to be able to buy compute cycles from a company that sells books over the internet. | <urn:uuid:ec298ab1-9158-4f0d-8c75-7a381e46480e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/10/27/ai_computer_legend_passes_away/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00234-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97773 | 513 | 3.28125 | 3 |
One might argue about how additional mobile bandwidth can be supplied, but it is hard to argue that more capacity is unnecessary, as some believe. Ericsson, for example, points out that smartphones, including the Apple iPhone, Android and Windows devices, typically generate five to 10 times more traffic than other “low-traffic” devices.
In fact, the changing nature of devices supported by mobile networks accounts for much of the explosion in bandwidth demand, though users’ desire to consume video arguably drives much of the new demand, across device types. Smartphones are one example.
And, of course, the big problem is that the use of smartphones is becoming the new normal. Ericsson estimates that high-traffic device share reached around 50 percent at the end of 2011, and that such devices will represent the vast majority of devices in use by 2017. That will be the case even in the developing world. And smartphones are only part of the problem. Overall, mobile data is expected to have almost doubled during 2011, and, as is typically the case, mobile PCs dominate traffic in most mobile networks today.
Smartphone traffic is growing faster, but most users consume vastly more data when they access mobile networks to support a tablet or PC applications, compared to smartphone apps. It isn’t that the smartphone apps themselves necessarily consume less bandwidth, more that the activities people engage in on PCs and tablets consume more bandwidth.
As Ericsson says, mobile broadband connections used to support 3G routers (1 Gbyte to 16 GBytes per month) represent the largest bandwidth consumption. Mobile bandwidth used to support PCs tends to range from 1 Gbyte to 7 GBytes per month.
So far, tablets represent a load of about 300 MBytes to 1,600 MBytes a month, while mobile phones represent only about 30 MBytes to 230 MBytes per month.
Machine-to-machine (M2M) traffic is very small, with an average volume below 10 MBytes per subscription in all measured networks, Ericsson says. One reason is that most M2M traffic consists of low-bandwidth apps such as security surveillance, fleet management, and point-of-sale terminals.
Edited by Brooke Neuman | <urn:uuid:a0717a2c-989c-4e66-adde-cd1ea19add01> | CC-MAIN-2017-04 | http://www.mobilitytechzone.com/topics/4g-wirelessevolution/articles/2012/06/11/294111-new-mobile-devices-impose-order-magnitude-new-bandwidth.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00142-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925155 | 439 | 2.71875 | 3 |
1.5 What are cryptography standards?
Cryptography standards are needed to create interoperability in the information security world. Essentially they are conditions and protocols set forth to allow uniformity within communication, transactions and virtually all computer activity. The continual evolution of information technology motivates the development of more standards, which in turn helps guide this evolution.
The main motivation behind standards is to allow technology from different manufacturers to "speak the same language", that is, to interact effectively. Perhaps this is best seen in the familiar standard VHS for video cassette recorders (VCRs). A few years ago there were two competing standards in the VCR industry, VHS and BETA. A VHS tape could not be played in a BETA machine and vice versa; they were incompatible formats. Imagine the chaos if all VCR manufacturers had different formats. People could only rent movies that were available on the format compatible with their VCR. Standards are necessary to ensure that products from different companies are compatible.
In cryptography, standardization serves an additional purpose; it can serve as a proving ground for cryptographic techniques because complex protocols are prone to design flaws. By establishing a well-examined standard, the industry can produce a more trustworthy product. Even a safe protocol is more trusted by customers after it becomes a standard, because of the ratification process involved.
The government, private industry, and other organizations contribute to the vast collection of standards on cryptography. A few of these are ISO, ANSI, IEEE, NIST, and IETF (see Section 5.3). There are many types of standards, some used within the banking industry, some internationally and others within the government. Standardization helps developers design new products. Instead of spending time developing a new standard, they can follow a pre-existing standard throughout the development process. With this process in place consumers have the chance to choose among competing products or services.
- 1.1 What is RSA Laboratories' Frequently Asked Questions About Today's Cryptography?
- 1.2 What is cryptography?
- 1.3 What are some of the more popular techniques in cryptography?
- 1.4 How is cryptography applied?
- 1.5 What are cryptography standards?
- 1.6 What is the role of the United States government in cryptography?
- 1.7 Why is cryptography important? | <urn:uuid:052bde34-e38b-49e1-877d-5e9b94530fe5> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-cryptography-standards.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00536-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946253 | 483 | 3.609375 | 4 |
Can SSL and TLS be made Compatible?
We are sometimes asked if there is any way to make SSL and TLS be compatible with each other. On the surface, this may seem almost nonsensical, but there are cases where such a question actually makes sense!
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are fundamentally the same form of encryption – see SSL versus TLS – what’s the difference. But if that is the case, doesn’t that make them automatically compatible? Well, not really.
How are SSL and TLS Used?
SSL and TLS are used to secure communications over the Internet, e.g. POP, IMAP, SMTP, Web site traffic, Exchange ActiveSync traffic, API connections, and much more. Their use helps ensure that you are connecting to the proper servers and that the communications are not eavesdropped upon.
The actual encryption mechanisms used by SSL and TLS are the same; the difference relates to how the encryption is initiated.
- SSL: With a server expecting an SSL connection, it expects the user’s computer to start negotiating security immediately … nothing can happen until the SSL connection is established and the mechanism of establishing SSL is the same no matter what will go through that secure connection once it is established.
- TLS: With TLS, the server expects an unencrypted connection from the user’s computer with the computer “speaking the language” of whatever service it is trying to talk to (e.g. SMTP to send outbound email). Before anything sensitive is said, your computer can issue commands in that language to start negotiations to make the communications channel encrypted (e.g. with SMTP, your computer would issue the “STARTTLS” command and then negotiate with the server to get things encrypted). Once encryption is established, all the important things like your username, password, and data are sent across safely and securely.
So with SSL, you talk security first, business second. With TLS, you start talking business first, but it's small talk. You talk security second and then important business third. The level of security is the same, and the important business is protected in both cases.
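The two initiation styles can be seen in Python's standard smtplib, which supports both. This is an illustrative sketch, not LuxSci's code; the hostnames and ports in the comments are placeholders:

```python
import smtplib
import ssl

# Two ways to secure the same SMTP conversation:
#
#   Implicit SSL ("security first"): the TLS handshake begins as soon as the
#   TCP connection opens, before any SMTP commands are exchanged.
#
#   STARTTLS ("business first"): connect in plaintext, speak enough SMTP to
#   issue the STARTTLS command, then upgrade the same connection.

def secure_smtp(host: str, port: int, implicit_ssl: bool) -> smtplib.SMTP:
    """Return an SMTP client secured either implicitly or via STARTTLS."""
    ctx = ssl.create_default_context()
    if implicit_ssl:
        # e.g. secure_smtp("mail.example.com", 465, True)
        return smtplib.SMTP_SSL(host, port, context=ctx)
    # e.g. secure_smtp("mail.example.com", 587, False)
    server = smtplib.SMTP(host, port)
    server.starttls(context=ctx)  # the in-protocol upgrade described above
    return server
```

Once either handshake completes, the encrypted channel is identical; only the opening moves differ.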
So, why are they not compatible?
If you have a program that can only talk SSL, say, and not TLS, but you need to connect to a service that only supports TLS … you can't do it. Your program wants to talk security first; their system wants to do service-specific small talk first. They don't jibe.
A good example might be an outbound email program that can do TLS on port 25 (the standard SMTP port) and SSL on alternate ports (like 465) but which was never made so it could do TLS on alternate ports. Old versions of Microsoft Outlook had this quirk. If you could not connect to port 25 because your ISP was blocking you and you needed to connect securely to an alternate port, you’d better hope there was one with SSL support, because you would not be able to connect securely to an alternate TLS port.
Is there any way to make them compatible?
Well, there is no way to make a “square” SSL peg connect to the “round” TLS hole or vice versa. At least, not without putting some kind of adapter in between.
The simplest solution when you need to connect to a remote server that supports SSL is usually to use a program like "stunnel". This is a program that acts like an adapter:
- It runs on your local computer or server.
- It establishes a connection using SSL to a remote system that talks SSL (it doesn’t matter for what protocol).
- You connect your software insecurely to the local stunnel server. E.g. you connect without SSL or TLS, but that is OK since you are connecting from your computer to itself.
- Your communications then go securely from your computer to the remote server over SSL due to the stunnel connection.
This works great if your program can connect without SSL or TLS and the remote server uses SSL. It is trickier if you need to connect to a TLS-only server and your program only supports SSL. There is no good simple solution for this case except for possibly:
- Updating your program to one that supports TLS.
- Contacting the service provider to see if they have any alternate SSL-supporting ports.
- Using a different provider that supports SSL.
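For the workable case (your program speaks plaintext locally, the remote server speaks SSL), a minimal client-mode stunnel configuration might look like this. The service name, addresses, and ports below are illustrative, not from any real deployment:

```ini
; stunnel.conf -- client mode: accept plaintext locally, speak SSL upstream
client = yes

[smtp-ssl]
accept  = 127.0.0.1:1025        ; your mail program connects here, unencrypted
connect = mail.example.com:465  ; stunnel makes the SSL connection for you
```

Your email program is then pointed at localhost port 1025 instead of the remote server, and stunnel handles the SSL leg.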
LuxSci supports many standard and non-standard ports to address these restrictions.
Applications are the most difficult parts of an IT infrastructure to secure because of their complexity and because they often need to accept input from a variety of users. Here are guidelines for lowering the risk of a system intrusion due to an application flaw:
Assume all installed applications are flawed; don't rely on the security programmed into them.
Physically remove from the system all applications not being used.
Use firewalls, content filters and OS user authentication features to restrict access to the application, and provide access only to those who absolutely must have it.
Update all applications to the latest patches when security bulletins are released.
Internally developed applications need to be code-reviewed for security weaknesses. Consider an external security review for critical applications.
Externally facing Web applications are high-risk applications because they are a bridge between the outside world and internal customer databases. Be sure to add code that can block or otherwise safely deal with all of the following hostile inputs: missing page parameters, parameters that are unusually long, parameters with nulls or hexadecimal encoding, parameters with Web browser script blocks (which are used to mount cross-site scripting attacks), and parameters with quotes and semicolons (likely attempts to send hostile SQL commands through to the database).
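A server-side check covering that list of hostile inputs could be sketched as follows. The function name, size limit, and error handling are assumptions for illustration, not from any specific framework:

```python
import re

# Rejects the hostile inputs listed above: missing values, oversized values,
# nulls / hex-encoded nulls, browser script blocks, and quote/semicolon
# characters commonly used in SQL injection attempts.

MAX_LEN = 256
SCRIPT_RE = re.compile(r"<\s*script", re.IGNORECASE)

def validate_param(params: dict, name: str) -> str:
    value = params.get(name)
    if value is None:                      # missing page parameter
        raise ValueError(f"missing parameter: {name}")
    if len(value) > MAX_LEN:               # unusually long parameter
        raise ValueError(f"parameter too long: {name}")
    if "\x00" in value or "%00" in value:  # nulls / hexadecimal encoding
        raise ValueError(f"null byte in parameter: {name}")
    if SCRIPT_RE.search(value):            # browser script blocks
        raise ValueError(f"script block in parameter: {name}")
    if any(c in value for c in "'\";"):    # quotes / semicolons
        raise ValueError(f"forbidden character in parameter: {name}")
    return value
```

In practice, parameterized SQL queries and output encoding are the preferred defenses; blanket character blocking is shown here only because it mirrors the checklist above.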
If possible, write applications in languages that run in virtual machines--such as Java, Visual Basic .Net or C#--because they provide an extra layer of security protection. Avoid C and C++ because they make it easy to write applications that allow buffer overflow attacks.
Also in this Special Report
Ignorance: The Hacker's Best Friend
Here Be Dragons: Web Services Risks
Threats to Come
Trail of Destruction: The History of the Virus
Community Builds Security: Labs Answers Your Security Questions
WLAN Hardening Checklist
Operating System Hardening Tips
Site home page
Get alerts when Linktionary is updated
Book updates and addendums
Get info about the Encyclopedia of Networking and Telecommunicatons, 3rd edition (2001)
Download the electronic version of the Encyclopedia of Networking, 2nd edition (1996). It's free!
Contribute to this site
Electronic licensing info
Note: Many topics at this site are reduced versions of the text in "The Encyclopedia of Networking and Telecommunications." Search results will not be as extensive as a search of the book's CD-ROM.
A node is a network-connected device such as a workstation, a server, or a printer. Network connection devices such as bridges and routers are not usually referred to as nodes on a network, even though they have network addresses.
In the IP addressing scheme, network nodes such as computers are called "hosts," while routers are sometimes called "gateways" – an older term that is rarely used in this sense today, because "gateway" now refers to application layer devices that join systems or networks and provide translation services between them.
Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
Replication is a strategy for automatically copying data to other systems for backup reasons or to make that data more accessible to people at other locations. Replication can also distribute data to multiple servers that can make that information available to multiple simultaneous users. A load-balancing scheme distributes incoming requests to an appropriate server. This is the concept of a clustered system and is often used for busy Web sites.
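The load-balancing idea can be sketched in a few lines: a pool of replicated servers handed out in rotation. The server names below are made up for illustration:

```python
import itertools

# Minimal round-robin distribution across replicas.
# Each incoming request is handed to the next server in the rotation.
class RoundRobinPool:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

pool = RoundRobinPool(["replica-a", "replica-b", "replica-c"])
```

Real clustered systems add health checks and weighting, but the distribution principle is the same.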
This topic continues in "The Encyclopedia of Networking and Telecommunications" with a discussion of the following:
Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
A fiber optic cable contains one or more optical fibers in a single jacket and is a particularly popular technology for local-area networks. Fiber optic cable is composed of microscopic strands of glass. Information, in the form of bytes of data, can travel through this glass over longer distances and at higher bandwidths (data rates) than through other types of cable. The optical fiber elements are typically individually coated with plastic layers and contained in a protective tube suitable for the environment where the cable will be deployed.
Advantages and Drawbacks of Fiber Optic Cables
Fiber optic cables, which consist of bundles of glass threads each capable of transmitting messages modulated onto light waves, carry communication signals using pulses of light. Fiber optic cables have several advantages over traditional metal communications lines:
Fiber optic cables have a much greater bandwidth than metal cables. This means that they can carry more data;
Fiber optic cables are less susceptible than metal cables to interference;
Fiber optic cables are much thinner and lighter than metal wires;
Data can be transmitted digitally (the natural form for computer data) rather than in analog form.
The main drawback of fiber optic cables is that they are expensive to install. In addition, they are more fragile than wire and are difficult to splice.
While expensive, these cables are increasingly being used instead of traditional copper cables, because fiber offers more capacity and is less susceptible to electrical interference. So-called Fiber to the Home (FTTH) installations are becoming more common as a way to bring ultra-high speed Internet service (100 Mbps and higher) to residences. In recent years, the cost of small fiber-count pole-mounted cables has greatly decreased due to the high demand for FTTH fiber optic cable installations in Japan and South Korea.
Fiber cable can be very flexible: it can be bent with a radius as low as 7.5 mm without adverse impact, and even more bendable fibers have been developed. Bendable fiber may also be resistant to fiber hacking, in which the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage. Traditional fiber's loss, however, increases greatly if the fiber is bent with a radius smaller than around 30 mm. This creates a problem when the cable is bent around corners or wound around a spool, making FTTX installations more complicated. Bendable fibers, targeted towards easier installation in home environments, have been standardized as ITU-T G.657.
New Hollow Fiber Optic Cable
Fiber optic cables are usually made of glass or plastic but those materials actually slow down the transmission of light ever so slightly. Researchers at the University of Southampton in the UK have created a hollow fiber optic cable filled with air that’s 1000 times faster than current cables. Since light propagates in air at 99.7 percent of the speed of light in a vacuum, this new hollow fiber optic cable is able to reach data speeds of 10 terabytes per second. Now that’s fast. While the idea isn’t new, it’s previously been hampered by signal degradation when light travels around corners. This new hollow fiber optic cable reduces data loss to a manageable 3.5dB/km, making it suitable for use in supercomputer and data center applications.
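To put a loss figure like the 3.5 dB/km quoted above in perspective, decibel loss converts to a power ratio as 10^(−dB/10). A quick sketch (the function name is ours, not from the article):

```python
# Fraction of optical power remaining after `km` kilometers at a given
# attenuation in dB/km: dB loss maps to a power ratio of 10 ** (-dB / 10).
def power_fraction(db_per_km: float, km: float) -> float:
    return 10 ** (-(db_per_km * km) / 10)

# At 3.5 dB/km, a little under half the light survives one kilometer.
```

This is why even "manageable" per-kilometer losses compound quickly over long runs and must be budgeted for with amplifiers or repeaters.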
FiberStore provides a wide range of fiber optic cable products including Indoor Cables, Outdoor Cables, FTTH Cables, Armored Cables and some Special Cables. They are used as Aerial Cables, Building Cables, Direct Buried Cables, Duct Cables, and Underwater/Submarine Cables. Customers have the flexibility to choose a cable to best fit their needs.
A survey by Trend Micro suggests that British teens might be tempted by illegal online methods to make money. One in three teens (aged 12 – 18) admitted they would consider hacking or spying on people online if it meant they could make some fast cash. The survey exposes lack of “e-morals” at a time where kids are spending a significant amount of their time online.
The survey, which polled 1,000 teens and parents across the UK, revealed that kids don’t appear to have any sense of netiquette when it comes to their online behavior. It found:
- Over one in 10 teens thought it was "cool" or "funny" to pretend to be someone else online
- One in seven 12 to 13 year olds have actually done this
- Over four out of ten teens have hacked into another person's profile to read emails, looked at bank account details, or logged onto another person's social networking profile
- One in three teens have admitted to being tempted to try hacking or spying on the internet to make money
- Boys, it would seem, were almost twice as likely as girls to log into someone's social networking site
- Girls were up to three times more likely than boys to enter someone's online shopping or bank accounts without the owner knowing.
Tips for protecting your kids online
- Keep all computers in common areas.
- Agree to time limits for using the Internet and all social devices.
- Keep software security up-to-date.
- Talk with your kids about entering personal information online.
- Run a manual scan with your software security and check browser history.
- Set profiles on social networking sites to private.
- Encourage children to be respectful of others.
- Teach children to have multiple passwords that are NOT associated with names, nicknames or commonly found information over the net.
- Most importantly, keep informed about the latest outbreaks and dangers on the Internet.
Hal and I realized that in the previous episode we brought up a new topic but never explained it. I broke out the "msg" command, and he used "wall." In case you couldn't figure it out, these commands are used to send a message to users on the system.
The Windows command Msg is used to send a message to one or more users based on username, sessionname, or sessionid. The username is the most common way of directing a message to a user.
C:\> msg hal You have no chance to survive, make your time!
This command simply sends a message to Hal. As mentioned above, a message can also be directed based on the session name or id. To determine the session id or name refer to episode 62.
We can also send a message to all users on the system by using the asterisk.
C:\> msg * Someone set up us the bomb!
What if we don't want to send the message to all the users, but more than one user? We can do that! It does require that we have a file containing a list of usernames to whom we would like to direct our message.
C:\> msg @mostlyeveryone.txt Someone set up us the bomb!
We can also send the messages to users on other systems by using the /SERVER switch.
C:\> msg * /SERVER:otherbox All your base are belong to us!
However, this command doesn't just send messages; it can also be used to get an acknowledgment. The /V option displays information about which actions have been performed, such as sending a message and acknowledgments. The /W option waits for a response from the users. Say we send a message to Hal and want to make sure he gets it; this is how we would do it:
C:\> msg hal /V /W Did you make your time?
Sending message to session Console, display time 60
Message to session Console responded to by user
The first message lets us know that a message was sent to Hal. The second means that either Hal responded, or the 60-second timer elapsed. It's a bit weird that the message is the same either way, but welcome to the wonderful world of Windows commands.
If we don't think that 60 seconds is long enough for Hal to respond, we can use the /TIME option to explicitly specify the duration of the message.
C:\> msg hal /V /W /TIME:3600 Did you make your time?
This command will wait one hour for a response; more than enough time for Hal to "make his time!"
By the way, these lost-in-translation quotes are taken from the internet meme All Your Base Are Belong To Us!. As a side note, I tried to use these quote in an earlier episode, but Hal corrected my grammar and fixed it. I can't believe Hal doesn't have endless hours to waste watching silly videos on the internet. Is that what the internet is for?
Hal is a little hard of hearing:
I guess I'm just having trouble keeping up with you kids and all your Internet shenanigans. But all of this writing to people's terminals is making me fondly remember my days using dumb terminals to talk with my friends on time-sharing systems. You can't type faster than 9600 baud, so who the heck needs 10Gbps? And get off my lawn!
If I wanted to tell Tim to get off my lawn, I could do that with the write command:
$ write tim
write: tim is logged in more than once; writing to pts/2
Get off my lawn!
Specify the user you want to send a message to as an argument. As you can see, Tim is logged in on multiple PTYs, so the write command will by default send the message to the lowest numbered device. But you can specify a specific device to write to as an optional argument: "write tim pts/2", for example.
Once you hit return on the command line, anything you type in subsequent lines is sent to the specified user (each line normally gets sent immediately when you hit Enter). When you're done entering text, just hit <Ctrl>-D. Tim ends up seeing a message like this:
Message from hal@caribou on pts/1 at 16:43 ...
Get off my lawn!
Now random grumpy old men shouting into your terminal windows can be distracting when you're trying to get work done. So Tim has the option of blocking messages on his PTY with "mesg n":
$ ls -l /dev/pts/2
crw--w---- 1 tim tty 136, 2 2011-01-10 16:50 /dev/pts/2
$ mesg n
$ ls -l /dev/pts/2
crw------- 1 tim tty 136, 2 2011-01-10 16:50 /dev/pts/2
With no arguments, the mesg command tells you what the current status of your PTY is-- the default is to be accepting messages, or "y". As you can see in the output above, running "mesg n" simply removes the group-writable flag from your PTY. The write command is set-GID to group "tty" so that it can write messages to users terminals when "mesg y" is set.
But in our last Episode, I used the wall command to send New Year's greetings to everybody on the system. In its simplest form, wall accepts input on the standard in and blasts it to all currently connected users:
$ echo Get off my lawn! | wall
And the users see:
Broadcast Message from hal@caribou
(/dev/pts/1) at 16:58 ...
Get off my lawn!
If you are the superuser, wall can also be used to write messages stored in a text file.
# wall /etc/shutdown-message
The other advantage to running wall as the superuser is that your message will even go to the terminals of those users who have set "mesg n". After all, "mesg n" works by changing permissions on the PTY devices, but root can write to any file regardless of permissions.
Now that I've educated you whipper-snappers on this outmoded technology, it's time for my nap. Why don't you kids go shoot some marbles or play with your hula-hoops?
Definition: A function computed by a Turing machine that need not halt for all inputs.
See also partial function.
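A hedged illustration in Python (ours, not from the dictionary): a function implementing unbounded search is partial in exactly this sense, returning on some inputs and looping forever on others.

```python
# Unbounded search: f(n) returns the least k with n + k*k == 25, if one
# exists. For inputs with no such k (e.g. n = 26), the loop never halts --
# like a Turing machine that need not halt for all inputs, f is defined
# only on part of its domain.
def f(n: int) -> int:
    k = 0
    while True:
        if n + k * k == 25:
            return k
        k += 1
```

Here f(25) = 0 and f(21) = 2, but f(26) never returns.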
Note: From Algorithms and Theory of Computation Handbook, page 24-19, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "partial recursive function", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/partialrcrsv.html
The 1980s and 90s
In the 1980s, manufacturers put renewed emphasis on the quest for a device that could recognize handwriting, relying on a stylus for input. During this period, companies like Pencept and the Communication Intelligence Corporation made inroads into that technology; in 1988, Wang Laboratories offered Freestyle, a "digitizing tablet" that allowed users to hand-write or annotate on any computer screen, using a stylus to drag elements around the desktop.

During this period, Apple also took its first steps into the tablet PC arena. In 1987, the company (then still known as Apple Computer, Inc.) produced some glossy concept videos for a device called Knowledge Navigator. Folding on a hinge like a conventional notebook, the tablet featured a talking avatar and the ability to recognize and respond to a user's speech. As a concept, it was even more futuristic than Kubrick's vision, but Apple was also working on something much more real-world: the Newton project, which bore fruit in 1993 with the launch of a handheld device capable of handwriting recognition.

Even though Apple CEO Steve Jobs would end up killing the Newton in 1997, the device retains a cult following. Whether organizing a "to do" list or cycling through contacts, Newton represented yet another take on the same vision posited by PenPoint OS and the similar software emerging at that time: the ability to manipulate digital assets in ways familiar to anyone who ever used a pen and paper.

For at least the last year of its official life, the Newton also found itself locked in competition with Palm, perhaps the most famous early producer of PDAs. Powered by Palm OS, the devices relied on a stylus-supported graphical interface. Microsoft was also exploring touch technology, eventually releasing Windows for Pen Computing for Windows 3.1x as a sort of counterstrike to the PenPoint OS. The company would continue to update the software throughout the 1990s.
Years later, Microsoft found itself the target of lawsuits alleging it had tried to destroy Go Corporation in the early 1990s.
A year later, GRiD Systems Corporation released the GRiDpad touch-screen computer. Also in the late 1980s, GO Corporation began working on PenPoint OS, a stylus-based operating system it would introduce to the public in 1991.
5.2.5 What is SecurID?
SecurID is a two-factor authentication system developed by Security Dynamics (now RSA Security). It is generally used to secure either local or remote access to computer networks. Each SecurID user has a memorized PIN or password, and a hand-held token with a LCD display. The token displays a new pseudo-random value, called the tokencode, at a fixed time interval, usually one minute. The user combines the memorized factor with the tokencode, either by simple concatenation or entry on an optional keypad on the token, to create the passcode, which is then entered to gain access to the protected resource.
The SecurID token is a battery powered, hand-held device containing a dedicated microcontroller. The microcontroller stores, in RAM, the current time, and a 64-bit seed value that is unique to a particular token. At the specified interval, the seed value and the time are combined through a proprietary algorithm stored in the microcontroller's ROM, to create the tokencode value.
An authentication server verifies the passcodes. The server maintains a database which contains the seed value for each token and the PIN or password for each user. From this information, and the current time, the server generates a set of valid passcodes for the user and checks each one against the entered value. For more on SecurID, see http://www.emc.com/security/.
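The actual SecurID algorithm is proprietary, but the structure described above (a secret per-token seed combined with the current time interval to yield a short tokencode) can be sketched with a standard HMAC-based construction in the spirit of TOTP (RFC 6238). This is an illustration only, not the real algorithm:

```python
import hashlib
import hmac
import struct

# Illustrative only: RSA's real SecurID algorithm is proprietary. This mimics
# the same shape -- seed + current time interval -> short tokencode -- using
# the HMAC truncation scheme from RFC 4226/6238.

def tokencode(seed: bytes, now: float, interval: int = 60, digits: int = 6) -> str:
    counter = int(now // interval)              # which time window we are in
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def passcode(pin: str, seed: bytes, now: float) -> str:
    # Simple concatenation of the memorized factor and the displayed tokencode.
    return pin + tokencode(seed, now)
```

A verifying server holding the same seed recomputes the expected tokencode for the current window (and usually the adjacent windows, to allow for clock drift) and compares.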
There are many ways to secure a voice network. In this article we will see the first of them, one that must be configured in every serious network: implementing auxiliary VLANs, which makes VoIP networks more secure by carrying voice and data traffic on separate VLANs.
By default, voice and data traffic travel the same way, across the same cable and the same switch. That means calls and all other network traffic are carried together, and any user on the network can capture that data with a network sniffing tool like Wireshark. An attacker can use this default setup to capture call packets crossing the network and reproduce the call in .mp3 or some other sound format. We need to separate the voice network from the data network completely in order to make it impossible to sniff call packets from a user's computer.
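On Cisco switches this separation is typically configured with a voice (auxiliary) VLAN on each access port. A hedged IOS-style sketch follows; the interface name and VLAN IDs are made up for illustration:

```
! Illustrative only -- interface names and VLAN IDs are assumptions.
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10     ! data VLAN for the attached PC
 switchport voice vlan 110     ! auxiliary VLAN for the attached IP phone
 spanning-tree portfast
```

With this in place, a sniffer on the data VLAN no longer sees the phone's voice packets, because they are switched on a separate VLAN.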
ISO 14001 Certification
ISO 14001 Certification is a general term that is used for two main things; certifying the knowledge of individuals and certifying a company’s environmental management system. What is ISO 14001 certification, you may ask?
Certifying of Individuals
ISO 14001 certifications for individuals allow the people involved to develop and advance a career in environmental management system auditing by proving their skills to potential employers. The individual certifications supply the information, knowledge and skills needed to create and maintain an Environmental Management System (often called an EMS) for a company using ISO 14001 as a basis. There are many courses available, including: two-day courses on implementing ISO 14001, ISO 14001 Internal Auditor courses of two to three days, and even ISO 14001 Lead Auditor training, which is usually in a five-day format and includes a test.
The lead auditor course can only be delivered by a company that itself has been accredited as being able to present ISO 14001 lead auditor training. Additionally, an individual can only be certified as an ISO 14001 Lead Auditor if the course they took was certified as an acceptable course. Certified individuals can be hired by a certification body (explained below) to audit a company’s Environmental Management System against the ISO 14001 standard.
For more information on ISO 14001 training and becoming an ISO 14001 lead auditor see ISO 14001 Training.
Certifying an Environmental Management System
The ISO 14001 certification process for companies starts when the company decides to implement an EMS that meets the ISO 14001 requirements. After this the company will utilize the ISO 14001 requirements as a guideline of what needs to be done and will take all actions necessary to create, document, maintain and review their system. After this is complete the certified lead auditors from a certification body (sometimes called a Registrar) will audit and assess the environmental management system against the requirements of ISO 14001. If the auditors find that the system meets the standard an ISO 14001 certificate will be issued showing that the company’s EMS is acceptable and the company is then considered ISO 14001 certified.
For a good overview of this process see ISO 14001 Implementation Process Diagram.
The Cycle for a Company to Maintain ISO 14001 Certification
ISO 14001 certification is not simply a one-time event, since the ISO 14001 requirements call for the EMS to be maintained and improved. The certification body will have an agreement with the company that continues after the initial audit (the certification audit) and will include ongoing routine audits of the system (called surveillance or maintenance audits). Typically this agreement will cover a three-year cycle, with the initial certification audit covering the entire ISO 14001 EMS and the next two years' maintenance audits only reviewing a portion of the system. As per the agreement between the company and the certification body, the maintenance audits can happen annually, twice per year, or even more often. Each element of the entire system is audited at least once during the two years of maintenance audits. If at the end of three years the company chooses to maintain the ISO 14001 certification and the benefits it provides, the cycle will start again with a recertification audit that reviews the entire system.
How Networking Underpins Technology
Networking and systems professionals play a central role in the information technology industry. To find proof of this, you don’t have to look further than the list of vocational communities on this site. Can you find one that networking doesn’t impact in some way? Wireless, security, storage, open-source—all are substantially influenced by the job role that dominates the IT workforce in both effects and numbers. Because of the weight networking carries in technology, it’s worthwhile to take a look at the rise of networking and where it stands today.
A (Very) Brief History of Networking
Although the first example of networking through technology arguably began in the 1870s with Alexander Graham Bell’s telephone, for this audience we’ll start in 1939. This was the year Englishman Alan Reeves devised pulse code modulation (PCM), a means of transmitting virtually (no pun intended) any kind of data: books, films, songs, you name it. This was the beginning of binary, which was expressed in digits 1 (high logic) and 0 (low logic). However, Reeves’ invention was not practical for the general public’s use—yet.
In 1957, the U.S. Department of Defense formed the Advanced Research Projects Agency (ARPA) in response to the Soviet Union’s launch of the Sputnik satellite. In the late 1960s, the ARPANET project commenced with a network of four computers in Cambridge, Mass. ARPA designed the network to help scientists and engineers collaborate on assignments. This unexceptional beginning (or so it would seem today) directly led to the development of the Internet.
Several of the ideas that made networking feasible for the widespread dissemination of information, news and entertainment came out of the ARPANET project. A couple of these include packets, or pieces of data routed over a network, and TCP/IP (Transmission Control Protocol/Internet Protocol), which is essentially the idiom of the Web. In the 1980s, ARPA handed ARPANET over to the National Science Foundation’s NSFNET, which in turn handed over the vBNS (very high-speed Backbone Network Service) in the 1990s to a consortium of organizations that included MCI and IBM. The Internet was born.
It can be hard to pin down networking as a profession these days. Networking can be partitioned into a variety of subsets, including configuration, physical distance, modality, links and users. This is great for businesses, which can find suitable individuals to devise the best network to connect their workforce to each other and to customers. It’s also beneficial for IT professionals, who benefit from the wide variety of highly technical niches. This creates occupational security because it’s difficult to automate or outsource these specialized and mission-critical technologies.
Broadly defined, networking is the installation, configuration, management, maintenance and troubleshooting of two or more linked computers. Within that general explanation are local area networks (LANs), which share a single communications line or wireless link within a single processor or server that serves a small locale, metropolitan area networks (MANs), which cover a town, city or large cluster of buildings, and wide area networks (WANs), which span whole geographic regions.
Of course, there also are topologies such as token ring, which has a circular structure; bus, which has units feeding into a single line; and star, which has a central computer branching out to the rest of the network. And we can’t forget communications (data, voice or both) and transmission technology (systems network architecture or TCP/IP). I could go on and on and on.
Networking and Technology
It’s easy to see networking’s impact on technology as a whole. Just try this exercise: Take every networking product or service out of the technology picture today and see what you have left. For starters, no Internet—no connected computers at all, for that matter. You’d have no e-mail and no instant messaging. Forget about cell phones or phones of any kind, really. Ditto for fax machines. Cable and satellite television would be gone, as would radio broadcasts. Without networking, how in the world would people communicate with each other? Smoke signals? Pony Express?
This exercise gets to the heart of why networking is so important: It facilitates communication and collaboration, which have been essential to our survival since our first ancestors went on mastodon hunting parties. Human progress owes a great deal to networking. So thanks, network and system admins. We at CertMag salute you and your contribution to civilization.
–Brian Summerfield, email@example.com | <urn:uuid:f93a8b24-3b8f-4c76-a286-eca50ec0f54c> | CC-MAIN-2017-04 | http://certmag.com/how-networking-underpins-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00022-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941775 | 980 | 3.0625 | 3 |
Tech Glossary – G to H
Gigahertz (GHz)
One gigahertz is equal to 1,000 megahertz (MHz) or 1,000,000,000 Hz. It is commonly used to measure computer processing speeds. For many years, computer CPU speeds were measured in megahertz, but after personal computers eclipsed the 1,000 MHz mark around the year 2000, gigahertz became the standard measurement unit. After all, it is easier to say “2.4 Gigahertz” than “2,400 Megahertz.”
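The conversions in this entry are simple enough to script; a quick sketch (the helper names are mine, not standard):

```python
# Frequency unit conversions: 1 GHz = 1,000 MHz = 1,000,000,000 Hz.
def ghz_to_mhz(ghz):
    return ghz * 1_000

def ghz_to_hz(ghz):
    return ghz * 1_000_000_000

print(ghz_to_mhz(2.4))  # the "2.4 Gigahertz" CPU above, expressed in MHz
```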
HDMI (High-Definition Multimedia Interface)
HDMI is a digital interface for transmitting audio and video data in a single cable. It is supported by most HDTVs and related components, such as DVD and Blu-ray players, cable boxes, and video game systems.
Heat Sink
CPUs include a heat sink, which dissipates heat from the processor, preventing it from overheating. The heat sink is made of metal, such as a zinc or copper alloy, and is attached to the processor with a thermal material that draws the heat away from the processor toward the heat sink. Heat sinks can range in size from barely covering the processor to several times the size of the processor if the CPU requires it.
Hyper-Threading
Hyper-threading is a technology developed by Intel Corporation. It is used in certain Pentium 4 processors and all Intel Xeon processors. Hyper-threading technology, commonly referred to as “HT Technology,” enables the processor to execute two threads, or sets of instructions, at the same time.
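One visible effect of hyper-threading is that the operating system sees each physical core as two logical processors. A minimal check with Python's standard library (a sketch; `os.cpu_count()` reports the logical count, which on an HT-enabled CPU is typically double the physical core count):

```python
import os

# os.cpu_count() reports *logical* processors, i.e. hardware threads.
logical_cpus = os.cpu_count()
print(f"Logical processors visible to the OS: {logical_cpus}")
```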
The OSE Real-time Kernel combines rich functionality with high performance and true real time behavior. It is a fully pre-emptive kernel, optimized to provide high rates of data throughput yet compact enough for use in most embedded systems.
Designed for use in distributed and multiprocessor systems, the kernel includes automatic supervision of processes. This feature enables highly fault-tolerant systems to be created. Inter-process communication is completely transparent, regardless of whether the communicating processes are on the same processor or on different ones. The OSE kernel also allows dynamic reconfiguration.
The OSE Real-time Kernel supports advanced memory management that allows application code to be run in protected areas of memory. It also includes comprehensive error handling and powerful source and application level debug features.
The course is theoretical and will give an insight into the architecture and building blocks of OSE. Short paper-and-pencil exercises also demonstrate the basic principles: how to design an OSE real-time system with signals and processes, how to use system calls, and how to configure OSE.
To give the basic principles and knowledge needed to understand the implementation of OSE as a real-time operating system.
Who should attend?
Project leader, System designer, System programmer, Application programmer, Real-time programmer and System tester.
None, although a basic understanding of embedded real-time operating systems is recommended.
Recommended following courses
OSE Next step | <urn:uuid:f77d3541-5b90-4750-837d-15c11181bc48> | CC-MAIN-2017-04 | http://www.enea.com/training/TrainingSummaryPage/?item=849 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.877856 | 285 | 2.984375 | 3 |
Virtualization, Deduplication Helping the Green Cause
The Green Grid is a global consortium dedicated to advancing energy efficiency in data centers and business computing ecosystems. The nonprofit group is focused on defining meaningful, user-centric models and metrics; developing standards, measurement methods, processes and new technologies to improve data center performance against the defined metrics; and promoting the adoption of energy-efficient standards, processes, measurements and technologies.
Here are some of the key data points in the Green Grid's recent whitepaper (PDF), "Green Grid Metrics: Data Center Infrastructure Efficiency (DCiE) Detailed Analysis."
- Complete knowledge and understanding of each component in the data center and its power requirements.
- Knowledge and charting of total facility power; data center input power; power for building lighting, security and cooling.
- Knowledge and charting of minimum measurement at interval power: Level 1 (week/month), Level 2 (daily) and Level 3 (continuous).
- Knowledge and use of the DCiE formula: IT equipment power divided by total facility power; this ratio is the DCiE.
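The formula in the last bullet is a one-liner. Here is a sketch with made-up readings (the variable names and numbers are mine; the whitepaper often expresses DCiE as a percentage, so multiply by 100 if you prefer that form):

```python
def dcie(it_equipment_power, total_facility_power):
    """Data Center infrastructure Efficiency: IT power / total facility power."""
    return it_equipment_power / total_facility_power

# Hypothetical readings in kW, for illustration only:
print(dcie(500, 1000))  # 0.5 -> half the facility's power reaches IT equipment
```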
In order for DCiE to become a global metric, two important requirements that all must follow are:
- The data center manager must correctly classify each subcomponent that comprises the metric's two core contributors; and
- The data center manager must obtain the data inputs that create DCiE's two core contributors in the same method; i.e., utilize a consistent method for data capture; actual measurements must always be used.
Verizon and AT&T both have begun implementing these data points. | <urn:uuid:854c1b9c-d1da-46f4-8559-185c06eace8e> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Green-IT/Verizon-Green-Grid-Join-Forces-to-Update-International-Data-Centers/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.861348 | 338 | 2.796875 | 3 |
I’ve written before about Python being a popular - if not the top - choice of programming languages for beginners. Many people cite Python as the best language to start with thanks to its low barrier to entry, flexible syntax and the fact that it teaches good programming fundamentals. If the anecdotal evidence wasn’t enough, this week we got some harder numbers to back up Python’s claim to the title of best start programming language.
Philip Guo, an assistant professor of computer science at the University of Rochester, wrote on the Communications of the ACM blog about a study he recently undertook to quantify just how popular Python has become as a teaching language at the college level. Guo looked at the course offerings at the top 39 college computer science departments in the U.S., as ranked by U.S. News & World Report. He tallied up which languages were offered by these schools in introductory programming courses to both CS and non-CS majors.
Guo found that 27 of the 39 schools (69%) use Python in their introductory programming courses. More specifically, 8 of the top 10 programs, including CMU, MIT and Caltech, use Python as a starting language. Java, a traditionally popular choice as a first language, is a relatively close second, used by 22 of the 39 programs (56%). The other languages that these schools offer beginners (C, C++, Matlab, Scheme and Scratch) lag much further behind.
Not surprisingly, Guo’s work quickly generated a lot of reaction from the developer community. Many people, both educators and students, praised the choice of Python for beginners for many of the same reasons that have been shared in the past.
“The move to Python gained us, among other things, the freedom not to have to explain Java's generics and access modifiers. We used to spend too much time on syntax and other details… that we should have been spending on data structures and algorithms.” candeira
“Python is so perfect for learning on. No special characters for variables, forced neat code due to indentation rules, and its object orientated so you can have that fun discussion.” farking_pko
“Python is easier to understand and get into. And Python has some amazing features that you want, such as first-class functions. And it's a real-world language, which you can't say about Clojure, even with its growing popularity.” reuven
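The "first-class functions" praised in the comment above are easy to demonstrate: Python functions are ordinary values that can be stored in data structures and passed around like anything else.

```python
def shout(text):
    return text.upper() + "!"

def whisper(text):
    return text.lower() + "..."

# Functions stored in a list and applied like any other value:
styles = [shout, whisper]
print([style("Python") for style in styles])  # ['PYTHON!', 'python...']
```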
Others, however, felt the results were skewed by Guo’s choice to include introductory programming courses offered to non-CS majors, referred to as “CS0” classes, as opposed to “CS1” classes which are the introductory courses for those choosing to major in computer science.
“There are certainly some schools, big ones, that use Python in CS1. But a lot of the Python representation is coming from CS0 at schools where CS1 is taught using Java.” blahedo
Guo himself acknowledged this hybrid approach in his blog post. He also joined the discussion on Hacker News to defend his choice of not differentiating between what language CS and non-CS majors learn first, by noting that some CS majors will take CS0 first if they don’t feel ready for CS1. He also argued that “CS0 is just as important as an ‘introductory programming’ course as CS1, if not more important, due to the rise of the non-software-engineers-who-want-to-learn-programming population.” Guo wrote that he doesn’t plan to redo his analysis separately for CS and non-CS majors.
Given that so many non-programmers need to be able to do some programming these days, I think Guo’s approach makes sense. However you measure it, though, it’s hard to deny the popularity of Python as a starter programming language. It will be interesting to see if the pro-Python trend continues in the future.
Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld. Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:ed154b88-9d4a-40e1-8d90-7f80a60554c1> | CC-MAIN-2017-04 | http://www.itworld.com/article/2696289/cloud-computing/more-evidence-that-python-is-the-best-starter-programming-language.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963176 | 893 | 2.765625 | 3 |
Smart grids are a fundamental component of the European critical infrastructure. They are rooted in communication networks that have become essential elements for leveraging the “smart” features of power grids.
Smart grids provide real-time information on the grid, perform actions when required without any noticeable lag, and support gathering customer consumption information. On the downside, however, smart grids provide an increased attack surface for criminals.
For instance, smart meters can be hacked to cut power bills, as happened in Spain in 2014. Alternatively, a DDoS attack or malware infection could knock out communications and control of the network, halting energy production and affecting several systems across borders.
To protect networks and devices from cyber threats, a new ENISA study focuses on the evaluation of interdependencies to determine their importance, risks, mitigation factors and possible security measures to implement.
There is high exposure of smart grid devices that makes it essential to harmonize the current situation by establishing common interconnection protocols. It has also become imperative to seek aligning policies, standards and regulations across the EU to ensure the overall security of smart grids.
These aspects have grown in importance because of the risk of cascading failures: smart grid communication networks are no longer limited by physical or geographical barriers, and an attack on one country could cross physical and virtual borders.
The recommendations of this report are addressed to operators, vendors, manufacturers and security tools providers in the EU and they include the following:
- Foster intercommunication protocol compatibility between devices originating from different manufacturers and vendors
- Develop a set of minimum security requirements to be applied in all communication interdependencies in smart grids
- Implement security measures on all devices and protocols that are part, or make use of the smart grid communication network. | <urn:uuid:d2dd96bc-313f-481a-9a7c-94df7a33d37d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2016/02/01/defending-the-smart-grid-what-security-measures-to-implement/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930438 | 358 | 2.984375 | 3 |
XSL and CSS complement each other on document layout
What is it?
XSL (Extensible Stylesheet Language) is used to define the layout of XML documents in a presentation medium such as a web browser window or a printed page. XSL includes the transformation language XSLT, which converts XML into formats such as HTML, PDF and Braille, or into other XML formats such as typesetting languages.
Where did it originate?
The World Wide Web Consortium's (W3C) proposed recommendation for XSL pointed out that without a style sheet, a processor "could not possibly know how to render the content of an XML document other than as an undifferentiated string of characters". The proposal was submitted in 1997 by authors from a number of organisations including Microsoft, ArborText and the University of Edinburgh. XSL builds on the W3C's work on Cascading Style Sheets.
What is it for?
"It is not a replacement for your Wysiwyg authoring tools [Wysiwyg is short for 'what you see is what you get'], but it is useful for some problems, such as very large documents or those derived from database content, that are not well served by the current tools," said Stephen Deach, a senior member of the W3C-XSL working group.
XSL provides a comprehensive model and a vocabulary for writing style sheets using XML syntax. There are three elements: XSLT; XPath, a language for defining parts of an XML document; and XSL-FO (Formatting Objects), a language for formatting XML documents.
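Running XSLT from Python requires a third-party engine such as lxml, but the XPath idea (addressing parts of an XML document) can be sketched with the limited XPath subset built into Python's standard library:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<catalog>"
    "<book lang='en'><title>XML Basics</title></book>"
    "<book lang='fr'><title>Guide XSL</title></book>"
    "</catalog>"
)

# XPath-style expression: titles of books whose lang attribute is 'en'.
titles = [t.text for t in doc.findall(".//book[@lang='en']/title")]
print(titles)  # ['XML Basics']
```

A full XSLT stylesheet would express the same selection declaratively in a template rule rather than in host-language code.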
What makes it special?
The W3C said, "XSL is a language quite different from CSS and caters for different needs. Aimed by and large at complex documentation projects, XSL has many uses associated with the automatic generation of tables of contents, indexes, reports and other complex publishing tasks."
Stephen Deach writes, "CSS was limited to what was needed for browsers and easy for the browser manufacturers to implement."
Although CSS can be used to style HTML and XML documents, XSL can transform XML data into HTML/CSS documents or other formats. The two languages complement each other and can be used together.
How difficult is it to master?
Straightforward - and essential - for those learning XML. However, IBM researcher Jared Jackson said, "This means that developers accustomed to writing in Java code or C who learn XSL often find themselves in foreign territory when using XSL's more advanced features."
Where is it used?
Not just in web and XML document design, but also printing. The W3C said XSL aims to allow the specification of printing of web documents to work as well as a word processor. Future support for high-end print typography is planned.
What systems does it run on?
Fewer suppliers and tools support XSL than CSS, although Microsoft, Adobe and others are committed to supporting final W3C specifications.
See the W3C XML site and the Cover Pages
What is coming up?
XSL 2.0 is making its way through the W3C review process.
The W3C website has links to XSL tutorials, articles and training, including Mulberry Technologies and online XSL guide Zvon.org.
Rates of pay
XSL is used in a wide range of roles including web publishing, .net and Java development and rates vary accordingly. | <urn:uuid:0a2db2e6-76ee-4ca3-ba3e-80abe22f8824> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240057094/XSL-to-improve-web-print-support | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00143-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922698 | 740 | 3.5 | 4 |
Where servers, storage and networking combine to form Voltron.
February 9, 2015
I have to admit to being a little thrown off by the numbers. It starts off well enough with the premise that Fibre Channel provides about 100MBps of throughput per gig. The author then proceeds to show how Ethernet provides 125MBps (true 1Gbps) by upping the baud rate from 1GBaud to 1.25GBaud. But wait, 1G FC doesn’t use 1GBaud, it uses 1.0625GBaud. Following the same math used to show Ethernet throughput, 1G FC should provide 106.25MBps, not 100MBps.
OK, so we’re only 6.25MBps off here. The general statement of 1G FC provides ~100MBps throughput is still close enough to call it a wash. However, this discrepancy is exaggerated for higher speeds of FC where the author never actually runs the numbers. At the end, the author makes a pretty big point about how 128G FC is only 300MBps faster than 100G Ethernet. Knowing that the numbers for 1G FC are off, lets look at the actual numbers for 128G FC.
Throughput = (Data Bits / Encoded Bits) * Baud. (64/66)*112.2G=108.8Gbps. Divide by 8 for bits->bytes, multiply by 1000 for GBps->MBps, 108.8Gbps/8*1000=13600MBps. 13600 is a fair way off from 12800. This makes the difference between 128G FC and 100G Ethernet 1100MBps, not 300MBps.
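The arithmetic in this thread can be written out directly. The helper name below is mine; the numbers reproduce the figures discussed above (encoding efficiency times baud rate gives the raw bit rate, then divide by 8 for bits to bytes and multiply by 1000 for Gbps to MBps):

```python
def throughput_mbps(gbaud, data_bits, encoded_bits):
    """MBps = (data_bits / encoded_bits) * gbaud / 8 * 1000."""
    return gbaud * data_bits / encoded_bits / 8 * 1000

print(throughput_mbps(1.0625, 8, 10))  # 1G FC, 8b/10b   -> ~106.25 MBps
print(throughput_mbps(1.25, 8, 10))    # 1G Eth, 8b/10b  -> ~125 MBps
print(throughput_mbps(112.2, 64, 66))  # 128G FC, 64b/66b -> ~13600 MBps
```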
I think the basic idea that speed in FC and Ethernet are measured differently is absolutely worth knowing. It’s also probably a good idea to remember that 1G of FC provides ABOUT 100MBps of throughput. But if you’re going to report on the numbers in a side-by-side comparison, it is important to consider the “ABOUT” in that statement. Or am I missing some sort of overhead in FC transmission that isn’t reported here?
On the Fibre Channel side, I’m not sure what the cause is for the small baud discrepancy. Perhaps another signalling/protocol overhead that they bump up slightly to make room for FC frames (looks like about 6.25%, and it’s consistent).
For transfer speed (measured in megabytes per second) I use the transfer speeds that the vendors and the T11 standard (http://www.t11.org/ftp/t11/pub/fc/pi-5/11-011v0.pdf page 30 of the PDF) all report. They all say 100/200/400/800/1600 etc. for transfer speeds, so I’m going with those transfer speeds.
Just perusing some docs, I can’t find the source of the discrepancy. Great question, I’ll look into it more.
Fascinating, I hadn’t realized that those numbers were from the standard itself. Obviously I would hope that the standards body wouldn’t simply “forget” 6.25%, so I’m sure there is a missing variable there. Also, my apologies for continually referencing “the author” without bothering to notice that you are the author of the video. Great stuff, keep up the good work!
Recent data breaches tell us what private and public sector victims are dealing with: disruption, reputational damage, and significant financial repercussions. They can also find themselves attracting the undesirable attention of regulators. Like those suffered recently by the IRS and Ashley Madison, data breaches have ignited the discussion about the role that federal regulators should play in holding organizations accountable.
US Congress has not yet adopted sweeping legislation governing data security. Even in cases of these large-scale, headline-grabbing data breaches with massive financial settlements, there has not been a clear path by which the federal government can file cases of wrongdoing. This may now be changing.
Over the past few months, many state and federal regulators have stepped up their focus on data security, conducting their own examinations and investigations, and ultimately levying fines for non-compliance, or lack of adequate security measures to protect consumer information.
Perhaps most significant was a ruling in August 2015 from a federal US appellate court confirming that the Federal Trade Commission (FTC) has the authority to take legal action against an organization for not adequately safeguarding customer data. This ruling widely confirms the FTC’s authority to regulate companies that are negligent in the loss of consumer data to hackers.
So what does this ruling mean? The court’s decision demonstrates that information security must be treated like any other protective measure, and that having inadequate cybersecurity measures in place will not be excused.
In many cases, organizations have acted recklessly by storing sensitive information without encryption, or placing passwords on sticky notes. In these cases, government bodies like the FTC will be able to make a clear argument that this lack of security equates to insufficient protection, and the organization can therefore be held accountable.
One of the challenges both the FTC and future organizations will face is making a clear case that the proper safeguards were in place. As we’ve seen, cyberattacks come in many shapes and sizes and therefore there is no definitive checklist for protecting corporate or customer data. Defining a fair standard by which every organization must adhere will be a discussion point and serve as an arena of debate for some time.
Navigating data compliance
It is challenging for organizations to understand and comply with the many well-meaning regulatory requirements, particularly if such requirements are veiled as suggestions.
It’s critical for businesses to protect themselves and their customers by implementing and adhering to formal security procedures. In the coming year, the European Union is poised to introduce its General Data Protection Regulation (GDPR), which would impose new privacy rules on any organization that processes personal data through the offering of services or goods to citizens in the European Union. While no such blanket regulation exists in the US, several industries have been issued increasingly larger regulatory fines for not complying with existing industry-specific legislation. The introduction of new legislation in Europe could be a catalyst for similar legislation in the U.S.
There is no one panacea solution when it comes to ensuring the integrity of your corporate network and the security of customer data. Organizations need to adopt a layered approach that includes encryption, anti-malware, and endpoint security. It is also important to conduct frequent and comprehensive security audits on the well-being of your data security.
Education and staff awareness are also critical. Having a formal procedure for what is expected in the event of a breach can often help expedite the containment process to mitigate potential risks. Internal awareness training should be conducted regularly across the organization.
With greater regulatory oversight than ever before, organizations must ensure they are investing in and prioritizing the protection of their sensitive data, across all levels of the organization.
Sweeping legislation like the EU GDPR may be inevitable, but time will tell if this form of governance will encourage organizations to prioritize security.
This article is the third installment in Orlando's series on virtual operations support teams (VOSTs). The first installment, Lessons Learned From The Social Media Tabletop Exercise, is available here. The second installment, Structured Networks & Self-Coordinated Disaster Response, is available here.
“We all have to understand that there will never again be a major event in this country that won’t involve public participation. And the public participation will happen whether it’s managed or not.”
— Coast Guard Admiral Thad Allen, Incident Commander for the Deepwater Horizon Oil Spill
“A cynical streak in society looks at all forms of amateur participation as either naïve or stupid.”
— Clay Shirky
The Wisdom Of Crowds
A central tenet of disaster response is that disasters must be handled by a small group of professionals, rather than the large mass of amateurs. This seems plausible enough, but it turns out that some types of problems are better addressed by a large group of amateurs than by a small group of professionals. As counter-intuitive as it seems, in certain types of situations the crowd will consistently out-perform the experts.
- In 1906, Francis Galton was visiting a livestock fair when he stumbled upon an interesting contest. Local villagers were asked to guess the weight of an ox, with the closest guess winning a prize. Of the over 800 guesses, nobody got the exact weight of 1198 pounds. Afterwards, Galton asked for the ballots and upon compiling them found that the average guess was 1197 pounds. To his surprise, the average guess was more accurate than any particular guess.
- Sociologist Kate H. Gordon had 200 students guess and rank items by weight, and found that the group average was 94 percent accurate, better than all but 5 of the students. In another experiment, Jack Treynor had 56 students guess the number of jelly beans in a jar. The jar had 850, the average guess was 871, and only one student made a better guess (Surowiecki, page 5).
- When the submarine Scorpion was lost at sea in 1968 the U.S. Navy wanted to find the wreckage, but its intelligence was too scant to produce a reasonable search zone. Then Navy scientist John Craven had an idea: He gathered together a wide group of individuals with different talents, from math to salvage to submarines. He then asked them to bet on the submarine’s location. He compiled all of the bets and came up with a group average. The submarine was found an amazing 220 yards from that location.
- The Defense Advanced Research Projects Agency (DARPA) is also actively identifying those types of problems that are better handled through “the wisdom of crowds.” One fascinating study was its “Network Challenge.” DARPA announced that on a specific day it would hoist 10 red balloons around the country, and the first person or group to find all 10 would win $40,000. The choice of competition was not an accident. They deliberately chose a challenge that could not be handled by the current US intelligence community. As they reported:
“The geo-location of ten balloons in the United States by conventional intelligence methods is considered by many to be intractable; one senior analyst at the National Geospatial Intelligence Agency characterized the problem as ‘impossible’.”
The location of 10 red balloons in DARPA's Network Challenge.
A team from MIT found all ten in less than 9 hours. How? They created a network that began with four members and rapidly grew to over 5,000. The original crew created a formula that rewarded people who found a balloon. But it also rewarded those who passed along information on a balloon and those who dispelled false claims. You see, it turns out that some teams were deliberately feeding misinformation to other teams to throw them off the scent.
DARPA did this to test what would happen if terrorists tried to feed misinformation into intelligence gathering systems. What they found was that when misinformation was fed into the social media system, the system tended to self-correct.
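The widely reported version of MIT's incentive formula, a detail the article doesn't spell out, so treat it as an assumption, halved the reward at each step up the referral chain: $2,000 to a balloon's finder, $1,000 to whoever recruited the finder, $500 to that person's recruiter, and so on. A sketch:

```python
def payouts(referral_chain, balloon_prize=2000.0):
    """Reward halves at each step from the finder up the chain of recruiters."""
    return {person: balloon_prize / 2 ** depth
            for depth, person in enumerate(referral_chain)}

print(payouts(["finder", "recruiter", "grand_recruiter"]))
# {'finder': 2000.0, 'recruiter': 1000.0, 'grand_recruiter': 500.0}
```

The geometric decay means even long chains cost a bounded amount per balloon, while still giving everyone an incentive to recruit.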
“Crowdsourcing” is also being used in disaster response. For instance, within days of the Haitian earthquake a group of volunteer programmers built a system on OpenStreetMap that allowed people to compare post-earthquake satellite imagery of the disaster with a Google street map and tag a map of the area with information such as “destroyed building, partially destroyed building, hospital,” etc. Then hundreds of volunteers around the world did the tagging, which was turned into a cell-phone and GPS app that rescuers used to guide their efforts.
More recently, the United Nations used “The Standby Task Force,” a group of 700 volunteers from around the world, to gather information about the Libyan crisis. The volunteers monitored information from both mass media and reports on the ground, feeding it into the Ushahidi crowdsourced mapping service, which allowed the UN to keep abreast of the situation on the ground. Ushahidi is now being used to crowdsource information in every disaster that arises in the world, including the current situation in Syria.
Harnessing The Wisdom Of Crowds
These examples do not mean that there is no place for professionals in disaster response, but rather that crowds can provide information and accomplish tasks that disaster managers cannot. It would have taken weeks, if not months, for a governmental agency to set up and populate the OpenStreetMap for Haiti, by which time its benefits would have been greatly reduced. Disaster responders must stop thinking of the public as a problem to be managed, and start thinking of it as a resource to be harnessed. Similarly, business continuity professionals can start harnessing the power of their company’s employees in a disaster. This requires understanding the factors that lead to the “Wisdom of Crowds.”
Surowiecki has identified four basic elements for the Wisdom of Crowds:
- Diversity of opinion: Each person should have private information, even if it’s just an eccentric interpretation of the known facts.
- Independence: People’s opinions aren’t determined by the opinions of those around them.
- Decentralization: People are able to specialize and draw on local knowledge.
- Aggregation: Some mechanism exists for turning private judgments into a collective decision.
The traditional “command and control” model of disaster response closes decision-making around a small group of experts who are assumed to be more reliable than all others. But this model can undermine good decision-making by making it blind to outside input:
“Groups of smart and not-smart agents make better decisions than just smart agents because of a diversity of opinion.” (Surowiecki, p. 30)
A group of experts in a particular field will have a specific perspective based on selectively filtering information. However, “the crowd is holding a nearly complete picture of the world in its collective brain.” (Surowiecki, p. 11)
Small groups of experts are also susceptible to the interpersonal forces of committees. Before both the space shuttle Challenger and Columbia disasters, people sounded warnings about the conditions that led to the events. But in both cases the relevant committees that could have acted on the warnings chose to ignore them because once one or more people lent their opinion to one side of the debate, others fell in line. This is often called “groupthink,” and occurs when opinions are formed sequentially among a small group, rather than in aggregate.
Once one or two people chime in on a particular position, others assume that it must carry the weight of logic and agree for no other reason than that others agree. This leads to a decision cascade that overwhelms all of the better judgments (Surowiecki, p. 63).
Crowdsourcing can serve as a counter-balance to the interpersonal forces that might steer a closed group of experts off course. The public is a resource that can assist disaster responders by providing valuable information and perspectives.
The secret is knowing how to gather the information so as to make the public a resource. This is done with a “Virtual Operations Support Team” (VOST)—a group of individuals who monitor social media to feed information back to responders. The Red Cross, public agencies and private companies are increasingly turning to the VOST as a resource during disasters.
Orlando will conduct a one-day pre-conference workshop on Creating & Running A Virtual Operations Support Team (VOST) at the 11th Annual Continuity Insights Management Conference, April 22-24, 2013 at the Sheraton San Diego Hotel & Marina. See http://www.cimanagementconference.com/pre-post-conference-workshops for more information.
The next article in this series focuses on the structure of a VOST. | <urn:uuid:b4dcd4e9-3da6-4d8e-9d75-5b1ad895744a> | CC-MAIN-2017-04 | http://www.continuityinsights.com/article/2012/12/harnessing-wisdom-crowds | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00041-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957447 | 1,841 | 2.65625 | 3 |
In the past, administrators have needed to manage multiple passwords (simple password, NDS password, enhanced password) because of password limitations. Administrators have also needed to deal with keeping the passwords synchronized.
NDS Password: The older NDS password is stored in a hash form that is nonreversible. Only the NDS system can make use of this password, and it cannot be converted into any other form for use by any other system.
Simple Password: The simple password was originally implemented to allow administrators to import users and passwords (clear text and hashed) from foreign directories such as Active Directory and iPlanet.
The limitation of the simple password is that no password policy (minimum length, expiration, etc.) is enforced.
Enhanced Password: The enhanced password is no longer supported by Novell. The enhanced password is the forerunner of Universal Password. It offers some password policy, but its design is not consistent with other passwords. It provides a one-way synchronization and it replaces the simple or NDS password.
Novell introduced Universal Password as a way to simplify the integration and management of different password and authentication systems into a coherent network.
Universal Password addresses these password problems by doing the following:
- Providing one password for all access to eDirectory.
- Enabling the use of extended characters in passwords.
- Enabling advanced password policy enforcement.
- Allowing synchronization of passwords from eDirectory to other systems.
Most features of password management require Universal Password to be enabled.
For detailed information, see Section 2.0, Deploying Universal Password. | <urn:uuid:80040389-9422-46db-8c48-d342719a59b4> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/password_management33/pwm_administration/data/bwx6mik.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00463-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900872 | 328 | 2.859375 | 3 |
Read for the Blind
Text-to-speech conversion software that reads Web pages and on-screen documents aloud to the blind and visually impaired isn't new. But a new portable device developed by inventor Ray Kurzweil eliminates the need to be near a computer.
The Kurzweil-National Federation of the Blind Reader combines a digital camera and a PDA to create a device that photographs printed material, scans the text and then reads it within seconds using a synthesized voice. Printed pages captured with the device can be saved for later use. It can store thousands of text pages on extra memory cards and can also transfer files to a computer or PDA.
The device can read many document types, including receipts, letters, recipes, book pages, memos and package labels. An audible description tells users how many edges of a document are in range, as well as the angle and distance the reader is from the page so they may take more accurate photos. The device is equipped with a headphone jack for privacy. -- The New York Times
On July 12, the U.S. House of Representatives approved legislation instructing Americans to "give high priority to energy efficiency as a factor in determining best value and performance for purchases of computer servers."
Higher efficiency not only reduces electricity bills, it offers lower cooling costs, so server buyers have long had a strong market incentive to go green.
The bill, sponsored by Rep. Mike Rogers, R-Mich., also instructs the U.S. Environmental Protection Agency to conduct a three-month study "of the growth trends associated with data centers and the utilization of servers in the federal government and private sector." -- CNET News.com
Linux on Legs
Four companies in Japan have created a low-cost, user-programmable humanoid robot for educational and research applications. The HRP-2m Choromet uses technology from Japan's National Institute of Advanced Industrial Science and Technology (AIST), and is user-programmable thanks to open software running on a Linux implementation.
The Choromet stands about 13 3/4 inches tall, and can walk upright on two legs. It can also assume supine or prone positions, and stand up from either.
AIST hopes Choromet's ability to run software-based movement programs on a real-time Linux platform will enable researchers and schools to experiment with the effectiveness of humanoid robot motion pattern applications. -- LinuxDevices.com
On July 14, 2006, Florida Gov. Jeb Bush announced that a digital radio system allowing state law enforcement officials to communicate throughout the state during emergencies is fully operational.
The Statewide Law Enforcement Radio System will let more than 6,500 officials from 17 state agencies communicate throughout 59,000 square miles and as far as 25 miles offshore.
Florida's 14,000-radio system was tested successfully during the hurricane seasons of the past two years. Officials predicted it would prove valuable in future storms, as well as in criminal inquiries and possible terrorist attacks. -- The Miami Herald
Tailgating is a common cause of rear-end collisions, playing a role in 80 percent of crashes on one stretch of central Minnesota highway.
The Minnesota Department of Transportation (MNDOT) is painting big dots on the highway to show drivers safe following distances. The reflective, oval dots are painted 225 feet apart, and are accompanied by signs explaining that a minimum of two dots should appear between vehicles. This provides the three seconds needed to come to a complete stop without rear-ending the vehicle ahead.
The dots will remain on Highway 55 in Wright County for at least a year, giving MNDOT enough time to compare the number of crashes before and during the test. Most of the project is funded by a $25,000 federal grant. The dots were first used in Pennsylvania, and a similar project began in Maryland in May. -- MNDOT
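The arithmetic behind the dots is easy to check. A short sketch (the 65 mph speed is an illustrative assumption, not a figure from MNDOT):

```python
# Convert a gap in feet at a given speed into seconds of following distance.
FEET_PER_MILE = 5280

def following_time(gap_ft, speed_mph):
    feet_per_second = speed_mph * FEET_PER_MILE / 3600
    return gap_ft / feet_per_second

# One dot spacing (225 ft) and a two-dot gap (~450 ft) at 65 mph:
print(round(following_time(225, 65), 1))  # roughly 2.4 seconds
print(round(following_time(450, 65), 1))  # roughly 4.7 seconds
```

Keeping two dots visible between vehicles therefore comfortably exceeds the three-second margin described on the signs.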
A new study discovered that, despite knowing the risks, a majority of U.S. drivers still talk on their cell phones while at the wheel.
All the time: 6 percent
Sometimes: 67 percent
Never: 27 percent
-- Harris Interactive, June 2006
The Doctor Will See You Now
Americans make more than 1 billion visits a year to doctors' offices, emergency rooms and hospital outpatient departments, according to the Centers for Disease Control and Prevention. The amount of time a patient waits before seeing a physician in the emergency department increased from 38 minutes in 1997 to 47 minutes in 2004. There was no change in the average time -- about 16 minutes -- a patient spends face-to-face with a doctor in an office visit.
More than 95 percent of urban Japanese consumers have a mobile phone, and a new osaifu keitai -- or "wallet phone" -- promises to eliminate cash cards by incorporating a chip into the cell phone that stores everything previously held in the wallet.
Will mobile wallets replace traditional wallets?
Within 10 years: 35 percent
In 10 to 20 years: 18 percent
In 20 to 50 years: 19 percent
Other/never: 28 percent
The perceived changes that wallet phones will bring are (multiple answers allowed):
No waiting at checkouts: 38 percent
Shopping will change: 36 percent
Mobile phones will become even more important than now: 35 percent
Coins will disappear: 34 percent
Have to select which shops one can use: 30 percent
Number of people whose first card is a mobile will increase: 28 percent
Cash will not be necessary: 27 percent
Wallets will not be necessary: 19 percent
Source: NTT DoCoMo; results are based on a percentage of respondents, who are Internet users in Japan
The Center for Digital Government held its fourth annual Digital Counties Survey, which recognizes counties that use technology to provide a high level of service to citizens. The 2006 winners are:
500,000 or more population:
1st Place: Orange County, Fla.
2nd Place: Fairfax County, Va. (tie)
King County, Wash. (tie)
3rd Place: Montgomery County, Md. (tie)
Tulsa County, Okla. (tie)
4th Place: Oakland County, Mich.
5th Place: San Diego County, Calif.
6th Place: Fulton County, Ga.
7th Place: Sacramento County, Calif. (tie)
Westchester County, N.Y. (tie)
8th Place: Anne Arundel County, Md.
9th Place: Snohomish County, Wash.
10th Place: Miami-Dade County, Fla.
1st Place: Richland County, S.C.
2nd Place: Prince William County, Va. (tie)
Washtenaw County, Mich. (tie)
3rd Place: Dakota County, Minn. (tie)
Douglas County, Colo. (tie)
4th Place: Loudoun County, Va.
5th Place: Marin County, Calif.
6th Place: Seminole County, Fla.
7th Place: Utah County, Utah
8th Place: Dutchess County, N.Y.
9th Place: Howard County, Md. (tie)
Placer County, Calif. (tie)
10th Place: Marion County, Fla.
1st Place: Roanoke County, Va.
2nd Place: Hamilton County, Ind.
3rd Place: Merced County, Calif.
4th Place: Scott County, Iowa
5th Place: Racine County, Wisc.
6th Place: Clermont County, Ohio
7th Place: Horry County, S.C.
8th Place: Cumberland County, Pa. (tie)
Frederick County, Md. (tie)
9th Place: Dona Ana County, N.M.
10th Place: Yuma County, Ariz.
Less than 150,000 population:
1st Place: Charles County, Md.
2nd Place: Nevada County, Calif.
3rd Place: Olmsted County, Minn.
4th Place: Boone County, Mo.
5th Place: Napa County, Calif.
6th Place: Stearns County, Minn.
7th Place: Sutter County, Calif.
8th Place: Delaware County, Ohio
9th Place: Albemarle County, Va.
10th Place: Randolph County, N.C. | <urn:uuid:68431d8a-5b7d-48d7-a3c6-b6a0e6fa9b85> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/100493979.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00281-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916807 | 1,749 | 2.703125 | 3 |
I imagine there are some of you out there who wonder how companies come up with malware names. It can often be confusing, with different companies calling the same thing by completely different names. This guide will tell you, briefly, how we decide on the names you’re most likely to encounter on this blog. (If you would like a much more in-depth look, check out CARO’s site.) Let’s take a recent piece of malware named OSX/Imuler.D.
If you were to look at the detection names across the industry, here are some of the other names you would see for this file from various vendors:
Kind of confusing, no doubt. So, let’s break this down a bit.
Prefixes – What Does This Thing Do?
You’ll see a fair number of the detection names start with a word like "Trojan," "Backdoor" or "Dropper." These vendors start their naming convention with a description of the activity of the file, but they all have different focuses on what’s the most important descriptor. By choosing “Trojan” or “Trojan Dropper,” it’s as if they’re saying “this threat does not spread by itself – it’s sent by a malicious person.” If they choose “Backdoor,” that is to say the ultimate goal of the Trojan is to create a backdoor on your machine that will let a bad actor take control of it and spy on your actions.
If the name starts with "OSX," this is a way of stating what operating system the malware affects. If the malware targets multiple operating systems, you may see one component named "W32/NastyBizness" and another called "OSX/NastyBizness." "W32" lets you know which component affects Windows systems.
Family Name – The Meat and Potatoes
The next part of the name, usually after a delimiter like a slash or a dot, is the family name. This is what the press usually uses, stripped of the prefix info. If a researcher is looking at something that’s brand new malware bearing little resemblance to other malware that’s come before, they get to choose a new family name.
Knowing whether a sample is similar to existing malware can be more than a little tricky, especially on Windows, where there are many millions of malware samples. This is the first place where things can get a little cloudy. The first researcher to see a new piece of malware may not be familiar with previous variants of a family, and may choose a new family name. The next researcher to see it may be familiar with the existing family name and will choose to use that instead.
There are certain conventions that pertain to choosing a new malware family name:
- The use of proper nouns is strongly discouraged, as this could offend the person/country/company/etc. of the thing the malware is named after. Nobody wants bad things named after them! (Except perhaps the malware’s author, and we really don’t want to be encouraging them by putting their name in the press.)
- We try not to use obscene or offensive names. This can be tricky because the malware may come from a culture or language that the researcher is unfamiliar with.
- Numeric names are a bad idea, as historically certain types of viruses included a number as a suffix that denoted how many bytes long the virus code was.
- We do not use the malware author’s suggested malware name, for the same reason we don’t use the malware author’s name or handle. We don’t want to motivate them with recognition. Sometimes vendors will choose to scramble or reverse an author’s suggested name.
- It’s best to avoid naming the malware after the filename the malware comes in, such as an email attachment. The next variant in the family may come with a different filename, and we don’t want to train people to only look for certain problematic files – any unexpected attachments should be treated as suspect.
- For the same reason, we avoid date-based names (such as “Friday_13th”), especially if those dates are related to payload triggers.
- It’s a good idea to name the malware based on something distinctive within the code or behavior of the threat. This way it will be easier for other researchers to identify the threat and possibly its future variants.
Not all companies agree to these naming rules, which is part of why you will see differing names between vendors. Other times, multiple researchers will discover a threat at roughly the same time, which is another case where you might get multiple different names. In the case of our example above, you can see there are three main family names that are used by the various vendors: Imuler, Revir and Muxler.
As part of the research process, most researchers will first scan the file with other anti-malware products to see if it is already detected. This is fairly common as generic and behavioral detections become more powerful. When this happens, it’s considered good form to use the family name already chosen by the other vendor, unless that name falls afoul of one of the conventions above. Sometimes the name is deemed unacceptable for some other reason (like a limit to the length of the detection name), which is up to the researcher and the vendor. If multiple acceptable names exist, it’s best to choose the one used by the majority of vendors.
Suffixes – How Many of These Things Are There?
Suffixes are separated by another delimiter, usually a dot or a dash. They’re meant to tell you which variant of a family this is. For most vendors, suffixes start with A for the first variant, then it goes up to Z, then AA to ZZ, and so on. In Windows-world, it’s very common to see family names with three-letter suffixes, as there are hundreds of variants in those families. Letters are usually used rather than numbers, as we used to use numbers to denote the length of viruses.
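The A-to-Z, then AA-to-ZZ progression is the same bijective base-26 numbering that spreadsheets use for column letters. A small sketch of how the Nth variant's suffix could be generated (this illustrates the convention, not any vendor's actual code):

```python
def variant_suffix(n):
    """Return the suffix for the nth variant: 1 -> 'A', 26 -> 'Z', 27 -> 'AA'."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)  # bijective base-26: no zero digit
        letters = chr(ord("A") + rem) + letters
    return letters

print([variant_suffix(i) for i in (1, 2, 26, 27, 702, 703)])
# ['A', 'B', 'Z', 'AA', 'ZZ', 'AAA']
```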
This is another place where you can see things get a little problematic, as we have suffixes of .B, .C, .D and .6 – Say what?? Some vendors buck tradition and name suffixes by number, rather than by letter, for starters. For those who stick with the alphabetic suffixes, there are a couple common reasons for variant letters to vary. Multiple variants may be discovered at once or within a short span, and Vendor X may get (and name) them in different order than Vendor Y. Or Vendor X may have generic detection that picks up multiple variants with one signature. In this case, they may choose to name the next variant .B rather than .C or .D. Or they may be aware that their detection catches several variants, and they’ll only have detection for NastyBizness.A and .D, because they didn’t need to amend their detection for .B and .C.
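Putting the pieces together, a name like OSX/Imuler.D can be split apart mechanically. Here is a toy parser for the simple "Prefix/Family.Variant" shape used in this article's example (real vendors use many more delimiters and fields than this handles):

```python
import re

# prefix before the slash, family before the dot, optional variant after it
NAME_RE = re.compile(r"^(?P<prefix>[^/]+)/(?P<family>[^.]+)(?:\.(?P<variant>\w+))?$")

def parse_detection_name(name):
    m = NAME_RE.match(name)
    if not m:
        raise ValueError("unrecognized detection name: %r" % name)
    return m.groupdict()

print(parse_detection_name("OSX/Imuler.D"))
# {'prefix': 'OSX', 'family': 'Imuler', 'variant': 'D'}
```

A name with no suffix yet, like a brand-new family, simply parses with an empty variant field.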
Ouch, My Head Hurts!
All this information may not make the situation much easier for you, since there are so many variables that go into choosing malware names. But hopefully it will help you understand why the names are the way they are and what they mean, even when they’re confusing. Some anti-virus vendors will put “aliases” in descriptions or blog posts about threats when they’re aware of other vendors using different names. This can certainly cut down on the inevitable calls to tech support and the research department about whether Vendor X detects what Vendor Y is talking about in the press. We try to keep things as simple as possible, but unlike in ye olden days, things move too quickly for us to periodically get together with other researchers throughout the industry to sanitize and consolidate everyone's names.
- Security Jargon Decoded
- Rootkits Defined: What They Are and How They Can Be Used Maliciously
- What's the Difference Between Malware, Trojan, Virus, and Worm? | <urn:uuid:03b89065-b3aa-482f-8866-4ae5daaa11ce> | CC-MAIN-2017-04 | https://www.intego.com/mac-security-blog/how-does-malware-naming-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928253 | 1,702 | 2.65625 | 3 |
Truong T.T., Center for Nanoscale Materials; Liu Y., Center for Nanoscale Materials; Ren Y., Advanced Photon Source; Trahey L., Argonne National Laboratory; Sun Y., Center for Nanoscale Materials
ACS Nano, 2012
Single-crystal α-MnO2 nanotubes have been successfully synthesized by microwave-assisted hydrothermal treatment of potassium permanganate in the presence of hydrochloric acid. The growth mechanism, including the morphological and crystalline evolution, has been carefully studied with time-dependent X-ray diffraction, electron microscopy, and controlled synthesis. The as-synthesized MnO2 nanostructures are incorporated in air cathodes of lithium-air batteries as electrocatalysts for the oxygen reduction and evolution reactions. The characterization reveals that electrodes made of single-crystalline α-MnO2 nanotubes exhibit much better stability than those made of α-MnO2 nanowires and δ-MnO2 nanosheet-based microflowers in both charge and discharge processes. © 2012 American Chemical Society.
News Article | August 22, 2016
In a new study, researchers from the Cambridge Crystallographic Data Centre (CCDC) and the U.S. Department of Energy's (DOE's) Argonne National Laboratory have teamed up to capture neon within a porous crystalline framework. Neon is well known for being the most unreactive element and is a key component in semiconductor manufacturing, but neon has never been studied within an organic or metal-organic framework until now. The results, which include the critical studies carried out at the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, also point the way towards a more economical and greener industrial process for neon production.
While lithium-ion batteries have transformed our everyday lives, researchers are currently trying to find new chemistries that could offer even better energy possibilities. One of these chemistries, lithium-air, promises greater energy density but has certain drawbacks as well. Now, thanks to research at the U.S. Department of Energy's (DOE) Argonne National Laboratory, one of those drawbacks may have been overcome.

All previous work on lithium-air batteries showed the same phenomenon: the formation of lithium peroxide (Li2O2), a solid precipitate that clogged the pores of the electrode. In a recent experiment, however, Argonne battery scientists Jun Lu, Larry Curtiss and Khalil Amine, along with American and Korean collaborators, were able to produce stable crystallized lithium superoxide (LiO2) instead of lithium peroxide during battery discharging. Unlike lithium peroxide, lithium superoxide can easily dissociate into lithium and oxygen, leading to high efficiency and good cycle life.

"This discovery really opens a pathway for the potential development of a new kind of battery," Curtiss said. "Although a lot more research is needed, the cycle life of the battery is what we were looking for."

The major advantage of a battery based on lithium superoxide, Curtiss and Amine explained, is that it allows, at least in theory, for the creation of a lithium-air battery that consists of what chemists call a "closed system." Open systems require the consistent intake of extra oxygen from the environment, while closed systems do not, making them safer and more efficient.

"The stabilization of the superoxide phase could lead to developing a new closed battery system based on lithium superoxide, which has the potential of offering truly five times the energy density of lithium ion," Amine said.

Curtiss and Lu attributed the growth of the lithium superoxide to the spacing of iridium atoms in the electrode used in the experiment. "It looks like iridium will serve as a good template for the growth of superoxide," Curtiss said.

"However, this is just an intermediate step," Lu added. "We have to learn how to design catalysts to understand exactly what's involved in lithium-air batteries."

The researchers confirmed the lack of lithium peroxide by using X-ray diffraction provided by the Advanced Photon Source, a DOE Office of Science User Facility located at Argonne. They also received allocations of time on the Mira supercomputer at the Argonne Leadership Computing Facility, and performed some of the work at Argonne's Center for Nanoscale Materials, both of which are also DOE Office of Science User Facilities.

A study based on the research appeared in the January 11 issue of Nature. The work was funded by the DOE's Office of Energy Efficiency and Renewable Energy and Office of Science.

About Argonne National Laboratory: Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website.
FAYETTEVILLE, AR — An international group of physicists led by the University of Arkansas has created an artificial material with a structure comparable to graphene.

“We’ve basically created the first artificial graphene-like structure with transition metal atoms in place of carbon atoms,” said Jak Chakhalian, professor of physics and director of the Artificial Quantum Materials Laboratory at the U of A. In 2014, Chakhalian was selected as a quantum materials investigator for the Gordon and Betty Moore Foundation. His selection came with a $1.8 million grant, a portion of which funded the study.

Graphene, discovered in 2004, is a one-atom-thick sheet of graphite. Graphene transistors are predicted to be substantially faster and more heat-tolerant than today’s silicon transistors and may result in more efficient computers and the next generation of flexible electronics. Its discoverers were awarded the Nobel Prize in physics in 2010.

The U of A-led group published its findings this week in Physical Review Letters, the journal of the American Physical Society, in a paper titled “Mott Electrons in an Artificial Graphene-like Crystal of Rare Earth Nickelate.”

“This discovery gives us the ability to create graphene-like structures for many other elements,” said Srimanta Middey, a postdoctoral research associate at the U of A who led the study.

The research group also included U of A postdoctoral research associates Michael Kareev and Yanwei Cao, doctoral student Xiaoran Liu and recent doctoral graduate Derek Meyers, now at Brookhaven National Laboratory. Additional members of the group were David Doennig of the University of Munich; Rossitza Pentcheva of the University of Duisburg-Essen in Germany; Zhenzhong Yang, Jinan Shi and Lin Gu of the Chinese Academy of Sciences; and John W. Freeland and Phillip Ryan of the Advanced Photon Source at Argonne National Laboratory near Chicago. The research was also partially funded by the Chinese Academy of Sciences.
Inexpensive materials called MOFs pull gases out of air or other mixed gas streams, but fail to do so with oxygen. Now, a team has overcome this limitation by creating a composite of a MOF and a helper molecule in which the two work in concert to separate oxygen from other gases simply and cheaply.

The results, reported in Advanced Materials, might help with a wide variety of applications, including making pure oxygen for fuel cells, using that oxygen in a fuel cell, removing oxygen in food packaging, making oxygen sensors, or other industrial processes. The technique might also be used with gases other than oxygen by switching out the helper molecule.

Currently, industry uses a common process called cryogenic distillation to separate oxygen from other gases. It is costly and uses a lot of energy to chill gases. Also, it can't be used for specialty applications like sensors or getting the last bit of oxygen out of food packaging. A great oxygen separator would be easy to prepare and use, be inexpensive and be reusable.

MOFs, or metal-organic frameworks, are materials containing lots of pores that can suck up gases like sponges suck up water. They have potential in nuclear fuel separation and in lightweight dehumidifiers. But of the thousands of MOFs out there, less than a handful absorb molecular oxygen. And those MOFs chemically react with oxygen, forming oxides — think rust — that render the material unusable.

"When we first worked with MOFs for oxygen separation, we could only use the MOFs a few times. We thought maybe there's a better way to do it," says materials scientist Praveen Thallapally of the Department of Energy's Pacific Northwest National Laboratory.

The new tack for Thallapally and colleagues at PNNL involved using a second molecule to mediate the oxygen separation — a helper molecule that would be attracted to, but chemically uninterested in, the MOF. Instead, the helper would react with oxygen to separate it from other gases.

They chose a MOF called MIL-101 that is known for its high surface area — making it a powerful sponge — and lack of reactivity. One teaspoon of MIL-101 has the same surface area as a football field. The high surface area comes from a MOF's pores, where reactive MOFs work their magic. MOFs that react with oxygen need to be handled carefully in the laboratory, but MIL-101 is stable at ambient temperatures and in the open atmosphere of a lab.

For their helper molecule, they tried ferrocene, an inexpensive iron-containing molecule. The scientists made a composite of MIL-101 and ferrocene by mixing them and heating them up. Initial tests showed that MIL-101 took up more than its weight in ferrocene and at the same time lost surface area. This indicated that ferrocene was taking up space within the MOF's pores, where it needs to be to snag the oxygen.

Then the team sent gases through the black composite material. The material bound up a large percentage of oxygen, but almost none of the added nitrogen, argon or carbon dioxide. The material behaved this way whether the gases went through individually or as a mix, showing that the composite could in fact separate oxygen from the others.

Additional analysis showed that heating caused ferrocene to decompose in the pores to nanometer-sized clusters, which made iron available to react with oxygen. This reaction formed a stable mineral known as maghemite, all within the MOF pores. Maghemite could be removed from the MOF to use the MOF again.

Together, the results on the composite showed that a MOF might be able to do unexpected things — like purify oxygen — with a little help. Future research will explore other combinations of MOF and helper molecules.
In addition to PNNL, participating researchers hailed from and used analytical instruments at two Office of Science User Facilities, the Environmental Molecular Sciences Laboratory at PNNL and the Advanced Photon Source at Argonne National Laboratory, as well as the University of Amsterdam. This work was supported by the Department of Energy Office of Science.
Cloud computing is famously scalable – but only horizontally.
IBM is trying to change that with new cloud-systems software that connects high-performance-computing resources and makes them accessible through a single interface, much as typical cloud software does for horizontally scaled systems.
It carries a name with Cloud in it, but the software itself looks a lot more like cluster controls than what most people mean by "cloud."
You don't hear as much about HPC as you did a few years ago, when the only efficient way to get a lot of compute power in the same place was to scale vertically – add enough processing power, memory and bandwidth to a single machine that could do the work of many.
With virtual-server and cloud software, it's now more efficient and less expensive to keep letting many servers do the work of many, but make them all divvy up the workloads and resources neatly. The end result was similar to what customers could get from specialized clusters, grids or even HPC systems, at much lower cost.
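The "many servers divvy up the workload" idea can be sketched in a few lines. This is an illustrative toy, not any vendor's software: several worker threads stand in for horizontally scaled nodes, all pulling tasks from one shared queue so no single machine has to do the work alone.

```python
# Toy sketch of horizontal work division: N workers drain a shared
# task queue, each taking the next job as soon as it is free.
from queue import Queue
from threading import Thread

def worker(name, tasks, results):
    while True:
        task = tasks.get()
        if task is None:          # sentinel value: no more work
            tasks.task_done()
            break
        results.append((name, task * task))  # stand-in for real computation
        tasks.task_done()

tasks, results = Queue(), []
for n in range(100):
    tasks.put(n)

workers = [Thread(target=worker, args=(f"node-{i}", tasks, results))
           for i in range(4)]
for t in workers:
    t.start()
for _ in workers:
    tasks.put(None)               # one shutdown sentinel per worker
tasks.join()                      # block until every task is processed

print(len(results))               # 100: all tasks completed across 4 nodes
```

The design choice worth noting is that the queue, not a central planner, does the balancing: a fast node simply pulls more tasks, which is what makes this style of scaling cheap and elastic.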
As closely as clusters or clouds can mimic some of the performance of HPC systems, they can't do it all.
The Air Force custom-built a graphics-analysis supercomputer out of almost 2,000 PlayStations because commodity-level, general-purpose servers couldn't crunch the visual data quickly enough.
Engineering apps, mining or geographic modeling software, some scientific software and other apps too resource-intensive to run well on standard servers – even clustered – still force many companies to buy HPC systems.
IBM's HPC Management Suite for Cloud is designed to at least let those companies avoid wasting expensive HPC resources, by combining them into a single cloud system in the same way horizontally scaled servers are, but with much higher processing potential.
It's not virtualization, though. Running a hypervisor – which amounts to an additional operating system, though a small one – adds computational overhead HPC apps can't afford.
So IBM's approach is to make workload management much more aggressive and much more sophisticated than previously. It allows HPC server farms to split up computational tasks and distribute them among all the farms, to run on the servers with the most capacity.
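IBM hasn't published its algorithm, but "run on the servers with the most capacity" suggests capacity-aware placement. The sketch below is a hypothetical greedy scheduler in that spirit — the farm names, job names and costs are all invented for illustration, and real workload managers are far more sophisticated.

```python
# Hypothetical capacity-aware placement: each task is assigned to
# whichever server farm currently has the most free capacity.
import heapq

def place(tasks, capacities):
    """Greedily assign tasks to the least-loaded farm.

    tasks: list of (name, cost) pairs.
    capacities: dict mapping farm name -> total capacity.
    Returns a dict mapping farm name -> list of assigned task names.
    """
    # Max-heap on free capacity (negated, since heapq is a min-heap).
    heap = [(-free, farm) for farm, free in capacities.items()]
    heapq.heapify(heap)
    assignment = {farm: [] for farm in capacities}
    # Placing big tasks first is the classic longest-processing-time trick.
    for name, cost in sorted(tasks, key=lambda t: -t[1]):
        neg_free, farm = heapq.heappop(heap)   # farm with most headroom
        assignment[farm].append(name)
        heapq.heappush(heap, (neg_free + cost, farm))  # less free now
    return assignment

farms = {"farm-a": 100, "farm-b": 60, "farm-c": 30}
jobs = [("render", 40), ("sim", 35), ("etl", 20), ("test", 5)]
plan = place(jobs, farms)
```

With these toy numbers the two biggest jobs land on the largest farm and the rest fill in behind them, which is the behavior the article describes: spare capacity anywhere in the pool gets used instead of sitting idle.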
By that description, it isn't really cloud software at all.
It's cluster-management software that's able to reach out to more pools of system resources than typical load balancers.
IBM isn't offering many details on how it works, but does say it developed the approach to create a super-cluster used by more than 3,000 IBM engineers working on the design of its latest generation of Power7 processors.
It won't ship until the third quarter of this year, at a price IBM hasn't announced. The Register reports sources who estimated it would cost about $700 per node.
Sounds like exactly the kind of thing a very narrow slice of resource-hungry apps need – a market that will get smaller and smaller as more typical cloud and virtualization apps get more efficient at distributing and allocating their own resources.
Circuit Emulation Services (PacketBand)
This course provides the participant with an excellent introduction to Circuit Emulation Services (CES) and its implementation using PacketBand units. The course begins with a general introduction to the concept of TDM over Packet and the relevant standards and protocols, particularly in relation to the vital area of synchronisation.
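The core idea behind TDM over Packet can be sketched in a few lines. This is an illustrative toy, not PacketBand's implementation: the payload size and function names are invented. A constant-rate TDM byte stream is sliced into fixed-size packets carrying sequence numbers, and the receiver reorders them and replays the stream at a constant rate — which is why the synchronisation topics mentioned above matter so much.

```python
# Illustrative pseudo-wire sketch: slice a TDM byte stream into
# sequence-numbered packets, then rebuild the stream at the far end.
FRAME_BYTES = 8  # hypothetical payload size per pseudo-wire packet

def packetize(tdm_stream):
    """Slice a TDM byte stream into (sequence_number, payload) packets."""
    return [(seq, tdm_stream[i:i + FRAME_BYTES])
            for seq, i in enumerate(range(0, len(tdm_stream), FRAME_BYTES))]

def replay(packets):
    """Receiver side: reorder by sequence number and rebuild the stream."""
    return b"".join(payload for _, payload in sorted(packets))

stream = bytes(range(32))
packets = packetize(stream)
packets.reverse()                 # simulate out-of-order network arrival
assert replay(packets) == stream  # the TDM stream is reconstructed intact
```

In a real CES deployment the receiver also runs a jitter buffer and recovers the original clock from packet arrival times; the sequence numbers shown here are what make reordering and loss detection possible.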
The participants are introduced to the PacketBand units and the local management software, DbMgr. Configuration procedures and techniques are covered, and the participants are given opportunities to carry out practical work to confirm the theory. The concept of logical links (pseudo-wires) is introduced, and participants will gain the skills to build resilient, error-free CES circuits. Finally, advice is given on how to analyse typical faults.

Target Group

This course is designed for engineers and planners who require the knowledge to design, commission and maintain CES applications using the PacketBand platform.

Prerequisites

A general understanding of telecommunications principles, particularly in the area of CES, would be an advantage.
Each participant receives a printed copy of the course documentation.