The DOE Joint Genome Institute (JGI), a national user facility that supports the management and analysis of complex genomic data, has been working for two years to improve its user interface and infrastructure. The Genome Portal (http://genome.jgi.doe.gov), the massive genomic database and data management system operated by the JGI, now boasts significant upgrades to support efficient handling of the rapidly growing, diverse genomic data stored there.
The JGI provides high-throughput sequencing and computational analysis in support of DOE missions related to clean energy generation and environmental characterization and cleanup. The Genome Portal allows users to search, download and explore multiple data sets. All DOE JGI sequencing projects are available, as well as the status, assemblies and annotations of sequenced genomes.
The DOE JGI and its partners are no strangers to big data. As a recent paper in Nucleic Acids Research highlights, JGI completed 2,635 projects in 2012, a three-fold increase over 2011. The JGI generated more than 56 trillion nucleotides of genome-sequence data in 2012 and over 70 trillion nucleotides in 2013. Over the past year (2013), JGI has added 650 genomes to the public databases. Because of the increased amount and complexity of data, it became necessary to upgrade the Genome Portal. The main focus of the upgrade was expanding computational resources to enable efficient storage, access, download and analysis of data.
Among the updates are new tools designed to make it easier to locate a specific genome, including a detailed list of all JGI projects, an interactive “Tree of Life” and domain-specific comparative resources. Enhanced search functionality supports searching for genomes and projects by keyword (e.g. plants, algae, single cell, water), name and other categories of data.
The Genome Portal website was built using Apache HTTPD, Tomcat and MySQL, and most of the Genome Portal components have been developed using Java and open-source tools. The more robust infrastructure includes four load-balanced Web servers talking to two back-end database servers. An automated build system uses Jenkins to allow updates to be applied without disrupting users.
Partnerships have also been instrumental to the upgrade effort. A strong alliance with the National Energy Research Scientific Computing Center (NERSC) has led to increased HPC-level capabilities, according to the paper’s authors. NERSC hosts the servers that run the Genome Portal and provides access to ESnet (Energy Sciences Network), which facilitates high-speed data transfers.
According to JGI’s Inna Dubchak, JGI’s alliance with NERSC will enable “faster and smoother access for users tapping into the Genome Portal’s resources.”
Your online privacy has never been less private; try to protect it with encryption and the government steps around you via stored records in the cloud. While not everyone encrypts data stored locally on their hard drive, encryption is becoming the default for Internet communications. If it is encrypted, then it seems to be interpreted as a threat by government and law enforcement agencies. In fact, the more you take advantage of services that encrypt your data, the more the government breaks into your cloud. "If you are trying to protect yourself from the government, then having it in the public cloud makes it easier for them to get it," said Stelios Sidiroglou-Douskos, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory.
While the cloud environment is supposed to be more secure and to cut enterprises' costs by allowing data to be accessed from anywhere, a single breach can result in a devastating amount of stolen data. Even when you delete the data, how do you know it is really deleted? Yes, you can read the privacy and data retention policies, but it's a bit of a gray area where you have little choice but to trust the provider. Trusting businesses that offer us freebie cloud services in exchange for us being the product is not necessarily the wisest move, and it will keep privacy experts duking it out on the data retention battlefield. Despite the increased use of online encryption, accessing stored data in the cloud is like pushing the "Easy" button for the government to get its hands on plain text versions of emails, chats and other digital communications.
Peter Swire served as Chief Counselor for Privacy under President Clinton, serves as a Policy Fellow for the CDT, is a professor of law at Ohio State University, and is leading a project on government access to personal information for the Future of Privacy Forum, to name but a few of his privacy, cybersecurity and technology achievements. He has written extensively about surveillance, privacy and encryption, including "From Real-Time Intercepts to Stored Records: Why Encryption Drives the Government to Seek Access to the Cloud." While he does not offer solutions for protecting our privacy from records stored in the almighty cloud, he argues that all of this surveillance access undercuts the government's "going dark" arguments.
More people are turning to VPNs and encrypted VoIP like that used by Skype, and more sites are using SSL. The increased use of SSL is great from cybersecurity and privacy perspectives; banks, ecommerce websites and services like Dropbox all use SSL, but that doesn't mean the government can't get its hands on the key to decrypt the data. It's tough to find a free and secure email provider that will truly protect your privacy. Webmail providers like Hotmail and Gmail encrypt when sending email, yet Swire says cloud webmail providers and server owners retain "the technical ability to read the plain text of the emails."
According to Swire, "The widespread adoption of encryption for communications affects the choices for government agencies seeking lawful access. Logically, there are four ways for agencies to access communications":
"1. Break encryption in transit." Although the NSA "made a significant breakthrough against the globally used Advanced Encryption Standard" (AES), for most law enforcement agencies breaking encryption in transit is not feasible. Big Brother may choose to use fake digital certificates to eavesdrop by hiding in your browser, but encrypted data in transit is much more challenging to snag than data stored in plain text and resting in the cloud.
"2. Intercept before or after encryption" includes methods such as physically breaking into a building to install bugs, keyloggers or other surveillance devices. Yet those methods are too risky and costly to government agencies unless the target is a high priority. Another way to remotely intercept communications in real-time is via virtual force and Trojan horse search warrants. Other forms of stealthy interception include hacking into a target's computer. Vendors at secret surveillance conferences, such as Italy's Hacking Team, make this easy by selling services that allow intelligence agencies to monitor 100,000 targets at a time.
"3. Assure access in unencrypted form." Swire writes, "CALEA would assure access is wiretap-ready and can be read in unencrypted form." It opens the way to eavesdrop on calls, but government agencies continue to push agendas to keep from "going dark." Law enforcement and government want "wiretap-ready" backdoors in all communications, as apparently we are to believe that terrorists are hiding everywhere, such as inside encrypted voice and chat channels found in online games like WoW that are outside the scope of CALEA. Intelligence agencies have issued warnings that terrorists are hanging out in online games like WoW and Second Life and taking advantage of encrypted VoIP chats. The encrypted player-to-player text and VoIP chats in the games allegedly offer "covert communications" and safe harbor for people intent on "state-sponsored espionage." Law enforcement maintains that gangs and terrorists recruit and "plot evil" over Xbox and PS3.
"4. Access after the fact, in stored form, often in the cloud." Ding, ding, ding and clearly the Easy button winner for access. Swire writes, "A major descriptive conclusion of this paper is that a wide range of law enforcement and national security agencies will face large or insuperable obstacles to the first three methods. These agencies will thus increasingly depend on access to stored records, notably those stored in the cloud."
From government agencies like the CIA, U.S. military, huge corporations like Microsoft, to regular folks, everyone is betting on the cloud. Strategy Analytics predicted "U.S. spending on cloud services will grow from $31 billion in 2011 to $82 billion by 2016." Some experts suggest the cloud is a potential gold mine for cybercriminals, but it is definitely the ultimate jackpot for law enforcement. The more we turn to the cloud to store our data, the more accessible it is to government and law enforcement. In reality, even if encrypted, the cloud is neither private nor secure in that regard. It is in fact another cog in this "golden age of cyber-surveillance," so how exactly is the FBI "going dark" again?
Follow me on Twitter @PrivacyFanatic
In a previous article, I outlined four types of passwords you shouldn’t create unless you want your account hacked. Given how valuable your passwords are, it’s important that they be secure, yet not too hard to remember. Not only do passwords protect your Facebook information, your personal blog and your e-mail account, but also many accounts linked to your credit card, such as your Amazon, eBay and PayPal accounts.
Here are four tips showing how you can create secure passwords:
Tip #1: Size Matters
With passwords, bigger is better. A 4-character password can be cracked using "brute force" techniques - where a computer simply tries every possible combination of characters - fairly quickly. A 6-character password will take much longer; 8 characters even longer. If you want to be really secure, go for 12 characters or longer.
Tip #2: Variety is the Spice of Life
There are four types of characters you can use in passwords:
- lower-case letters (a, b, c)
- upper-case letters (A, B, C)
- digits (1, 2, 3)
- "special characters," which include punctuation (. ; !) and other characters (# * &)
There are 26 lower-case letters, 26 upper-case letters, 10 digits and, depending on the web site, as many as a couple of dozen special characters (some sites won't let you use certain characters). If you create a password with 6 digits, there are a million possibilities. If you use six lower-case letters, however, the number jumps to over 300 million. And if you use a combination of upper- and lower-case letters, you get nearly 20 billion different combinations. Add in special characters and the number of possibilities is in the hundreds of billions.
Combine this with tip #1 and use a longer password, and see these numbers expand faster than the universe during the Big Bang. If you only use letters and digits, an 8-character password can have as many as 200 trillion possibilities. Move to 12-character passwords and the number is so big I don't even know how to define it (it's 10^23, plus a bit).
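These counts are easy to check directly. Here is a minimal sketch using the same character-set sizes as above (10 digits, 26 lower-case letters, 52 mixed-case letters, 62 letters plus digits); special characters are left out because the number a given site allows varies:

```python
# Search space for a password: (charset size) ** (length).
def search_space(charset_size: int, length: int) -> int:
    """Number of distinct passwords of the given length."""
    return charset_size ** length

print(search_space(10, 6))   # 6 digits -> 1,000,000
print(search_space(26, 6))   # 6 lower-case letters -> 308,915,776
print(search_space(52, 6))   # 6 mixed-case letters -> 19,770,609,664
print(search_space(62, 8))   # 8 letters and digits -> ~218 trillion
print(search_space(62, 12))  # 12 letters and digits -> ~3.2 x 10^21
```

Because the length is an exponent, each extra character multiplies the search space by the whole charset size, which is why longer beats merely fancier.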
Tip #3: Create Unique Passwords
Here’s an easy way to create unique, memorable passwords that are impossible to crack. (Well, the NSA might be able to do it...) You should set a password like this for the user account on your Mac, because if anyone can get into your account, they can access a lot of your files and personal information.
To start with, you want something memorable. As an example, let's say you're a fan of the Game of Thrones TV series. You could create a password like this:
That's 13 characters, so it's fairly long, but it's all lower-case letters. Let's throw in a couple of upper-case letters to make it more complex, but not in the expected locations, such as the "g" or "t":
That's a bit better. But now, let's spice it up with a couple of digits. These have to still be easy to remember, right? How about this:
And the addition of even one special character makes this much, much harder to crack:
This isn't too hard to remember, but it could be a bit easier. So let's just use one capital letter, one digit, and one special character; that's more than enough to make it unbreakable:
You now have a password that is secure. According to the site HowSecureIsMyPassword.net, it would take about 423 million years for a desktop computer to crack this password.
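That figure can be sanity-checked with a rough model. The guesses-per-second rate below is an assumption for a single desktop machine, not a measured number, so treat the output as an order-of-magnitude estimate rather than a precise prediction:

```python
# Worst-case brute-force time = search space / guess rate.
# ASSUMPTION: 4e9 guesses per second is a ballpark for one desktop
# machine; real rates vary enormously with how the password is hashed.
GUESSES_PER_SECOND = 4e9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_crack(charset_size: int, length: int) -> float:
    """Years to exhaust every possible password of the given length."""
    return charset_size ** length / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# A 13-character password drawn from roughly 92 printable characters
# lands in the hundreds of millions of years, the same ballpark as
# the estimate quoted above.
print(f"{years_to_crack(92, 13):.2e} years")
```

By contrast, the same model puts an 8-character letters-and-digits password at well under a year, which is why length is the first tip.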
Tip #4: Use Your Keychain to Store Passwords, or Use a Password Manager
While you have a really secure password, you still don't want to use it on all your web sites. You can use Mac OS X's keychain to store passwords - this is what "remembers" passwords when you enter them in Safari, along with the passwords you use for Mail and other programs. You can also use one of many programs that store passwords, but make sure that the master password you use for this software is as strong as the example above.
Do you have any other tips for creating secure passwords?
The world’s thinnest nanowire is three atoms wide.
A PhD student has created the world’s thinnest nanowires, just three atoms wide, which could help in developing paper-thin electronic gadgets.
Junhao Lin, who studies at Vanderbilt University and is a visiting scientist at Oak Ridge National Laboratory (ORNL), made the nanowires from a special family of semiconducting materials that naturally form monolayers.
Called transition-metal dichalcogenides (TMDCs), the materials were made by combining the metals molybdenum or tungsten with either sulfur or selenium.
Sokrates Pantelides, a distinguished professor of physics and engineering at the same university, said the technique is an exciting new way to manipulate matter and should give a boost to efforts to create electronic circuits out of atomic monolayers. Monolayers are the thinnest possible form factor for solid objects.
"Junhao took this project and really ran with it," Pantelides added.
Lin said: "This will likely stimulate a huge research interest in monolayer circuit design.
"Because this technique uses electron irradiation, it can in principle be applicable to any kind of electron-based instrument, such as electron-beam lithography."
The new method can also help the development of three-dimensional circuits by stacking monolayers "like Lego blocks" and using electron beams to fabricate the wires that connect the stacked layers.
Junhao’s primary microscopy mentor and ORNL Wigner Fellow Wu Zhou said: "Junhao used a scanning transmission electron microscope (STEM) that is capable of focusing a beam of electrons down to a width of half an angstrom (about half the size of an atom) and aims this beam with exquisite precision."
access point:
A device that allows wireless-equipped computers and other devices to communicate with a wired network.
accessibility:
As specified in Section 508 of the 1998 Rehabilitation Act, the process of designing and developing Web sites and other technology that can be navigated and understood by all people, including those with visual, hearing, motor, or cognitive impairments. This type of design also can benefit people with older/slower software and hardware.
ActiveX:
A technology from Microsoft that links desktop applications to the World Wide Web. Using ActiveX tools, interactive web content can be created. Example: in addition to viewing Word and Excel documents from within a browser, ActiveX enables additional functionality such as animation, credit card transactions, or spreadsheet calculations.
address:
Identifies the location of an Internet resource. Examples: an e-mail address (email@example.com); a web address (http://www.dataprise.com); or an internet address (192.168.100.1).
alias:
A short, easy to remember name created for use in place of a longer, more complicated name; commonly used in e-mail applications. Also referred to as a "nickname".
anonymous FTP:
Archive sites where Internet users can log in and download files and programs without a special username or password. Typically, you enter "anonymous" as a username and your e-mail address as a password.
anti-spam:
To prevent e-mail spam, both end users and administrators of e-mail systems use various anti-spam techniques. Some of these techniques have been embedded in products, services and software to ease the burden on users and administrators. No one technique is a complete solution to the spam problem, and each has trade-offs between incorrectly rejecting legitimate e-mail and failing to reject all spam, along with the associated costs in time and effort. Dataprise Cloud-Based Anti-SPAM e-mail service lets users see only the e-mail they want and filters out the viruses and e-solicitations they don't want before they reach users' computers and mobile devices.
applet:
A program capable of running on any computer regardless of the operating system. Many applets can be downloaded from various sites on the Internet.
application:
A program designed for a specific purpose, such as word processing or graphic design.
ASCII file:
A file that can be opened and read by standard text editor programs (for example, Notepad or Simple Text) on almost any type of computer. Also referred to as "plain text files". Examples: documents saved in ASCII format within word processors like Microsoft Word or WordPerfect; e-mail messages created by a program like Outlook; or HTML files.
AT command set:
An industry standard set of commands beginning with the letters "AT" that are used to control a modem. Example: ATDT tells the modem to dial (D) using touch-tone dialing (T). ATDP specifies pulse dialing (P). Also referred to as the "Hayes Command Set".
attachment:
In this context, a file that is sent along with an e-mail message. ASCII (plain text) files may be appended to the message text, but other types of files are encoded and sent separately (common formats that can be selected include MIME, BinHex, and Uuencode).
authentication:
The process of identifying yourself and the verification that you're who you say you are. Computers where restricted information is stored may require you to enter your username and password to gain access.
B
backbone:
A term that is often used to describe the main network connections that comprise the Internet or other major network.
bandwidth:
A measurement of the amount of data that can be transmitted over a network at any given time. The higher the network's bandwidth, the greater the volume of data that can be transmitted.
BCP:
Business Continuity Plan; a set of documents, instructions, and procedures which enable a business to respond to accidents, disasters, emergencies, and/or threats without any stoppage or hindrance in its key operations. It is also called a business resumption plan, disaster recovery plan, or recovery plan.
BI:
Business Intelligence; a recognized industry term for organizational analytics, including historical, current, and predictive views of business operations.
binary file:
A file that cannot be read by standard text editor programs like Notepad or Simple Text. Examples: documents created by applications such as Microsoft Word or WordPerfect, or DOS files with the extension ".com" or ".exe".
BinHex:
A common file format for Macintosh computers; it enables a binary file to be transferred over the Internet as an ASCII file. Using a program like Stuffit, a file can be encoded and renamed with an ".hqx" extension. The recipient uses a similar program to decode the file.
bit:
A binary digit (either 0 or 1); it is the most basic unit of data that can be recognized and processed by a computer.
blended learning:
Instruction that combines aspects of both face-to-face (F2F) and online learning experiences. An increasing number of courses at OSU now offer this type of mix.
blog:
Refers to a weblog, a web page that contains journal-like entries and links that are updated daily for public viewing.
Bluetooth:
A wireless networking technology that allows users to send voice and data from one electronic device to another via radio waves.
BMP:
Bitmap file; a common image format on Windows computers. Files of this type usually have the suffix ".bmp" as part of their name.
bookmark:
A feature available in certain programs like Internet Explorer, Firefox, and Acrobat Reader; it is a shortcut you can use to get to a particular web page (IE and Firefox) or to a specified location within a document (PDF).
Boolean logic:
A form of algebra in which all values are reduced to either true/false, yes/no, on/off, or 1/0.
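A short illustration in Python, where the same reduction to true/false underlies both search queries and program logic (the variable names are arbitrary):

```python
# Boolean logic: every expression reduces to True (1) or False (0).
has_cat = True
has_dog = False

print(has_cat and has_dog)  # AND: true only if both are true -> False
print(has_cat or has_dog)   # OR: true if at least one is true -> True
print(not has_dog)          # NOT: inverts the value -> True
print(int(has_cat), int(has_dog))  # booleans map onto 1 and 0
```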
bounce:
A term applied to an e-mail message when it is returned to you as undeliverable.
bridge:
A device used for connecting two Local Area Networks (LANs) or two segments of the same LAN; bridges forward packets without analyzing or re-routing them.
broadband connection:
A high-speed Internet connection; at present, cable modems and DSL (Digital Subscriber Lines) are the two technologies that are most commonly available to provide such access.
browser:
A program used to access World Wide Web pages. Examples: Firefox, Safari or Internet Explorer.
buffer:
On a multitasking system, a certain amount of RAM that is allocated as a temporary holding area so that the CPU can manipulate data before transferring it to a particular device.
buffered:
Data that is collected but not made immediately available. Compare to a language translator who listens to a whole statement before repeating what the speaker has said rather than providing a word-by-word translation. Example: streaming media data viewable using a tool like RealMedia Player is buffered.
business continuity:
The activity performed by an organization to ensure that critical business functions will be available to customers, suppliers, regulators, and other entities that must have access to those functions. These activities include many daily chores such as project management, system backups, change control, and help desk. Business continuity is not something implemented at the time of a disaster; it refers to those activities performed daily to maintain service, consistency, and recoverability.
business continuity plan:
Business Continuity Plan, or "BCP"; a set of documents, instructions, and procedures which enable a business to respond to accidents, disasters, emergencies, and/or threats without any stoppage or hindrance in its key operations. It is also called a business resumption plan, disaster recovery plan, or recovery plan. See BCP above.
BYOD:
Bring Your Own Device; a business and technology policy that allows employees to bring in personal mobile devices and use these devices to access company data, email, etc.
byte:
A group of adjacent binary digits that a computer processes as a unit to form a character such as the letter "C". A byte consists of eight bits.
C
cable modem:
A special type of modem that connects to a local cable TV line to provide a continuous connection to the Internet. Like an analog modem, a cable modem is used to send and receive data, but the difference is that transfer speeds are much faster. A 56 Kbps modem can receive data at about 53 Kbps, while a cable modem can achieve about 1.5 Mbps (about 30 times faster). Cable modems attach to a 10Base-T Ethernet card inside your computer.
cache:
Refers to: 1) a region of computer memory where frequently accessed data can be stored for rapid access; or 2) an optional file on your hard drive where such data also can be stored. Examples: Internet Explorer and Firefox have options for defining both memory and disk cache. The act of storing data for fast retrieval is called "caching".
CAPTCHA:
A challenge-response test in the form of an image of distorted text the user must enter, used to determine whether the user is human or an automated bot.
carrier services:
As authorized agents for the biggest names in the telecommunications industry, Dataprise will deliver the most appropriate and cost-effective carrier solutions for your organization. Dataprise will design, implement and support all of your Data, Internet, Voice and Conferencing solutions.
case-sensitive:
Generally applies to a data input field; a case-sensitive restriction means lower-case letters are not equivalent to the same letters in upper-case. Example: "data" is not recognized as being the same word as "Data" or "DATA".
CBT:
Computer-Based Training; a type of training in which a student learns a particular application by using special programs on a computer. Sometimes referred to as "CAI" (Computer-Assisted Instruction) or "CBI" (Computer-Based Instruction), although these two terms may also be used to describe a computer program used to assist a teacher or trainer in classroom instruction.
CD-R drive:
A type of disk drive that can create CD-ROMs and audio CDs. CD-R drives that feature multi-session recording allow you to continue adding data to a compact disk, which is very important if you plan on using the drive for backup.
CD-ROM:
Compact Disk, Read-Only Memory; a high-capacity secondary storage medium. Information contained on a CD is read-only. Special CD-ROM mastering equipment available in the OIT Multimedia Lab can be reserved for creating new CDs.
CD-RW, CD-R disk:
A CD-RW disk allows you to write data onto it multiple times instead of just once (a CD-R disk). With a CD-R drive you can use a CD-RW disk just like a floppy or zip disk for backing up files, as well as for creating CD-ROMs and audio CDs.
CGI:
Common Gateway Interface; a mechanism used by most web servers to process data received from a client browser (e.g., a user). CGI scripts contain the instructions that tell the web server what to do with the data.
chat:
Real-time communication between two or more users via network-connected computers. After you enter a chat (or chat room), any user can type a message that will appear on the monitors of all the other participants. While most ISPs offer chat, it is not supported by OIT. However, the campus CMS (Carmen) supported by TELR does provide the capability for live chat among students participating in online courses.
client:
A program or computer that connects to and requests information from a server. Examples: Internet Explorer or Firefox. A client program also may be referred to as "client software" or "client-server software".
client-server technology:
Refers to a connection between networked computers in which the services of one computer (the server) are requested by the other (the client). Information obtained is then processed locally on the client computer.
cloud:
A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services); see cloud computing below.
cloud computing:
A general term used to describe Internet services such as social networking services (e.g., Facebook and Twitter), online backup services, and applications that run within a Web browser. Cloud computing also includes computer networks that are connected over the Internet for server redundancy or cluster computing purposes.
CMS:
Content Management System; the collection of procedures used to manage work flow in a collaborative environment. In a CMS, data can be defined as nearly anything: documents, movies, pictures, phone numbers, scientific data, and so forth. CMSs are frequently used for storing, controlling, revising, semantically enriching, and publishing documentation. Serving as a central repository, the CMS increases the version level of new updates to an already existing file. Version control is one of the primary advantages of a CMS.
compression:
The process of making a file smaller so that it will save disk space and transfer faster over a network. The most common compression utilities are Winrar for PC or compatible computers (.zip files) and Stuffit (.sit files) for Macintosh computers.
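The round trip can be sketched with Python's built-in zlib module (used here purely for illustration; it is a general-purpose compressor, not the Winrar or Stuffit tools named above):

```python
import zlib

# Repetitive data compresses well; random or already-compressed data would not.
original = b"the quick brown fox jumps over the lazy dog " * 100

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)  # restores the data exactly

print(len(original), "->", len(compressed), "bytes")
assert restored == original
```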
connect:
A term that commonly refers to accessing a remote computer; also a message that appears at the point when two modems recognize each other.
cookie:
A small piece of information you may be asked to accept when connecting to certain servers via a web browser. It is used throughout your session as a means of identifying you. A cookie is specific to, and sent only to, the server that generated it.
courseware:
Software designed specifically for use in a classroom or other educational setting.
CPU:
Central processing unit; the part of a computer that oversees all operations and calculations.
CSP:
Cloud Service Provider; a business model for providing cloud services.
CSS:
Cascading Style Sheet; a set of rules that define how web pages are displayed. Using CSS, designers can create rules that define how page elements are displayed.
cursor:
A special symbol that indicates where the next character you type on your screen will appear. You use your mouse or the arrow keys on your keyboard to move the cursor around on your screen.
cyberspace:
A term describing the world of computers and the society that uses them.
D
Desktop-as-a-Service - Also called virtual desktop or hosted desktop services, it is the outsourcing of a virtual desktop infrastructure (VDI) to a third- party service provider. To learn more please click here.
A special small program that performs a specific task; it may run all the time watching a system, or it can take action only when a task needs to be performed. Example: If an e-mail message is returned to you as undeliverable, you may receive a message from the mailer daemon.
A collection of information organized so that a computer application can quickly access selected information; it can be thought of as an electronic filing system. Traditional databases are organized by fields, records (a complete set of fields), and files (a collection of records). Alternatively, in a Hypertext database, any object (e.g., text, a picture, or a film) can be linked to any other object.
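The fields/records/files organization described above can be sketched with Python's built-in sqlite3 module and an in-memory database (the table and its contents are invented for illustration):

```python
# Sketch: fields and records in a traditional database, using
# Python's built-in sqlite3 module. Table and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
# "name" and "address" are the fields...
conn.execute("CREATE TABLE contacts (name TEXT, address TEXT)")
# ...and each inserted row is a record (a complete set of fields).
conn.execute("INSERT INTO contacts VALUES ('Jane Doe', '1 Main St')")
row = conn.execute("SELECT name, address FROM contacts").fetchone()
print(row)  # ('Jane Doe', '1 Main St')
conn.close()
```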
A data center (data centre / datacentre / datacenter) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.
Opposite of compressing a file; the process of restoring the file to its original size and format. The most common programs for decompressing files are Winrar for PC and compatible computers (.zip files) and Stuffit Expander (.sit files) for Macintosh computers.
The process of rewriting parts of a file to contiguous sectors on a hard drive to increase the speed of access and retrieval.
A process used to remove magnetism from a computer monitor. Note: flat-panel displays do not have a degauss button since magnetism doesn't build up in them.
On computers like IBM PC or compatibles and Macintoshes, the backdrop where windows and icons for disks and applications reside.
Dynamic Host Configuration Protocol; a protocol that lets a server on a local network assign temporary IP addresses to a computer or other network devices.
Sometimes referred to as a window; on a graphical user interface system, an enclosed area displayed by a program or process to prompt a user for entry of information in one or more boxes (fields).
A network component within Windows that enables you to connect to a dial up server via a modem. Users running dial-up connections on Windows computers must have Dial-Up Adapter installed and properly configured.
dial up connection:
A connection from your computer that goes through a regular telephone line. You use special communications software to instruct your modem to dial a number to access another computer system or a network. May also be referred to as "dial up networking".
Intellectual content which has been digitized and can be referenced or retrieved online; for example, PowerPoint slides, audio or video files, or files created in a word processing application, etc.
Sometimes referred to as digital imaging; the act of translating an image, a sound, or a video clip into digital format for use on a computer. Also used to describe the process of converting coordinates on a map to x,y coordinates for input to a computer. All data a computer processes must be digitally encoded as a series of zeroes and ones.
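The last point above, that all data must be encoded as zeroes and ones, can be shown in a one-line Python sketch:

```python
# Sketch: digitizing text means encoding it as zeroes and ones.
# The character "A" (code 65) becomes its 8-bit binary form.
char = "A"
bits = format(ord(char), "08b")
print(bits)  # 01000001
```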
Dual In-line Memory Module; a small circuit board that can hold a group of memory chips. A DIMM is capable of transferring 64 bits instead of the 32 bits each SIMM can handle. Pentium processors require a 64-bit path to memory so SIMMs must be installed two at a time as opposed to one DIMM at a time.
An area on a disk that contains files or additional divisions called "subdirectories" or "folders". Using directories helps to keep files organized into separate categories, such as by application, type, or usage.
Disaster recovery is the process, policies and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after a natural or human-induced disaster. Disaster recovery is a subset of business continuity. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions. Dataprise's specialist Disaster Recovery Consulting Team can help you devise a near-bulletproof Disaster Recovery Plan, so that you can have total peace of mind that your critical systems and processes are safe and can recover from any potential data loss situation.
disaster recovery planning
Also referred to as "DRP". Please see above explanation.
Another term for an online newsgroup or forum.
May also be referred to as "online learning" or "eLearning." A means of instruction that implies a course instructor and students are separated in space and perhaps, in time. Interaction may be synchronous (facilitated) or asynchronous (self-paced). Students can work with various course materials, or they may use tools like chat or discussion groups to collaborate on projects.
The goal of distance education; distance learning and distance education are often used interchangeably.
A means by which the illusion of new colors and shades is created by varying the pattern of dots; the more dither patterns a device or program supports, the more shades of gray it can represent. Also referred to as halftoning in the context of printing.
Domain Name System; a service for accessing a networked computer by name rather than by numerical (IP) address.
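As a minimal sketch, name-to-address resolution is what Python's standard socket library performs here; "localhost" is used because it resolves locally without needing an outside DNS server:

```python
# Sketch: resolving a name to its numerical (IP) address with the
# standard socket library. "localhost" resolves on the local machine.
import socket

ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```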
Part of an Internet address. The network hierarchy consists of domains and subdomains. At the top are a number of major categories (e.g., com, edu, gov); next are domains within these categories (e.g., ohio-state); and then there are subdomains. The computer name is at the lowest level of the hierarchy.
The process of transferring one or more files from a remote computer to your local computer. The opposite action is upload.
Dots per inch; a measure of a printer's resolution. The higher the number, the better the print quality. A minimum of 300 dpi usually is required for professional quality printing.
Disaster Recovery as a Service; a service that helps recover data in the event of a server failure or natural disaster.
drag and drop:
The act of clicking on one icon and moving it on top of another icon to initiate a specific action. Example: Dragging a file on top of a folder to copy it to a new location.
Digital Subscriber Line; an always on broadband connection over standard phone lines.
Digital video disk; a type of compact disc that holds far more information than the CD-ROMs that are used for storing music files. A DVD can hold a minimum of 4.7 GB, enough for a full-length movie. MPEG-2 is used to compress video data for storage on a DVD. DVD drives are backward-compatible and can play CD-ROMs.
DVD-RW, DVD-R disk:
A DVD-RW disk allows you to write data onto it multiple times instead of just once like on a DVD-R disk. A DVD disk can hold a minimum of 4.7GB which is enough to store a full-length movie. Other uses for DVDs include storage for multimedia presentations that include both sound and graphics.
Extensible Authentication Protocol; a general protocol for authentication that also supports multiple authentication methods.
Enhanced Graphics Adapter; a card (or board) usually found in older PCs that enables the monitor to display 640 pixels horizontally and 350 vertically.
Electronic learning; applies to a wide scope of processes including Web-based learning, computer-based instruction, virtual classrooms, and digital collaboration. Content may be delivered in a variety of ways including via the Internet, satellite broadcast, interactive TV, and DVD- or CD-ROMs.
Electronic mail; the exchange of messages between users who have access to either the same system or who are connected via a network (often the Internet). If a user is not logged on when a new message arrives, it is stored for later retrieval.
Email archiving is typically a stand-alone IT application that integrates with an enterprise email server, such as Microsoft Exchange. In addition to simply accumulating email messages, these applications index and provide quick, searchable access to archived messages independent of the users of the system, using different technical methods of implementation. The reasons a company may opt to implement an email archiving solution include protection of mission-critical data, record retention for regulatory requirements or litigation, and reducing production email server load. Dataprise Cloud-based e-mail archiving service offers you the latest storage technologies in a secure, redundant and easy-to-use format. We take care of all the fine details, from configuring our archiving software to automatically transferring the files to our secure remote servers.
A combination of keyboard characters meant to represent a facial expression. Frequently used in electronic communications to convey a particular meaning, much like tone of voice is used in spoken communications. Examples: the characters :-) for a smiley face or ;-) for a wink.
Refers to the ability of a program or device to imitate another program or device; communications software often include terminal emulation drivers to enable you to log on to a mainframe. There also are programs that enable a Mac to function as a PC.
The manipulation of data to prevent accurate interpretation by all but those for whom the data is intended.
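As a toy sketch only, a XOR cipher shows the basic idea of scrambling data so it cannot be accurately interpreted without the key; real systems use vetted algorithms such as AES, never a scheme this simple:

```python
# Toy sketch of encryption: XOR each byte with a key so the data is
# unreadable, then XOR again with the same key to recover it.
# NOT real cryptography -- for illustration only.
def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

secret = xor_cipher(b"meet at noon", key=42)
print(secret)                       # scrambled bytes
print(xor_cipher(secret, key=42))   # b'meet at noon'
```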
Encapsulated PostScript; a graphics format that describes an image in the PostScript language.
A popular network technology that enables data to travel at 10 megabits per second. Campus microcomputers connected to a network have Ethernet cards installed that are attached to Ethernet cabling. An Ethernet connection is often referred to as a "direct connection" and is capable of providing data transmission speeds over 500 Kbps.
An adapter card that fits into a computer and connects to Ethernet cabling; different types of adapter cards fit specific computers. Microcomputers connected to the campus network have some type of Ethernet card installed. Example: computers in campus offices or in dorm rooms wired for ResNet. Also referred to as "Ethernet adapter".
Also referred to as an expansion board; a circuit board you can insert into a slot inside your computer to give it added functionality. A card can replace an existing one or may be added in an empty slot. Some examples include sound, graphics, USB, Firewire, and internal modem cards.
A suffix preceded by a period at the end of a filename; used to describe the file type. Example: On a Windows computer, the extension ".exe" represents an executable file.
A cable connector that has holes and plugs into a port or interface to connect one device to another.
A single piece of information within a database (e.g., an entry for name or address). Also refers to a specific area within a dialog box or a window where information can be entered.
A collection of data that has a name (called the filename). Almost all information on a computer is stored in some type of file. Examples: data file (contains data such as a group of records); executable file (contains a program or commands that are executable); text file (contains data that can be read using a standard text editor).
Refers to: 1) a program that has the function of translating data into a different format (e.g., a program used to import or export data or a particular file); 2) a pattern that prevents non-matching data from passing through (e.g., email filters); and 3) in paint programs and image editors, a special effect that can be applied to a bit map.
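Sense (2) above, a pattern that prevents non-matching data from passing through, can be sketched with Python's built-in filter function (the message subjects are invented):

```python
# Sketch: a filter passes only matching data through. Here, any
# subject containing "spam" is blocked, like a simple email filter.
messages = ["spam offer", "meeting notes", "spam again", "lunch?"]
kept = list(filter(lambda subject: "spam" not in subject, messages))
print(kept)  # ['meeting notes', 'lunch?']
```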
A type of directory service on many UNIX systems. Queries take the format firstname_lastname (e.g., jane_doe) or, for more complete information, firstname.lastname (e.g., jane.doe).
A method of preventing unauthorized access to or from a particular network; firewalls can be implemented in hardware, software, or both.
A way to connect different pieces of equipment so they can quickly and easily share information. FireWire (also referred to as IEEE1394 High Performance Serial Bus) is very similar to USB. It preceded the development of USB when it was originally created in 1995 by Apple. FireWire devices are hot pluggable, which means they can be connected and disconnected any time, even with the power on. When a new FireWire device is connected to a computer, the operating system automatically detects it and prompts for the driver disk (thus the reference "plug-and play").
A small device that plugs into a computer's USB port and functions as a portable hard drive.
A type of memory that retains information even after power is turned off; commonly used in memory cards and USB flash drives for storage and transfer of data between computers and other digital products.
An area on a hard disk that contains a related set of files or alternatively, the icon that represents a directory or subdirectory.
A complete assortment of letters, numbers, and symbols of a specific size and design. There are hundreds of different fonts ranging from businesslike type styles to fonts composed only of special characters such as math symbols or miniature graphics.
A feature of some web browsers that enables a page to be displayed in separate scrollable windows. Frames can be difficult to translate for text-only viewing via ADA guidelines, so their use is increasingly being discouraged.
Copyrighted software available for downloading without charge; unlimited personal usage is permitted, but you cannot do anything else without express permission of the author. Contrast to shareware; copyrighted software which requires you to register and pay a small fee to the author if you decide to continue using a program you download.
The scattering of parts of the same disk file over different areas of a disk; fragmentation occurs as files are deleted and new ones are added.
File Transfer Protocol; a method of exchanging files between computers via the Internet. A program like WS_FTP for IBM PC or compatibles or Fetch for Macintosh is required. Files can contain documents or programs and can be ASCII text or binary data.
Graphics Interchange Format; a format for a file that contains a graphic or a picture. Files of this type usually have the suffix ".gif" as part of their name. Many images seen on web pages are GIF files.
gigabyte (Gig or GB):
1024 x 1024 x 1024 (2 to the 30th power) bytes; it's usually sufficient to think of a gigabyte as approximately one billion bytes or 1000 megabytes.
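The exact and approximate figures above work out as follows:

```python
# The byte units defined in this glossary, computed exactly.
kilobyte = 2 ** 10            # 1,024 bytes
megabyte = 2 ** 20            # 1,024 x 1,024
gigabyte = 2 ** 30            # 1,024 x 1,024 x 1,024
print(gigabyte)               # 1073741824 -- roughly one billion bytes
print(gigabyte // megabyte)   # 1024 megabytes per gigabyte
```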
Global Positioning System; a collection of Earth-orbiting satellites. In a more common context, GPS actually refers to a GPS receiver which uses a mathematical principle called "trilateration" that can tell you exactly where you are on Earth at any moment.
Greyware (or grayware) refers to malicious software or code that is considered to fall in the "grey area" between normal software and a virus. Greyware is a term under which other malicious or annoying software, such as adware, spyware, trackware, other malicious code, and malicious shareware, falls.
Graphical user interface; a mouse-based system that contains icons, drop-down menus, and windows where you point and click to indicate what you want to do. All new Windows and Macintosh computers currently being sold utilize this technology.
The initial negotiation period immediately after a connection is established between two modems. This is when the modems agree about how the data will be transmitted (e.g., error correction, packet size, etc.). The set of rules they agree on is called the protocol.
A storage device that holds large amounts of data, usually in the range of hundreds to thousands of megabytes. Although usually internal to the computer, some types of hard disk devices are attached separately for use as supplemental disk space. "Hard disk" and "hard drive" often are used interchangeably but technically, hard drive refers to the mechanism that reads data from the disk.
The physical components of a computer including the keyboard, monitor, disk drive, and internal chips and wiring. Hardware is the counterpart of software.
The portion of an e-mail message or a network newsgroup posting that precedes the body of the message; it contains information like who the message is from, its subject, and the date. A header also is the portion of a packet that precedes the actual data and contains additional information the receiver will need.
A help desk is an information and assistance resource that troubleshoots problems with computers or similar products. Corporations often provide help desk support to their employees and to their customers via a toll-free number, website and/or e-mail. Dataprise offers 3 types of help desk service: 24 x 7 Support365™, Outsourced and private labeled.
A program used for viewing multimedia files that your web browser cannot handle internally; files using a helper application must be moved to your computer before being shown or played. Contrast to a plug-in which enables you to view the file over the Internet without first downloading it.
A document you access using a web browser like Firefox or Internet Explorer. It usually refers to the first page of a particular web site; it also is the page that automatically loads each time you start your browser.
A computer accessed by a user working at a remote location. Also refers to a specific computer connected to a TCP/IP network like the Internet.
HyperText Markup Language; a language used for creating web pages. Various instructions and sets of tags are used to define how the document will look.
HyperText Transfer Protocol; a set of instructions that defines how a web server and a browser should interact. Example: When you open a location (e.g., enter a URL) in your browser, what actually happens is an HTTP command is sent to the web server directing it to fetch and return the requested web page.
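As an illustrative sketch, the HTTP command the browser sends in that example is plain text; the host name below is the reserved example.com documentation domain:

```python
# Sketch: the text of the HTTP command a browser sends when you open
# http://www.example.com/ in your browser. No network access needed
# to see the format; it is just structured text.
request = (
    "GET / HTTP/1.1\r\n"        # fetch the page "/" using HTTP 1.1
    "Host: www.example.com\r\n"  # which site on this server
    "\r\n"                       # blank line ends the request
)
print(request)
```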
Connects one piece of information (anchor) to a related piece of information (anchor) in an electronic document. Clicking on a hyperlink takes you directly to the linked destination, which can be within the same document or in an entirely different document. Hyperlinks are commonly found on web pages, Word documents and PDF files.
Data that contains one or more links to other data; commonly seen in web pages and in online help files. Key words usually are underlined or highlighted. Example: If you look for information about "Cats" in a reference book and see a note that says "Refer also to Mammals" the two topics are considered to be linked. In a hypertext file, you click on a link to go directly to the related information.
A hypervisor, also called virtual machine manager (VMM), is one of many hardware virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a host computer. It is so named because it is conceptually one level higher than a supervisory program. The hypervisor presents to the guest operating systems a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are installed on server hardware whose only task is to run guest operating systems. Non-hypervisor virtualization systems are used for similar tasks on dedicated server hardware, but also commonly on desktop, portable and even handheld computers.
Infrastructure as a Service; in the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources.
On a system like Windows or Macintosh that uses a graphical user interface (GUI), a small picture or symbol that represents some object or function. Examples: a file folder for a directory; a rectangle with a bent corner for a file; or a miniature illustration for a program.
Internet Connection Sharing; a feature in Windows that, when enabled, allows you to connect computers on your home network to the Internet via one computer.
IEEE 1394 port:
An interface for attaching high-speed serial devices to your computer; IEEE 1394 connectors support plug and play.
A graphic overlay that contains more than one area (or hot spot) which is clickable and links to another web page or anchor. Image maps provide an alternative to text links for directing the user to additional information.
Internet Message Access Protocol. A method of accessing e-mail messages on a server without downloading them to your local hard drive; this is the main difference between IMAP and POP3, which requires messages to be downloaded to a user's hard drive before the message can be read.
A worldwide network based on the TCP/IP protocol that can connect almost any make or model of popular computers from micros to supercomputers. Special programs called "clients" enable users with a network connection to do things like process e-mail or browse web sites using the familiar interface of a desktop computer.
A client program from Microsoft that comes pre-installed on most new PC or compatible computers; enables you to browse the World Wide Web.
An audio broadcasting service transmitted via the Internet; broadcasts consist of a continuous stream. A drawback is the inability to control selection as you can when listening to traditional radio broadcasting.
Internet Protocol address; every computer connected to the Internet has a unique identifying number. Example: 192.168.100.2.
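The example address above can be inspected with Python's standard ipaddress module, a minimal sketch:

```python
# Sketch: checking and inspecting an IP address with Python's
# standard ipaddress module, using the example address above.
import ipaddress

addr = ipaddress.ip_address("192.168.100.2")
print(addr.version)     # 4 -- a traditional IPv4 address
print(addr.is_private)  # True -- 192.168.x.x is a private range
```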
Internet Relay Chat; a system that enables two or more Internet users to conduct online discussions in real time.
Interrupt request; refers to a number associated with a serial port on a PC or compatible computer. It usually can be changed by flipping a dip switch. Occasionally, when you're using a modem to connect to the Internet, you may need to adjust the IRQ number assigned to the serial port which connects the modem to avoid conflicts with another device like your mouse.
Internet Service Provider; an organization or company that provides Internet connectivity.
An IT Assessment is the practice of gathering information on part or all of an IT network infrastructure, which is then presented in a detailed report. This report typically analyzes the current state or health of technology or services and identifies areas needing improvement or preparation for some type of system or application upgrade. An IT Assessment can be performed in-house or outsourced to an IT vendor. Dataprise has developed a comprehensive assessment process that includes conducting thorough, in-depth reviews of all of your critical technology areas, evaluating them against best practices, and then providing you with a roadmap to better leverage your IT as a competitive advantage.
Independent Verification and Validation (IV&V) is the process of checking that a project, service, or system meets specifications and that it fulfills its intended purpose. If you’ve recently implemented a new technology solution, you may want an independent party to assess the quality of the work.
A general purpose programming language commonly used in conjunction with web pages that feature animation. Small Java applications are called Java applets; many can be downloaded and run on your computer by a Java-compatible browser like Firefox or Internet Explorer.
A publicly available scripting language that shares many of the features of Java; it is used to add dynamic content (various types of interactivity) to web pages.
Joint Photographic Experts Group; a graphics format which compresses an image to save space. Most images embedded in web pages are GIFs, but sometimes the JPEG format is used (especially for detailed graphics or photographs). In some cases, you can click on the image to display a larger version with better resolution.
A word processing format in which text is formatted flush with both the left and right margins. Other options include left justified (text is lined up against the left margin) and right justified (text is lined up against the right margin).
An abbreviation for kilobyte; it contains 1,024 bytes; in turn 1,024 kilobytes is equal to one megabyte.
Kilobits per second; a measure of data transfer speed; one Kbps is 1,000 bits per second. Example: a 28.8 Kbps modem.
An authentication system developed at the Massachusetts Institute of Technology (MIT); it enables the exchange of private information across an open network by assigning a unique key called a "ticket" to a user requesting access to secure information.
The amount of space between characters in a word; in desktop publishing, it is typically performed on pairs of letters or on a short range of text to fine-tune the character spacing.
Most often refers to a feature of text editing and database management systems; a keyword is an index entry that correlates with a specific record or document.
kilobyte (K, KB, or Kb):
1,024 (2 to the 10th power) bytes; often used to represent one thousand bytes. Example: a 720K diskette can hold approximately 720,000 bytes (or characters).
A database where information common to a particular topic is stored online for easy reference; for example, a frequently-asked questions (FAQ) list may provide links to a knowledge base.
Local area network; a network that extends over a small area (usually within a square mile or less). Connects a group of computers for the purpose of sharing resources such as programs, documents, or printers. Shared files often are stored on a central file server.
A type of printer that produces exceptionally high quality copies. It works on the same principle as a photocopier, placing a black powder onto paper by using static charge on a rolling drum.
The vertical space between lines of text on a page; in desktop publishing, you can adjust the leading to make text easier to read.
learning management system (LMS):
Software used for developing, using, and storing course content of all types. Information within a learning management system often takes the form of learning objects (see "learning object" below).
A chunk of course content that can be reused and independently maintained. Although each chunk is unique in its content and function, it must be able to communicate with learning systems using a standardized method not dependent on the system. Each chunk requires a description to facilitate search and retrieval.
Another name for a hyperlink.
An open-source operating system that runs on a number of hardware platforms including PCs and Macintoshes. Linux is freely available over the Internet.
A program that manages electronic mailing lists; OIT is responsible for the ListProcessor software and also handles requests from the OSU community for new mailing lists.
An electronic mailing list; it provides a simple way of communicating with a large number of people very quickly by automating the distribution of electronic mail. At OSU, mailing lists are used not only for scholarly communication and collaboration, but also as a means of facilitating and enhancing classroom education.
log in, log on:
The process of entering your username and password to gain access to a particular computer; e.g., a mainframe, a network or secure server, or another system capable of resource sharing.
Metal-as-a-Service; The dynamic provisioning and deployment of whole physical servers, as opposed to the provisioning of virtual machines.
Media Access Control; The hardware address of a device connected to a shared network.
A personal computer introduced in the mid-1980s as an alternative to the IBM PC. Macintoshes popularized the graphical user interface and the 3 1/2 inch diskette drive.
A networked computer dedicated to supporting electronic mail. You use a client program like Microsoft Outlook for retrieving new mail from the server and for composing and sending messages.
A collection of e-mail addresses identified by a single name; mailing lists provide a simple way of corresponding with a group of people with a common interest or bond. There are two main types of lists: 1) one you create within an e-mail program like Outlook that contains addresses for two or more individuals to whom you frequently send the same message; and 2) a Listserve type that requires participants to be subscribed (e.g., a group of collaborators, a class of students, or often just individuals interested in discussing a particular topic).
The amount of memory physically installed in your computer. Also referred to as "RAM".
A very large computer capable of supporting hundreds of users running a variety of different programs simultaneously. Often the distinction between small mainframes and minicomputers is vague and may depend on how the machine is marketed.
A cable connector that has pins and plugs into a port or interface to connect one device to another.
Software programs designed to damage or do other unwanted actions on a computer; common examples of malware include viruses, worms, trojan horses, and spyware.
A Managed Workstation reduces downtime, improves maintenance, increases productivity and data security through an effective blend of Help Desk and on-site support and centralized deployment of software patches and virus protection updates. Dataprise can deliver expert support at the workstation level for all of your users, at any location. Using our DesktopStreaming™ live online support technology, our highly qualified certified technical staff, working remotely, are able to see exactly what is happening on a user’s computer screen, allowing us to quickly isolate issues and begin remediation.
Messaging Application Programming Interface; a system built into Microsoft Windows that enables different e-mail programs to interface to distribute e-mail. When both programs are MAPI-enabled, they can share messages.
Mobile Device Management; Any routine or tool intended to distribute applications, data, and configuration settings to mobile communications devices. The intent of MDM is to optimize the functionality and security of a mobile communications network. MDM must be part of a coherent BYOD strategy.
megabyte (Meg or MB):
1,024 x 1,024 (2 to the 20th power) bytes; it's usually sufficient to think of a megabyte as one million bytes.
MHz or mHz:
Megahertz; a measurement of a microprocessor's speed; one MHz represents one million cycles per second. The speed determines how many instructions per second a microprocessor can execute. The higher the megahertz, the faster the computer.
In a graphical user interface, a bar containing a set of titles that appears at the top of a window. Once you display the contents of a menu by clicking on its title, you can select any active command (e.g., one that appears in bold type and not in a lighter, gray type).
Microsoft Exchange Server is the server side of a client–server, collaborative application product developed by Microsoft. It is part of the Microsoft Servers line of server products and is used by enterprises using Microsoft infrastructure products. Exchange's major features consist of electronic mail, calendaring, contacts and tasks; support for mobile and web-based access to information; and support for data storage. Dataprise has a 100% hosted Exchange solution that includes clustered and redundant Microsoft Exchange servers that provide more than enough horsepower to support all of your organization’s messaging needs. And we handle the entire set-up and configuration for you.
A group of operating systems for PC or compatible computers; Windows provides a graphical user interface so you can point and click to indicate what you want to do.
Multipurpose Internet Mail Extensions; a protocol that enables you to include various types of files (text, audio, video, images, etc.) as an attachment to an e-mail message.
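Attaching a file to a message, as described above, can be sketched with Python's standard email package (the addresses and filename are made up for illustration):

```python
# Sketch: building a MIME e-mail message with a text attachment,
# using Python's standard email package. All values are invented.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "jane@example.com"
msg["To"] = "joe@example.com"
msg["Subject"] = "Report attached"
msg.set_content("See the attached file.")

# Adding an attachment turns the message into a multipart MIME message.
msg.add_attachment("quarterly numbers\n", filename="report.txt")

print(msg.get_content_type())  # multipart/mixed
```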
A device that enables a computer to send and receive information over a normal telephone line. Modems can either be external (a separate device) or internal (a board located inside the computer's case) and are available with a variety of features such as error correction and data compression.
A person who reviews and has the authority to block messages posted to a supervised or "moderated" network newsgroup or online community.
monitor:
The part of a computer that contains the screen where messages to and from the central processing unit (CPU) are displayed. Monitors come in a variety of sizes and resolutions. The higher the number of pixels a screen is capable of displaying, the better the resolution. Sometimes may be referred to as a CRT.
mouse:
A handheld device used with a graphical user interface system. Common mouse actions include: 1) clicking the mouse button to select an object or to place the cursor at a certain point within a document; 2) double-clicking the mouse button to start a program or open a folder; and 3) dragging (holding down) the mouse button and moving the mouse to highlight a menu command or a selected bit of text.
MPEG:
Moving Picture Experts Group; a high-quality video format commonly used for files found on the Internet. Usually a special helper application is required to view MPEG files.
MRB:
Managed Remote Backup; a service that provides users with a system for the backup, storage, and recovery of data using cloud computing.
MSP:
Managed Service Provider; a business model for providing information-technology services.
multimedia:
The delivery of information, usually to a personal computer, in a combination of different formats including text, graphics, animation, audio, and video.
multitasking:
The ability of a CPU to perform more than one operation at the same time; Windows and Macintosh computers are multitasking in that each program that is running uses the CPU only for as long as needed and then control switches to the next task.
NaaS:
Network as a Service; a category of cloud services in which the capability provided to the user is the use of network/transport connectivity services and/or inter-cloud network connectivity services.
name server:
A computer that runs a program for converting Internet domain names into the corresponding IP addresses and vice versa.
NAT:
Network Address Translation; a standard that enables a LAN to use a set of IP addresses for internal traffic and a single IP address for communications with the Internet.
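The core of NAT is a translation table mapping internal address/port pairs onto ports of the single public address. A toy sketch (the addresses and port range are made up; real NAT also tracks connection state and timeouts):

```python
import itertools

PUBLIC_IP = "203.0.113.5"   # example address from the documentation range

class NatTable:
    """Toy NAT: map (internal_ip, internal_port) to a unique external port."""
    def __init__(self):
        self._next_port = itertools.count(40000)   # arbitrary starting port
        self._map = {}

    def translate(self, internal_ip, internal_port):
        key = (internal_ip, internal_port)
        if key not in self._map:                   # first packet: allocate a port
            self._map[key] = next(self._next_port)
        return (PUBLIC_IP, self._map[key])

nat = NatTable()
print(nat.translate("192.168.1.10", 5000))   # ('203.0.113.5', 40000)
print(nat.translate("192.168.1.11", 5000))   # ('203.0.113.5', 40001)
print(nat.translate("192.168.1.10", 5000))   # the existing mapping is reused
```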
network:
A group of interconnected computers capable of exchanging information. A network can be as few as several personal computers on a LAN or as large as the Internet, a worldwide network of computers.
network adapter:
A device that connects your computer to a network; also called an adapter card or network interface card.
network hub:
A common connection point for devices on a network.
NNTP:
Network News Transport Protocol; the protocol used for posting, distributing, and retrieving network news messages.
network monitoring:
Dataprise Cloud-based Network Monitoring service can configure and remotely monitor all of your important network systems (e-mail, servers, routers, available disk space, backup applications, critical virus detection, and more). If our system detects a problem, it alerts the Dataprise Technical Support Center so we can take corrective action. Depending on prearranged instructions from your own network engineers, we'll correct the problem immediately, wait until the next business day, or simply notify you of the issue.
network security:
Network security consists of the provisions and policies adopted by a network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and network-accessible resources; it governs the authorization of access to data in a network. Dataprise uses state-of-the-art network security techniques while providing authorized personnel access to important files and applications. Every organization's needs are different and hackers are always adapting their techniques, so we are extremely serious about staying up to date with the latest network security tools, threats and industry developments.
OCR:
Optical character recognition; the act of using a visual scanning device to read text from hard copy and translate it into a format a computer can access (e.g., an ASCII file). OCR systems include an optical scanner for reading text and sophisticated software for analyzing images.
On-Cloud:
Dataprise realizes that businesses are moving more and more of their critical infrastructure to Cloud-based providers. 'On-Cloud' is currently our own term coined for providing management and support for your Cloud-based systems and processes.
on-site support:
At-place-of-work-or-business support, typically provided by a technically qualified individual.
online:
A term that has commonly come to mean "connected to the Internet". It also is used to refer to materials stored on a computer (e.g., an online newsletter) or to a device like a printer that is ready to accept commands from a computer.
OpenType:
OpenType is a format for scalable computer fonts. It was built on its predecessor TrueType, retaining TrueType's basic structure and adding many intricate data structures for prescribing typographic behavior. OpenType is a registered trademark of Microsoft Corporation.
PaaS:
Platform as a Service; in the PaaS model, cloud providers deliver a computing platform that typically includes an operating system, programming language execution environment, database, and web server.
packet:
A unit of transmission in data communications. The TCP/IP protocol breaks large data files into smaller chunks for sending over a network so that less data will have to be re-transmitted if errors occur.
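The chunking idea behind packets is simple to sketch (the 4-byte packet size below is arbitrary; real TCP segments are far larger):

```python
def packetize(data: bytes, size: int) -> list:
    """Split a byte stream into fixed-size chunks, mimicking how TCP
    segments a large transfer so only damaged chunks need resending."""
    return [data[i:i + size] for i in range(0, len(data), size)]

packets = packetize(b"hello world", 4)
print(packets)                              # [b'hell', b'o wo', b'rld']
print(b"".join(packets) == b"hello world")  # True: the receiver reassembles the stream
```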
palette:
The range of colors a computer or an application is able to display. Most newer computers can display as many as 16 million colors, but a given program may use only 256 of them. Also refers to a display box containing a set of related tools within a desktop publishing or graphics design program.
page:
Refers to an HTML document on the World Wide Web or to a particular web site; usually pages contain links to related documents (or pages).
parallel port:
An interface on a computer that supports transmission of multiple bits at the same time; almost exclusively used for connecting a printer. On IBM or compatible computers, the parallel port uses a 25-pin connector. Macintoshes have an SCSI port that is parallel, but more flexible in the type of devices it can support.
password:
A secret combination of characters used to access a secured resource such as a computer, a program, a directory, or a file; often used in conjunction with a username.
PC:
Usually refers to an IBM PC or compatible, or when used generically, to a "personal computer". In a different context, PC also is an abbreviation for "politically correct."
PDA:
Personal Digital Assistant; a small hand-held computer that in the most basic form, allows you to store names and addresses, prepare to-do lists, schedule appointments, keep track of projects, track expenditures, take notes, and do calculations. Depending on the model, you also may be able to send or receive e-mail; do word processing; play MP3 music files; get news, entertainment and stock quotes from the Internet; play video games; and have an integrated digital camera or GPS receiver.
PDF:
Portable Document Format; a type of formatting that enables files to be viewed on a variety of computers regardless of the program originally used to create them. PDF files retain the "look and feel" of the original document with special formatting, graphics, and color intact. You use a special program or print driver (Adobe Distiller or PDF Writer) to convert a file into PDF format.
peer-to-peer:
A type of connection between two computers; both perform computations, store data, and make requests from each other (unlike a client-server connection where one computer makes a request and the other computer responds with information).
Perl:
Practical Extraction and Report Language; a programming language that is commonly used for writing CGI scripts used by most servers to process data received from a client browser.
A method of setting up a computer or a program for multiple users. Example: In Windows, each user is given a separate "personality" and set of relevant files.
PGP:
Pretty good privacy; a technique for encrypting e-mail messages. PGP uses a public key to give to anyone who sends you messages and a private key you keep to decrypt messages you receive.
A type of directory service often referred to as a "phone book". When accessing this type of directory service, follow the directions from the particular site for looking up information.
phishing:
A con that scammers use to electronically collect personal information from unsuspecting users. Phishers send e-mails that appear to come from legitimate websites such as eBay, PayPal, or other banking institutions asking you to click on a link included in the email and then update or validate your information by entering your username and password and often even more information, such as your full name, address, phone number, social security number, and credit card number.
ping:
Packet Internet Groper; a utility used to determine whether a particular computer is currently connected to the Internet. It works by sending a packet to the specified IP address and waiting for a reply.
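True ping uses ICMP, which requires raw-socket privileges; a rough stand-in using Python's standard library probes a TCP port instead (it only succeeds against a port that is actually accepting connections):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Try to open a TCP connection and report success, a crude
    reachability check in the spirit of ping's send-and-wait-for-reply."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `is_reachable("www.example.com", 80)` returns True when the web server answers within the timeout.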
pixel:
Stands for one picture element (one dot on a computer monitor); commonly used as a unit of measurement.
plug-in:
A program used for viewing multimedia files that your web browser cannot handle internally; files using a plug-in do not need to be moved to your computer before being shown or played. Contrast to a helper application which requires the file to first be moved to your computer. Examples of plug-ins: Adobe Flash Player (for video and animation) and Quicktime (for streamed files over the Internet).
plug and play:
A set of specifications that allows a computer to automatically detect and configure a device and install the appropriate device drivers.
POP:
Post Office Protocol; a method of handling incoming electronic mail. Example: E-mail programs may use this protocol for storing your incoming messages on a special cluster of servers called pop.service.ohio-state.edu and delivering them when requested.
pop-up blocker:
Any application that disables the pop-up, pop-over, or pop-under ad windows that appear when you use a web browser.
post:
The act of sending a message to a particular network newsgroup.
PostScript:
A page description language primarily used for printing documents on laser printers; it is the standard for desktop publishing because it takes advantage of high resolution output devices. Example: A graphic design saved in PostScript format looks much better when printed on a 600 dpi printer than on a 300 dpi printer.
PostScript fonts:
Called outline or scalable fonts; with a single typeface definition, a PostScript printer can produce many other fonts. Contrast to non-PostScript printers that represent fonts with bitmaps and require a complete set for each font size.
PPP:
Point-to-Point Protocol; a type of connection over telephone lines that gives you the functionality of a direct ethernet connection.
program:
A set of instructions that tells a computer how to perform a specific task.
private cloud:
Private cloud (also called internal cloud or corporate cloud) is a term for a proprietary computing architecture that provides hosted services to a limited number of users behind a secure and robust infrastructure. A Dataprise private cloud solution is designed to offer the same features and benefits of shared cloud systems, but removes a number of objections to the cloud computing model, including control over enterprise and customer data, worries about security, and issues connected to regulatory compliance. Dataprise private clouds are designed for organizations that need or want more control over their data than they can get by using a third-party shared cloud service.
protocol:
A set of rules that regulate how computers exchange information. Example: error checking for file transfers or POP for handling electronic mail.
proxy server:
Refers to a special kind of server that functions as an intermediate link between a client application (like a web browser) and a real server. The proxy server intercepts requests for information from the real server and whenever possible, fills the request. When it is unable to do so, the request is forwarded to the real server.
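The intercept-or-forward logic can be sketched in a few lines; `fetch_from_origin` below is a stand-in for the real network request:

```python
class CachingProxy:
    """Minimal caching proxy: answer from the cache when possible,
    otherwise forward the request to the real (origin) server."""
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._cache = {}

    def get(self, url):
        if url not in self._cache:          # cache miss: forward to origin
            self._cache[url] = self._fetch(url)
        return self._cache[url]             # cache hit: answered locally

origin_calls = []
def origin(url):                            # stand-in for the real server
    origin_calls.append(url)
    return f"<html>content of {url}</html>"

proxy = CachingProxy(origin)
proxy.get("http://example.com/")
proxy.get("http://example.com/")            # second request never reaches origin
print(len(origin_calls))                    # 1
```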
public domain software:
Any non-copyrighted program; this software is free and can be used without restriction. Often confused with "freeware" (free software that is copyrighted by the author).
pull:
Frequently used to describe data sent over the Internet; the act of requesting data from another computer. Example: using your web browser to access a specific page. Contrast to "push" technology when data is sent to you without a specific request being made.
push:
Frequently used to describe data sent over the Internet; the act of sending data to a client computer without the client requesting it. Example: a subscriptions service that delivers customized news to your desktop. Contrast to browsing the World Wide Web which is based on "pull" technology; you must request a web page before it is sent to your computer.
QoS:
Quality of Service; the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. For example, a required bit rate, delay, jitter, packet dropping probability and/or bit error rate may be guaranteed. Quality of service guarantees are important if the network capacity is insufficient, especially for real-time streaming multimedia applications such as voice over IP, online games and IP-TV, since these often require a fixed bit rate and are delay sensitive, and in networks where capacity is a limited resource, for example in cellular data communication.
QuickTime:
A video format developed by Apple Computer commonly used for files found on the Internet; an alternative to MPEG. A special viewer program available for both IBM PC and compatibles and Macintosh computers is required for playback.
RAM:
Random Access Memory; the amount of memory available for use by programs on a computer. Also referred to as "main memory". Example: A computer with 8 MB RAM has approximately 8 million bytes of memory available. Contrast to ROM (read-only memory) that is used to store programs that start your computer and do diagnostics.
record:
A set of fields that contain related information; in database type systems, groups of similar records are stored in files. Example: a personnel file that contains employment information.
registry:
A database used by Windows for storing configuration information. Most 32-bit Windows applications write data to the registry. Although you can edit the registry, this is not recommended unless absolutely necessary because errors could disable your computer.
remote backup:
A remote, online, or managed backup service provides users with a system for the backup and storage of computer files. The Dataprise remote backup solution incorporates automatic data compression and secure data encryption, so your critical system data backs up safely and efficiently. For additional peace of mind, our backup service features proprietary dual tapeless backup protection, including fast incremental backup to a secure on-site hard drive and a second backup to our carrier-grade data center. Our remote backup service is completely automated and immensely secure. You'll never have to think about the safety of your data again.
Remote Desktop:
A Windows feature that allows you to have access to a Windows session from another computer in a different location (XP and later).
remote login:
An interactive connection from your desktop computer over a network or telephone lines to a computer in another location (remote site).
See: "network monitoring" or click here.
See: "help desk" or click here.
Red, green, and blue; the primary colors that are mixed to display the color of pixels on a computer monitor. Every color of emitted light can be created by combining these three colors in varying levels.
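Mixing the three channels is exactly what the familiar hex color notation records; each channel gets one byte (0-255):

```python
def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Pack red, green, and blue levels (0-255 each) into hex color notation."""
    return f"#{r:02x}{g:02x}{b:02x}"

print(rgb_to_hex(255, 0, 0))      # '#ff0000' -> pure red
print(rgb_to_hex(255, 255, 255))  # '#ffffff' -> white (all channels at full level)
print(256 ** 3)                   # 16777216: the "16 million colors" of 24-bit displays
```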
RJ-45 connector:
An eight-wire connector used for connecting a computer to a local-area network. May also be referred to as an Ethernet connector.
ROM:
Read-Only Memory; a special type of memory used to store programs that start a computer and do diagnostics. Data stored in ROM can only be read and is retained even when your computer is turned off. Early personal computers had only a few thousand bytes of ROM. Contrast to RAM (random access or main memory), which is the amount of memory available for use by programs on your computer.
router:
A device used for connecting two Local Area Networks (LANs); routers can filter packets and forward them according to a specified set of criteria.
RTF:
Rich Text Format; a type of document formatting that enables special characteristics like fonts and margins to be included within an ASCII file. May be used when a document must be shared among users with different kinds of computers (e.g., IBM PC or compatibles and Macintoshes).
SaaS:
Software as a Service; a software delivery model in which software and associated data are centrally hosted on the cloud. SaaS is typically accessed by users using a thin client via a web browser.
safe mode:
A way of starting your Windows computer that can help you diagnose problems; access is provided only to basic files and drivers.
SAN:
A storage area network (SAN) is a dedicated storage network that provides access to consolidated, block level storage. SANs primarily are used to make storage devices (such as disk arrays, tape libraries, and optical jukeboxes) accessible to servers so that the devices appear as locally attached to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the regular network by regular devices.
SATA:
Serial Advanced Technology Attachment or Serial ATA. An interface used to connect ATA hard drives to a computer's motherboard that provides a better, more efficient interface; Serial ATA is likely to replace the previous standard, Parallel ATA (PATA), which has become dated.
satellite transmission:
A method of data transmission; the sender beams data up to an orbiting satellite and the satellite beams the data back down to the receiver.
screen reader:
A software program that translates text on a Web page into audio output; typically used by individuals with vision impairment.
scroll bar:
In a graphical user interface system, the narrow rectangular bar at the far right of windows or dialog boxes. Clicking on the up or down arrow enables you to move up and down through a document; a movable square indicates your location in the document. Certain applications also feature a scroll bar along the bottom of a window that can be used to move from side-to-side.
search engine:
A tool that searches documents by keyword and returns a list of possible matches; most often used in reference to programs such as Google that are used by your web browser to search the Internet for a particular topic.
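Keyword lookup is typically backed by an inverted index, mapping each word to the documents containing it; a toy sketch with made-up documents:

```python
from collections import defaultdict

docs = {                       # made-up document collection
    1: "router firmware update guide",
    2: "modem and router setup",
    3: "printer driver download",
}

# Build the inverted index: word -> set of matching document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

print(sorted(index["router"]))   # [1, 2]
print(sorted(index["printer"]))  # [3]
```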
secure server:
A special type of file server that requires authentication (e.g., entering a valid username and password) before access is granted.
security token:
A small device used to provide an additional level of authorization to access a particular network service; the token itself may be embedded in some type of object like a key fob or on a smart card. Also referred to as an authentication token.
Section 508:
A 1998 amendment to the Workforce Rehabilitation Act of 1973; it states after June 25, 2001, all electronic and information technology developed, purchased, or used by the federal government must be accessible to those with disabilities. Refer to the Section 508 website for more information.
self-extracting file:
A type of compressed file that you can execute (e.g., double-click on the filename) to begin the decompression process; no other decompression utility is required. Example: on IBM PC or compatibles, certain files with an ".exe" extension and on Macintoshes, all files with a ".sea" extension.
serial port:
An interface on a computer that supports transmission of a single bit at a time; can be used for connecting almost any type of external device including a mouse, a modem, or a printer.
server:
A computer that is responsible for responding to requests made by a client program (e.g., a web browser or an e-mail program) or computer. Also referred to as a "file server".
shareware:
Copyrighted software available for downloading on a free, limited trial basis; if you decide to use the software, you're expected to register and pay a small fee. By doing this, you become eligible for assistance and updates from the author. Contrast to public domain software which is not copyrighted or to freeware which is copyrighted but requires no usage fee.
signature:
A file containing a bit of personal information that you can set to be automatically appended to your outgoing e-mail messages; many network newsreaders also have this capability. Large signatures over five lines generally are frowned upon.
SIMM:
Single In-line Memory Module; a small circuit board that can hold a group of memory chips; used to increase your computer's RAM in increments of 1, 2, 4, or 16 MB.
SMTP:
Simple Mail Transfer Protocol; a method of handling outgoing electronic mail.
software:
Any program that performs a specific function. Examples: word processing, spreadsheet calculations, or electronic mail.
spam:
Email spam, also known as junk email or unsolicited bulk email (UBE), is a subset of spam that involves nearly identical messages sent to numerous recipients by email. Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. Spammers collect email addresses from chatrooms, websites, customer lists, newsgroups, and viruses which harvest users' address books, and are sold to other spammers. They also use a practice known as “email appending” or "epending" in which they use known information about their target (such as a postal address) to search for the target's email address. Also see "Anti-Spam".
SSID:
Service Set Identifier; a name that identifies a wireless network.
streaming (streaming media):
A technique for transferring data over the Internet so that a client browser or plug-in can start displaying it before the entire file has been received; used in conjunction with sound and pictures. Example: The Flash Player plug-in from Adobe Systems gives your computer the capability for streaming audio; RealPlayer is used for viewing sound and video.
spyware:
Any software that covertly gathers user information, usually for advertising purposes, through the user's Internet connection.
subdirectory:
An area on a hard disk that contains a related set of files; on IBM PC or compatibles, a level below another directory. On Macintoshes, subdirectories are referred to as folders.
Support365:
Dataprise's unique Support365™ plans offer the best solution for organizations that need comprehensive IT support but either don't have the time or skill-set, or simply don't want the burden of managing an IT person, department, or in some situations an entire IT division. By choosing Support365™ we make it easier than ever for you to understand, budget, and manage your monthly IT requirements. It's basically like having your own virtual IT department.
SVGA:
Super VGA (Video Graphics Array); a set of graphics standards for a computer monitor that offers greater resolution than VGA. There are several different levels, including 800 x 600, 1024 x 768, 1280 x 1024, and 1600 x 1200 pixels. Although each supports a palette of 16 million colors, the number of simultaneous colors is dependent on the amount of video memory installed in the computer.
T-1 line:
A dedicated phone connection supporting data rates of 1.544Mbits per second; T-1 lines are a popular leased line option for businesses connecting to the Internet and for Internet Service Providers connecting to the Internet backbone. Sometimes referred to as a DS1 line.
T-3 line:
A dedicated phone connection supporting data rates of about 43 Mbps; T-3 lines are used mainly by Internet Service Providers connecting to the Internet backbone and for the backbone itself. Sometimes referred to as a DS3 line.
10Base-T:
An adaptation of the Ethernet standard for Local Area Networks that refers to running Ethernet over twisted pair wires. Students planning on using ResNet from a residence hall must be certain to use an Ethernet adapter that is 10Base-T compatible and not BNC (used with 10Base-2 Ethernet systems).
table:
With reference to web design, a method for formatting information on a page. Use of tables and the cells within also provide a way to create columns of text. Use of tables vs frames is recommended for helping to make your web site ADA-compliant.
TCP/IP:
Transmission Control Protocol/Internet Protocol; an agreed upon set of rules that tells computers how to exchange information over the Internet. Other Internet protocols like FTP, Gopher, and HTTP sit on top of TCP/IP.
telephony:
Telephony encompasses the general use of equipment to provide voice communication over distances, specifically by connecting telephones to each other. Dataprise's expert team of telecommunication consultants can design and implement a system that is feature rich, simple to use, and integrates seamlessly with your existing business applications.
telnet:
A generic term that refers to the process of opening a remote interactive login session regardless of the type of computer you're connecting to.
terminal emulation:
The act of using your desktop computer to communicate with another computer like a UNIX or IBM mainframe exactly as if you were sitting in front of a terminal directly connected to the system. Also refers to the software used for terminal emulation. Examples: the Telnet program for VT100 emulation and QWS3270 (Windows) and TN3270 (Macintosh) for IBM3270 fullscreen emulation.
TIFF:
Tag Image File Format; a popular file format for storing bit-mapped graphic images on desktop computers. The graphic can be any resolution and can be black and white, gray-scale, or color. Files of this type usually have the suffix ".tif" as part of their name.
token:
A group of bits transferred between computers on a token-ring network. Whichever computer has the token can send data to the other systems on the network which ensures only one computer can send data at a time. A token may also refer to a network security card, also known as a hard token.
toolbar:
On a graphical user interface system, a bar near the top of an application window that provides easy access to frequently used options.
Trojan horse:
A harmless-looking program designed to trick you into thinking it is something you want, but which performs harmful acts when it runs.
TrueType:
A technology for outline fonts that is built into all Windows and Macintosh operating systems. Outline fonts are scalable enabling a display device to generate a character at any size based on a geometrical description.
tweet:
An update of 140 characters or less published by a Twitter user meant to answer the question, "What are you doing?" which provides other users with information about you.
Twitter:
A service that allows users to stay connected with each other by posting updates, or "tweets," using a computer or cell phone or by viewing updates posted by other users.
twisted pair cable:
A type of cable that is typically found in telephone jacks; two wires are independently insulated and are twisted around each other. The cable is thinner and more flexible than the coaxial cable used in conjunction with 10Base-2 or 10Base-5 standards. Most Ohio State UNITS telephone jacks have three pairs of wires; one is used for the telephone and the other two can be used for 10Base-T Ethernet connections.
two-factor authentication:
An extra level of security achieved using a security token device; users have a personal identification number (PIN) that identifies them as the owner of a particular token. The token displays a number which is entered following the PIN to uniquely identify the owner to a particular network service. The number displayed by the token changes frequently, usually every few minutes.
UNIX:
A popular multitasking computer system often used as a server for electronic mail or for a web site. UNIX also is the leading operating system for workstations, although increasingly there is competition from Windows NT, which offers many of the same features while running on a PC or compatible computer.
upload:
The process of transferring one or more files from your local computer to a remote computer. The opposite action is download.
USB:
Universal Serial Bus; a connector on the back of almost any new computer that allows you to quickly and easily attach external devices such as mice, joysticks or flight yokes, printers, scanners, modems, speakers, digital cameras or webcams, or external storage devices. Current operating systems for Windows and Macintosh computers support USB, so it's simple to install the device drivers. When a new device is connected, the operating system automatically activates it and begins communicating. USB devices can be connected or disconnected at any time.
username:
A name used in conjunction with a password to gain access to a computer system or a network service.
URL:
Uniform Resource Locator; a means of identifying resources on the Internet. A full URL consists of three parts: the protocol (e.g., FTP, gopher, http, nntp, telnet); the server name and address; and the item's path. The protocol describes the type of item and is always followed by a colon (:). The server name and address identifies the computer where the information is stored and is preceded by two slashes (//). The path shows where an item is stored on the server and what the file is called; each segment of the location is preceded by a single slash (/). Example: The URL for the Dataprise home page is http://www.dataprise.com.
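Python's standard library splits a URL into exactly these parts (the path in this example is hypothetical):

```python
from urllib.parse import urlsplit

parts = urlsplit("http://www.dataprise.com/glossary/index.html")
print(parts.scheme)   # 'http'  -> the protocol, before the colon
print(parts.netloc)   # 'www.dataprise.com'  -> the server name, after the slashes
print(parts.path)     # '/glossary/index.html'  -> where the item lives on the server
```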
USB port:
An interface used for connecting a Universal Serial Bus (USB) device to a computer; these ports support plug and play.
utility:
Commonly refers to a program used for managing system resources such as disk drives, printers, and other devices; utilities sometimes are installed as memory-resident programs. Example: the suite of programs called Norton Utilities for disk copying, backups, etc.
uuencode:
A method of converting files into an ASCII format that can be transmitted over the Internet; it is a universal protocol for transferring files between different platforms like UNIX, Windows, and Macintosh and is especially popular for sending e-mail attachments.
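Base64, used by MIME attachments today, applies the same idea as uuencoding: re-express arbitrary bytes as plain ASCII so they survive text-only channels. A minimal round trip with Python's standard library:

```python
import base64

binary = bytes([0x00, 0xFF, 0x10])          # arbitrary non-text bytes
encoded = base64.b64encode(binary)          # ASCII-safe representation
print(encoded)                              # b'AP8Q'
print(base64.b64decode(encoded) == binary)  # True: the round trip is lossless
```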
VDI:
Virtual Desktop Infrastructure; a desktop-centric service that hosts users' desktop environments on remote servers and/or blade PCs, which are accessed over a network using a remote display protocol.
virtual classroom:
An online environment where students can have access to learning tools any time. Interaction between the instructor and the class participants can be via e-mail, chat, discussion group, etc.
virtualization:
Virtualization is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, a storage device or network resources. In hardware virtualization, the term host machine refers to the actual machine on which the virtualization takes place; the term guest machine, however, refers to the virtual machine. Likewise, the adjectives host and guest are used to help distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Monitor.
virtual hosting:
Virtual hosting is a method for hosting multiple domain names on a computer using a single IP address, allowing one machine to share its resources, such as memory and processor cycles, and use them more efficiently. Dataprise Virtual Hosting provides a high-performance hosting platform for your organization's online presence. Maintained by our specialist support staff and 24x7 active monitoring systems, we work hard to meet all of your hosted Web server needs.
virtual memory:
A technique that enables a certain portion of hard disk space to be used as auxiliary memory so that your computer can access larger amounts of data than its main memory can hold at one time.
virtual reality:
An artificial environment created with computer hardware and software to simulate the look and feel of a real environment. A user wears earphones, a special pair of gloves, and goggles that create a 3D display. Examples: manipulating imaginary 3D objects by "grabbing" them, taking a tour of a "virtual" building, or playing an interactive game.
A program intended to alter data on a computer in an invisible fashion, usually for mischievous or destructive purposes. Viruses are often transferred across the Internet as well as by infected diskettes and can affect almost every type of computer. Special antivirus programs are used to detect and eliminate them.
Voice over Internet Protocol; a means of using the Internet as the transmission medium for phone calls. An advantage is you do not incur any additional surcharges beyond the cost of your Internet access.
Virtual Private Networking; a means of securely accessing resources on a network by connecting to a remote access server through the Internet or other network.
Back to top W
A type of terminal emulation required when you open an interactive network connection (telnet) to a UNIX system from your desktop computer.
Wide Area Information Server; a program for finding documents on the Internet. Usually found on gopher servers to enable searching text-based documents for a particular keyword.
Wide Area Network; a group of networked computers covering a large geographical area (e.g., the Internet).
Wireless Application Protocol; a set of communication protocols for enabling wireless access to the Internet.
Wired Equivalent Privacy; a security protocol for wireless local area networks defined in the 802.11b standard. WEP was designed to provide a level of security comparable to that of a wired LAN.
Wireless Fidelity; a generic term from the Wi-Fi Alliance that refers to any type of 802.11 network (e.g., 802.11b, 802.11a, dual-band, etc.). Products approved as "Wi-Fi Certified" (a registered trademark) are certified as interoperable with each other for wireless communications.
A special character provided by an operating system or a particular program that is used to identify a group of files or directories with a similar characteristic. Useful if you want to perform the same operation simultaneously on more than one file. Example: the asterisk (*) that can be used in DOS to specify a group of files such as *.txt.
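The same idea works in a Unix-style shell; the file names below are invented for the example:

```shell
# Create three sample files, then use the * wildcard to match only the .txt ones
touch notes.txt todo.txt image.png
ls *.txt    # lists notes.txt and todo.txt, but not image.png
```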
On a graphical user interface system, a rectangular area on a display screen. Windows are particularly useful on multitasking systems which allow you to perform a number of different tasks simultaneously. Each task has its own window which you can click on to make it the current process. Contrast to a "dialog box" which is used to respond to prompts for input from an application.
A casual way of referring to the Microsoft Windows operating systems.
The ability to access the Internet without a physical network connection. Devices such as cell phones and PDAs that allow you to send and receive e-mail use a wireless Internet connection based on a protocol called WAP (Wireless Application Protocol). At this point, web sites that contain wireless Internet content are limited, but will multiply as the use of devices relying on WAP increases.
A special utility within some applications that is designed to help you perform a particular task. Example: the wizard in Microsoft Word that can guide you through creating a new document.
Wireless Local Area Network; the computers and devices that make up a wireless network.
A graphical user interface (GUI) computer with computing power somewhere between a personal computer and a minicomputer (although sometimes the distinction is rather fuzzy). Workstations are useful for development and for applications that require a moderate amount of computing power and relatively high quality graphics capabilities.
World Wide Web:
A hypertext-based system of servers on the Internet. Hypertext is data that contains one or more links to other data; a link can point to many different types of resources including text, graphics, sound, animated files, a network newsgroup, a telnet session, an FTP session, or another web server. You use a special program called a "browser" (e.g., Firefox or Internet Explorer) for viewing World Wide Web pages. Also referred to as "WWW" or "the web".
A program that makes copies of itself and can spread outside your operating system; worms can damage computer data and security in much the same way as viruses.
Wi-Fi Protected Access; a standard designed to improve on the security features of WEP.
An abbreviation for World Wide Web.
What You See Is What You Get; a kind of word processor that does formatting so that printed output looks identical to what appears on your screen.
Back to top X
A technology that enables data transmission speeds up to 56 Kbps using regular telephone service that is connected to switching stations by high-speed digital lines. This technology affects only transmissions coming into your computer, not data you send out. In addition, your ISP must have a modem at the other end that supports X2.
Extensible Hypertext Markup Language; a spinoff of the Hypertext Markup Language (HTML) used for creating Web pages. It is based on the HTML 4.0 syntax, but has been modified to follow the guidelines of XML.
Extensible Markup Language; a markup language for coding web documents that allows designers to create their own customized tags for structuring a page.
Back to top Y
Back to top Z
A zero-day (or zero-hour or day-zero) attack, threat, or virus is a computer threat that tries to exploit application vulnerabilities that are unknown to others or to the software developer, also called zero-day vulnerabilities. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.
A common file compression format for PCs and compatibles; utilities such as WinZip or WinRAR are used for compressing and decompressing files. Zipped files usually end with a ".zip" file extension. A special kind of zipped file is self-extracting and ends with an ".exe" extension. Mac OS X also supports the .zip format and has tools that can compress and decompress zip files.
A high capacity floppy disk drive from Iomega Corporation; the disks it uses are a little bit larger than a conventional diskette and are capable of holding 100 MB or 250 MB of data.
The act of enlarging a portion of an onscreen image for fine detail work; most graphics programs have this capability. | <urn:uuid:ca74617a-8fdd-4605-aa6e-51b3a0b1461c> | CC-MAIN-2017-04 | https://www.dataprise.com/glossary | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00546-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898165 | 17,402 | 2.625 | 3 |
Inserting Text Using Irfanview
The purpose of this guide is to teach you how to insert text into an image using Irfanview.
A short Flash presentation is available for viewing. I suggest you watch the presentation first, as this will give you an idea of what this Tutorial will cover.
The written tutorial will give you a bit more detail about inserting text into an image.
Inserting Text Video
- Open Irfanview, and click on File, in the Toolbar, and select Open.
- In the box that opens, navigate to the image you want to insert text into, and select it (single Left click).
Now click on the Open button.
This will open the image in the Irfanview window.
- Next, you need to draw a Selection box, where you will want the text to be located.
Place your cursor where you want the top left corner of the box to be, hold down the left mouse button, which will change the cursor to a big "+" sign, and drag diagonally, downward and to the right.
This will draw the Selection box.
More on manipulating the Selection box can be found in this tutorial, under Step 3:
Basic Cropping Using Irfanview
- Once the Selection box is the way you want it, go to the tool bar at the top, click Edit, and select Insert text into selection...
This will open the Add overlay text to image box.
- In the Text: window, type in the text desired.
- If you want the text to have a transparent background, put a check next to Text is transparent.
- If you want a colored background for your text, uncheck Text is transparent, and click the Set background color button.
This will open the Color selection box.
Make your selection, then click the OK button.
- Next, click the Choose Font button.
This will open the Font window.
Here you can choose the Font:, Font Style:, Size:, Effects, Color:, and Script: you want to use.
Just play around with them until you find something you like.
You can preview what your selections will look like by looking in the Sample box.
Once you have made your selections, click the OK button.
- The next step, is to pick the Text alignment.
This will align the text, in the Selection box.
Choose either the Left side of the box, the Center, or the Right side.
- Once you have verified that everything is the way you want it, click the OK button.
This will insert the chosen text, into the Selection box.
Click anywhere outside of the selection box, to remove the box, which will leave only the text.
- Save the image.
Edited by Grinler, 17 April 2012 - 09:57 AM. | <urn:uuid:69e2d02c-2a34-44e5-9e29-42c51692c7a2> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/forums/t/44074/inserting-text-using-irfanview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00263-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.818404 | 592 | 2.5625 | 3 |
Given the fact that most of the Internet got started in America -- and Americans dominate its use -- it's easy to forget the rest of the world is not only using the Web, but doing so in unique and creative ways. Amazon.com and AOL tend to get the lion's share of attention when it comes to how things should be done online, but names such as Bhoomi, Golaganang and e-Boks, although lesser known, are still valuable online creations that have sprung up around the globe.
Some are surprising and unusual solutions to familiar problems -- a "last mile" powered by bicycle pedals, a cell phone-based parking solution and many flavors of online education.
Here is a brief look at some innovative IT solutions happening around the world.
Access Across Archipelagos, Slums and Deserts
The Solomon Islands
People First Network is a rural e-mail network that connects this remote island nation using solar-powered computers networked over short-wave radio.
Archipelago Net uses a fiber-optic backbone with wireless LAN links to connect the thinly populated 30,000-island archipelago between Sweden and Finland. Connectivity has made the islands more attractive for year-round residents, and as one official joked, "Every fish will have an Internet address."
While Internet penetration remains low in Egypt, the government is making it easy for people to get started -- with free Internet access and nearly 400 government-subsidized IT clubs.
In a rather strange experiment called the Hole in the Wall project, computer scientist Sugata Mitra installed touchscreen computers without instructions in walls throughout the Indian slums. Poor children with only access and opportunity quickly taught themselves to use the computers, access information and play games.
In Laos, bringing the Information Age to rural farmers in Ban Phon Kam and nearby Laotian villages is a difficult task. There are no telephone lines or electricity in the area. The Jhai Foundation's answer is a rugged solid-state computer, which draws only 20 watts (70 watts when printing) and is powered by a car battery and a bicycle-type foot crank. The computer runs on a Lao-language version of Linux. A wireless LAN, based on the 802.11b Wi-Fi protocol, transmits signals between the villages to a server at the Phon Hong Hospital for switching to the Internet or Lao telephone system. Villagers can now make telephone calls with voice over IP, send e-mail and print materials, which will help villagers profit from crop surpluses and export textiles by giving them the ability to communicate with Laos' capital. Young entrepreneurs are helping to launch business development activities.
Dialing Up Train Tickets and Parking Places
In Canada's two largest cities, Montreal and Toronto, Bell Canada converted pay phone boxes into Wi-Fi hot spots for its AccessZone pilot. The phone boxes were replaced by wireless transmitters, and DSL carries both pay phone and wireless service from the same location.
In Japan, commuter rail tickets are a dial-tone away. NTT DoCoMo and the East Japan Railway Co. are developing Mobile Suica, which incorporates a rail-pass transponder in a DoCoMo cell phone. The Suica technology, scheduled to be operational by year-end, includes an integrated chip and antenna, and acts as a prepaid fare card as the commuter passes through the gate.
Parking in Ireland's capital city has gone mobile. Dublin motorists can use mPark to pay parking fees by mobile phone. The motorist stops at a parking facility and dials a number posted on the payment machine. The machine gives instructions verbally, telling the motorist to enter a four-digit number displayed on the machine. The motorist's name appears on the parking machine's screen, he or she selects the amount of parking time desired and the machine prints a ticket to display on the vehicle dashboard.
A similar mobile phone payment system for parking was also developed elsewhere. The my-T-phone pilot project allows customers to register their mobile phone number and vehicle details online. They prepay their parking fees by credit card and go online to check their account balance, parking history or change their vehicle details. To pay for parking, customers can call the number displayed at the car lot to start the virtual parking meter when they arrive, and call again to stop it when they leave. SMS is used to confirm fee payments and warn if an account has insufficient funds. Customers also have the option to prepay for parking time. The service sends a reminder 5 minutes before the prepaid time expires. Parking inspectors view a list of vehicles authorized to park in the area using a WAP phone or handheld computer.
Tooling Education with the Internet
The United Kingdom's government-backed media giant, the BBC, got permission to launch a tax-supported online digital curriculum service for schools in the country. Following complaints by educational publishers, money was also made available as "e-learning credits" to allow schools to acquire software from commercial educational publishers.
UaeMath, based in the United Arab Emirates, aims to provide free math help and integrate technology, curriculum and user needs to make students appreciate math and thus improve their socioeconomic status.
Educ.ar is Argentina's national Internet education portal aimed at democratizing education in the country by providing high-quality, interactive educational content and services. Educ.ar integrates all official academic subjects in all levels of the Argentine educational system.
Also in the United Kingdom, the BBC reported that the Venerable Bede Church of England Aided School, scheduled to open in September, will use retinal scanners to identify children in the school lunchroom and library. Administrators of the 900-pupil "school of the future" maintain that the scanning technology will be safe and less costly than swipe cards or other identification systems. However, the technology is questioned by some who don't see the need for it.
Four Out of Five Continents Use E-Government
To ramp up government use of IT, South Africa's Golaganang project will provide computers, software and Internet connectivity to 50,000 government employees and their families. The government hopes the project will stimulate a culture of digital learning and propagate an information-driven economy -- something the South African government places high on its policy agenda. The package will cost employees from $10 to $40 per month, and subsidies are available based on a sliding scale.
Bhoomi, a major document computerization project, delivers 20 million land records to 6.7 million landowners through 177 government-owned computer kiosks in the Indian state of Karnataka. The project has reduced red tape and corruption in access to land title records.
On Oct. 6, 2002, Brazil conducted one of the first totally "informatized" elections in the world. Brazil's 115 million voters -- who are required by law to vote -- all used electronic voting machines witnessed by representatives from 37 countries and three international agencies. Electronic voting machines powered by car batteries were carried in canoes up the Amazon to remote villages. The operating panels, with numerical keys from zero to nine, display a photograph of the candidate once their number is keyed in. Once voting is completed, the machine plays a tune to let the voter know the job is done. The system covers state and federal representatives, senators, governors, and presidential candidates.
In a recent election, citizens of Anieres -- a suburb in Geneva, Switzerland -- cast their ballots in person, by mail or on the Internet. The online voting used several methods of security and marked the first binding online vote in Switzerland. Online voting is scheduled in Zurich and Neuchatel, and if successful, could spread to national polls.
E-Boks is a secure and free electronic archive for citizens of Denmark. Documents, from both public authorities and private enterprises, as well as the citizens' private documents, can be transmitted and stored electronically in a secure, remote location, which is accessed via the Internet. Denmark's 2.4 million households each receive an average of 230 administrative letters annually by mail. Replacing this physical mail with e-mail is expected to save the senders approximately 1.6 billion Danish kroner annually (approximately $220 million). The costs associated with establishing and running the e-Boks service are covered by charges to senders, who pay a fee to join and a transmission fee equivalent to about 25 percent of the current costs of mailing a physical letter.
is the editor of Government Technology International | <urn:uuid:9ffbd266-866a-42e3-a8f7-006fc194cd8d> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/pcio/Not-Invented-Here.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00171-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940559 | 1,727 | 2.578125 | 3 |
About Policy-Based Routing
Policy-Based Routing (PBR) gives you a simple way of controlling where packets will be forwarded before they enter the router's destination-based routing process.
It is a technology that gives you more control over network traffic flow, because you will not always want to send certain packets by the obvious shortest path; choosing that path is the routing protocol's job. If you want to send some traffic to a destination over a different path, you need a method that catches packets as soon as they enter the router and decides where to send them before they enter the destination-based routing process. That is what Policy-Based Routing is all about.
From the text above you can easily determine that you specify PBR on the interface that receives the packet, not on the interface from which the packet is sent. The second option would be too late.
Another thing that can be deduced from the above is that PBR, applied through a route map, takes precedence over static routes and even over directly connected networks. That is because the forwarding decision is made before the packet ever reaches the router's normal routing logic; matching packets skip the routing table and go out using the interface or next hop configured in the route map.
PBR also allows you to set IP precedence and to configure different paths for particular traffic. You can send low-priority traffic to an ordinary DSL Internet link and priority traffic over a high-cost link, or transfer corporate data over one fast link while sending routine, unimportant data over another, slower link.
How does it work?
PBR works with:
- a policy, which is defined by a route map
- a route map, which is composed of statements
PBR is configured using policies that will deny or allow specific paths by reading the identity of a particular destination system, protocol, or even the size of packets sent. Matching is mostly done using extended access lists.
Packets received on an interface with PBR enabled are filtered by route maps. The route map is the means of creating the policy; in other words, using the route map you build the PBR policy that determines where packets will be forwarded.
Route maps are composed of statements that can be permit or deny:
- Be careful: if there are no match criteria in the route map, the route map will be applied to all packets.
- If a statement is marked as permit, packets meeting the match criteria will be processed by that route map.
- If a statement is marked as deny, packets meeting the match criteria are not processed by that route map. It is basically a way to use the match criteria to say what will not be processed by that route map.
I suggest you use permit statements, as they are more logical to implement. A deny statement is really only useful when you need to catch most packets except a few of them; a permit statement catches and processes only specific packets, for example a small part of the whole address space.
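To make this concrete, here is a sketch of what such a policy might look like on a Cisco IOS router. The ACL number, subnet, next-hop address, and interface name are invented for illustration:

```
! Catch web traffic from the 10.1.1.0/24 subnet with an extended access list
access-list 101 permit tcp 10.1.1.0 0.0.0.255 any eq 80

! Route-map statement 10: packets matching ACL 101 get an alternate next hop
route-map LOW-PRIORITY permit 10
 match ip address 101
 set ip next-hop 192.0.2.1

! Apply the policy on the interface that RECEIVES the packets
interface GigabitEthernet0/1
 ip policy route-map LOW-PRIORITY
```

Packets that do not match any permit statement simply fall through to normal destination-based routing.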
We will cover some basics of forwarding UDP broadcast traffic here. If you were wondering what forwarding UDP broadcast traffic actually is, I will try to explain it in a few words.
Suppose you have more than one broadcast domain in your local network, say three VLANs. Normally, a broadcast initiated by a host inside one VLAN will reach all hosts inside that VLAN but will not cross into other VLANs. Typically the broadcast domain border is a router or a Layer 3 switch's VLAN interface. Although this is the desired behavior for most broadcast traffic, there needs to be a way to forward some kinds of broadcast traffic across that border. Why? Here's a simple example. If you use DHCP, and you almost certainly do, you will probably have hosts in different VLANs, and all of them need to get an IP address from DHCP. If forwarding of UDP broadcast traffic didn't exist, you would need one DHCP server on every VLAN. Remember that DHCP uses broadcast traffic in some of its steps.
Simple DHCP address leasing:
A host that connects to the network will first send a broadcast DHCP Discover message in order to find where the server is, or whether a server actually exists. After the DHCP server replies with a unicast DHCP Offer, the host will once again use broadcast to send a DHCP Request to the server. The server will then acknowledge the lease with a unicast DHCP ACK message, and that's it.
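On a Cisco router or Layer 3 switch, this kind of forwarding is typically enabled with the ip helper-address command on the interface that receives the broadcasts. The addresses below are invented for illustration:

```
! Gateway interface for VLAN 10: relay client broadcasts (e.g. DHCP
! Discover and DHCP Request) as unicasts to the DHCP server at 10.0.0.5
interface Vlan10
 ip address 10.10.10.1 255.255.255.0
 ip helper-address 10.0.0.5
```

By default the helper address relays a handful of well-known UDP services, DHCP among them; the ip forward-protocol udp command adjusts which ports are forwarded.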
Zonal isolation is done to prevent cross flow of fluids between two or more geological layers. It is also done to reduce the water cut in production: as a reservoir matures, there is a high probability of increased water cut from the pay zone. Zonal isolation decreases this loss from water being produced instead of hydrocarbon, and it is primarily performed in wells that have already been producing for a long time.
Zonal isolation is defined as the exclusion of fluids such as water or gas in one zone from mixing with oil in another zone. Successful isolation involves the creation of a hydraulic barrier between the casing and the cement and between the cement and the formation.
Zonal isolation prevents flow of fluids to the surface or between distinct permeable zones; it provides the integrity to operate the well safely and permits safe abandonment of the well. It prevents fluid loss in multi-zone wells and thereby enhances safety, protects against formation damage, reduces the need for costly work-overs, and protects reservoir integrity and productivity.
It can be used for applications such as,
- Open hole fracturing
- Unconventional fracturing
- Cement replacement and protection
- Annular barrier for inflow control devices
- Side track window sealing
- Annular barrier for sand screens
The report analyzes the zonal isolation market by application and geography. Market share analysis, by revenue, of the top companies is also included in the report. The market share analysis of these key players is based on key facts, annual financial information, and interviews with key opinion leaders such as CEOs, directors, and marketing executives.
A group of researchers from University of California, Berkeley, claims to have achieved 99 percent accuracy when using brainwave signals instead of passwords for user authentication.
The timing was right, they say, because while EEG data was in the past captured with invasive probes, this data can now be collected using “consumer-grade non-invasive dry-contact sensors built into audio headsets and other consumer electronics.”
“We briefed subjects on the objective of the study, fitted them with a Neurosky MindSet headset, and provided instructions for completing each of seven tasks. As the subjects performed each task we monitored and recorded their brainwave signals,” the researchers explained in their report.
The tasks that the fifteen subjects were instructed to do were to focus on breathing, imagine moving a finger up and down in sync with breathing, imagine that they are singing a song, count (in their mind) the number of boxes in a grid that were of a specific color, imagine moving their body to perform a motion related to a sport, choose and think about a pass-thought (a concrete mental thought), and so on.
After repeating the seven tasks five times per session, the researchers had recorded 1050 brainwave data samples after only two sessions. The data was then repeatedly compressed in order to end up with a “one-dimensional column vector with one entry for each measured frequency” against which later authentication attempts would be compared.
The testing led them to conclude that using brainwaves for authentication is both feasible and extremely accurate, but that tracing a brainwave signal back to a specific person would be much too difficult.
By asking questions about the enjoyability of the specific tasks and by taking stock of the difficulties that the subjects had remembering some of the things they chose to think about during the tests, the researchers also discovered that users tend to better remember secrets that they come up with themselves (song, sport, pass-thought) instead of secrets they are forced to select from a menu.
“In comparing the results of the usability analysis with the results of the authentication testing, we observe that there is no need to sacrifice usability for accuracy. It is possible to achieve accurate authentication with easy and enjoyable tasks,” they pointed out.
Still, there are many questions still to be answered: can an attacker fool the authentication system by performing the same customized task the user has chosen for himself, is the solution scalable, and so on, but they believe that there could be a future for using EEG signals for all kinds of things in a number of industries, including computing. | <urn:uuid:d3f8f2fa-6d18-4b5a-b717-29d7a342a750> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/04/16/pass-thoughts-as-a-solution-to-the-password-problem/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00199-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970491 | 523 | 3.0625 | 3 |
Data continues to grow at an alarming rate across organizations of all sizes and it’s not just new data that’s growing; nothing is thrown away. Growth in data is also fueled by the popularity of portal, search, media, and e-commerce sites, such as Amazon, Yahoo, eBay, and the exponential growth in social networking sites, such as Facebook and Twitter. In the enterprise, data is growing as companies make better use of business analytics and as traditional high-performance computing (HPC) runs an increasing number of data-intensive complex simulations. Not only is data growing, but the way it is accessed is also changing. Data analysis, for example, has transitioned from a traditional, batch mode, reporting-style to an ad-hoc, on-demand real-time access model. The former is well supported by sequential scans over large volumes of data but the latter requires random I/O, which is difficult to support with existing storage infrastructures.
The size of data stores and content on major Web sites is estimated to be quadrupling every 18 months and the number of queries per terabyte is doubling every 18 months. Servers have grown significantly in capabilities and performance to meet the computing needs. Storage, however, has kept up with the demand for the volumes of data but not with speed of access to this data.
The growth in servers, storage, and networking devices in the datacenter continues to push the envelope in space, power, cooling, and management. Storage performance of hard drives has not kept pace with server performance or networking bandwidths. The result is an imbalance of computing resources resulting in I/O bottlenecks, under-utilized servers and over-provisioning of storage leading to excessive costs and storage sprawl.
The I/O bottleneck is not a new phenomenon; it did not happen overnight and has been a problem for decades. In the first half of the 1980’s, Cray Research introduced an optional Solid State Storage Device (SSD) for the Cray X-MP. There were several reasons for this move, one of which, and perhaps the most important, was the ability to stage critical application data/files, a form of tiering, to reduce the CPU wait time. The result was better CPU utilization and significant improvement in time-to-solution for performance-starved applications in the petroleum, aerospace, automotive, chemistry, and nuclear codes. Connected to the Cray X-MP/4 through two high-speed channels, the SSD, in conjunction with the X-MP/4, enabled users to exploit existing applications and develop new algorithms to solve larger and more sophisticated science and engineering problems. System performance was significantly enhanced using this SSD device by eliminating the CPU wait time, resulting in better CPU utilization, better performance, and better-cost performance.
The SSDs for the Cray X-MP were custom built and addressed the problem of providing high I/O bandwidth for long vectors, which do not cache well in the processor caches. These early SSDs therefore remained exclusive and expensive. However, with the industry acceptance and growth of standards-based architecture, specifically x86, the computing paradigm began to shift. The general-purpose nature of x86-based architectures, with their hierarchical caches, is easily served by an I/O subsystem based on standard hard-drive technology. Today, flash technology is maturing and SSDs are on the rise. In addition, workloads are shifting to an ad-hoc, on-demand access model, driving the need for random I/O, which puts electro-mechanical hard drives under severe strain but is effectively serviced by SSDs.
During this same period, several system integrators worked with solid-state storage devices and developed application specific solutions, for example seismic processing, to boost overall system throughput by eliminating the I/O bottleneck. As the decade closed, the market began to evolve with products becoming available for PCs to large-scale UNIX servers.
The last two decades have seen tremendous changes in solid-state technology, which has been fueled by the huge growth in mobile consumer devices such as MP3 players, digital cameras, media devices, and mobile phones, all the way to multi-terabyte SSD. Industry analyst firms IDC and Gartner are watching and predicting growth in the SSD technology segment and in particular, SSD growth for enterprise-class computing.
What is Driving the SSD Growth?
There are several drivers behind the growth and adoption of SSD technology:
- Poor performance of HDD technologies creating performance gap.
- Flash technology evolution, specifically NAND flash as capacity increases and price declines, improving the economics versus HDD.
- Reliability of flash, growing use of single-level cell (SLC) flash for enterprise grade reliability.
- Aggressive availability of multi-core processor technology.
- Growing awareness of the power environment and constraints in the datacenter as a result if storage sprawl.
- New data-intensive workloads access data randomly.
Today there is a growing performance gap between the microprocessor and HDD storage devices. Over the past two decades, processing performance and networking performance have significantly increased compared to HDD performance. This has created a gap between processing and network and the I/O available through HDDs. To help compensate for this, IT managers typically add more external HDD devices and DRAM to help speed up throughput. Increasing DRAM enables systems to store working sets in memory to avoid disk latency, and adding additional HDDs can increase throughput by enabling I/O operations to be performed in parallel e.g. by striping across RAIDed HDDs. This helps to bridge the performance gap but creates an expensive and difficult to manage environment together with the increased power, rack space requirement and higher TCO.
Enterprise servers, running applications in the datacenter range from Web 2.0 to HPC to business analytics, can generate hundreds of thousands of random I/O operations per second (IOPS). In these environments, the HDDs available today can only perform thousands of IOPS combined. HDDs are great for capacity and large blocks of sequential data but are not very good at delivering small pieces of random data at a high IOPS rate. The physical characteristics and power envelope of the HDD make it an expensive option for increasing application throughput. Consequently, the CPUs are under-utilized as they wait for data.
Solid-state storage devices based on flash memory are poised to disrupt the industry. Solid-state storage delivers a performance boost compared to HDD, closing the I/O gap between microprocessor and storage; flash brings Moore’s Law to storage versus Newton’s Law. Moore’s Law describes the historical trend of computing hardware as doubling the number of transistors every two years. The capabilities of many digital electronic devices are linked to Moore’s law; processing speed doubling every two years or the number and size of pixels in a digital camera. Unlike HDD, solid-state storage will track Moore’s Law. The CPU no longer waits for data resulting in improved time-to-solution for performance-starved applications. Users will experience not only better time-to-solution but also reduced rack space, less power and increased server utilization all leading to improved TCO. Furthermore, system reliability is improved, as the solid-state storage has no moving parts. By incorporating flash technology as a new storage hierarchy will dramatically reduce the CPU-to-storage bottleneck. This new storage hierarchy smoothens the performance disparity in today’s existing hierarchy, e.g., DRAM, Disk, and Tape.
There is no shortage of SSD solutions; most today are based on disk format interfaced with SATA, SAS or Fiber Channel (FC). Today HDD storage is connected as “direct-attached storage” or “network-attached storage. No matter the connection, “direct-attached storage” is closest to the CPU and delivers both price ($/GB) and performance advantages; the same applies to SSD. SSDs directly connected can deliver 10x the performance of network-attached drives. SSDs based on the PCIe interface deliver the highest performance and lowest latency of all SSD interfaces. PCIe-based SSDs boost performance by 10x or more compared to SAS or FC-based SSDs. In other words, 100x the performance of network-attached storage is possible. Such high-end performance creates the opportunity for a new “Tier-0” in the storage hierarchy, delivering high-bandwidth and low latency to accelerate high-performance workloads. The Tier-0 is an optimized storage tier specifically for high performance workloads that benefit from using flash memory and PCIe interconnect. The future of PCIe attached SSD in the enterprise is predicted to be the strongest growth within this technology market segment.
Developed in the 1980’s, flash technology is low-cost, non-volatile computer memory that can be erased and reprogrammed electronically. Most people are familiar with some form of commercialized flash device as it is now commonly used in cameras, music players, and cell phones. Advances in technology are now making it a strong storage device for the enterprise that can help fill the performance gap. A growing number of enterprises are using or evaluating flash SSD for “Tier-0” data storage for a couple of reasons — bandwidth and latency access to data, IOPS per watt, IOPs per dollar.
The use of NAND flash technology in SSDs is commonplace with more than 100 vendors offering SSD products. Beware, as not all NAND flash is created equal. NAND flash is available in two technologies: single-level cell (SLC) and multi-level cell (MLC). MLC stores 2-bits or more in a single cell compared to 1-bit per cell for SLC. MLC is higher density and lower cost than SLC. MLC is common in consumer devices such as MP3 players, cameras, mobile phones, and USB thumb drives. SLC, on the other hand, is faster and more reliable making it ideal for enterprise datacenters.
What about Reliability?
Matching the performance and capacity to the user requirement and the right solution is key. Nevertheless, there is more to it than performance, capacity, and price. A key difference between SLC compared to MLC is the higher write cycle durability of SLC. SLC is 100K writes per cell versus MLC writes are limited to 5-10K per cell. For enterprise-class applications, this is significant difference and advantage. SLC will deliver 10x better reliability and lifetime use at lower cost of ownership. Enterprise environments demand a 24×7 environment with large IOPS throughput. An MLC based solution for true enterprise computing would need to replace every few months to keep up with the demanding IOPS and 24×7 reliability, thus increasing the cost of ownership. SLC is right answer for enterprise performance and reliability.
Impact of Microprocessor Architectures
Without a change to the storage architecture and technology, the server I/O performance gap will continue to widen. With the availability of multi-core x86 processors from Intel and AMD, the gap widens even further. These advanced microprocessors deliver high clock rates and more cores each year. Intel’s Nehalem EX, today, has four cores per processor and with a four-socket server, makes a very potent high performance platform. Intel’s next generation processor microarchitecture, Sandy Bridge, will be available sometime in 2011. Sandy Bridge will be built on Intel’s 32-nanometer technology and will no doubt have more cores; more performance, higher speed, and PCIe interconnect slots, thus making this an ideal platform for Tier-0 storage based on advanced PCIe form factor delivering high bandwidth, low latency, and high reliability resulting in a high performance throughput Tier-0 storage.
There are numerous providers of SAS or SATA SSD technology; most storage vendors have an SSD offering. Essentially this is a replacement for HDD drives and will, in most cases, deliver increased performance, smaller footprint, higher transfer rates and improved IOPs. SSD go only so far in solving the I/O bottleneck problem as they are connected to the server via slow interconnects. On the other hand, PCIe SSD devices deliver the highest I/O performance possible. The PCIe world includes vendors such as Fusion-IO, Texas Memory and emerging companies such as Virident Systems.
SSDs are Not Created Equal
While it is true that SSDs deliver much higher performance than HDDs, not all SSDs deliver the same level of IOPS and with any degree of predictability. The current lot of SSDs shows high performance in the early stages of use but it deteriorates depending on the workload (e.g., concurrent reads and writes, large number of I/O requests) and how filled the drive is. The software drivers of these SSDs are the key here as they manage the flash, specifically Wear Leveling and Garbage Collection.
Wear leveling ensures that writes to flash are spread out over all the cells available. This is required due to the limited number of write cycles of flash, 100K for SLC and 5 – 10K for MLC. Garbage Collection, on the other hand, deals with an inherent property of flash, which requires writes to be preceded by erasure of a large block of the flash. Flash does not support in-place writes like memory. SSD drivers therefore have to juggle flash blocks behind the scenes to fulfill I/O requests from applications while collecting flash blocks marked for erasure. This is done by reserving or over-provisioning flash for garbage collection e.g., a 100GB SSD may actually use 150GB of “raw” flash capacity, giving the driver 50GB of scratch capacity to manage garbage collection. These characteristics of flash entail a “flash translation layer” (FTL), which presents a standard block device view of the device to the application while moving physical blocks around for wear leveling and garbage collection.
The measure of goodness of an SSD then becomes how its driver manages flash while delivering steady, predictable IOPS with a minimal reserve of flash and using as little of system resources (CPU cycles, system memory) as possible.
Enterprise Solid-State Devices – Tier-0
Solid-state devices based on Flash and PCIe are emerging as a new class of enterprise storage option – Tier-0. Tier-0 is an optimized storage tier specifically for high performance workloads, which can benefit the most from using flash memory. By implementing a Tier-0 solution, specific data sets can be moved to higher performance, flash memory based storage platforms, resulting in dramatic improvements in application throughput. Access to data in Tier-0 is at near memory speeds and is focused on making applications run faster and more predictably. Fast read and write performance of NAND flash, ever reducing price points, very low power consumption, and increasing level of reliability are foundations for this disruptive solution to the performance-starved workloads of the datacenter.
Applications that are running on current multicore, multisocket servers will no longer be starved for performance by slow HDD storage subsystems. PCIe-based SSD Tier-0 will rebalance the servers and storage creating an optimized solution to solve the I/O bottleneck experienced today.
This Tier-0 storage will provide users with several capabilities including sustained predictable performance for life time of the product, optimized for enterprise-class reliability so that data is never lost, and field upgradeability that does not need replacement of PCIe cards. Finally, a Tier-0 solution needs to be affordable. While flash memory is more expensive per gigabyte than HDD, flash memory costs are decreasing significantly year over year. As electricity costs continue to increase and flash prices decrease, the relative cost per gigabyte and cost for IOPS of flash is continually improving. Flash out-performs hard drives by at least and order of magnitude resulting in the cost per gigabyte of Tier-0 flash being extremely attractive.
In conclusion, all the pieces are in place for a true enterprise-class SSD-based Tier-0 storage hierarchy:
- High performance multicore, multisocket servers.
- PCIe form factor delivering the highest bandwidth possible and lowest latency.
- SLC NAND flash for high performance, sustained life-time performance.
- SLC NAND flash delivering true enterprise-class five 9’s reliability and field serviceability.
- Sophisticated and transparent software for garbage collection and Wear leveling.
It is also important to point out that superior technology, by itself, does not guarantee success. The winners of flash-based solutions as broad Tier-0 storage will be those vendors who can provide all the performance benefits of enterprise-class flash, while plugging into existing storage infrastructures and usage models to deliver the same reliability and manageability that users have come to expect from established enterprise storage solutions. | <urn:uuid:845ad783-fb45-41a2-a96e-26ddd8cfa05e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/05/28/back_to_the_future_solid-state_storage_in_cloud_computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00437-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928102 | 3,486 | 2.515625 | 3 |
5.3.2 What are the ITU-T (CCITT) Standards?
The International Telecommunications Union, ITU-T (formerly known as CCITT), is a multinational union that provides standards for telecommunication equipment and systems. ITU-T is responsible for standardization of elements such as the X.500 directory [CCI88b], X.509 certificates and Distinguished Names. Distinguished names are the standard form of naming. A distinguished name is comprised of one or more relative distinguished names, and each relative distinguished name is comprised of one or more attribute-value assertions. Each attribute-value assertion consists of an attribute identifier and its corresponding value information, for example, ``CountryName = US.''
Distinguished names were intended to identify entities in the X.500 directory tree. A relative distinguished name is the path from one node to a subordinate node. The entire distinguished name traverses a path from the root of the tree to an end node that represents a particular entity. A goal of the directory was to provide an infrastructure to uniquely name every communications entity everywhere (hence the ``distinguished'' in ``distinguished name''). As a result of the directory's goals, names in X.509 certificates are perhaps more complex than one might like (for example, compared to an e-mail address). Nevertheless, for business applications, distinguished names are worth the complexity, as they are closely coupled with legal name registration procedures; this is something simple names, such as e-mail addresses, do not offer.
ITU-T Recommendation X.400 [CCI88a], also known as the Message Handling System (MHS), is one of the two standard e-mail architectures used for providing e-mail services and interconnecting proprietary e-mail systems. The other is the Simple Mail Transfer Protocol (SMTP) used by the Internet. MHS allows e-mail and other store-and-forward message transferring such as Electronic business Data Interchange (EDI) and voice messaging. The MHS and Internet mail protocols are different but based on similar underlying architectural models. The noteworthy fact of MHS is that it has supported secure messaging since 1988 (though it has not been widely deployed in practice). The MHS message structure is similar to the MIME (see Question 5.1.1) message structure; it has both a header and a body. The body can be broken up into multiple parts, with each part being encoded differently. For example, one part of the body may be text, the next part a picture, and a third part encrypted information.
ITU-T Recommendation X.435 [CCI91] and its equivalent F.435 are X.400-based and designed to support EDI messaging. EDI needs more stringent security than typical e-mail because of its business nature: not only does an EDI message need protection against fraudulent or accidental modification in transit, but it also needs to be immune to repudiation after it has been sent and received.
In support of these security requirements, X.435 defines, in addition to normal EDI messages, a set of EDI ``notifications.'' Positive notification implies the recipient has received the document and accepts the responsibility for it, while negative notification means the recipient refused to accept the document due to a specified reason. For- warding notification means the document had been forwarded to another recipient. Together, these notifications form the basis for a system that can provide security controls comparable to those in the paper-based system that EDI replaces.
ITU-T Recommendation X.509 [CCI88c] specifies the authentication service for X.500 directories, as well as the widely adopted X.509 certificate syntax. The initial version of X.509 was published in 1988, version 2 was published in 1993, and version 3 was proposed in 1994 and published in 1995. Version 3 addresses some of the security concerns and limited flexibility that were issues in versions 1 and 2. Directory authentication in X.509 can be carried out using either secret-key techniques or public-key techniques. The latter is based on public-key certificates. The standard does not specify a particular cryptographic algorithm, although an informative annex of the standard describes the RSA algorithm (see Section 3.1). | <urn:uuid:f6f79dff-27ab-4a57-88be-ed2f9002c033> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/itu-t-standards.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00217-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936704 | 871 | 2.953125 | 3 |
In April 2016, the European Parliament and the European Council adopted the General Data Protection Regulation, also known as GDPR. It is intended to strengthen and unify the data protection for individuals inside the European Union. The regulation will come into effect in May 2018 and organizations across Europe are working hard to ensure their security policies comply with the new legislation. To facilitate that process, we will zoom in on the significance of the GDPR for the security of mobile applications.
The GDPR contains two articles that are relevant for mobile application protection.
- Article 25 introduces the principle of data protection by design. It obligates data controllers and processors to consider privacy during the entire development cycle of new systems or processes that use personal data.
- Article 32 stipulates that data controllers and processors need to implement sufficient technical and organizational measures to ensure the integrity of processing systems and processes. These measures should counter the risks associated with data processing, like accidental or unlawful destruction, loss, modification and unauthorized disclosure of or access to transmitted or stored personal data.
The organizations concerned have to be able to show that the security measures mentioned in article 25 and 32 are in place and that compliance with the GDPR is monitored. The failure to adhere to either of these articles can result in fines of up to 2% of the annual worldwide turnover or EU10 million (article 83).
Since mobile applications have become an integral part of data processing systems, it is important to know which measures can be taken to ensure the confidentiality of the processed data in the context of the GDPR. The most important vulnerability of mobile applications is that they can be reverse engineered in no time. This enables hackers to gain insight in the structure of the application, to extract information (encryption keys, API keys, etc.) that can be used to access private data and to tamper with the application to harvest user credentials. To counter reverse engineering and secure the users' data, the applications must be protected using a double approach.
- The source code of mobile applications should be hardened using multiple obfuscation and encryption techniques. Code hardening ensures that the source code of mobile applications remains illegible to hackers that succeed in decompiling or disassembling them.
- Runtime application self-protection (or RASP) mechanisms need to be integrated in mobile applications. These mechanisms protect applications from dynamic analysis and live attacks by monitoring their integrity and the integrity of the device on which they are running.
Protecting mobile applications is a crucial aspect of developing secure data processing systems. In addition, measures have to be taken to ensure the confidentiality of the data itself.
- SSL pinning makes sure mobile applications are communicating with the intended server and protects data in transit from being intercepted by a man-in-the-middle attack.
- Whitebox cryptography is a recommended solution for mobile applications that contain a data encryption key. The technology makes sure the key cannot be lifted from the application and used to decrypt stored or transmitted data. | <urn:uuid:1522977b-a17d-4ec5-b5fb-fc11783a53a8> | CC-MAIN-2017-04 | https://www.guardsquare.com/en/blog/gdpr-and-mobile-application-protection | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00033-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920411 | 597 | 2.9375 | 3 |
Welcome to another post in a series of articles written by the Citrix Labs R&D staff on the topic of the Internet of Things (IoT). If you haven’t discovered the previous posts, you can catch up here and read about how we defined the role of IoT in the Citrix software defined workplace, identified potential security challenges, described a simple IoT framework, and focused on securing the device layer.
In this installment, we move up the IoT stack and examine the gateway layer.
As you can see in the simplified IoT model illustrated below, the gateway layer serves the important role of connectivity and messaging between things, people, and cloud services. In most cases today, the primary function of the IoT gateway is protocol translation from low power sensor networks to the Internet or LAN.
However, all the big names in the industry predict massive numbers of IoT devices arriving in the enterprise over the next 5 years.
So in addition to simple protocol translation, intelligent IoT gateways are required in the enterprise to handle the sheer volume of IoT devices and the messages communicated between them. The gateways will also provide local processing of automation rules closer to the network edge, device management functionality, and enforcement of network access control policies. Just by connecting sensors and IoT devices to your network, you also enable the potential of merging information about the physical world (from sensors) and the ability to interact with the world (through connected devices) with the critical enterprise applications on which you run your business.
For example, when fully realized, the Internet of Everything will enable any business to integrate real-time tracking of its vehicle fleet with its dispatch and logistics software, or to use automated warehouse inventory tracking with its asset management solution. Of course, all this functionality must be performed securely, and this article delves into some important security requirements of the IoT gateway.
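The "local processing of automation rules closer to the network edge" mentioned above can be sketched with a few lines of code. The rule names, topics, and threshold below are purely illustrative, not taken from any real gateway product:

```python
# Minimal sketch of edge-local rule processing on an IoT gateway.
# Topic names and thresholds are hypothetical examples.

class EdgeRuleEngine:
    """Evaluates automation rules locally, without a round trip to the cloud."""

    def __init__(self):
        self.rules = []          # (predicate, action) pairs
        self.actions_fired = []  # record of actions the gateway triggered

    def add_rule(self, predicate, action):
        self.rules.append((predicate, action))

    def on_reading(self, topic, value):
        """Called for each sensor message arriving at the gateway."""
        for predicate, action in self.rules:
            if predicate(topic, value):
                self.actions_fired.append(action(topic, value))

engine = EdgeRuleEngine()
# If a warehouse temperature sensor exceeds 30 C, trigger the fan locally.
engine.add_rule(
    predicate=lambda topic, value: topic == "warehouse/temp" and value > 30,
    action=lambda topic, value: f"fan_on (temp={value})",
)

engine.on_reading("warehouse/temp", 22)  # below threshold: no action
engine.on_reading("warehouse/temp", 34)  # above threshold: fan turns on
print(engine.actions_fired)              # ['fan_on (temp=34)']
```

Evaluating rules at the gateway keeps latency low and lets automation keep working even when the uplink to the cloud is down.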
Gateway Layer Security
Communications in the IoT usually take place over a combination of private and public networks, so securing the network protocols is obviously important and the first thing you should consider. When thinking about security for this layer, recall the basic CIA security triad. Communications between the things, the gateway, and the cloud service must be cryptographically secured to preserve confidentiality, integrity, and authenticity. Securing network communications in this way, with technology like AES cipher suites and TLS/SSL encryption, is probably the best-understood area of IoT security, since we've been doing it for years for applications like e-commerce over the Internet.
Even with mature technology for securing network communications, the Internet of Things is different from the Internet of servers and PCs, and so poses some unique security challenges. Many of these challenges relate to the fact that IoT devices have limited computing power and no graphical user interface for easy configuration.
As stated above, a primary role of the gateway layer is bridging low power sensor networks, usually from a low power radio standard like ZigBee or Z-Wave, to Wi-Fi or Ethernet. This is a critical function since we expect many devices, from many manufacturers, speaking several different protocols to appear in the enterprise. The IoT gateway serves as the common point of communications and control amongst the myriad of devices.
One challenge for the IoT gateway is maintaining confidentiality while performing this protocol translation. Although both the ZigBee and Z-Wave protocols support encryption, the gateway must generally decrypt and re-encrypt the payload when translating from one network protocol to another. This protocol translation makes it more difficult to maintain confidentiality because it is not true end-to-end encryption; communication between intermediaries is protected, but not the intermediary itself. If an attacker were to compromise the IoT gateway, not only is the data passing through the gateway at risk, but so is control of the physical things connected to it.
For example, research conducted by Veracode on 6 common home gateways found:
“…widespread problems securing communications between connected devices and the vendor’s management servers in the cloud. Five of the six devices tested were vulnerable to so-called “man in the middle” attacks that would allow an attacker on the same network as a device to intercept, modify and forward traffic between the device and its cloud based service. Many of the devices tested failed to properly validate the TLS or SSL certificate used to encrypt that traffic, Creighton said.”
One way to mitigate this problem is to implement true end-to-end, application layer security. Using this strategy, messages are encrypted in a way that allows only the unique recipient of the message to decrypt it, and not anyone in-between. In other words, only the IoT device and receiving cloud service hold the cryptographic keys and the gateway acts as an illiterate messenger, passing along messages that it can’t decipher. The gateway is still performing its protocol translation duties, it just can’t read the messages as it routes them from one network to another.
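The "illiterate messenger" idea can be illustrated with a short sketch. The cipher below is a toy SHA-256 keystream used only to keep the example self-contained; a real deployment would use an authenticated cipher such as AES-GCM, and the key, nonce, and message formats here are hypothetical:

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode keystream, for illustration only --
    a real deployment would use an AEAD cipher such as AES-GCM."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Only the sensor and the cloud service hold this key; the gateway never sees it.
DEVICE_CLOUD_KEY = b"shared-only-by-device-and-cloud"
NONCE = b"msg-0001"

# The device encrypts its reading before handing it to the gateway.
plaintext = b'{"temp": 21.5}'
ciphertext = keystream_xor(DEVICE_CLOUD_KEY, NONCE, plaintext)

def gateway_translate(frame: bytes) -> dict:
    """Protocol translation: wrap the opaque payload in an IP-friendly
    envelope. The gateway routes the bytes but cannot read them."""
    return {"dst": "cloud", "payload": frame.hex()}

envelope = gateway_translate(ciphertext)

# The cloud service unwraps the envelope and decrypts with the shared key.
recovered = keystream_xor(DEVICE_CLOUD_KEY, NONCE,
                          bytes.fromhex(envelope["payload"]))
print(recovered)   # b'{"temp": 21.5}'
```

Note that the gateway still does useful work here, re-framing the message for a different network, but a compromise of the gateway exposes only opaque ciphertext.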
Secure onboarding is the process of configuring an IoT device for the first time and enrolling it for management. In the Resurrecting Duckling security model for IoT devices, this is the process of imprinting the duckling (device) with a cryptographic key generated by the mother duck (IoT management service). The IoT gateway plays an important role in the onboarding process because it is the intermediary between the IoT device and managing service. When installing a device for the first time, all communications and encryption keys pass through the gateway and must be protected from eavesdropping and man-in-the-middle attacks.
We know that encryption effectively addresses IoT communications confidentiality, however the weak link in the encryption security chain is often around key management practices and how keys are exchanged during the device onboarding process. A research paper titled Security Evaluation of Z-Wave Wireless Protocol illustrates how, even though the underlying Z-Wave protocol features strong 128-bit AES encryption, improper implementation by device manufactures can lead to serious vulnerabilities.
“Using this tool, we have demonstrated an implementation vulnerability in Z-Wave key exchange protocol that could be exploited to take full control of a target Z-Wave door lock by only knowing the Home and node IDs of the target device, both of which can be identified by observing the Z-Wave network traffic over a short period of time.”
The good news from this particular report is,
“We have communicated the details of this vulnerability to the vendor who has conducted a security review of Z-Wave specification and SDK to ensure that they cover correct handling of the discovered vulnerability. Finally, Sigma Designs has taken action to prevent such implementation flaws to reach the market in the future by adding additional security test cases to the certification test suite.”
The Z-Wave implementation vulnerability in the IoT device referenced above exemplifies the need to perform firmware updates on IoT devices in the field. Since many IoT devices don't have much in the way of a UI or internal storage, an external application and the gateway are often required to retrieve and apply firmware updates. To update firmware securely, the system should record the current and new versions of the firmware, check for a valid signature on the downloaded firmware upon receipt, and check firmware integrity before installation.
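The version and signature checks just described can be sketched as a small update gate. The HMAC "signature" below stands in for a real vendor code-signing scheme (production firmware is signed with an asymmetric key such as RSA or ECDSA), and the key and image bytes are placeholders:

```python
import hashlib, hmac

VENDOR_KEY = b"vendor-signing-key"  # placeholder; real firmware uses asymmetric signatures

def sign(firmware: bytes) -> bytes:
    # Stand-in for the vendor's code-signing step (HMAC here for brevity).
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def apply_update(current_version: int, new_version: int,
                 firmware: bytes, signature: bytes) -> str:
    # 1. Refuse rollbacks to older (possibly vulnerable) firmware.
    if new_version <= current_version:
        return "rejected: version rollback"
    # 2. Verify the signature in constant time before touching flash.
    if not hmac.compare_digest(sign(firmware), signature):
        return "rejected: bad signature"
    # 3. An integrity check of the written image would follow here.
    return f"installed v{new_version}"

image = b"\x7fFIRMWARE v2 ..."
good     = apply_update(1, 2, image, sign(image))
tampered = apply_update(1, 2, image + b"\x00", sign(image))
stale    = apply_update(3, 2, image, sign(image))
print(good, "|", tampered, "|", stale)
```

Rollback protection matters as much as the signature check: an attacker who can downgrade a device to signed-but-vulnerable firmware reopens every hole the update closed.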
An important strategy when implementing security by design is to minimize the attack surface. In other words, an IoT gateway manufacturer should only implement the protocols and interfaces required to deliver the intended functionality and nothing more. This includes restricting services and interfaces that are running on the device for debugging purposes, but not intended for use by end-users. While these “hidden” interfaces are invaluable for manufacturing, development, and testing purposes, they are often vectors for information leakage and authentication backdoors. In addition to restricting debugging interfaces, all open interfaces should be designed to prevent a legitimate user from running arbitrary code on the gateway device.
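One practical way to enforce this is to audit a build's exposed services against a production allowlist before it ships. The service names and port numbers below are hypothetical, but the pattern catches exactly the kind of leftover debug interfaces described above:

```python
# Sketch: reject gateway builds that ship with debug services enabled.
# Service names and ports are hypothetical examples.

ALLOWED_PRODUCTION_SERVICES = {"mqtt": 8883, "https-mgmt": 443}

def audit_services(enabled: dict) -> list:
    """Return the services that should not be exposed in production:
    anything not on the allowlist, or bound to an unexpected port."""
    return sorted(
        name for name, port in enabled.items()
        if ALLOWED_PRODUCTION_SERVICES.get(name) != port
    )

# A build that accidentally kept its debug and telnet daemons enabled:
shipped_build = {"mqtt": 8883, "https-mgmt": 443, "adb-debug": 5555, "telnet": 23}
violations = audit_services(shipped_build)
print(violations)   # ['adb-debug', 'telnet']
```

Wiring a check like this into the release pipeline turns "sloppiness in design and production" into a build failure instead of a shipped vulnerability.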
The previously quoted research conducted by Veracode on 6 common home IoT gateways found that:
“Many of the most serious flaws revealed a kind of sloppiness in the design and production of the devices… For example: [two of the] devices left debugging interfaces exposed and unsecured in their shipped product. That could provide an avenue for attackers who had access to the same network as the device to steal information or bypass other security controls.”
“The point about the debugging interfaces is that it sounds obscure, but it really isn’t. All you need to do is go to the Android developer site, download a toolkit and within five minutes have full access to the [IoT gateway] device,” he said.
Will your Wi-Fi AP become the only gateway you need?
As more efficient Wi-Fi implementations, like the forthcoming 802.11ah standard, become more prevalent and are built into IoT devices, the IoT gateway function will eventually merge into the Wi-Fi access point, rather than remaining a separate device. That puts a lot more security demands on a network device which already has a history of vulnerabilities in the consumer space, and reinforces the need for a hardened IoT gateway with enterprise-class security built in from the start.
- 300,000 Compromised SOHO Gateways
- Virgin Media Router Snafu
- Asus and Linksys Router Vulnerabilities
- Linksys Router Worm
The Intel IoT Gateway
To fill the need for a secure IoT gateway for the enterprise, Intel is developing the Wind River Intelligent Device Platform. This device is positioned to connect both legacy industrial devices and emerging intelligent infrastructure to the IoT. It is designed to integrate protocols for networking, embedded control, security and manageability and provides a platform on which 3rd party applications can run. The Intel IoT Gateway whitepaper outlines the following security features of the device:
- Secure Boot – ensure only authorized and trusted software can run on the device when it is booted up.
- GRSecurity – a set of free software patches, released under the GNU GPL, that hardens the Linux kernel. It includes configurable role-based access control (RBAC) policies that allow collections of programs or processes to run with least privilege.
- McAfee Embedded Control – provides system integrity by allowing only authorized software to run, validating system changes, tracking file system changes, and protecting critical data files from tampering.
- Integrity Management Architecture – detects if files are maliciously or accidentally altered, both locally and remotely.
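Several of these features reduce to the same primitive: compare a cryptographic digest of what is about to run against a trusted reference, and refuse to proceed on a mismatch. A hedged sketch of that check using Python's hashlib (real secure boot verifies a vendor signature anchored in hardware, not a bare hash held in software):

```python
import hashlib

def verify_image(image: bytes, trusted_sha256_hex: str) -> bool:
    """Secure-boot-style integrity check: run the image only if its
    SHA-256 digest matches the trusted reference digest."""
    return hashlib.sha256(image).hexdigest() == trusted_sha256_hex

firmware = b"gateway firmware v1.2"
trusted = hashlib.sha256(firmware).hexdigest()  # provisioned at build time

verify_image(firmware, trusted)                    # True: boot proceeds
verify_image(firmware + b"\x00backdoor", trusted)  # False: boot refused
```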
Given the Intel IoT Gateway's focus on security and extensibility as a platform, Citrix has been experimenting with running the entire Octoblu IoT stack on the device, with great results; read more about it here: /blogs/2015/05/08/octoblu-intel-industrial-iot/.
The primary role of the IoT gateway is to connect low-power sensor networks to private Ethernet LANs and to the Internet. As devices proliferate in the enterprise, the gateway's role will grow to handle the increased network traffic and to include functions such as local processing of device automation, device management, and network access policies. It is not hard to see that the gateway plays a vital role in the IoT, but to earn a place in the enterprise it must be secure. In this article we presented a number of security requirements for the gateway device. These include:
- Encryption – Cryptography is an important component of securing communications through an IoT gateway, but it must be implemented properly to be effective. End-to-end, application layer encryption is one strategy to prevent the gateway from being the weak link in the security chain.
- Secure onboarding – Even with state-of-the-art cipher suites available, we've shown that secure device onboarding and key exchange are critical aspects of defense in depth at the gateway layer.
- Firmware updates – When security issues are found, devices in the field must be patched, and it is largely the gateway's role to deliver the firmware, especially to low-power sensors that lack a direct connection to the Internet.
- Interface minimization – Since the IoT gateway serves as an aggregation and control point for multiple devices, it is a high value target for malicious attacks. For this reason, manufacturers must take extra care to ensure only interfaces required to deliver the intended functionality are implemented and nothing more.
- Hardened Wi-Fi access points – In time, we expect the IoT Gateway role to merge with the Wi-Fi access point, but with the unique challenges of securing tiny IoT devices, gateways with enterprise-class security like the Intel Wind River IoT Gateway are required.
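To illustrate the end-to-end encryption point above: if the sensor encrypts at the application layer with a key shared only with the backend, the gateway forwards ciphertext it cannot read, so compromising the gateway leaks nothing. The sketch below uses a toy SHA-256 keystream purely to show the data flow; it is not production cryptography (a real deployment would use an authenticated cipher such as AES-GCM or ChaCha20-Poly1305):

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter-mode keystream).
    Illustration only -- the same call encrypts and decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = b"shared only by sensor and backend"  # gateway never sees this
nonce = b"unique-per-message"
reading = b'{"temp_c": 21.5}'

ciphertext = keystream_xor(key, nonce, reading)    # all the gateway sees
recovered = keystream_xor(key, nonce, ciphertext)  # backend decrypts
```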
In the next and final post in this IoT security series we will look at how to secure and maintain customer privacy in IoT cloud services.
After just seven months on Mars, NASA's rover Curiosity has sent back apparent proof that the Red Planet could have supported life in the distant past.
"A fundamental question for this mission is whether Mars could have supported a habitable environment," said Michael Meyer, lead scientist for NASA's Mars Exploration, in a written statement. "From what we know now, the answer is yes."
The evidence came from the first rock that NASA technology has ever drilled on another planet.
NASA reported Tuesday afternoon that analysis of a rock sample that Curiosity's robotic arm collected when it bored a 2.4-inch hole into a rock on Feb. 8 showed that it contained sulfur, nitrogen, hydrogen, oxygen, phosphorus and carbon, key chemical ingredients for life.
The rock sample was analyzed by chemistry instruments on the rover. The data was then sent to NASA scientists.
This is a huge finding for NASA, which sent the super rover Curiosity to Mars to seek evidence that the planet could have supported life, even in microbial form, at any point in its history, officials said.
The sedimentary rock that was drilled sits near an ancient streambed in Gale Crater. Data from the test indicates that the area was once the end of an ancient river system or an intermittently wet lake bed that could have provided the chemical energy and other conditions that could support the growth and survival of microbes.
The sample from the drilled rock also showed that the ancient environment it came from was not harshly oxidizing, acidic or very salty, NASA reported.
"We have characterized a very ancient, but strangely new 'gray Mars' where conditions once were favorable for life," said John Grotzinger, Mars Science Laboratory project scientist. "Curiosity is on a mission of discovery and exploration, and as a team we feel there are many more exciting discoveries ahead of us in the months and years to come."
Curiosity, which carries 17 cameras and 10 scientific instruments, drilled the rock at a site only a few hundred yards away from the spot where it found an ancient streambed last fall.
Curiosity's predecessors -- the Mars rovers Spirit and Opportunity -- were not equipped for drilling.
Today's news comes amid a NASA effort to repair software and hardware problems onboard Curiosity.
After finding a few weeks ago that computers onboard Curiosity were suffering memory problems, NASA engineers switched the rover onto its backup software system.
Engineers are continuing to analyze the problem.
"These tests have provided a great deal of information about the rover's A-side memory," said Jim Erickson, deputy project manager for Curiosity. "We have been able to store new data in many of the memory locations previously affected and believe more runs will demonstrate more memory is available."
NASA expects to upload two software patches, targeting onboard memory allocation and vehicle safing procedures, later this week.
The space agency reported that after the software patches are installed, the mission team will reassess when to resume full rover mission operations.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed . Her e-mail address is email@example.com.
This story, "NASA's Curiosity finds evidence of ancient habitable Mars," was originally published by Computerworld.
Last year, while cleaning out the basement of my childhood home, I discovered a plastic storage bin marked "Calcusoft." Inside were piles of notebooks filled with sketches, storyboards, and lines of code, and buried beneath it all, a TI-83 Plus graphing calculator.
I bought the calculator the summer before eighth grade, when it was included on a list of required supplies for students entering algebra. At the time, owning a graphing calculator was a small but significant rite of passage for a junior high student. It was a sign of academic sophistication. It announced to younger peers that the equations you were expected to solve outpaced the primitive features of meager, four-function devices. But most importantly, graphing calculators were programmable, which meant they were equipped to play games. While possession of a traditional handheld gaming system constituted a brazen breach of school rules, playing games on a calculator maintained the appearance of genuine scholarly work. A graphing calculator was like having a school-sanctioned Game Boy.
Calculators did not always have this allure. The earliest handheld models were only "programmable" in the sense that they used rudimentary code to complete repetitive computational tasks more efficiently. It wasn't until 1990, when Texas Instruments released the TI-81 graphing calculator, that the medium became a feasible platform for game design. Unlike earlier devices, the TI-81 was equipped with a simple yet versatile programming language called TI-BASIC. Anyone willing to learn a few elementary commands could create text, graphics, and movement in minutes.
A decade had passed by the time I purchased my graphing calculator, and in the years between a robust online community had formed. These programmers -- many of them high school students -- had even figured out how to program their calculators in more complex source languages. Unfortunately, accessing such games required specialized knowledge and equipment, making the best programs precious. I wanted more of these high-quality games, and I wanted to learn how to make them.
Definition: A function which is defined for all inputs of the right type, that is, for all of a domain.
See also partial function.
Note: Square (x²) is a total function. Reciprocal (1/x) is not, since 0 is a real number but has no reciprocal.
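The note's two examples, sketched in Python: square accepts every real input, while reciprocal is undefined at 0, which is exactly what makes it partial rather than total.

```python
def square(x):
    """Total: defined for every real x."""
    return x * x

def reciprocal(x):
    """Partial: 0 is in the domain of reals but has no reciprocal,
    so this raises ZeroDivisionError at x == 0."""
    return 1 / x
```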
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 13 September 2007.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "total function", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 13 September 2007. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/totalfunc.html
The Department of Defense has released a document that provides an outline for military-based cyber operations titled Strategy for Operating in Cyberspace (pdf) that contains five specific strategic initiatives.
A summary of the document is as follows:
"The security and effective operation of U.S. critical infrastructure – including energy, banking and finance, transportation, communication, and the Defense Industrial Base – rely on cyberspace, industrial control systems, and information technology that may be vulnerable to disruption or exploitation."
"In developing its strategy for operating in cyberspace, DoD is focused on a number of central aspects of the cyber threat; these include external threat actors, insider threats, supply chain vulnerabilities, and threats to DoD's operational ability. DoD must address vulnerabilities and the concerted efforts of both state and non-state actors to gain unauthorized access to its networks and systems."
"Potential U.S. adversaries may seek to exploit, disrupt, deny, and degrade the networks and systems that DoD depends on for its operations. DoD is particularly concerned with three areas of potential adversarial activity: theft or exploitation of data; disruption or denial of access or service that affects the availability of networks, information, or network-enabled resources; and destructive action including corruption, manipulation, or direct activity that threatens to destroy or degrade networks or connected systems."
"Cyber threats to U.S. national security go well beyond military targets and affect all aspects of society. Hackers and foreign governments are increasingly able to launch sophisticated intrusions into the networks and systems that control critical civilian infrastructure. Given the integrated nature of cyberspace, computer-induced failures of power grids, transportation networks, or financial systems could cause massive physical damage and economic disruption. DoD operations—both at home and abroad—are dependent on this critical infrastructure."
"While the threat to intellectual property is often less visible than the threat to critical infrastructure, it may be the most pervasive cyber threat today. Every year, an amount of intellectual property larger than that contained in the Library of Congress is stolen from networks maintained by U.S. businesses, universities, and government departments and agencies. As military strength ultimately depends on economic vitality, sustained intellectual property losses erode both U.S. military effectiveness and national competitiveness in the global economy."
- Strategic Initiative 1: Treat cyberspace as an operational domain to organize, train, and equip so that DoD can take full advantage of cyberspace’s potential.
- Strategic Initiative 2: Employ new defense operating concepts to protect DoD networks and systems.
- Strategic Initiative 3: Partner with other U.S. government departments and agencies and the private sector to enable a whole-of-government cybersecurity strategy.
- Strategic Initiative 4: Build robust relationships with U.S. allies and international partners to strengthen collective cybersecurity.
- Strategic Initiative 5: Leverage the nation's ingenuity through an exceptional cyber workforce and rapid technological innovation.
Kemprud E.P. (Amador Valley Medical Center), Montano S.A. (ImmunoScience Inc), Kalbag G.S. (Amador Valley Medical Center), Tam A.S.O. (Amador Valley Medical Center), and 2 more authors
European Infectious Disease | Year: 2011
The current methods of detecting exposure to HIV types 1 and 2 involve a serum enzyme-linked immunosorbent assay (ELISA), Western immunoblotting and polymerase chain reaction (PCR). All of these techniques require collection of blood, trained personnel and sophisticated lab facilities. There is also an inherent danger in collection of blood samples, especially in environments where disposable needles are not commonly used. As a result, the prevalent methods are impractical for use in the field and for mass screening, particularly in the developing world and in epidemic situations. This study compared a saliva-based rapid immunoassay with conventional ELISA. A total of 1,192 paired samples of saliva and blood from six Northern California locations were obtained. The saliva samples were subjected to the Salivax™ HIV immunoassay and the blood/serum samples were tested with the US Food and Drug Administration (FDA)-approved BioRad® enzyme immunoassay (EIA) in a College of American Pathologists (CAP)-certified laboratory. The comparison of results showed Salivax HIV to have a sensitivity of 99.53% and a specificity of 99.74%. This study confirms several prior studies that indicated that saliva is an extremely useful biological fluid for antibody screening and further indicates that Salivax HIV has the necessary sensitivity and specificity to act as a screening test, particularly in the highly populated, developing and rural world and as a point-of-care test. © Touch Briefings 2011.
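For readers unfamiliar with the two figures reported above: sensitivity and specificity are the standard confusion-matrix ratios, sketched below with hypothetical counts (not the study's actual data).

```python
def sensitivity(true_pos, false_neg):
    """Fraction of truly positive samples the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of truly negative samples the test correctly clears."""
    return true_neg / (true_neg + false_pos)
```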
A ZoneAlarm survey showed that 79% of consumers use risky password construction practices, such as including personal information and words.
The survey also revealed that 26% of respondents reuse the same password for important accounts such as email, banking or shopping and social networking sites.
In addition, as much as 8% admit to copying an entire password found online in a listing of “good” passwords. 29% of respondents had their own email or social network account hacked, and over half (52%) know someone who has had a similar problem.
The first thing a hacker will do to break into a computer or secure account is try and guess the victim’s password. Automated programs are also available to repeatedly guess passwords from a database of common words or other information.
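The automated guessing described above is trivial to build, which is why weak passwords fall so quickly. A minimal sketch (the word list and mutation rules are illustrative and far smaller than real cracking dictionaries):

```python
COMMON_WORDS = ["password", "letmein", "dragon", "monkey"]

def mutations(word):
    """Yield the trivial variants cracking tools try first."""
    yield word
    yield word.capitalize()
    yield word + "1"
    yield word + "123"
    yield word.replace("a", "@").replace("o", "0")

def crack(target):
    """Return the first matching guess, or None if the dictionary fails."""
    for word in COMMON_WORDS:
        for guess in mutations(word):
            if guess == target:
                return guess
    return None
```

A password built from a dictionary word plus a common substitution ("p@ssw0rd") falls instantly to this loop, while a random passphrase does not appear in any such list.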
The study also revealed that 22% of respondents had experienced email hacking and 46% know of others who experienced similar email problems.
Additionally, about 22% of respondents had experienced social network account hacking and 32% know others who have also had similar problem.
Once an attacker gains access to one account, almost 30% of the time that information can be used to access other sites that contain financial data such as bank account numbers and credit card information.
“Especially now, with online shopping on the rise this holiday season, consumers need to be aware of the importance of passwords and the fact that hackers are getting more and more sophisticated in cracking them,” said Bari Abdul, vice president of consumer sales at Check Point. “By creating a unique password for each important account, consumers create the first line of defense against online thieves who can’t wait to gain access to critical data for financial gain.”
Computers must have appropriate protection from malware attacks. Aside from creating annoyances, malware infections can also affect the performance of your computer. Furthermore, the data on your system and confidential information that you use online may also be tracked and used without your knowledge. Because of the severity of the problems that it can cause you need to be very cautious about preventing malware infections, and know how to deal with them properly.
Before proceeding with the steps on how to respond to malware infections, we first need to learn about the signs and symptoms of a malware infection. These include:
- Noticeably degraded system performance
- Persistent pop-ups and other on-screen annoyances
- Programs, toolbars, or network activity you don't recognize
In case you experience any of these symptoms, the first thing to do is to ensure that your antivirus and antispyware programs are updated, so that they can detect the latest known threats in their databases. You should then run scans to see if an infection is detected. If it is, the programs usually have a way to remove the infection; follow the steps the program recommends.
If this doesn’t work, disconnect the infected computer from the network to prevent the spread of the malware. Furthermore, avoid accessing the Web and using vital information such as bank account and credit card information. Let the technical department or your IT partner handle the concern since they are trained in determining and eradicating system malware infections.
Once the problem has been pinpointed, a tech specialist will go through the process of eliminating the infection. This includes backing up data on the computer and restoring the system to its original state. Depending on the extent of the infection, the computer may need to be wiped clean, or reformatted before restoring backed-up files.
After the whole process, the computer must be tested to ensure that the infection has been totally removed. Moreover, further investigation and studies must also be done to determine where the problem started, as well as to create a strategy as to how to prevent this from happening in the future.
Prevention is better than a cure and this definitely applies to malware infections. It’s best to arm yourself with knowledge on how to avoid malware attacks and prevent your systems from being infected.
Malware can hugely affect business operations and the security of private information. One of the best ways to prevent this is to work with an IT partner, like us, who can help recommend and install protection systems. You might want to think about getting help in managing these solutions too, to ensure that your systems are secure at all times.
If you have questions or concerns regarding malware prevention and resolution, feel free to call us. Our support team is always ready to help.
Definition: The part of a group of data by which it is sorted, indexed, cross referenced, etc.
See also dictionary, heap, sort.
Note: For instance, to sort customer records alphabetically, the key is the last name, then the given names. Other information, such as the address, outstanding balance, credit limit, etc. do not matter in sorting the records alphabetically. Different keys may be used at different times, for instance, an accounting report may need customers with larger balances first, so the records could be sorted using the outstanding balance as the key.
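The note maps directly onto sort keys in code: same records, different key, different report. A sketch:

```python
customers = [
    {"last": "Ng", "first": "Ada", "balance": 250.0},
    {"last": "Brown", "first": "Carl", "balance": 900.0},
    {"last": "Brown", "first": "Alice", "balance": 120.0},
]

# Alphabetical listing: key is last name, then given name.
alphabetical = sorted(customers, key=lambda r: (r["last"], r["first"]))

# Accounting report: key is outstanding balance, largest first.
by_balance = sorted(customers, key=lambda r: r["balance"], reverse=True)
```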
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "key", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/key.html
Phishing is a method of trying to gather personal information using deceptive e-mails and websites. Pharming also aims to collect personal information from unsuspecting victims by essentially tinkering with the road maps that computers use to navigate the Web. You don't want either one working its evil genius on you, your employees or your customers. Here's how to be on your guard against both phishing and pharming. Last updated: April 2009
- What is phishing?
- Can we prevent phishing attacks?
- What can my company do to reduce our chances of being targeted?
- What plans should my company have in place before a phishing incident occurs?
- How can we quickly find out if a phishing attack has been launched using our company's name?
- How can we help our customers avoid falling for phishing?
- If an attack does happen, how should we respond?
- Any legal/regulatory requirements we should be aware of?
- What action can we take against the phishers themselves?
- How might phishing attacks evolve in the near future? (E.g., "spear-phishing")
- How can we guard against pharming attacks?
Q: What is phishing?
A: Phishing is a method of trying to gather personal information using deceptive e-mails and websites. Typically, a phisher sends an e-mail disguised as a legitimate business request. For example, the phisher may pass himself off as a real bank asking its customers to verify financial data. (So phishing is a form of "social engineering".) The e-mail is often forged so that it appears to come from a real e-mail address used for legitimate company business, and it usually includes a link to a website that looks exactly like the bank's website. However, the site is bogus, and when the victim types in passwords or other sensitive information, that data is captured by the phisher. The information may be used to commit various forms of fraud and identity theft, ranging from compromising a single existing bank account to setting up multiple new ones.
Early phishing attempts were crude, with telltale misspellings and poor grammar. Since then, however, phishing e-mails have become remarkably sophisticated. Phishers may pull language straight from official company correspondence and take pains to avoid typos. The fake sites may be near-replicas of the sites phishers are spoofing, containing the company's logo and other images and fake status bars that give the site the appearance of security. Phishers may register plausible-looking domains like aolaccountupdate.com, mycitibank.net or paypa1.com (using the number 1 instead of the letter L). They may even direct their victims to a well-known company's actual website and then collect their personal data through a faux pop-up window.
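Substitutions like paypa1.com can be caught mechanically: normalize the characters phishers commonly swap in, then compare against the legitimate brand. A sketch (the confusable map is illustrative, nowhere near exhaustive):

```python
# Map commonly substituted characters back to the letters they imitate.
CONFUSABLES = str.maketrans({"1": "l", "0": "o", "3": "e", "5": "s", "@": "a"})

def looks_like(candidate: str, brand: str, legit_domain: str) -> bool:
    """Flag a domain that imitates the brand but is not the real site."""
    normalized = candidate.lower().translate(CONFUSABLES)
    return brand in normalized and candidate.lower() != legit_domain

looks_like("paypa1.com", "paypal", "paypal.com")      # flagged
looks_like("aolaccountupdate.com", "aol", "aol.com")  # flagged
looks_like("paypal.com", "paypal", "paypal.com")      # the real site: not flagged
```

Real brand-monitoring services layer many such heuristics (edit distance, homoglyphs, newly registered domains) on top of this basic idea.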
Can we prevent phishing attacks?
Companies can reduce the odds of being targeted, and they can reduce the damage that phishers can do (more details on how below). But they can't really prevent it. One reason phishing e-mails are so convincing is that most of them have forged "from" lines, so that the message looks like it's from the spoofed company. There's no way for an organization to keep someone from spoofing a "from" line and making it seem as if an e-mail came from the organization.
A technology known as sender authentication does hold some promise for limiting phishing attacks, though. The idea is that if e-mail gateways could verify that messages purporting to be from, say, Citibank did in fact originate from a legitimate Citibank server, messages from spoofed addresses could be automatically tagged as fraudulent and thus weeded out. (Before delivering a message, an ISP would compare the IP address of the server sending the message to a list of valid addresses for the sending domain, much the same way an ISP looks up the IP address of a domain to send a message. It would be sort of an Internet version of caller ID and call blocking.)
Although the concept is straightforward, implementation has been slow because the major Internet players have different ideas about how to tackle the problem. It may be years before different groups iron out the details and implement a standard. Even then, there's no way of guaranteeing that phishers won't find ways around the system (just as some fraudsters can fake the numbers that appear in caller IDs). That's why, in the meantime, so many organizations—and a growing marketplace of service providers—have taken matters into their own hands.
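The caller-ID analogy translates into a small check on the receiving side: look up the networks the claimed sending domain has published as legitimate, then test whether the connecting server's address falls inside one. A sketch with illustrative reserved-documentation ranges (a real implementation would fetch the published list from the domain's DNS record, not a hard-coded table):

```python
import ipaddress

# Networks a domain's owner has published as legitimate mail senders.
# In practice this table is fetched from DNS, not hard-coded.
PUBLISHED_SENDERS = {
    "bank.example": ["192.0.2.0/24", "198.51.100.0/25"],
}

def sender_authenticated(claimed_domain: str, connecting_ip: str) -> bool:
    """True if the connecting server is on the claimed domain's list."""
    networks = PUBLISHED_SENDERS.get(claimed_domain, [])
    address = ipaddress.ip_address(connecting_ip)
    return any(address in ipaddress.ip_network(net) for net in networks)
```

A message claiming to be from bank.example but arriving from an unlisted address would be tagged as likely fraudulent.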
What can my company do to reduce our chances of being targeted by phishing attacks?
In part, the answer has to do with NOT doing silly or thoughtless things that can increase your vulnerability. Now that phishing has become a fact of life, companies need to be careful about how they use e-mail to communicate with customers. For example, in May 2004, Wachovia's phones started ringing off the hook after the bank sent customers an e-mail instructing them to update their online banking user names and passwords by clicking on a link. Although the e-mail was legitimate (the bank had to migrate customers to a new system following a merger), a quarter of the recipients questioned it.
As Wachovia learned, companies need to clearly think through their customer communication protocols. Best practices include giving all e-mails and webpages a consistent look and feel, greeting customers by first and last name in e-mails, and never asking for personal or account data through e-mail. If any time-sensitive personal information is sent through e-mail, it has to be encrypted. Marketers may wring their hands at the prospect of not sending customers links that would take them directly to targeted offers, but instructing customers to bookmark key pages or linking to special offers from the homepage is a lot more secure. That way, companies are training their customers not to be duped.
It also makes sense to revisit what customers are allowed to do on your website. They should not be able to open a new account, sign up for a credit card or change their address online with just a password. At a minimum, companies should acknowledge every online transaction through e-mail and one other method of the customer's choosing (such as calling the phone number on record) so that customers are aware of all online activity on their accounts. And to make it more difficult for phishers to copy online data-capture forms, organizations should avoid putting them on the website for all to see. Instead, organizations should require secured log-in to access e-commerce forms.
At the end of the day, though, better authentication is the best way to decrease the likelihood that phishers will target your organization. Banks are beginning to experiment with technologies like RSA tokens, biometrics, one-time-use passwords and smart cards, all of which make their customers' personal information less valuable for phishers.
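Of the authentication options listed above, one-time passwords are the easiest to sketch. The function below follows the HOTP construction from RFC 4226 (shared secret hypothetical): each counter value yields a different six-digit code, so a code captured by a phisher is useless for the next login.

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP one-time password (RFC 4226 construction)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"shared-between-token-and-server"
code_now = one_time_code(secret, 1)
code_next = one_time_code(secret, 2)  # a fresh code each counter step
```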
One midsized bank was able to cut its phishing-related ATM card losses by changing its authentication process. Every ATM card has data encoded on its magnetic strip that the customer can't see but that most ATM machines can read. The bank worked with its network provider to use that hidden information to authenticate ATM transactions—an important step that, according to Gartner, only about half of U.S. banks had taken by mid-2005. "Since the number isn't printed on the back of the card, customers can't accidentally disclose it," the bank's CISO explained. The information was already in the cards, so the bank didn't have to go through an expensive process of reissuing cards. "It was a very economical solution, and it's been very effective," said the CISO.
What plans should my company have in place before a phishing incident occurs?
Before your organization becomes a target, establish a cross-functional anti-phishing team and develop a response plan so that you're ready to deal with any attack. Ideally, the team should include representatives from IT, internal audit, communications, PR, marketing, the Web group, customer service and legal services.
This team will have to answer some hard questions, such as:
* Where should the public send suspicious e-mails involving your brand? Set up a dedicated e-mail account, such as phishing@example.com, and monitor it closely.
* What should call center staff do if they hear a report of a phishing attack? Make sure that employees are trained to recognize the signs of a phishing attack and know what to tell and ask a customer who may have fallen for a scam.
* How and when will your organization notify customers that an attack has occurred? You might opt to post news of new phishing e-mails targeting your company on your website, reiterating that they are not from you and that you didn't and won't ask for such information.
* Who will take down a phishing site? Larger companies often keep this activity in-house; smaller companies may want to outsource.
- If you keep the shut-down service in-house, a good response plan should outline whom to contact at the various ISPs to get a phisher site shut down as quickly as possible. Also, identifying law enforcement contacts at the FBI and the Secret Service ahead of time will improve your chances of bringing the perpetrator to justice.
- If a vendor is used, decide what the vendor can do on your behalf. You may want to authorize representatives to send e-mails and make phone calls, but have your legal department handle any correspondence involving legal action.
* When will the company take action against a phishing site, such as feeding it inaccurate information or exploiting vulnerabilities in its coding? Talk out the many pros and cons beforehand.
* How far will you go to protect customers? Decide how much information about identity theft you'll give to customers who fall for a scam, and how this information will be delivered. You should also talk through scenarios in which you will monitor or close and re-open affected accounts.
* Are you inadvertently training your customers to fall for phishing scams? Educate the sales and marketing teams about characteristics of phishing e-mails. Then, make sure legitimate e-mails don't set off any alarms.
How can we quickly find out if a phishing attack has been launched using our company's name?
Sometimes a new phish announces itself violently, as an organization's e-mail servers get pummeled with phishing e-mails that are bouncing back to their apparent originator. There are other ways to learn about an attack, though—either before or after it occurs.
a) Monitor for fraudulent domain name registrations.
Phishers often set up the fake sites several days before sending out phishing e-mails. One way to stop them from swindling your customers is to find and shut down these phishing sites before phishers launch their e-mail campaigns. You can outsource the search to a fraud alert service. These services use technologies that scour the Web looking for unauthorized uses of your logo or newly registered domains that contain your company's name, either of which might be an indication of an impending phishing attack. This will give your company time to counteract the strike (more on that later).
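In-house teams can approximate a crude version of this kind of scan themselves. The sketch below (illustrative Python only; the brand name and domain list are placeholders, not a real registration feed) flags newly registered domains that embed a company's name or differ from it by a character or two:

```python
# Illustrative sketch: flag newly registered domains that contain a brand
# name or sit within a small edit distance of it. The brand and the sample
# domains are placeholders, not real data.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suspicious_domains(brand, new_domains, max_distance=2):
    """Return the domains whose first label contains or closely mimics the brand."""
    hits = []
    for domain in new_domains:
        label = domain.lower().split(".")[0]  # e.g. "examp1ebank"
        if brand in label or edit_distance(brand, label) <= max_distance:
            hits.append(domain)
    return hits
```

A real monitoring service would of course work from live registration data and catch far more tricks (homoglyphs, subdomains, alternate TLDs), but the substring-plus-edit-distance idea is the core of it.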
b) Set up a central inbox.
The easiest and most effective way to find out if your organization is being targeted by phishers is simply to give the general public a way to report phishing attacks. "It's your customers and noncustomers who are going to be the ones that tell you that the phish is out there," said one security manager interviewed for a case study published in CSO. To do this, organizations typically set up one e-mail address where all suspected phishing e-mails are directed, with an address such as firstname.lastname@example.org or email@example.com. Ideally, this central inbox should be monitored 24/7.
c) Watch your Web traffic.
After gathering victims' information, many phishing sites redirect the victim to a log-in page on the real website the phisher is spoofing. SANS's Internet Storm Center recommends that by examining Web traffic logs and looking for spikes in referrals from specific, heretofore unknown IP addresses, CSOs may be able to zero in on sites used for large-scale phishing attacks.
d) Hire a firm to help.
The same companies that scan the Internet for unauthorized uses of your logo can also monitor for active phishing sites. For example, Toronto-based Brandimensions hosts a vast, interconnected network of domain names and e-mail addresses intended solely to attract phishing e-mails and other spam. They're called honeypots. Entire websites are built to publish e-mail addresses, point to one another, and thereby attract the attention of automated Web crawlers that compile spam lists. The company then uses "relevancy detection software" to flag the e-mails that could be most damaging to its customers.
How can we help our customers avoid falling for phishing?
People who know about phishing stand a better chance of resisting the bait. "The best defense is that a consumer has heard of phishing and is unlikely to respond," says Patricia Poss, an attorney with the Bureau of Consumer Protection at the Federal Trade Commission. Consumers must be trained to think twice about replying to any e-mail or pop-up that requests personal information.
Getting bright ideas while performing everyday tasks is not really unusual; after all, many people have gotten their brightest ideas while doing the most mundane things. But getting the idea of bridging the digital divide from wheeling and dealing on Wall Street?
That's what happened to Iqbal Quadir way back in 1993, when this Wharton-educated Bangladeshi national, then working as an investment banker on Wall Street, realized that just as people find gold mines in "unglamorous companies" by buying them cheap and selling high, he too had an "unglamorous" country, Bangladesh, whose huge population of very poor people could be an "asset" if only they could communicate.
Quadir knew that one of the biggest reasons more than 80 percent of his country's residents lived below what the world considers the poverty level was that most homes in Bangladesh, and virtually all rural villages, lacked telephone connectivity, making the nation one of the least wired in the world. So, if connectivity meant productivity, then it must be a weapon against poverty.
Thus in 1997 GrameenPhone was born (Grameen means rural in the local lingo) to offer affordable mobile phone services to as many people as possible.
However, it wasn't easy in the beginning. The country had a very large population of poor people, who lived in places so underserved that it took four hours of walking to reach the nearest post office or medicine shop. So how does one get a mobile phone to all those Bangladeshis, most of whom couldn't even afford to pay for a call, let alone own a mobile telephone?
In 2004, the autonomous region of Catalonia, Spain, launched an ambitious project to transform how it delivered services to citizens. Services from all 16 government departments, scattered across a patchwork of 270 separate networks - often hard to locate - were to be grouped according to user needs.
Technologically the transformation called for upgrading the government's internal network to all-Internet protocol (IP) and widespread broadband deployment. By replacing numerous public networks with a shared services center, Catalonia gave its citizens access to all government departments through a common portal using a single telephone number or URL.
Today, Catalonia reports that citizen satisfaction with service speed doubled in six months. Project cost savings are $22 million over three years, and the government expects a 30 percent reduction in the time it takes constituents to find services.
Unlike Catalonia, most government agencies and bureaus still maintain their own individual processes, networks and information systems to accomplish local tasks. However, these governmental IT and communications "silos" have created several situations that warrant improvement.
Fortunately technology is coming together with government programs and policies to transform what were independent systems and processes into a more connected government. As governments realize the inefficiencies of a segmented and isolated organization, policymakers are enacting regulations that allow, and sometimes require, government agencies to share information and processes across boundaries.
Simultaneously, IP-based shared-services technologies have emerged that allow governments to pull together their processes and information resources in a collaborative way. These technologies, which are now embedded in the basic network infrastructure, are:
Virtualization/service-oriented architecture is at the heart of building shared services. Hardware virtualization links multiple computing systems and WANs into one large pool of resources that an entity can use. Virtualization also logically segments user groups across the data center, as well as LANs and WANs, so they operate securely across distributed entities and departments while maintaining privacy.
In addition to hardware virtualization, service virtualization -- also called a service-oriented architecture -- relies on standard software tools and design principles to turn individually hosted applications into networkwide services that operate independently of user-access devices, local computing hardware platforms and operating systems. This infrastructure fosters interactive, real-time collaboration within and among agencies.
Collaboration/unified communications. Unifying communications systems by linking applications to one another enables the transparent use of processes and resources across systems. It also accelerates communication among employees, between employees and citizens, and between agencies with public safety or intelligence information to share.
Unified communications systems include IP telephony infrastructure and related conferencing applications, integrated voice and data messaging systems, video-conferencing systems, and contact/call centers. They also comprise special IP equipment and applications enabling the interoperability of wireless radio systems that empower police officers, firefighters and other public safety personnel to communicate with one another.
Collaboration is a large focus area for government CIOs: In November 2006, Forrester Research surveyed 64 government technology decision-makers in North America to discover where they planned to invest their software budgets in 2007. The study found that upgrading e-mail, messaging and collaboration systems were the top priority for government CIOs.
Mobile and wireless. Mobility constitutes a significant portion of the collaborative, shared-services government environment, particularly from a public safety perspective. Mobility is delivered by mobile WANs (cellular networks), as well as wireless LAN technology used to build mesh networks that deliver high-speed mobility (at LAN speeds) throughout municipalities. These standards-based networks operate with corresponding wireless client devices, such as cell phones, tablets, two-way radios and laptops used by mobile personnel.
Typically, however, various public safety radio networks run on different frequencies and are not interoperable. So when an emergency requires collaboration among the local fire department, the local police department and the state police, to name a few, voice and other communications must take place with each entity individually.
Now intelligent IP systems can connect dissimilar radio systems at the push of a button. When safety agencies share video feeds, building blueprints and hazardous-materials databases across disparate radio systems and other public safety organizations, agencies can dispatch the appropriate personnel and arm them with relevant information about the environment they are entering.
IP-based radio intercommunications systems can also convert other communications systems - including computers, cell phones and public address systems - into ad hoc radios, so key people can be reached during an emergency no matter where they are, and connected to a central communications channel.
These networks support push-to-talk (walkie-talkie) and cellular voice capabilities for interactive and broadcast communications, and will soon gain data and video functions to further boost public safety efforts. For example, an officer with a wireless display could download local maps and other data that could be helpful in an emergency. A video camera mounted on a fire marshal's helmet could link to local surveillance cameras in a burning building so emergency personnel on the scene could see what's going on inside and avoid injury.
Mobile networks provide other gains to the public safety sector. Because emergency responders can file reports electronically, for example, they can remotely update public safety databases in real time and download information from the records-management system. This not only saves responders time by not having to drive to the station to retrieve reports and files, but also keeps the centralized information freshly updated for access by other personnel, who may be researching cases of their own.
Finally, video surveillance can join the government's IP network. Tying the surveillance system into the network makes video content accessible from anywhere across the network, including mobile security personnel's PDAs and mobile phones. With viewing no longer restricted to banks of monitors housed in special rooms, security professionals can see what's happening in multiple places throughout an organization.
Secure Information Sharing. Common applications and cross-department computing systems must be wrapped in a strong layer of security that allows information to be shared among authorized government personnel while protecting sensitive data. Sharing information among organizations while consolidating government onto a shared infrastructure appears to pose contradictory objectives to the CIO. Fortunately industry partnerships and technology developments have come together to meet these seemingly opposing objectives.
To help foster a highly secure architecture at the computing, application, storage and network levels, Cisco, EMC, and Microsoft announced in July 2007 an alliance and related architecture for secure information sharing across government boundaries. It's called the Secure Information Sharing Architecture (SISA), and it blends secure-networking components, identity management and storage subsystem technology with other off-the-shelf secure components to achieve a shared service infrastructure while maintaining policy-based security centered on communities of trust.
SISA's goal is to break down the barriers at the traditional organizational and jurisdictional IT infrastructure boundaries, while applying policies that achieve information security and privacy so sensitive information is better protected and can be shared among authorized communities more effectively.
Drive Toward Unification
When various arms of a given government interconnect their resources, they can gain interoperability among applications, and synchronize their databases and backup storage resources. Cross-boundary personnel can then access consistent data, see the larger picture and collaborate effectively with their counterparts in other organizations to improve service levels.
A market research firm, Kearney, which recently conducted a survey of C-level executives about shared services, estimates that organizations save 20 percent to 50 percent in operations costs with shared services. The survey also found that shared services improve productivity by 10 percent.
Hard cost savings are only one benefit of shared services. Shared services also enable entirely new capabilities that empower government leaders, emergency responders and constituents. For example, a single repository of constituent information accessible by authorized personnel allows citizens to update their information just once, instead of having to contact the property tax collector, department of motor vehicles, voter registration department, library and so forth. Some countries, such as the Netherlands, forbid a government organization to request information from a citizen if that person has already provided it to another agency.
Similarly, public safety officials who access consistent, updated, real-time information from a single source can take appropriate action in emergency situations faster. Eventually shared services could allow public health officials to monitor confidential data - on pandemics, say - found in different government agencies and private-sector databases. They could then use the shared-services infrastructure to coordinate response efforts with both government agencies and critical private-sector partners.
Like the private sector, governments are investing in the unification of their technology infrastructures both to save substantial amounts of money, and improve citizen experiences and interactions.
Unifying IT and networking infrastructures, and instituting cross-agency collaborative applications in a shared-services environment, require a re-engineering of governmental back offices into a citizen-centric entity that acts as a single enterprise rather than disconnected agencies and bureaus. | <urn:uuid:77023131-c264-466f-be95-1c1a4cebf1fb> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/Building-Blocks-of-Shared-Services.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00374-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932816 | 1,789 | 2.609375 | 3 |
It looks like a bull, trots at the speed of a wolf and carries equipment like a pack mule, but does it have a place on the battlefield of the future? Researchers in the U.S. are conducting a two-year study of a robot that promises to lighten the load that soldiers must carry and they gave it a high-profile demonstration in September.
The four-legged robot, developed by the U.S. government-funded Defense Advanced Research Projects Agency (DARPA) and Boston Dynamics, is part of DARPA's Legged Squad Support System (LS3) program, and is packed with technology. It's an evolution of Big Dog, a robot platform developed by Boston Dynamics several years ago. (See video of the robot in this report on YouTube.)
As warfare gets more high-tech, soldiers are being asked to carry more gear -- as much as 45 kilograms, according to the U.S. military -- and that can slow them down, bring on injuries or hasten the onset of fatigue. So the U.S. Army and DARPA have made physical overburden an important focus of their technology research.
The new robot walks on four legs and has a fast-reacting balance system that means it won't fall over if shoved from one side -- something that most robots can't handle. If it does somehow fall, it's capable of righting itself. There are also "eyes" at the front, actually electronic sensors that constantly scan the surroundings.
A two-year test of the robot began in July and, if all goes well, will culminate with models of the robot taking part in a battlefield exercise alongside soldiers.
Before that happens, researchers want to perfect three distinct autonomous modes: "leader-follower tight" in which the LS3 follows as close as possible to the path of a human leader; "leader-follower corridor" in which the robot follows a leader but has the ability to decide its own path; and "go-to-waypoint" where it makes its own way to a GPS coordinate using sensors to avoid obstacles.
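In spirit, the simplest of these modes can be sketched in a few lines. The toy Python below is an illustration of the general "go-to-waypoint" idea only, not DARPA's or Boston Dynamics' actual control code: on each control tick it heads for a goal coordinate and fans out in 10-degree deflections when a sensed obstacle lies too close to the intended next position.

```python
import math

def step_toward(pos, goal, obstacles, step=1.0, clearance=2.0):
    """One control tick of a simplified "go-to-waypoint" mode: head for the
    goal coordinate, deflecting in 10-degree increments (left/right, up to
    180 degrees) when a sensed obstacle is within the clearance radius."""
    heading = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    # Try the direct heading first, then fan out: 0, +10, -10, +20, -20, ...
    for k in [0] + [s * d for d in range(1, 19) for s in (1, -1)]:
        h = heading + math.radians(10 * k)
        nxt = (pos[0] + step * math.cos(h), pos[1] + step * math.sin(h))
        if all(math.dist(nxt, ob) >= clearance for ob in obstacles):
            return nxt
    return pos  # boxed in on all sides: hold position this tick
```

Run in a loop, this greedy rule walks the "robot" around a blocking obstacle and on to the waypoint; the real system layers terrain sensing, gait control and path planning on top of the same basic intent.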
The robot is powered by a gasoline engine, which brings advantages -- plans call for it to be able to carry 180 kilograms on a 30-plus-kilometer hike over 24 hours -- but also means it's noisy. Early prototypes were so loud it wasn't possible to have a conversation nearby, but that is slowly changing. The latest version, demonstrated a few weeks ago, makes a tenth of the noise.
The demonstration, at Joint Base Myer-Henderson Hall in Virginia, gave General James Amos, commandant of the U.S. Marine Corps, and Arati Prabhakar, director of DARPA, a close-up look at the robot.
"For me, to see where it's gone just in the last four years and where it was with Big Dog, which was fascinating, you had to have a leap of imagination to know that we would get there eventually. We're getting close. Very, very close," Amos said, according to a story about the demonstration on the U.S. Army's web site.
During the test it was controlled with the Tactical Robot Controller (TRC), a handheld touchscreen controller that can operate many of the robotic platforms used by the U.S. military, including the TALON, Dragon Runner, Robotic Bobcat, Raider and MAARS robots.
In the future, developers want to add voice-recognition to the robot so soldiers will be able to command it to do things by voice alone, DARPA said.
And in addition to hauling equipment, the robot's generator can also be used to recharge or power equipment when needed.
Tests of the robot are scheduled to take place approximately every quarter between now and the end of the research program. In December this year it will take part in its first test with the Marine Corps Warfighting Laboratory (MCWL) at a U.S. base location yet to be disclosed.
Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn's e-mail address is email@example.com | <urn:uuid:52bf8e78-122a-4608-80df-bb9f958105c9> | CC-MAIN-2017-04 | http://www.cio.com/article/2391750/government-use-of-it/darpa-begins-testing-robotic-mule-for-battlefields.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00190-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962664 | 889 | 2.953125 | 3 |
In preparation for your CCNA exam, we want to make sure we cover the various concepts that you could see on the Cisco CCNA exam. So to assist you, below we will discuss the Cisco CCNA three-layer model.
The Cisco CCNA exam has been designed to prepare you for the real world of managing a large network, and that can be a very complicated procedure. There are many factors to take into consideration when designing a network's topology: traffic, WAN support, speed, reliability, cost, and more. This is where Cisco's Three-Layer Hierarchical Model proves to be an invaluable asset to establishing a fast, dependable, scalable, and cost-effective internetwork. You will definitely want to make sure you are familiar with this for your CCNA exam. The three layers are Core, Distribution, and Access, and they each provide their own purpose. Using this model, it is possible to create a network that is designed in a predictable way, is easily maintained, is efficient to troubleshoot, leaves plenty of room for future expansion of the network or network services, and can provide the most possible functionality across a wide range of applications.
The Core Layer is effectively the backbone of the Hierarchical Model. This layer provides fast and efficient switching of massive amounts of data for the whole operation, and it must be designed with speed and low latency (the time between receiving and transmitting a packet) in mind. It's important that changes to this layer be made with the understanding that it sits at the top of the hierarchy: anything affecting this layer is likely to affect all users across the board. With that in mind, fault tolerance and speed are essential at the Core Layer. Avoid using any access lists or packet filters at this level; these belong at the Distribution Layer and may cause problems if implemented in the Core Layer.
The Distribution Layer, or Workgroup Layer as it is sometimes known, provides routing, filtering, and WAN services, acting as a go-between for the Access Layer and the Core. The Distribution Layer also determines how packets will access the core, using the fastest and most reliable method possible. This layer will be used to implement network policies, such as address translation, firewalls, and filtering. The Distribution Layer will negotiate requests between the Access Layer, to which all workgroup members are connected, and the Core Layer, which will direct the request to the proper service and provide access if necessary to the Access Layer.
The Access Layer, or Desktop Layer, controls local user access to workgroup resources on the internetwork. The network at this level will be segmented into separate collision domains, allowing for smooth transmission to each host. There is continued management of access control and policy at this layer, a task that is shared between the Access Layer and the Distribution Layer. Static routing and DDR/Ethernet Switching are commonly implemented in this layer as well.
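As a quick self-check, the placement rules described above can be sketched as a simple lookup. This is an illustrative Python study aid only (the function names in the mapping are an assumed shorthand, not Cisco configuration syntax):

```python
# Study aid: which hierarchical layer owns a given network function,
# per the placement rules of the Cisco Three-Layer Hierarchical Model.
# The function-name strings are an assumed shorthand for this sketch.
LAYER_FUNCTIONS = {
    "core":         {"high-speed switching", "fault tolerance"},
    "distribution": {"routing", "packet filtering", "access lists",
                     "address translation", "firewalls", "WAN access"},
    "access":       {"collision-domain segmentation", "static routing",
                     "ethernet switching"},
}

def layer_for(function):
    """Return the layer a function belongs to, or raise if unmapped."""
    for layer, functions in LAYER_FUNCTIONS.items():
        if function in functions:
            return layer
    raise ValueError(f"unmapped function: {function!r}")
```

For example, `layer_for("access lists")` lands on the Distribution Layer, which is exactly why the Core Layer discussion above warns against placing filters there.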
Summarily, the Cisco Three-Layer Hierarchical Model establishes standards for designing a reliable and fast internetwork. It is a tool that has been provided to ensure some predictability in topology and network design. This layered approach should also be understood as a logical layering, not a physical one, meaning it is not necessary that there be three routers performing each of these functions separately. There may be multiple devices performing single-layer functions, or functions of different layers may be performed by one or two devices. Each of these layers, with its specific functions, is integral to the overall deployment of a productive and powerful network. If you are familiar with these concepts, you are one topic down on your way to your CCNA certification!
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material. It is having the real world knowledge to configure your Cisco equipment and be able to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification. | <urn:uuid:a2992c36-d459-49a0-bd12-8891ce28bc71> | CC-MAIN-2017-04 | https://www.certificationkits.com/ccna-3-layer-model/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00034-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93401 | 829 | 2.8125 | 3 |
Technology is constantly changing. Some technologies ride a rollercoaster of favor and fade as they seek to find their place (RFID), while others replace earlier versions (imagers for lasers), and then there are those that create a permanent place in our world (the internet). Three current technologies that appear to be creating a permanent mark on the IT landscape are “Big Data”, “Cloud Computing” and “Mobile Computing”.
Big Data as defined by SAS Institute Inc.
“Big data is a popular term used to describe the exponential growth and availability of data, both structured and unstructured. And big data may be as important to business – and society – as the Internet has become. Why? More data may lead to more accurate analyses.”
Cloud Computing as defined by whatis.com
“Cloud computing is a general term for anything that involves delivering hosted services over the Internet. The name cloud computing was inspired by the cloud symbol that’s often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic — a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.”
Mobile Computing as defined by Wikipedia.org
Mobile Computing is “taking a computer and all necessary files and software out into the field”. Mobile computing is any type of computing which uses Internet or intranet and respective communications links, such as WAN, LAN or WLAN.
There are at least three different classes of mobile computing items.
The existence of these classes is expected to be long lasting, and complementary in personal usage, none replacing the other ones.
So why all the fuss?
These are three of the biggest technology revolutions occurring and they have significant interdependency. Big Data requires large amounts of available storage, accessibility and processing power. Cloud computing provides unlimited storage, accessibility and processing power. Big data by definition requires the collection and consumption of valuable real-time information. Mobile computing is designed specifically for the collection and consumption of valuable real-time information. The promise of Big Data is using more information to make better decisions. The promise of mobile computing is getting better decisions to the right people. The decision making capabilities of big Data are enhanced with real-time information. Mobile computing delivers real-time information. Cloud computing requires user access portals. Mobile computers are ideal user access portals.
The world is changing, our industry is changing. We are the critical link between the enterprise and users, the “last mile” if you will. The future is here and DecisionPoint is leading the way. | <urn:uuid:292aebeb-ad62-414a-8a7a-fdc1ec80984d> | CC-MAIN-2017-04 | http://blog.decisionpt.com/big-data-cloud-computing-and-mobile-computing-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00548-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943075 | 615 | 3.046875 | 3 |
Humidity monitoring can help keep hospital patients healthy
Friday, Apr 5th 2013
Healthcare providers have used temperature and humidity monitoring equipment for years to make sure their patient care tools and critical infrastructure remains in top shape at all times. Increasingly, environment control systems are necessary to make sure patients and staff stay healthy as well.
Although the purpose of a hospital is to treat patients and cure diseases, too often those seeking care end up falling ill as a result of their medical care. The Centers for Disease Control and Prevention reported that for every 200 patients that enter a hospital, 9 of them will fall ill during their time at the facility. As a result of these hospital-acquired infections, medical care facilities in the United States spend between $28.4 billion and $45 billion in additional related treatment costs every year.
"As a physician myself, I know we all entered medical school with one idea in mind - to save lives," Denise Cardo, Director of the Division of Healthcare Quality Promotion at the CDC, said in a video address. "Having a patient get a healthcare-associated infection in the course of their treatment is devastating and can have tragic outcomes. Fortunately, we know how to prevent these infections."
Humidity monitoring, air quality and HAI avoidance
Although there are many steps a medical care facility can take to reduce the chance that a patient falls ill for any reason, one of the top ways that not all treatment centers may have considered is through humidity monitoring.
According to the CDC, lung-related infections are one of the top HAIs. Although lung infections can have numerous causes, an excess of mold and other contaminants in the air is one of the prime causes of respiratory illness in indoor environments. For example, Triple Pundit reported that the asthma symptoms of 15 million Americans are triggered by poor air quality. In hospital settings that breed medicine-resistant superbugs and in which patient immune systems may already be compromised, the potential problem is exponentially greater.
Fortunately, this problem is relatively easy to address with proper humidity monitoring. Mold, bacteria and fungi responsible for causing many lung-related medical episodes thrive when the air is especially moist. However, building operators should be sure that moisture levels do not drop too low, as overly dry air can aggravate mucus membranes and damage sinuses, according to the Environmental Protection Agency. To stay in a comfortable space between these two poles, managers can use humidity monitoring to ensure ideal air moisture levels at all times.
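A minimal monitoring rule illustrates the idea of staying between those two poles. In the Python sketch below, the 30-60 percent band is an illustrative assumption, not a clinical or regulatory standard; real setpoints should follow facility and healthcare guidance:

```python
# Illustrative sketch: flag relative-humidity readings outside a comfort
# band. The default 30-60% limits are assumed for illustration only.
def humidity_alerts(readings, low=30.0, high=60.0):
    """Return (timestamp, percent_rh, problem) for each out-of-band reading.

    `readings` is a sequence of (timestamp, percent_rh) pairs, as a
    monitoring appliance might log them.
    """
    alerts = []
    for ts, rh in readings:
        if rh < low:
            alerts.append((ts, rh, "too dry: can irritate mucus membranes"))
        elif rh > high:
            alerts.append((ts, rh, "too humid: mold and bacteria can thrive"))
    return alerts
```

Feeding an hour's worth of sensor readings through a rule like this is the software half of the job; the other half is wiring the alerts to the people who can adjust the HVAC system before mold or dry-air problems reach patients.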
Part of the process of maintaining ideal air moisture levels involves the use of temperature monitoring equipment as well. Triple Pundit reported that standard air conditioning units are vital in removing contaminants from the air and making sure the circulation in a building is appropriately encouraging ideal indoor conditions.
"Maintaining good indoor air quality requires attention to the building's heating, ventilation, and air conditioning (HVAC) system; the design and layout of the space; and pollutant source management," the EPA said. "HVAC systems include all of the equipment used to ventilate, heat, and cool the building; to move the air around the building (ductwork); and to filter and clean the air. These systems can have a significant impact on how pollutants are distributed and removed. HVAC systems can even act as sources of pollutants in some cases, such as when ventilation air filters become contaminated with dirt and/or moisture and when microbial growth results from stagnant water in drip pans or from uncontrolled moisture inside of air ducts."
Having a quality temperature and humidity monitoring system will likely not solve all of the problems hospitals face in keeping patients healthy and limiting the number of HAIs. However, by using monitors to accurately keep track of internal conditions at all times, facilities managers and healthcare providers can dramatically decrease the likelihood of a person having significant respiratory problems while receiving medical care. | <urn:uuid:95530526-1c34-4fc2-b853-8f12840b067c> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/humidity-monitoring-can-help-keep-hospital-patients-healthy-417636 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00364-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951478 | 778 | 2.578125 | 3 |
The Fleischers, Dave and Max, invented the rotoscope technique for combining live-action film and animation. This film is one of the earliest uses of the technique (besides their own "Out of the Inkwell" series from the previous year), and it gives a lighthearted overview of how the telephone works.
Max Fleischer today is best-known for creating Betty Boop and animating the first Popeye and Superman cartoons, but he also was a pioneer in producing animated films for industry and the military, both of a technical and scientific nature. Inkwell Studios was so gifted and quick in the turnaround that it took Dave Fleischer only a week to familiarize himself with the workings of AT&T well enough to produce That Little Big Fellow. This was not the first film they had made for the Bell System, and it was not to be their last.
Inkwell Studios films for AT&T:
How the Telephone Talks - 1924
That Little Big Fellow - 1927
Now You're Talking - 1927
Finding His Voice - 1929
Producer: Inkwell Studios, Dave and Max Fleischer
Footage courtesy of AT&T Archives and History Center, Warren, NJ | <urn:uuid:0483a863-382d-4378-9c1c-2f656c516860> | CC-MAIN-2017-04 | http://techchannel.att.com/play-video.cfm/2011/4/6/AT&T-Archives-That-Little-Big-Fellow | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950991 | 243 | 2.703125 | 3 |
Welcome to Enemy at the Gates!
This inaugural post and those that follow will use real-world and hypothetical cybercrime, cyber-espionage, and cyber-terrorism examples to comprehensively explore this question:
What is the true real-world identity of the living, breathing human being standing at the intranet or internet gate and is that living, breathing human being an enemy or a friend?
The goals are to offer the reader different ways of thinking about how vulnerabilities are exploited by criminal, nation-state, and terrorist hackers and, more importantly, suggest paths forward to effective solutions.
Through many years of studying the cyber identity problem, I’ve noticed that cybersecurity discussions often focus on identity verification technologies and techniques in a context disconnected from the living, breathing human being standing behind passwords, multi-factor authentication procedures, and even biometric measures.
Most serious cyber breaches start with an anonymous living, breathing bad actor sending a malware-laden email to a target company employee. Just this month, the cybersecurity company Symantec announced that a second group of hackers targeted banks that use the SWIFT global financial transfer system. The report suggests the attackers used phishing emails containing malicious file attachments to deliver malware payloads into their target banks’ computer networks. To illustrate the seriousness of this incident, the first group of SWIFT hackers successfully stole $81 million from the Bangladesh Central Bank.
The criminal hackers involved in the more recent attack may have used simple email phishing where they had only general knowledge of the banks’ operations or spearphishing where they may have used social engineering techniques to gather specific information about bank employees to design a very convincing email. Certainly the focus of investigators is finding an answer to this question: “Which of the world’s 7.5 billion living, breathing human beings really clicked ‘send’?”
Email is the cyber equivalent of a ballistic missile carrying a nuclear warhead and is a devastatingly effective hacker tool. Consider that the human being sending the email can be anyone operating from any location with no authentication mechanism available to the email server receiving the phishing or spearphishing email. The email technology in widespread use does not, as part of the protocol, demand that senders identify themselves in any context much less one in the real-world.
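To make the point concrete, consider that a message's From header is just free text supplied by the sender; nothing in the core protocol ties it to a real identity. A minimal Python sketch (all addresses below are invented for illustration):

```python
# Sketch: conventional email lets the sender put anything in the From
# header; nothing in the core protocol verifies it. The addresses
# below are invented for illustration.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@bank.example"      # claimed identity -- unverified
msg["To"] = "employee@bank.example"
msg["Subject"] = "Urgent: review attached invoice"
msg.set_content("Please open the attachment right away.")

# The message is perfectly well-formed even though the From value was
# never checked against who actually composed or transmitted it.
print(msg["From"])
```

This is exactly the gap phishing exploits: the receiving server sees a syntactically valid message and has no protocol-level way to know which living, breathing human being actually sent it.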
But none of this is new. The vulnerabilities baked into conventional email technology are well known. The amazing thing is that newer, more secure messaging systems haven’t yet killed it off.
Setting aside the question of why email is still around, we can conclude that hackers will always have the advantage as long as 40+ year-old conventional email technology remains in widespread use. The only effective solution is to adopt a top-to-bottom replacement for conventional email messaging. Critically, any such replacement must comprehensively address the anonymity problem.
It will be a very long and difficult process but the way forward is a focused, coordinated effort involving government standards agencies, legislatures, private companies, and cyber insurance providers. Government standards agencies such as the National Institute of Standards and Technology (NIST) should strongly promote security-focused guidelines for email replacement technologies; legislatures can use tax credits to encourage faster adoption of new messaging systems; insurance companies can use cyber policy rates to further boost the economic benefits of change.
Large businesses may hold the key to quicker adoption of new messaging technologies by using their size and economic influence to incentivize supply chains to adopt secure messaging technologies for business-to-business communication. Such action on the part of coalitions of large businesses can accelerate the successful retirement of SMTP email messaging throughout the broader economy since employees will become familiar with messaging alternatives and begin to use them when not at work.
Pushback from those who say this task is too difficult, expensive, or disruptive must be challenged with the unarguable fact that current email technology cannot be made secure and hackers are a very determined species.
Until email replacements are widely adopted, and before focusing exclusively on the relative merits of anti-malware systems and other technologies designed to deal with attacks after the phishing email attachment is opened, security professionals should always ask: “Who are the living, breathing human beings sending emails to my company’s employees? Are they friends or enemies at the gate?”
This article is published as part of the IDG Contributor Network.
The Network is the Database: Integrating Widely Dispersed Big Data with Data Virtualization
Originally published January 14, 2014
Introduction

Almost 30 years ago in 1984, John Gage of Sun Microsystems (acquired by Oracle in 2010) coined the phrase “The Network is the Computer.” He was right then, and he is even more right today. Nowadays, application processing is highly distributed over countless machines connected by a network. The boundaries between computers have completely blurred. We run applications that seamlessly invoke application logic on other machines.
But it's not only application processing that is scattered across many computers; the same can be said for data. More and more digitized data is entered, collected and stored in a distributed fashion. It's stored in cloud applications, in outsourced ERP systems, on remote websites and so on. In addition, external data is available from government, social media, news websites, and the number of valuable open data sources is staggering. The network is not only the computer anymore; the network has become the database as well.
This dispersion of data is a fact. Still, data has to be integrated to become valuable for an organization. For a long time, the traditional solution for data integration has been to copy the data to a centralized site such as the data warehouse. However, data volumes are increasing (and not only because of the popularity of big data systems). The consequence is that, more and more often, data has become too big to move (for performance, latency or financial reasons): data has to stay where it's entered. For integration, instead of moving the data to the query processing (as in data warehouse systems), query processing must be moved to the data sources.
This article explains the problem of centralized consolidation of data and describes how data virtualization helps to turn the network into a database using on-demand integration. It also explains the importance of distributed data virtualization for operating efficiently in today's highly networked environment.
A Short History Lesson

Once upon a time, all the digitized data of an enterprise was stored on a small number of disks managed by a few machines, all standing in the same computer room. Specialists in white coats monitored these machines and were responsible for making backups of the valuable data. It's very likely that all the users were in the same building as well, accessing the data through monochrome monitors. The network that was used to move data between the machines was referred to as the sneakers-network.
Then the time came when users started to roam the planet, and machines residing in different buildings were connected with real networks. Compared to today, these first generations of networks were just plain slow. For example, in the 1970s, Bob Metcalfe (co-inventor of Ethernet) built a high-speed network interface between MIT and ARPANET. This network supported a dazzling network bandwidth of 100 Kbps. Compare that with today's 100 Gigabit Ethernet, which offers a million times more bandwidth. In an optimized network environment, one terabyte of data can now be transferred within 80 seconds. This would have taken 2.5 years in the 1970s.
Because users were working on remote sites, accessing data involved transmitting data back and forth, and that was slow. The vendors of database servers tried to solve this problem by developing distributed database servers in the 1980s. By applying replication and partitioning techniques, data was moved closer to the users to minimize network delay. With replication, data is copied to the nodes on the network where users are requesting data. To keep replicas up to date, distributed database servers support complex and innovative replication mechanisms.
Nowadays, it's no longer the computing room where new data is entered. Data is entered, collected and stored everywhere. Examples include:
Distributed collection: Websites running in the cloud collect millions of weblog records indicating visitor behavior. Factories operating worldwide run high-tech machines generating massive amounts of sensor data. Mobile devices collect data on application usage and track geographical locations.

But it's not only that data is stored in a distributed fashion; data entry is distributed as well. Employees, customers and suppliers all enter data via the Internet, using their own machines at home, on their mobile devices and so on. Data entry has never been more dispersed.
To summarize this short history lesson, in the beginning data and users were centralized. Next, data stayed centralized, and users became distributed. Now data and users are both highly distributed.
The Need to Integrate Distributed Data Remains

As described, there are many good reasons why data entry and data storage are dispersed. Still, data has to be integrated. There are many different reasons why data has to be integrated:
Is Centralization the Answer to Data Integration?

For the last twenty years, the most popular solution to integrate data has been the data warehouse. In most data warehouse systems, data from multiple sources is physically moved to and consolidated in one big database (one site). Here, the data is integrated, standardized and cleansed, and made available for reporting and analytics.
This centralization and consolidation of data makes a lot of sense from the perspective of the need to integrate data. And if there is not too much data, it's technically feasible. But can we keep doing this? Can we keep moving and copying data, especially in this era of big data? It looks as if the answer is going to be no, and for some organizations it's already a no. Here we list four problems of this approach:
Data Virtualization to the Rescue – Moving Processing to the Data

But how can all the distributed data be integrated without copying it first to a centralized data store, such as a data warehouse? Data virtualization technology offers a solution. In a nutshell, data virtualization makes a heterogeneous set of data sources look like one logical database to the users and applications. These data sources don't have to be stored locally; they can be anywhere.
Data virtualization technology is designed and optimized to integrate data live. There is no need to physically store all the integrated data centrally. It's only when data from several different sources is requested by users that it's integrated, but not before that. In other words, data virtualization supports integration on demand.
Because data virtualization servers retrieve data from other systems, they must understand networks. They must know how to efficiently transmit data over the network to the server where the integration on demand takes place. For example, to minimize network traffic, mature data virtualization servers deploy so-called push-down techniques. If a user asks for a small portion of a table, only that portion of the data is extracted by the data virtualization server from the data source, and not the entire table. The query is “pushed down” to the data source instead of requesting the entire table.
Push down allows a data virtualization server to move the processing to the data instead of moving the data to the processing. In the latter case, all the data is transmitted to the data virtualization server that subsequently executes the request. Especially if big data sets are used, this approach would be slow because of the amount of network traffic involved. A preferred approach is to ship the query to the data source, and transmit only relevant data back to the data virtualization server.
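The traffic savings are easy to see in a toy model. The sketch below simulates the two plans over an invented 1,000-row source table; the table contents, the predicate and the row counts are all illustrative, not drawn from any real data virtualization product.

```python
# Toy model of query push-down: compare rows moved over the "network"
# when a filter runs at the data source versus at the integration
# server. The table and predicate are invented for illustration.

SOURCE_TABLE = [{"id": i, "region": "EU" if i % 4 == 0 else "US"}
                for i in range(1000)]

def fetch_all_then_filter(predicate):
    """Naive plan: move the entire table, then filter centrally."""
    shipped = list(SOURCE_TABLE)          # all 1000 rows cross the network
    return len(shipped), [r for r in shipped if predicate(r)]

def push_down_filter(predicate):
    """Push-down plan: filter at the source, ship only the matches."""
    shipped = [r for r in SOURCE_TABLE if predicate(r)]
    return len(shipped), shipped

def wants_eu(row):
    return row["region"] == "EU"

naive_cost, naive_rows = fetch_all_then_filter(wants_eu)
pushed_cost, pushed_rows = push_down_filter(wants_eu)

print(f"naive plan shipped {naive_cost} rows, push-down shipped {pushed_cost}")
assert naive_rows == pushed_rows          # same answer, far less traffic
```

Both plans return the identical result set; the only difference is how many rows travel over the network, which is exactly what the push-down optimization is about.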
The Need for Distributed Data Virtualization – Moving Processing Closer to the Data

Moving processing to the data is a powerful feature to optimize network traffic, but it's not sufficient for the distributed data world of tomorrow. Imagine that a data virtualization server runs on one server and all the requests for data are first moved to that central server; queries are sent to all the data sources, answers are transmitted back, and all the data is integrated and returned to all the users. This centralized processing of requests can be highly inefficient. It would be like a worldwide operating parcel service where all the parcels are first shipped to Denver, and from there to the destination address. If a specific parcel has to be shipped from New York to San Francisco, then this is not a bad solution. However, a parcel from New York to Boston is going to take an unnecessarily long time because of this detour via Denver. Or what about a parcel that must be shipped from Berlin, Germany, to London, UK? That parcel is going to make a long journey via Denver before it arrives in London.
Besides this inefficiency aspect, it's not recommended to have one data virtualization server because it lowers availability. If that server crashes, no one can get to the data anymore. It would be like the parcel service in a situation where the airport in Denver is closed because of bad weather conditions.
To address the new data integration workload, itís important that data virtualization servers support a highly distributed architecture. Each node in the network where queries originate and data sources reside should run a version of the data virtualization server for processing these requests. Each node of the data virtualization server that receives user requests should know where the requested data resides, and must push the request to the relevant data virtualization server. Multiple data virtualization servers work together to execute the request. The effect is that when no remote data is requested, no shipping of data and requests will take place.
This is only possible if a data virtualization server is knowledgeable about network aspects, such as what is the fastest network route, the cheapest network route, how to transmit data efficiently, the optimal package size, and so on. Like they must know how to optimize database access, they must also know how to optimize network traffic. It requires a close marriage of the network and data virtualization.
Note that this requirement to distribute data virtualization processing over countless nodes is not very different from the data processing architectures of NoSQL systems.
The Network is the Database

Data and data entry are more and more distributed over the network, and over time it will only escalate. The time that all the data is stored together is forever gone. Sun Microsystems' tagline once was “The Network is the Computer.” In this era, in which data is entered and stored everywhere, in which users who access the data can be everywhere, and in which big data systems are being developed, an analogous statement can be made:
The network is the database.

If the network is the database, copying all the data to one centralized node for integration purposes is expensive, almost technically undoable, and it may clash with regulations. Due to its integration-on-demand solution, data virtualization technology offers a more suitable approach to integrating all this widely dispersed data. Data virtualization will be the key instrument for integrating widely dispersed big data and turning “the network into a database.” A requirement will be that data virtualization servers have a highly decentralized architecture and are extremely network-aware.
Recent articles by Rick van der Lans
Copyright 2004 — 2017. Powell Media, LLC. All rights reserved.
BeyeNETWORK™ is a trademark of Powell Media, LLC | <urn:uuid:b2c56158-f156-4206-81e5-76308d372acb> | CC-MAIN-2017-04 | http://www.b-eye-network.com/print/17223 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00116-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928098 | 2,255 | 2.90625 | 3 |
Manufacturing Breakthrough Blog
Tuesday February 24, 2015
As I finished my last posting, When Variation Is the Enemy, I told you we would discuss what the concept of being “in control” means and why it is so important. Let’s review briefly the basics of Six Sigma before we tackle what being in control really means.
The basics of Six Sigma
Six Sigma has become well known by the use of the acronym DMAIC. As can be seen in the above graphic, DMAIC stands for Define – Measure – Analyze – Improve – Control. What this acronym is telling us is that once we have defined a problem we must measure and analyze in order to develop a solution. And once we’ve implemented the solution, we must develop a way to control the process to prevent it from producing the same problem we originally identified. So let’s focus on Step 5, Control.
A real life example
In When Variation Is the Enemy, we discussed two different types of variation: common cause and special cause. We said that common cause variation is the natural variation that exists within all processes while special cause variation is not natural. So what’s the difference?
Common cause variation is typically characterized as being predictable variation within a process whereas special cause is highly unpredictable and comes into our processes as a result of a change that we probably don’t know about. For example, suppose we are cutting extruded plastic or rubber to a specific width. The extrusion passes through a guide as it is extruded and trimmed. As we take samples and measure the width of each sample, they are very close to each other in width, but are not exactly the same. The pattern of data forms a distinct pattern, typically in the form of a bell-shaped curve. But let’s say that one of the guides becomes loose—what happens to the data then? The width data changes dramatically with wide amounts of dispersion. In other words there is a clear shift in the variation and would be traceable back to the point when the guide became loose. This is the essence of special cause variation in that prior to the guide becoming loose, the data was very predictable, but after it became loose, the data is highly variable and unpredictable. Thus the central concept of being “in control” is that processes have no active special cause variation present and are therefore predictable. Obviously this is the preferred state that we want our processes to exhibit.
So how do we know whether our process is in control or not? Is there a tool to tell us when our process moves from being in control and predictable to a state of unpredictability? The answer is yes, and the tool is referred to as a control chart. A control chart in its most basic form is a graph used to study how a process changes over time, with data being plotted in the order in which they were collected. A control chart always has a central line depicting the average value, with upper and lower control limits depicting the common cause variation limits. These limits are determined from historical data after all special cause variation has been eliminated from the process. So by comparing current data to these limits, you can draw conclusions about whether the process variation is consistent and predictable (i.e., in control) or is unpredictable (out of control, affected by special causes of variation).
Using the Control Chart
As such, the control chart has two parts, one for the process average (the top chart) and one for variation (the bottom chart usually depicted by the range of the data) as seen in the following graphic. Control charts for variable data are typically used in pairs or in subgroups. The top chart monitors the average, or the centering of the distribution of data from the process while the bottom chart monitors the variation. Some compare a control chart to a target in target practice. In doing so, the average is where the shots are clustering together while the range is how tightly they are clustered.
Both of these charts demonstrate a state of statistical control with all of the data points falling inside the calculated control limits. Going back to our extruded width example, this data is before the loose guide problem surfaced. Somewhere around data points 19 and 20, where the data points went above the upper control limit, the guide became loose and we see a shift in our X-bar chart in the following figure.
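For readers who want to see the mechanics, here is a small sketch of how X-bar/R limits are computed and how a subgroup around point 19 would be flagged. The constants A2, D3 and D4 are the standard published control-chart constants for subgroups of five; the width measurements themselves are invented for illustration.

```python
# Sketch of X-bar/R control-limit computation for subgroups of size 5.
# A2, D3 and D4 are the standard control-chart constants for n = 5;
# the width measurements below are invented for illustration.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Return ((LCL, CL, UCL) for the X-bar chart, same for the R chart)."""
    means = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbar = sum(means) / len(means)    # grand average: X-bar center line
    rbar = sum(ranges) / len(ranges)  # average range: R center line
    return ((xbar - A2 * rbar, xbar, xbar + A2 * rbar),
            (D3 * rbar, rbar, D4 * rbar))

def out_of_control(subgroups, x_limits):
    """Indices of subgroups whose mean falls outside the X-bar limits."""
    lcl, _, ucl = x_limits
    return [i for i, g in enumerate(subgroups)
            if not lcl <= sum(g) / len(g) <= ucl]

# 19 stable subgroups, then a shift when the guide comes loose.
stable = [[10.0, 10.1, 9.9, 10.0, 10.0] for _ in range(19)]
shifted = [[10.6, 10.7, 10.5, 10.8, 10.6]]
x_lims, r_lims = xbar_r_limits(stable)           # limits from stable data
print(out_of_control(stable + shifted, x_lims))  # -> [19]
```

Because the limits are derived only from the stable, in-control history, the loose-guide subgroup immediately falls outside them, which is exactly the signal the operator in the story should have acted on.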
The control chart allowed us to see the change almost immediately, reset the guide and bring the data back into control. In this example, the process should have been stopped immediately after the first data point went “out-of-control,” but for whatever reason it continued to run, producing product that was too wide. Assuming that the control limits were well inside the specification limits, there may be a re-work opportunity.
Let’s go back to our R Chart for Mean Width:
In my next posting we’ll continue our discussion on various tools and techniques that can be used to produce better product as well as how the Theory of Constraints, Lean and Six Sigma can be successfully merged into a very powerful improvement strategy.
Thanks for reading. See you next time. | <urn:uuid:46d112b8-5ce5-4874-ad73-50d014cab11a> | CC-MAIN-2017-04 | http://manufacturing.ecisolutions.com/blog/posts/2015/february/processes-are-they-in-control-or-not.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00420-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949782 | 1,059 | 2.90625 | 3 |
Did you know that when you remove metadata from a photo, such as the camera's serial number, digital forensic experts can still figure out what digital camera was used to capture the shot? It is all about finding tiny imperfections in the image that the naked eye cannot see. Did you know that if a person on a GSM network were to change SIM cards or tweak the unique and identifying IMEI number, that cellphone can still be tracked? That too is possible due to tiny, subtle differences in radio pattern signals that each phone emits to cell towers.
By knowing the International Mobile Station Equipment Identity number, IMEI, law enforcement can target that phone for “lawful interception” wiretapping. Criminals know that and often swap out SIM cards and/or spoof the IMEI so law enforcement cannot track or wiretap their phones. However, that method of hiding from Johnny Law may not work in the future. German computer scientist Jakob Hasse and his colleagues at the Technical University of Dresden have developed a forensic technique to identify phones in GSM networks even if crooks take steps to thwart being tracked.
Although phones are mass-produced, and each model contains the exact same hardware, there are still differences in the radio signal patterns they emit. It is those tiny unchangeable differences, or “inaccuracies” sent to cell towers that are unique enough to be used as identifying digital fingerprints, thereby allowing police to track the phone. Digital Evidence added, “The novel approach also permits re-identification of mobile phones across interchangeable SIM cards, and it is not vulnerable to manipulated identification (IMEI) numbers. The core of the method exploits signal characteristics and transmission profiles of mobile phones.”
This summer in France at the 1st ACM Workshop on Information Hiding and Multimedia Security, Hasse presented “Forensic Identification of GSM Mobile Phones” [pdf]. In “real world conditions,” the researchers were able to distinguish and correctly identify 13 mobile phones at an overall success rate of 97.62 percent. “This included four identical and nine almost identical phones, which proves the selected features to be unique for an individual device.” The researchers concluded, “By targeting the air interface of GSM on physical layer, it is possible to identify mobile phones without the interaction with or recognition by the sender.”
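The researchers' actual feature extraction and classification are far more sophisticated, but the core idea, matching an observed transmission's error profile against per-device reference profiles, can be sketched with a toy nearest-centroid matcher. Every feature name and number below is invented for illustration.

```python
# Toy nearest-centroid sketch of fingerprinting a transmitter by its
# signal imperfections. Real systems (such as the TU Dresden work) use
# far richer features; every name and number below is invented.
import math

# Reference "error profiles" learned per device: (frequency offset,
# I/Q imbalance, ramp-up time) in hypothetical units.
PROFILES = {
    "phone_A": (0.12, 0.030, 1.10),
    "phone_B": (0.47, 0.011, 0.95),
    "phone_C": (0.33, 0.052, 1.30),
}

def identify(observation):
    """Return the device whose reference profile is nearest the observation."""
    def dist(profile):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(observation, profile)))
    return min(PROFILES, key=lambda name: dist(PROFILES[name]))

# A noisy capture from phone_B still matches phone_B, even though any
# SIM or IMEI it reported says nothing about the hardware itself.
print(identify((0.45, 0.013, 0.97)))   # -> phone_B
```

The point of the sketch is that the identifying signal lives in the hardware's unchangeable imperfections, so swapping the SIM card or rewriting the IMEI does not move the device away from its own profile.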
"Our method does not send anything to the mobile phones. It works completely passively and just listens to the ongoing transmissions of a mobile phone – it cannot be detected," Hasse told New Scientist.
“Identifying a phone from its radio frequency fingerprint is certainly not far-fetched,” according to computer forensics security expert Nick Furneaux of CSITech. “It is similar to identifying a digital camera where the image metadata does not provide a serial number. From underlying imperfections in the lens, which are detectable in the image, the source camera can be identified.”
According to researchers of the EXIST startup team Digital Evidence at TU Dresden, “Forensic mobile phone identification has applications in speaker verification, tracking of (stolen) devices, and law enforcement in general. While existing active identification techniques require support by the service provider or operating a base station to set up so-called IMSI catchers, the new passive method relies solely on the observation of transmitted signals. Future applications beyond mobile phone identification may also include the detection of fake base stations.” | <urn:uuid:25ddac22-1273-4690-923e-620e1ea4b003> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474374/smartphones/forensic-researchers-develop-undetectable-method-for-tracking-cellphones.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00236-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932863 | 713 | 3.234375 | 3 |
Ren G., Li J., and Ren Y. (National Climate Center); Chu Z. (Beijing Municipality Meteorological Bureau); and 7 more authors
Journal of Applied Meteorology and Climatology | Year: 2015
Trends in surface air temperature (SAT) are a critical indicator for climate change at varied spatial scales. Because of urbanization effects, however, the current SAT records of many urban stations can hardly meet the demands of the studies. Evaluation and adjustment of the urbanization effects on the SAT trends are needed, which requires an objective selection of reference (rural) stations. Based on the station history information from all meteorological stations with long-term records in mainland China, an integrated procedure for determining the reference SAT stations has been developed and is applied in forming a network of reference SAT stations. Historical data from the network are used to assess the urbanization effects on the long-term SAT trends of the stations of the national Reference Climate Network and Basic Meteorological Network (RCN+BMN or national stations), which had been used most frequently in studies of regional climate change throughout the country. This paper describes in detail the integrated procedure and the assessment results of urbanization effects on the SAT trends of the national stations applying the data from the reference station network determined using the procedure. The results showed a highly significant urbanization effect of 0.074°C (10 yr)-1 and urbanization contribution of 24.9% for the national stations of mainland China during the time period 1961-2004, which compared well to results that were reported in previous studies by the authors using the predecessor of the present reference network and the reference stations selected but when applying other methods. The authors are thus confident that the SAT data from the updated China reference station network as reported in this paper best represented the baseline SAT trends nationwide and could be used for evaluating and adjusting the urban biases in the historical data series of the SAT from different observational networks. © 2015 American Meteorological Society.
JupiterResearch carried out a study of thousands of website users and found that 58% have deleted cookies from their machine, and 39% delete all cookies from their PCs on a monthly basis.
Cookies are small files used by websites to track the behaviour of visitors. They enable organisations to offer the most appropriate products or services the next time a visitor logs onto their site. They are also used to recognise registered website users.
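As a rough illustration of that mechanism, here is how a server-side Set-Cookie header can be built and later read back with Python's standard http.cookies module; the cookie name and value are invented for the example:

```python
from http.cookies import SimpleCookie

# Server side: issue a cookie identifying the visitor (name/value are illustrative).
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["max-age"] = 30 * 24 * 3600  # persist for ~30 days
cookie["visitor_id"]["path"] = "/"
set_cookie_header = cookie["visitor_id"].OutputString()
print("Set-Cookie:", set_cookie_header)

# Next visit: the browser echoes the cookie back in its request header,
# letting the site recognise the returning visitor.
returned = SimpleCookie("visitor_id=abc123")
print("Recognised visitor:", returned["visitor_id"].value)
```

Deleting the cookie file on the client simply means the `returned` side never arrives, which is exactly the tracking gap the article describes.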
But privacy and security concerns are leading consumers to disable cookies on their browsers or delete them after they have been downloaded, said JupiterResearch.
"Given the number of sites and applications that depend heavily on cookies for accuracy and functionality, the lack of this data represents significant risk for many companies," said Eric Peterson, analyst at JupiterResearch.
"Because personalisation, tracking and targeting products require cookies to identify web visitors over multiple sessions, the accuracy of these solutions has become highly suspect, especially over longer periods."
What do Google driverless cars and Stanford University autonomous helicopters have in common?
Both rely on machine learning technology to make sense of complex environments, while ensuring good decisions are made sooner. Machine learning’s ability to make good decisions faster in complex environments also can be applied to solve challenges in IT operations.
In today’s dynamic IT environments driven by virtualization, mobility, and cloud, application and infrastructure issues are popping up constantly. When an issue affecting service unfolds, there can be multiple underlying root causes that are simultaneously cascading across technology domains – apps, servers, storage, networks and, increasingly, private to public cloud hybrids.
Samsung this week announced a breakthrough in the evolution of smaller, more efficient DRAM memory. The company has produced its first 20nm, 4Gbit DDR3 DRAM.
As with all semiconductor technology, the smaller the transistors, the more capacity can go into a form factor. The smaller the circuitry, the cheaper it is to manufacture chips with the same or even greater capacity.
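The capacity claim is simple geometry: if features shrink linearly, the area each cell occupies falls with the square of the process node. A back-of-the-envelope check, using an idealized model that ignores real layout overheads:

```python
# Idealized area scaling: cell area scales with the square of the node,
# so density gain is (old_node / new_node) ** 2.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(density_gain(25, 20))  # 25nm -> 20nm: ~1.56x more cells per unit area
print(density_gain(30, 20))  # 30nm -> 20nm: 2.25x
```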
The DDR3 is used in desktops, notebooks, ultrabooks and tablets where Samsung said "customers will see a significant power savings," as well as cost savings.
Samsung's new 20nm DRAM. (Photo: Samsung)
"While we cannot speak for [device manufacturers], we believe that customers will benefit from a significantly lower total cost of ownership," a Samsung spokesman said in an email reply to Computerworld.
Samsung is already providing the new DDR3 chips to some manufacturers, and it expects that they will be available in computing devices later this year.
Over the past five years, Samsung has shrunk its DRAM transistors from 50nm (in 2009) to 30nm (in 2010) to 25nm (in 2011) to the 20nm process technology today.
While NAND flash has the lead in the race toward single-digit nanometer processes (it has been at 19nm for some time), DRAM circuitry is decisively more difficult to shrink.
NAND flash dies of different sizes. (Image: Micron)
With DRAM memory, each cell consists of a capacitor (where the electrical charge is held) and a transistor that are linked to one another, whereas with NAND flash memory each cell only has a transistor. So DRAM technology requires both the transistor and capacitor to shrink.
Samsung said it was able to refine its DRAM design and manufacturing technologies and came up with a "modified double patterning and atomic layer deposition."
In micro-circuitry, such as DRAM or NAND flash, dense repeating nanostructures are required. Atomic layer deposition (ALD) is a technique for depositing those nanostructures using thin films with precise uniformity. Double patterning, simply put, is a method for doubling the number of features.
Samsung said its modified double patterning technology is a milestone. By enabling 20nm DDR3 production using current photolithography equipment, it has established a new core technology for the next generation of 10nm-class DRAM production.
Samsung also successfully created ultrathin dielectric layers of cell capacitors with an unprecedented uniformity, which has resulted in higher cell performance.
Applying new process innovations, Samsung's new 4Gbit 20nm DDR3 has improved manufacturing productivity by more than 30% over that of the preceding 25 nanometer DDR3, and more than twice that of 30nm-class DDR3, the company said.
"Also our new 20nm 4Gb DDR3-based modules can save up to 25% of the energy consumed by equivalent modules fabricated using the previous 25 nanometer process technology," Samsung said.
This article, Samsung achieves DDR3 size, calls it efficiency breakthrough, was originally published at Computerworld.com.
Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.
Print working directory name (POSIX)
The pwd utility writes the pathname of the current working directory to the standard output.
The pwd command is available both as a shell alias (equivalent to print "$PWD"), and as a standalone utility. For information about the builtin pwd command, see ksh. To make sure you use the executable, specify the full path.
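As a quick illustration, a Python session can compare the standalone utility's output with the process's own working directory; this assumes a POSIX system with pwd on the PATH:

```python
import os
import subprocess

# The standalone pwd utility and the process's working directory should agree;
# invoking it as a command (not a shell alias) exercises the executable.
standalone = subprocess.run(["pwd"], capture_output=True, text=True).stdout.strip()
print(standalone)
print(os.getcwd())
```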
Exit status:
- 0: Successful completion.
- >0: An error occurred.
Quanta Computer has been selected to design and build the $100 computers planned as part of M.I.T.'s One Laptop per Child initiative.
The Massachusetts Institute of Technology's Media Lab has announced that Taiwanese device maker Quanta Computer Inc. has been selected to design and build the $100 computers planned as part of the school's One Laptop per Child initiative.
After reviewing design submissions from several different companies, the OLPC's Board of Directors said it selected Quanta in part because the computer maker, which builds some PCs sold by Hewlett-Packard Co., promised to devote "significant engineering resources" to the project from its Quanta Research Institute. Under the company's current plans, it expects to deliver the laptops to market by the fourth quarter of 2006.
OLPC's goal is to sell the laptops to governments worldwide who will in turn distribute the machines to schoolchildren in impoverished regions to use in their classes and take home. The computers are expected to come in a brightly colored, rugged chassis in order to protect them from damage and discourage theft, and will run Linux with a 500MHz processor and 1GB of onboard memory, based on a design proposed by OLPC earlier this year.
The devices will also feature a hand-cranked power system, which will augment conventional batteries and electric current adaptors, in order to allow for use of the laptops in remote areas. The computers will also offer wireless mesh networking to allow Internet access to multiple machines from one connection. By using Linux software tuned to work in individual nations, the machines will also eliminate the need for two-thirds of the software on traditional laptops, MIT Media Lab professor Nicholas Negroponte has said.
OLPC reported that pricing for the devices will begin at roughly $100 with plans to lower that figure over time, even as the computers add more features. The group said it initially hopes to introduce between 5 million and 15 million laptops in large-scale pilot projects in seven countries (China, India, Brazil, Argentina, Egypt, Nigeria, and Thailand) with one million computers slated for each of these nations.
The announcement of Quanta as the manufacturer for the $100 laptop program was not completely unexpected. In April, the hardware maker signed a five-year, $20 million joint research pact with M.I.T. aimed at creating designs for next-generation computing and communications devices.
The initiative also plans to create an additional "modest" number of machines to send into developer communities in a number of other countries, and Quanta may build a version of the laptop for commercial markets.
Despite winning praise from many people, the $100 laptop effort has attracted some criticism from some industry experts who say the current device design is too simple. Earlier this year, former Intel Chief Executive Craig Barrett predicted that there will not be a significant market for the devices because they do not offer enough features.
In announcing the Quanta deal, Negroponte said that OLPC has overcome some of those criticisms. "Any previous doubt that a very-low-cost laptop could be made for education in the developing world has just gone away," Negroponte said in a statement.
"Quanta would like to contribute its laptop technologies to the future success of the project, in hope of affording children worldwide with opportunities not only to close the digital divide, but also to bridge the knowledge divide," Quanta Chairman Barry Lam said in a statement. "This project signifies a new stage and scale for the laptop industry by including those children never before considered to be laptop users."
Negroponte had said previously that the program could change whole communities, making the benefits of information technology apparent to far many more people than the schoolchildren who receive the laptops. In doing so, the devices will help cultures embrace new forms of learning that go beyond the institutional educational systems, he said.
"It's not just about the laptops, it's more about the influence of the entire program," Negroponte said at an M.I.T. conference in September. "This is not teaching as we know it; only part of our learning comes from teaching. Much of it comes from curiosity. These are tools that can help cultivate that learning process."
A significant part of that larger effect will come from the fact that the laptops will go home with children at night, allowing their families and friends to see how the devices work, and what they have to offer, he said. In some towns where the group has researched distributing the machines it found whole families gathered around the laptops at night because the device represented the brightest source of light in their homes.
Despite the fact that the computers will be sent into some of the poorest communities on the planet, the group only expects to lose 1 percent to theft. Negroponte contends that social forces, such as adults being seen misusing one of the brightly colored machines, will dictate that people are careful with the devices and discourage criminals from taking them.
Advanced Micro Devices Inc., Brightstar Corp., Google Inc., News Corp. and Red Hat Inc. are other partners involved in the OLPC program.
Jolt Sensor provides feedback on head injuries.
Researchers at MIT have created a new wearable sensor that helps athletes to identify and evaluate head injuries in real-time.
The ‘Jolt Sensor’ is a small clip that attaches to any bit of head-worn equipment, such as a helmet or headband.
When a player experiences a dangerous impact, the sensor vibrates to alert them and then sends a notification to an accompanying iOS or Android smartphone app via Bluetooth.
It also connects to the parents’ and coaches’ smartphones, and athletes can evaluate their symptoms using the Jolt app’s cognitive test and concussion symptom checklist.
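The alerting idea can be sketched as a simple threshold test over accelerometer readings. The 9g threshold and the sample trace below are invented for illustration and are not Jolt's actual algorithm:

```python
# Toy threshold-based impact detector: flag any accelerometer sample whose
# g-force magnitude crosses a danger threshold (values are illustrative).
IMPACT_THRESHOLD_G = 9.0

def dangerous_impacts(accel_samples_g):
    """Return the samples whose acceleration exceeds the alert threshold."""
    return [g for g in accel_samples_g if g >= IMPACT_THRESHOLD_G]

readings = [0.9, 1.1, 2.3, 14.7, 1.0, 9.5]  # g-forces from an accelerometer
alerts = dangerous_impacts(readings)
print(alerts)
```

In a real device, each flagged sample would trigger the vibration alert and the Bluetooth notification described above.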
Ben Harvatine and Seth Berg, the MIT creators behind the sensor, came up with the idea after Harvatine suffered a concussion during wrestling practice.
"Through the ensuing hospital visits and months of recovery, the same thought kept crossing my mind – how could this have been prevented?" Harvatine said in his Kickstarter video.
"Many athletes like myself continue to play without realising they’ve been concussed, so there needs to be a way to alert parents, coaches and athletes of dangerous impacts as soon as they happen."
The sensor, which is waterproof and protected by soft silicone rubber, also comes equipped with a micro-USB port and a battery that lasts for several weeks on a single charge.
The sensor is currently seeking to raise $60,000 on Kickstarter following successful trials.
Bakker J. (Catholic University of Leuven; Center for Archaeological Science), Kaniewski D. (Toulouse 1 University Capitole; Center for Archaeological Science), and 6 more authors. Holocene, 2012.
A well-dated pollen diagram from Gravgaz marsh, near the archaeological site of Sagalassos (western Taurus Mountains, Turkey), provides the first detailed record of vegetation change in southwest Turkey during the last two millennia. A newly developed numerical analysis disentangles the climatic and anthropogenic influences on vegetation and reveals for the first time for southwest Turkey the timing and influence of late-Holocene climate change. Results show that sudden vegetation changes, driven by changes in moisture availability, co-occurred with well-defined European climate shifts. A trend towards dry conditions, from c. ad 640 to 940, coincides with the cold early Middle Ages in Europe. During this period, human presence in the region diminished and agricultural activity switched focus from crop cultivation to pastoralism while signs of cereal cultivation temporarily ceased. This period was followed by a return to moister conditions from ad 940 to 1280, coinciding with the 'Medieval Climate Anomaly'. During this period there was a resurgence of human activity in the basin. Another trend towards dry conditions occurred at c. ad 1280, corresponding with the start of the 'Little Ice Age' in Europe and another disappearance of cereal pollen until the present day. The numerical analyses suggest that human impact around Gravgaz during the last two millennia is primarily driven by climatic changes. © The Author(s) 2011.
Ganio M., Boyen S., Brems D., Scott R. (Center for Archaeological Science), and 6 more authors. Glass Technology: European Journal of Glass Science and Technology Part A, 2012.
In this study analysis of major elements and Sr-Nd isotopes is performed on 33 colourless glass fragments from two Roman shipwrecks discovered in the Northern Mediterranean Sea, the Iulia Felix (first half of the third century AD) and the Ouest-Embiez (end of the second-beginning of the third century AD). Two compositional groups are defined based upon the major elements analysis, suggesting the use of different raw materials, and possibly the production of the glass samples in two separate factories. Sr-Nd isotopes, promising indicators for provenancing geological resources used as raw materials in glass manufacturing, confirm the compositional groups. The 87Sr/86Sr signature is very close to the modern sea water signature (0.7092) for all samples, likely due to the use of shell as glass raw material. The Nd signature further subdivides the compositional groups, suggesting the use of three different sand raw materials for the production of glass.
5.1.2 What is SSL?
The SSL (Secure Sockets Layer) Handshake Protocol [Hic95] was developed by Netscape Communications Corporation to provide security and privacy over the Internet. The protocol supports server and client authentication. The SSL protocol is application independent, allowing protocols like HTTP (HyperText Transfer Protocol), FTP (File Transfer Protocol), and Telnet to be layered on top of it transparently. Still, SSL is optimized for HTTP; for FTP, IPSec (see Question 5.1.4) might be preferable. The SSL protocol is able to negotiate encryption keys as well as authenticate the server before data is exchanged by the higher-level application. The SSL protocol maintains the security and integrity of the transmission channel by using encryption, authentication and message authentication codes.
The SSL Handshake Protocol consists of two phases: server authentication and an optional client authentication. In the first phase, the server, in response to a client's request, sends its certificate and its cipher preferences. The client then generates a master key, which it encrypts with the server's public key, and transmits the encrypted master key to the server. The server recovers the master key and authenticates itself to the client by returning a message authenticated with the master key. Subsequent data is encrypted and authenticated with keys derived from this master key. In the optional second phase, the server sends a challenge to the client. The client authenticates itself to the server by returning the client's digital signature on the challenge, as well as its public-key certificate.
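The key-establishment idea in that first phase, both ends deriving matching session keys from a shared master secret, can be sketched in a few lines. This is a toy model using HMAC-SHA256 purely for illustration; it is not SSL's actual RSA key transport or key-derivation function:

```python
import hashlib
import hmac
import os

# Toy model of SSL key establishment. In real SSL the client would encrypt
# master_key under the server's public RSA key before sending it.
master_key = os.urandom(48)  # client generates a master secret

def derive(master, label):
    """Both sides derive matching session keys from the shared master secret."""
    return hmac.new(master, label, hashlib.sha256).digest()

client_write_key = derive(master_key, b"client write")
server_view_key = derive(master_key, b"client write")  # server derives the same key
assert client_write_key == server_view_key

# Subsequent records are encrypted and authenticated with keys derived this way.
mac = hmac.new(client_write_key, b"application data", hashlib.sha256).hexdigest()
print(mac[:16])
```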
A variety of cryptographic algorithms are supported by SSL. During the "handshaking" process, the RSA public-key cryptosystem (see Section 3.1) is used. After the exchange of keys, a number of ciphers are used. These include RC2 (see Question 3.6.2), RC4 (see Question 3.6.3), IDEA (see Question 3.6.7), DES (see Section 3.2), and triple-DES (see Question 3.2.6). The MD5 message-digest algorithm (see Question 3.6.6) is also used. The public-key certificates follow the X.509 syntax (see Question 5.3.3).
For more information on SSL 3.0, see http://home.netscape.com/eng/ssl3/index.html.
TLS (Transport Layer Security) is a protocol that is based on and very similar to SSL 3.0; for more information about TLS 1.0, see ftp://ftp.isi.edu/in-notes/rfc2246.txt.
We should also mention WTLS (Wireless TLS), which specifies the security layer protocol in WAP (Wireless Application Protocol); WAP is the de facto standard for the delivery and presentation of information to wireless devices such as mobile phones and pagers. WTLS is very similar to TLS but optimized for low-bandwidth bearer networks. For more information on WAP and WTLS, see http://www.wapforum.org/what/technical.htm.
LONDON - 28 July 2008 - In the race to find alternative energy sources, geothermal energy is gaining favour. Geothermal energy is a continuous source of energy since the heat is trapped inside the earth, without depleting. This places geothermal energy above sporadic wind and solar energy, which tends to have a capacity factor of only 20-35%; geothermal capacity is more than 70%.
Although global energy use from geothermal sources today only amounts to less than 1%, geothermal projects now exist in around 20 countries around the world.
Frost & Sullivan attributes its previously limited use to the high start-up costs. However, with the steep price increases of oil and gas and rising emission concerns, geothermal energy is generating greater interest everywhere. This, coupled with the fact that geothermal costs are decreasing as traditional energy prices are increasing, leads researchers to believe geothermal energy will play a greater role in the global quest for alternative energy.
Geothermal energy is produced from using the earth's burning center to generate heat and electricity. “Geothermal energy has several advantages when compared to other renewable energy sources as well as carbon-emitting fuels. The sector is something to watch out for in the next few years,” notes Frost & Sullivan analyst, Gouri Nambudripad. A clear advantage of geothermal energy is that the heat is trapped inside the earth, without depleting, and is therefore a continuous source of energy. This places geothermal energy above sporadic wind and solar energy, which tends to have a capacity factor of only 20-35%. Geothermal capacity is more than 70%. In some areas, it is almost competitive with conventional fuels. Another advantage is that geothermal energy does not produce any toxic waste.
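The practical effect of capacity factor is easy to quantify. Using the figures cited above, and a hypothetical 100MW plant for illustration:

```python
# Annual energy yield = nameplate capacity x capacity factor x hours per year.
# The 100 MW plant size is invented; the capacity factors come from the text.
HOURS_PER_YEAR = 8760

def annual_mwh(capacity_mw, capacity_factor):
    return capacity_mw * capacity_factor * HOURS_PER_YEAR

geothermal = annual_mwh(100, 0.70)   # geothermal: >70% capacity factor
wind_solar = annual_mwh(100, 0.275)  # midpoint of the 20-35% range
print(geothermal, wind_solar, geothermal / wind_solar)
```

On these assumptions the same nameplate capacity delivers roughly two and a half times as much energy per year from geothermal as from intermittent sources.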
The only major impediment to geothermal energy success is the high cost of setting up and drilling the hot water from under the surface of the earth. The prices are comparable to drilling in the oil and gas industry. However, research shows costs are dropping. The generation costs of geothermal electricity used to be 50 - 150 euros/MWh in 2005. This is expected to fall to 40 - 100 euros/MWh in 2010 and 40 - 80 euros/MWh in 2020. As geothermal energy becomes more affordable, interests continue to rise.
Already in the EU, geothermal plants are found in Iceland, Greece, Italy, Turkey, Germany and Austria. The potential areas for geothermal generation capacity are in the northwestern and central western coast of Italy, western part of Turkey, and parts of Portugal, Spain, France and Germany. In Iceland, 85% of all houses are heated using geothermal energy and 30% of all their electricity is geothermal energy. Italy's geothermal market is maturing with installed capacity expected to increase to 1200MWe - 1500MWe by 2020. Most recently, Germany has close to 150 plants with €4bn in the pipeline. Germany is stimulating the industry by passing laws in favour of making projects financially viable. Geothermal energy grows more promising as its advantages begin to outweigh its high implementation costs. This will be an interesting market to follow in the next few years.
Geothermal heat was recognized first by the hot springs ancient cultures enjoyed at various hot spots around the world. Its capability to produce electricity came to light by Italian Prince Piero Ginori Conti, almost a century ago. Since then, as technology and understanding increased, two specific methods of creating energy have enabled people to generate both heat and electricity.
One method, Engineered Geothermal Systems (EGS), produces energy by drilling two parallel lines into the center of the earth. One line pumps water into the earth to heat it up to about 200°C, while the other line is used to pump out the hot water and steam. The steam is used to run a turbine, while the hot water heats houses or industrial units.
The second system, the Organic Rankine Cycle, builds wells, deep into the hot reserves, separating the steam from the high pressure, hot water. The steam and hot water are separated, split and used to drive turbines in power plants. Once this geothermal fluid is cooled, it goes back into the reservoir, where it reheats and is ready to be used again.
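For a sense of scale, the thermodynamic ceiling on converting ~200°C geothermal heat to electricity can be estimated with the Carnot limit. This is only an upper bound (real Organic Rankine Cycle plants achieve considerably less), and the 25°C sink temperature below is an assumption:

```python
# Carnot efficiency: 1 - T_cold / T_hot, with temperatures in kelvin.
def carnot_efficiency(hot_c, cold_c):
    hot_k, cold_k = hot_c + 273.15, cold_c + 273.15
    return 1 - cold_k / hot_k

print(round(carnot_efficiency(200, 25), 3))  # ceiling for a ~200 C resource
```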
If you are interested in receiving more information on geothermal energy, alternative energy sources and on our Green Energy Subscription, then send an e-mail to Chiara Carella - Corporate Communications at email@example.com with your full name, company name, title, telephone number, e-mail address, city, state and country. Upon receipt of the above information, an overview will be sent to you by e-mail. All research included in subscriptions provide detailed market opportunities and industry trends that have been evaluated following extensive interviews with market participants.
On green technologies and green growth opportunities Frost & Sullivan organises an executive symposium titled The Global Green Revolution 2008: Driving Growth Through Sustainable Technology and Innovation. The event will focus on strategies for seizing real market opportunities for sustainable technology innovation to drive growth. It will take place on 17 September, 2008 in San Francisco, California. To secure your registration, schedule your one-on-one Growth Strategy Dialogue, or obtain more information please contact Chiara Carella at firstname.lastname@example.org.
Frost & Sullivan, the Growth Partnership Company, partners with clients to accelerate their growth. The company's TEAM Research, Growth Consulting and Growth Team Membership empower clients to create a growth-focused culture that generates, evaluates and implements effective growth strategies. Frost & Sullivan employs over 45 years of experience in partnering with Global 1000 companies, emerging businesses and the investment community from more than 30 offices on six continents. For more information about Frost & Sullivan's Growth Partnerships, visit http://www.frost.com.
SNMP, short for Simple Network Management Protocol, is a set of protocols for managing network devices. SNMP works by sending messages called protocol data units (PDUs) to different parts of a network. SNMP-compliant devices* called "agents" store data about themselves in Management Information Bases (MIBs) and return this data to the SNMP requesters.
* Lexmark printers are SNMP-compliant devices.
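Conceptually, an SNMP GET is a keyed lookup into the agent's MIB. A toy sketch of that exchange; the OIDs follow the standard Printer-MIB layout, but the instance indexes and values are invented, and a real manager would of course send a GetRequest PDU over the network:

```python
# Mock agent MIB: OID -> stored value. OIDs use the Printer-MIB column layout
# (1.3.6.1.2.1.43...); instance suffixes and values here are illustrative.
agent_mib = {
    "1.3.6.1.2.1.43.11.1.1.6.1.1": "Black Toner",         # prtMarkerSuppliesDescription
    "1.3.6.1.2.1.43.13.4.1.10.1.1": "Primary Paper Path",  # prtMediaPathDescription
}

def snmp_get(mib, oid):
    """Stand-in for a GetRequest PDU: return the value stored at the OID."""
    return mib.get(oid, "noSuchObject")

print(snmp_get(agent_mib, "1.3.6.1.2.1.43.11.1.1.6.1.1"))
print(snmp_get(agent_mib, "1.3.6.1.2.1.99.99"))
```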
Lexmark network laser printers support version 1 of the printer MIB. The Lexmark1.MIB draws on the RFC 1759 and RFC 3805 (Printer MIB v2) standards. For some queries, the Lexmark printer may reply “unknown”, which is a valid response according to the RFC.
Examples of MIB implementation
Duplex MIB (OID – Object Identifier):
Using prtMediaPathDescription (OID 1.3.6.1.2.1.43.13.4.1.10) should yield a return value:
- SNMPv2-SMI::mib-2.43.13.4.1.10.… = STRING: "Duplex Paper Path"
If duplex is not recognized, you would expect to see the return value:
- SNMPv2-SMI::mib-2.43.13.4.1.10.… = STRING: "Primary Paper Path"
Color MIB (OID):
Using prtMarkerSuppliesDescription (OID 1.3.6.1.2.1.43.11.1.1.6) should yield a return value:
- SNMPv2-SMI::mib-2.43.11.1.1.6.… = STRING: "Cyan Toner"
- SNMPv2-SMI::mib-2.43.11.1.1.6.… = STRING: "Magenta Toner"
- SNMPv2-SMI::mib-2.43.11.1.1.6.… = STRING: "Yellow Toner"
- SNMPv2-SMI::mib-2.43.11.1.1.6.… = STRING: "Black Toner"
If color is not detected, you would expect to see the return value:
- SNMPv2-SMI::mib-2.43.11.1.1.6.… = STRING: "Black Toner"
Lexmark MIB documentation and support statement
Lexmark provides the MIB information below as a courtesy to network administrators who use SNMP management utilities. Lexmark does not directly support the use of third-party management tools for the administration of our printers and MFPs.
NOTE: We will provide specific information upon request but only if essential to the implementation of our product.
Download the Lexmark MIB file pack for the following contents:
- Lexmark Private MIB (lexmark1.mib)
- Lexmark Managed Print Services MIB (lexmark-mps-mib.mib)
- Lexmark Root MIB (lexmark-root-mib.mib)
- Lexmark Textual Conventions MIB (lexmark-tc-mib.mib)
NOTE: The referenced MIB files in this package are for use with later (2011 and above) generation products. However, the lexmark1.mib file is viable for both current and older generation products.
Still need help?
If you require additional assistance, please close this window and locate Get In Touch with Lexmark! for contact information. NOTE: When calling for support, you will need your printer model/machine type and serial number (SN).
Please be near the products described in this article to expedite the support process and reduce callbacks.
Regular hard drives have spinning disks, while solid state drives have no moving parts. A hybrid drive uses moving parts the same way a traditional SATA drive does, but it combines NAND flash memory into the hard drive. This is done to give you speed as well as storage space that would be expensive to get from an actual solid state drive. The hybrid drive won't beat the SSD in speed; however, it will come close, and it will definitely boost your speed in comparison to a traditional hard drive while giving you the space you need to store data. Hybrid drives also spin less than traditional hard drives, so their parts move less; because they have fewer moving parts, they should also last longer than traditional drives. Hybrids have a memory manager in the hard drive that identifies which data and files are used most frequently and stores them in the flash memory. It basically learns what is used frequently so those items open faster: a hybrid drive learns what you are doing, and subsequent boot times and file access times will shorten.
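That frequency-based promotion can be sketched as a tiny model. The cache size and access trace below are invented, and real drive firmware is far more sophisticated:

```python
from collections import Counter

# Toy model of a hybrid drive "learning": the most frequently accessed
# blocks get promoted into the small flash cache (sizes are illustrative).
FLASH_SLOTS = 2

accesses = ["os_boot", "browser", "os_boot", "photo_42",
            "os_boot", "browser", "movie_7"]

counts = Counter(accesses)
cached = {block for block, _ in counts.most_common(FLASH_SLOTS)}
print(cached)
```

On this trace the boot files and the browser end up in flash, which is why repeated boots and app launches speed up over time.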
Microsoft is researching ways to reduce touch-screen lag down to a single millisecond, a hundred times less than current touch-screen systems.
Paul Dietz of Microsoft's Applied Science group fronted a YouTube video explaining the software giant's research into so-called 'high-performance touch'.
"Currently touch systems have about a hundred millisecond delay between when you touch and when the image actually changes," said Dietz, adding that a touch gesture of one meter per second would see a trail 10cm behind.
Dietz reckons that the delay breaks the UI analogy of moving a physical object because the object lags so far behind. To get to the bottom of the effect of latency, Microsoft knocked up a test rig where researchers could change the latency to see how it felt to the user.
After demonstrating a 1ms delay, Dietz said: "If you were playing with the real set-up, you would notice a real perceptual cliff and that this really starts to feel like a physical object."
Rather than suggesting that 1ms latency on touch-screen displays is just around the corner, Dietz called 1ms a "bar for where we'd like to head over the next decade." | <urn:uuid:71a91570-1bb8-4600-aef4-fbe1e31fc6f8> | CC-MAIN-2017-04 | http://www.pcr-online.biz/news/read/microsoft-wants-lower-latency-touch-screens/028072 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00155-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934932 | 242 | 2.796875 | 3 |
Femtocell is quickly becoming a buzzword in the mobile space, and it also impacts wireless data users one way or another. Basically, femtocell is a term used to describe a small base station or a signal strengthener/repeater, but there are many factors to consider other than just the technical ones when it comes to femtocell. Of course, that does not mean that the entire concept of micro/relay towers is even technically sound in all cases.
What is Femtocell Really?
Wireless signals can be fickle things, and troubleshooting wireless problems can be a serious undertaking. We here at High Speed Experts routinely receive requests to help troubleshoot wireless problems, and we have created a three-part wireless troubleshooting guide. Our guide covers signals that come from laptops, desktops, and wireless routers, and that means two things: shorter distances and higher power. Simply put, distance and low-power requirements both make transmissions difficult; add in high throughput, and the problem is rather severe. In fact, 3G and 4G data standards are really amazing considering the power profiles offered by most portable devices and cellular phones.
A femtocell tower is basically a mini-relay station that can be installed in a property or in a vehicle. The idea is that a much closer tower, with much greater power-consumption capabilities than a cellular phone or smartphone battery can provide, will produce better signals. Of course, theory and reality collide in many cases, which leads people to ask…
Does Femtocell Really Work?
There are many reports of femtocell stations that do not do what they say they will do, and this could be due to many reasons. It could truly be that some of the early micro/relay stations do not work as advertised or are in need of a firmware upgrade, a common problem for early tech adopters. It could be that some consumers have cellular phones that are not designed to be compatible with femtocell towers. Some networks do not support femtocell technology, and some only support very specific femtocell technology.
Add to this the laundry list of technical standards and compliance-issues, and it is entirely possible that most of the horror stories are the result of early adopters, unprepared firmware, and other issues that are easily explained. Despite the fact that it is possible to easily attribute so many possible complaints to one reason or another, the early roll-out of femtocell technology has left a bad taste in the mouths of many. This is particularly bad for those looking into femtocell technology in order to boost their mobile broadband performance and/or reliability.
The Argument Against Femtocell
The most obvious and common argument against femtocell would be that service providers should be responsible for building out networks and ensuring connectivity. This argument certainly has merit, but there are some technical considerations that make it unreasonable for network providers to offer affordable service that has very high quality levels and no dead spots. The reason has to do with the incredible cost of even smaller cellular relay towers.
While it would be possible in theory for carriers to build out their networks in such a way, in practice the result would probably be a network that costs too much. That cost would slow R&D and be passed on to consumers. In short, it might seem ironic, but imperfect coverage helps make mobile broadband and cellular plans affordable. That does not mean that all carriers should be let off the hook for poor service, but it does mean there might be a place for femtocell technology.
The Argument For Femtocell
The aforementioned argument for femtocell is simple: cellular providers have an obligation to provide great service in a wide area, but they cannot ensure high-quality/high-performance connectivity everywhere. The future of mini-relay stations might see consumers who offer open access to larger relay towers receiving compensation in some form, which could help networks build out quicker and more thoroughly (or "deep," in network lingo). A similar arrangement is being used in many British and French cities with regard to open WiFi access, and it has proven quite effective.
Is Femtocell Right For You?
If, after reading this article, you feel femtocell might be for you, then you need to do the following:
- Call your wireless service provider and ask them which standard(s) they support.
- Ensure that your mobile device/phone supports femtocell technology.
- Buy an appropriate micro/relay tower.
- Ensure that your new femtocell tower is running the appropriate firmware. Note the use of the word appropriate instead of latest. This may take a little research.
- Learn where to place your new femtocell tower for best effect. | <urn:uuid:8da7fc97-05db-4b73-8450-a8fc13667351> | CC-MAIN-2017-04 | http://www.highspeedexperts.com/femtocell-guide/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00457-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948038 | 970 | 3.078125 | 3 |
The chip maker offers a glimpse at its first finished Phase Change Memory chip. The nonvolatile chip technology can be used to store data or execute code.
SAN FRANCISCO-Intel literally has, in hand, the first wafers
of a new type of nonvolatile memory chip that its executives think could someday supplant flash memory and thus change the face of industries such as cellular phones, music players and possibly even PCs.
Intel, as part of a lengthy joint venture with ST Microelectronics, has produced the first Phase Change Memory or PCM chips-nonvolatile memory chips that work well for both executing code and storing large amounts of data, giving it a superset of the capabilities of both flash memory and dynamic random access memory.
This means it can execute code with high performance, store larger amounts of data and sustain millions of read/write cycles.
It's necessary to invest in technologies such as PCM because flash memory will eventually hit a wall at which it can no longer scale with silicon manufacturing.
"This is pretty exciting stuff," said Ed Doller, chief technology officer for Intels Flash Memory Group, based in Folsom, Calif., during an interview with eWEEK.
"Were getting pretty close to the limits [of fabricating silicon] in developing NOR and NAND flash memory; our engineers are wondering Whats next?"
Doller reached for an often-used but appropriate saying: "This is a case in which 'necessity is the mother of invention' is very true. We were forced to look for something else, completely different. That's why we decided to invest in PCM.
"There are definitely limits to what you can do with our current flash methodology. There needs to be a complete quantum leap somewhere along the line to push everything forward. We believe PCM are going to be that quantum leap."
Moreover, PCM has the potential to go into production before many other flash alternatives, he said.
During the interview, Doller produced what he said was one of the very first PCM wafers, containing numerous 128-megabit PCM chips produced in an ST Microelectronics chip plant in Agrate, Italy, and sent to him just hours before.
Doller opened a round, black plastic container to reveal several foam protective separations around a 10-inch round wafer of chips safely packaged in between.
iPhone App Can Fly Unmanned Aircraft
Boeing, MIT test technology to control mini-drones that may eventually be used by the U.S. military.
Researchers from Boeing and the Massachusetts Institute of Technology (MIT) have created a prototype application that allows someone to fly a miniature unmanned aircraft from an iPhone.
Eventually, the technology may be applied to remotely control unmanned aerial vehicles (UAVs) such as drones used by the U.S. military in combat and reconnaissance missions.
Boeing Research & Technology and student researchers at MIT's Humans and Automation Lab have successfully tested an iPhone application that uses the device's touch interface to navigate a mini-UAV from across the country as part of a Boeing project called Micro Aerial Vehicle Visualization of Unexplored Environments, or MAV-VUE, according to Boeing.
An engineer from the aerospace company, George Windsor, successfully flew the UAV above a baseball field on the MIT campus in Cambridge, Mass., via an iPhone in the company's Seattle office. The application is part of Boeing's efforts to team with industry and education partners to develop better and easier ways to control UAVs, among other technological innovations.
Boeing said the benefits of remote control of mini-drones via an iPhone or another smartphone are that the applications can be used to control UAVs for dirty or dangerous tasks, or during long missions that would be tedious for a human at the controls of an aircraft.
Joshua Downs, a human factors specialist with Boeing Research & Technology and the Boeing technical leader of the MAV-VUE project, in a statement described use-case scenarios for the application.
One envisioned a soldier using a mobile app-controlled, lightweight UAV for a better view of a battlefield, and another proposed scenarios in which firefighters or rescue workers can use UAVs to quickly and efficiently get a better view of disaster areas.
The federal government--the military in particular--has been exploring innovative ways to use mobile applications on iPhones and Android-based devices in various combat and reconnaissance scenarios.
One program, Connecting Soldiers to Digital Applications, gives soldiers in the field mobile devices loaded with custom applications that will be helpful in combat, while another called Relevant ISR to the Edge sends real-time intelligence information to soldier handheld devices during a military mission.
In the Cascades framework, one way to animate UI controls is to change their visual properties. You can animate an image of a ball moving across the screen by changing the position properties of the image, or you can spin a button by changing the button's rotation property. You can also animate controls using properties such as size, scale, and opacity.
Cascades automatically produces smooth, fluid animations based on your changes. These animations are called implicit animations. You can also use another type of animation, called explicit animations, to manually control the properties of animations in your apps.
You can learn more about implicit and explicit animations by visiting the links below. You can also follow a tutorial that teaches you how to create a demonstration app that uses animations.
Last modified: 2015-03-31 | <urn:uuid:21b27724-d1b1-4adf-a2ca-7e9f33409c47> | CC-MAIN-2017-04 | http://developer.blackberry.com/native/documentation/ui/animations/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00421-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912736 | 161 | 3.125 | 3 |
Password Rejected: A Crypto Perspective
It is unfortunately common practice that applications which allow
users to remember their passwords as a convenience rarely encrypt them
but instead opt to simply obfuscate them.
Actually, it’s an unfortunately (verging on ridiculously) common practice to
think that there’s any difference between encoding a password using a fixed
algorithm and “encrypting” a password using a fixed key.
There isn’t one.
This is easily provable in terms of mere simplicity. How hard is it to
reverse engineer the instructions a binary executes to implement some
homebrew obfuscation? Reasonably doable, but it’s definitely at least a bit
of work–much more work than just tracing the application opening up some
file “c:\sekret.key” that happens to be the 168 bit Triple-DES key that’ll
decrypt its password database.
So does that mean that 3DES is less secure than some homebrew? No. It
means 3DES is being misused. Badly.
Any system that attempts to verify a password by comparing the password
plaintext supplied against the correct password plaintext has already lost.
No matter how many obfuscations and contortions it goes through, it can
never escape the fact that the system itself must have permanently available
access to the password plaintext. Encrypting the password with 3DES means
nothing; the decryption key for the password is as functionally open as code
for the executable running the home brew system. (Indeed, as we saw above,
since the secrecy is distilled rather than hidden in binary code, it’s
actually *easier* to rip off passwords when they’re 3DES encrypted!)
Of course, the obvious question is how a system can verify the correctness of
a password without actually possessing that password. It's a question that's
been answered rather repeatedly. Password handling is simultaneously one of
the few Solved Problems of Cryptography *and* one of the most misunderstood.
Simply store an MD5 or SHA-1 one-way hash of the password. One-way hashes
turn some amount of input data into a fixed-size, deterministic value that
is computationally infeasible to invert. Passwords make great input
data. When a plaintext password comes in, compare the MD5/SHA-1 hash of the
submitted password with the stored hash. If it matches, the password was
correct. If it doesn’t match, the password wasn’t correct.
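A minimal sketch of the scheme just described, using Python's standard
hashlib. (SHA-1 is shown because the article names it; today you would add a
per-user salt and a deliberately slow hash such as bcrypt or scrypt, but the
store-and-compare structure is the same.)

```python
import hashlib

def store(password: str) -> str:
    # Persist only the one-way hash, never the plaintext.
    return hashlib.sha1(password.encode()).hexdigest()

def verify(candidate: str, stored_hash: str) -> bool:
    # Hash the submitted password and compare against the stored digest.
    return hashlib.sha1(candidate.encode()).hexdigest() == stored_hash

record = store("fg&^2jfa")
print(verify("fg&^2jfa", record))   # -> True
print(verify("wrong", record))      # -> False
```

In production code the comparison should use hmac.compare_digest to avoid
timing side channels.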
There are (much) more advanced password exchange systems for network
password exchange, SRP for example, but if your system needs to locally
verify a password from the user, *please*, before cryptographers around the
world walk out in disgust, at least just store a hash instead of the actual
plaintext. Hashes prevent plaintext-equivalent information from being
stored at all. That means there's nothing for an attacker to decrypt en
masse–he or she can only try password after password to see if it hashes to
the stored value.
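That guessing game is exactly a dictionary attack, and it is the only move a
hash leaves the attacker. A hedged sketch (the leaked digest and wordlist
here are hypothetical):

```python
import hashlib

# Hypothetical digest stolen from a hashed password database.
stolen_hash = hashlib.sha1(b"letmein").hexdigest()

def dictionary_attack(target_hash, wordlist):
    # The attacker's only move: hash each guess and compare.
    for guess in wordlist:
        if hashlib.sha1(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(dictionary_attack(stolen_hash, ["123456", "password", "letmein"]))
# -> letmein
```

Every guess costs a hash computation, so strong (unguessable) passwords stay
safe even when the hash database leaks.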
The core theoretical problem with encrypting *or* encoding passwords
actually derives from the fact that different users–indeed, all users–are
all dependent on one nonsecret: Either a fixed algorithm or a fixed key is
shared throughout the vendor’s clientele. There’s no secrecy in my password
database if you’ve got the same password or algorithm protecting yours.
Sooner or later, one of us is going to post to Bugtraq the simple method to
decrypt the other’s database. With hashes, there exists no key that *can*
decrypt the database.
One reasonably simple solution (translation: somebody contact me if this is
in error) for those who have implemented a respected encryption system (RC4,
Blowfish, even *DES* qualifies) to obfuscate access to their password
database is to encrypt passwords with themselves–i.e., encrypt “fg&^2jfa”
with the key “fg&^2jfa”. Using one way hashes essentially challenges a user
to provide the only value that will invert the hash equation, i.e. the
original data. Encrypting content with itself similarly challenges a user
to prove they have the only value that will invert the encryption equation,
again, the original data. While plaintext is indeed stored on a host, only
somebody who already *has* the plaintext can access it. That happens to
overlap exactly with the intent of a password system.
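A sketch of the encrypt-it-with-itself idea. The Python standard library
ships no block cipher, so a toy SHA-256 counter-mode stream cipher stands in
below for the RC4/Blowfish/DES the author mentions; the construction (key =
plaintext = the password), not the cipher choice, is the point.

```python
import hashlib
from itertools import count

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustrative only --
    # in practice substitute a vetted cipher (RC4/Blowfish/AES/etc.).
    out = bytearray()
    blocks = count()
    while len(out) < len(data):
        out += hashlib.sha256(key + str(next(blocks)).encode()).digest()
    return bytes(b ^ k for b, k in zip(data, out))

def make_token(password: str) -> bytes:
    pw = password.encode()
    return _keystream_xor(pw, pw)        # E_pw(pw): key and plaintext coincide

def check_token(candidate: str, token: bytes) -> bool:
    c = candidate.encode()
    return _keystream_xor(c, c) == token  # only the right password reproduces it

token = make_token("fg&^2jfa")
print(check_token("fg&^2jfa", token))  # -> True
print(check_token("guess", token))     # -> False
```

Note that, like an unsalted hash, identical passwords produce identical
tokens, and the token's length leaks the password's length; this is an
illustration of the idea, not a recommendation over plain hashing.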
The most common reason why hash systems aren’t used, of course, is that
often the plaintext is needed by the system for purposes outside of
verification. Sometimes the system needs to automatically represent itself
as a user–with the user’s passwords–in order to fulfill the demands of a
protocol. This is primarily a problem with network authentication when no
plaintext has been exchanged, only hashes. But local authenticators will
*always* have access to the plaintext password, because they control the
user interface that the user uses to actually enter it in the first place!
If the password is representing the user, then storing the plaintext in RAM
while the user can be presumed to still be controlling the machine (two
minutes since last keypress?) means the required password would be made
available. The only reason a local authenticator would need to store an
easily decryptable password is if, at arbitrary times when the user wasn’t
around to supply the password, the system needed to prove to itself or
someone else that it indeed had the plaintext password.
Indeed, it’d need to be able to access a password before a user ever entered
it in the first place. At that point, who exactly is the system
representing, anyway? It’s actually not an empty question. Automated mail
checkers and instant messengers are often expected to have access to a
plaintext password without having to retrieve it from the user on each
reboot. At that point, the physical security of the system and the hard
drive is turned into the authentication token for the user controlling that
system. Given the mass of IM services I’ve known some people to run and the
number of reboots this mass entails, forcing users to repetitively log in
becomes a difference between life and death. Ideally, passwords of this
type would be locally managed by a single “identity password” that the user
typed in once on login and that never needed to be transmitted or recorded
anywhere. Security may begin with the physical, but it sure better not end
there.
Most systems, however, simply don’t have those kinds of requirements, and
that leads to the bottom line: Use hashes. It’s a solved problem. Stop
helping people hack your code, stop helping people hack our servers, stop
helping me quote Susan Powter. Please. Thank you. | <urn:uuid:fcd1f593-bde6-47cb-93bd-4e1a64baa8f4> | CC-MAIN-2017-04 | https://dankaminsky.com/2000/07/18/81/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00421-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900991 | 1,508 | 2.546875 | 3 |
Data centers are the hubs of a variety of services that businesses and consumers rely on daily, from cloud-based applications and file storage to digital communications and more. They are also increasingly becoming the hubs of the financial world, powering what is known as high-frequency trading (HFT) in the stock market. It’s certainly unsurprising that computers now perform many tasks in the financial world—just as they do in many other areas of life—but the potential dangers, especially in a weak economy, are shocking.
Computers Running the Markets
High-frequency trading, sometimes called algorithmic trading, “is the biggest thing to hit Wall Street in years. On any given day, this lightning-quick, computer-driven form of trading accounts for upward of half of all of the business transacted on the nation’s stock markets,” according to The New York Times (“Searching for a Speed Limit in High-Frequency Trading”). These speedy trades generally don’t net huge profits, but when thousands or millions are conducted each second, small individual gains can quickly add up to large aggregate gains. Obviously, to conduct these kinds of trades, computers must analyze available data quickly, but the data they analyze doesn’t include information like business plans, the individuals in leadership positions at companies and so on—information that often drives the market decisions of savvy (human) investors.
With such a large volume of trades being carried out by computers according to a formula (which is necessarily bereft of critical business data that would normally inform a sound investment), individual investors are increasingly becoming an outside group for whom the stock market provides fewer and fewer opportunities to garner profits. Issues of the benefit of the stock market to individual investors aside, the question is whether this mix of computers and the stock market poses serious dangers not only to investors but to the economy at large.
High-Frequency Trading and Flash Crashes
With the advent of algorithmic trading, where millions of trades are conducted by computers in seconds or less, has come a new danger: the flash crash. A flash crash was reflected in the Dow Jones index on May 6, 2010. As Wired (“Nanosecond Trading Could Make Markets Go Haywire”) relates, “Starting at 2:42 p.m. EDT, the Dow Jones stock index fell 600 points in just 6 minutes. Its nadir represented the deepest single-day decline in that market’s 114-year history. By 3:07 p.m., the index had rebounded.” At one point that day, the Dow Jones index had lost more than 1,000 points. To be sure, the world didn’t stop turning; the market regained its footing, and the problems were attributed to computer glitches. But such tremendous swings in the market are nearly impossible when trades are conducted in a slower and, arguably, more rational manner.
A more narrow, but equally disturbing, example of the dangers of high-frequency trading manifested itself in early August of this year when brokerage firm Knight Capital Partners was nearly wiped out financially as a result of “a bug in one of its high-frequency trading algorithms [that] caused the firm to lose $440 million,” according to Time (“High Frequency Trading: Wall Street’s Doomsday Machine?”). Of course, companies engaging in high-frequency trading are virtually certain to add safeguards to their programs to prevent these kinds of catastrophic losses, but computers do exactly what they’re told, which isn’t always what the programmer wants or expects.
Perhaps the greatest danger of high-frequency trading isn’t the individual examples of a buggy program that loses a company some large chunk of money, but the aggregate behavior of all these programs together affecting and being affected by market conditions. A single program may behave in a fairly predictable manner, but when many of these programs interact, the results can be less predictable (this may fall into the category of “emergent behavior”). Flash crashes are a potential consequence of these interactions.
Add Sour Economy
In a strong economy, flash crashes are likely more of an annoyance than a real danger. Given the already precarious situation much of the western world currently resides in, however, flash crashes pose more of a danger. The current extremely low interest rates encourage borrowing—meaning greater debt—at all levels of the economy, from the consumer to businesses to governments. And at all levels (generally speaking), this debt is staggering. Furthermore, the fractional-reserve banking system means that if depositors decide en masse to withdraw their money, banks will quickly lose their reserves; not everyone will be able to get their money.
If a flash crash (say it lasted more than minutes—maybe a day or two) causes a panic, leading to a bank run, the situation could easily snowball into a financial calamity similar to, or even worse than, the events leading up to the Great Recession of 2008 and 2009. Thus, although high-frequency trading has its own inherent dangers as exemplified by flash crashes and losses like those suffered by Knight Capital Partners, the greater concern is the economic tinderbox that could go up in flames in response to this kind of stimulus.
So, What’s the Answer?
The involvement of computers and data centers in stock markets certainly isn’t wholly undesirable: it is a means of increasing the access of individual investors and small companies to the market, and in doing so, it reduces the costs of trading by increasing competition. But anything can be taken to an extreme. High-frequency trading more closely resembles a game of poker played by master strategists than a means of profiting by providing a fiscal foundation for a company to do real, productive work. Does that mean regulators should tighten controls on the market? No, but it should be a warning to investors—particularly individuals. The stock market is largely rigged: individual investors are generally unable to make significant profits from the market, in part because of inflation (which, ironically, is a strategy of the Federal Reserve to prop up the market’s value). And inflation means a bubble: eventually, the credit card must reach its limit, at which time the bubble will burst.
Data centers aren’t the problem, per se. The problem is that high-frequency trading is more or less a sneaky way to make profits without even nominally creating any value. A business ideally turns investments in its operations into profits by producing something that is in demand. Investors then reap a portion of these profits. Algorithmic trading looks not to business productivity but market quirks to make money off the system, not off productivity. People should be allowed legally to play petty games with their money; investors, however, should be aware of such games. In this case, however, the scope of high-frequency trading poses a danger to an already fragile economy.
This may all seem like a matter of economics, but many companies are involved to some extent—perhaps through customers—in finance by way of their data centers. Data center space around cities like New York is at a premium as financial companies seek to minimize their distance from the market floor, thereby maximizing the potential speed of their trades. Data centers, in and of themselves, are not at fault for the situation with high-frequency trading, but they are at the center of it.
Photo courtesy of francisco.j.gonzalez | <urn:uuid:8a84aa81-dc3e-4243-adde-5314855d7a1b> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/high-frequency-trading-a-disaster-in-the-making/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00450-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955265 | 1,525 | 2.65625 | 3 |
One of the most pressing issues faced by the HPC community is how to go about attracting and training the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a class taught by Pete Beckman: An Introduction to Parallel Supercomputers.
Argonne has a history of supporting these Summer Institutes that goes back to the late 1980s. The attendees, a select group of mainly PhD students and postdocs, are fortunate to be able to not only receive training in the use of supercomputing systems for large-scale science and engineering research, they get to rub elbows with some of the brightest minds in HPC.
In this 30-minute presentation, Professor Beckman provides a short overview of the course and shares with the students what he thinks is really important in the world of HPC. Starting with an overview of Argonne and Fermi, and the DOE institutions’ hallowed histories, Beckman explains how Argonne has emphasized parallel computing and teaching parallel architectures long before it was in wide use. Back in 1983, Paul Messina helped found the first math and computer science division at the lab.
Messina is now the Director of Science at the Argonne Leadership Computing Facility (ALCF), which was established in 2006 in recognition of the role that parallel computers would play in the future of scientific computing.
“You are going to see the architectural changes that are happening,” Beckman told the students, “and these are not small. There was a period for almost a decade where things were very stable in an area of computing, and right now we are in a big change again. What happened in 1984 is about to happen again. Everything in software and hardware is changing and you have to adapt to it.”
Beckman puts up a slide from the mid-90s with various parallel programming architectures and machines with many that have since gone under or changed hands, names like BBN Butterfly, CM-2, Kendal Square Research, MasPar, and others.
Beckman says it is likely that there will be another one of these high-churn periods, and this is evidenced so far with technologies like GPGPU, ARM and others.
When beginning a code project, Beckman recommends starting with the view that this code will last for five, ten, even fifteen years. Considering this long-term horizon, there must be a way to preserve that investment. To that end, Beckman provides a list of investment recommendations for budding HPC programmers that will enable them to spend more time doing the cool science. His number one point of advice is to be aware of other people’s libraries. From there he explores the benefits of encapsulation (parallelism, messaging and I/O), embedded capabilities (debugging, performance monitoring, correctness detection and resilience), the two workflow views (the science side and the programming side), automation, and community (web, tutorial, email, bug tracking, etc.).
As he wraps up the class, Beckman explores some of the major trends in HPC programming, those that are ramping up and those that are ramping down. See slide below.
The Argonne Training Program for Extreme-Scale Computing is funded through the DOE’s Office of Science from 2013 to 2016. More information is available at http://extremecomputingtraining.anl.gov/. The next session will be offered August 3 – August 15, 2014. | <urn:uuid:586f38fd-1627-419a-a8da-19c720728be9> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/07/22/parallel-computing-trends/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00476-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945327 | 731 | 2.625 | 3 |
Cross the Finish Line: Tips for Winning the IT Race
There are many reasons you might want to pass one or more certification exams. You may be seeking a new or different career, desirous of adding new responsibilities or skills to your repertoire or needing to demonstrate proof of existing skills. Regardless of your reasons for achieving a certification, it is imperative to focus on passing the exam itself, as well as mastering the skills the certification represents. Some certification exams do a better job than others at testing for the requisite skills needed to demonstrate mastery of a topic or skill. Ultimately, it is up to you to make sure you study not only to pass the certification exam, but also to pass the test that life will give you as you seek to use these skills.
There are essentially two types of certification exams: those that ask you to take a test in the form of a multiple-choice or other knowledge-based exam format and those that have you go through a simulation of the product or skill itself to show that you know how to perform certain tasks or troubleshoot problems. Regardless of the test format, your ability to study for the exam and learn the material will be critical to your success. Techniques that are applicable to each of the testing formats, as well as tips that will apply in both cases are covered in this article.
Some certifications are specific to a vendor, such as the ever-popular Microsoft certifications. Earning a certification on products like Microsoft’s acknowledges you as an expert in those products and technologies. Other certifications are vendor-neutral and are not tied to particular products or brands. The Computing Technology Industry Association (CompTIA) is the world’s largest provider of vendor-neutral certifications and has certified more than 600,000 people to date.
In order to pass a certification exam, memories need to be formed and new skills obtained. There are two fundamental forms of memory: declarative and nondeclarative memory. Declarative memory contains material that is available to the conscious mind and can be expressed. Examples of declarative memory at work include the ability to list types of servers or recall the various functions of a Web server. Nondeclarative memory is the type of memory necessary to perform skills. Nondeclarative memory is often associated with motor skills such as riding a bike or learning to ski, but there are certainly aspects to the role of an IT professional that require performing skills. Someone acquiring an A+ certification and seeking to become a hardware repair technician would need the motor skills to replace a hard drive or install a sound card. Both declarative and nondeclarative memory is necessary to be successful in the IT profession and to pass most certification exams.
In order to form memories and develop new skills, a variety of techniques may be used. Choosing the best techniques for you will depend on what type of learner you are: visual (you learn by seeing), auditory (you learn by hearing) or kinesthetic (you learn by doing). No person uses one style of learning exclusively. Additionally, using techniques that combine multiple learning styles will reinforce your retention even further. Studies have demonstrated that individuals who pass certification exams use more than two methods of study to prepare for the exam, so using multiple methods will dramatically increase your likelihood of passing.
Lutz Ziob, general manager of Microsoft Learning, is an advocate of using more than one learning method to study for a certification exam. “The best study tools are those that combine more than one learning method,” said Ziob. “For example, a good e-learning product will combine auditory and visual learning tools, such as allowing you to choose whether you want the material visually displayed or read aloud. Most training companies today seek to combine learning styles for different audiences, realizing that you may have all learning types in a single classroom or using a single e-learning product. Basically, my recommendation is to seek out the newer versions of learning products that are already, in themselves, a combination of the learning styles.”
The right place to start passing a certification is to know thyself. It is critical to assess your learning style prior to strategizing about how to study for a certification exam. Gene Salois, vice president of certification for CompTIA, stresses the importance of learning styles in passing exams. He suggested, “Study using your own personal learning style. Some companies may tell you that e-learning, for example, is the best way to prepare for an exam. No one learning method is appropriate for everybody. Find out what works best for you.”
It is also imperative to use different learning methods to study. Salois said, “Ensure a good study diet. Just as a good diet cannot be gained by eating one type of food, effective learning cannot be gained from a single test-preparation product. For instance, even if you find that you learn best by visual methods, don’t just go and buy just one book and make that your sole source of information. Buy a variety of manuals; combine them with other visual tools such as an electronic tutorial, flash cards, etc.”
Jeff Michael, a Microsoft Certified Systems Engineer (MCSE) learns best visually. “If I were to characterize myself as one specific learning style, I would say I am visual,” Michael said. “I have to read the material to fully understand it. I feel like I am missing things if I am not reading the material in its entirety. After I do that, I can fill in any blanks by interfacing with the technology.”
Doug Notini, a director of training for the certified instructors at New Horizons Computer Learning Centers of Boston is another visual learner. He said, “Everybody is going to be different; I am a visual learner. One method that worked well for me was to use cue cards. There is something about reading something and then writing the information down to make the cue card—it makes the information stick in my mind.” In writing cue cards, Notini uses both visual learning techniques by reading the information and seeing it again on flash cards. He also employed kinesthetic learning as he wrote the information down.
It is no surprise that we see such an emphasis on visual learners. Visual learning is the prominent learning style for around 65 percent of the population. These types of learners work best when reading, referencing notes, seeing diagrams and viewing pictures. Visual learners will respond well to Web-based training that is graphically rich and to course content that has visual depictions of key concepts.
The second type of learning style is auditory. Those who prefer to learn in an auditory fashion make up around 30 percent of the population and prefer the spoken word above all other learning methods. They prefer hearing the instructor talk about the technologies or hearing an audio accompaniment to a Web-based course. Most auditory learners want to hear information first, before attempting to read it and make sense of it. Some even find it beneficial to read aloud to themselves to fully grasp a concept when an instructor is not readily available.
The last learning style is kinesthetic. Kinesthetic learners learn by doing and are the minority, coming in at around 5 percent of the population. They learn skills by imitation and practice and often use their hands a great deal when conveying information. Kinesthetic learners benefit from hands-on practice and from simulations. Many learning companies offer the opportunity for learners to use virtual labs, which allow candidates to practice on high-end hardware without having to invest in the technology themselves or risk their company’s infrastructure as they are learning. By devoting some of your study time to hands-on activities | <urn:uuid:235c9bd4-b6d4-4303-84fd-1911a45427cc> | CC-MAIN-2017-04 | http://certmag.com/cross-the-finish-line-tips-for-winning-the-it-race/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951814 | 1,574 | 2.6875 | 3 |
Site home page
Get alerts when Linktionary is updated
Book updates and addendums
Get info about the Encyclopedia of Networking and Telecommunicatons, 3rd edition (2001)
Download the electronic version of the Encyclopedia of Networking, 2nd edition (1996). It's free!
Contribute to this site
Electronic licensing info
NNTP (Network News Transport Protocol)
Note: Many topics at this site are reduced versions of the text in "The Encyclopedia of Networking and Telecommunications." Search results will not be as extensive as a search of the book's CD-ROM.
The NNTP protocol is the delivery mechanism for the USENET newsgroup service. USENET runs on the Internet and other TCP/IP-based networks and provides a way to exchange messages, articles, and bulletins throughout the Internet. Articles are put in central databases throughout the Internet and users access the database to get the articles they need. This reduces network traffic and eliminates the need to store a copy of each article on every subscriber's system.
There are thousands of different newsgroups related to computers, social issues, science, the humanities, recreation, and other topics. See "USENET" for more information on USENET itself. This topic discusses the operation of the NNTP protocol.
USENET servers use NNTP to exchange news articles among themselves. NNTP is also used by clients who need to read news articles on USENET servers. The server-to-server and user-to-server connections are described here:
Before NNCP, USENET servers used UUCP (UNIX-to-UNIX Copy Program) to exchange information. UUCP is a "flood broadcast" mechanism. Hosts send new news articles they receive to other hosts, which in turn forward the news on to other hosts that they feed. Usually, a host receives duplicates of articles and must discard those duplicates-a time-consuming process and waste of bandwidth.
NNTP uses an interactive command and response mechanism that lets hosts determine which articles are to be transmitted. A host acting as a client contacts a "server" host using NNTP, and then inquires if any new newsgroups have been created on any of the serving host systems. An administrator can choose to create similar newsgroups on the host he or she manages.
During the same NNTP session, the client requests information about new articles that have arrived in all or some of the newsgroups. The server then sends the client a list of new articles and the client can request transmission of some or all of those articles. The client can refuse to accept articles that it already has.
Readers interested in the details of NNTP commands and responses should read RFC 977 (A Proposed Standard for the Stream-Based Transmission of News, February 1986).
Some organizations may prefer to set up their own USENET systems on their TCP/IP-based intranet rather than deploy groupware and collaborative applications. If you plan on setting up your own news server, refer to "USENET."
Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia. | <urn:uuid:c10a0364-9583-44af-a97d-92ef5528e066> | CC-MAIN-2017-04 | http://www.linktionary.com/n/nntp.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00468-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904836 | 658 | 3.015625 | 3 |
CDC Unveils the FluChipBy Stacy Lawrence | Posted 2006-08-30 Email Print
The new microarray can be used to distinguish between influenza types and help to trace back the origins of a given strain, including the avian flu.
The Centers for Disease Control and Prevention and the University of Colorado at Boulder have developed a microchip-based test that distinguishes between flu strains and can even help trace the strains back to their origins.
The FluChip can be used to identify 72 influenza strainsincluding the H5N1 avian influenza strain that is currently of such concernin fewer than 12 hours.
This technology can be used to make sophisticated influenza diagnostic capabilities more widely available to labs around the globe, not just concentrated with the CDC and a few major international laboratories.
"The ability to quickly and accurately identify strains of influenza would be invaluable to international flu surveillance efforts," said National Institute of Allergy and Infectious Diseases director Anthony Fauci.
The FluChip is a microarray, commonly called a gene chip. Microarrays can be made by using a robotic arm to drop hundreds or thousands of spots of genetic materialDNA or RNAof known sequence onto a microscope slide.
Read the full story on eWEEK.com: CDC Unveils the FluChip | <urn:uuid:17b07d22-ad93-40df-9ad2-ab2746ad1f33> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Business-Intelligence/CDC-Unveils-the-FluChip | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913437 | 261 | 3.140625 | 3 |
October 14, 2015
Last week I fielded a question about online systems’ ability to discern loaded or untruthful statements in a plain text document. I responded that software is not yet very good at figuring out whether a specific statement is accurate, factual, right, or correct. Google pokes at the problem in a number of ways; for example, assigning a credibility score to a known person. The higher the score, the person may be more likely to be “correct.” I am simplifying, but you get the idea: Recycling a variant of Page Rank and the CLEVER method associated with Jon Kleinberg.
There are other approaches as well, and some of them—dare I suggest, most of them—use word lists. The idea is pretty simple. Create a list of words which have positive or negative connotations. To get fancy, you can work a variation on the brute force Ask Jeeves’ method; that is, cook up answers or statements of fact “known” to be spot on. The idea is to match the input text with the information in these word lists. If you want to get fancy, call these lists and compilations “knowledgebases.” I prefer lists. Humans have to help create the lists. Humans have to maintain the lists. Get the lists wrong, and the scoring system will be off base.
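As a rough sketch, the whole word-list approach fits in a few lines of Python. The lists below are invented stand-ins for a real lexicon; real systems weight, stem, and disambiguate far more carefully.

```python
# Toy list-based sentiment scorer: match input words against
# hand-maintained positive and negative word lists.
POSITIVE = {"excellent", "reliable", "accurate", "helpful"}
NEGATIVE = {"fraud", "broken", "useless", "violation"}

def lexicon_score(text: str) -> int:
    """Return positive hits minus negative hits; the sign gives polarity."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Get the lists wrong, and every score the function emits is off base, which is exactly the maintenance burden humans end up carrying.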
There is quite a bit of academic chatter about ways to make software smart. A recent example is “Sentiment Diffusion of Public Opinions about Hot Events: Based on Complex Network.” In the conclusion to the paper, which includes lots of fancy math, I noticed that the researchers identified the foundation of their approach:
This paper studied the sentiment diffusion of online public opinions about hot events. We adopted the dictionary-based sentiment analysis approach to obtain the sentiment orientation of posts. Based on HowNet and semantic similarity, we calculated each post’s sentiment value and classified those posts into five types of sentiment orientations.
There you go. Word lists.
My point is that it is pretty easy to spot a hostile customer support letter. Just write a script that looks for words appearing on the “nasty list”; for example, consumer protection violation, fraud, sue, etc. There are other signals as well; for example, capital letters, exclamation points, underlined words, etc.
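A hedged sketch of such a script in Python; the “nasty list,” the thresholds, and the signal weights are all invented for illustration.

```python
# Flag a customer support letter as hostile when nasty terms or
# shouting signals (capital letters, exclamation points) pile up.
NASTY = {"fraud", "sue", "lawsuit", "violation", "scam"}

def looks_hostile(letter: str) -> bool:
    words = [w.strip(".,!?").lower() for w in letter.split()]
    nasty_hits = sum(w in NASTY for w in words)
    exclaims = letter.count("!")
    letters = [c for c in letter if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    # Any nasty term, heavy punctuation, or mostly-capitals text trips the flag.
    return nasty_hits > 0 or exclaims >= 3 or caps_ratio > 0.5
```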
The point is that distorted, shaped, weaponized, and just plain bonkers information can be generated. This information can be gussied up in a news release, posted on a Facebook page, or sent out via Twitter before the outfit reinvents itself.
The researcher, the “real” journalist, or the hapless seventh grader writing a report will be none the wiser unless big time research is embraced. For now, what can be indexed is presented as if the information were spot on.
How do you feel about that? That’s a sentiment question, gentle reader.
Stephen E Arnold, October 14, 2015
January 1, 2015
The Pentaho blog reviews the past year and makes some pretty big speculations about 2015, and they are big because they concern big data: “Big Data In 2015-Power To The People.” Pentaho predicted that the big data market would be shaped by businesses’ demands for data blending, and it turns out that was correct. Companies do not have standard data sets that work across the board; rather, companies in different fields are turning to big data to handle their own growing collections of data sets.
“Moving into 2015, and fired up by their initial big data bounties, businesses will seek even more power to explore data freely, structure their own data blends, and gain profitable insights faster. They know “there’s gold in them hills” and they want to mine for even more!”
The post’s big data predictions for 2015 are even more ambitious.
In 2015, companies will want to blend traditional data with more unstructured content. One example of how this will be used is building a 360-degree customer profile: combining social media with sentiment analysis about a company’s goods and services tells companies more about their clients. Industry is predicted to see big changes in operational, strategic, and competitive advantages as big data feeds companies information on how to improve in these areas. Think smart house capabilities transferred to the new smart factories.
Big data will also have more flexibility in the cloud, and people are demanding embedded analytics to see all the nitty gritty details about their business. The list ends by predicting that more big data power will be given to the people, mostly through ease of use. You can’t really call that a prediction; it’s more like common sense. Whatever happens in 2015, big data will see big growth.
November 21, 2014
Short honk: The notion of figuring out something about the emotional payload of a message is interesting. If you are following developments in sentiment analysis, you may find “Emotion Detection in Suicide Notes Using Maximum Entropy Classification” interesting. Now what might be done to pipe the output of this analysis into a predictive analytics engine with access to deep user data?
Stephen E Arnold, November 21, 2014
October 28, 2014
I found the Attensity blog post “Attensity Takes Utah Tech Week” quite interesting. I cannot recall when mainstream content processing companies embraced hackathons so fiercely.
The blog post explains:
A hackathon, for the uninitiated, is exactly what it sounds like: a hybrid of computer hacking and a marathon in a grueling, caffeine-fueled, 12-hour time period. Groups comprised of mostly engineers and IT whizzes compete against the clock and other teams to create a project to present at the end of the day to a panel of judges.
What did Attensity’s engineers build to showcase the company’s sentiment analysis and analytics technologies? Here’s the Attensity description:
With the Twitter API up and running, Team Attensity used Raspberry Pi to process tweets using #obama and #utahtechweek. Simultaneously, the team used Arduino to code sentiments from the tweets using a red light for negative sentiments, blue for positive sentiments, and yellow for neutral sentiments.
Attensity was pleased with the outcome in Utah. More hackathons are in the firm’s future. I wonder if one can deploy IBM Watson using a Raspberry Pi or showcase HP Autonomy with an Arduino.
How will hackathons generate revenue? I am not sure. The effort seems like a cost hole to me.
Stephen E Arnold, October 28, 2014
August 26, 2014
One of natural language processing’s most-discussed functions in business is sentiment analysis. Over at the SmartData Collective, Lexalytics’ Scott Van Boeyen tells us “Why Sentiment Analysis Engines Need Customization.” The short answer: slang. The write-up explains:
The problem with sentiment analysis is sometimes it’s wrong.[…]
“Oh man, that was nasty!” Is this sentence positive or negative? Surely, it must be negative. “Nasty” is a negative word, and everything else in this sentence is neutral. Final answer, negative! Drum roll…. Wrong! It’s positive.
The person who said this used the American slang definition of nasty, which has positive sentiment. There is absolutely no way to know by reading the sentence. So, if you (a human) were just tricked by reading this article, how is a machine supposed to figure it out? Answer: Tell the engine what’s positive and what’s negative.
High quality NLP engines will let you customize your sentiment analysis settings. “Nasty” is negative by default. If you’re processing slang where “nasty” is considered a positive term, you would access your engine’s sentiment customization function, and assign a positive score to the word.
The man has a point. Still, we are left with a few questions: How much more should one expect to pay for a customization feature? Also, how long does it take to teach an NLP platform comprehensive alternate vocabulary? How does one decide what slang to include—has anyone developed a list of suggestions? Perhaps one could start by consulting the Urban Dictionary.
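The customization the quoted passage describes amounts to letting the licensee layer domain overrides on top of a default lexicon. A hypothetical sketch of that idea follows; the default polarities and the interface here are invented, not Lexalytics’ actual API.

```python
DEFAULT_POLARITY = {"nasty": -1, "great": +1, "awful": -1}  # engine defaults

def customized_scorer(overrides):
    """Build a scorer whose lexicon layers domain overrides over defaults."""
    lexicon = {**DEFAULT_POLARITY, **overrides}
    def score(text: str) -> int:
        words = [w.strip(".,!?").lower() for w in text.split()]
        return sum(lexicon.get(w, 0) for w in words)
    return score

# In a slang domain, "nasty" is reassigned a positive score.
slang_score = customized_scorer({"nasty": +1})
```

With the override in place, “Oh man, that was nasty!” scores positive; without it, the same sentence scores negative, which is exactly the trap the article warns about.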
Cynthia Murrell, August 26, 2014
August 4, 2014
In 2010, Attensity purchased Biz360. The Beyond Search comment on this deal is at http://bit.ly/1p4were. One of the goslings reminded me that I had not instructed a writer to tackle Attensity’s July 2014 announcement “Attensity Adds to Patent Portfolio for Unstructured Data Analysis Technology.” PR-type “stories” can disappear, but for now you can find a description of “Attensity Adds to Patent Portfolio for Unstructured Data Analysis Technology” at http://reut.rs/1qU8Sre.
My researcher showed me a hard copy of 8,645,395, and I scanned the abstract and claims. The abstract, like many search and content processing inventions, seemed somewhat similar to other text parsing systems and methods. The invention was filed in April 2008, two years before Attensity purchased Biz360, a social media monitoring company. Attensity, as you may know, is a text analysis company founded by Dr. David Bean. Dr. Bean employed various “deep” analytic processes to figure out the meaning of words, phrases, and documents. My limited understanding of Attensity’s methods suggested to me that Attensity’s Bean-centric technology could process text to achieve a similar result. I had a phone call from AT&T regarding the utility of certain Attensity outputs. I assume that the Bean methods required some reinforcement to keep pace with customers’ expectations about Attensity’s Bean-centric system. Neither the goslings nor I are patent attorneys. So after you download 395, seek out a patent attorney and get him/her to explain its mysteries to you.
The abstract states:
A system for evaluating a review having unstructured text comprises a segment splitter for separating at least a portion of the unstructured text into one or more segments, each segment comprising one or more words; a segment parser coupled to the segment splitter for assigning one or more lexical categories to one or more of the one or more words of each segment; an information extractor coupled to the segment parser for identifying a feature word and an opinion word contained in the one or more segments; and a sentiment rating engine coupled to the information extractor for calculating an opinion score based upon an opinion grouping, the opinion grouping including at least the feature word and the opinion word identified by the information extractor.
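A toy rendering of the pipeline the abstract names (segment splitter, extractor, rating engine); the feature and opinion lists are invented stand-ins for the patent’s databases, and a real parser would assign lexical categories rather than match fixed sets.

```python
FEATURES = {"battery", "screen", "keyboard"}       # stand-in feature words
OPINIONS = {"great": +1, "dim": -1, "mushy": -1}   # stand-in opinion words

def segment_split(review: str):
    """Segment splitter: break the review into word lists per sentence."""
    segments = [s for s in review.replace("!", ".").split(".") if s.strip()]
    return [[w.strip(",;").lower() for w in s.split()] for s in segments]

def opinion_groupings(review: str):
    """Extractor plus rating engine: pair each feature word in a segment
    with the segment's opinion words and attach a score."""
    groupings = []
    for seg in segment_split(review):
        for feat in (w for w in seg if w in FEATURES):
            for w in seg:
                if w in OPINIONS:
                    groupings.append((feat, w, OPINIONS[w]))
    return groupings
```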
This invention tackles the Mean Joe Green of content processing from the point of view of a quite specific type of content: a review. Amazon has quite a few reviews, but the notion of a “shaped” review is a thorny one. (See, for example, http://bit.ly/1pz1q0V.) The invention’s approach identifies words with different roles; some words are “opinion words” and others are “feature words.” By hooking a “sentiment engine” to this indexing operation, the Biz360 invention can generate an “opinion score.” The system uses item, language, training model, feature, opinion, and rating modifier databases. These, I assume, are either maintained by subject matter experts (expensive), smart software working automatically (often evidencing “drift” so results may not be on point), or a hybrid approach (humans cost money).
The Attensity/Biz360 system relies on a number of knowledge bases. How are these updated? What is the latency between identifying new content and updating the knowledge bases to make the new content available to the user or a software process generating an alert or another type of report?
The 20 claims embrace the components working as a well oiled content analyzer. The claim I noted is that the system’s opinion score uses a positive and negative range. I worked on a sentiment system that made use of a stop light metaphor: red for negative sentiment and green for positive sentiment. When our system could not figure out whether the text was positive or negative we used a yellow light.
The approach, used for a US government project a decade ago, relied on a very simple metaphor to communicate a situation without scores, values, and scales. Image source: http://bit.ly/1tNvkT8
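The stop light metaphor reduces to simple thresholding on a numeric score; the band width below is arbitrary, chosen only to illustrate the idea.

```python
def stoplight(score: float, band: float = 0.2) -> str:
    """Map a sentiment score in [-1, 1] to the stop light metaphor."""
    if score > band:
        return "green"   # positive sentiment
    if score < -band:
        return "red"     # negative sentiment
    return "yellow"      # the system cannot decide
```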
Attensity said, according to the news story cited above:
By splitting the unstructured text into one or more segments, lexical categories can be created and a sentiment-rating engine coupled to the information can now evaluate the opinions for products, services and entities.
Okay, but I think that the splitting of text into segments was a function of iPhrase and search vendors converting unstructured text into XML and then indexing the outputs.
Jonathan Schwartz, General Counsel at Attensity, is quoted in the news story as asserting:
“The issuance of this patent further validates the years of research and affirms our innovative leadership. We expect additional patent issuances, which will further strengthen our broad IP portfolio.”
Okay, this sounds good, but the invention took place prior to Attensity’s owning Biz360. Attensity, therefore, purchased the invention of folks who did not work at Attensity in the period prior to the filing in 2008. I understand that companies buy other companies to get technology and people. I find it interesting that Attensity’s work “validates” Attensity’s research and “affirms” Attensity’s “innovative leadership.”
I would word what the patent delivers and Attensity’s contributions differently. I am no legal eagle or sentiment expert. I do like less marketing razzle dazzle, but I am in the minority on this point.
Net net: Attensity is an interesting company. Will it be able to deliver products that make the licensees’ sentiment scores move in a direction that leads to sustained revenue and generous profits? With the $90 million in funding the company received in 2014, the 14-year-old company will have some work to do to deliver a healthy return to its stakeholders. Expert System, Lexalytics, and others are racing down the same quarter mile drag strip. Which firm will be the winner? Which will blow an engine?
Stephen E Arnold, August 4, 2014
July 28, 2014
Attivio has placed itself in the news again, this time for scoring a new patent. Virtual-Strategy Magazine declares, “Attivio Awarded Breakthrough Patent for Big Data Sentiment Analysis.” I’m not sure “breakthrough” is completely accurate, but that’s the language of press releases for you. Still, any advance can provide an advantage. The write-up explains that the company:
“… announced it was awarded U.S. Patent No. 8725494 for entity-level sentiment analysis. The patent addresses the market’s need to more accurately analyze, assign and understand customer sentiment within unstructured content where multiple brands and people are referenced and discussed. Most sentiment analysis today is conducted on a broad level to determine, for example, if a review is positive, negative or neutral. The entire entry or document is assigned sentiment uniformly, regardless of whether the feedback contains multiple comments that express a combination of brand and product sentiment.”
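The entity-level idea can be sketched with a crude nearest-preceding-entity heuristic; the entity list, polarity values, and attribution rule below are invented for illustration and are not Attivio’s patented method.

```python
ENTITIES = {"acme", "globex"}                            # brands to track
POLARITY = {"fast": +1, "love": +1, "slow": -1, "hate": -1}

def entity_sentiment(text: str) -> dict:
    """Attribute each opinion word to the most recently named entity,
    instead of assigning one sentiment to the whole document."""
    scores = {}
    current = None
    for raw in text.split():
        w = raw.strip(".,!?").lower()
        if w in ENTITIES:
            current = w
            scores.setdefault(current, 0)
        elif current is not None and w in POLARITY:
            scores[current] += POLARITY[w]
    return scores
```

A single review praising one brand and panning another then yields two opposite scores rather than one muddled neutral rating.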
I can see how picking up on nuances can lead to a more accurate measurement of market sentiment, though it does seem more like an incremental step than a leap forward. Still, the patent is evidence of Attivio’s continued ascent. Founded in 2007 and headquartered in Massachusetts, Attivio maintains offices around the world. The company’s award-winning Active Intelligence Engine integrates structured and unstructured data, facilitating the translation of that data into useful business insights.
Cynthia Murrell, July 28, 2014
July 11, 2014
One of the most widespread misperceptions in enterprise search and content processing is “install and search.” Anyone who has tried to get a desktop search system like X1 or dtSearch to do what the user wants with his or her files and network shares knows that fiddling is part of the desktop search game. Even a basic system like Sow Soft’s Effective File Search requires configuring the targets to query for every search in multi-drive systems. The workarounds are not for the casual user. Just try making a Google Search Appliance walk, talk, and roll over without the ministrations of an expert like Adhere Solutions. Don’t take my word for it. Get your hands dirty with information processing’s moving parts.
Does it not make sense that a search system destined for serving a Fortune 1000 company requires some additional effort? How much more time and money will an enterprise class information retrieval and content processing system require than a desktop system or a plug-and-play appliance?
How much effort is required for these tasks? There is work to get the access controls working as the ever-alert security manager expects. Then there is the work needed to get the system to access, normalize, and process content for the basic index. Then there is work for getting the system to recognize, acquire, index, and allow a user to access the old, new, and changed content. Then one has to figure out what to tell management about rich media, content for which additional connectors are required, the method for locating versions of PowerPoints, Excels, and Word files. Then one has to deal with latencies, flawed indexes, and dependencies among the various subsystems that a search and content processing system includes. There are other tasks as well like interfaces, work flow for alerts, yadda yadda. You get the idea of the almost unending stream of dependent, serial “thens.”
When I read “Why Sentiment Analysis Engines need Customization”, I felt sad for licensees fooled by marketers of search and content processing systems. Yep, sad as in sorrow.
Is it not obvious that enterprise search and content processing is primarily about customization?
Many of the so called experts, advisors, and vendors illustrate these common search blind spots:
ITEM: Consulting firms that sell my information under another person’s name, ensuring that clients are likely to get a wild and woolly view of reality. Example: Check out IDC’s $3,500 version of information based on my team’s work. Here’s the link for those who find that big outfits help themselves to expertise and then identify a person with a fascinating employment and educational history as the AUTHOR.
In this example from http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=idc%20attivio, notice that my work is priced at seven times that of a former IDC professional. Presumably Mr. Schubmehl recognized that my value was greater than that of an IDC sole author and priced my work accordingly. Fascinating because I do not have a signed agreement giving IDC, Mr. Schubmehl, or IDC’s parent company the right to sell my work on Amazon.
This screen shot makes it clear that my work is identified as that of a former IDC professional, a fellow from upstate New York, an MLS on my team, and a Ph.D. on my team.
I assume that IDC’s expertise embraces the level of expertise evident in the TechRadar article. Should I trust a company that sells my content without a formal contract? Oh, maybe I should ask this question, “Should you trust a high profile consulting firm that vends another person’s work as its own?” Keep that $3,500 price in mind, please.
ITEM: The TechRadar article is written by a vendor of sentiment analysis software. His employer is Lexalytics / Semantria (once a unit of Infonics). He writes:
High quality NLP engines will let you customize your sentiment analysis settings. “Nasty” is negative by default. If you’re processing slang where “nasty” is considered a positive term, you would access your engine’s sentiment customization function, and assign a positive score to the word. The better NLP engines out there will make this entire process a piece of cake. Without this kind of customization, the machine could very well be useless in your work. When you choose a sentiment analysis engine, make sure it allows for customization. Otherwise, you’ll be stuck with a machine that interprets everything literally, and you’ll never get accurate results.
When a vendor describes “natural language processing” with the phrase “high quality” I laugh. NLP is a work in progress. But the stunning statement in this quoted passage is:
Otherwise, you’ll be stuck with a machine that interprets everything literally, and you’ll never get accurate results.
Amazing, a vendor wrote this sentence. Unless a licensee of a “high quality” NLP system invests in customizing, the system will “never get accurate results.” I quite like that categorical never.
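Mechanically, the “customization” the vendor describes boils down to overriding default scores in a lexicon. A minimal sketch of the idea — the lexicon, scores, and function names below are all hypothetical, not any vendor’s actual API:

```python
# Hypothetical miniature lexicon-based sentiment scorer.
# "nasty" is negative by default; a domain override flips its sign.
DEFAULT_LEXICON = {"nasty": -1.0, "great": 1.0, "useless": -1.0}

def score(text, overrides=None):
    """Average the scores of the opinion words found in the text."""
    lexicon = {**DEFAULT_LEXICON, **(overrides or {})}
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

print(score("that track is nasty"))                  # negative by default
print(score("that track is nasty", {"nasty": 1.0}))  # positive after the override
```

Everything beyond this toy — tokenization, negation, sarcasm, domain slang at scale — is where the real (and expensive) customization work lives.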
ITEM: Sentiment analysis is a single, usually complex component of a search or content processing system. A person on the LinkedIn enterprise search group asked the few hundred “experts” in the discussion group for examples of successful enterprise search systems. If you are a member in good standing of LinkedIn, you can view the original query at this link. [If the link won’t work, talk to LinkedIn. I have no idea how to make references to my content on the system work consistently over time.] I pointed out that enterprise search success stories are harder to find than reports of failures. Whether the flop is at the scale of the HP/Autonomy acquisition or a more modest termination like Overstock’s dumping of a big name system, the “customizing” issue is often present. Enterprise search and content processing is usually:
- A box of puzzle pieces that requires time, expertise, and money to assemble in a way that attracts and satisfies users and the CFO
- A work in progress to make work so users are happy and in a manner that does not force another search procurement cycle, the firing of the person responsible for the search and content processing system, and the legal fees related to the invoices submitted by the vendor whose system does not work. (Slow or no payment of licensee and consulting fees to a search vendor can be fatal to the search firm’s health.)
- A source of friction among those contending for infrastructure resources. What I am driving at is that a misconfigured search system makes some computing work S-L-O-W. Note: the performance issue must be addressed for appliance-based, cloud, or on-premises enterprise search.
- Money. Don’t forget money, please. Remember the CFO’s birthday. Take her to lunch. Be really nice. The cost overruns that plague enterprise search and content processing deployments and operations will need all the goodwill you can generate.
If sentiment analysis requires customizing and money, take out your pencil and estimate how much it will cost to make NLP and sentiment to work. Now do the same calculation for relevancy tuning, index tuning, optimizing indexing and query processing, etc.
The point is that folks who get a basic keyword search and retrieval system to work pile on the features and functions. Vendors whip up some wrapper code that makes it possible to do a demo of customer support search, eCommerce search, voice search, and predictive search. Once the licensee inks the deal, the fun begins. The reason one major Norwegian search vendor crashed and burned is that licensees balked at paying bills for a next generation system that was not what the PowerPoint slides described. Why has IBM embraced open source search? Is one reason to trim the cost of keeping the basic plumbing working reasonably well? Why are search vendors embracing every buzzword that comes along? I think that search as an enterprise function has become a very difficult thing to sell, make work, and turn into an evergreen revenue stream.
The TechRadar article underscores the danger for licensees of over hyped systems. The consultants often surf on the expertise of others. The vendors dance around the costs and complexities of their systems. The buzzwords obfuscate.
What makes this article by the Lexalytics’ professional almost as painful as IDC’s unauthorized sale of my search content is this statement:
You’ll be stuck with a machine that interprets everything literally, and you’ll never get accurate results.
I agree with this statement.
Stephen E Arnold, July 11, 2014
May 12, 2014
Short honk. I have some questions about the efficacy of search vendors who pitch sentiment analysis. The jargon blizzard obscures some of the methods. I talk about some of these hyperboles in my video about search jargon. The article “Turning the Frown Upside Down: Kraft’s Jell-O Plans Twitter Mood Monitor” explains one of the secrets of the sentiment analysis wizards. Big Data? Nah, counting smiley faces. What Dark Arts do other sentiment analysis mavens conjure?
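Stripped of the jargon, the “mood monitor” trick the article describes can be caricatured in a few lines. This is a deliberately reductive toy, not Kraft’s or any vendor’s actual system:

```python
def mood_score(tweets):
    """The 'secret' reduced to its essence: count smiley vs. frowny faces."""
    up = sum(t.count(":)") + t.count(":-)") for t in tweets)
    down = sum(t.count(":(") + t.count(":-(") for t in tweets)
    return up - down   # positive = happy crowd, negative = grumpy crowd

print(mood_score([":) yum", "jell-o :("]))  # one smile, one frown
```

Big Data indeed.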
Stephen E Arnold, May 12, 2014
March 11, 2014
Attensity, a sentiment, analytics, and text processing vendor, has been quiet for some months. The company has now released a new version of its flagship product, Analyze, now at version 6.3. The headline feature is “enhanced analytics.”
According to a company news release, Attensity is “the leading provider of integrated, real-time solutions that blend multi-channel Voice of the Customer analytics and social engagement for enterprise listening needs.” Okay.
The new version of Analyze delivers to licensees real-time information about what is trending. The system provides “multi-dimensional visualization that immediately identifies performance outliers in the business that can impact the brand both positively and negatively.” Okay.
The system processes over 150 million blogs and forums, Facebook, and Twitter. Okay.
As memorable as these features are, here’s the passage that I noted:
Attensity 6.3 is powered by the Attensity Semantic Annotation Server (ASAS) and patented natural language processing (NLP) technology. Attensity’s unique ASAS platform provides unmatched deep sentiment analysis, entity identification, statistical assignment and exhaustive extraction, enabling organizations to define relationships between people, places and things without using pre-defined keywords or queries. It’s this proprietary technology that allows Attensity to make the unknown known.
“To make the unknown known” is a bold assertion. Okay.
I have heard that sentiment analysis companies are running into some friction. The expectations of some licensees have been a bit high. Perhaps Analyze 6.3 will suck up customers of other systems who are dissatisfied with their sentiment, semantic, analytics systems. Making the “unknown known” should cause the world to beat a path to Attensity’s door. Okay.
Stephen E Arnold, March 11, 2014
A guide to wireless standards for CompTIA Network+ candidates
We’ve looked at some of the wireless networking related topics that appear on CompTIA’s Network+ certification exams (both the current N10-005 exam and the upcoming N10-006) in other articles. To finish that topic, the focus this month is on wireless standards and what you need to know about them to pass this certification exam. Much of this information is excerpted from the CompTIA Network+ Exam Cram.
802.11 represents the IEEE designation for wireless networking. Several wireless networking specifications exist under the 802.11 banner. The Network+ objectives focus on 802.11a, 802.11b, 802.11g, 802.11n, and 802.11ac. All these standards use the Ethernet protocol and the CSMA/CA access method. NOTE: Exams like to see if you know not only the characteristics of the wireless standards, but also the access method. Remember: 802.11 wireless standards use the CSMA/CA access method.
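The CSMA/CA access method the note above emphasizes — listen before transmitting, then wait a random backoff interval to dodge collisions — can be illustrated with a short sketch. This is a toy for building intuition only, not the real 802.11 state machine; the function and parameter names are invented:

```python
import random

def csma_ca_send(channel_is_busy, max_attempts=5, cw=16, cw_max=1024):
    """Toy CSMA/CA: carrier-sense first; if the medium is busy, double
    the contention window (exponential backoff) and try again."""
    for attempt in range(max_attempts):
        if not channel_is_busy():
            # Pick a random backoff slot; a real station counts these
            # slots down, pausing whenever the medium goes busy again.
            slot = random.randrange(cw)
            return f"transmitting after {slot} backoff slots"
        cw = min(cw * 2, cw_max)  # collision avoidance: widen the window
    return "deferred: medium never went idle"

print(csma_ca_send(lambda: False))  # idle channel: station transmits
```

For the exam, the key fact is simpler than the code: every 802.11 standard below uses CSMA/CA.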
The 802.11 wireless standards can differ in terms of speed, transmission ranges, and frequency used, but in terms of actual implementation they are similar. All standards can use either an infrastructure or ad hoc network design, and each can use the same security protocols.
IEEE 802.11: There were actually two variations on the initial 802.11 wireless standard. Both offered 1 or 2Mbps transmission speeds and the same RF of 2.4GHz. The difference between the two was in how data traveled through the RF media. One used FHSS, and the other used DSSS. The original 802.11 standards are far too slow for modern networking needs and are now no longer deployed.
IEEE 802.11a: In terms of speed, the 802.11a standard was far ahead of the original 802.11 standards. 802.11a specified speeds of up to 54Mbps in the 5GHz band, but most commonly, communication takes place at 6Mbps, 12Mbps, or 24Mbps. 802.11a is incompatible with the 802.11b and 802.11g wireless standards.
IEEE 802.11b: The 802.11b standard provides for a maximum transmission speed of 11Mbps. Devices are designed to be backward compatible, however, with previous 802.11 standards that provided for speeds of 1, 2, and 5.5Mbps. 802.11b uses a 2.4GHz RF range and is compatible with 802.11g.
IEEE 802.11g: 802.11g offers wireless transmission over distances of 150 feet and speeds up to 54Mbps compared with the 11Mbps of the 802.11b standard. Like 802.11b, 802.11g operates in the 2.4GHz range and therefore is compatible with it.
IEEE 802.11n: The most popular wireless standard today is 802.11n. The goal of the 802.11n standard was to significantly increase throughput in both the 2.4GHz and the 5GHz frequency range. The baseline goal of the standard was to reach speeds of 100Mbps, but given the right conditions, it is estimated that the 802.11n speeds can reach a staggering 600Mbps. In practical operation, 802.11n speeds are much slower.
IEEE 802.11ac: The newest of the wireless standards listed in the Network+ objectives is 802.11ac, which became an approved standard in January of 2014 and can be thought of as an extension of 802.11n. Any device using this standard must support all the mandatory modes of both 802.11n and 802.11a. The goal of the standard is 500Mbps with one link and 1Gbps with multiple links. It has support for up to 8 MIMO streams and increased channel bonding. 802.11ac is a 5 GHz-only technology.
The Magic Behind 802.11n and 802.11ac
802.11n took the best from the 802.11 standards and mixed in some new features to take wireless to the next level. First among these new technologies was multiple input multiple output (MIMO) antenna technology.
MIMO was unquestionably the biggest development for 802.11n and the key to the new speeds. Essentially, MIMO uses multiplexing to increase the range and speed of wireless networking. Multiplexing is a technique that combines multiple signals for transmission over a single line or medium. MIMO enables the transmission of multiple data streams traveling on different antennas in the same channel at the same time. A receiver reconstructs the streams, which have multiple antennas as well. By using multiple paths, MIMO provides a significant capacity gain over conventional single-antenna systems, along with more reliable communication.
While 802.11n can transmit more than one spatial stream at the same time, the streams are directed to a single address (single-user MIMO). 802.11ac allows for multi-user MIMO (MU-MIMO), letting an AP send multiple frames to multiple clients at the exact same time (thus allowing the AP to act like a switch instead of just a hub).
In addition to MIMO, 802.11n enabled channel bonding that essentially doubled the data rate. What is channel bonding? The 802.11b and 802.11g wireless standards use a single channel to send and receive information. With channel bonding, you can use two channels at the same time. As you might guess, the ability to use two channels at once increases performance. It is expected that bonding can help increase wireless transmission rates from the 54Mbps offered with the 802.11g standards to a theoretical maximum of 600Mbps. 802.11n uses the OFDM transmission strategy.
802.11ac extends this by increasing the maximum from 40 MHz to 80 MHz (with an optional 160 MHz). By doubling the channel bandwidth, twice as much data can be carried in the same time. NOTE: In wireless networking a single channel is 20MHz in width. When two channels are bonded, they are a total of 40MHz. 802.11n systems can use either the 20MHz channels or the 40MHz channel.
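The scaling story — PHY rate grows roughly linearly with channel width and with the number of spatial streams — can be sanity-checked with simple arithmetic. The 72 Mbps per-stream figure below is a rough value for a single 20 MHz 802.11n stream; this first-order model ignores guard intervals, coding-rate changes, and real-world overhead:

```python
def rough_phy_rate(per_stream_mbps, channel_mhz, streams, base_mhz=20):
    """First-order estimate: PHY rate scales with channel bonding
    (channel_mhz / base_mhz) and with the number of MIMO spatial streams."""
    return per_stream_mbps * (channel_mhz / base_mhz) * streams

# 802.11n upper end: 4 streams on a bonded 40 MHz channel
print(rough_phy_rate(72, 40, 4))  # near the 600 Mbps theoretical maximum
# 802.11ac-style: a wider 80 MHz channel does the same work as more streams
print(rough_phy_rate(72, 80, 2))
```

The point of the comparison: 802.11ac can trade extra channel width for spatial streams, which is why wider channels matter so much to it.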
Aggregation is the other big difference, allowing data to be packaged together to increase speeds. 802.11n brought the technology to the mainstream and 802.11ac simply builds on it.
Summary of 802.11 Wireless Standards
The following table highlights the characteristics of the various 802.11 wireless standards.
| IEEE Standard | Frequency/Medium | Speed | Topology | Transmission Range | Access Method |
|---------------|------------------|-------|----------|--------------------|---------------|
| 802.11 | 2.4GHz RF | 1 to 2Mbps | Ad hoc/infrastructure | 20 feet indoors | CSMA/CA |
| 802.11a | 5GHz | Up to 54Mbps | Ad hoc/infrastructure | 25 to 75 feet indoors; range can be affected by building materials | CSMA/CA |
| 802.11b | 2.4GHz | Up to 11Mbps | Ad hoc/infrastructure | Up to 150 feet indoors; range can be affected by building materials | CSMA/CA |
| 802.11g | 2.4GHz | Up to 54Mbps | Ad hoc/infrastructure | Up to 150 feet indoors; range can be affected by building materials | CSMA/CA |
| 802.11n | 2.4GHz/5GHz | Up to 600Mbps | Ad hoc/infrastructure | 175+ feet indoors; range can be affected by building materials | CSMA/CA |
| 802.11ac | 5GHz | Up to 1Gbps | Ad hoc/infrastructure | 115+ feet indoors; range can be affected by building materials | CSMA/CA |
FHSS, DSSS, OFDM
The original 802.11 standard had two variations, both offering the same speeds but differing in the RF spread spectrum used. One of the 802.11 standards used FHSS. This 802.11 variant used the 2.4GHz radio frequency band and operated at a 1 or 2Mbps data rate. Since this original standard, wireless implementations have favored DSSS.
The second 802.11 variation used DSSS and specified a 2Mbps peak data rate with optional fallback to 1Mbps in noisy environments. 802.11, 802.11b, and 802.11g use DSSS. This means that the underlying modulation scheme is similar between each standard, enabling all DSSS systems to coexist with 2, 11, and 54Mbps 802.11 standards.
As a comparison, it is like the migration from the older 10Mbps Ethernet networking to the more commonly implemented 1000Mbps standard. The speed was different, but the underlying technologies were similar, enabling an easier upgrade.
The following table compares wireless standards and the spread spectrum used.
| IEEE Standard | RF Used | Spread Spectrum | Data Rate (in Mbps) |
|---------------|---------|-----------------|---------------------|
| 802.11 | 2.4GHz | DSSS | 1 or 2 |
| 802.11 | 2.4GHz | FHSS | 1 or 2 |
Summing it Up
To prepare for the wireless networking portion of a certification exam — particularly CompTIA’s Network+ — it helps to know the basics of wireless standards. This article highlighted the characteristics of the various 802.11 wireless standards.
A Cost-Based Security Analysis of Symmetric and Asymmetric Key Lengths
A. Why is Key Size Important?
In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining “infeasible”. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, surreptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper.
Standards, such as ANSI X9.30, X9.31, X9.42, and X9.44 and FIPS 186-2 all require a minimum of a 1024-bit RSA or Diffie-Hellman key. The fundamental question we will answer in this paper is: How long will such keys be secure?
B. What Affects Security Requirements?
It is clear that the size of a key must be tied to the value of the data being protected by the key and also tied to the expected lifetime of that data. It makes no sense for an adversary to spend (say) $10 million breaking a key if recovering the key will only net (say) $10 thousand. This principle also applies to keys used to protect other keys, such as the master signature key of a Certifying Authority although such a key is certainly worth more than $10 thousand. Furthermore, if the lifetime of the key or the data being protected is measured in only days or weeks, there is no need to use a key that will take years to break. While standards specify a minimum of 1024 bits for RSA, there are many applications for which 768 bits is more than adequate. While signatures on contracts might need to be secure for 30 years or more [unless time stamped and periodically renewed], other applications can require much less: anywhere from about 1 day for signatures with short-term session keys (such as SSL) to perhaps several years for commercial financial data. Military and national intelligence data such as the identity of spies can have a lifetime of 100 years, but such data is not accessible on-line nor is it protected by public-key methods.
C. Basic Assumptions
Throughout this paper all extrapolations shall be based upon extrapolating speed and memory enhancements of existing hardware and improvements in existing algorithms. While breakthroughs in both algorithm and hardware technology may occur, such events are inherently unpredictable and we do not consider them. Instead, current key sizes are advocated which allow some margin for error.
We also do not believe that any public key size specified today should be used to protect something whose lifetime is more than 20 years. Later in this paper we will advocate a pro-active security policy whereby keys are renewed as the state of the art in breaking keys advances.
II. Methods of Attack
A. General Methods for RSA and DL: The Number Field Sieve
The General Number Field Sieve (GNFS or just NFS) is a general purpose algorithm for either factoring large integers or for solving an ordinary discrete logarithm problem. Its run time depends only on the size of the number being factored, or the size of the underlying field for the discrete logarithm problem. It is the fastest method known today and has two phases. In the first phase, a sieving operation requiring moderately large amounts of memory is conducted on independent computers. This phase is used to construct a set of equations. In the second phase, a large Cray with massive memory has been used to solve this set of equations. Once a solution has been found to this set of equations, factoring the number or solving the discrete log takes a minuscule amount of time and memory.
The amount of time it takes to factor a number of x bits is asymptotically the same as the time it takes to solve a discrete log over a field of size x bits. However in practice solving discrete log problems has been more difficult than factoring equivalent numbers. While there are several reasons for this, the main reason is that solving the matrix for a discrete log must be done using multi-precision integer arithmetic while the matrix for factoring is solved mod 2 and thus one can use simple bit operations. It has been estimated that for large x, one can break a discrete log of size x-30 in about the same time as factoring an x-bit number. Throughout this paper we shall assume that solving the two problems are equivalent under NFS and henceforth shall only discuss factoring.
A.1 Complexity of Attacks
The TIME required to factor N using the General Number Field Sieve is
L(N) = exp((c + o(1)) * (log N)^(1/3) * (log log N)^(2/3))
This function combines the time to do the sieving and the time to solve the matrix. Theoretically each takes an equal amount of time. However, for problems done thus far solving the matrix has taken only a fraction of the time to do the sieving. However, this fraction has been increasing as the numbers get larger.
The SPACE required scales with SQRT(L(N)).
Once you have a benchmark for a particular N, you can predict the difficulty of factoring a larger number N′ relative to the difficulty of factoring N by computing L(N′)/L(N) to get a time estimate and SQRT(L(N′)/L(N)) to get a space estimate. Estimating the ratio this way ignores the effect of the o(1) term in the exponent, but Silverman showed that at least for the Special Number Field Sieve, this term is very small. It does not materially affect the results given herein if its value at 512 bits is (say) .05, while its value at 1024 bits is .01. The results of suggest that these values are not far off.
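The extrapolation just described is easy to carry out numerically. The sketch below uses the standard NFS constant c = (64/9)^(1/3) ≈ 1.923 and drops the o(1) term, as the text suggests; the absolute values of L(N) are meaningless on their own — only the ratios matter:

```python
import math

def nfs_work(bits, c=(64 / 9) ** (1 / 3)):
    """Exponent of L(N) = exp(c * (log N)^(1/3) * (log log N)^(2/3))
    for N = 2^bits. Returning the exponent (natural log of the work)
    avoids floating-point overflow for large keys."""
    log_n = bits * math.log(2)
    return c * log_n ** (1 / 3) * math.log(log_n) ** (2 / 3)

time_ratio = math.exp(nfs_work(1024) - nfs_work(512))   # roughly 10**7
space_ratio = math.sqrt(time_ratio)                     # memory scales as sqrt
print(f"1024-bit vs 512-bit time:  {time_ratio:.3g}")
print(f"1024-bit vs 512-bit space: {space_ratio:.3g}")
```

Under this model a 1024-bit modulus is millions of times harder to factor than a 512-bit one, with memory requirements a few thousand times larger — consistent with the hardware discussion that follows.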
A.2 Hardware Requirements and Availability
The sieving phase of the number field sieve depends for its speed upon the ability to rapidly retrieve values from memory, add to them, and put them back. Therefore, the size of the data cache and speed of the memory bus have a strong impact upon the speed with which the sieve can operate. Furthermore, the sieve operation requires moderately large (but manageable within the state-of-the-art) memory. For a 512-bit key, 64 Mbytes per sieve machine was adequate. However, as the size of the key (and hence the run time) increases, so does the memory required. As shown above, required memory scales with the square root of the run time. Thus, a problem 10 times more difficult than RSA-512 (in terms of time) would require sqrt(10) times more memory [for both phases], or about 200 Mbytes for the sieve machines. If one uses a cost-based model, as we suggest below, the cost of memory very quickly becomes the binding constraint in terms of how much hardware can be acquired.
Historically, as machines became faster, sieving speed did not keep up with machine speed. The main reason for this is that while CPUs were getting faster, memory bus speeds and the sizes of data caches were not keeping pace with speed improvements. However, recent progress in increasing the size of the data cache with each new generation of Pentiums, along with increases in bus speed, has alleviated this difficulty. Indeed, reference [1, p. 19] even noted a super-linear increase in speed when moving from a Pentium I to a Pentium II based computer. Lenstra and Verheul suggest that this is due to processor improvements, but we do not believe this viewpoint is correct. The super-linear speed increase can be more readily explained by an increase in cache size and an increase in memory bus speed from 66 MHz to 100 MHz. A fundamental question is therefore: will cache sizes and memory bus speed continue to scale according to Moore’s law? There is reason to believe the answer is NO. Lenstra and Verheul’s modification of Moore’s law says that total CPU cycles doubles every 18 months rather than just doubling processor speed every 18 months. The modification combines an increase in machine speed with an increase in the number of available machines. It seems clear that if one only considers increases in machine speed, without allowing the number of machines to increase, one cannot achieve a doubling every 18 months. There is data to support this viewpoint. The VAX, a nominal 1-MIPS machine, introduced in 1977 had a memory bus which ran at 13 MHz. Now the latest generation of PCs has bus speeds of 133 MHz. This is far short of a doubling every 18 months. Further, 10 years ago a state-of-the-art workstation such as a Sparc-10 had an available data cache of 256 Kbytes. Today’s Pentium based PCs have data caches typically around 512 Kbytes. This too falls far short of a doubling every 18 months.
Thus, while processor speed increases have followed Moore?s law over the last 20 years, other parts of the computer which influence sieving speed have not kept pace. See reference for a more complete discussion of this.
Unless improvements in bus speed and cache size keep pace with improvements in CPU speed, we will once again return to the situation where improvements in processor speed do not help very much in speeding sieving.
Another issue involved in sieving hardware is the size of the required memory. Today’s 32-bit machines typically can only address 2 Gbytes of user address space. This represents a factor of only 32 over the memory needed for RSA-512. Data given in section III shows that once keysizes exceed about 710 bits, the memory needed for sieving will no longer be able to be addressed on 32-bit computers. We note that while 64-bit processors are available, it does not appear likely that they will become sufficiently widespread to be useful in attacks for some time to come. There is also a (somewhat speculative) concern about whether the market will routinely demand multi-gigabyte machines. There are few applications today which require multi-gigabyte memories. Servers are one such, but they are not available as distributed sieving machines; they have little idle time and are dedicated to other purposes. While they might contribute some CPU time, the proportion of their processing power so contributed would be limited by their other processing demands. It is certainly questionable whether desktop machines with memories in the 200 Gbyte range will become available anytime soon. As may be seen in section III, 170 Gbytes per machine is what is needed to conduct an attack on 1024-bit keys.
A.2.2 Linear Algebra
The assumptions made in II A, above, imply that solving the matrix can be done perfectly in parallel. If we can only efficiently solve matrices on small sets of machines running in parallel, it means that each of these machines will need not just large, but massive RAM memories. To do a 1024-bit key will require about 6 Terabytes of memory to hold the matrix. We do not know of the existence of a single machine today, or a tightly coupled set of machines, with this much memory. Indeed, 6 Tbytes is beyond the ability of 32-bit machines to even address, even if distributed among several hundred machines. Trying to estimate when such a machine might become available requires a crystal ball. However, throughout the rest of this paper we assume that the available machines for sieving can somehow be tightly coupled together and used for solving the matrix. This assumption is unrealistic, but allows us to be conservative in discussion of key sizes.
While historically a large Cray has been used to solve the equations, there is no theoretical reason why a set of tightly coupled parallel processors could not do the same thing. Montgomery is experimenting with such an implementation now. Solving large linear algebra problems in parallel has been a topic of research in computer science for a long time. Many papers have been written and there is unanimous agreement that such implementations do not scale well. As one increases the number of processors, communication becomes a bottleneck and total speedup rapidly departs from linear. Indeed, Montgomery’s early results do not look encouraging. He reported less than 20% per processor utilization even on a very small (16 machines) tightly coupled fast network. And the drop in per-processor utilization could readily be observed in going from 8 to just 16 processors. While one might theoretically develop a custom-purpose machine for solving such problems, to date no one has proposed such a machine. Its design is still an unsolved research problem.
The Block Lanczos algorithm, which is the algorithm currently employed, is also very close to the theoretically best possible algorithm for solving large matrices mod 2. The best that can be done theoretically, in terms of the number M of rows in the matrix, is O(M^(2+S)). The Block Lanczos algorithm already achieved this with S ~ .2 while breaking RSA-512. Therefore the prospects for improved algorithms for solving the matrix do not look very good. One of the assumptions of the Lenstra and Verheul paper is that algorithmic improvements in factoring will continue. It would seem therefore that this assumption of theirs is not correct unless future factoring algorithms can either reduce the size of the matrix, or eliminate the need for solving it. However, the assumption of this paper is that we only extrapolate from existing algorithms.
It is also quite difficult to predict exactly how much faster (say) 1000 machines running in parallel will be than 1 machine. Therefore, throughout this paper we make the unrealistic, but very conservative estimate (conservative with respect to key size) that one can achieve linear speedup when solving the matrix in parallel.
If anything, recommending key sizes based on this assumption will overstate what is possible. Note also, that solving the matrix requires a different parallel architecture than the one assumed for sieving. While sieving each machine runs independently. If a machine or group of machines becomes unavailable, it only means that the sieving slows down by a little bit; the other sieve machines can continue their work. However solving the matrix requires that machines share data frequently. Thus, a single machine stopping can stall all the other machines because they are waiting for data to be forthcoming from the stalled machine. Also, a LAN does not have the bandwidth and its latency is too high to support a parallel group of machines that must communicate frequently. Therefore any parallel matrix solver must be done on a dedicated set of machines with a very fast interconnection scheme. Such a machine must also be fault-tolerant. If a processor goes down, then its current work assignment must be reassigned. Then the other processors must wait while the lost computation is recomputed. It also takes some time to detect a lost processor. Each time it happens a latency is incurred before the entire computation can continue. As the number of machines increases, the probability of a machine being down at any given time increases. A machine being down does not have to be due to hardware problems. In a distributed network (even tightly coupled) sets of machines can become unavailable because of firewall maintenance, software installations/reboots, power failures, someone moving a machine, etc. etc. These problems create strong practical difficulties in getting this kind of application to work. We know of no successful effort to solve linear algebra problems in parallel that involved more than just a few thousand processors at once.
The idea of using “idle” time on a loosely coupled network simply will not work. The machines need to be tightly coupled, they need to be dedicated to just this task, and they require special interconnection hardware that goes beyond what is provided by (say) a local FDDI net or the Internet. Reference assumes [page 18] that the computers needed for attacks will be available for free. This assumption is reasonable for the sieving phase, but it is totally unrealistic for the linear algebra phase. To do a 1024-bit key will require a tightly coupled parallel computer with 6 Terabytes of memory and a fast interconnection scheme. This kind of hardware costs a lot of money.
B. Special Methods for RSA and DL
Special methods, such as the Elliptic Curve factoring Method (ECM) have virtually no chance of succeeding in factoring reasonably large RSA keys. For example, the amount of work needed to factor a 1024-bit modulus with ECM is about 1017 MIPS-Years and only succeeds with probability 1 ? 1/e. The Number Field Sieve is roughly 10 million times faster. While the ECDL machines of Wiener [section V.] can be trivially modified to run ECM factoring rather than DL, the amount of arithmetic is still unrealistic. The Number Field Sieve, run on ordinary computers is faster. We do not elaborate further on this subject, but refer the interested reader to reference .
III. Historical Results and the RSA Challenge
We give below some historical results for record factorizations using a general purpose method. While in many cases the number being factored had special form, the method of attack did not depend on this special form. The data is tabulated and plotted below. Size is given in decimal digits. The Number is sometimes a co-factor of the listed number, after small primes have been divided out.
TABLE 1: Historical Factoring Records
|1970||39||2128 + 1||Brillhart/Morrison||CFRAC||IBM Mainframe|
|1978||45||2223 – 1||Wunderlich||CFRAC||IBM Mainframe|
|1981||47||3225 – 1||Gerver||QS||HP-3000|
|1982||51||591 – 1||Wagstaff||CFRAC||IBM Mainframe|
|1983||63||1193 + 1||Davis/Holdridge||QS||Cray|
|1984||71||1071 – 1||Davis/Holdridge||QS||Cray|
|1986||87||5128 + 1||Silverman||MPQS||LAN Sun-3’s|
|1987||90||5160 + 1||Silverman||MPQS||LAN Sun-3’s|
|1988||100||11104 + 1||Internet||MPQS||Distributed|
|1990||111||2484 + 1||Lenstra/Manasse||MPQS||Distributed|
|1991||116||10142 + 1||Lenstra/Manasse||MPQS||Distributed|
One thing surprising about this data is that it is VERY linear. A simple least-squares fit yields the equation: Size = 4.23 * (Year ? 1970) + 23. The correlation coefficient is about .955. This is somewhat puzzling. The algorithms are sub-exponential. That is, their run time grows more slowly than an exponential function. Moore?s law is strictly exponential. It would seem, therefore, based on theoretical grounds that this curve should be growing FASTER than linearly. A possible explanation for this is that even though records continue to be established, that over time the level of effort applied to each effort has dropped. Breaking RSA-129 required a large effort involving many thousands of machines widely distributed over the Internet. Compared with that, breaking RSA-512 was done by a relatively small group of people using far fewer (albeit much faster) machines. If we solve for when a 1024-bit number may be expected to be factored, based solely on extrapolating this historical data, we get an answer of around 2037.
Brent, arguing from theoretical grounds in , assumes that given Moore?s law, (keysize)1/3 should grow linearly with time. He derives the equation:
Year = 13.25 * (SIZE)1/3 + 1928 (SIZE in digits) and extrapolates that 1024-bits should become breakable in 2018. We agree with his theoretical model, but strongly suggest that the historical data shows that key sizes grow linearly with time based upon public efforts. Brent further states that Moore?s law suggests 6-7 digits/year advancement in key sizes. We note however, that the historical data suggests instead a growth of about 4.25 digits/year. However, both Brent?s estimate and ours suggest that 1024 bits keys should be safe for a minimum of 20 years from public efforts. It is impossible to say what private, unpublicized efforts might have achieved.
As a basis of comparison we shall use data from the break of RSA-512. This effort required a total of 8000 MIPS-Years to do the sieving, represented by about 300 PC?s averaging 400 MHz and with at least 64 Mbytes of RAM, running for 2 months, and 10 days and 2.3 Gbytes of memory on a Cray C90 to solve the matrix. Using this data we can predict how much harder it is to factor a number of 576, 640, 704, 768, 1024 or 2048 bits:
|L(2576)/L(2512) ~ 10.9||SQRT(L(2576)/L(2512)) ~ 3.3|
|L(2640)/ L(2512) ~ 101||SQRT(L(2640)/L(2512)) ~ 10.0|
|L(2704)/ L(2512) ~ 835||SQRT(L(2704)/L(2512)) ~ 29|
|L(2768)/L(2512) ~ 6 x 103||SQRT(L(2768)/L(2512)) ~ 77|
|L(21024)/L(2512) ~ 7 x 106||SQRT(L(21024)/L(2512)) ~ 2650|
|L(22048)/L(2512) ~ 9 x 1015||SQRT(L(22048)/L(2512)) ~ 9 x 107|
576 bits will take 10.9 times as long as RSA-512 and require 3.3 times the memory.
768 bits will take 6100 times as long as RSA-512 and require 77 times the memory.
1024 bits will take 7 million times as long as RSA-512 and require 2650 times the memory.
Note: Space scaling is the same for both the sieving phase and for storing the matrix.
To put this in perspective, it would require about 1.4 billion 500 MHz machines, each with about 170 Gbytes of memory to do the sieving for a 1024-bit number in the same time as RSA-512. While a hacker might try to steal cycles on the Internet by creating a ?Number Field Sieve Worm? it is hard to see how such an attack could find enough machines with enough memory to make such an attack feasible. Further, such an attack would be detected and shut down rather quickly as with the Robert Morris worm. Of course increasing speed will reduce the required number accordingly. It would take a single Cray with 6 Terabytes of memory approximately 70 million days (192,000 years) to solve the matrix. One could reduce this to a mere 19 years with 10000 Crays each with only 600 Mbytes of memory running perfectly in parallel. It is likely that within 10 years common desktop machines will be as fast or faster than a Cray C90 is now. However, it is unlikely in the extreme that 10000 machines running in parallel will be anywhere close to 10000 times as fast as one machine. It would require 10 million such machines running perfectly in parallel to solve the matrix in about the same time as that for RSA-512.
Note that Moore?s law assumes that processor speed doubles every 18 months. If one accepts the premise that this will not change, then one expects to see a speed increase of 7 million (needed to do a 1024-bit number relative to RSA-512) in 34 years. This is in fairly close agreement with the estimate of 2037 taken from the historical data above.
It might be argued that this historical data lies BELOW what was theoretically achievable at each data point; that the data represents only modest efforts to break large keys. For example, to do a 600-bit key is about 25 times harder than RSA-512 and would require 5 times the space. Thus, it could be done in the same time as RSA-512 with about 5000 PC?s each with 320 Mbytes of memory for the sieving. The Cray C90 would take 250 days and about 11 Gbytes of memory to solve the matrix. Perhaps this is closer to what is theoretically achievable now. We note that 1024-bits is about 292,000 times harder than 600 bits. Based on Moore?s law we can expect a factor of 292,000 improvement in about 27 years.
According to PC magazine, approximately 130 Million PC?s were sold in 1999. A substantial fraction of these will have been 32 Mbyte machines. We do not have data on the exact fraction. Note that such machines could not even be used for the attack on RSA-512 because they do not have enough memory. It is not unrealistic to assume that with few exceptions machines sold more than 5 years ago simply are not big enough to run an NFS attack. A question we are unable to answer is: how many machines are there on the Internet today that might be both large enough and available for an attack? Lenstra and Odlyzko focus on the total CPU power that might be available, while ignoring that many of those machines are unusable and that among those that are suitable, logistical problems will mean that a fair fraction will not be accessible.
At the rate of 130 million machines/year, it will take more than 10 years for enough machines to be sold to even attempt sieving for a 1024-bit modulus. The number of required machines will decrease as they get faster. However, among these machines most will not have enough memory to be able to be used in an attack. It is also unrealistic to believe that all machines that are sufficiently large will be available for an attack. Other attempts to define key sizes, such as Lenstra?s and Odlyzko?s have based their estimates on the assumption that most of the CPU cycles on machines attached to the Internet will be available for attacks. These prior estimates have conveniently ignored the fact that most machines simply are not big enough to be used in an attack even if they are available.
IV. Security Estimates for RSA
A. Models of Computation
A.1 Computational Equivalence
Two algorithms are said to be computationally equivalent if they require the same number of computer operations to complete. Traditional estimates of key size equivalents for different public key algorithms have looked only at computation equivalence. We strongly believe that this approach is wrong because it assumes that CPU TIME is the only constraint that keeps keys from being broken. While this is certainly true if the amount of memory needed to execute an algorithm is negligible, it is not true when memory requirements become prohibitive. For example suppose that key A1 for algorithm A takes 10 hours to break and the same for key B1 of algorithm B. These keys are equivalent in time. However, if breaking algorithm B requires 10 Gbytes of memory, while algorithm A requires 10 Kbytes, then it is certainly safe to say that algorithm B is harder to break. It requires more hardware and hence costs more. But what if A takes 10 hours and 10 Kbytes, but B takes only 5 hours with 10 Gbytes? Which algorithm is easier to break? The answer would depend on the relative weighting one gave to time versus memory.
A.2 Space Equivalence
When both TIME and SPACE are needed resources to run an algorithm, to equate different algorithms on the basis of TIME only is arbitrary. Why use TIME only? We believe that the answer is twofold. The first is historical ? there has always been a historical bias towards time equivalence.
The second reason is ignorance. Many people simply are not aware that complexity theory measures SPACE as well as time, nor are they aware that sometimes an algorithm cannot be run because of space as opposed to time limitations. It is also possible that they assume SPACE problems can be overcome.
Rather than ask: what RSA key size is equivalent to 56-bit symmetric in terms of the time needed to break each key, why not ask: How big does a symmetric key need to be before it is as hard to break as (say) RSA-512 in terms of SPACE? The answer would be millions, if not billions of bits, and this is clearly ridiculous. Yet to measure the problems against one another in terms of SPACE is no more nor less arbitrary than to measure them purely in terms of time. Both time and space are binding constraints and must therefore be considered together.
A.3 Cost Equivalence
While we are in agreement with Lenstra and Verheul that the cost of computer hardware changes with time and is somewhat fluid, we note that the same thing is true of computer speed and memory. It is our belief that measuring key size equivalents in terms of what can be broken in a given amount of elapsed time and with a given amount of money is closer to measuring the true equivalence between different public key methods. We give such estimates in section VIII.
B. Discussion of Moore?s Law
There has been much emphasis placed on Moore?s law with respect to predicting what key sizes will be safe 10 or 20 years from now. We note that technology usually follows a sigmoid growth curve; i.e. the curve is S-shaped: there is a ramp-up period in which technology is immature, followed by explosive exponential growth, followed by a gradual tailing off in improvements as technology matures. They question is: where are we with respect to computer technology? Some believe that speed will continue to double every 18 months for the indefinite future, while others such as Stan Williams [Hewlett Packard?s chief architect ] sees growth slowing down in the near future. The reasons given are that it is becoming progressively harder and more expensive to fabricate smaller circuits and that we are not far off from inherent quantum mechanical limits on MOS-FET silicon technology. While other technologies may hold promise, their viability is hard to predict. For example, there was a time when Josephson junction technology was touted as a cure for silicon limitations, but no one could make it work. Gallium Arsenide has had similar problems.
This paper makes projects based only upon extrapolations of existing technology. It seems safe to assume that Moore?s law will continue for 10 years or so. Beyond that seems impossible to predict. More than a 10-fold improvement over existing computers would seem to require new technology and not just incremental improvements on our current techniques. We re-emphasize our conclusion from section III: even if Moore?s law continues to hold, it will still require at least 27 years to reach a point where 1024-bit keys are vulnerable to a public effort. The cost of the hardware (many millions of machines even at improved speeds) places a private attack out of reach for all except perhaps governments.
C. Predictions based upon Shamir?s TWINKLE Device
Shamir has proposed a custom purpose device that replaces traditional computers for the sieving operation. This device does seem within current technology to build. The TWINKLE device by itself is however useless. It requires one or more PC?s to act as backend processors, feeding it data and retrieving its results. This joint paper with Lenstra reaches the following conclusions:
- TWINKLEs can be built for about $5000 each in quantity.
- To do a 768-bit number requires about 5000 TWINKLEs, supported by 80,000 PCs. The computation would take 6 months.
- They estimate that the 80000 PC?s if connected to a fast enough network and if dedicated solely to the task could solve the matrix in 3 months also provided that they were attached to a single, central PC large enough to hold the entire matrix. The central PC would therefore require about 160 Gbytes of memory. They acknowledge that these estimates have not been confirmed by an actual implementation.
- If 768-bits is attempted on ordinary PC?s (without TWINKLEs) they write:
?PC?s with 5 Gigabyte RAM?s which are needed to run the special q siever in standard NFS factorizations are highly specialized. Only a negligible number of such machines exist, and they have few other applications?.
From these conclusions, we conclude the following:
- The quoted remark in 4. above is totally inconsistent with assumptions made in reference . Reference deliberately ignores memory problems, saying in effect that memories will grow in size to the point where they will be sufficient for larger keys. But if there are few machines capable of handling 768 bits today, where will machines with 128 Gbyte memories [needed for 1024 bits] come from? Yet Lenstra and Verheul conclude in that 1024 bits will be vulnerable in 2002 using ?for free? machines distributed on the Internet. However, this conclusion was reached based on a primary assumption of that paper: that DES was breakable in 1982. We simply observe that DES was not actually broken until 1997. Further, the remark in 4. about having few applications might be taken to imply the following: While it might become theoretically possible to build multi-gigabyte machines at reasonable cost in 10 years, there are few applications which demand such machines. But machine capabilities are driven by market needs. If there is little need for such machines, will they become readily available in 10 years? We do not pretend to have an answer to this question. We also note that one cannot even put 5 Gbytes of RAM on today?s 32-bit PC?s.
- It does not seem possible to tightly couple 80,000 processors with today?s technology. The largest such machines today of which we are aware typically consist of 1000 to 2000 processors at most and are very expensive.
- Doing 1024 bits with TWINKLEs would be 1400 times harder (7 million/5000) than 768 bits. Thus, attempting 1024 bits even with the aid of TWINKLEs still seems way beyond reach.
- If one could do the sieving INFINITELY fast and at zero cost, solving the matrix for 768 bits and beyond is still prohibitive.
V. Elliptic Curve Cryptosystems
A. Method of Attack
The best known attack against an Elliptic Curve Discrete Log system is based upon a collision attack and the birthday paradox. One expects that after computing approximately sqrt(order of the curve) points, that one can find two points that are equivalent under an algebraic relationship. From this collision, the key can be found. Thus, the best known attack is purely exponential in the size of the key. The time complexity is:
T(k) = sqrt(pi/2) * 2k/2, where k is the bitsize of the order of the basepoint.
The space requirements are modest, even for large k.
B. Time and Cost of Attack
Wiener, in 1996, proposed a special purpose hardware design that for $10 million could break a 155-bit Elliptic Curve key over a 120-bit sub-field in 32 days. The time to do a k-bit Elliptic Curve is then 32 * sqrt(2k-120) days with one of these machines. It is likely that a faster machine could be designed and implemented today, and we will assume a machine that is about 50 times faster and can therefore break the given key in about 12 hours rather than 32 days.
VI. Symmetric Key Systems(Private Key Systems)
A. Method of Attack
The best known attack against symmetric key systems is a brute-force search of the key space. Nothing else seems to work. Thus, the attack is purely exponential. Thus, for a k-bit symmetric cipher, the expected time to break it is
T(k) = 2k-1.
The space requirements are trivial: a few kilobytes suffices.
B. Time and Cost of Attack
Wiener, in 1993, designed a $1 million DES cracking machine which would crack a DES key in 3.5 hours. This is slow by current technology standards and can easily be improved. We shall assume a machine that is 100 times faster than this one.
VII. RSA Key Size Recommendations
A. Near Term
Breaking a 1024-bit key with NFS is impossible today. Enough sufficiently large machines simply do not exist to do the sieving, and solving the matrix will require a major technology breakthrough. This situation should remain for at least 20 years. No foreseeable increase in machine speed or availability will allow enough hardware to be gathered.
Further, 768 bits seems unreachable today, even utilizing the TWINKLE device, because of the difficulties in dealing with the matrix. However, we do believe that 768-bits might be breakable in about 10-years time. The fitted data from section III gives a date of 2019 for breaking such a key by public effort.
With respect to yet even larger key sizes we note that for a 2048-bit key, the matrix will closely approach the address limits of even a 64-bit processor (1018 bytes or so) and that the total data collected during sieving will exceed the address limits. There is no predictable time horizon for when 128-bit computers may appear.
B. Pro-Active Security
We strongly advocate a policy of pro-active security. Software which uses public keys should not statically define key sizes according to what is infeasible today. Instead, software should provide the capability to instantiate new keys and new key sizes as the art in breaking keys progresses. Systems need to be flexible ? to change keys as needed, to resign as needed, to re-protect data as needed and to timestamp where appropriate.
While changing keys and resigning documents protects signatures against cryptanalytic advances, there is no way to protect old data that was publicly transmitted under old keys. Therefore key sizes must be selected carefully according to the lifetime of the data. It will do an adversary no good to break a current key in (say) 10 years, if the data becomes useless after 5 years.
VIII. Cost-Based Key Size Equivalencies
We assume that a PIII processor at 500 MHz can be acquired for $100.00 and that memory costs $.50/Mbyte. These assumptions are slightly optimistic, given current costs but making this choice yields conservative key size estimates. This section presents key size equivalents for RSA, Elliptic Curves, and Symmetric Key systems using a cost-based model. We assume that $10 million is available to conduct an attack.
Consider using Wiener?s Elliptic Curve cracker for a 120-bit subfield as a data point in constructing the table below. If one extrapolates downward to 112 bits, this problem is sqrt(28) or 16 times easier. It seems that such a machine could break a 112-bit EC key in about 45 minutes. Note that this time estimate is quite sensitive to estimates of the speed of the machine and one has never been built.
We expect that today a machine could be built that is 100 times faster than Wiener?s DES machine. Thus, we assume that today one could build a machine for $10 million which would break a DES key in .03 hours or about 100 seconds.
Based upon a purely computational model, the amount of arithmetic needed to break a 56-bit DES key is about the same as that needed to break an EC key which is twice that size: 112 bits. However, the Wiener designed 56-bit DES cracker seems faster than his equivalent 112-bit EC cracker. We take as a base point in the table below the assumption that 56-bit DES could be broken in ?about? 5 minutes with the right hardware and that this is indeed equivalent to 112-bit EC.
While the TWINKLE device of Shamir seems to be a very effective way of doing the sieving for RSA keys in the (say) 512 to 700 bit range, even Shamir admits that the device is unlikely to scale to where it is effective in attacking 1024 bit keys. Thus, for the table below we assume a software only attack using PC?s for sieving and tightly coupled PC?s for the linear algebra. We assume 500-MIPS machines and that the number of such machines available for $10 million is:
107/(100 + .5 * required memory in Mbytes)
The denominator represent a per machine cost of $100 for the processor plus the cost of the memory. The required memory is assumed to be
64 Mbytes * SQRT( L(2keysize)/L(2512) ) since 64 Mbytes was required for RSA-512.
We assume that the total memory for all the sieve machines is adequate to hold the matrix and that these same machines can be tightly coupled. Therefore, if we have F dollars to spend on hardware, and time T (in months) for an attack we obtain the following formula:
This formula takes as a baseline that RSA-512 took 2 months on 300 PC?s, each with 64 Mbytes of memory. This explains the term T/2 in the numerator and the numbers 300 and 64 in the denominator. The value of N that satisfies the above equation is the modulus that can be broken for $F and time T.
In reality there would be a large additional cost for the fast interconnection network needed for the tightly coupled parallel machine used to solve the matrix, but we ignore this component. Doing this can only make our recommended key sizes more conservative because if we include this cost it means fewer machines are available for $10 million and hence the key size that can be attacked would be smaller.
This table gives key size equivalents assuming that $10 million is available for computer hardware. It assumes that EC key sizes should be twice the Symmetric Key sizes.
Table 2: Cost Equivalent Key Sizes
|Symmetric Key||EC Key||RSA Key||Time to Break||Machines||Memory|
|56||112||430||less than 5 minutes||105||trivial|
|80||160||760||600 months||4300||4 Gb|
|96||192||1020||3 million years||114||170 Gb|
|128||256||1620(1)||1016 yrs||.16||120 Tb|
The table above gives cost-equivalent key sizes. It gives the size, in bits, for equivalent keys. The time to break is computed assuming that Wiener?s machine can break a 56-bit DES key in 100 seconds, then scaling accordingly. The ?Machines? column shows how many NFS sieve machines can be purchased for $10 million under the assumption that their memories cost $.50/Mbyte.
Note: (1) The memory needed for an RSA equivalent to 128-bit symmetric is about 120 Terabytes. Such a machine cannot be built for $10 million, therefore there is no RSA equivalent for $10 million because one cannot build .16 of a machine. Note further that the universe is only 15 x 109 years old. Suppose therefore that instead of allowing $10 million for an attack, we allow $10 trillion. Now, the attack on the symmetric and EC keys takes ?only? 1010 years ? about two-thirds the lifetime of the universe. The RSA equivalent for this is about 1620 bits. This key is 4 x 1012 times harder than RSA-512. Each machine requires 1.2 x 1014 bytes of memory, and we can purchase about 158000 of them for $10 trillion.
Note that traditionally a strict equivalence has been made between 80-bit symmetric, 160-bit EC and 1024-bit RSA. This equivalence has been based PURELY on the assumption that the only required resource is CPU time. It has ignored the size and the cost of memory needed to break a 1024-bit RSA-key. The large difference in RSA key size (760 bits vs. 1024) comes solely from the fact that for fixed FINANCIAL resources, the cost of memory is by far the largest cost associated with breaking an RSA key and computational equivalence ignores memory.
Here are examples of how the RSA equivalents were computed:
With 100000 machines and 5 minutes instead of 2 months, we can solve an RSA problem that is about 26 times easier than RSA-512. L(2512)/L(2430) is about 26 if we ignore the time to solve the matrix. Thus, this estimate is conservative.
760 bits is about 4800 times harder than 512 bits. This requires about 4.4 Gbytes per machine. Each one costs $2300, giving about 4300 machines for $10 million. RSA-512 took 2 months with 300 machines, thus 760 bits should take 9600 months with 300 machines or about 670 months with 4300 machines. Thus, the estimate of 760 bits is slightly conservative.
Changing the amount of money available for an attack does not change key size equivalents. All this does is reduce the time needed for an attack to succeed. Thus, for $100 million breaking a 760 bit RSA key takes 60 months instead of 600 months. But this is still equivalent to an 80 bit symmetric key because the time to break the latter is similarly reduced.
While the Lenstra and Verheul paper reaches the conclusion that 1024-bit RSA keys will be safe only until 2002, we find this conclusion baffling. This conclusion was reached by assuming that 56-bit DES was vulnerable in 1982 despite the fact that it was not until 1997 that DES was actually broken. Does anyone seriously believe that we can solve a problem 7 million times harder than RSA-512 (and needing 6 Terabytes of memory) within just the next few years when RSA-512 was only recently done?
The cost of memory and the difficulty of scaling hardware for solving the matrix suggests that 1024 bit keys will be safe for at least 20 years (barring an unexpected new factoring algorithm). The hardware does not exist today which will allow an NFS attack on a 1024-bit key. Discussion of ?total cycles available on the Internet? is irrelevant if machines are not large enough to run NFS.
There are basically four reasons why someone might want to break a cryptographic key:
- Economic gain. For hackers with such a motive, the cost of conducting
the attack on suggested key sizes is prohibitive. Such an attack must
be conducted in secret, otherwise the supposed victim could just change
the key. Hence, the model of doing sieving ?for free?, on the Internet
does not apply because such an attack could not be secret.
- Malice. An attacker might want to break a key to be malicious, but
once again such an attack must be done in secret. Is it conceivable
that a corporation with large resources might want to break some key
from malice or that it could be done in secret? A typical Internet hacker
could not possibly obtain the required resources in private.
- Research. An attack might be conducted only to establish a limit on
what can be done. But such an attack would not be done on a particular
user?s key and therefore does not threaten existing keys.
- National Security or Law Enforcement. Citizens should not fear a government attack on a key for economic motives as the cost of attack will exceed the economic gain that might be derived from breaking a key. Citizens do need to be concerned about government intrusions on privacy. It seems to be an unanswerable question as to what level of effort might be expended by a government to retrieve a key for such a purpose.
We suggest that users maintain a flexible, pro-active policy in which they are able to change to larger keys as the art in breaking keys improves.
If one accepts a purely computational model of equivalence, as opposed to a financial model equivalence, then we agree with the key size equivalents given in reference . However, we do not agree with the conclusions about when large RSA keys will be vulnerable.
- (1) Lenstra, A., and Verheul, E. Selecting Cryptographic
* See http://www.cryptosavvy.com.
- (2) Lenstra, A., and Shamir, A., Analysis and Optimization of the TWINKLE Factoring Device.
- (3) Brent, R., Some Parallel Algorithms for Integer Factorization.
- (4) Q & A with Stan Williams, Technology Review, Sept 1999 pp. 92-96.
- (5) Silverman, R., Exposing the Mythical MIPS-Year, IEEE Computer, Aug 1999 pp. 22-26.
- (6) Montgomery, P. Parallel Implementation of the Block-Lanczos Method, RSA-2000 Cryptographers Track.
- (7) Silverman, R. The o(1) Term in the Complexity of the SNFS. Rump Session talk, Crypto ?97.
- (8) Silverman, R., & Wagstaff Jr., S. A Practical Analysis of the Elliptic Curve Factoring Method, Mathematics of Computation, vol. 61, 1993, pages [445-463].
- (9) Odlyzko, A., The Future of Integer Factorization, CryptoBytes 1 (1995) pp. 5-12.
- * Robert Silverman is a senior research scientist at RSA Laboratories in Bedford, MA. He has an A.B. from Harvard in Applied Mathematics and a Masters (an ABD) from the University of Chicago in Operations Research. he spent four years at Data Resources Inc. and ten years at the MITRE Corporation where he was a Principal Scientist. His research interests include parallel and massively distributed computing, computational number theory, algorithmic complexity theory and general design and analysis of numerical algorithms. He is a member of the American Mathematical Society. | <urn:uuid:1f6de798-ec3c-4647-ba49-b0dcebf60158> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/historical/a-cost-based-security-analysis-key-lengths.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935991 | 10,532 | 2.921875 | 3 |
A conductive ink (CI) is a thermoplastic viscous paste that conducts electricity by incorporating conductive materials such as silver and copper. The ink comprises a binder, a conductor, a solvent, and surfactants used during its manufacturing process. The Asia-Pacific conductive inks market was valued at $2,349.2 million in 2012, and is projected to reach $2,654.6 million by 2018, growing at a CAGR of 3.0% from 2013. The Asia-Pacific conductive inks market has grown considerably over the past few years and is expected to grow at a more rapid pace over the next five years. Silver flakes are the major type of conductive ink and are in high demand in Asia-Pacific.
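As a quick sanity check on the headline figures, the growth rate implied by the two endpoint valuations can be computed directly. Note that the report quotes a 3.0% CAGR from a 2013 base (whose value is not given here), so the rate implied by the 2012 and 2018 endpoints comes out slightly lower; the helper functions below are a generic sketch, not taken from the report.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def project(start_value, rate, years):
    """Project a value forward at a constant annual growth rate."""
    return start_value * (1.0 + rate) ** years

# Rate implied by the report's 2012 and 2018 valuations ($ million), 6 periods:
print(f"{cagr(2349.2, 2654.6, 6):.2%}")  # → 2.06%
```

The small gap between the implied 2.06% and the quoted 3.0% is consistent with the report measuring its CAGR over 2013–2018 rather than from the 2012 valuation.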
The binder holds together all the conductive materials in the ink and provides strong support to the product. It is particularly important in applications that require high reliability and flexibility. The conductor is another key component of the ink, as it allows the passage of electricity. The conductors used in conductive inks include silver, copper, nickel, and aluminum, among others. The solvent is used to form the solution, while the surfactants help in uniform mixing of the ink. Conductive inks have various applications, such as photovoltaics, membrane switches, automotive, RFID/smart packaging, bio-sensors, printed circuit boards, and others.
The continuous rise in production of end products, both for use within the region and for export, drives a huge demand for these chemicals. Growing demand and policies covering emission control, environmentally friendly products, and the like have led to innovation and development in the industry, making the region a strong chemical hub globally. This rapid growth and innovation, along with ongoing industry consolidation, are expected to ensure a bright future for the industry in the region. China is the major consumer of conductive inks in Asia-Pacific, accounting for 67.7% of total consumption, followed by Japan and South India.
The key countries covered in the Asia-Pacific conductive inks market are China, Japan, India, and others. The types of conductive inks studied include conductive silver ink, conductive copper ink, conductive polymers, carbon nanotube ink, dielectric inks, carbon/graphene ink, and others. Further, as part of its qualitative analysis, the Asia-Pacific conductive inks market research report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the conductive inks market.
The Asia-Pacific conductive inks market report also provides an extensive competitive landscape of the companies operating in this market. It includes company profiles of, and competitive strategies adopted by, various market players, including Applied Nanotech Holdings Inc. (U.S.), Conductive Compounds Inc. (U.S.), Creative Materials Inc. (U.S.), and E.I. du Pont de Nemours and Company (U.S.).
With market data, you can also customize MMM assessments to meet your company's specific needs. Customize to get comprehensive industry standards and a deep-dive analysis of the following parameters:
- Market size and forecast (deep analysis and scope)
- Competitive landscape, with a detailed comparison of each company's portfolio mapped at the regional and country level
- Analysis of forward and backward chain integration, to understand the prevailing business approach in the Asia-Pacific conductive inks market
- Detailed analysis of competitive strategies such as new product launches, expansions, and mergers & acquisitions adopted by various companies, and their impact on the Asia-Pacific conductive inks market
- Detailed analysis of the various drivers and restraints, with their impact on the Asia-Pacific conductive inks market
- Upcoming opportunities in the conductive inks market
- Trade data for the CI market
- SWOT analysis for top companies in the conductive inks market
- Porter's Five Forces analysis for the conductive inks market
- PESTLE analysis for major countries in the conductive inks market
- New technology trends in the CI market
Please fill in the form below to receive a free copy of the Summary of this Report
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
The trivium is made up of the three core skills taught in medieval universities: grammar, logic, and rhetoric.
The study of logic, grammar and rhetoric was considered preparatory for the quadrivium, which was made up of arithmetic, geometry, music, and astronomy. The trivium was the beginning of the liberal arts.
I’ve probably sent you off on a reading frenzy. Enjoy.
In addition to bringing you the latest in AppSec research and news in this blog, we will begin presenting short educational briefings on key subjects within the application security space.
We hope you will enjoy and learn from these short posts. We value your opinion, so please let us know if there are any concepts or topics you would like to hear about from us.
Today, I would like to pen my thoughts on a Data Breach. We hear of data breaches happening ever so frequently, so what exactly is a data breach and how can it occur? Read on…
Webster’s defines “to breach” as literally “the act of breaking”, as in the infraction or violation of a law, obligation, tie or standard. A data breach is an incident during which a secured database or system is broken into and the valuable information stored within is compromised.
The term “data” in this case most often describes sensitive, protected or confidential data such as customer records that are protected by law or required by Federal regulation to be protected. Data breaches may involve personal health information, personally identifiable information, trade secrets or intellectual property.
Most often the term data breach is applied to describe the theft of data – a malevolent action by unauthorized parties such as hackers, fraudsters or spies. The data need only be viewed for a breach to have occurred, but if it is copied and transmitted the potential consequences are ominous.
The loss of information by data breach is the nefarious first step in online crimes such as identity theft, credit card fraud, and banking fraud. In these cases crooks target data such as credit card numbers, PINs, bank account numbers, and social security numbers.
However the term can also describe the release of sensitive data to an “untrusted environment” by accident, through the fault of an authorized party. Past incidents have resulted from the careless handling of laptop computers or CD-ROMs.
Although malicious intent is not present in such cases, the potential consequences of a data breach are no less dire. In most cases where personally identifiable information is lost, authorities demand that companies or organizations notify everyone whose information may have been compromised, even if there is little risk of malicious use.
In the information security industry, there exist numerous guidelines and regulatory compliance mandates governing the protection of confidential data from data breaches – from the Payment Card Industry Data Security Standard (PCI-DSS) to the Health Insurance Portability and Accountability Act (HIPAA).
Today there exists a global organized criminal network of “black hat” hackers devoted solely to the stealing of confidential data. The spoils from their illegal activities are then sold on a thriving underground black market, where criminals trade in stolen information that can change hands numerous times.
Companies that suffer a data breach lose more than just confidential information. Their reputation, productivity, and profitability can all be negatively impacted in the aftermath of even a single incident. If a data breach results in actual identity theft or other financial loss, the offending organization may face fines, civil or criminal prosecution.
Cross-posted from Veracode
World War II may seem like an unlikely place to go looking for the origins of data analytics or insights into building predictive cyber intelligence programs, but the lessons of the past can help inform even a digital future. British code breakers who deciphered the encrypted messages of the German Enigma machines during World War II not only made breakthroughs in mathematics, but also in understanding and predicting the behavior of German code clerks. The success of Bletchley Park’s code breakers stemmed in part from their insight into human behavior.
The British cryptanalysts had an advantage that we do not always enjoy today – they knew who their enemy was. They could analyze linguistic and cultural patterns within the encrypted messages – searching for recurring communications such as weather reports, or common phrases such as Heil Hitler, to identify patterns. Predicting cyberattack behavior on a global, 21st century scale is more complex. Networks are under constant bombardment from communications that may have hopped numerous times before arriving at their destinations. Malicious actors are always innovating and morphing. Still, human ‘fingerprints’ are bound to appear within the network data to help us identify them, and hopefully to predict and safeguard against future attacks.
Imagine there is a government agency called SHIELD. Its networks are under attack, and administrators suspect that data is being stolen. Information security analysts start with the agency’s risk profile: What critical information is at risk, who might want it, and what might they do with it?
SHIELD’s critical data and national secrets may be targeted by run-of-the-mill criminal hackers, but they also may be very valuable to foreign governments—valuable enough that some actors might go to surprising lengths to obtain them. The geographies associated with unusual network activity are one piece of the puzzle, and so are tactics. Defacement and DDoS (Distributed Denial of Service) attacks occur more frequently than espionage and theft and are often driven by ideology. On the other hand, the most sensitive and highly-guarded intellectual property is likely to be targeted through a combination of social engineering (phishing or insider attacks) and sophisticated malware.
Initial indicators allow SHIELD’s analysts to track and monitor cycles of attacks over specified timespans, looking for patterns. For example, let’s say that SHIELD was hit by a DDoS attack on an election day. It would be in SHIELD’s interest to follow the Internet and social media buzz leading up to future election days.
SHIELD analysts also monitor fluctuations in the amount of suspicious activity correlating to certain times of the year or specific political events. Depending on the evolving theory of the malicious actor, analysts may start monitoring social media or other news outlets for signs that their theory may be supported by geopolitical evidence.
This sort of approach gives clues to politically or ideologically motivated attacks, but does not address financially-motivated criminal activity or espionage. Moreover, making generalizations around political, geographic or cultural factors could lead to reputational damage and is often misleading. Attackers may position cultural references in malware code as decoys in order to cover their tracks. An organized crime group in one country may be acting on someone else’s behalf. Code written by a state-backed hacker may be copied and repurposed by a novice activist motivated by ideology on the other side of the world. Not to mention, our understanding of others’ worldviews or motives are often distorted.
Ultimately, human context is only one piece of the puzzle. The fingerprints on data may cast some light onto the path in front of us as it did for the code breakers at Bletchley Park. Given the attackers, targets and threat vectors we face today, our conclusions and actions must begin and end with the data itself. Developing a resilient cyber threat intelligence program calls for proactive analysis of human behavior, network traffic and, ultimately, letting the truth in the data take us where it will.
Researchers at a German university have created an insect-inspired robot called HECTOR (hexapod cognitive autonomously operating robot). The 3-foot, 26-pound, six-legged robot can carry nearly three times its weight. Hector has 18 joints and is made from carbon fiber reinforced plastic. Its control program works on the same distributed intelligence principle found in insect brains and will eventually be enhanced to learn and plan ahead. Hector ultimately may find a home in law enforcement. Source: Gizmag
By 2020, traditional school IT departments will be obsolete — at least as we currently know them, according to Mind/Shift, an education blog. Cloud computing and increased Wi-Fi and satellite access will make standard IT tasks such as software maintenance, security and connectivity management a thing of the past. IT professionals will then have more time to spend on innovation.
Source: KQED Public Media
People say they love the things they really like, but what if your heartbeat could power your most beloved gadget? Researchers at the Georgia Institute of Technology have developed nanogenerators that produce power from the tiniest of movements. Nanogenerators use piezoelectric zinc-oxide nanowires that generate an electric current when strained or flexed. Almost any kind of movement — walking, a heartbeat, wind, even rolling tires — can generate electricity. The researchers have used nanogenerators to power LCD screens and transmit a radio signal.
Electric vehicle owners have resources available at their fingertips for locating vehicle-charging stations and staying plugged in to the green scene. Free smartphone apps include: PlugShare and EV Charger Finder for the iPhone, and DriveAlternatives and EV-olution for Android devices.
A microorganism that continuously secretes large quantities of sugar, a basic building block for ethanol, might one day provide sweet savings at the gas pump. New Jersey startup Proterro created microbes that naturally produce sucrose when the water they’re growing in becomes too salty. The new approach could lower the cost of biofuels, since traditional sugar sources like corn and sugar cane have huge transportation costs and require plenty of sun and water. Source: Technology Review
When we are addressing Voice over IP, we need to remember that essentially we would like to reach customers over the PSTN or SS7 network. The only avenue to date for doing this is by using something called the telephone number. However, that number has undergone some changes recently.
The International Telecommunication Union (ITU) is the body responsible for all telephony numbering around the world. The numbering plan is laid out as follows:
- A telephone number can have a maximum of 15 digits
- The first part of the telephone number is the country code (one to three digits)
- The second part is the national destination code (NDC)
- The last part is the subscriber number (SN)
- The NDC and SN together are collectively called the national (significant) number
Now, each geographic area has responsibility for its own numbering plan. The United States and Canada share equally in the North American Numbering Plan (NANP), which entails a single-digit country code, followed by a 3-digit area code, a 3-digit prefix, and a 4-digit subscriber code. In other words, our numbering system is quite fixed. Other countries have variable-length numbering plans, like England, where the number grows based upon the density of a given city.
E.164 has developed into something broader called ENUM (TElephone NUmber Mapping), which was the brainchild of the IETF (Internet Engineering Task Force). Using the international E.164 number as a model, ENUM will assign a specific Uniform Resource Identifier (URI) to each and every networked device — including analog telephones and fax machines, smart phones, or computers. The reason for this is to make it easier to look up the numbers or devices using DNS servers on the internet.
The URI could be in the format of an email address (firstname.lastname@example.org) or a number with an assigned domain name (+email@example.com). With URIs, all these devices will be able to contact each other directly using a single network address or phone number. ENUM also deals with storage of these numbers in DNS (the Domain Name System) so that Voice over IP phones can look up a number over the internet and be connected to another Voice over IP system.
RFC 3761 defines the format of the number as follows:
The Application Unique String is a fully qualified E.164 number minus any non-digit characters except for the “+” character which appears at the beginning of the number. The “+” is kept to provide a well understood anchor for the AUS in order to distinguish it from other telephone numbers that are not part of the E.164 namespace.
For example, the E.164 number could start out as “+44-116-496-0348”. To ensure that no syntactic sugar is allowed into the AUS, all nondigits except for “+” are removed, yielding “+441164960348”.
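As a sketch, the normalization above and the DNS name that ENUM derives from it can be expressed in a few lines of Python. This is an illustration of RFC 3761's rules, not a full E.164 validator; the function names are invented for the example.

```python
def to_aus(number: str) -> str:
    """Strip all non-digit characters, keeping a leading '+' (the RFC 3761 AUS)."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return "+" + digits

def enum_domain(number: str) -> str:
    """Map an E.164 number to the domain ENUM looks up in DNS:
    reverse the digits, separate them with dots, append e164.arpa."""
    digits = to_aus(number)[1:]  # drop the '+'
    return ".".join(reversed(digits)) + ".e164.arpa"

# "+44-116-496-0348" -> AUS "+441164960348"
#                    -> "8.4.3.0.6.9.4.6.1.1.4.4.e164.arpa"
```

A resolver would then query the resulting domain for NAPTR records, which list the URIs registered for that number.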
So the full ENUM number must begin with a leading “+”. But the question remains: what does the “+” represent? Basically, it represents your country’s access code for dialing out to make international calls. For instance, if I were to dial an international number from my home in North America, I would begin by dialing “011”, which is our code for requesting international service.
In essence, a fully qualified ENUM number is one that could be dialed by any device, any place in the world, and the call would be properly set up. If you look closely at your cell phone when you dial from outside your home country you will notice that the cell provider has translated the number into a fully-qualified ENUM during the calling operation.
The beauty in this system is that I don’t have to worry about remembering international access codes for different countries, as long as the provider understands the leading “+” symbol.
Author: Joe Parlas | <urn:uuid:cd145501-ac4d-40a2-a1a5-850615837fbf> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/07/02/e164-the-modern-dial-plan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00221-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921478 | 842 | 3.25 | 3 |
What is it?
XHTML (Extensible Hypertext Markup Language) is a markup language for web pages that combines HTML with XML. It was intended to replace HTML, but there is no sign of this happening on a wide scale yet.
Even so, the World Wide Web Consortium (W3C) is set to approve XHTML2 next year. This would be an even bigger step, and could create a greater range of backward-compatibility problems for those who have ignored XHTML.
It would not be the first time that users have frustrated the plans of the technocrats by sticking with a popular technology.
But in the case of XHTML, the arguments to switch seem strong, even if they are often couched in terms that appeal only to a dedicated minority.
Standardisation means greater device independence, and could eliminate much of the time wasted checking browser compatibility. Standardised web pages can be understood by XML-savvy applications, as well as by people.
XHTML has been described as offering "all the benefits of XML while avoiding the complications of true XML" - bridging the gap for HTML developers who might not fancy taking on something as tricky as full XML.
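One concrete payoff of "the rigour of XML" is that any standard XML parser can check an XHTML page for well-formedness, something plain HTML tooling never guaranteed. A small sketch using Python's standard library; the markup is an invented example, not taken from this article.

```python
import xml.etree.ElementTree as ET

page = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Example</title></head>
  <body><p>Every element is closed, even empty ones: <br/></p></body>
</html>"""

ET.fromstring(page)  # parses cleanly: the page is well-formed XML

# The HTML-style unclosed <br> is rejected by the same parser:
try:
    ET.fromstring("<p>An unclosed tag<br></p>")
except ET.ParseError as err:
    print("not well-formed:", err)
```

The same strictness is what lets XML-savvy applications consume XHTML pages directly.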
Where did it originate?
In 1999, the W3C brought out HTML 4.0 and XHTML, which was essentially HTML 4.0 recast into XML. The W3C said XHTML brought "the rigour of XML to HTML".
HTML, XML and XHTML all derive from SGML (Standard Generalised Markup Language). XHTML2 is intended to represent "a complete break from the non-XML heritage of HTML".
What is it for?
According to Steven Pemberton, chair of the HTML and forms working groups at the W3C, the design aims of XHTML 2.0 include using XML as much as possible.
He said, "Where a language feature already exists in XML, do not duplicate or reinvent it." The emphasis is on structure over presentation.
"Thanks to Cascading Style Sheets (CSS), you no longer need explicitly presentational tags in HTML." And by removing some of the "needless idiosyncrasies" of HTML, it should make the language easier to maintain, if not to write.
What makes it special?
Apart from the advantages of standardisation, greater interoperability will be possible between XHTML and other XML languages. Semantic web applications will be able to use XHTML documents.
How difficult is it to master?
HTML developers are said to be able to pick up XHTML in their spare time in about a week.
What is coming up?
XHTML2 is not likely to become a W3C recommendation until 2007. In the meantime, acquiring the skills could give you the edge in a vibrant new jobs market - bearing in mind there may be no immediate upsurge of XHTML2 work.
According to IBM's Developerworks site, you can prepare by getting serious about using CSS and removing presentational mark-up.
Rates of pay
XHTML developers can earn £30,000, rising to £35,000 with more experience.
You can pay for XHTML training, but there are thousands of free tutorials. For a proper understanding, start with the W3C site or look for books by the W3C's HTML/XHTML expert Dave Raggett.
Vote for your IT greats
Who have been the most influential people in IT in the past 40 years? The greatest organisations? The best hardware and software technologies? As part of Computer Weekly’s 40th anniversary celebrations, we are asking our readers who and what has really made a difference?
Vote now at: www.computerweekly.com/ITgreats | <urn:uuid:5ca9c14a-2a4a-4d1e-ae2d-6e3c5eb601f6> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240078299/Get-in-early-to-ride-wave-of-move-to-Extensible-HTML | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931022 | 794 | 3.53125 | 4 |
Diagnose DHCP problems with ipconfig
When it comes to troubleshooting network problems, many people are intimidated by the complexity of TCP/IP. Although TCP/IP is much more complex than other protocols, it tends to be easier to troubleshoot because it contains several tools designed to help you locate and solve a wide variety of problems. (Many network administrators wish that protocols such as IPX/SPX and NetBEUI had TCP/IP's troubleshooting tools.) One such tool is the ipconfig program.
To use ipconfig, simply open a Command Prompt window and enter "ipconfig". When you do, Windows NT will display a summary of each network adapter installed in your system, along with its TCP/IP configuration.
The ipconfig command is especially useful for diagnosing DHCP problems. If you're using a DHCP server, you can use ipconfig to see the address that DHCP has assigned to the adapter. If you see the IP address 0.0.0.0 instead, your computer has either lost communication with the DHCP server or the DHCP server is malfunctioning.
If you're using static IP addresses, you can use ipconfig to see the TCP/IP configuration as the Windows server sees it. The information displayed isn't simply a regurgitation of what's inserted into the TCP/IP properties sheet--rather, it's a way to tell if Windows has accepted the address that you've used.
By default, ipconfig lists the IP address, subnet mask, and default gateway of each network adapter. If you require more detailed information, you can use the /all switch after the ipconfig command. Doing so will cause the ipconfig program to display more detailed information, such as the MAC address of each network card, and an indication of whether the address was provided by a DHCP server.
Fibre Channel, an emerging technology, takes SCSI and carries it over optical cabling in a very specialized packet format. Fibre Channel does two things: it runs the SCSI protocol at 100MBps per port over optical cables, and it runs a unique storage protocol at 1.06Gbps in packets. (Fibre Channel does not currently run IP.) It's really SCSI using a different protocol. As a network topology, Fibre Channel uses a hub or a switch as a concentrator. The switch runs faster than the hub. Fibre Channel supports runs of up to 500 meters, which is suitable for most applications. (You can spend more money and purchase special cables and drivers to go up to 10 kilometers.)
Current Fibre Channel Arbitrated Loop (FC-AL) has one downside: it runs Class 3 service. Three classes exist for quality of service of transmission, and Class 3 service doesn't guarantee or acknowledge transmission. If a fibre drops a packet and the software fails to catch it, the result is a hang (or a timeout), causing the system to momentarily freeze. The loop reinitialization process then begins, resetting the entire bus.
Use winipcfg in Windows 98
Unfortunately, the ipconfig command only works in Windows NT. If you have computers on your network running Windows 98, you'll have to use a different command. To do so, enter "winipcfg" at the Run prompt. Doing so will display a dialog box that displays the computer's TCP/IP configuration, as seen by Windows. Click the More Info button to get additional information, such as the WINS and DNS configuration.
You can select the network adapter on which you're viewing information by choosing the adapter from a drop-down list. If you need to release or renew a lease, you can do so by selecting the adapter from the drop-down list and clicking the Release or Renew button. Likewise, you can release or renew the lease for all installed adapters by clicking Release All or Renew All.
Brien M. Posey is an MCSE who works as a freelance writer and as the director of information systems for a national chain of health care facilities. His past experience includes working as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
Handling time zones can become very complex in a distributed environment. For example, you might have an event source in one time zone, the Collector Manager in another, the back-end Sentinel server in another, and the client viewing the data in yet another. When you add concerns such as daylight saving time and the many event sources that don't report what time zone they are set to (such as all syslog sources), there are many possible problems that need to be handled. Sentinel is flexible so that you can properly represent the time when events actually occur, and compare those events to other events from other sources in the same or different time zones.
In general, there are three different scenarios for how event sources report time stamps:
The event source reports the time in UTC. For example, all standard Windows Event Log events are always reported in UTC.
The event source reports in local time, but always includes the time zone in the time stamp. For example, any event source that follows RFC 3339 in structuring time stamps includes the time zone as an offset; other sources report long time zone IDs such as America/New_York, or short time zone IDs such as EST, which can present problems because of conflicts and inadequate resolution.
The event source reports local time, but does not indicate the time zone. Unfortunately, the extremely common syslog format follows this model.
For the first scenario, you can always calculate the absolute UTC time that an event occurred (assuming that a time sync protocol is in use), so you can easily compare the timing of that event to any other event source in the world. However, you cannot automatically determine what the local time was when the event occurred. For this reason, Sentinel allows customers to manually set the time zone of an event source by editing the Event Source node in the Event Source Manager and specifying the appropriate time zone. This information does not affect the calculation of DeviceEventTime or EventTime, but is placed into the ObserverTZ field, and is used to calculate the various ObserverTZ fields, such as ObserverTZHour. These fields are always expressed in local time.
The second scenario is in many ways the simplest. If the long-form time zone IDs or offsets are used, you can easily convert to UTC to get the absolute canonical UTC time (stored in DeviceEventTime), but you can also easily calculate the local time ObserverTZ fields. If a short-form time zone ID is used, there is some potential for conflicts.
The third scenario can be the trickiest, because it requires the administrator to manually set the event source time zone for all affected sources so that Sentinel can properly calculate the UTC time. If the time zone is not properly specified by editing the Event Source node in the Event Source Manager, then the DeviceEventTime (and probably the EventTime) can be incorrect; also, the ObserverTZ and associated fields might be incorrect.
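The gap between the second and third scenarios is whether UTC can be recovered without operator input. A minimal Python sketch, where the timestamps and the America/New_York zone are illustrative assumptions, not Sentinel APIs:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Scenario 2: the source stamps an explicit offset (RFC 3339 style),
# so the absolute UTC time is unambiguous.
stamped = datetime.fromisoformat("2011-03-10T14:30:00-05:00")
event_time_utc = stamped.astimezone(timezone.utc)   # 19:30 UTC

# Scenario 3: a syslog-style local time carries no zone at all;
# only an administrator-supplied zone lets us recover UTC.
naive = datetime(2011, 3, 10, 14, 30)
localized = naive.replace(tzinfo=ZoneInfo("America/New_York"))
recovered_utc = localized.astimezone(timezone.utc)  # also 19:30 UTC
```

If the administrator picks the wrong zone in the third scenario, every derived value (DeviceEventTime, EventTime, and the ObserverTZ fields) inherits the error, which is why setting the time zone on each Event Source node matters.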
In general, the Collector for a given type of event source (such as Microsoft Windows) knows how an event source presents time stamps, and adjusts accordingly. It is always good policy to manually set the time zone for all Event Source nodes in the Event Source Manager, unless you know that the event source reports in local time and always includes the time zone in the time stamp.
Processing the event source presentation of the time stamp happens in the Collector and on the Collector Manager. The DeviceEventTime and the EventTime are stored as UTC, and the ObserverTZ fields are stored as strings set to local time for the event source. This information is sent from the Collector Manager to the Sentinel server and stored in the event store. The time zone that the Collector Manager and the Sentinel server are in should not affect this process or the stored data. However, when a client views the event in a Web browser, the UTC EventTime is converted to the local time according to the Web browser, so all events are presented to clients in the local time zone. If the users want to see the local time of the source, they can examine the ObserverTZ fields for details.
In this video, you'll learn how to create and order multiple IPv4 policies in the policy table.
In this example, three policies will be configured: PolicyA allows Internet access to the local area network; PolicyB allows Internet access to mobile devices while applying additional security features; and PolicyC allows the system administrator's PC to have full access. In this example, a wireless network has already been configured that is in the same subnet as the wired LAN.
Visit Fortinet's documentation library at http://docs.fortinet.com.
Best viewed in 1080p. | <urn:uuid:7553f35b-1778-4581-aae9-210edc3f754f> | CC-MAIN-2017-04 | https://video.fortinet.com/video/104/basic-firewall-policies-5-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00551-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.88589 | 119 | 2.734375 | 3 |
Gonzalez-Moreno P., Centro Andaluz de Medio Ambiente (IECOLAB, CEAMA)
Gonzalez-Moreno P., Wageningen University
Quero J.L., Wageningen University
Quero J.L., Rey Juan Carlos University
and 3 more authors.
Basic and Applied Ecology | Year: 2011
Mediterranean forest plantations are currently under an intense debate related to their ecological function, sustainability and future performance. In several Mediterranean countries, efforts are directed to convert pine plantations into mixed and more diverse forests. This research aims to evaluate the effect of the spatial configuration of pine plantations on regeneration and plant diversity in order to facilitate plantation management towards more diversified stands. Spatial characteristics of plantations (proximity to different vegetation types, fragmentation and internal patch structure) were related to abundance of seedlings of an ecologically important broadleaved species, Holm Oak (Quercus ilex L.), and the Shannon diversity index of the community. Q. ilex seedling abundance and plant diversity in pine plantation patches are favoured by the proximity to oak patches located uphill. Fragmentation affected only plant diversity, with smaller patches having more diversity. The internal structure of the pine patch influenced both regeneration of Q. ilex and diversity. Pine patches with lower pine tree density were characterized by higher diversity and less Q. ilex regeneration confirming that internal structure affects species differently. From a management perspective, the process of conversion of Mediterranean pine plantations to mixed oak-pine forests could be facilitated by (1) having the seed source uphill from the plantation, (2) increasing the fragmentation of plantations and (3) promoting the internal heterogeneity of plantations to create a diverse range of light environments matching the different requirements of species. © 2011 Gesellschaft für Ökologie.
Given the pace of innovation in companies globally, fostering a learning culture is an essential part of leadership skills.
A learning culture is one in which learning is not confined to specific instances of training, but happens continually, as employees and managers upgrade and expand their skills. Once, any training was generally instituted and guided as part of change management. Now, learning is a continuous part of emphases on teamwork, collaboration, and the rise of digital methods.
So how can managers foster a learning culture? By embracing a culture of continuous learning. Continuous learning cultures share four characteristics: 1) employees are both teachers and learners, 2) learning is employee-directed, 3) collaboration and technology are central to the enterprise, and 4) learning is tied to results.
Employees as Teachers and Learners
Older-model organizations often used human resources as a central source of training, even if training was diffused through a company. New skills and practices were disseminated by specific leaders to a group, in a conference room.
In contrast, Digitalist recommends that this form of training constitute much less of the learning pie.
Under the 90%/10% model, 90% of workplace learning is accomplished through practice, experience, and interaction; the remaining 10% still comes from formal learning such as workshops and training sessions. Much of that 90% is guided by employees who are senior or have specific needed skills, and the formal 10% may be led by them as well.
Employee-Directed Learning Is the Way of the Future
Employees are increasingly tasked with directing their own learning, including identifying development needs, finding ways to fill those needs, and managing the schedule in which learning will take place. Managers are tasked with coaching these activities but also need to focus on the priorities of their departments.
Employees will likely have more than one employer over their careers, so employee-directed learning allows them to maximize learning in every position and to relate one position's learning opportunities to another's.
Collaboration and Technology Foster Continuous Learning
A learning culture rests on collaboration and technology. Delivering learning through mobile applications and the cloud means that managers and employees can learn whenever it is convenient for them, rather than, say, only between 2 and 3 p.m. on Thursday, when the training session is scheduled.
Online provision of courses by MOOCs like Coursera and other online providers has expanded the universe of online learning.
At the same time, Millennials' focus on collaborative approaches has moved peer-to-peer learning, and formats structured like social media with real-time comments and feedback, to the forefront.
Companies Need to Tie Learning to Results
Forbes notes that, given the rise of predictive analytics tools and the growth of big data, linking education to results is easier than ever.
Managers should first plan and disseminate the priority goals. Increased sales? The rate of new sales developed? Customer/client service satisfaction? Response times to customer/client questions? Product quality? Employee productivity? Engagement? The list is almost endless.
Once these are determined, they should be disseminated throughout the company so that employees can direct their teaching and learning activities. | <urn:uuid:b55ae0a8-264b-486c-a063-b5ebe246e153> | CC-MAIN-2017-04 | https://www.broadsoft.com/work-it/how-collaboration-leads-to-a-learning-culture | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965887 | 652 | 2.546875 | 3 |
At the Hot Chips conference in Santa Clara last week, IBM lifted the curtain on its Blue Gene/Q SoC, which will soon power some of the highest performing supercomputers in the world. Next year, two DOE labs are slated to boot up the most powerful Blue Gene systems ever deployed: the 10-petaflop “Mira” system at Argonne National Lab, and the 20-petaflop “Sequoia” super at Lawrence Livermore. Both will employ the latest Blue Gene/Q processor described at the conference.
That, of course, is assuming IBM doesn’t back out of those projects as it did recently with its 10-petaflop Power7-based (PERCS) Blue Waters supercomputer for NCSA at the University of Illinois. The company terminated the contract to build and support the $300 million Blue Waters system based on financial considerations, leaving the NCSA and its NSF sponsor looking for another vendor to fill the void. The DOE is certainly not expecting to endure that fate for their Blue Gene/Q acquisitions.
The unveiling of the Blue Gene/Q SoC last week implies IBM is committed to those DOE machines as well as future systems. Unlike the Power7 CPU, which is used for both enterprise and HPC systems, the Blue Gene technology has always been designed and built exclusively for supercomputing.
Both the Power7 and the new Blue Gene SoC use IBM's 45 nm SOI technology, but the similarities end there. As described at Hot Chips, the BGQ processor is an 18-core CPU: 16 cores are used by the application, one by the OS, and one is held in reserve. And even though the chip is a custom design, it uses the PowerPC A2 core that IBM introduced last year at the International Solid-State Circuits Conference. The architecture represents yet another PowerPC variant, which in this case merges the functionality of network and server processors. IBM is using the A2 architecture to implement PowerEN chips for more traditional datacenter applications such as edge-of-network processing, intelligent I/O devices in servers, network-attached appliances, distributed computing, and streaming applications.
As such, the A2 architecture emphasizes throughput and energy efficiency, running at relatively modest clock speeds. In the case of the Blue Gene/Q implementation, the clock is just 1.6 GHz and consumes a modest 55 watts at peak. To further reduce power consumption, the chip makes extensive use of clock gating.
But thanks to the double-digit core count, support for up to four threads per core, and the quad floating-point unit, it delivers a very respectable 204 gigaflops per processor. Contrast that with the Power7, which at 3.5 GHz and 8 cores delivers about 256 gigaflops, but consumes a hefty 200 watts.
That gives the Blue Gene/Q chip nearly three times the energy efficiency per peak FLOP compared to the more computationally muscular Power7 (3.72 gigaflops/watt versus 1.28 gigaflops/watt). IBM has been able to capture most of that energy efficiency in the Blue Gene/Q servers. The current top-ranked system on the latest Green500 list is a prototype machine that measures 2.1 gigaflops/watt for Linpack, beating even the newest GPU-accelerated machines as well as the Sparc64 VIIIfx-based K supercomputer, the current champ of the TOP500.
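A back-of-envelope check of these efficiency figures, using only the article's own numbers (the 8 FLOPs per cycle per core assumed below corresponds to the quad FPU issuing fused multiply-adds, which is what makes the 204.8 gigaflop figure work out):

```python
# Peak FLOPS = cores x clock (GHz) x FLOPs per cycle per core
bgq_gflops = 16 * 1.6 * 8        # 204.8 peak gigaflops, per the article
bgq_eff = bgq_gflops / 55        # ~3.72 gigaflops/watt at 55 W

power7_eff = 256 / 200           # 1.28 gigaflops/watt, article's figures

ratio = bgq_eff / power7_eff     # ~2.9, i.e. "nearly three times"
```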
Even compared to its Blue Gene predecessors, BGQ represents a step change in performance, thanks to a large bump in both core count and clock frequency. The Blue Gene/Q chip delivers 15 times as many peak FLOPS as its Blue Gene/P counterpart and 36 times as many as the original Blue Gene/L SoC.
| Version | Core Architecture | Core Count | Clock Speed | Peak Performance |
|-------------|-------------------|------------|-------------|------------------|
| Blue Gene/L | PowerPC 440 | 2 | 700 MHz | 5.6 Gigaflops |
| Blue Gene/P | PowerPC 450 | 4 | 850 MHz | 13.6 Gigaflops |
| Blue Gene/Q | PowerPC A2 | 18 | 1600 MHz | 204.8 Gigaflops |
As with Blue Gene/L and P, the Q incarnation uses embedded DRAM (eDRAM), a dynamic random access memory architecture that is integrated onto the processor ASIC. The technology is employed for shared Level 2 cache, replacing the less performant SRAM technology used in traditional CPUs. In the case of Blue Gene/Q, 32 MB of L2 cache have been carved out.
What is brand new for the latest version is transactional memory. According to an EE Times report, the addition of transactional memory will give IBM the distinction of being the first company to deliver commercial chips with such technology.
Transactional memory is a technology used to simplify parallel programming by protecting shared data from concurrent access. Basically it prevents data from being corrupted by multiple threads when they simultaneously want to read or write a particular item, and does so in a much more transparent way to the application than the traditional locking mechanism in common use today.
The technology can be implemented in hardware, in software, or in a combination of the two. It has been studied by a number of vendors over the years, most notably Intel, Microsoft, and Sun Microsystems. According to the EE Times report, IBM's implementation exploits the high-performance on-chip eDRAM to achieve better latency than traditional locking schemes.
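The transactional idea (run the update optimistically, commit only if no conflicting write happened, otherwise retry) can be illustrated with a small software analogue. This is purely conceptual Python, not IBM's hardware mechanism; the version counter stands in for the chip's conflict detection:

```python
import threading

class VersionedCell:
    """A value plus a version counter, updated only via transactions."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self._lock = threading.Lock()  # guards only the brief commit step

    def transact(self, fn):
        while True:
            snap_val, snap_ver = self.value, self.version  # optimistic read
            new_val = fn(snap_val)                         # compute outside any lock
            with self._lock:
                if self.version == snap_ver:               # no conflict: commit
                    self.value, self.version = new_val, snap_ver + 1
                    return new_val
                # otherwise another transaction won; loop and retry

cell = VersionedCell(0)
for _ in range(100):
    cell.transact(lambda v: v + 1)
```

Under contention, a losing transaction simply retries rather than holding a lock for the whole computation, which is the transparency benefit described above.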
If everything goes according to plan, the new processor will elevate the Blue Gene franchise into the double-digit petaflops realm. The aforementioned Mira and Sequoia, taken together, represent 30 petaflops of supercomputing and will both be top 10 systems in 2012. Sequoia, in particular, is positioned to be the top-ranked supercomputer next year, assuming no surprises from China or elsewhere.
Whether the BGQ architecture is the end of the line for the Blue Gene franchise is an open question. As of today, there is no R system on the roadmap and IBM seems to be leaning toward a Power-architecture-only strategy for its custom supercomputing lineup. Even if IBM is able to repurpose the cores of other PowerPC architectures, designing and implementing a custom SoC for a single niche market, albeit a high-margin one, is an expensive proposition. | <urn:uuid:4b8d9a01-0277-42e2-af64-54c9f63d1a29> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/08/22/ibm_specs_out_blue_gene_q_chip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00515-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917289 | 1,340 | 2.53125 | 3 |
Chipmaker Adapteva is sampling its 4th-generation multicore processor, known as Epiphany-IV. The 64-core chip delivers a peak performance of 100 gigaflops and draws just two watts of power, yielding a stunning 50 gigaflops/watt. The engineering samples were manufactured by GLOBALFOUNDRIES on its latest 28nm process technology.
Based in Lexington, Massachusetts, Adapteva is in the business of developing ultra-efficient floating point accelerators. Andreas Olofsson, a former chip engineer at Texas Instruments and Analog Devices, founded the company in 2008, and gathered $2.5 million from various VCs and private investors. With that shoestring budget, he managed to produce four generations of the Epiphany architecture, including two actual chips. The technology is initially aimed at the mobile and embedded market, but Olofsson also has designs on penetrating the supercomputing space.
Epiphany is essentially a stripped down general-purpose RISC CPU that throws out almost everything but the number-crunching silicon. But since it doesn’t incorporate features needed by operating systems, like memory management, it relies on a host processor to feed it application kernels in the same manner as a GPGPU. The current implementation supports single precision floating point only, but plans are already in the works for a double precision implementation.
The general layout of Epiphany is a 2D mesh of simple cores, which talk to each other via a high-speed interconnect. In that sense, it looks more like Intel’s manycore Xeon Phi than a graphics processor, but without the x86 ISA baggage (but also without the benefit of the x86 ecosystem).
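To picture the layout, here is a toy model of the core grid; an 8x8 arrangement of the 64 cores is assumed purely for illustration, with each core linked to its north, south, east, and west neighbors:

```python
SIZE = 8  # assume the 64 cores form an 8 x 8 grid (illustrative)

def neighbors(row, col):
    """Mesh links for the core at (row, col): N, S, W, E where they exist."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < SIZE and 0 <= c < SIZE]

# Corner cores have two links, edge cores three, interior cores all four;
# traffic between distant cores hops through the intermediate nodes.
```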
The latest Epiphany chip, which was spec’d out last fall, runs at a relatively slow 800MHz. But thanks to its highly parallel design and simplified cores, its 50 gigaflops/watt energy efficiency is among the best in the business. NVIDIA’s new K10 GPU computing card can hit about 20 single precision gigaflops/watt, but that also includes 8GB of GDDR5 memory and a few other on-board components, so it’s not an apples-to-apples comparison. Regardless, a 100 gigaflop chip drawing a couple of watts is a significant achievement.
The downside of the design is that it uses Adapteva’s own proprietary ISA, so there are no ready-made software tools that developers can tap into. “Everybody is very impressed by the numbers,” Olofsson told HPCwire. “They just haven’t quite been convinced they can program this thing.”
That has now changed. In conjunction with the 28nm samples, Adapteva has also released its own OpenCL compiler wrapped in their new software developer kit (SDK). The compiler is an adaptation of Brown Deer Technology’s OpenCL implementation developed for ARM and x86 platforms. Brown Deer provides tools and support for high performance computing applications and is especially focused on acceleration technologies based on GPUs and FPGAs. The Adapteva implementation means developers can now use standard OpenCL source to program the Epiphany processor.
Olofsson says they chose OpenCL because it’s a recognized open standard that is being used for heterogeneous computing platforms in all the segments Adapteva is interested in. In particular, it’s getting some traction on heterogeneous platforms in the embedded space, where GPUs are increasingly being targeted to general-purpose computing. “The way we are pitching [Epiphany] is that OpenCL GPGPUs may not be good at everything, because of their architectural limitations,” say Olofsson. “So why not put another accelerator next to it that is also OpenCL-programmable.”
Adapteva is putting the SDK through its paces using existing OpenCL codes like 2D Fast Fourier Transform (FFT) and multi-body physics algorithms that were downloaded off the Internet. The company is currently using an x86-based board for these test runs, but since OpenCL has bindings for C/C++, essentially any commodity CPU is fair game as the host driver. Adapteva’s SDK is currently in beta form and is being released to the company’s early access partners.
As far as getting the Epiphany chips onto useful platforms, that's still a work in progress. At least some of the engineering samples of the 28nm chip will go to Bittware, an early customer of Adapteva's. Bittware used the early 16-core, 32-gigaflop version of Epiphany on its custom PCIe boards. Those products are aimed at military and industrial applications for things like embedded signal processing. Because of the need to minimize power usage in embedded computing, Epiphany is a good fit for this application domain. At least one more vendor has signed up to develop Epiphany-based PCIe boards, but that company is not ready to go public just yet.
Adapteva’s market aspirations extend beyond the military-industrial complex though. Olofsson believes Epiphany is ideal for mobile computing, and eventually HPC. With regard to the former, Adapteva is planning to use the new chip to demonstrate face detection, an application aimed at devices like smartphones and tablets. Face detection and recognition rely on very compute-intensive algorithms, which is fine if you’ve got a server or two to spare, but it’s beyond the number-crunching capabilities of most mobile-grade CPUs and GPUs today.
Other flop-hungry applications that could find a home in this market include augmented reality overlays, gesture recognition, real-time speech recognition, realistic game physics, and computational photography. Like mobile-based face detection and recognition, all of these require lots of computational performance operating within very restricted power envelopes.
For high performance computing, the path is a little more complex. For starters, someone has to build an Epiphany-based PCIe card suitable for HPC servers, and then an OEM has to be enticed to support that board. To deliver a reasonable amount of computation for a server — say, a teraflop or so — you would need multiple Epiphany chips glued to a card, which would necessitate a PCIe expansion setup of some sort. Not an impossibility, but probably not a job for a do-it-yourselfer.
More fundamentally though, the architecture has to add support for double precision floating point to be taken seriously for HPC (although applications like seismic modeling, image and audio processing, and video analysis are fine with single precision).
In any case, double precision is already on Adapteva’s roadmap. “We’ll definitely have something next year,” says Olofsson.
Beyond that, the company has plans on the drawing board to scale this architecture up to the teraflop/watt realm. Following a Moore’s Law trajectory, that would mean that by 2018 a 7nm Epiphany processor could house 1,000 cores and deliver a whopping two teraflops. Since such a chip would draw the same two watts as the current 100 gigaflops version, it could easily provide the foundation for an exascale supercomputer. Or a killer tablet. | <urn:uuid:64dbc9b7-7ef6-44ed-a84f-7deb9ae2f835> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/08/22/adapteva_unveils_64-core_chip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00515-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936386 | 1,518 | 2.578125 | 3 |
When deciding where to store your data, traditional relational databases such as Oracle, Microsoft's SQL Server, or the ever-popular open-source MySQL are the usual choices. Alternatively, you could use XML. Though XML is rarely the best choice for a database, it has its own merits and demerits.
What to do if you have opted to use XML as a large database:
- Design the XML schema to be robust.
- Even if the XML schema changes often, the code changes needed to handle those schema changes should be minimal.
- If your application is unmanaged code (C++), decide which type of parser to use. Many parsers are available, including open-source and third-party ones. Go for an event-triggered parser (e.g., a SAX parser).
- The type of parser determines the performance of your application.
- Handling transactions is a major part of using XML as a database. You can implement your own transaction logic, but make sure transactions are handled properly and watch for possible memory leaks.
- If your application is managed code (C#), there are many ways to load the XML data into your application. XML serialization will load the data faster than most other methods.
- Since your database is huge, load data from it only when required. To do this, design your own server layer that loads the needed data into memory; when the application requires the same data again, serve it from memory instead of accessing the database.
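The event-triggered parsing style recommended above looks like this (shown with Python's standard-library SAX binding for brevity; the record tag name is illustrative). The document is streamed and callbacks fire per element, so the whole file never has to sit in memory:

```python
import xml.sax

class RecordCounter(xml.sax.ContentHandler):
    """Counts <record> elements as the parser streams past them."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        if name == "record":
            self.count += 1

handler = RecordCounter()
xml.sax.parseString(b"<db><record/><record/><record/></db>", handler)
# handler.count is now 3, without ever building a full document tree
```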
When to go for XML:
- XML is viable depending on the context. If your data is fairly static and does not change much, XML can be a good choice.
- Configuration settings, sample data (even if it's millions of rows, but rarely changing), are all good uses of XML.
- Easily portable and platform independent.
- Human readable format
- Open source
XML is very verbose. Data stored in an XML file takes much more disk space than the same data stored in any reasonable database system. The name of a particular field is stored twice, once in the opening tag and once in the closing tag. For example, a single integer is represented as <value>24</value>.
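The overhead is easy to measure: the XML form of that single integer costs 17 bytes, against 4 bytes for a packed 32-bit integer:

```python
import struct

xml_form = b"<value>24</value>"      # field name appears twice, plus brackets
binary_form = struct.pack("<i", 24)  # one little-endian 32-bit integer

overhead = len(xml_form) / len(binary_form)  # 17 / 4 = 4.25x for this field
```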
Unlike mainstream database systems, XML offers no role-based security.
XML is not the best choice as the database for larger applications; it is best suited to storing configuration and to smaller database applications. To use it for larger applications, take safety precautions right from the design stage to ensure your application performs well.
Energy harvesting is the conversion of ambient energy into electricity to drive small or mobile electronic and electrical devices. Wireless sensors are particularly in need of energy harvesting because they are increasingly deployed in numbers and locations where hard wiring or battery changing are impracticable. We are at an exciting stage with both because harvesting is becoming better and electronics is demanding less power, so they meet in the middle.
The sectors that are at the tipping point in adopting these technologies are Aerospace & Military, Industrial, Automotive & Urban, Buildings, Health care and Logistics. For example, Boeing, NASA, United Technologies, Rockwell Collins and Microstrain have trials and rollouts of a rich variety of wireless sensors rendered more reliable and long lived by energy harvesting. They variously harvest heat, vibration, strain, solar and other ambient energy, increasingly employing several modes in one device to reduce or eliminate battery use.
In industrial applications, leaders in wireless sensors such as GE and Emerson are moving on to true meshed networks, which Dust Networks manages with remarkably low power. This is very significant because it makes incorporating energy harvesting easier. Active RFID leader Identec Solutions has moved into real-time location and wireless sensing, and PMG Perpetuum's electrodynamic wide-band vibration harvesting is now succeeding in industrial sensing applications.
Automotive and urban
Automotive and urban uses of harvesting are now very varied and a rapidly growing market. Regenerative braking in cars and the solar panel on the roof are now yesterday's story; next comes Levant Power Systems adding energy-harvesting shock absorbers. BBN Technologies is developing an urban-scale wireless network, and Novotech has roadway power systems based on harvesting. Most are based on harvesting movement, but automotive thermoelectric generators on the engine and exhaust are nearing market readiness.
Buildings now widely employ wireless controls and sensors that use batteries. Many have moved on to use energy harvesting, sometimes without any batteries. Indeed, more than 100,000 buildings in the world now have wireless sensors and actuators with energy harvesting and no batteries. Power from light, heat and movement are favorites here but the $20 billion Schneider Electric of France and others have demonstrated how thermoelectrics has a place too. In addition, Powerleap has self-powered smart flooring solutions. However, although China is installing 485 million smart meters, most will have hard wired communication for now.
Health care is a sector where there is not yet a lot to see in terms of wireless sensing let alone using energy harvesting. Yet a few percent of people die when cut open to change their heart defibrillator or pacemaker electronics. Finding a better way can be a matter of life and death in other procedures as well. Tagsense now has a diagnostic skin patch. Biorasis has a continuous glucose monitoring device. Massachusetts Institute of Technology is developing embedded and wearable sensor networks for sports medicine, smart utilities and interactive media such as human-computer interfaces that will help the disabled. GeorgiaTech is developing self powered implantable nano devices. There is a tsunami wave of eagerly expected inventions about to hit the health care sector. The more enthusiastic may even say, "Superwoman here we come."
In logistics, the leader in using active RFID with sensors on military supplies and equipment and in heavy civilian logistics is the Lockheed Martin Company Savi which now envisages tracking just about everything wirelessly. Indeed, Omnisense has developed seamless geolocation in a wireless sensor network.
Seeking Large Wireless Sensor Networks
Large wireless sensor networks are needed on trees to monitor forest fires and for holistic management of oil refineries, undersea prospecting and extraction, pollution outages and much more besides. Consequently, coping with more wireless sensor nodes and making them maintenance-free for 20 years is concentrating the mind, as is getting longer range. Contributions here come from BIMAQ on large sensor networks and from Princeton University on low-cost piezoelectric ribbons. Add Northeastern University, progressing capacitive energy storage with much longer life than that of a rechargeable battery; Teledyne Benthos, doing energy harvesting on the sea floor; and Voltree Power, creating electricity from living trees.
Allied Technologies Have a Place
There has been mixed success with beaming electric power to vehicles and devices, from the useful cordless toothbrush to the unsuccessful coils in the road to power electric vehicles. There is more to come with these technologies and they are part of the equation even if not quite within our definition of energy harvesting. Watch Leggatt & Platt, Proxima RF and Powercast. Widetronix is manufacturing small millimeter-sized, beta emission "batteries" with more than 25 years life for nanowatt to milliwatt applications in defense, medical, and logistics sectors, totaling $4 billion in addressable market, it estimates.
Many of these new markets are already substantial. The only available report analyzing the whole energy harvesting market, the IDTechEx report "Energy Harvesting 2010-2020," forecasts a market of over US$2 billion in 2016 just for the harvesting elements, excluding power storage and electronic interfaces. We see nearly 10 billion energy harvesting devices being sold in 2020. Energy harvesting on wireless sensors is only a modest part of this number, but their sophistication will often justify premium pricing. IDTechEx forecasts the market for Wireless Sensor Networks (truly meshed ones) in its report "Wireless Sensor Networks 2010-2020." WSN is otherwise known as Third Generation Active RFID. That report covers WSN with and without harvesting, and it shows a market of over $2.7 billion in 2020, larger than that for Real Time Locating Systems (RTLS) at that time. RTLS is Second Generation Active RFID. For more, see the IDTechEx report "Real Time Locating Systems 2010-2020."
Researchers Develop 'BlackForest' To Collect, Correlate Threat Intelligence

Researchers at the Georgia Tech Research Institute develop the BlackForest system to help organizations uncover and anticipate cyberthreats.
Whether it's on the ground or in cyberspace, knowing that an army is going to attack you ahead of time is a nice advantage to have.
That idea is the linchpin of BlackForest, a new cyber intelligence collection system developed by experts at the Georgia Tech Research Institute (GTRI). The system is meant to complement other GTRI systems that are designed to help companies and other organizations deal with sophisticated attacks.
The system works by collecting information from a variety of sources on the public Internet, such as hacker forums and other sites where malware authors and others congregate. The system then connects the information and relates it to past activities to help organizations figure out if and how they are being targeted.
Users can identify sources of information along with keywords to focus on, says Christopher Smoak, a research scientist in the GTRI's Emerging Threats and Countermeasures Division.
"The system collects information from those sources and builds a common picture of the linkages provided by the information," he tells Dark Reading. "Analysts may then utilize the interface to customize the relationships generated if desired. Further, the system also integrates a number of automated analysis mechanisms to provide a baseline clustering, classification, and correlation capability."
For instance, "we may be interested in tying a username on a forum to a user in an IRC channel," he says. "This can help us in identifying interesting people, tools, and information through various linkages. Identifying someone on a forum that has previously posted credit card information as being related to someone active in IRC speaking of a future attack may lead us to conclusions about the type, scale, and potential target for such an attack."
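That kind of cross-source linkage can be sketched simply (illustrative Python, not BlackForest's implementation; the handles and activities are invented):

```python
from collections import defaultdict

# Sightings collected from open sources: (source, handle, observed activity)
sightings = [
    ("forum", "crimzon", "posted stolen card data"),
    ("irc",   "crimzon", "discussed an upcoming attack"),
    ("forum", "helpful_dev", "asked a compiler question"),
]

by_handle = defaultdict(set)
for source, handle, _activity in sightings:
    by_handle[handle].add(source)

# Handles seen in more than one source become candidates for analyst review.
linked = {h for h, sources in by_handle.items() if len(sources) > 1}
```

A real system would weight such links and fold in aliases, but even this crude join surfaces the actor worth watching.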
As another example, if attackers are coordinating a DDoS attack via social media, the BlackForest system can measure the scale of involvement, as well as who is participating, who is coordinating, and other attack specifics. This can be used to prevent attacks as organizations learn more about common methodologies for communication and coordination.
There is also value in organizations tracking certain forums to see whether data has leaked.
"You have to monitor what's out in the wild that your company or organization owns," Ryan Spanier, head of the GTRI's Threat Intelligence Branch, said in a press release. "If you have something of value, you will be attacked. Not all attacks are successful, but nearly all companies have some computers that have been compromised in one way or another. You want to find out about these as soon as possible."
Smoak offered a hypothetical example about a company named Acme that wants to protect itself. "We monitor open-source data to identify any references to Acme. All of these references are automatically cross-referenced with other collected data, which happens to provide a linkage to a known threat actor. We can utilize what we already know about this actor to identify potential targets and techniques."
Two other GTRI cyber security systems are already available -- Apiary, a malware intelligence system that helps corporate and government security officials share information about the attacks they are fighting, and Phalanx, which helps fight spear phishing attacks.
"We want to provide something that is predictive for organizations," Spanier said. "They will know that if they see certain things happening, they may need to take action to protect their networks."
Brian Prince is a freelance writer for a number of IT security-focused publications. Prior to becoming a freelance reporter, he worked at eWEEK for five years covering not only security, but also a variety of other subjects in the tech industry.
Spyware goes by many names, including adware, malware, crimeware, scumware and snoopware, but no matter what you call it, its purpose is still the same: to creep into your computer files and steal your personal information.
Once the information is in their hands, hackers can steal your identity, use your credit cards, siphon funds from your bank accounts, and more.
Simply put: it’s bad news and you want nothing to do with it.
The good news is that spyware prevention is possible — and there are many ways to keep these dangerous programs at bay.
In addition to installing the right software, consumers can practice these computer security tips from Webroot:
Download software directly from the source. A primary distributor of spyware infections is free, pirated programs downloaded from file-sharing sites that have been booby-trapped with malware. Set your browser security settings to “high” to protect yourself from “drive-by” downloads and automatic installations of unwanted programs.
Avoid questionable websites, such as those featuring adult material. They’re notorious for spreading spyware threats and causing users problems.
Use a firewall and be suspicious of email and IM. For instance:
- Don’t open attachments unless you know the sender and are expecting a file from him or her.
- Delete messages you suspect are spam (don’t even open them).
- Avoid clicking on links within messages.
- Do not provide personal information to unsolicited requests — even if they seem legitimate. Instead, if you receive a request for personal information from your bank or credit card company, contact that financial institution directly, but do not click on a link embedded in the email message. | <urn:uuid:1d1152be-b3d2-463b-af0f-4b51dd351f68> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/01/21/spyware-prevention-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919295 | 362 | 2.734375 | 3 |
It seems you can't go anywhere these days without hearing about Big Data. It's a broad term used to describe the challenge of working with massive complicated data sets. But the problem with looking at data from the point of view of volume or size is that it misses some of the particular benefits of finding those little bits of valuable information.
Some might describe finding bite-sized pieces of information similar to that of finding a needle in a haystack, but recent advancements in data visualization design and predictive analytics have made this much easier to do. Today, various data visualization tools can be used to not only assemble all of the available information and data inputs, but also output a complete illustrated vision. The use of these sorts of visuals can help key stakeholders see and act upon the information presented in a much more timely fashion.
A great example of this is a 2012 multistate outbreak of E. coli. The use of various analytics led public health and agriculture officials to identify the source of the outburst and a means of responding to it in record time.
After gathering medical records – and the ages, locations, and recent dietary patterns of patients who had been treated – visualization helped bring to light that the majority of people suffering from E. coli illnesses had eaten romaine lettuce that was sold primarily through the same grocery chain across different states.
From there, public health and agriculture officials were able to identify the farm, or farms, where the lettuce sold at those grocery stores was produced, and ensure that farm workers, distributors, and grocery staff were educated on the proper produce handling and hygiene recommendations.
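Conceptually, the cross-referencing that surfaced the romaine lettuce link is a frequency count of shared exposures across cases. A toy sketch, with invented patient records:

```python
from collections import Counter

# Invented case records for illustration. Each record lists
# the foods a patient reported eating and where they shopped.
cases = [
    {"foods": {"romaine lettuce", "chicken"}, "store": "ChainA"},
    {"foods": {"romaine lettuce", "rice"}, "store": "ChainA"},
    {"foods": {"romaine lettuce"}, "store": "ChainA"},
    {"foods": {"beef", "rice"}, "store": "ChainB"},
]

def most_common_exposure(cases):
    """Count how often each food appears across cases; the food
    shared by the most patients is the lead suspect."""
    counts = Counter(food for case in cases for food in case["foods"])
    return counts.most_common(1)[0]

food, n = most_common_exposure(cases)
print(food, n)  # romaine lettuce 3
```

Real outbreak analysis layers statistical rigor and visualization on top of this, but the underlying question is the same: which exposure do the most cases have in common?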
Now, don't misunderstand me, there is certainly value to a holistic view of information as well. This is particularly true when it comes to the visualization and understanding of data you're working with. Regardless of the scope of your viewpoint, we should always attempt to comprehend the sum of the parts as intimately interconnected to the whole of the entire data set. Yet the challenges still remain for many corporate users of data and analytics. Getting the data is easy; using and understanding it is hard.
Thin films are used mainly in photovoltaic solar cells, microelectromechanical systems (MEMS), semiconductors and electrical applications (circuit boards), and optical coatings. Earlier, they were mostly used for plating and coating purposes. In recent times, their application is mostly in photovoltaic solar cells because of their smaller size and higher efficiency.
The Asia-Pacific market for thin films material is estimated to grow from $1,458.6 million in 2013 to $2,448 million by 2018, at a CAGR of 10.9% from 2013 to 2018. Asia-Pacific’s growing demand for energy and basic material will propel the region to increase the production of metals, chemicals, and paper. With the increasing infrastructure expenditure, the thin films market in Asia-Pacific is bound to grow significantly, and will outpace other regions. The market will grow during the next five years due to the increased awareness of the benefits of solar energy. The drivers of this market include high demand from current and emerging applications, development and modernization of infrastructure, increased demand for efficiency and miniaturization, advancement in technology, and decreasing raw material prices.
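The quoted growth rate can be sanity-checked with the standard CAGR formula, (end/start)^(1/years) − 1, applied to the report's own figures:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Figures from the report: $1,458.6M (2013) to $2,448M (2018)
rate = cagr(1458.6, 2448.0, 5)
print(f"{rate:.1%}")  # 10.9%
```

The result matches the 10.9% CAGR the report states, so the endpoint figures and the growth rate are internally consistent.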
The report, ‘Asia-Pacific Thin Films Material Market’, defines and segments the market with analyses and projections of size with respect to value. It is analyzed in terms of revenue ($ million) for all regions and their respective countries. Revenues from each country have been broken down by types of thin film material and the end-user industry.
This report provides a comprehensive review of major market drivers, restraints, opportunities, winning imperatives, challenges, and key issues in the Asia-Pacific market for thin film. It covers the market and its trends in key countries like China, India, and Japan. It also covers major end-user industries, namely, photovoltaic solar cells, MEMS, semiconductors and electrical (circuit boards), optical coating, and others. The types of thin films that are identified and included in the report are cadmium telluride (CdTe), copper indium gallium selenide (CIGS), amorphous silica, and other types. The types of deposition processes—physical and chemical—are also covered in the report.
The report on the Asia-Pacific thin film material market also provides an extensive competitive landscape of the companies operating in this market. It includes the company profiles of and competitive strategies adopted by various market players, including Anwell Solar, Hanergy Holding Group Limited, KANEKA Solar Energy, Moser Baer India Ltd., Solar Frontier, Suntech Power Holdings Co., Ltd., and Trony Solar Holdings Co., Ltd.
Report Customization Options
Along with market data, you can also customize MMM offerings that are in keeping with your company’s specific needs. Customize your report on the Asia-Pacific thin film material market to get an insight into all-inclusive industry standards and a deep-dive analysis of the following considerations:
- Case studies on the top market player that holds 80% of the Asia-Pacific market share, indicating its strategies, product innovations, and methods used to target specific markets
- Technical details on the applications of thin film material market (conversion rates, $/watt rate, life time PV cell)
- Additional information on second-generation photovoltaic cells (apart from existing information on third-generation photovoltaic cells), which will help clients obtain a clear overview of the market
- Government policies for the raw material used for thin films (cadmium, indium)
- Trends in renewable energy and country-specific targets
- Comparison of wind and water energy with solar energy as per $/watt, conversion rates, and feasibility
- Analysis of non-solar application of Asia-Pacific’s thin film material market and the scope of growth in these nascent markets (MEMS, semiconductors)
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement | <urn:uuid:a5adb58b-d662-43d4-9cc2-75dedff8621a> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market/asia-pacific-thin-films-material-7544448702.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913899 | 863 | 2.609375 | 3 |
By now you’re likely familiar with PUE, or Power Usage Effectiveness, an industry standard measurement for the energy efficiency of a data center. Despite some claims that PUE is easily manipulated or not enough to judge full environmental impact, many data centers (including Green House Data) are using PUE to measure efficiency.
The Green Grid, a consortium of technology companies who aim to improve data center efficiency, has collaborated with industry groups around the world to develop several new metrics to measure carbon emissions and energy use in the data center, including GEC, ERF, CUE, and DCeP. What are these new measurements, and how does Green House Data stack up?
First, a quick overview of how the standards were developed. The Green Grid is a nonprofit organization founded in 2006, consisting of technology industry leaders including AMD, Dell, EMC, Microsoft, and IBM. In order to address inconsistencies in the application of PUE and other metrics, they gathered together with the US Department of Energy; the European Commission Joint Research Centre Data Centres Code of Conduct; Japan’s Ministry of Economy, Trade and Industry; and Japan’s Green IT promotion council.
Their latest release is an agreement regarding data center productivity and includes the methodology for several metrics, originally developed in 2012. The main focus of the agreement is for data center operators to begin measuring DCeP, or Data Center Energy Productivity.
The goal is to go beyond pure efficiency to reflect how much actual productive work happens in a data center relative to the energy use, green energy sources, and carbon emissions. The more metrics in place to measure these factors, the more ways data center managers can tweak their operations to maximize energy use. Here are the main metrics detailed in the agreement.
Power Usage Effectiveness (PUE)
The classic. PUE measures the total energy use of the data center compared to the energy used by IT equipment. In other words, for every watt used to power IT equipment, how much is used for cooling, lighting, and additional infrastructure? Green House Data averages about 1.25 PUE across all data centers, and our new facility will have a low PUE of 1.14.
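The PUE calculation itself is a single ratio: total facility energy divided by IT equipment energy. A quick sketch, with made-up kWh figures chosen to land on the 1.25 average cited above:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided
    by IT equipment energy. A PUE of 1.0 would mean every watt
    goes to IT gear, with none spent on cooling or lighting."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,250 MWh total facility draw,
# 1,000 MWh of it consumed by IT equipment
print(round(pue(1_250_000, 1_000_000), 2))  # 1.25
```

The closer the two inputs are, the closer PUE gets to 1.0, which is why overhead systems like cooling are the main lever for improving it.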
Green Energy Coefficient (GEC)
GEC is a measurement of how much of a facility's energy use is sourced from green providers like wind, solar, or geothermal plants. This includes “any form of renewable energy for which the data center owns the rights to a green energy certificate or renewable energy certificate”. The amount of green energy purchased or consumed divided by the total energy consumption equals the GEC. Green House Data has a GEC of 1.0 as 100% of our energy use is covered by RECs.
Energy Reuse Factor (ERF)
The ERF of a data center reflects how much energy is exported for reuse outside of data center operations. For example, heat given off in the hot aisle of a data center that is then piped to heat other office areas would be reuse energy. Any energy that is reallocated outside of the data center floor and the infrastructure support (cooling, etc) is divided by the total energy consumption of the facility to find the ERF.
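Both GEC and ERF are simple ratios against total facility consumption. A short sketch, with hypothetical kWh figures:

```python
def gec(green_kwh, total_kwh):
    """Green Energy Coefficient: share of consumption covered
    by green energy purchases or certificates (max 1.0)."""
    return green_kwh / total_kwh

def erf(reused_kwh, total_kwh):
    """Energy Reuse Factor: share of consumption exported for
    reuse outside the data center, e.g. waste-heat piping."""
    return reused_kwh / total_kwh

# Hypothetical facility: 1,000 MWh total, fully covered by
# RECs, with 50 MWh of waste heat piped to office space
print(gec(1_000_000, 1_000_000))  # 1.0
print(erf(50_000, 1_000_000))     # 0.05
```

Note the two ratios pull in different directions: GEC rewards where the energy comes from, while ERF rewards what happens to it after the data center is done with it.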
Carbon Usage Effectiveness (CUE)
The most involved of all the metrics, CUE is also potentially the most interesting, as it quantifies the actual emissions involved compared to the work performed. This measurement currently excludes the emissions involved in producing the actual equipment, shipping it to the data center, construction and materials of the data center building, and so forth. It focuses merely on operational energy use and emissions.
CUE is calculated as Carbon Dioxide Emission Factor (CEF) multiplied by PUE. The CEF is found from Energy Star data in the United States and is the kg of CO2 produced for each kilowatt-hour of electricity used. This varies a large amount depending on which grid system is used by the data center, which in turn is entirely based on location.
To calculate the CEF, data center operators should find their energy grid subregion (in our case, the Rocky Mountain Power Area or RMPA) on the Energy Star table describing Indirect Greenhouse Gas Emissions Factors. Then take the kg CO2e/MBtu figure and convert it to a per-kWh value using 1 Btu = 0.293 Wh. For the RMPA, that would be:
254.6387 kg CO2e/MBtu ÷ 0.293 Wh/Btu = 869.07 kg CO2e/MWh = 0.86907 kg CO2e/kWh
In other words, Green House Data emits 0.87 kilograms of CO2 for every kilowatt-hour of electricity used based on the efficiency of the Rocky Mountain Power Area electric grid. This measurement is independent of the GEC as all energy credits are eventually fed into the standard power grid. This measurement isn't the most accurate or up-to-date, as the latest EPA chart is from 2010. Admittedly, grid improvements are few and far between.
The CEF calculated above is multiplied by PUE to get the total CUE. Green House Data has a cumulative average CUE of 1.0863. The new facility will have a CUE of 0.99.
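The conversion and multiplication above are easy to reproduce in a few lines; the constants (254.6387 kg CO2e/MBtu for the RMPA subregion, 0.293 Wh per Btu) are the ones given in the text:

```python
WH_PER_BTU = 0.293  # 1 Btu = 0.293 Wh

def cef_kg_per_kwh(kg_co2e_per_mbtu):
    """Convert an Energy Star emissions factor (kg CO2e per
    million Btu) into kg CO2e per kWh."""
    kg_per_mwh = kg_co2e_per_mbtu / WH_PER_BTU  # kg CO2e per MWh
    return kg_per_mwh / 1000                    # kg CO2e per kWh

def cue(cef, pue):
    """Carbon Usage Effectiveness = CEF x PUE."""
    return cef * pue

cef = cef_kg_per_kwh(254.6387)   # RMPA subregion factor
print(round(cef, 5))             # 0.86907
print(round(cue(cef, 1.25), 4))  # 1.0863
print(round(cue(cef, 1.14), 2))  # 0.99
```

The outputs reproduce the figures quoted in this post: the 0.86907 kg CO2e/kWh grid factor, the 1.0863 cumulative average CUE, and the 0.99 CUE projected for the new facility.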
Together these new metrics offer a more detailed picture of the efficiency of a data center facility, allowing operators to more closely examine energy use and how to release less carbon emissions. Although most data center providers aren’t widely reporting CUE, ERF, or GEC just yet, the Green Grid and the associated organizations helping to create these metrics continue to refine measurements of data center energy use and impact.
In the future, measurements like Data Center Energy Productivity (DCeP) could allow more detailed reporting on the amount of actual work performed by the servers compared to their energy use. The new agreement does include a formula to measure DCeP, but it is contingent on "tasks initiated" during the assessment window, leaving some ambiguity at hand (of course, plenty of people argue that PUE is also highly ambiguous). Measuring the number of tasks completed can be difficult for service providers as well, as insight into customer environments can vary.
The following section of the agreement includes some suggestions for Data Center Productivity Proxies, or simpler, more comparable methods of measuring productivity.
These metrics are all great ways to maximize the use of energy and existing equipment, but can vary dramatically depending on the goal of each data center (research vs. development, for example).
What do you think about the metrics recommended by the Green Grid to measure data center efficiency and carbon emissions? Let us know @greenhousedata on Twitter!
Posted By: Joe Kozlowicz | <urn:uuid:302857cb-a9a9-415c-98b3-15c4d68da1f7> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/beyond-pue-new-metrics-for-data-center-energy-efficiency | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00488-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931601 | 1,386 | 3.171875 | 3 |
The Internet of Things will connect every thing with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node – businesses, homes, vehicles – moment to moment, in real time. Big Data, in turn, will be processed with advanced analytics, transformed into predictive algorithms, and programmed into automated systems to improve thermodynamic efficiencies, dramatically increase productivity, and reduce the marginal cost of producing and delivering a full range of goods and services to near zero across the entire economy.
The Internet of Things European Research Cluster, a body set up by the European Commission, the executive body of the European Union, to help facilitate the transition into the new era of “ubiquitous computing,” has mapped out some of the myriad ways the Internet of Things is already being deployed to connect the planet in a distributed global network.
The IoT is being introduced across industrial and commercial sectors. Companies are installing sensors all along the commercial corridor to monitor and track the flow of goods and services. For example, UPS uses Big Data to keep up to the moment with its 60,000 vehicles in the United States. The logistics giant embeds sensors in its vehicles to monitor individual parts for signs of potential malfunction or fatigue so they can be replaced before a costly breakdown on the road occurs.¹⁹
Sensors record and communicate the availability of raw resources, inform the front office on current inventories in the warehouses, and troubleshoot dysfunctions on the production lines. Other sensors report on the moment to moment changes in the use of electricity by appliances in businesses and households, and their impact on the price of electricity on the transmission grid. Electricity consumers can program their appliances to reduce their power consumption or switch off during peak periods of electricity use on the power lines to prevent a dramatic spike in the electricity price or even a brownout across the grid and receive a credit on their next month’s electricity bill.
Sensors in retail outlets keep the sales and marketing departments apprised of which items are being looked at, handled, put back on shelves, or purchased to gauge consumer behavior. Other sensors track the whereabouts of products shipped to retailers and consumers and keep tabs on the amount of waste being recycled and processed for reuse. The Big Data is analyzed 24/7 to recalibrate supply chain inventories, production and distribution processes, and to initiate new business practices to increase thermodynamic efficiencies and productivity across the value chain.
The IoT is also beginning to be used to create smart cities. Sensors measure vibrations and material conditions in buildings, bridges, roads, and other infrastructure to assess the structural health of the built environment and when to make needed repairs. Other sensors track noise pollution from neighborhood to neighborhood, monitor traffic congestion on streets, and pedestrian density on sidewalks to optimize driving and walking routes. Sensors placed along street curbs inform drivers of the availability of parking spaces. Smart roads and intelligent highways keep drivers up to date on accidents and traffic delays. Insurance companies are beginning to experiment with placing sensors in vehicles to provide data on the time of day they are being used, the locations they are in, and the distances traveled over a given period of time to predict risk and determine insurance rates.²⁰ Sensors embedded in public lighting allow them to brighten and dim in response to the ambient lighting in the surrounding environment. Sensors are even being placed in garbage cans to ascertain the amount of rubbish in order to optimize waste collection.
The Internet of Things is quickly being applied in the natural environment to better steward the Earth’s ecosystems. Sensors are being used in forests to alert firefighters of dangerous conditions that could precipitate fires. Scientists are installing sensors across cities, suburbs, and rural communities to measure pollution levels and warn the public of toxic conditions so they can minimize exposure by remaining indoors. In 2013, sensors placed atop the U.S. Embassy in Beijing reported hour to hour changes in carbon emissions across the Chinese capital. The data was instantaneously posted on the Internet, warning inhabitants of dangerous pollution levels. The information pushed the Chinese government into implementing drastic measures to reduce carbon emissions in nearby coal-powered plants and even restrict automobile traffic and production in energy-intensive factories in the region to protect public health.
Sensors are being placed in soil to detect subtle changes in vibrations and earth density to provide an early warning system for avalanches, sink holes, volcanic eruptions, and earthquakes. IBM is placing sensors in the air and in the ground in Rio de Janeiro to predict heavy rains and mudslides up to two days in advance to enable city authorities to evacuate local populations.²¹
Researchers are implanting sensors in wild animals and placing sensors along migratory trails to assess environmental and behavioral changes that might affect their well-being so that preventative actions can be taken to restore ecosystem dynamics. Sensors are also being installed in rivers, lakes, and oceans to detect changes in the quality of water and measure the impact on flora and fauna in these ecosystems for potential remediation. In a pilot program in Dubuque, Iowa, digital water meters and accompanying software have been installed in homes to monitor water use patterns to inform homeowners of likely leaks as well as ways to reduce water consumption.²²
The IoT is also transforming the way we produce and deliver food. Farmers are using sensors to monitor weather conditions, changes in soil moisture, the spread of pollen, and other factors that affect yields, and automated response mechanisms are being installed to ensure proper growing conditions. Sensors are being attached to vegetable and fruit cartons in transit to both track their whereabouts and sniff the produce to warn of imminent spoilage so shipments can be rerouted to closer vendors.²³
Physicians are even attaching or implanting sensors inside human bodies to monitor bodily functions including heart rate, pulse, body temperature, and skin coloration to notify doctors of vital changes that might require proactive attention. General Electric (GE) is working with computer vision software that “can analyze facial expressions for signs of severe pain, the onset of delirium or other hints of distress” to alert nurses.²⁴ In the near future, body sensors will be linked to one’s electronic health records, allowing the IoT to quickly diagnose the patient’s likely physical state to assist emergency medical personnel and expedite treatment.
Arguably, the IoT’s most dramatic impact thus far has been in security systems. Homes, offices, factories, stores, and even public gathering places have been outfitted with cameras and sensors to detect criminal activity. The IoT alerts security services and police for a quick response and provides a data trail for apprehending perpetrators.
The IoT embeds the built environment and the natural environment in a coherent operating network, allowing every human being and every thing to communicate with one another in searching out synergies and facilitating interconnections in ways that optimize the thermodynamic efficiencies of society while ensuring the well-being of the Earth as a whole. If the technology platforms of the First and Second Industrial Revolutions aided in the severing and enclosing of the Earth’s myriad ecological interdependencies for market exchange and personal gain, the IoT platform of the Third Industrial Revolution reverses the process. What makes the IoT a disruptive technology in the way we organize economic life is that it helps humanity reintegrate itself into the complex choreography of the biosphere, and by doing so, dramatically increases productivity without compromising the ecological relationships that govern the planet. Using less of the Earth’s resources more efficiently and productively in a circular economy and making the transition from carbon-based fuels to renewable energies are defining features of the emerging economic paradigm. In the new era, we each become a node in the nervous system of the biosphere.
While the IoT offers the prospect of a sweeping transformation in the way humanity lives on earth, putting us on a course toward a more sustainable and abundant future, it also raises disturbing issues regarding data security and personal privacy, which will be addressed at length in chapter 5 and in other chapters throughout the book.
Some of the leading information technology companies in the world are already at work on the build-out of the Internet of Things. General Electric’s “Industrial Internet,” Cisco’s “Internet of Everything,” IBM’s “Smarter Planet,” and Siemens’ “Sustainable Cities” are among the many initiatives currently underway to bring online an intelligent Third Industrial Revolution infrastructure that can connect neighborhoods, cities, regions, and continents in what industry observers call a global neural network. The network is designed to be open, distributed, and collaborative, allowing anyone, anywhere, and at any time the opportunity to access it and use Big Data to create new applications for managing their daily lives at near zero marginal cost.
From The Zero Marginal Cost Society by Jeremy Rifkin. Copyright © 2014 by the author and reprinted by permission of Palgrave Macmillan, a division of Macmillan Publishers Ltd.
A complete list of sources can be found in the book.
Jeremy Rifkin, author of The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, one of the most popular social thinkers of our time, is a bestselling author whose 20 books have been translated into 35 languages. Rifkin is an advisor to the European Union and to heads of state around the world and a lecturer at the Wharton School’s Executive Education Program at the University of Pennsylvania. For more information, please visit http://www.thezeromarginalcostsociety.com. | <urn:uuid:922d1eb9-7b62-4b28-802a-e6737a748574> | CC-MAIN-2017-04 | http://data-informed.com/internet-of-things-eclipse-capitalism/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00120-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927257 | 2,003 | 3.234375 | 3 |
Is that really broadband? A look at the technologies in play.
- By Kevin McCaney
- Mar 12, 2013
The National Broadband Plan has set ambitious goals for high-speed Internet connectivity across the country, and agencies, cities and other municipalities have hopped on the broadband bandwagon.
But what exactly counts as broadband? The Federal Communications Commission originally designated broadband as anything faster than 200 kilobits/sec, which basically meant anything that wasn't the old dial-up modem, then upgraded it to 768 kilobits/sec downstream and 200 kilobits/sec upstream. The National Telecommunications and Information Administration still considers 768 kilobits/sec to be broadband at the most basic level, but the FCC in 2010 set the bar for the national broadband plan at 4 megabits/sec downstream and 1 megabit/sec upstream.
The range of available broadband technologies is still wide. Below is a rundown of the most common types of broadband, their range of transmission speeds and, to put it into context for the everyday user, an estimate of how long, on average, each technology would take to download a 1M book, 4M song and 6.14G movie, drawn from the trove of information available from NTIA's National Broadband Map.
Just as there is no single type of broadband, there are no guaranteed speeds for any broadband technology. Upload and download speeds often differ, and their range can vary depending on a number of factors, such as the standards being used. For wireless, speeds also are influenced by internal equipment on the customer's end, such as whether your WiFi router is 802.11a, b, g, n, and so on. Note also: The fastest speeds listed with each technology are ideal, almost theoretical, peaks that everyday users are not likely to reach.
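Every download-time figure below comes from the same arithmetic: file size converted to bits, divided by link speed. A small calculator, assuming decimal megabytes and a fully utilized link, lands close to (though not always exactly on) NTIA's rounded estimates:

```python
def download_seconds(size_mb, speed_mbps):
    """Seconds to move size_mb megabytes over a speed_mbps link,
    assuming decimal units and full link utilization."""
    bits = size_mb * 8_000_000  # MB -> bits (decimal)
    return bits / (speed_mbps * 1_000_000)

# 1 MB book over a 1.544 Mbit/s T-1 line
print(round(download_seconds(1, 1.544), 1))   # 5.2
# 6,140 MB movie at 4 Mbit/s, the FCC's broadband floor
print(round(download_seconds(6140, 4) / 60))  # 205 (minutes)
```

Real-world times run longer than these ideals because of protocol overhead, shared links, and congestion, which is part of why the published estimates vary so widely.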
Old copper wireline
The National Broadband Map's list of broadband technologies includes venerable, if past-their-prime, technologies that use copper phone lines to transmit data, including T-1, which was once considered the starting point for broadband, and ISDN, which hasn't made the grade for a long time.
T-1
Speed: 1.544 megabits/sec
Book: 5.3 seconds
Song: 21.3 seconds
Movie: 9 hours, 6 minutes
ISDN
Speed: 64 kilobits/sec to 128 kilobits/sec
Book: 40 seconds
Song: 2 minutes, 40 seconds
Movie: 68 hours, 16 minutes (read the screenplay instead)
Digital subscriber lines also transmit data over traditional copper phone lines, but at greater speeds than T-1 or ISDN. Asymmetric DSL, used most often by home subscribers, has faster download speeds than upload speeds, since it is designed primarily for consumers of information. Symmetric DSL's upload and download speeds are the same, and it is more common in the enterprise. Although it has been around a long time and uses copper wires, DSL speeds are still getting faster and DSL is still the fastest-growing type of wired broadband.
Speed: 500 kilobits/sec to 40 megabits/sec
Book: 2.7 to 5.3 seconds
Song: 10.7 to 21.3 seconds
Movie: 4 hours, 33 minutes to 9 hours, 6 minutes
Cable modem service uses the coaxial cables and other cable TV infrastructure to deliver Internet access. The current standard is DOCSIS 3.0 (Data Over Cable Service Interface Specifications), but some older DOCSIS standards are sometimes used, leading to a wide range in available speeds.
Speed: 512 kilobits/sec to 20 megabits/sec
Book: 0.3 seconds to 10.4 seconds
Song: 1.3 seconds to 41.7 seconds
Movie: 33 minutes to 9 hours, 6 minutes
Example: Cable broadband providers joined the FCC in an effort launched in November 2011 to provide inexpensive service to low-income families.
Fiber-optic service, the fastest broadband technology, converts electrical signals into light and sends the light through glass fibers about the diameter of a human hair.
Speed: 5 megabits/sec to 150 megabits/sec
Book: 0.1 seconds to 1.3 seconds
Song: 0.3 seconds to 5.3 seconds
Movie: 8 minutes to 2 hours, 16 minutes
Example: Lincoln, Neb., extends its fiber-optic municipal network.
Electric power line
Sometimes referred to as broadband-over-power lines (BPL), this technology sends data over the power lines connected to a consumer's residence, where special modems from the power companies provide access to the Internet.
Speed: 500 kilobits/sec to 3 megabits/sec
Book: 10.4 seconds to 21.3 seconds
Song: 5.3 seconds to 41.7 seconds
Movie: 2 hours, 16 minutes to 17 hours, 47 minutes
Example: The economic stimulus in 2009 included funding for efforts to establish BPL networks for rural customers in Alabama, Indiana, Michigan and Virginia.
Terrestrial fixed wireless, unlicensed and licensed
Unlicensed wireless provides broadband service to a specific location using spectrum shared by ISPs. Licensed wireless, as the name suggests, uses spectrum licensed to a provider. Terrestrial fixed wireless includes WiFi and WiMax.
Speed: Up to 40 megabits/sec.
Book: 0.2 seconds
Song: 0.6 seconds
Movie: 16 minutes
Examples: Houston installs high-speed WiMax coverage for the 640-square-mile city; Eduroam gives university users secure WiFi access to the cloud; and Staunton, Va., builds a public WiFi network.
Terrestrial Mobile Wireless
Terrestrial mobile wireless uses spectrum licensed to providers and includes 4G LTE and mobile WiMax.
Speed: LTE: 100 megabits/sec; WiMax: 40 megabits/sec.
Book: 0.1 seconds
Song: 0.3 seconds
Movie: 8 minutes
Examples: Police in Tampa and St. Petersburg, Fla., used a first-of-its-kind, law enforcement-only LTE cellular network as part of security operations during last year's Republican National Convention; and Denver-area emergency crews are set to get an LTE broadband system.
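Pulling the peak speeds quoted above together, a few lines of code can reproduce the movie-download comparison across technologies. As before, the ~6.3 GB movie size is inferred from the T-1 row rather than stated in the article, and these are ideal figures that ignore real-world overhead:

```python
# Peak downstream speeds quoted in this list (Mbps), best case per technology.
PEAK_MBPS = {
    "ISDN": 0.128,
    "T-1": 1.544,
    "BPL": 3,
    "Cable (DOCSIS)": 20,
    "DSL": 40,
    "Fixed wireless": 40,
    "LTE": 100,
    "Fiber": 150,
}

MOVIE_MBIT = 50_400  # assumed ~6.3 GB HD movie, inferred from the T-1 row

def movie_minutes(mbps: float) -> float:
    """Ideal minutes to download the movie at a given link speed."""
    return MOVIE_MBIT / mbps / 60

# Print slowest to fastest, mirroring the ordering of the article.
for tech, mbps in sorted(PEAK_MBPS.items(), key=lambda kv: kv[1]):
    print(f"{tech:>16}: {movie_minutes(mbps):8.1f} min")
```

At LTE's quoted 100 megabits/sec this works out to about 8 minutes for the movie, which matches the figure in the terrestrial mobile wireless entry above.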
Reducing PC energy consumption can save electricity costs and reduce environmental impact. Small enterprises running a fleet of Windows XP desktop PCs and laptops can reduce power consumption costs by half by switching to Windows Vista. In this research note, we describe the results of lab tests performed by Info-Tech Research Group that compare power consumption for Windows XP versus Windows Vista.
Main points of discussion include:
- XP and Vista power consumption under high and low power states.
- The effect of Aero on Vista overall power consumption.
- Sleep mode improvements in Windows Vista.
- Group Policy differences between Windows XP and Windows Vista.
- Sample power running costs under Windows XP and Windows Vista.
- How power consumption translates into carbon dioxide emission reductions.
This note does not include discussion around conducting a total cost of ownership analysis of the upgrade to Vista. This note focuses on one aspect of the upgrade, which is the change in power consumption of the OS. | <urn:uuid:b0b12c30-fd3f-457c-90b2-71d78bb951bc> | CC-MAIN-2017-04 | https://www.infotech.com/research/power-consumption-contest-xp-vs-vista | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895024 | 190 | 2.734375 | 3 |
Networking 101: Understanding Tunneling
The computing world has become dependent on various types of tunneling. All remote access VPN connections use tunnels, and you'll frequently hear the geeks talking about SSH tunnels. You can accomplish amazing things with tunnels, so sit back and relax while you enjoy a gentle introduction to tunneling and its uses. If you're looking for IPSEC details, we'll cover that in a future Networking 101.
A tunnel is a mechanism used to ship a foreign protocol across a network that normally wouldn't support it. Tunneling protocols allow you to use, for example, IP to send another protocol in the "data" portion of the IP datagram. Most tunneling protocols operate at layer 4, which means they are implemented as a protocol that replaces something like TCP or UDP.VPN tunnels allow remote clients to tunnel into our network. This supports the previous notion of tunnels being used for "unsupported protocols," even though that may not be apparent. If we VPN into work to gain access to printers or file sharing, it's probably because ports 139 and 445 (the Windows mating ports) are blocked from the outside. They are, in effect, unsupported TCP ports across our border routers. But if we allowed IPSEC or PPTP across the border, to known VPN servers, then everything "just works."
Your packets destined for the Active Directory server's port 445 will be hidden with the VPN packets. When they reach the VPN server, it will demux (de-multiplex, AKA disassemble) the packet and then forward it onto the internal network. When it hits the internal network, the packet's source address is now the VPN server's internal IP, so that responses can go back to the VPN server. Other than that, the packet is exactly as you intended it at this point. Upon receiving a response, the VPN server will encapsulate that packet by adding the VPN headers, and then ship it back to you out its external interface.
A few interesting things to note about the VPN tunnel are: once your data hits the internal network it's already been unencrypted, and when your data is traversing the Internet there is extra "stuff" attached to the packet.
Unmentioned, but probably obvious, is that VPN protocols will also encrypt your data before transmission. It doesn't matter for understanding tunneling, but it's worth mentioning. Take notice that the encryption is not end-to-end, i.e. you and the server's communication are not truly secure. Surely it's secure from prying eyes between yourself and your work, but as soon as packets are shipped beyond the VPN server, they're once again unencrypted.
To understand the second interesting point, let's take a look at how basic IP encapsulation works. Conceptually, we're going to nest packets. That is, the data portion of the outer IP packet is going to contain an entire IP packet itself. Neat, isn't it? We've just described an IPIP tunnel: IP living in IP packets.Path MTU issues that crop up when people use tunnels.
A wonderfully geeky thing to do is "fire up an ssh tunnel." This means that you're using the SSH protocol to pass data around, tunneled. X11, i.e. a program that runs a GUI and requires that you have a display available, can be tunneled over SSH very easily. X clients (the window that pops up) will try connecting to a display. If you're SSH'd into a server with the right options set, your X connection attempts will be tunneled back to your local machine, where they can connect with your local X server. This "just works" for Unix to Unix connections if you're already running a window manager. If you're in Windows-land, you'll need to run an X server via cygwin or some commercial product. Give it a try by SSH'ing to something with 'ssh Y email@example.com' and run 'firefox' once you're in. It should display on your local computer, and it did so over an encrypted SSH tunnel!
Because it's easy to talk about OpenSSH's capabilities, and it's instructive for tunneling concepts, let's take a look at two other tricks with SSH.
You can forward a port from your computer to a remote computer, which has the result of tunneling your data over SSH in the process, making it secure. This may not seem useful, after all, why would I want a port on my computer being forwarded to another computer? The answer lies within some clarification. The port forwarding function of SSH works by first listening on a local socket for a connection. When a connection is made, SSH will forward the entire connection onto the remote host and port. This is a one-port VPN!
For example: 'ssh -L80:workserver.com:80 firstname.lastname@example.org'
This command creates an SSH connection to your workdesktop.com computer, but at the same time opens port 80 on your local machine. If you point your web browser at http://localhost, the connection will actually be forwarded through your SSH connection to your desktop, and sent onto the workserver.com server, port 80. This is very useful for accessing intranet-only sites from home, without connecting to a VPN first.
The latest OpenSSH version also supports tunneling IP over SSH. Actually, it supports Ethernet too, for the purposes of bridging two Ethernet broadcast domains together; encrypted over SSH! OpenSSH's Tunnel option allows people to set up fully functioning SSH-VPN tunnels that will work anywhere SSH works. It creates the tunnel interfaces on both sides, and the only manual configuration necessary is to adjust the routing table. If you want all traffic destined for your work's network to be sent through the encrypted tunnel, you simply add a route for that network and point it through the tunnel interface that SSH created automatically. This is truly the most hassle-free VPN setup available.
Tunnels tunnel other protocols. Hopefully these examples have provided enough insight into tunneling to spark an interest in some, or at least demystify this technology for others. Happy encapsulating. | <urn:uuid:db9cf9a2-512d-416f-873f-5f0338164cb9> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3624566/Networking-101-Understanding-Tunneling.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00240-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933042 | 1,275 | 3.71875 | 4 |
Use Cloud BPM to Meet National Institute of Standards and Technology Guidelines for Cloud Computing
Cloud computing is confusing. The term covers multiple deployment and service models, and the decision to “go cloud” requires thought around economic considerations and security. Luckily, the National Institute of Standards and Technology (NIST) has published a detailed document for anyone looking for information, definitions and guidance around cloud computing. While NIST’s “DRAFT Cloud Computing Synopsis and Recommendations” report has been prepared for use by Federal agencies, commercial IT and business leaders can also learn valuable lessons from federal models for IT and security. A review of the report shows that Appian’s Cloud BPM meets NIST’s key considerations for a variety of cloud models.
Cloud confusion arises, the report states, because “cloud computing is not a single kind of system, but instead spans a spectrum of underlying technologies, configuration possibilities, service models, and deployment models.” The NIST-established definition of cloud computing covers four deployment models:
- Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
- Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared objectives and concerns. Management and location options are the same as for a private cloud.
- Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
- Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.
It also describes three cloud service models:
- Cloud Software as a Service (SaaS): Using a provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a Web browser.
- Cloud Platform as a Service (PaaS). Creating in-house applications that are deployed onto the cloud
infrastructure. The consumer does not manage or control the underlying cloud infrastructure, but has control over the deployed applications.
- Cloud Infrastructure as a Service (IaaS). The provider provisions processing, storage, networks, and other fundamental computing resources for the consumer to deploy and run arbitrary software. The consumer does not control the underlying cloud infrastructure but has control over operating systems, storage and deployed applications.
Deciding what type of cloud computing is right for your organization requires examining these deployment and service model variables. It also requires weighing economic considerations (pay-for-use pricing, provisioning elasticity and elimination of large up-front costs) against any perceived lack of control. When you find the right mix, you’ll see that cloud computing for businesses and government makes sense on multiple levels. There are cost savings. There are time and resource savings. There are space savings. There are efficiencies that were pipe dreams before tapping into the cloud.
Of course, protecting your information and your clients’ information remains the top priority — as it should be. The report states, “As complex networked systems, clouds are affected by traditional computer and network security issues, [however] by imposing uniform management practices, clouds may be able to improve on some security update and response issues.”
While strict cloud security measures must be in place, this is true for any IT system. Computerworld recently reported on exaggerated cloud security fears, with former Federal CIO Vivek Kundra stating that “A lot of people are sort of driving this notion of fear around security…I think that’s been amplified, frankly, is because it preserves the status quo.” Kundra was fond of pointing out that the federal Recovery.gov site is hosted on Amazon’s EC2 cloud.
Appian’s Cloud BPM helps achieve cost-saving goals while maintaining the highest level of security across many cloud deployment and service models. We support multiple deployment models, as well as both SaaS and PaaS service models. And Appian Cloud provides reliability and security that matches – or exceeds – that of the best internally managed environments. Appian’s reliability and security guarantees include 99.5% uptime, SAS-70 Type II infrastructure audit reports, SAML or LDAP/AD integration for secure authentication and single sign-on, SSL encryption of all communication between systems, compliance with national data privacy laws through local hosting, and FISMA security certification. In short, we cover all your security needs in the cloud in order for you to do business.
Cloud computing is the wave of the future, but be careful: make sure the cloud technology you choose offers deployment and service flexibility, and meets the highest security standards. If you do that, nothing can stop your organization (and your partners and customers) from enjoying all the benefits cloud computing offers.
-Ben Farrell, Director of Corporate Communications | <urn:uuid:547a5a03-1786-47d2-809f-9aa340750439> | CC-MAIN-2017-04 | http://www.appian.com/blog/bpm/use-cloud-bpm-to-meet-national-institute-of-standards-and-technology-guidelines-for-cloud-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935757 | 1,035 | 2.671875 | 3 |
The downing of a Malaysian airliner by an anti-aircraft missile and air and land battles in Israel and Gaza raise concerns about whether these conflicts will expand to include cyber-attacks, perhaps even drawing other nations, including the United States, into a virtual battle with real-world consequences.
But it's unlikely we'll see a ramping up of cyber-attacks - outside of isolated disruptions of websites.
Despite all of the fears about nation-state critical infrastructure attacks ... what you see is the equivalent of a cyber cold war.
Let's first look at the conflict in Ukraine, which pits the Ukraine government against Russia and its separatist allies.
When Russia invaded Crimea earlier this year, the head of Ukraine's security service reportedly said the country's telecommunications system had been attacked, with equipment installed in Russian-controlled Crimea used to impede the mobile phones of members of parliament (see Cyber's Role in Ukraine-Russia Conflict). But that apparently was the most damaging cyber-attack in the conflict.
The use of cyberwarfare in the Ukraine conflict will expand only if it helps the narrative of either side, advancing the agenda of political leaders to win the hearts of specific constituencies at home or abroad.
"Each side, to the extent that they're considering the use of cyberwar at all, wants the cyberwar to support the narrative because at this point the narrative is probably the most important thing going here," says Martin Libicki, senior management scientist at the Rand Corp., a national security think tank.
Libicki hasn't seen any evidence that Ukraine and Russia or its separatists allies have engaged in serious cyberconflict. That may be because such virtual battles are out of sight; cyber-attacks occurred but failed; the adversaries sought to limit the conflict; or "both sides found cyber as a weapon isn't what it's cracked up to be," he says.
That's what the Russians might have learned when, reportedly, they tried to wage an attack against the Ukraine election system that, if successful, would have allowed the Russians to declare that the winner was a fascist.
"That would have been a cyber-attack in clear support of a narrative," Libicki says. "That narrative would have said, 'See, these Ukrainians really are fascists.' Nobody in the West would have bought it because we know how to balance elections with election polls. But that doesn't matter so much for the Russians, as long as their people accepted it. And so, a cyber-attack by Russia on Ukraine that does not support that narrative may not be particularly useful."
Another deterrent to the use of cyber-attacks is the principle of mutually assured destruction that kept the United States and the Soviet Union at bay during the Cold War because of the fear a nuclear attack could set off a global conflagration.
That's why Russia hasn't used cyber to disrupt Ukraine's natural gas pipelines and vice versa. "Disruption of Russian pipeline would bring with it the disruption of a Ukrainian pipeline," says Paul Rosenzweig, founder of homeland security consultancy Red Branch Consulting and former deputy assistant secretary for policy at the Department of Homeland Security. "Despite all of the fears about nation-state critical infrastructure attacks ... what you see is the equivalent of a cyber cold war."
For other nations, including the U.S., to be drawn into the conflict would require one or both sides to cause disruption to infrastructure through a cyber-attack. But with nothing for the Russians or Ukrainians to gain from such attacks, pulling other nations into the conflict is remote.
Social Media Battlefield
For the moment, at least, it appears that any potential battling in cyberspace would be mostly propaganda advanced through social media, trying to score points for their respective cause.
We've seen a bit of that already. At the United Nations, U.S. Ambassador Samantha Powers says Ukrainian separatists boasted of shooting down a plane on social media but later deleted the post after learning the aircraft carried civilians. It served the separatists objectives when they thought the downed jet belonged to the Ukrainian military, but not when they discovered the missile attack killed nearly 300 passengers and crew.
In the Israeli-Gaza conflict, Hamas loyalists sent text messages to Israelis' mobile phones, purportedly from Israeli security services, warning: "Rocket from Gaza hit petrochemical plant in Haifa, huge fire, possible chemical leak, advised to evacuate Haifa." Although there was no attack or huge fire, the fake message could have caused frayed nerves.
Isaac Ben-Israel, head of the Tel Aviv University's Yuval Neeman Workshop for Science, Technology and Security, tells the Times of Israel that cyberattacks against Israel have grown by 900 percent since the latest Israel and Hamas conflict.
Around the same time that Israeli ground troops entered Gaza on July 17, Israeli hackers took down a number of prominent Hamas and Islamic Jihad websites, according to the Jerusalem Post. The sites either crashed or showed error messages.
From a cyberwar-perspective, this is about as bad as it will get for now. Hamas' cyber-attacks likely wouldn't be any more effective against Israeli infrastructure than the thousands of missiles it has launched against Israel. Although Israel, no doubt, has the technical wherewithal to cause great destruction via cyber-attacks if it wanted to, there really isn't much critical infrastructure that could be destroyed in Gaza that the Israelis haven't already demolished with kinetic weaponry.
The conflicts in Ukraine and the Middle East likely won't include serious cyber-attacks. There's really no need for them. | <urn:uuid:e8d51a6b-0733-49c6-b3da-25afb3132940> | CC-MAIN-2017-04 | http://www.bankinfosecurity.com/blogs/wheres-cyberwar-in-todays-wars-p-1711/op-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96489 | 1,125 | 2.734375 | 3 |
We are going back to networking basics with this post. In few lines below you will find most important theory that makes network gear do its job.
The main router job is to making routing decisions to be able to route packets toward their destination. Sometimes that includes recursive lookup of routing table if the next-hop value is not available via connected interface.
Routing decision on end devices
Lets have a look at routing decision that happens if we presume that we have a PC connected on our Ethernet network.
If one device wants to send a packet to another device, it first needs to find an answer to these questions:
- Is maybe the destination IP address chunk of local subnet IP range?
- If that is true, packet will be forwarded to the neighbour device using Layer 2 in the ARP example below.
- If that is not the case, does the device network card configuration include a router address through which that destination can be reached? (default gateway)
- Device then looks at his local ARP table. Does it include a MAC address associated with the destination IP address?
- If the destination is not part of the local subnet, does the local ARP table contain the MAC address of the nearest router? (MAC address to IP address mapping of default gateway router) | <urn:uuid:91a7ae53-7d2e-417b-b271-4657720902c5> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/ip | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00141-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948046 | 264 | 3.828125 | 4 |
The Mars rover Curiosity is on a mission to deliver what scientists hope will be groundbreaking scientific research that wouldn't be possible without robotics, according to a NASA chief engineer.
"It's critical to this mission," said Louise Jandura, sampling system lead engineer with NASA's Jet Propulsion Laboratory. "Robotics is a key component of the mobile robotic laboratory, in terms of being able to rove, to move around and drive. We're mobile and we can use the robotic arm to reach out and touch targets. It's all key."
NASA scientists (clockwise from upper right) Louise Jandura, Avi Okon, Brett Kennedy, Dan Sunshine, Joe Melko and Matt Orzewalla worked with the robotic arm on a testbed rover, a twin to Curiosity. This testbed rover was not sent to Mars. (Photo: NASA / JPL)
Curiosity, a $2.5 billion, car-size rover, landed on Mars on Aug. 6. Tasked with a two-year mission to discover if Mars has ever been capable of sustaining life, the rover is carrying 10 scientific instruments, a 7-foot robotic arm and the ability to drive across the Martian surface.
Curiostiy is NASA's largest, most expensive and best-equipped rover to be sent into outer space, but the rover is dependent on its robotics to get its job done.
"It's extraordinary," said Jandura. "It's joyous, not only because of seeing the success of something that we've poured a lot of time and effort into, but it presents the ability to go and explore a world we don't live in quite yet."
Jandura noted that Curiosity represents robotics on a larger scale than the space agency has ever taken on with a rover. "Curiosity is a Range Rover size robotic laboratory," she told Computerworld. "All the robotics on it are equally scaled up from [its Mars rover predecessors] Opportunity and Spirit."
NASA's Spirit and Opportunity rovers have been two of the agency's most successful robotic projects. While Spirit was given up for dead last year, both rovers worked on the Martian surface for more than six years -- far longer than the three months that NASA initially expected them to last.
However, the rover Opportunity is still working. It was upgraded with artificial intelligence software to enable the robot to make some of its own decisions about what rocks or geological formations it should stop and analyze.
Both Spirit and Opportunity were equipped with two-and-a-half-foot-long robotic arms, while Curiosity's is 7 feet. Jandura noted that the arm configurations are similar but the new arm is stronger and rigged with better tools.
Spirit and Opportunity, for instance, carried an abrasion tool designed to grind rock surfaces. The older rovers made observations but weren't able to collect samples.
Curiosity has a drill on its arm so powerful that it can turn a rock into powder. It also can collect the powder, sort it into larger and smaller particles and deliver the smaller pieces to the rover's analysis instruments.
Curiosity's mast, which holds several cameras, also is equipped with robotic joints, enabling it to move, take high-resolution color images and video.
There also are robotic joints in the rover's six wheels.
"We can tell the rover to drive using its wheel system and, based on images taken by its cameras, the rover can avoid obstacles," said Jandura.
She added that NASA engineers were able to improve the robotics in Curiosity based on what was learned from the earlier two rovers.
"Every time we take the step of building a rover to go to another planet... we've taken the lessons we've learned from one and use them in another," said Jandura. "We learned how to scale it up to be bigger and more capable."
She also was quick to add how much she loves her job these days.
"I'm still amazed when I look up in the sky and see the little dot that is Mars," she said. "To think that at night I see Mars and then I go to work and get to see Mars close up with all these cameras and all these instruments that analyze the planet. It's phenomenal."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Robotics 'critical' to NASA's mission on Mars" was originally published by Computerworld. | <urn:uuid:7c635fc8-376a-4bd7-87f3-707b6c01abef> | CC-MAIN-2017-04 | http://www.itworld.com/article/2720168/hardware/robotics--critical--to-nasa-s-mission-on-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00535-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961224 | 973 | 3.3125 | 3 |
What are terabit speeds and do we really need them?
You may be familiar with the term gigabit. CenturyLink has rolled out broadband speeds to homes and businesses up to one gigabit per second (Gbps), equivalent to 1,000 megabits per second (1000 Mbps), in parts of 17 markets across the country. This provides the capacity to allow businesses to transfer very large amounts of data while working in the cloud and families to have multiple data-intensive devices (typically streaming HD video) connected to one Internet connection in the home while still having more than enough speed to support uninterrupted work-at-home.
Now that you understand the power behind 1 gigabit speeds, you may be wondering about the recent news around terabit speeds. A terabit per second (Tbps), used for measuring the amount of data that is transferred in a second between two telecommunication points or within network devices, is equivalent to a whopping 1,000 Gbps. CenturyLink uses this capacity to transport extremely large amounts of data around the world on our backbone network by combining several 100 Gbps wavelengths of light together. To help us better manage all those wavelengths, we’ve decided to take it the next level – the start of the terabit era.
Earlier this year, CenturyLink, with help from Ciena, successfully delivered transmission speeds of 1 Tbps on a portion of its fiber network in central Florida as part of a live field test. The terabit super-channel packed a lot of wavelengths of light together - five 200 Gbps wavelengths - and more than doubled the network’s traffic carrying capacity during the trial, demonstrating the scalability and efficiency of CenturyLink’s network. While we can’t offer terabit speeds directly to our customers, the trial illustrates how we can use this capacity to keep up with increasing bandwidth demands on our core network, ensuring our customers will continue to have a positive experience using our cloud, hosted IT and high-speed broadband services, as well as video services like CenturyLink Prism® TV.
We also recently deployed this capacity for an event where it may be most appreciated: the annual International Conference for High Performance Computing, Networking, Storage and Analysis in Austin, Texas, also known as the Supercomputing conference. Along with help from our vendor, Infinera, we delivered 1.5 Tbps of super-channel transmission capacity to support SCinet, one of the most powerful and advanced networks in the world and created each year for this conference. This 1.5 Tbps network supports the revolutionary research applications and experiments that took place at the conference, connecting the Austin convention center to research and commercial networks around the world.
At CenturyLink, we want to make sure we can quickly satisfy broadband demands, whether it’s having enough capacity to accommodate growing Internet traffic on our core network, providing individual customers with the connectivity they need to run their business efficiently, or satisfying the communication and entertainment needs of families. Our network is ready to serve our customers, from megabit to terabit speeds. | <urn:uuid:7804fdb2-90f7-4b62-81fb-4117ce1927bb> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/corporate/what-are-terabit-speeds-and-do-we-really-need-them-3114699 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00259-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940152 | 622 | 2.71875 | 3 |
Web Browser Privacy Settings FlawedPrivate and anonymous settings in Firefox, Internet Explorer, and others can expose more details than users expect, security researchers find.
Do you believe that your browser's privacy settings hide your viewing habits? Think again.
Researchers from Stanford University and Carnegie Mellon say their first-ever study of the privacy mode in browsers found multiple weaknesses, which attackers could exploit to reconstruct a browser's true history. The researchers plan to present their findings at this week's Usenix Security Symposium in Washington.
To assess the security of browsers' privacy modes, the researchers examined privacy controls, cookie controls, and object controls in Firefox 3.5, Internet Explorer 8, Google Chrome, Apple Safari 4, and Opera 10. They also evaluated numerous add-ons, including CookieSafe for cookie controls in Firefox, AdBlock Plus for controlling objects -- such as suppressing banner advertisements from displaying -- in Firefox, and PithHelmet for Safari object control.
What the researchers found were numerous vulnerabilities in how these browsers and add-ons approach privacy. As a result, "current private browsing implementations provide privacy against some local and web attackers, but can be defeated by determined attackers," they said.
For example, browsers sometimes leak information when in private mode. For starters, any Certificate Authority (CA) certificates cached while a user is in private mode persist after they switch out of private mode. "This is a significant privacy violation," according to the researchers.
When it comes to browser privacy, or lack thereof, the researchers also cautioned that more flaws and vulnerabilities could exist, such as accessing browsing data that's been cached in memory.
In short, "privacy" mode can be anything but, at least against a determined adversary or forensic investigator.
Then again, what draws people to use privacy mode? To find out, the researchers also spent $120 to purchase 155,216 impressions for two advertising campaigns they created, running simultaneously online, then used code in the advertisements to detect whether someone was in private browsing mode. "We found that private browsing was more popular at adult web sites than at gift shopping sites and news sites, which shared a roughly equal level of private browsing use," according to their study. | <urn:uuid:c73b546a-84b4-48b7-ac5d-cbdbd6523d7e> | CC-MAIN-2017-04 | http://www.darkreading.com/vulnerabilities-and-threats/web-browser-privacy-settings-flawed/d/d-id/1091477 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934567 | 448 | 2.9375 | 3 |
In our data recovery lab at Gillware, we’re no strangers to water damage. When we get data storage devices in our cleanroom that have suffered water damage, it can be anything from a laptop that got a glass of water spilled on it to hard drives or servers pulled out of flooded businesses and homes in the wake of natural disasters such as the current flooding in Louisiana. Read on for Gillware’s overview on water damaged HDDs, including why water is so bad for electronics and the effects water has on the innards of your hard drives. We also have a handling guide for water damaged HDDs if you need to send a hard drive to us for data recovery.
How Water Corrodes Your Electronics
Water is a precious substance. An average human can only survive for three to five days without it. You’ve probably been told at least once in your life to drink eight glasses of it a day. And yet it’s not so healthy for your phone or computer. And this is because of simple chemical reactions.
Water is made up of two hydrogen atoms and one oxygen atom. Hydrogen and oxygen are both reactive elements. Oxygen, in particular, is a surprisingly corrosive agent. The best example of oxygen’s corrosive properties is iron oxide, also known as rust. Copper is also vulnerable to corrosion. This is how the Statue of Liberty, which is made out of copper, gained its now-characteristic mint-green coloration (when it was first unveiled, it was as shiny as a new penny). For an example of how scary too much oxygen can be, one need look no further than the horrifying and tragic Apollo 1 fire. Or, if geological history is more your thing, the charmingly-named Oxygen Catastrophe.
It goes without saying that there is a lot of metal inside electronic devices. Especially copper, which makes for a great conductor. When you submerge a phone or computer in water, you saturate its metal components with water. The ensuing chemical reactions create corrosion.
How Water Short-Circuits Your Electronics
Somewhat counter-intuitively, pure H2O (or dihydrogen monoxide) is actually a very poor conductor of electric currents. There are very few ions in pure water, and electricity loves ions. The natural impurities in water, however, are full of ions, and are very good at conducting electricity. A single drop of water in the right place on a circuit board can draw the electrical current out of its preordained path.
Imagine that you are an electrical current traveling through a circuit. Suddenly you come up against a droplet of water in your way. It’s even more inviting of a path than the copper you’re traveling through. And electricity always takes the path of least resistance. And so you escape your preordained path, take a shortcut through the water, and end up in the wrong place. Thus, you short-circuit the board.
Once a circuit board has been shorted, running power through it can burn parts of the board. Connecting a short-circuiting device to another device can also cause the other device to short out as well. For example, plugging a hard drive with a shorted board into a power supply unit could fry the PSU as well.
What Happens to Water Damaged HDDs?
Your hard drive is full of delicate parts, such as the read/write heads and platters. Unsurprisingly, water is not good for them. Not only can water corrode or short-circuit the exposed circuit board, but in extreme flooding situations it can also seep into the hard drive and wreak havoc. The internals of your hard drive can become corroded. There’s no way to reverse corrosion (although some corrosion can be cleaned off by an expert), just as you can’t un-burn firewood.
Inside your hard drive, there are read/write heads that read the data stored on the drive's platters. These heads are tiny coils of copper wire mounted on the ends of long metal arms. Read/write heads are like tiny airplane wings, hovering mere nanometers above the surfaces of the drive's platters. From their perspective, a single droplet of water is a mountain. If the read/write heads collide with even a single drop of water, it can cause a head crash and scratch the surfaces of the platters.
How Safe Are Your Water Damaged Hard Drives’ Platters?
Your hard drive’s platters are made out of either aluminum or glass. These aluminum and glass disks are coated with a thin layer of magnetic substrate, which contains your data. The substrate is made up of very thin layers of Ruthenium, cobalt-nickel-iron alloys, and cobalt-chromium-platinum alloys. The substrate actually corrodes very readily. Platters are delicate, and only a small amount of corrosion can cause data loss.
Our data recovery engineers often see rust on the platters of flood-damaged drives, not from the platters themselves, but from the screws inside the drive. Particles of rust would flake off and stick to the platters. When floods and hurricanes hit, the water that fills businesses and homes is incredibly dirty, and the dirt can actually become caked onto the platters as well.
Modern hard drives have a thin layer of lubricant on their platters. When the platters become wet, the water sticks to the lubricant. And after the water has evaporated, the lubricant will no longer be evenly coated on the platters. Instead, it will have pooled on portions of the platters, forming large bubbles on their surfaces. Our engineers can remove the pooled lubricant, but there is no way to re-apply it.
The disk lubricant is PFPE in several forms. Typically, a mixture of end-chain surface-bonded lubricants (PFPE Zdol or PFPE Ztetraol) and free lubricant (PFPE Z) is used. These lubricants came out of the space program and have an extremely wide functional temperature range. However, water adsorbed on the disk's carbon overcoat can interfere with the hydroxyl end-group bonding of the Zdol to the disk. Basically, at some minimum thickness of adsorbed H2O, the hydroxyl end groups bond to the water instead of the carbon. As the water typically evaporates in shrinking drops or pools, it should leave behind concentrations of lubricant.
Being able to clean off and burnish these platters is critical to data recovery. Even microscopic particles on the platters can cause head crashes and platter damage that can destroy your data. Water damaged HDDs need to have their platters carefully and vigorously scrubbed by skilled hard drive recovery engineers and burnished using our state-of-the-art platter burnishing technology.
Why Aren’t Hard Drives Air-Tight, Anyway?
Some hard drives, like the latest high-capacity drives from Hitachi, Seagate, and Western Digital, are hermetically sealed. This is because they are filled with helium. (Helium is lighter than air, so it provides the hard drive’s components with much less air resistance.) Normal hard drives, though, are not perfectly sealed devices.
Hard drives don’t even need to be hermetically sealed, due to the filter assembly inside modern drives. As Ron Dennison explains:
I should point out that short duration immersion in “clean” water will rarely hurt an unpowered HDD if the PCBA is removed and dried properly and the HDA is also dried properly. The small breather hole and attached filter assembly in the HDA typically will not admit liquid water to the drive.
However, when a hard drive spends a long time under several feet or more of water, the water pressure can force far more liquid into the drive than can be admitted normally. This is, unfortunately, often the case when a hard drive has been exposed to floodwater.
It seems that if all hard drives were perfectly air-tight, water damage wouldn’t be such a big deal. You may ask, “Why don’t they just make all hard drives hermetically sealed?” Hard drives (that aren’t filled with helium) need the air pressure outside and inside to be roughly equal. If an air-tight hard drive were manufactured in Madison, Wisconsin (887 feet above sea level), it would crash extremely easily if you brought it to Denver, Colorado (5,280 feet above sea level). Hard drives already need to be specially designed to function at high altitudes. Helium-filled drives are designed to avoid this issue, but your typical hard drive, unfortunately, needs that breathing hole to remain uncovered.
Handling Guide for Water Damaged HDDs
If you have critical data stuck on water damaged HDDs, time is of the essence. It is important to contact Gillware right away to set up your data recovery case. In the meantime, handling your waterlogged drive properly can dramatically improve the chances of a successful recovery. For your convenience and your data’s safety, we here at Gillware have published a simple set of guidelines for proper drive handling, which you can view and download at the link below. | <urn:uuid:f7a2c6b8-ce7e-4105-beac-7db76cd6e277> | CC-MAIN-2017-04 | https://www.gillware.com/blog/best-practices-and-tips/intro-water-damaged-hdds/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944823 | 1,945 | 3.125 | 3 |
If you look at today’s typical data center with its racks of blade servers, SANs, network switches, UPSs, and more humming away, crunching gigabytes, terabytes, even petabytes of data, everything seems fine. If you look into the eyes of a typical data center manager, you might see something else. You might see fear.
As data center managers are well aware, data quantities are growing relentlessly. The explosion of data requires more servers, more storage and more bandwidth. These sharp demands threaten everything from the ability to conduct regular backups to simply keeping the lights (and air conditioning) on. Let’s drill down into the issue of data center power consumption.
I don’t want to upset data center managers any more than they already are, but consider that there are 4 billion mobile devices out there today ready to consume information from data centers and, more ominously, create data to put into them, requiring even more hardware and more electricity.
Five years ago, the U.S. Environmental Protection Agency issued a report predicting that power consumption by data centers in the United States would double between 2005 and 2010, just as it had from 2000 to 2005. IDC says power and cooling demand is the number one operational problem for data centers.
But IT systems are not standing still. They are giving new life – and efficiency — to old data centers. Consider that five years ago, when the EPA issued its dire report, Intel-based servers were running two core chipsets at 110 watts. Today Intel’s E7 chipset architecture runs 10 cores with 20 processing threads using only 130 watts. In short, you can do substantially more processing within a substantially smaller power envelope with the latest servers.
And virtualization has improved server utilization dramatically, reducing the need to add more power-consuming machines. By next year, IDC estimates that every physical server will house an average of 8.5 virtual machines.
And these advances already are paying dividends. Indeed, a study last year by Stanford’s Jonathan Koomey shows that the EPA’s projection was overly pessimistic. Power consumption by data centers grew at 36% in the U.S. between 2005 and 2010, not 100% as predicted. While the recent recession undoubtedly contributed to the lower use of electricity, consider that during that same period we saw dramatic growth in cloud computing, a doubling of Internet users globally and the building of massive, power-hungry data centers by companies like Amazon, Apple, Facebook, Google, Microsoft and others.
Certainly big data will continue to put pressure on IT operations in many areas, including power consumption. However, IT vendors will continue to squeeze more efficiencies out of hardware, assuring that big data will have data centers to process, store and serve it up to users.
Softbank, which runs Japan's third-largest mobile phone network, is experimenting with using blimps as temporary cell towers for use during natural disasters.
The company said it will use helium-filled blimps that dangle antennas about 100 meters above the ground and provide mobile coverage over a radius of roughly three kilometers in a suburban landscape. Softbank will carry out trials through June of next year to test factors such as the coverage area, data speed and voice quality.
The blimps are equipped with two types of antennas, one to link up with users' cell phones and the other to exchange signals with a truck that carries a portable cell tower and is physically plugged into the company network. The trucks can be over five kilometers away from the blimps if line-of-sight is maintained.
Mobile coverage often goes down during earthquakes and other natural disasters, which cut power and damage fixed cell towers. Portions of Japan's northeast coast were without service for weeks and months after the magnitude 9 earthquake and resulting tsunami that hit last year. DoCoMo, Japan's largest carrier, has invested in a network of large cell towers with long range and smaller truck-based towers to provide emergency coverage.
Softbank developed the blimps together with Hokkaido University. They can be constructed in various sizes but all have the same, flattened-out shape, with their height about 60 percent of their diameter.
The blimps are kept steady with guide wires and powered via a line that is just 1.3 millimeters thick and Teflon-coated to minimize wind effects. They use an onboard fan to keep the inner pressure stable and avoid instability as the outside temperature changes.
The company said it has received permission from Japan's Ministry of Internal Affairs to carry out the trials, which will use its 3G network. The trials will be carried out in Aichi Prefecture, about 270 kilometers (170 miles) southwest of Tokyo. | <urn:uuid:4e03faf0-d538-42e7-a738-366fb65c1b89> | CC-MAIN-2017-04 | http://www.cio.com/article/2396139/mobile/japan-s-softbank-testing-blimp-based-emergency-mobile-phone-system.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962691 | 395 | 2.859375 | 3 |
In this article, learn about these concepts:
- The Server Message Block (SMB) and Common Internet File System (CIFS) protocols
- Features and benefits of using CIFS
- Mounting CIFS shares on a Linux client
This article helps you prepare for Objective 314.1 in Topic 312 of the Linux Professional Institute's (LPI) Mixed Environment specialty exam (302). The objective has a weight of 3.
This article assumes that you have a working knowledge of Linux command-line functions and that you understand the basics of Samba configuration. You should be familiar with the overall structure of the smb.conf configuration file. You should also understand the basics of how Linux mounts local and remote file systems (using the mount command and /etc/fstab file). Familiarity with the standard Linux text-mode shell is helpful but not required.
Before proceeding with a description of how to use Linux as a client to an SMB/CIFS server, it's helpful to review the characteristics of the protocols to see how they compare with Linux's needs for a file system. This comparison comes in two parts: an examination of the original SMB protocol and an investigation of the ways in which the newer CIFS extensions change the SMB basics. You may want to review the developerWorks article on LPI Objective 310.1, which introduces some of the basic concepts behind SMB/CIFS (see Resources for a link).
Basic SMB features
SMB provides several unique features from a networking perspective, including its own naming system for computers (Network Basic Input/Output System [NetBIOS] names), workgroups, and user authentication protocols. For purposes of understanding how SMB and CIFS work as protocols for a Linux file-sharing client, the most important feature is the set of metadata that the protocol provides.
Metadata is data associated with, but not part of, a file. Examples include the file's timestamp, owner, permissions, and even its name. No doubt you're familiar with some of the common features of file metadata on Linux computers, and you may be familiar with some of the differences between Linux and other operating systems, such as Windows. Because SMB was designed for DOS, Windows, and IBM Operating System/2® (OS/2), it shares many of their metadata features. Most importantly, SMB lacks support for UNIX® and Linux metadata such as ownership, groups, and most permissions. SMB also lacks support for symbolic links and hard links as well as other special file types such as device nodes. SMB provides a few types of metadata that Linux doesn't normally understand, such as the hidden, archive, and system bits. You can map a Read-only bit to the Linux Write permission bit.
Another limit of SMB is its file-size limit of 2GiB. This limitation can obviously pose a problem in today's world of multi-gibibyte backup files, multimedia files, and so on.
To work around these SMB differences from Linux's file system expectations, Linux SMB clients must either ignore them or provide options to "fake" the data. These options are similar to those used when mounting NT file system (NTFS) or File Allocation Table (FAT) file systems on Linux. Fortunately, CIFS provides more and better options for handling some of these limitations.
You should be aware of the network ports that SMB uses, as well. These are User Datagram Protocol (UDP) ports 137 and 138 (for name resolution and datagram services) and TCP port 139 (for session services—in other words, most file transfers). You'll need this information if you ever have to debug SMB using low-level network diagnostic tools.
CIFS extensions to SMB
In the mid-1990s, Microsoft® decided to change SMB's name to CIFS and simultaneously added a new set of features. These features include support for symbolic and hard links and larger file sizes. CIFS also supports access to the server on port 445 in addition to the older port, 139.
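Pulling these port numbers into one place, a tiny helper function like the following can serve as a crib sheet when you reach for low-level diagnostic tools (the function is purely illustrative, not part of any Samba utility):

```shell
#!/bin/sh
# Crib sheet: map SMB/CIFS-related port numbers to their roles.
# (Illustrative helper; not a standard tool.)
port_role() {
    case "$1" in
        137) echo "UDP: NetBIOS name service" ;;
        138) echo "UDP: NetBIOS datagram service" ;;
        139) echo "TCP: NetBIOS session service (classic SMB)" ;;
        445) echo "TCP: direct-hosted SMB (CIFS)" ;;
        *)   echo "not an SMB/CIFS port" ;;
    esac
}
port_role 139   # -> TCP: NetBIOS session service (classic SMB)
port_role 445   # -> TCP: direct-hosted SMB (CIFS)
```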
At least as important as Microsoft's own extensions to SMB in CIFS are others' extensions. In particular, a set of CIFS features known as UNIX extensions provides support for file ownership, permissions, and some other UNIX-style metadata. If the client and server both support these features, you can make much more effective use of a CIFS server from Linux than you could a server that only supports SMB. As you might expect, Windows Server® operating systems don't support these extensions, so they're only useful when your Linux client connects to a Samba server. This server must also be configured with the following global option:
unix extensions = Yes
This option was set to No by default in Samba prior to version 3.0, but Samba 3.0 switched the default to Yes, so you may not need to set the option explicitly.
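For reference, the option belongs in the [global] section of the server's smb.conf file; a minimal fragment (with everything else omitted) might look like this:

```ini
[global]
    unix extensions = Yes
```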
In some respects, the simplest way to access an SMB/CIFS server from Linux is to use a text-mode utility known as smbclient. This program is similar to the ftp client program, so if you're familiar with ftp, you should have few problems using smbclient. If you're not familiar with ftp, the idea behind the program is that you initiate a connection to the server without mounting shares in the traditional manner. Instead, you type commands to view, delete, download, or upload files.
To use smbclient, you type its name followed by a service name, which takes the form //SERVER/SERVICE, such as //TANGO/GORDON to access the GORDON share on the TANGO server. Depending on the server's configuration, you will probably be prompted for a password. If you enter it correctly, you can type various commands to access files on the server. Table 1 summarizes some of the more important smbclient commands; consult the utility's man page for information on more exotic commands.
Table 1: Important smbclient commands
|Command|Description|
|help or ?|Displays a summary of commands|
|cd|Changes to a new directory on the server|
|del|Deletes a file|
|dir or ls|Shows the files in the current directory (or one you specify)|
|exit or quit|Terminates the session|
|get|Transfers a file from the server to the client|
|lcd|Changes the working directory on the local computer|
|mkdir|Creates a directory on the server|
|mget|Transfers multiple files from the server to the client|
|more|Displays a remote file using your local pager|
|mput|Transfers multiple files from the client to the server|
|put|Transfers a file from the client to the server|
|rmdir|Deletes a directory|
|rename|Renames a file on the server|
|rm|Deletes one or more files on the server|
By default, smbclient uses your current user name to connect to the server; however, you can change your user name with the -U option. In fact, several other command-line options are available, including options that make it possible to transfer files without entering smbclient's interactive mode. Therefore, you can use smbclient in scripts to perform automated file transfers. Consult the program's man page for details on these options.
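As a sketch of such scripted use, the following builds a non-interactive invocation with the -U and -c options. The server, share, and file names are the ones used in this article's examples, and the script only prints the command rather than running it, since it assumes no live server is available:

```shell
#!/bin/sh
# Build (but do not execute) an unattended smbclient download.
# -U supplies the user name; -c runs the listed commands and then exits,
# so no interactive prompt is involved.
SERVER=TANGO
SHARE=GORDON
SMBUSER=gordon
echo "smbclient //$SERVER/$SHARE -U $SMBUSER -c 'cd mystuff; get xorg.conf'"
```

The password can be supplied via the PASSWD environment variable or a credentials file rather than typed at a prompt, which keeps the transfer fully unattended.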
In use, an smbclient session is likely to resemble Listing 1.
Listing 1. Example smbclient session
$ smbclient //TANGO/GORDON/
Enter gordon's password:
Domain=[RINGWORLD] OS=[Unix] Server=[Samba 3.4.12]
smb: \> cd mystuff
smb: \mystuff\> ls
  .                          D        0  Mon May 16 19:20:08 2011
  ..                         D        0  Mon May 16 19:18:12 2011
  xv-3.10a-1228.1.src.rpm       3441259  Tue May 18 19:09:26 2010
  License.txt                     27898  Mon May 16 19:17:15 2011
  xorg.conf                        1210  Fri Jan 21 04:18:13 2011

		51198 blocks of size 2097152. 2666 blocks available
smb: \mystuff\> get xorg.conf
getting file \mystuff\xorg.conf of size 1210 as xorg.conf (9.4 KiloBytes/sec) (average 9.4 KiloBytes/sec)
smb: \mystuff\> exit
smbclient makes an excellent debugging tool. It's simple, and it gives you the ability to access your network in a way other than mounting a share, which can be helpful if you're trying to debug a problem.
Mounting SMB/CIFS shares
Although smbclient is a useful tool, it doesn't give you the same sort of seamless access to the server that you're used to from Windows clients. If you need such access, you must use other tools to mount the SMB/CIFS shares. You can do this with the standard Linux mount command, or you can edit your /etc/fstab file to automatically mount SMB/CIFS shares when the computer boots.
Temporarily mounting shares
You can mount an SMB/CIFS share using the same mount command you use to mount local volumes or Network File System (NFS) exports. You can specify the file system type as cifs; or, in most cases, mount figures out that it should use this driver based on the syntax of the command. Alternatively, you can call the helper program mount.cifs directly. In principle, only the device specification is different from that for mounting a local file system; thus, to mount the GORDON share from the TANGO server, you could type, as root:
# mount //TANGO/GORDON /mnt
In practice, though, this usage has a problem: it passes root as the user name to the server. If the server doesn't permit root to log in, the mount attempt will fail. You can correct this problem by using the -o user=name option to pass a user name to the server:
# mount -o user=gordon //TANGO/GORDON /mnt
Password:
Several other mount options, passed with the -o option of mount, are available. Table 2 summarizes the most useful of these options. Consult the mount.cifs man page for information on more exotic options.
Table 2: Important mount.cifs options
|Option|Description|
|user=name|Specifies the user name to send to the server.|
|password=pass|Specifies the password to send to the server. If the password is not specified, mount prompts you for one.|
|credentials=file|Specifies a file that contains the user name, password, and, optionally, the workgroup name. Each value appears on its own line, preceded by the strings username=, password=, and domain=, respectively.|
|uid=UID|Sets the user ID (UID) of the user who is to own the files mounted from the share.|
|gid=GID|Similar to the uid option, but sets the group ID (GID) of the group that is to own the files.|
|file_mode=mode|Sets the file mode (permissions), in numeric form, to be assigned to files from the server.|
|dir_mode=mode|Similar to file_mode, but applies to directories rather than files.|
|guest|Prevents prompting for a password. This option typically works only if the share supports guest access.|
|hard|If the server becomes inaccessible, processes that attempt to access files on the server will hang until the server returns.|
|soft|If the server becomes inaccessible, processes that attempt to access files on the server will receive error messages. This is the default behavior.|
The uid, gid, file_mode, and dir_mode options are usually unnecessary if you connect to a server that supports the CIFS UNIX extensions. You can use these features to override the values that the server provides in these cases, though. Note also that these options all affect the way the files appear on the client; permissions and ownership on the server are unaffected by these options.
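To make the client-side nature of these options concrete, here is a sketch that assembles such a mount command without running it (running it would require root privileges and a reachable server; the server, share, and user names are illustrative):

```shell
#!/bin/sh
# Assemble (but do not execute) a mount command that fakes ownership and
# permissions on the client side; the UID and GID are taken from the
# invoking user, so mounted files appear to belong to that user.
OPTS="user=gordon,uid=$(id -u),gid=$(id -g),file_mode=0644,dir_mode=0755"
echo "mount -t cifs //TANGO/GORDON /mnt -o $OPTS"
```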
With the SMB/CIFS share mounted, you can access it just as you would a local volume or an NFS volume. You can copy files with cp, delete files with rm, edit files directly with text editors or other programs, and so on. Keep in mind, though, that if the server doesn't support a feature, you might not be able to use it. For instance, you can't use chmod to change the mode of a file unless the server supports the UNIX extensions. (A partial exception in the case of chmod is that you can change Write permissions; these are mapped, in inverse fashion, to the SMB Read-only bit.)
When you're finished using a share, you can unmount it with the umount command, just as if it were a local file system:
# umount /mnt
Permanently mounting shares
If you want a computer to mount an SMB/CIFS share permanently, you can do so by adding an entry to /etc/fstab. This process works much like any other translation of a mount command to an /etc/fstab entry. However, one option from Table 2 deserves special mention in this context: credentials. Because most SMB/CIFS servers use passwords for authentication, you must permanently store the password if you expect a share to be mounted using /etc/fstab. Storing the password directly in /etc/fstab using the password option is possible, but inadvisable; because /etc/fstab must be readable by all the computer's users, a password stored in this way will also be readable by everyone. Using credentials instead enables you to store the password in a file that's readable only by root, thus improving password security.
A working /etc/fstab entry for an SMB/CIFS share might resemble the following:
//TANGO/BACKUPS /saveit cifs credentials=/etc/samba/creds.txt 0 0
The associated credentials file might look like this:

username=gordon
password=mypassword
domain=RINGWORLD
Caution: Be sure to give the credentials file suitable permissions—normally 0600 or 0400 with ownership by root or by the user whose credentials are stored in the file.
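A quick sketch of creating such a file with suitable permissions, using a throwaway temporary file and placeholder values rather than a real account:

```shell
#!/bin/sh
# Create a credentials file, restrict it to its owner, and verify the mode.
# The values are placeholders, not real account details.
CREDS=$(mktemp)
cat > "$CREDS" <<'EOF'
username=gordon
password=mypassword
domain=RINGWORLD
EOF
chmod 600 "$CREDS"      # owner read/write only
stat -c '%a' "$CREDS"   # prints 600
rm -f "$CREDS"          # clean up the throwaway file
```

For a permanent setup you would instead keep the file somewhere like /etc/samba/creds.txt, owned by root, with these same 0600 permissions.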
With this configuration in place, the //TANGO/BACKUPS share should be mounted automatically whenever you reboot the computer or type mount -a. If this feature doesn't work, verify that the user name and password are correct, test with the mount command, and perform other routine debugging checks.
The next article in this series, "Learn Linux, 302 (Mixed environments): NetBIOS and WINS", covers name resolution using the Windows Internet Name Service (WINS) and browsing, which enables computers to locate network shares in a tree-like hierarchy of computers and shares.
- "Learn Linux, 302 (Mixed environment): Concepts" (developerWorks February 2011) describes the basic principles of SMB and of CIFS.
- The Samba Wiki includes a page on the CIFS UNIX extensions that provides technical details about these features.
- At the LPIC Program site, find detailed objectives, task lists, and sample questions for the three levels of the LPI's Linux systems administration certification. In particular, look at the LPI-302 detailed objectives and the tasks and sample questions.
- Review the entire LPI exam prep series on developerWorks to learn Linux fundamentals and prepare for systems administrator certification based on LPI exam objectives prior to April 2009.
- Exam Preparation Resources for Revised LPIC Exams provides a list of other certification training resources maintained by LPI.
- In the developerWorks Linux zone, find hundreds of how-to articles and tutorials as well as downloads, discussion forums, and a wealth of other resources for Linux developers and administrators.
- Follow developerWorks on Twitter, or subscribe to a feed of Linux tweets on developerWorks.
- Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics.
- Attend a free developerWorks Live! briefing to get up to speed quickly on IBM products and tools as well as IT industry trends.
- Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners to advanced functionality for experienced developers.
- Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
Computer security training isn’t just a matter of giving employees information. Knowing best practices is important, but it helps only if employees understand that they make a difference.
Talking about “viruses” which “infect” computers gives the wrong message. It suggests that attacks are just something that happens to computers, like catching a cold. The truth is that user errors make the majority of malware attacks possible, and that employees who think about security can avoid most of them.
Let’s start by going over best practices that encourage the proper mindset and promote secure action.
Email is where users make the most security mistakes. Employees need to recognize three things:
- It’s not a “virus.”
- The attachment can’t do anything unless they open it.
- If they report suspicious mail to an administrator instead, their computers will be much safer.
Clicking on dubious links is another way employees invite attacks. What employees need to recognize here is:
In an ideal, bug-free world, users could access any website without risk. However, browsers do have bugs, so employees need to be cautious about what links they follow.
Weak passwords are a third big area for user error. Certain passwords are at the top of attackers’ lists for guessing, because they’re the most widely used ones. These include ones like “password” and “123456.” Criminals who guess them can get into their accounts and grab confidential information or manipulate company data. Employees need to know these things:
Employees who use easily-guessed passwords are effectively leaving the door unlocked. Anyone with malicious intentions will have an easy job of getting into their accounts and doing damage.
Smartphones and tablets are the newest targets for attack. They’re subject to the same kinds of attacks as desktop devices, but people don’t think about them as carefully. In addition to the other risks, they’re easy to lose. Employees need to recognize:
Encrypting their devices and requiring a strong password to unlock it is the best protection. Even so, employees should minimize the amount of sensitive information they store on them.
For each risk, the language needs to be about attacks and intrusions, not “infections.” Employees are responsible for keeping their devices and accounts safe, and what they do makes a huge difference.
Apex is the trusted choice when it comes to staying ahead of the latest information technology and security tips, tricks, and news. Contact us at (800) 310-2739 or send us an email at firstname.lastname@example.org for more information.
What is a Private Cloud?
A private cloud is a particular model of cloud computing that involves a distinct and secure cloud based environment in which only the specified client can operate. As with other cloud models, private clouds will provide computing power as a service within a virtualised environment using an underlying pool of physical computing resource. However, under the private cloud model, the cloud (the pool of resource) is only accessible by a single organisation providing that organisation with greater control and privacy.
- Higher security and privacy; public clouds services can implement a certain level of security but private clouds - using techniques such as distinct pools of resources with access restricted to connections made from behind one organisation’s firewall, dedicated leased lines and/or on-site internal hosting - can ensure that operations are kept out of the reach of prying eyes
- More control; as a private cloud is only accessible by a single organisation, that organisation will have the ability to configure and manage it in line with their needs to achieve a tailored network solution. However, this level of control removes some of the economies of scale generated in public clouds by having centralised management of the hardware
- Cost and energy efficiency; implementing a private cloud model can improve the allocation of resources within an organisation by ensuring that the availability of resources to individual departments/business functions can directly and flexibly respond to their demand. Therefore, although they are not as cost effective as public cloud services due to smaller economies of scale and increased management costs, they do make more efficient use of the computing resource than traditional LANs as they minimise the investment into unused capacity. Not only does this provide a cost saving but it can reduce an organisation’s carbon footprint too
- Improved reliability; even where resources (servers, networks etc.) are hosted internally, the creation of virtualised operating environments means that the network is more resilient to individual failures across the physical infrastructure. Virtual partitions can, for example, pull their resource from the remaining unaffected servers. In addition, where the cloud is hosted with a third party, the organisation can still benefit from the physical security afforded to infrastructure hosted within data centres
- Cloud bursting; some providers may offer the opportunity to employ cloud bursting, within a private cloud offering, in the event of spikes in demand. This service allows the provider to switch certain non-sensitive functions to a public cloud to free up more space in the private cloud for the sensitive functions that require it. Private clouds can even be integrated with public cloud services to form hybrid clouds where non-sensitive functions are always allocated to the public cloud to maximise the efficiencies on offer.
More information on Interoute's Cloud Services
It can be all too easy to delete your data. All it takes is one wrong click, a disgruntled employee, or a computer virus. You might not notice you’ve lost anything until days or weeks after the fact. If you don’t have a backup of your data, your missing files could be just out of your reach. But all is not lost if you’ve lost data due to an accidental or malicious deletion. Our deleted data recovery experts here at Gillware can help.
Where Does Data Go When It Gets Deleted?
If you drag a file into your computer’s trash or recycle bin, you can still go right back in and drag it out. But what happens after you’ve emptied the bin? What if you delete something from your external hard drive? Is your data lost forever?
Different filesystems all handle deleted data a little differently. The most common filesystems we see at Gillware are Windows NTFS and Mac HFS+.
In the NTFS filesystem, the master file table identifies and points to the physical locations of every file on your data storage device. It’s constantly being updated and rewritten as files are created and deleted. The NTFS filesystem also has a feature called the bitmap. The bitmap keeps a little record of which spaces on the drive are being used and which are not.
When a file is deleted, the bitmap updates itself and says that the space occupied by that file is empty. The space isn’t empty, though. The filesystem hasn’t changed the sectors containing the deleted file. It has only flagged them as unused in the bitmap.
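This bookkeeping can be sketched in a few lines of Python. This is a deliberately simplified model of a cluster-allocation bitmap, not actual NTFS code — real $Bitmap handling is far more involved — but it shows why deletion changes the map without touching the data:

```python
# Simplified model of an allocation bitmap: deleting a file clears its
# bitmap bit but leaves the cluster contents untouched.

bitmap = bytearray(1)          # 8 clusters, one bit each (1 = in use)
clusters = [b""] * 8           # simulated cluster contents

def write_file(data, index):
    clusters[index] = data
    bitmap[0] |= (1 << index)  # mark the cluster as used

def delete_file(index):
    bitmap[0] &= ~(1 << index)  # only the bitmap changes...
    # ...clusters[index] is NOT erased

write_file(b"quarterly report", 3)
delete_file(3)

print(bitmap[0] >> 3 & 1)      # 0 -- cluster 3 now reads as "free"
print(clusters[3])             # b'quarterly report' -- data still there
```

The filesystem reports the space as empty, yet the bytes of the deleted file are still sitting in place — which is exactly what makes recovery possible.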
The same principle is at work in HFS+ filesystems, although the underlying mechanisms are a little different. Files are defined by the catalog and extents overflow files. The allocation table keeps a record of what parts of the hard drive are in use.
It’s important to avoid using your data storage device after file deletion has occurred. When you delete data, there’s suddenly a convenient new bit of “unused” space where your deleted file is. If you keep creating or modifying files after you’ve deleted something, you could end up writing data to that space. This can partially overwrite or even completely destroy your deleted file.
Think of your hard drive as a city and each file as a building. Deleting a file isn’t the same as knocking a building down. It’s more like putting a big “CONDEMNED” sign on the front door. No one is allowed in, but the building is still there. Eventually, the spot gets bulldozed and a new building gets put up in its place. But until then, your data is still safe.
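Continued use of the drive is dangerous precisely because the allocator happily hands out that flagged-free space. A toy sketch of the "bulldozing" step — illustrative only; the first-free allocation policy here is an assumption for the demo, not a claim about how any particular filesystem chooses clusters:

```python
# Why continued use endangers deleted files: a new write is steered to a
# cluster the bitmap reports as free -- which may be exactly where a
# deleted file's bytes still live.

free = {0, 1, 2}                     # cluster 2 was just freed by a deletion
clusters = {2: b"old tax records"}   # ...but its bytes are still intact

def allocate_and_write(data):
    index = min(free)          # allocator picks a "free" cluster
    free.discard(index)
    clusters[index] = data     # previous contents are destroyed
    return index

allocate_and_write(b"new vacation photo")  # lands in cluster 0
allocate_and_write(b"another new file")    # cluster 1
allocate_and_write(b"yet another")         # cluster 2 -- old data is gone

print(clusters[2])             # b'yet another'
```

Once the third write lands, the deleted file's bytes are gone for good — which is why the first rule after an accidental deletion is to stop writing to the device.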
Some Linux filesystems, such as Ext4, handle deleted files a little differently. When a file is deleted from an Ext4 partition, the extents pointing to the file’s location are erased. Sometimes this can be undone using the filesystem’s journaling capabilities. Deleted data recovery from Linux systems can be more difficult work than deleted data recovery from Mac or Windows systems.
How Does Gillware’s Deleted Data Recovery Process Work?
There are a lot of software data recovery programs out there that say they can help you if a critical file has been deleted. We don’t trust them. We use HOMBRE, our own in-house software, for deleted data recovery.
There’s a lot of danger involved in trying to recover deleted files on your own. If you install a data recovery program, you’re already risking the integrity of your lost data. Most software tools also lack the robust suite of analytical tools our own software has. HOMBRE has been developed by our data recovery engineers, exclusively for our data recovery engineers.
The process for deleted data recovery starts with a full write-blocked forensic image of your data storage device. Our engineers have to make an exact duplicate of your hard drive, all the way down to the binary level. Your filesystem isn’t going to freely tell us where all the data on the drive lives. This is mainly because it doesn’t actually know where the deleted data lives. It can’t find those files on its own anymore.
To recover deleted files, our technicians analyze the entire device. HOMBRE’s relational database helps them sort through all of the data on the device to turn up deleted files. Our engineers can restore these deleted files and assess them for file corruption.
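HOMBRE itself is proprietary, but one widely used technique for turning up files the filesystem no longer points to is signature scanning ("carving"): sweeping the raw image for known file headers and trailers. A bare-bones sketch of the idea — illustrative only, not Gillware's actual code:

```python
# Minimal signature carver: find JPEG data in a raw image by scanning for
# the JPEG start-of-image (FF D8 FF) and end-of-image (FF D9) markers.

JPEG_SOI = b"\xff\xd8\xff"
JPEG_EOI = b"\xff\xd9"

def carve_jpegs(raw):
    found, pos = [], 0
    while True:
        start = raw.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = raw.find(JPEG_EOI, start)
        if end == -1:
            break
        found.append(raw[start:end + len(JPEG_EOI)])
        pos = end + len(JPEG_EOI)
    return found

# A fake "disk image": garbage, one embedded JPEG, more garbage.
image = b"\x00" * 64 + JPEG_SOI + b"fake scanlines" + JPEG_EOI + b"\x00" * 64
print(len(carve_jpegs(image)))  # 1
```

Production tools layer far more on top of this — fragmented files, false positives, and format-specific structure — but the core idea is the same: recovery doesn't need the filesystem's cooperation to find the data.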
If a data storage device has been in use since files were deleted from it, the likelihood of some of the deleted files becoming corrupted increases. Our engineers refer to this kind of corruption as hard corruption. The overwritten sectors can’t be rolled back to their previous states. There are many situations where our data recovery technicians can overcome hard corruption and repair damaged files.
If our technicians notice corruption in any critical deleted files, we make an effort to repair those files. Any irreparable corruption is noted on the list of recovery results. The last thing we want is for one of our clients to unwittingly receive nonfunctional data from us.
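One simple way such corruption can be flagged — again an illustration of the idea, not Gillware's actual pipeline — is to validate a format invariant of the recovered file, such as the fixed 8-byte PNG signature:

```python
# Sanity-check a recovered file: a PNG must begin with a fixed 8-byte
# signature; an overwritten ("hard corrupted") first sector breaks it.

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def looks_like_png(data):
    return data[:8] == PNG_SIG

intact    = PNG_SIG + b"...chunks..."
clobbered = b"\x00" * 8 + b"...chunks..."   # first sector overwritten

print(looks_like_png(intact))     # True
print(looks_like_png(clobbered))  # False
```

Real-world checks go much deeper (chunk CRCs, internal length fields, application-level opens), but even a signature test quickly separates plausible recoveries from files whose opening sectors have been destroyed.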
Why Should I Choose Gillware for My Deleted Data Recovery Needs?
At Gillware, we have taken steps to make the entire deleted data recovery process as financially risk-free as possible.
We start with a free evaluation of your deleted data recovery situation. We even offer free inbound shipping for any clients in the continental United States. Once our technicians have assessed the situation, we present you with a quote and probability of success. If you approve the quote, we don’t ask for payment yet. We go ahead and do the whole recovery process first.
Our deleted data recovery technicians recover as many deleted files as possible. We inspect the recovered data for corruption and repair any critical files as needed. We don’t show you a bill until we’ve successfully met your goals and recovered your critical data.
Our deleted data recovery technicians have logged thousands of hours of data recovery work over the years. They are incredibly well-acquainted with the ins and outs of data storage filesystems. Our financially risk-free service guarantee isn’t just because we hate when people give us money without getting anything in return. It’s also because we are confident in our data recovery engineers’ abilities to retrieve your deleted files.
Ready for Gillware to Assist You with Your Deleted Data Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network.
Exploring LTE: Architecture and Interfaces (e) - Video
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
Long Term Evolution (LTE) is explicitly designed to deliver high-speed, high quality services to mobile subscribers. In order to achieve this, the LTE network architecture introduces a number of new network nodes and interfaces to implement the necessary functionality and manage the exchange of packets between mobile devices and external packet data networks. This self-paced eLearning class discusses the overarching goals of LTE networks and then defines the unique network functions needed to achieve those goals. The course then describes the key interfaces between these functions, with particular emphasis on the LTE air interface, as well as the underlying protocols carried over these interfaces. Frequent interactions are used to ensure student comprehension of the essential technologies used in all LTE networks.
This course is intended for a technical audience looking for a detailed understanding of the important nodes, functions, and interfaces found in a typical LTE network.
After completing this course, the student will be able to:
• Discuss the rationale behind the 4G LTE network architecture
• Describe the critical network functions required in every LTE network
• Describe other nodes and functions typically found in large commercial wireless networks
• Identify the key interfaces between LTE nodes and the protocols carried over each interface
• Define EPS bearers and describe their role in supporting user services
• Explain the structure and functions of the LTE air interface
1. What is LTE?
1.1. 4G LTE
1.2. Packet data networks
2. LTE Network Nodes and Functions
2.1. E-UTRAN and EPC
3. Other Network Functions
4. LTE Network Interfaces and Protocols
4.1. Internet Protocol (IP)
4.2. S1-MME and S1-U
5. EPC Bearers
5.1. Default bearers
5.2. Dedicated bearers
6. LTE Air Interface
6.1. Air interface physical structure
6.2. OFDMA and SC-FDMA
6.3. Air interface physical channels
6.4. Uu protocol stack
• Welcome to LTE (eLearning)
• LTE-SAE Evolved Packet Core (EPC) Overview (eLearning)
• LTE Air Interface Signaling Overview (eLearning)
Create a flexible eLearning plan to purchase eLearning courses for one or more individuals, where course prices are discounted dependent on the number of courses purchased.