A sizeable number of US states and Canadian provinces are taking on electronic waste recycling as part of larger green agendas. The main reason governments are doing this is to prevent the release of toxic materials into the environment. According to Californians Against Waste, electronic waste now accounts for about 70% of toxic heavy metals in landfills. A secondary goal is the establishment of a cost-recovery revenue stream for valuable materials extracted from electronics, but the upfront capital investments for recycling programs and operations must first be absorbed by governments.
Governments generally take one of three legislative approaches:
- No-cost programs for consumers paid for through general tax revenues.
- Producer responsibility programs where manufacturers are responsible for the cost of recycling the equipment they produce.
- Consumer fees or point-of-sale levies that the government or vendors collect from consumers for equipment purchased.
Smartphones are perfect targets for hacking, tracking, surveillance, industrial espionage and malware.
Unlike, say, desktop PCs, smartphones often connect promiscuously to many public Wi-Fi networks. They can connect to multiple types of wireless networks, including Wi-Fi, mobile data networks, Bluetooth and NFC -- all of which are potential doorways for unauthorized access.
Smartphones, in fact, run two operating systems: there's the one you know about -- the one that does normal operating system jobs, and which you may diligently update with the latest security patches; and there's one you may not know about -- the one that controls the radio hardware and is rarely updated.
Smartphones can report location, which the phone figures out with GPS. And even when GPS is turned off, phones connect to cell towers, which can be triangulated to pinpoint a phone's location, or to Wi-Fi networks, which give away your location when you connect.
Carriers routinely sell location information to any organization willing to pay for it.
Smartphones are more likely to run apps from developers the user has never heard of and that can be loaded with secret, backdoor functions that can harvest personal data and send it off to some unknown server.
Yes, smartphones are super insecure. Everybody knows it. Nobody likes it. Yet who really does anything about it?
In the past week, two new ultra-secure smartphones have been in the news. One is called the Blackphone. The other is called the Black phone. No, I'm not making this up. The difference in their names is a space.
Here's what we know about the two most secure smartphones ever created.
The Blackphone
The $629 phone was made by Geeksphone in partnership with Silent Circle, a U.S.-based company founded by a former Navy SEAL and the inventor of Pretty Good Privacy (PGP).
Silent Circle is also known for shutting down its Silent Mail service last August, which the company reportedly did because it believed it would soon receive requests from the government to turn over the email data of its customers.
Blackphone is an Android device and more or less looks and feels like a regular Android phone. However, it uses a forked version of Android called PrivatOS, which limits apps' access to personal information and works with privacy-enabled apps. For example, the built-in Web browser doesn't track your Web surfing. The phone also enables you to choose what personal information is available to each app. When you install apps, the installer presents you with individual permissions for each source of data that each app requests.
The Blackphone also keeps its Wi-Fi radio from being used to log your movements as you walk around: Wi-Fi turns off whenever you're outside the range of a trusted hotspot. All data on the phone is encrypted, so if your phone is lost or stolen nobody else can gain access to the data. It has its own remote-delete tools as well.
The phone comes with a two-year subscription to Silent Circle's platform that encrypts phone calls and emails. The subscription covers three people -- the owner of the Blackphone and two friends or colleagues, regardless of what phones they use. It also comes with a two-year subscription to Disconnect, which anonymizes Wi-Fi connections, and SpiderOak, which is an anonymous cloud storage service.
Blackphone is designed for the general market, but Geeksphone claims that it's getting inquiries from government customers.
The Black phone
For the past two years, aerospace and defense contractor Boeing has been working on a special-purpose phone called the Black for customers who work in the government, the military and espionage. The phone was revealed in public FCC documents that all phone makers are required to file.
The Boeing Black phone is also an Android smartphone, but we know much less about it, because Boeing intends to keep its details secret. Papers filed with the FCC specifically request that information about the phone be kept secret, and a letter accompanying those papers says that even after the phone is available, it won't be available to the general public, nor will information about the phone be public.
The Black phone is small, thick and heavy. The handset is 5.2 in. tall. It's about twice as thick as an iPhone and much heavier. It has a modular design that enables users to attach add-ons, such as tracking tools, satellite transceivers, biometric sensors and solar charging devices.
RAID isn’t the only way to link multiple hard drives together. There are other ways to combine separate data storage devices into what seems to be a single volume. One of these ways is JBOD. The acronym JBOD stands for “just a bunch of disks”. On a basic level, JBOD is fundamentally very similar to RAID-0, in that neither offer any sort of data redundancy or fault tolerance. Because of this, some people in the computer science world consider RAID-0 to actually be a kind of JBOD, not a “true” RAID level. Like RAID-0, JBOD has a high risk of failure. If you’ve lost data from your JBOD setup, our JBOD data recovery experts can help.
How Does JBOD Work?
In a JBOD, the disks are not necessarily connected. Technically, connecting several external hard drives to your computer, each with its own drive letter in Windows, is enough to create a JBOD. In the literal sense, it is “just a bunch of disks”. But usually, when we talk about JBOD, we’re talking about multiple disks appearing as one unified storage volume. Instead of striping the disks, you span them together. When you span two hard drives together, you stitch the end of one volume right onto the beginning of the next. This appears to you and your operating system to be a single large volume with no gaps or seams. Like RAID-0, when one drive in a JBOD fails, the entire volume becomes inaccessible.
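To make the spanning idea concrete, here is a minimal sketch (not from the article) of how a spanned volume translates a logical block number into a disk and an offset, with a striped (RAID-0) mapping shown for contrast. The disk sizes and block numbers are invented for illustration.

```python
# Sketch: how a spanned (JBOD) volume maps a logical block to a
# physical disk. Disk sizes and block numbers here are illustrative.

def span_locate(logical_block, disk_sizes):
    """Return (disk_index, block_on_disk) for a spanned volume.

    disk_sizes lists per-disk capacities in blocks; disks are stitched
    end to end, so we walk them in order until the logical block falls
    inside one of them.
    """
    for disk, size in enumerate(disk_sizes):
        if logical_block < size:
            return disk, logical_block
        logical_block -= size
    raise ValueError("logical block beyond end of volume")

def stripe_locate(logical_block, n_disks):
    """RAID-0 contrast: blocks alternate round-robin across equal disks."""
    return logical_block % n_disks, logical_block // n_disks

# Two disks of 100 and 250 blocks spanned into one 350-block volume:
print(span_locate(40, [100, 250]))   # (0, 40): lives on the first disk
print(span_locate(160, [100, 250]))  # (1, 60): past disk 0, so disk 1
```

Note how, in the spanned case, low block numbers live entirely on the first disk, which is why losing any one disk tears a contiguous hole out of the volume.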
One key difference between a JBOD and RAID is that the disks in a JBOD can be of various sizes. If you try to make a RAID-0 with two 2-terabyte disks, one 3-terabyte disk, and one 4-terabyte disk, the controller will only use 2 terabytes from the third and fourth drives and ignore the rest of their storage capacity, giving you 8 terabytes of disk space. On the other hand, a JBOD will offer the full 11 terabytes. This is one of the points in JBOD’s favor and one reason why someone might want to use it over RAID-0. (It is our belief, though, that nobody should be using RAID-0 for any reason in this day and age.)
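The capacity arithmetic above can be sketched in a few lines; the disk sizes match the hypothetical 2+2+3+4 terabyte example.

```python
# Sketch: usable capacity of RAID-0 vs. JBOD for a mixed set of disks.

def raid0_capacity(disk_sizes):
    # RAID-0 stripes equally across disks, so every disk is
    # effectively truncated to the size of the smallest one.
    return min(disk_sizes) * len(disk_sizes)

def jbod_capacity(disk_sizes):
    # Spanning concatenates the disks, so every byte is used.
    return sum(disk_sizes)

disks_tb = [2, 2, 3, 4]
print(raid0_capacity(disks_tb))  # 8
print(jbod_capacity(disks_tb))   # 11
```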
Other Ways to Use JBOD
A JBOD doesn’t necessarily have to be RAID-0’s spanned cousin. Some software RAID setups start out with a JBOD. The disks are spanned into a single volume, and then the software handles striping, parity, and other RAID features. A software RAID created using Windows Storage Spaces, for example, can be created using disks of any size and can provide striping, mirroring, and single or double parity on top of a JBOD made with a wide variety of data storage devices. If you’ve ever wanted to set up a RAID-5 array using two 16-gigabyte flash drives and a 32-gigabyte SD card, you probably could (although it probably wouldn’t be very useful).
This enclosure had four hard drives linked in a JBOD. Our engineers were able to perform a successful JBOD data recovery operation for our client.
How We Recover Data from a Failed JBOD
Just because a JBOD spans its data instead of striping it doesn’t make it any less failure-prone than RAID-0. JBOD and RAID-0 are both more than twice as likely to fail as a single hard drive, and that number only goes up as you add more drives. Not only can any drive fail, but the controller card or enclosure can fail as well. For this reason, data in a JBOD is especially vulnerable.
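As a rough illustration of why the risk grows with drive count, here is a simplified model that assumes independent drives, each with the same failure probability over a given period; the 5% figure is purely illustrative and ignores the controller and enclosure failures mentioned above.

```python
# Sketch: probability that a spanned (or striped) volume is lost,
# assuming independent drives each failing with probability p.
# The 5% figure is illustrative, not a real annual failure rate.

def volume_failure_prob(p_single, n_drives):
    # The volume survives only if every single drive survives.
    return 1 - (1 - p_single) ** n_drives

p = 0.05
for n in (1, 2, 4):
    print(n, round(volume_failure_prob(p, n), 4))
# 1 0.05
# 2 0.0975
# 4 0.1855
```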
When we receive a JBOD data recovery case, our engineers create disk images of each drive that are as complete as possible, repairing any drives that require it. In most cases, even a failed hard drive can be over 90% imaged after our engineers repair it. Just like in RAID cases, our technicians must use custom software to link the drives together. But a JBOD array isn't quite as challenging for our engineers to put back together as a RAID array. Because JBODs are simpler by design, there are far fewer puzzle pieces that need to be put together.
Why Choose Gillware for JBOD File Recovery?
The data recovery experts at Gillware have tens of thousands of hours of experience. These hours of experience encompass thousands of RAID and JBOD data recovery cases. In our cleanroom lab in Madison, WI, we offer our engineers’ quality services at an affordable price.
At Gillware Data Recovery, we offer financially risk-free data recovery from crashed JBOD setups. We start with a free evaluation, and can even provide you a UPS label to cover the cost of inbound shipping. As part of our standard service, evaluation takes 1-2 business days on average. Expedited service is also available, and is still financially risk-free.
After evaluating your JBOD data recovery situation, we present you with a statement of work. This includes a firm price quote, probability of success, and how long we expect the recovery work to take. We only move forward with the recovery if you approve the quote. We don’t show you a bill until we’ve finished our work. If we’ve successfully recovered your data, we send it back to you on a new password-protected hard drive.
Ready for Gillware to Assist You with Your JBOD Data Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering data from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined to be secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies.
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network.
March 07, 2016
This is the second post in a series that celebrates important ideas in the development of business systems and the applications that support them. The subject of this post is database technology and the software that supports it. Database technology typically means a database management system (DBMS), which is system software for creating and managing databases. The DBMS software provides programmers and others with an orderly way to create, retrieve, update and manage data.
In the Beginning
When many mainframe programmers started programming, access methods were the only way to store and retrieve the data used by programs. Prior to access methods, channel programming was used to access data on external devices. Depending on retrieval needs and one's knowledge of the options available, programmers created sequential, direct access and indexed files. Direct access was considered something of a challenge because mapping a key to a record location wasn't always straightforward. If you were lucky, there was an easy one-to-one pairing of key to record location. The access methods had names like SAM, BDAM, ISAM and VSAM. Many are still used today, not just because the applications are still around but also because they still do the job well when matched to certain needs.
Things Have Changed
Today, many applications use database technology in place of access methods. There is good reason for this. Consider this straightforward list from technologist Robert Harvey:
* You can query data in a database, essentially asking it questions.
* You can look up data from a database relatively quickly.
* You can relate data from two different tables together using JOINs.
* You can create useful reports from data in a database.
* Your data has a built-in structure to it and it is easy to see the order.
* Information of a given type is always stored only once, so redundancy is eliminated.
* Databases are ACID, which means they have the properties of atomicity, consistency, isolation and durability. This is a set of characteristics that guarantees reliable handling of database transactions.
* Databases are fault-tolerant.
* Databases can handle very large amounts of data.
* Databases are concurrent, handling multiple users at the same time without corrupting the data.
* Databases scale well.
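A few of these benefits (queries, JOINs and atomic transactions) can be seen in miniature with Python's bundled SQLite engine. The schema and data below are invented for illustration.

```python
import sqlite3

# Sketch: querying, JOINing, and an atomic (ACID) transaction,
# using the SQLite DBMS bundled with Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, "
             "dept_id INTEGER REFERENCES dept(id))")

with conn:  # the with-block commits atomically, or rolls back on error
    conn.execute("INSERT INTO dept VALUES (1, 'Recovery')")
    conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                     [(1, 'Ada', 1), (2, 'Grace', 1)])

# Relate data from two tables with a JOIN -- no hand-written file I/O:
rows = conn.execute(
    "SELECT emp.name, dept.name FROM emp "
    "JOIN dept ON emp.dept_id = dept.id ORDER BY emp.id").fetchall()
print(rows)  # [('Ada', 'Recovery'), ('Grace', 'Recovery')]
```

The same result with raw access-method-style files would require hand-coded record layouts, index maintenance and locking, which is exactly the point of the list above.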
In short, developers who use databases benefit from a wide range of well known, proven technologies developed over many years by a wide variety of very smart people.
When Does It Make Sense?
Today, it is hard to imagine developing a new application without using a DBMS. Uche Ogbuji, Enterprise Data Architect, Engineer and Entrepreneur, writes:
“Anyone looking for a good flame war can always drop into any software development forum and casually ask what database software should be used for the next project, or even ask whether he or she needs to bother with a relational database. Because databases technologies are such an important part of programmer philosophy, it is hard to find objective discussion for the hapless developer looking for a good, general-purpose database management system (DBMS) to use in a project.” Ogbuji’s IBM developerWorks article “Choosing a Database Management System” is 15 years old but the ideas still ring true today.
Cray engineers have been working on a new parallel computing language, called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated from the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems.
To explain why Cray is pursuing a new computing language when there are currently established models in place, Principal Engineer Brad Chamberlain has penned a detailed blog post.
Chamberlain maintains that programmers have never had a decent programming language for large-scale parallel computing. By that, he means “one that contains sufficient concepts for expressing the parallelism and locality control required to leverage supercomputers, while also being as general, effective, and feature-rich as languages like Fortran, C, C++, or Java.”
“Ideally,” he continues, “such a language would strive to be more than simply ‘decent’ and feel as attractive and productive to programmers as Python or MATLAB are. Libraries and pragma-based notations are very reasonable and effective alternatives to creating a language. Yet, given the choice between the three, languages are almost always going to be preferable from the perspectives of: providing good, clear notation; supporting semantic checks on the programmer’s operations; and enabling optimization opportunities by expressing the programmer’s intent most clearly to the compiler, runtime, and system.”
The communities’ current go-to technologies for parallel programming, namely MPI and OpenMP, have done the job, but they are lower-level and lack many of the features of more modern languages.
As to the claim that HPC workflows necessitate lower-level techniques, Chamberlain clarifies that those who are completely satisfied with currently-available tools can certainly keep using them, but he wants to provide an alternative for those who find them lacking. He also wants to push back on the idea that HPC programming can only be done close to the metal.
It is possible to use abstractions that boost productivity as well as performance, Chamberlain contends. “With good design,” he writes, “not only can raising the level of abstraction improve programmability and portability, it can also help a compiler — to say nothing of subsequent programmers — better understand and optimize a piece of code.”
Chapel is not just a higher-level language, however. It was actually designed with a multiresolution philosophy. According to this overview, the approach allows users to begin by writing very abstract code and then add more detail until they are as close to the machine as their needs require.
The overarching goal of the Chapel initiative is to make parallel programming more accessible so that computational scientists, domain experts, and mainstream programmers can leverage the full benefits of parallelism as core counts proliferate.
Chapel 1.9.0 was released on April 17, 2014. More details about the project are laid out in an earlier blog post from Chamberlain.
How can Spammers and Hackers Send Forged Email?
Everyone has seen spam messages arrive with a “From” address that is your own address, a colleague’s, a friends, or that of some company that you work with or use. These From addresses are forged to help the messages (a) get by your spam filters, and (b) get by your “eyeball filters”.
But how are these folks “allowed” to do that?
When email was first developed, there was no concept of the need for security; protections against identity theft and forgery were not part of the plan. As a result, it is actually trivial for one to send an email with a forged “From” address and even some forged “Received” tracking lines by just connecting to your target’s email server and telling it whatever you want.
Let’s try to send an email to the address “email@example.com” pretending to be from “Bank of America”. The purpose of this exercise is not so much to teach you how to send forged email (this is not a new technique) as to set the stage for understanding how to detect and combat these kinds of messages.
Step 1: Get an example message from Bank of America
In order to send a compelling forged email message, we need an example of a message that came from the source that you are trying to forge … so you can make your message “look like” a typical message that they send (e.g. in terms of the subject, content, wording and grammar, images, etc.). We are not going to consider the message subject or content in this article. Instead, we are concerned with the headers … the part of the message that identifies who sent it, who it is to, and what path it took over the Internet.
Taking an example message from Bank of America, and looking at the headers, I see (this excerpt of interesting headers):
Return-Path: <firstname.lastname@example.org>
Received: from unknown [18.104.22.168] (EHLO mta5.ealerts.bankofamerica.com)
  by p02c12m115.mxlogic.net(mxl_mta-8.2.0-3) over TLS secured channel
  with ESMTP id 9e853d45.0.105970.00-2374.168431.p02c12m115.mxlogic.net
  (envelope-from <email@example.com>);
  Thu, 05 Feb 2015 04:50:03 -0700 (MST)
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; s=200608;
  d=ealerts.bankofamerica.com;
  h=From:To:Subject:Date:MIME-Version:Reply-To:Message-ID:Content-Type;
  firstname.lastname@example.org; bh=YhJz6l84RyMXW/utiX0auH/xtTQ=;
  b=Dzp0SVPypvq8X0VHConCJLSIYwL23EbgP8VxN4qK765Y57++HbhTQDjaXWc0sFjV7BV6/Np3DD3E
  dYHiIoxpzSy1GHvwvIDaG0+md1ijOmDwsBgFpE70upc+9WVHaNOYXjWxkO1tsgdfjEeJprcK93Wx
  Oc5xp60eg3MRnIvLC3A=
Received: by mta5.ealerts.bankofamerica.com id hqdcek163hsp
  for <recipient@actual_domain>; Thu, 5 Feb 2015 05:49:51 -0600
  (envelope-from <email@example.com>)
From: "Bank of America" <firstname.lastname@example.org>
Reply-To: "Bank of America" <email@example.com>
Message-ID: <firstname.lastname@example.org>
We see from this example, typical From addresses, Reply-to addresses, “Return Path” addresses (for bounces), Message-ID formats, that they use DKIM, and an example of a server at Bank of America that talked to our server to deliver the message.
For items that look like unique IDs (e.g. the Message-ID and Return-Path), the hacker would use slightly altered but similar values. S/he would use the same From address, omit the DKIM signature, and possibly add some fake “Received” lines to make the message appear to come from Bank of America’s servers.
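As a sketch of how a recipient's software might later spot such tampering, here is a hypothetical example using Python's standard email module to check two of the tell-tale signs just mentioned: a missing DKIM signature and a Return-Path whose domain does not match the From domain. The sample message below is invented for illustration.

```python
from email import message_from_string

# Sketch: inspecting a raw message's headers for forgery warning signs.
# The addresses and domains here are invented for illustration.
raw = """\
Return-Path: <alerts@bankofamerica.evil.example>
From: "Bank of America" <alerts@ealerts.bankofamerica.com>
Reply-To: "Bank of America" <alerts@ealerts.bankofamerica.com>
Subject: Account alert

(body)
"""

msg = message_from_string(raw)

# Sign 1: no DKIM-Signature header, although the real sender signs mail.
print("DKIM-Signature" in msg)  # False

# Sign 2: the bounce address (Return-Path) domain does not match From.
from_domain = msg["From"].rsplit("@", 1)[-1].rstrip(">")
bounce_domain = msg["Return-Path"].rsplit("@", 1)[-1].rstrip(">")
print(from_domain == bounce_domain)  # False
```

Real filters go much further (verifying the DKIM signature cryptographically, checking SPF against the connecting IP), but even these two header checks catch a surprising amount of casual forgery.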
Step 2: The Forged Message
Using this information, the hacker would construct a forged message similar to this one, altering some of the original values and adding new ones:
Return-Path: <email@example.com>
Received: by mta5.ealerts.bankofamerica.com id hqdcek163bnq
  for <firstname.lastname@example.org>; Thu, 5 Feb 2015 10:25:15 -0600
  (envelope-from <email@example.com>)
From: "Bank of America" <firstname.lastname@example.org>
Reply-To: "Bank of America" <email@example.com>
Message-ID: <firstname.lastname@example.org>
Subject: Alert! Your Bank of America account has been compromised
To: email@example.com
(Add the actual body of the message, next)
Step 3: Where to send the message?
Next, the hacker would determine the servers that handle inbound email for the target’s domain — luxsci.net. This could be easily done using the “dig” command at the Linux/Unix/Mac command line:
#>dig +short luxsci.net mx
This informs the hacker that s/he can use “inbound10.luxsci.com” as the server to connect to to send the message.
Step 4: Sending the forged email message
Next, the hacker fires up the standard “telnet” program and connects to inbound10.luxsci.com on port 25 (the standard port for inbound email delivery) and “talks SMTP” to that server… specifying who the message is from and to and what the message contains. The server accepts that (hopefully) and then filters it and (maybe) delivers it to the unsuspecting recipient.
In the dialog below, the mail server’s responses begin with a three-digit status code (220, 250, 354, 221); everything else is content entered by the hacker.
#>telnet inbound10.luxsci.com 25
Connected to inbound10.luxsci.com.
Escape character is ‘^]’.
220 rs302.luxsci.com ESMTP Sendmail 8.14.4/8.13.8; Thu, 5 Feb 2015 15:31:49 GMT
ehlo ealerts.bankofamerica.com
250-rs302.luxsci.com Hello static-71-174-64-40.bstnma.fios.verizon.net [22.214.171.124], pleased to meet you
mail from: <firstname.lastname@example.org>
250 2.1.0 firstname.lastname@example.org… Sender ok
rcpt to: <email@example.com>
250 2.1.5 email@example.com… Recipient ok
data
354 Enter mail, end with “.” on a line by itself
Received: by mta5.ealerts.bankofamerica.com id hqdcek163bnq for <firstname.lastname@example.org>; …
From: “Bank of America” <email@example.com>
Reply-To: “Bank of America” <firstname.lastname@example.org>
Subject: Alert! Your Bank of America account has been compromised
(the rest of the forged message goes here, followed by a line containing only a period)
.
250 2.0.0 t15FVnRD023664 Message accepted for delivery
quit
221 2.0.0 rs302.luxsci.com closing connection
Connection closed by foreign host.
You can see in this dialog, that:
- The hacker pretended to be coming from one of Bank of America’s servers (ehlo ealerts.bankofamerica.com)
- The hacker set the “envelope” from and to addresses to match that in the fake message (the “rcpt to” and “mail from” SMTP commands)
- The hacker sent the full fake message
- The server was “ok” with it.
You can also see in this dialog that the sender’s identity never appears to be validated or checked. As we will see, this is not actually the case.
The Received Forged Email
The unsuspecting recipient may soon find a message from “Bank of America” in his/her INBOX.
The recipient may believe it, click on it, and be sucked into some phishing scheme!
What are Your Defenses?
Since forged email has been around a long time, email has evolved and there are defenses against it that work to varying degrees.
Some of these include:
- SPF to verify that the server used to send email is authorized to do so
- DKIM to sign messages and prove that the sender is authorized to construct messages from this sender
- Filters to block wide-spread/common phishing attacks (doesn’t help with spear phishing — messages targeted at you, specifically)
- Allow and deny lists that permit or block messages from specific IP addresses
- Common Sense
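To give a flavor of how the first of these defenses works, here is a simplified sketch of the core of an SPF "ip4" check. Real SPF evaluation also handles include:, a and mx mechanisms, macros and DNS lookups; the record and IP addresses below are invented for illustration.

```python
import ipaddress

# Sketch: the heart of an SPF ip4 check. A receiving server fetches the
# sending domain's SPF record from DNS, then tests whether the IP that
# connected to it is covered by an authorized range. Record and IPs
# here are invented.

def spf_ip4_allows(spf_record, client_ip):
    """Return True if an ip4: mechanism in the record covers client_ip."""
    ip = ipaddress.ip_address(client_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

record = "v=spf1 ip4:171.161.0.0/16 -all"
print(spf_ip4_allows(record, "171.161.12.34"))  # True: authorized server
print(spf_ip4_allows(record, "203.0.113.99"))   # False: a forger's server
```

Because the forged message in our example was delivered from an arbitrary residential connection rather than an authorized server, even this rudimentary check would flag it.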
In our next post on this topic, we will analyze the message received by email@example.com and see what tell-tale signs exist indicating that it is fraud and start to examine how messages such as this can be and are stopped every day by properly configured email systems.
We will see that there is a large burden on the owner of the sending domain to put measures in place to ensure that recipients of messages “from” them can properly differentiate good messages from forged messages. We shall see, for example, how Bank of America does this.
Read Followup Post: Analyzing a Forged Email Message: How to Tell it was Forged?
Studying Can Be MURDER
Studying is perhaps the only sphere for which “murder” can have a positive connotation. That’s because the term serves as an acronym for a study system employed by many meticulous learners: mood, understand, recall, digest, expand and review. See? M-U-R-D-E-R, she wrote. Here’s an explanation of the system and how it works.
M is for Mood
This refers to creating the right environment for effective study. Of course, finding an appropriate setting for studying is a personal affair. People have to employ the things that work best for them, and individuals being individuals, preferences will vary widely. But no matter your preference, the main thing to keep in mind is concentration — the ideal environment will be one you hardly even notice as you study.
First, consider the location itself and your own immediate reaction to it. Is there a great deal of external stimuli around to distract you (TV, Internet, other people, etc.)? What you’ll be using to study also influences where you’ll be doing it. Think about what tools and materials you’ll need to use most during that time. Generally speaking, you shouldn’t have more than the bare necessities, as too much stuff can drive you to distraction.
U is for Understand
You’re bound to come across some unfamiliar topics and terms as you study a new subject. When you find yourself not comprehending a particular concept while reading a print or virtual resource, make a note of it and move on.
Also, keep a dictionary on hand so you can look up unknown words, understand them in their context and then move on to the next point. If the terms are so technical or cutting-edge that they won’t be found in a standard unabridged dictionary, try to find some kind of technology reference book that explains the ones you don’t understand.
R is for Recall
After you’ve finished going through the material, try to remember as much of it as you can. Even better, see how much you can write down in your own words. If you’re having trouble recollecting the information — or worse, are drawing a blank — go back to the first step and start over.
D is for Digest
Once you’ve recalled what you know, revisit the stuff you didn’t quite get. Go back and re-read the terms and concepts you had trouble with to see if you can make sense of it. If not, try to find another source or a knowledgeable expert who can explain it better.
E is for Expand
Once you’ve got all that down, move to broaden your knowledge of the topic even further by thinking critically about it. Proponents of the MURDER system recommend starting out with three rhetorical questions:
1) If I could speak to the author, what questions would I ask or what criticism would I offer?
2) How can I apply this material to what I am interested in?
3) How could I make this information interesting and understandable to others?
R is for Review
Finally, consider everything you’ve learned in a final review session. Although this obviously includes the topic (or topics) you studied, it also constitutes an evaluation of what study techniques and strategies you used and how well those worked. Figure out what suits you best, and just throw out the rest. | <urn:uuid:a1f43104-1ae6-4076-8d30-7574571b70e5> | CC-MAIN-2017-04 | http://certmag.com/studying-can-be-murder/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956125 | 720 | 3.125 | 3 |
Nine out of ten new web sites visited are found through Internet searches. In fact, web search has become an essential part of doing business online, with more than 80 per cent of Internet users keying in a company name in a search engine even if they know the company’s web address. There’s no denying it: “Googling” – or using any search engine for that matter – is as frequent an occurrence in offices as getting a cup of tea. But as use of search increases, so does the incidence of web-based malware. Hackers are exploiting vulnerabilities in web browsers as they catch up with the latest online behavioural and communication trends.
Analysis from the ScanSafe Security Threat Alert Team, which monitors web-based malware, shows that one in five Internet search results contains malware or offensive and illegal content. Offensive content represents the greatest risk, accounting for 80 per cent of total search blocks.
Search engines have increasingly become a gateway for exposing businesses to security risks, such as Trojans, spyware, and keyloggers. Unsuspecting web users can be exposed to such malware from a wide range of web sites—including legitimate sites that have been compromised to unwittingly host malware. This malware can easily install itself on the corporate network and severely disrupt business operations.
Although it is an essential tool in the workplace, if secure web searching is ignored, it can become the Achilles’ heel in corporate web filtering policies and expose companies to security breaches, information leakage and legal issues. One example of malware exploiting search engines is through the use of “spamdexing”. Compromised sites are appended with hidden text containing keywords and links to other (typically compromised) sites which host exploit code. This increases the ranking of the exploit site in search engines, thus when users search on those particular keywords, the exploit site is returned prominently in the results. Those who click through to the site will typically become victims of so-called “drive-by downloads” of malware. The Zhelatin family of malware, commonly referred to as the “Storm worm”, has been discovered using this technique to foist new variants of the malware onto victims’ computers.
In another Storm-related incident, Zhelatin-infected bloggers inadvertently posted Zhelatin spam with malicious links to their blogs. This occurred because these bloggers had configured their blogs to automatically post content sent to a particular address. When the Zhelatin mass-spamming component activated, it sent the spam to the blog address as well. Other malware, such as the Trojan MeSpam, append malicious links to Web 2.0 related activities, such as blog comments, forum posts, and webmail. Of course, search engines crawling these sites will include the miscreant posts in their search results, thus further exposing users.
Evolving web threats
Web-based threats have been a prominent attack method for virus authors ever since the success of the 2001 Nimda worm that spread via email and exploited unpatched vulnerabilities on Web servers. Today, the interactive technologies that are the backbone of Web 2.0 provide fertile ground for cross-site scripting (XSS) attacks. In addition, a lucrative black market in zero-day vulnerabilities, exploit toolkits, and commercially produced malware creates an environment conducive to drive-by downloads of malicious content from even the most legitimate of web sites.
How do you search safely?
Search is one of the many useful features of the Internet that exists today and is a critical component of navigating the rich array of web content available. To search safely, with advance warning of malware or offensive content, companies can utilize a corporate safe web search tool, which will provide guidance to employees on acceptable websites based on the company’s own acceptable usage policies.
The important function that such services provide is the ability to notify web users of potential risks in real time. This distinction of real time is critical, as a site that was safe the last time it was crawled, may not be safe the next time it is accessed. By giving users the right information, in real time, they are able to take control of their online behaviour. This reduces the potential for accidental policy violations and makes it easier for administrators to maintain their security policy. Securing web searches in real time protects the user and the corporation, allowing the user to continue using productivity-enhancing search engines without the increased risk of exposure to malware and policy violations. | <urn:uuid:eef92bc1-18b1-495c-85a0-077ec4fe34c9> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2007/11/13/searching-for-a-cure-to-web-malware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928665 | 916 | 2.875 | 3 |
First responders from around the nation got to take the controls of state-of-the-art robotics at Texas A&M's Disaster City on Wednesday.
The Texas A&M Engineering Experiment Station Center for Emergency Informatics hosted about 50 experts, A&M faculty, students and private vendors, demonstrating technology used by the U.S. armed forces. The two-day workshop focused on how to reduce the loss of life and property during floods.
Experts from the National Weather Service, Texas Task Force 1, the National Guard, American Red Cross, Texas A&M Forest Service and others descended on the College Station facility from as far away as South Carolina, Utah, California and Washington, D.C.
It was the 15th biannual event hosted by TEES and the seventh held at Disaster City, a high-tech sprawling collection of derailed trains, polluted ponds, faux meth labs and other obstacles where first responders get near real-world training.
The different groups used the event as a brainstorming workshop, with private companies getting feedback on what sort of technology first responders need and the service men and women getting to test out and mold the gadgets of the future.
"As these technologies like the robots that fly -- the UAVs -- are beginning to be transitioned from military use only into use for the private and public sector, first responders are looking at them as a tool as well," said David Martin, Texas Task Force 1 member. "It enables us to get a view of the situation we can't get from any other perspective without putting our firefighters or responders in harm's way."
For example, drones and robots can provide video from areas more easily and safely than traditional means. A robotic boat can detect underwater debris in a flooded area, and an aerial drone can quickly take photos or video from a disaster zone.
"It makes the search go faster and it gives you a better overview of the entire scene," Martin said. "Part of what we're doing here this week is exploring what those possibilities are. What is the technology out there and what are the ways we can use it that would benefit the search and rescue community."
Steven Rutherford with the South Carolina Emergency Response Task Force is interested in how the technology can combat forest fires and hurricanes on the East Coast.
"Usually when hurricanes come in, they shut the beach down," Rutherford said. "We can take some of this knowledge and go out there and recon the beaches before we go in there to start searching buildings. So we can have a good layout of exactly how devastating it is."
©2014 The Eagle (Bryan, Texas) | <urn:uuid:7a30d258-8cab-4f2c-ab26-0cb66833d1ee> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/First-responders-get-look-at-newest-gadgets-in-the-field.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00455-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943916 | 530 | 2.828125 | 3 |
An efficient eDirectory design is based on the network layout, organizational structure of the company, and proper preparation.
If you are designing eDirectory for e-business, refer to Section 2.6, Designing eDirectory for e-Business.
The network layout is the physical setup of your network. To develop an efficient eDirectory design, you need to be aware of the following:
Users that need remote access
Network resources (such as number of servers)
Network conditions (such as frequent power outages)
Anticipated changes to the network layout
The organizational structure of the company will influence the eDirectory design. To develop an efficient eDirectory design, you need the following:
The organizational chart and an understanding of how the company operates.
Personnel who have the skills needed to complete the design and implementation of your eDirectory tree.
You will need to identify personnel who can do the following:
Maintain the focus and schedule of the eDirectory design
Understand eDirectory design, design standards, and security
Understand and maintain the physical network structure
Manage the internetwork backbone, telecommunications, WAN design, and router placement | <urn:uuid:90c0e3e2-e30c-4473-9259-9ae5b6353c7d> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/edir88/edir88/data/a5i2p8s.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00181-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888215 | 234 | 2.65625 | 3 |
A Computerworld article last week reported on a researcher’s prediction that shrinking the size of NAND flash memory for solid-state drives (SSDs) may cause the technology to lose significance altogether.
In a lecture delivered at this week’s Usenix Conference on File and Storage Technologies, Laura Grupp, a graduate student at the University of California, San Diego, argued that as flash memory is manufactured at smaller geometries, data errors and latency would increase. The idea behind shrinking the transistors is to boost capacity, which translates into lower cost per gigabyte. But that pushes performance and reliability in the opposite direction.
Grupp wrote about the phenomenon in a study titled The Bleak Future of NAND Flash Memory. “While the growing capacity of SSDs and high IOP rates will make them attractive for many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications,” she wrote.
Grupp, along with John Davis of Microsoft Research and Steven Swanson of UCSD’s Non-Volatile Systems Lab, tested 45 types of NAND flash chips, spread across six vendors and multiple transistor geometries (between 25nm and 72nm). The researchers found that write speed for flash blocks had high variations in latency. In addition, they also discovered wide variations in error rates as the NAND flash wore out. Multi-level cell (MLC), and especially triple-level cell (TLC) NAND created the worst results, while single-level cell (SLC) performed the best.
Grupp, Swanson and Davis extrapolated the results to 6.5nm technology, which is the size NAND transistors are expected to be in 2024. At that size, the researchers estimate that read/write latency will double in multi-level flash, with triple-level suffering 2.5 times as much latency. Bit error rates are expected to increase as well, more than tripling those of current levels.
But since flash memory is a solid-state technology (versus the mechanical technology used in hard disks), SSDs will always have a natural advantage in speed and throughput. In general, reading and writing to an SSD is about 100 times faster than that of a hard drive.
Grupp concedes that even with 2024-level transistor sizes, SSDs will outperform their hard disk competition by a wide margin, 32,000 IOPS to 200 IOPS, respectively. But because of the latency and error rate issues, she believes that 6.5nm will be the end of the line for flash memory.
For flash memory, there seems to be a choice of performance or capacity, but not both. This could have lasting impacts on data-intensive applications that lean heavily on IOPS performance. Without a replacement for NAND memory, performance could stall or even decline until another solid-state technology takes its place. | <urn:uuid:3a86682d-2e0f-4adb-b0ee-b12876b7b334> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/02/22/flash_memory_shrinking_into_obscurity_/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00575-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934919 | 610 | 3.015625 | 3 |
Dennis Kubes, a Texas coder who specializes in realtime search, offers up a C tutorial.
Programming in C can be tough for those new to the language, so Dennis Kubes wrote "The 5-minute Guide to C Pointers," and put it on his blog. Only four code snippets are needed for Kubes to illustrate his tutorial.
Those wishing an expanded overview should look at a video from simplcool titled "tutorial pointers in c++ part1" on YouTube. This almost eight-minute video covers C and C++, and says right upfront: "A pointer is a variable that contains the address of another variable."
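To make that definition concrete, here is a minimal C sketch of our own (not taken from either guide): a pointer parameter holds the address of a caller's variable, so writing through it changes the caller's data.

```c
/* swap() receives the addresses of two ints -- each parameter
   is a pointer, i.e. a variable containing an address -- and
   exchanges the values stored at those addresses. */
void swap(int *a, int *b) {
    int tmp = *a;  /* dereference a: read the int it points to */
    *a = *b;       /* store b's value where a points */
    *b = tmp;      /* store the saved value where b points */
}
```

Calling swap(&x, &y) passes the addresses of x and y, which is why the function can modify the caller's variables rather than copies of them.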
Pointers were in my opinion the hardest subject of C until I read your guide. (dirtycoder on denniskubes.com)
What a boss, I want this guy to lecture me :D (fr00tcrunchStarcraft on youtube.com)
Maybe it shows my age, but I am yet to understand why so many developers nowadays have such a hard time grasping pointers, regardless of the language being used to teach them. (pjmllp on news.ycombinator.com)
Questions and answers
this helped alot thnx man ! ! now i know how to use pointers but i still don't know? why do we use em !! (xbadnerd18x on youtube.com)
Arrays are not pointers; pointers are not arrays. This is perhaps the most common misconception about C, and a "Guide to C Pointers" should not propagate it. (_kst_ on news.ycombinator.com)
Let me add …
Hi nice tutorial, one suggestion though is that many people learning c pointers are unsure some of the merits of why they should use them when they encounter these beginner tutorials (sulfide on denniskubes.com)
They're post office box numbers.
In this analogy, the post office is all your RAM. The big place where my analogy breaks down a bit is that, in this little world, the post office will store things bigger than one box in multiple adjacent boxes, so to get anything into or out of the post office you have to specify which box you want them to start with and how many boxes they'll have to use. (derleth on news.ycombinator.com)
y is the name of the integer. its just like algebra
y = 1
x and y are both integers, the letters just represent numbers? (murder on youtube.com)
Every time someone tries offering a simplified explanation of pointers, I've countered with the old Buddhist saying that, "The pointing finger is not the moon," followed by a brief foray into syntax and operators, e.g.,
moon* finger = &luna;
As often as not, enlightenment occurs. (rosser on news.ycombinator.com)
What tutorial or book have you found most useful for C or C++?
Researchers at Wake Forest Baptist Medical Center are embarking on a project that is so overloaded with sci-fiesque elements that if it were a movie, you might question the screenwriter's credibility.
The "body on a chip" project will use 3-D printing—or bioprinting—technology to create mini human-organ systems about the size of a quarter to test the body's response to drugs. It's funded by a $24 million grant from the Defense Department to develop antidotes to very strong agents in the areas of chemical and biological warfare.
The ultimate goal of bioprinting is to create large, functional, implantable organs that will address the growing gap between viable organ supply and demand for transplants. Along the way, the simpler, mini-versions can be used to more effectively test drugs.
A few groups have been experimenting with bioprinting tissues and organs, but the body-on-a-chip project is unique in connecting the structures together. The chip will be able to test the impact of agents—including intense chemical weapons, more mainstream drugs, and treatments—on the human body. The project offers an alternative to animal testing—which is often inefficient and inaccurate for measuring human responses—and enables the lab to test the full system's response, rather than just one type of organ.
Scientists started making tissues by hand about 25 years ago. Using a technique known as scaffolding, cells from a patient's tissue were layered on 3-D molds and grown in an incubator outside the body. Using bioprinting technology, they are now able to feed the same information into a computer to build the tissue.
"Printing came about as a way to scale up the tissues and organs we were already creating by hand," says Anthony Atala, director of the Wake Forest Institute of Regenerative Medicine in North Carolina and the lead investigator on the project. Bioprinting enables researchers to create tissues with much greater precision and accuracy.
Atala explains the four tissues types in order of complexity: Simplest are flat structures like skin; second are tubular structures, such as blood vessels or windpipes; third are hollow non-tubular organs, such as the stomach, bladder, and uterus; and last and most complex by far are solid organs, such as the heart, kidney, and liver. These have more cells per area, more cell types, and higher nutrition requirements, and they need much more vascularity and blood supply.
To this point, scientists have only implanted the first three types from handmade tissues in patients. No bioprinted structure has been implanted.
The mini-organs are small enough that they don't require a complex vascular tree to survive. The mini-livers, hearts, lungs, and kidneys are not fully functional native organs, but they mimic the functionality for the testing application.
The Wake Forest lab has developed one machine to bioprint different types of tissues. "It's like with an inkjet printer, where you have different colors," says Sang Jin Lee, a coinvestigator on the project. "Here we have different nozzles and different materials and cells."
The researchers are borrowing from computer microchip and biosensing technology. They will focus on one organ type at a time, beginning with the liver. As each is developed, it will be used to test drug responses individually; once they are completed, they will be connected on the chip to test the full system response.
A small handful of other groups are developing technologies to print tissues, although generally with a focus on individual organs, rather than the full system.
Organovo, a start-up in San Diego, is using bioprinting of tissues to improve research on drugs, with a recent focus on the liver.
"Reliance on animal models and cells in a petri dish [for testing] is problematic, because many diseases can't get good animal models or don't behave similarly in petri dishes," says Organovo CEO Keith Murphy. The company has succeeded in bioprinting liver tissue that lasted 40 days in a dish. Murphy says normally the tissue stops functioning in two days, which is not helpful for testing a drug that is administered for two years.
Organovo is focused on the immediate commercial impact of bioprinting, with testing done on each tissue independently. "We've contemplated putting [the parts] together over time, but you don't need 10 things to study the liver—you need the liver," explains Murphy.
"You can make living structures act like living tissues," he says. "You don't need the full organ to have an impact."
The Advanced Manufacturing Technology Group at the University of Iowa is bioprinting tissue with this idea in mind. Ibrahim Ozbolat, AMTech codirector and assistant professor of mechanical and industrial engineering, is focused on creating tissue that would accompany—not necessarily replace—the pancreas and produce insulin to help patients with diabetes.
"We're not interested in making a full natural pancreas," he says. "We're working on making something that is large enough and produces enough insulin that is transplantable."
These projects are all steps along the path toward bioprinting large organs, although that goal and its clinical application is years in the future.
"[Bioprinting organs] is still several billion dollars away," Murphy says. "If the funding is provided in five years, it could happen quickly. If it takes 20 years, it will be more over that time frame."
The hope is that as the technologies continue to develop, the manufacturing of organs could help solve the problem of rapidly growing transplant wait-lists.
Atala notes that over two sets of 10 years, the number of patients on wait-lists has doubled, while the number of organs transplanted has increased by only 1 percent—a problem the American Hospital Association has declared a public health crisis.
"This is really what drives us to do this," he says. "Everything builds on the next step."
(Image via Flickr user kakissel) | <urn:uuid:01d5137a-823d-4339-83ac-79654f4a5b36> | CC-MAIN-2017-04 | http://www.nextgov.com/health/2013/12/next-frontier-3-d-printing-human-organs/76013/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96123 | 1,249 | 3.71875 | 4 |
In some ways, BGP is nice and simple. For instance, there’s only one BGP: BGP version 4. Many network professionals have been asking whether BGP version 4 will soon be replaced by a version 5, but there is nothing to worry about: today, we still run BGP version 4 as our Exterior Gateway Protocol (EGP), even though today’s BGP4 does all kinds of things that the BGP4 from 20 years ago couldn’t do, such as routing IPv6, multicast and VPNs or using communities. BGP has shown itself to be very extensible, but it’s all still BGP—and a single BGP process at that. But BGP only handles the exterior side of routing—there’s also interior routing.
Interior Gateway Protocols
And there’s a lot more choice in Interior Gateway Protocols (IGPs). An old favorite is RIP, the Routing Information Protocol. Even the improved RIPv2 is too simple for most networks these days, as is RIPng (next generation) for IPv6. As such, Cisco created its own Interior Gateway Routing Protocol (IGRP) and then an enhanced version aptly named Enhanced IGRP (EIGRP). However, the most widely used IGP is OSPF: Open Shortest Path First. OSPF is an “open” implementation of the Shortest Path First or Dijkstra’s algorithm. (The name doesn’t refer to the possible openness of the shortest path.) OSPF version 2 is used for IPv4; OSPF version 3 for IPv6. Networks that run both IP versions and use OSPF as their IGP thus need to run both OSPFv2 and OSPFv3.
Back in the late 1980s when OSPF was developed, the OSI (Open Systems Interconnect) family of networking protocols was still in contention, and a lot of technology was borrowed/stolen by OSI from IP and by IP from OSI. As a result, OSI has the IS-IS routing protocol for OSI CLNP routing, which is very similar to OSPF in many ways (or the other way around). IS-IS stands for Intermediate System to Intermediate System, where “intermediate system” means “router”. IS-IS was later extended to support first IPv4 and then also IPv6 and is mainly used in very large IP networks.
With that introduction out of the way, I want to focus on the most common case: a network running BGP as the EGP and OSPF as the IGP and look at how the routing duties are divided over both protocols and how the two interact.
OSPF being an IGP and BGP being an EGP suggests an obvious division of labor: OSPF handles the internal routing, BGP the routing towards external destinations. However, it’s not that simple. Yes, OSPF is in charge of internal routing. These routes show up as “O” routes in the output of “show ip route” on a Cisco router. If the network is split into multiple OSPF areas—not really necessary these days unless you have hundreds of routers—you may also see inter-area “O IA” routes.
O and O IA routes are only the address blocks that are used on router interfaces that actually run OSPF. That doesn’t include interfaces towards servers, PCs and other end-user devices, or, in the case of ISPs, customers. To make those addresses show up in OSPF, we need to redistribute connected subnets and/or redistribute static routes:
router ospf 1
redistribute connected subnets
redistribute static subnets
The 1 in “router ospf 1” is the OSPF process or instance number. It’s possible to run multiple OSPF instances on the same router—which of course requires careful planning to keep everything straight. If redistribution of all connected and/or static routes in OSPF is more than you need, you can add “route-map <route-map-name>” and then use the indicated route map to filter out the unwanted routes to keep them from being redistributed.
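For example, a route map that keeps one particular static prefix out of OSPF might look like the sketch below. The route-map and prefix-list names and the 10.99.0.0/16 prefix are illustrative, not from the original:

```
ip prefix-list NO-REDIST seq 5 permit 10.99.0.0/16
!
route-map FILTER-STATICS deny 10
 match ip address prefix-list NO-REDIST
route-map FILTER-STATICS permit 20
!
router ospf 1
 redistribute static subnets route-map FILTER-STATICS
```

Remember that a route map ends with an implicit deny, so the final “permit 20” statement is what lets all the other static routes through.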
By default, redistributed routes are made external type 2, and show up as “O E2”. It’s also possible to redistribute as external type 1 (with “metric-type 1”). The difference is that with O E1 routes the OSPF cost of the route includes the cost of the links to reach the external route, while with O E2 routes the cost of the internal links is ignored.
Obviously, BGP handles the routes towards external networks that are BGP-capable. However, it would be a bit embarrassing if all the BGP routers in the network would be telling their external neighbors completely different things. So all the BGP routers in the network need to talk to each other in order to tell a consistent story to external networks. This is what internal BGP (iBGP) is for. The “regular” BGP is thus external BGP (eBGP). When I say all the BGP routers, I really do mean all of them: if your network has 100 BGP routers, each of those needs to maintain iBGP sessions with the other 99. Well, unless you use route reflectors, but that’s a story for another day.
If you’re used to eBGP, iBGP requires some getting used to. Unlike eBGP, iBGP is just fine working over many hops. However, this adds a complication. Consider the following network:
Suppose the iBGP session from router A to router D gets set up towards router D’s address on the link between B and D. Then, when the link between routers B and D goes down, D’s address on the interface that connects to that link goes down, and with it, the iBGP session configured towards that interface’s address. So rather than configure iBGP sessions towards interface addresses, we set up loopback interfaces for this. Unlike the loopback interface on a server or other host, which always uses address 127.0.0.1, routers use “real” addresses on their loopback interfaces, which works like this:
interface loopback0
ip address 192.0.2.65 255.255.255.255
router bgp 9000
neighbor 192.0.2.67 remote-as 9000
neighbor 192.0.2.67 update-source loopback0
Unlike other interfaces, loopback interfaces can have a /32 prefix length, so they only use up a single address. The remote AS for neighbor 192.0.2.67 is the same as the local AS (9000), making this BGP session an iBGP session. The “update-source loopback0” line makes sure that the source address in outgoing BGP packets is the IP address configured on interface loopback0, so it matches the address the remote router is looking for. If now one path between the two iBGP routers goes down, the iBGP packets can be rerouted over another path and there’s no impact to BGP. Note that for this to work, the loopback interface addresses need to be present in the IGP—typically, connected routes will be redistributed to make this happen.
Also unlike eBGP, iBGP doesn’t update the AS path or the next hop address. This means that the next hop address in iBGP updates still points towards the IP address of the router in the neighboring network that the route was learned from. This address will reside in a point-to-point subnet between your eBGP router and the BGP router of the neighboring network. Your eBGP router will know this address because it’s present on a directly connected interface, but without further action, the rest of your routers won’t know this address, so the next hop address for iBGP routes won’t resolve and the iBGP routes can’t be used. Again, redistributing connected networks into OSPF (or your IGP of choice) solves this. Alternatively, you can configure “next-hop-self” on your iBGP sessions and the router will replace the next hop address in iBGP updates with its own address.
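Continuing the earlier iBGP example, next-hop-self is just one more line on the session (addresses and AS number reused from above):

```
router bgp 9000
neighbor 192.0.2.67 remote-as 9000
neighbor 192.0.2.67 update-source loopback0
neighbor 192.0.2.67 next-hop-self
```

With this in place, the next hop for routes advertised to this iBGP neighbor resolves via the loopback address that is already in the IGP, so there is no need to redistribute the external point-to-point subnets.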
It’s also possible to redistribute routes into BGP. For instance, in a large ISP network you may redistribute connected and static routes in BGP rather than in IS-IS, because this will keep IS-IS lean and mean. The extra BGP routes are relatively inconsequential, and a nice benefit is that if they’re rerouted internally, this doesn’t trigger any BGP updates. Rather, the next hop addresses are resolved differently after an IS-IS change, which each router can do independently. Of course doing this requires good filters that make sure that the large numbers of small prefixes used by customers don’t leak into the global BGP table. A good way to accomplish such filtering is by adding a community to the routes that may be propagated in eBGP and then filter based on that community.
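A sketch of that community-based approach is below; the community value 9000:100 and all the names are illustrative, only the AS number is taken from the earlier example:

```
ip community-list standard ANNOUNCE permit 9000:100
!
route-map TAG-ANNOUNCE permit 10
 set community 9000:100
!
route-map EBGP-OUT permit 10
 match community ANNOUNCE
```

TAG-ANNOUNCE would be applied where routes enter BGP (for instance on the redistribute statement), and EBGP-OUT outbound on each eBGP session, so only routes carrying the community are propagated to external neighbors.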
Back in the day, it wasn’t uncommon to redistribute all BGP routes into OSPF. With half a million prefixes in BGP, this practice has become less common. If you really want to live on the edge, you can redistribute BGP into OSPF and OSPF into BGP. Then, if your filters aren’t perfect, routes may round-trip between BGP and OSPF, with the result that the AS path gets removed. So now you’re advertising over BGP a whole bunch of routes that aren’t yours, but with a one-hop AS path, so a lot of your neighbors will start sending you traffic for these routes.
Last but not least, what happens when the same route is present in both BGP and OSPF? Obviously it’s hard to compare a BGP local preference to an OSPF metric. So what a (Cisco) router does is assign an “administrative distance” to each routing protocol. The route with the lowest distance then wins. OSPF has a distance of 110. BGP routes have a distance of 20 (better than OSPF and other IGPs) when they’re learned over eBGP and 200 (worse than OSPF and other IGPs) when they’re learned over iBGP. Static routes have a distance of 1 by default, but you can change this by including a distance value at the end of the “ip route …” command. A distance value of 250 will keep them out of the way of routing protocols. The administrative distance is the first number between square brackets in “show ip route” output. | <urn:uuid:8e377b40-8085-43ec-9d3e-ce4173d6fe22> | CC-MAIN-2017-04 | https://www.noction.com/blog/bgp_and_ospf_interaction | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92975 | 2,352 | 2.8125 | 3 |
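For instance, a static route of last resort can be pushed behind all the routing protocols this way (the next-hop address is illustrative):

```
ip route 0.0.0.0 0.0.0.0 192.0.2.1 250
```

With a distance of 250, this default route only makes it into the routing table when neither BGP nor OSPF supplies a matching route.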
The Internet Archive Wayback Machine, an indispensable chronicler of the Web for going on two decades now, late last week announced a major milestone.
From an Internet Archive blog post:
The Wayback Machine, a digital archive of the World Wide Web, has reached a landmark with 400 billion webpages indexed. This makes it possible to surf the web as it looked anytime from late 1996 up until a few hours ago.
The post lists a number of historical highlights, including:
- 2001 - The Wayback Machine is launched. Woo hoo.
- 2006 - Archive-It is launched, allowing libraries that subscribe to the service to create curated collections of valuable web content.
- March 25, 2009 - The Internet Archive and Sun Microsystems launch a new datacenter that stores the whole web archive and serves the Wayback Machine. This 3 Petabyte data center handled 500 requests per second from its home in a shipping container.
- October 26, 2012 - Internet Archive makes 80 terabytes of archived web crawl data from 2011 available for researchers, to explore how others might be able to interact with or learn from this content.
- October 2013 - New features for the Wayback Machine are launched, including the ability to see newly crawled content an hour after we get it, a "Save Page" feature so that anyone can archive a page on demand, and an effort to fix broken links on the web starting with WordPress.com and Wikipedia.org.
Not included in the timeline was mention of a fire on Nov. 6 of last year that caused more than $600,000 in damage to digitization equipment at the Internet Archive's scanning center in San Francisco.
The Wayback Machine has proven useful to me on a number of occasions, most memorably in assembling this collection of online news site images from Sept. 11, 2001, forever known as 9/11.
Four hundred billion is a lot of pages. In fact, the archive now serves up about 100 billion more pages than McDonald's has served hamburgers.
Denial of Service (DoS)
Denial of Service (DoS) is an attack in which a single computer and Internet connection floods a targeted system or resource. When an army of remotely controlled computers overwhelms all your resources instead, that's called a Distributed Denial of Service (DDoS) attack. Both types of attack attempt to prevent internal employees and customers from accessing an organization's web-based services by either flooding the servers or crashing them.
There are two main types of DoS attacks: attacks designed to exhaust application or server resources and attacks that simply flood services. Application attacks exploit application vulnerabilities or business logic to bring down a service. Examples of application DoS attacks include Slowloris, which initiates many connections to a targeted server and never closes the connections, and buffer overflow attacks. Network attacks like SYN flood, ICMP flood, and UDP flood attacks are almost always performed by multiple attack sources as part of a DDoS attack.
DDoS attacks are increasing in both frequency and scale and have left some of the world's largest data center and network operators dealing with their costly aftermath. Virtually every commercial and government organization relies on the availability of its online services, and that availability is at risk from the rising tide of DDoS attacks. If you are concerned about the possibility of major service outages due to DDoS attacks, make sure that your networking vendor can scale to mitigate the largest multi-vector attacks at your network's edge.
Inter-Process Communication (IPC)
Inter-Process Communication (IPC) is a messaging sub-system that enables data exchange between processes running on one or more processors. Because IPC creates additional CPU overhead, it can result in delayed response times, slower software performance, and inaccurate real-time data on each CPU.
A10 Networks' ACOS operating system has been designed to avoid Inter-Process Communication. The operating system was designed from inception for multi-core CPUs, thus lifting single-CPU and single-core restrictions and making each processor truly independent.
Shared memory ensures that information is instantly accessible, enabling accurate, real-time, single-sourced data for global decision making criteria. | <urn:uuid:1b0bf5d6-27e4-4b1f-8552-a65d355d8da8> | CC-MAIN-2017-04 | https://www.a10networks.com/resources/glossary/inter-process-communication-ipc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00565-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.89686 | 137 | 2.796875 | 3 |
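ACOS itself is proprietary, but the general idea, letting processes read and write the same memory instead of copying data through messages, can be illustrated with Python's `multiprocessing.shared_memory` (Python 3.8+). This is a toy sketch, not A10 code:

```python
from multiprocessing import Process
from multiprocessing import shared_memory

def worker(name):
    # Attach to the existing segment by name: no message passing, no copying.
    # The worker reads and writes the same physical memory as the parent.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] += 1  # increment the shared counter in place
    shm.close()

def run_demo():
    shm = shared_memory.SharedMemory(create=True, size=1)
    try:
        shm.buf[0] = 41
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        return shm.buf[0]  # the child's write is visible immediately
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    print(run_demo())  # 42
```

Because both processes see one copy of the data, there is no serialization or queueing step whose latency could make each CPU's view of the data go stale.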
1. Don’t call it ‘the’ blockchain
The first thing to know about blockchain is, there isn’t one: there are many.
Blockchains are distributed, tamper-proof public ledgers of transactions. The most well-known is the record of bitcoin transactions. However, in addition to tracking cryptocurrencies, blockchains are also being used to record loans, stock transfers, contracts, healthcare data and even votes.
2. Security, transparency: the network’s run by us
There’s no central authority in a blockchain system: Participating computers exchange transactions for inclusion in the ledger they share over a peer-to-peer network. Each node in the chain keeps a copy of the ledger, and can trust others’ copies of it because of the way they are signed. Periodically, they wrap up the latest transactions in a new block of data to be added to the chain. Alongside the transaction data, each block contains a computational ‘hash’ of itself and of the previous block in the chain.
Hashes, or digests, are short digital representations of larger chunks of data.
Modifying or faking a transaction in an earlier block would change its hash, requiring that the hashes embedded in it and in all subsequent blocks be recalculated to hide the change. That would be extremely difficult to do faster than honest participants add new, legitimate blocks (each referencing the previous hash) to the end of the chain.
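A minimal hash chain in Python illustrates this tamper evidence. This is a sketch only; real blockchains add proof-of-work, digital signatures and Merkle trees on top:

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Hash a block's contents together with the previous block's hash."""
    data = prev_hash + "|" + "|".join(transactions)
    return hashlib.sha256(data.encode()).hexdigest()

# Build a three-block chain from a genesis placeholder.
chain = []
prev = "0" * 64
for txs in (["alice->bob:5"], ["bob->carol:2"], ["carol->dave:1"]):
    prev = block_hash(prev, txs)
    chain.append({"transactions": txs, "hash": prev})

def verify(chain):
    """Recompute every hash; any edit to history breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["transactions"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))  # True
chain[0]["transactions"] = ["alice->bob:500"]  # tamper with an early block
print(verify(chain))  # False: the stored hashes no longer match
```

Rewriting the tampered block's hash wouldn't help either; that would invalidate the hash embedded in every later block, which is exactly why forging history requires redoing the whole chain.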
3. Big business is taking an interest in blockchain technology
Blockchain technology was originally something talked about by anti-establishment figures seeking independence from central control, but it’s fast becoming part of the establishment. Companies such as IBM and Microsoft are selling it, and major banks and stock exchanges are buying.
4. No third-party in between
Because the computers making up a blockchain system contribute to the content of the ledger and guarantee its integrity, there is no need for a middleman or trusted third-party agency to maintain the database. That's one of the things attracting banks and trading exchanges to the technology, but it's also proving a stumbling block for bitcoin as traffic scales. The total computing power devoted to processing bitcoin is said to exceed that of the world's 500 fastest supercomputers combined, yet the volume of bitcoin transactions has recently grown so much that the network was taking up to 30 minutes to confirm that some of them had been included in the ledger. By contrast, it typically takes only a few seconds to confirm credit card transactions, which do rely on a central authority between payer and payee.
5. Programmable money
One of the more interesting uses for blockchain is storing a record not of what happened in the past, but of what should happen in the future. Organisations are using blockchain technology to store and process 'smart contracts,' which are executed by the network of computers participating in the blockchain, on a pay-as-you-go basis.
The technology allows them to respond to transactions by gathering, storing or transmitting information, or by transferring whatever digital currency the blockchain deals in. The immutability of the contracts depends on the blockchain in which they are stored.
Health-watchers – always on the lookout for more powerful antioxidants – are getting a little help from supercomputing.
[Figure: Computer representation of carnosine. Source: ARC Centre of Excellence for Free Radical Chemistry and Biotechnology]
Scientists from the ARC Centre of Excellence for Free Radical Chemistry and Biotechnology at the University of Sydney leveraged techniques from quantum chemistry and supercomputing to custom design molecules with improved antioxidant ability.
Lead researchers Professor Leo Radom from the University’s School of Chemistry and Dr. Amir Karton from the University of Western Australia believe that these novel compounds hold the key to slowing the progression of age-related diseases, such as heart disease, cancer, diabetes, and Alzheimer’s disease.
“While most people consume wine, berries and chocolate for an antioxidant boost, we turned on our computers! We were able to use supercomputers to improve the power of natural antioxidants and this may provide future benefit to the health industry,” said Dr. Karton.
Chemically-speaking, an antioxidant is a molecule that inhibits the oxidation of other molecules. In the common vernacular, antioxidants help protect the body from disease-causing free-radicals. They are widely used as dietary supplements and hailed for their disease-fighting potential.
If asked to name some common antioxidants, most people could probably rattle off vitamins A, E and C with nary a thought. They might even cite the protective power of resveratrol or CoQ10, but this research out of Australia centers on carnosine, a specific type of antioxidant found in meat, fish and eggs.
Working with Michael Davies and David Pattison from the Heart Research Institute, the team investigated the ability of carnosine to scavenge the oxidant, hypochlorous acid. Normal amounts of hypochlorous acid work with the body’s immune system to fight off invading pathogens, but too much of the substance is associated with the development of heart disease.
“The supercomputer modelling allows us to probe deeply into the molecular structure and helps us to understand just why carnosine is such an effective antioxidant. Armed with this understanding, we are then able to design even better antioxidants,” said Professor Radom.
The work appears in the Journal of the American Chemical Society as well as the current edition of Nature Chemistry.
“Although we can’t yet claim to have uncovered the fountain of eternal youth, our findings are one more step towards better treatments for ageing-related disease, which we hope will improve longevity and the quality of life in the future,” said Dr. Karton. | <urn:uuid:f824eae8-4d25-4d43-a126-fba638c5e42e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/02/19/computer_modeling_supports_antioxidant_breakthrough/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923354 | 540 | 3.40625 | 3 |
Teachers in Pennsylvania are being encouraged to use increasing amounts of technology in the classroom and it’s easy to see why. Lessons can be made much more interactive through rich content such as videos, the volume of information available to support learning is vast, and students are so familiar with technology from their home lives that it’s much more in sync with what they’re used to.
The downside to this trend is the increased risks that students face online by accessing inappropriate or potentially harmful content.
The law requires school boards and publicly-funded libraries to adopt and enforce acceptable use policies for internet access, which include both software and servers that block unsuitable content.
Many school districts in Pennsylvania have such policies in place, as well as providing links to national campaigns, and teaching internet safety in classrooms.
In addition to these policies, Pennsylvania also has a number of state-wide initiatives to help deliver its internet safety curriculum. The Center for Safe Schools provides an on-site program, delivered by law enforcement officers, to educate parents, teachers and students about the hidden dangers of the internet.
The program, called Protecting Kids Online, is free of charge for schools and community-based organizations through a grant-funded partnership.
Network monitoring software provides districts with an easy way to monitor and manage internet safety concerns on the school network, and increasing numbers of districts are adopting it.
Live monitoring of networked devices, combined with full logging and screenshot evidence, give educators all the information they need to proactively identify areas for concern. Logs can be used to help deal with incidents and to inform future lesson planning around internet safety, ensuring students are taught why something poses a risk, and how to avoid it in future.
Kaspersky Lab researchers have recently analysed a piece of malware that works well on all three of the most popular computer operating systems – the only thing that it needs to compromise targeted computers is for them to run a flawed version of Java.
The Trojan is written wholly in Java, and exploits an unspecified vulnerability (CVE-2013-2465) in the JRE component in Oracle Java SE 7 Update 21 and earlier, 6 Update 45 and earlier, and 5.0 Update 45 and earlier.
Once the malware is launched, it copies itself into the user’s home directory and sets itself to run every time the system is booted. It then contacts the botmasters’ IRC server via the IRC protocol, and identifies itself via a unique identifier it generated.
The malware's main purpose is to make the infected machine flood specified IP addresses with requests when ordered to do so via a predefined IRC channel. The botmasters simply have to define the address of the computer to be attacked, the port number, the duration of the attack, and the number of threads to be used in it.
At the time of analysis, the botnet formed by machines “zombified” by this particular Trojan was targeting a bulk email service. | <urn:uuid:85c3f84a-1d80-4e28-9d15-0c0ec061ec14> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/01/29/java-based-malware-hits-windows-mac-and-linux/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00254-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911693 | 254 | 2.8125 | 3 |
D-Learning, E-Learning, M-Learning and Beyond
Instructors frequently are called upon to teach people who are scattered across a large geographic area. While it's sometimes necessary to bring those individuals together in one location, their learning needs often can be met remotely.
Without sacrificing the quality education learners expect from a classroom-based program, trainers can use d-learning, e-learning and m-learning strategies to develop a comprehensive curriculum that will meet the diverse needs of their widespread audiences.
D-learning, or distance learning, is as old as written correspondence itself, but this educational technique didn’t catch on in academia until the 1970s with the foundation of several “open universities.” These programs allow students to correspond with an instructor at their own pace, as their schedule allows.
As the Internet gained popularity in the late '80s and early '90s, e-learning, or electronic learning, became the remote education tool of choice. This method offers students the chance to meld text, audio and video with personal interaction with their instructors and peers.
As technology continues to advance, m-learning, or mobile learning, is allowing learners to access training courses and educational tools on their handheld devices wherever and whenever they want or need to learn.
It’s important for instructors to use a combination of these training strategies to meet the specific learning needs of their audience, said Dr. David Metcalf, a researcher at the Institute for Simulation and Training at the University of Central Florida and author of “M-Learning: Mobile E-Learning.”
By drawing on the best these methods have to offer, trainers can create a high-quality, customized program that will give their trainees the tools they need to succeed. As with any tool, each of these methods has its strengths and weaknesses.
Metcalf said d-learning works best for larger, more comprehensive programs that generally end in a certification or degree, whereas smaller, self-contained courses are better offered on an e-learning system. This allows instructors to easily track and analyze their students’ progress.
M-learning, on the other hand, can provide learners with relevant information when it’s needed most, supporting the high performance cultivated by other in-depth training techniques. By allowing users to access the information they need with a few simple keystrokes, m-learning techniques can enhance performance while workers are on the job, Metcalf said.
“If I were to look at where mobile learning fits in best, it’s not trying to replace a classroom or an online course module — it’s trying to enhance those in a way that makes sense for how you’re trying to get your work accomplished,” Metcalf said. “Using m-learning for reinforcement, reminders of things someone might have learned in a course but didn’t retain, is probably the best way to go.”
Technological advancements, however, will expand the scope of m-learning techniques in the years to come, he said. About 2 billion people already own cell phones, and as the quality of these phones improves, educational tools such as “point-and-shoot learning” and “augmented reality” programs will become fun, feasible options from which trainers can choose.
These techniques use the camera and global-positioning system (GPS) features that are available on higher-end cell phones to allow users to learn while interacting with their environment. Using the “point-and-shoot” method, learners receive personalized information about the images and text found in pictures they take with their camera phone. “Augmented reality” games integrate activities such as mobile scavenger hunts with a phone’s GPS capabilities to make the lessons more memorable by adding a physical component.
“The point-and-shoot model for learning is something that we’re really seeing people grasp onto because they don’t want to have to type a lot of information into a tiny little screen,” Metcalf said. “They want to get what they need in an instant.”
Yet, even with these new techno-friendly techniques, trainers will have to make sure their learning programs have a mixture of methods that will work best for their audience, he said — no new gadget instantly will make it easy to develop a remote-access education program.
“If you look at the instructional models that fit, you’re going to be looking at ways to deliver in those multiple formats and to | <urn:uuid:37e61edf-f26a-4ab9-924c-a8f07ef58b3d> | CC-MAIN-2017-04 | http://certmag.com/d-learning-e-learning-m-learning-and-beyond/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00310-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956765 | 946 | 3.171875 | 3 |
Canada has a series of large hydrocarbon basins with thick, organic-rich shales. Western Canada also contains the prolific and extensive Montney and Doig resource plays categorized primarily as tight sand and siltstone reservoirs. The risked shale gas in place for Canada is estimated at 2,413 trillion cubic feet, with 573 trillion cubic feet as risked, technically recoverable shale gas resource. In addition, estimated risked shale oil in place for Canada is about 162 billion barrels, with 8.8 billion barrels as risked, technically recoverable shale oil resource.
As new drilling occurs and more detail is obtained on these large, emerging shale plays, the estimates of the size of their in-place resources and their recoverability will change. The oil and gas industry in Canada was founded upon production of oil and natural gas from conventional reservoirs, because they are the easiest to get out of the ground. However, conventional reservoirs are the smallest portion of Canada's total oil and gas resources.
Ironically, what are referred to as unconventional reservoirs contain a far greater proportion of Canada's hydrocarbon resources. The terms conventional and unconventional actually refer to reservoir rock quality, not to the oil and gas themselves, which cannot be distinguished. Industry needs to develop unconventional resources, such as tight oil, tight gas and shale gas, in order to have a continued supply of oil and gas now and into the future. Unconventional gas already accounts for more than 25% of the Canadian natural gas supply. Most of the activity directed at new production in Canada is unconventional, so it is fair to say that unconventional has become conventional.
Currently, between Alberta and British Columbia over 175,000 wells have been stimulated using hydraulic fracturing. Canadian regulators and the natural gas industry are focused on the protection of surface and ground water and the mitigation of risk. All Canadian jurisdictions regulate the interface between water and the natural gas industry, and the application of evolving hydraulic fracturing techniques for unconventional gas development is no exception.
This report discusses about North American shale gas plays by giving the details of geological setting, resource estimate, reservoir properties, companies operating in that shale area, current activity, key company information and competitive landscape. | <urn:uuid:969be66c-7aed-4a3f-a491-7ff5449b930d> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/canada-shale-business-overview-shale-properties-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00484-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960024 | 429 | 2.859375 | 3 |
Clouds in the ocean? Microsoft is testing the feasibility of locating cloud data centers on the ocean floor.
Water and IT equipment generally don't mix. Nonetheless, that's not preventing Microsoft from floating a cloud data center in the unlikeliest of places: the bottom of the Pacific Ocean.
Today, the Redmond, Wash., software giant took the wraps off Project Natick, an experiment in establishing an energy-efficient, low-latency cloud computing facility on the ocean floor. Last summer, Microsoft dropped a more than 38,000-pound, 10-foot by 7-foot container more than half a mile from the Pacific coast.
Housing a server rack with the processing power of approximately 300 home PCs, the Project Natick vessel stayed underwater for three months. "Once the vessel was submerged last August, the researchers monitored the container from their offices in Building 99," blogged Microsoft spokesperson Athima Chansanchai.
Nestled in the company's Redmond, Wash., headquarters, Building 99 is home to Microsoft Research. "Using cameras and other sensors, they recorded data like temperature, humidity, the amount of power being used for the system, even the speed of the current," continued Chansanchai.
While they evoke a floaty image, cloud data centers are typically situated on land. The tactic of placing servers under the water but near the shoreline could help cloud providers solve some of the challenges of keeping up with explosive demand for their services.
"That's one of the big advantages of the underwater data center scheme—reducing latency by closing the distance to populations and thereby speeding data transmission. Half the world's population, Cutler says, lives within 120 miles of the sea, which makes it an appealing option," wrote Chansanchai, citing a statistic provided by Microsoft Research's Ben Cutler, the project manager behind the experiment.
Another benefit to underwater data centers is reduced cooling costs.
"Cooling is an important aspect of data centers, which normally run up substantial costs operating chiller plants and the like to keep the computers inside from overheating. The cold environment of the deep seas automatically makes data centers less costly and more energy-efficient," Chansanchai stated. Microsoft is also exploring the possibility of using wave or tidal energy, raising the possibility of cloud services powered by renewable energy.
Project Natick may also have an effect on data center planning. "This project also shows it's possible to deploy data centers faster, turning it from a construction project—which requires permits and other time-consuming aspects—to a manufacturing one," said Chansanchai.
In a YouTube video from Microsoft, Cutler said the project's goal is to "deploy data centers at scale, anywhere in the world, from decision to power-on within 90 days."
Suggesting that Microsoft is pleased with results of the pilot, the company is exploring an expansion of the project.
"The team is currently planning the project's next phase, which could include a vessel four times the size of the current container with as much as 20 times the compute power. The team is also evaluating test sites for the vessel, which could be in the water for at least a year, deployed with a renewable ocean energy source," Chansanchai wrote. | <urn:uuid:f8ce9fd1-5249-4878-8969-10f5952ca525> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/microsofts-project-natick-takes-cloud-computing-underwater.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00392-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947725 | 665 | 3.359375 | 3 |
Dog Genome Successfully Sequenced
By Stacy Lawrence | Posted 12-09-2005
Published in the Dec. 8 issue of the scientific journal Nature, the dog research sheds light on both the genetic similarities to humans and the genetic differences between dog breeds. Humans share more of their ancestral DNA with dogs than with mice, confirming the utility of dog genetics for understanding human disease.
Examining data across breeds helps to illuminate the structure of genetic variation among breeds, which can now be used to understand the basis of physical and behavioral differences, as well as the genetic underpinnings of diseases common to domestic dogs and their human companions.
"Of the more than 5,500 mammals living today, dogs are arguably the most remarkable," said senior author Eric Lander, director of the Broad Institute, professor of biology at MIT and systems biology at Harvard Medical School, and a member of the Whitehead Institute for Biomedical Research.
"The incredible physical and behavioral diversity of dogs, from Chihuahuas to Great Danes, is encoded in their genomes. It can uniquely help us understand embryonic development, neurobiology, human disease and the basis of evolution."
Dogs were domesticated from gray wolves as long as 100,000 years ago, but selective breeding over the past few centuries has made modern dog breeds a testament to biological diversity. More than two years ago, researchers embarked on a two-part project to assemble a complete map of the dog genome.
First, they acquired high-quality DNA sequence from a female boxer named "Tasha," covering nearly 99 percent of the dog's genome. Using this information as a genetic 'compass,' they then sampled the genomes of 10 different dog breeds and other related canine species, including the gray wolf and coyote.
By comparing these dogs, they pinpointed about 2.5 million individual genetic differences among breeds, called SNPs (single nucleotide polymorphisms), which serve as recognizable signposts that can be used to locate the genetic contributions to physical and behavioral traits, as well as disease.
Finally, the scientists used the SNP map to reconstruct how intense dog breeding has shaped the genome. They discovered that selective breeding carried large genomic regions of several million bases of DNA into breeds, creating 'haplotype blocks' that are about 100 times larger than those seen in the human population.
Sequencing of the dog genome began in June 2003, funded in large part by the National Human Genome Research Institute (NHGRI). The Broad Institute is part of NHGRI's Large-Scale Sequencing Research Network. NHGRI is one of 27 institutes and centers at the National Institutes of Health, an agency of the Department of Health and Human Services. All of the data can be accessed through public databases.
The Broad Institute's 60 core and associate researchers target fundamental issues, including chemical biology, genome sequencing and analysis, medical and population genetics, cancer, and infectious disease.
One of its primary benefactors, the Broad Foundation, recently announced it would double the initial donation from $100 million to an unprecedented $200 million. This is the largest gift of its kind to two universities for a joint research endeavor.
Commenting on the work fostered by the atmosphere at the Institute, benefactor Eli Broad said, "This unprecedented partnership between MIT and Harvard University, together with the Harvard Hospitals and the Whitehead Institute, has resulted in a research community that is tackling some of the most important questions in biomedicine."
Everyday Science Quiz Questions & Answers - Part 4
76. Question: Why is a new quilt warmer than an old one?
Answer: In a new quilt the cotton is not compressed, and as such it encloses more air, which is a bad conductor of heat. Therefore, it does not allow heat to pass through.
77. Question: Curved rail tracks or curved roads are banked or raised on one side. Why?
Answer: Because a fast moving train or vehicle leans inwards while taking a turn, and the banked or raised track provides the required centripetal force to enable it to move round the curve.
78. Question: How do bats fly in dark?
Answer: When bats fly they produce ultrasonic sound waves which are reflected back to them from the obstacles in their way and hence they can fly without difficulty.
79. Question: Water pipes often burst at hill stations on cold frosty nights. Why?
Answer: The temperature may fall below 0°C during cold frosty nights, which converts the water inside the pipes into ice, resulting in an increase in volume. This exerts great force on the pipes and, as a result, they burst.
80. Question: Why are white clothes more comfortable in summer than dark or black ones?
Answer: White clothes are good reflectors and poor absorbers of heat, whereas dark or black clothes are good absorbers of heat. Therefore, white clothes are more comfortable because they do not absorb heat from the sun's rays.
81. Question: Why does a rose appear red and grass green in daylight?
Answer: A rose absorbs all the constituent colors of white light except red, which is reflected to us. Similarly, grass absorbs all colors except green, which is reflected to us.
82. Question: Why does a ship rise as it enters the sea from a river?
Answer: The density of sea water is higher than that of river water due to impurities and salts. As a result, the upthrust produced by the sea water on the ship is more than that of river water.
83. Question: Why are fuses provided in electric installations?
Answer: A safety fuse is made of a wire of metal having a very low melting point. When excess current flows in, the wire gets heated, melts and breaks the circuit. By breaking the circuit it saves electric equipment or installations from damage by excessive flow of current.
84. Question: Why is it easier to lift a heavy object under water than in air?
Answer: Because when a body is immersed in water, it experiences an upward thrust (Archimedes' Principle) and loses weight equal to the weight of the water displaced by its immersed portion; hence it is easier to lift objects under water.
85. Question: If a highly pumped up bicycle tyre is left in the hot sunlight, it bursts. Why?
Answer: The air inside the tube increases in volume when heated up. As sufficient space for the expansion of the air is not available because the tube is already highly pumped, it may result in bursting of the tyre.
86. Question: What will be the color of grass in blue light?
Answer: Grass will appear dark in color because it absorbs all other colors of the light except its own green color. The blue light falling on grass will be absorbed by it, and hence, it will appear dark in color.
87. Question: Why do two eyes give better vision than one?
Answer: Because two eyes do not form exactly similar images, and the fusion of these two dissimilar images in the brain gives three-dimensional or stereoscopic vision.
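The buoyancy reasoning in question 84 can be checked with a quick calculation. The numbers here (a 20 kg object of density 2000 kg/m^3 fully immersed in fresh water) are illustrative assumptions, not taken from the quiz.

```python
# Apparent weight of a fully immersed object (Archimedes' principle).
# Illustrative numbers: a 20 kg object of density 2000 kg/m^3 in fresh water.
RHO_WATER = 1000.0  # kg/m^3
G = 9.8             # m/s^2

def apparent_weight(mass_kg, density_kg_m3, fluid_density=RHO_WATER):
    """True weight minus the buoyant force from the displaced water."""
    volume = mass_kg / density_kg_m3            # m^3 of water displaced
    buoyant_force = fluid_density * volume * G  # upward thrust
    return mass_kg * G - buoyant_force

weight_in_air = 20 * G
weight_in_water = apparent_weight(20, 2000.0)
print(round(weight_in_air, 3), round(weight_in_water, 3))  # 196.0 98.0
```

Because the object is twice as dense as water, the upthrust cancels exactly half of its weight, which is why it feels so much lighter to lift.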
By Cindy Kvek
There are two basic types of addresses used in Local Area Networking (LAN): logical addresses and physical addresses. Understanding the differences between the two addresses begins with developing an understanding of the following terms.
1. Address Resolution: A support function to assist the network devices to match the correct physical address to a known logical address. The physical address is the final piece of information required to complete the frame to be sent. AppleTalk uses AARP, and TCP/IP uses address resolution protocol (ARP). IPX does not use address resolution.
2. BOOTP: A configuration service used to provide an IP logical address and configuration information to a device over the network. This configuration is static and must be manually entered into the BOOTP server, using the physical address as identification of each device. The server will then send the BOOTP configuration as the device requests it, using the physical address as unique identification.
3. DHCP: A configuration service used to provide an IP logical address and configuration information to a device over the network. The configuration is leased for a specified period of time and, based on server configuration, may require renewal. DHCP and BOOTP services may be provided from the same server.
4. Frame: The combination of Layer 2 headers used to carry the packet from the original source logical address to the final destination. The Layer 2 physical addresses are used to identify the next network device the packet must be passed to toward the destination logical address.
5. IPX: Internetwork Packet Exchange/Sequenced Packet Exchange, usually identified as IPX/SPX. A Layer 3 protocol identified by Novell, Inc., to network Novell servers and clients. This protocol may still be in use, but Novell has updated their servers to support TCP/IP also.
6. LAN: Local Area Network. A LAN is a network and may refer to the entire network or a logical segment of the network. A LAN is the intranet of a business or home network local to the connected network devices.
7. Layer: Used with a number, it identifies a reference layer of the Open Systems Interconnection (OSI) model. This model uses Layers 1 through 7 to identify the network functionality for hardware and software, including physical and logical addressing. Layer 2 (the data link layer) identifies the functions of physical addressing, and Layer 3 (the network layer) identifies the functions of logical addressing.
8. Name Server: A service configured to support the matching of known names to logical addresses. This is a support function within a network to assist the devices to get the correct logical address to add to a message.
9. Protocol: A set of rules. Each different protocol uses a different set of rules.
10. Packet: The combination of the headers used to carry the payload or message from the original source to the final destination. A packet must be enveloped in a frame to help the physical network pass the packet.
11. Routed protocol: The set of rules of the Layer 3 protocol used to create the packet inside the frame that carries the message.
12. Routing protocol: The set of rules of the router configuration used to choose the best path toward the destination network logical address. The routing protocol creates a routing table within the router to be used for path determination.
13. Routing Table: Each router, following the rules of the routing protocol, builds a table of logical network and area addresses. This table is used to help the router choose the best forwarding path toward the final destination device, based on the best match to the destination logical address identified in the Layer 3 header of the packet.
14. Subnet: A TCP/IP logical group of devices connected to the same network and divided by a router from other logical groups within the network. A broadcast domain is an identified function of a subnet.
15. TCP/IP: Transmission Control Protocol/Internet Protocol is a Layer 3 protocol used on the Internet and most other networks in the world today. There are two versions, IPv4 and IPv6. Each version differs at Layer 3. The need for additional IP addresses prompted the creation of IPv6. | <urn:uuid:c92a8fec-71e2-43d0-aa38-d5f084bf52e6> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/content/articles/top-15-networking-terms-you-should-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.879316 | 858 | 3.984375 | 4 |
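The frame described in item 4 can be sketched in a few lines: the Layer 2 header is just the destination and source physical (MAC) addresses followed by a type field, placed in front of the packet. The MAC values below are invented for illustration.

```python
import struct

def build_ethernet_header(dst_mac, src_mac, ethertype=0x0800):
    """Pack destination MAC, source MAC, and EtherType (0x0800 = IPv4) into 14 bytes."""
    to_bytes = lambda mac: bytes(int(part, 16) for part in mac.split(":"))
    return struct.pack("!6s6sH", to_bytes(dst_mac), to_bytes(src_mac), ethertype)

def parse_ethernet_header(frame):
    """Recover the two physical addresses and the type field from a frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return fmt(dst), fmt(src), ethertype

# A broadcast destination with an illustrative source address.
header = build_ethernet_header("ff:ff:ff:ff:ff:ff", "00:1a:2b:3c:4d:5e")
print(parse_ethernet_header(header))  # ('ff:ff:ff:ff:ff:ff', '00:1a:2b:3c:4d:5e', 2048)
```

The all-ones destination is the broadcast physical address; a real frame would carry the Layer 3 packet immediately after these 14 bytes.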
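The "best match" lookup described in item 13 is a longest-prefix match, which the standard library's ipaddress module makes easy to sketch. The routes and next-hop names below are made up for illustration.

```python
import ipaddress

# A toy routing table: (network, next hop). Entries are illustrative only.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gw"),
    (ipaddress.ip_network("10.0.0.0/8"),  "router-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-b"),
]

def best_match(destination):
    """Return the next hop for the longest (most specific) matching prefix."""
    dest = ipaddress.ip_address(destination)
    candidates = [(net, hop) for net, hop in routes if dest in net]
    net, hop = max(candidates, key=lambda pair: pair[0].prefixlen)
    return hop

print(best_match("10.1.2.3"))   # router-b (the /16 beats the /8 and the default /0)
print(best_match("192.0.2.9"))  # default-gw
```

Real routers index this lookup with specialized data structures, but the selection rule is the same: the most specific matching network wins.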
SANs (Storage Area Networks)
SANs are dedicated networks that provide access to an array of storage devices such as RAID arrays, optical disks, and tape backups. One way to think of a SAN is as a high-speed network within a data center. In contrast, LANs extend outward from the data center. A typical SAN consists of a Fibre Channel subnetwork connected to the enterprise network, as illustrated in Figure S-1. SANs are a radical shift from the traditional server-attached storage because storage is offloaded from servers, freeing up server resources to handle data processing and other tasks.
This topic continues in "The Encyclopedia of Networking and Telecommunications" with a discussion of the following:
Associations, Initiatives, Forums, and Coalitions
The following organizations promote SANs, storage products, and interconnect technologies. Getting all the pieces of SANs to work together in an interoperable way is the goal of these groups:
Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia. | <urn:uuid:4ba97c05-2b0f-4f91-af73-bc3ff3b7c571> | CC-MAIN-2017-04 | http://www.linktionary.com/s/san.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00229-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900307 | 327 | 2.546875 | 3 |
On Monday, the Consumer Electronics Association announced that participants in the eCycling Leadership Initiative managed to recycle 460 million pounds of consumer electronics in 2011. This number is even more impressive when you consider that this is more than one and a half times the 300 million pounds that was recycled in 2010. Drop-off locations increased from 5,000 in 2010 to 7,500 in 2011. The CEA also reports that 96% of all the recycling was performed by certified third-party recycling facilities.
This is an important step in the right direction. The program has set an ambitious goal of 1 billion pounds recycled in 2016, which will keep a football-stadium’s worth of waste out of our nation’s landfills.
If you want to participate by recycling your unwanted consumer electronic devices, you can find the nearest drop-off location at the GreenerGadgets website. | <urn:uuid:9af37f17-3582-457c-8e17-7c6770cb5492> | CC-MAIN-2017-04 | https://hdtvprofessor.com/HDTVAlmanac/?p=1737 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00531-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961225 | 181 | 2.578125 | 3 |
In the Tactical Deception Field Manual FM 90-2 of the US Army, the concept of deception is described as those measures designed to mislead enemy forces by manipulation, distortion, or falsification of evidence to induce him to react in a manner prejudicial to his interests. In the cyber world, the deception concept and deception techniques were introduced in the early 1990s with the use of honeypots.
Honeypots are decoy systems that attract attackers into attempting to compromise them; their value lies in being probed, attacked or compromised. In addition, honeypots can be used to gain an advantage in network security. For instance, they provide intelligence based on information and knowledge obtained through observation, investigation, analysis, or understanding.
Deception techniques such as honeypots are powerful and flexible, offering great insight into malicious activity as well as an excellent opportunity to learn about offensive practices. In this post I will introduce how to create a honeypot for research purposes to learn about attack methods.
If you want to learn more about computer deception, I recommend reading Fred Cohen's articles. In regard to honeypots, I definitely recommend the landmark book authored by Lance Spitzner in 2002 and published by Addison-Wesley. One of the many things Lance introduces in his book is the concept of level of interaction, used to distinguish the different types of honeypots. Basically, this concept provides a way to measure the level of interaction that the system will offer the attacker. In this post I will be using a medium interaction honeypot called Kippo.
An important aspect before running a honeypot is to make sure you are aware of the legal implications of doing so. You might need legal counsel with privacy expertise before running one. The legal concerns are normally around data collection and privacy, especially for high-interaction honeypots. You might also need permission from your hosting company if you would, for example, run a honeypot on a virtual private server (VPS). Lance's book has one full chapter dedicated to the legal aspects. Regarding hosting companies that might allow you to run a honeypot, you may want to check Solar vps, VpsLand or Tagadap.
Let’s illustrate how to setup the Kippo SSH honeypot. Kippo is specialized in logging brute force attacks against SSH. It’s also able to store information about the actions the attacker took when they manage to break in. Kippo is considered a low interaction honeypot. In addition I will be demonstrating how to use a third party application called Kippo-graph to gather statistics and visualize them.
Based on the tests I made, the easiest way to set up Kippo is on a Debian Linux distro. To install it we need a set of packages, which are mentioned in the requirements section of the project page. In my case I had a Debian 6 64-bit system with the core build packages installed, and did the following:
Using apt (Advanced Packaging Tool), which is the easiest way to retrieve, configure and install Debian packages in an automated fashion, I installed subversion to be able to download Kippo, plus all the packages mentioned in the prerequisites. Then I verified the Python version to make sure it is the one needed. During the installation of the mysql-server package you should be prompted to enter a password for MySQL.
# apt-get update
# apt-get install subversion python-zope python-crypto python-twisted mysql-server ntp python-mysqldb
# python -V
Check the status of MySQL, then try to login with the password inserted during the installation:
# service mysql status
# mysql -u root -p
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 42
Server version: 5.1.66-0+squeeze1 (Debian)
Check if we have a timesource configured and NTP is syncing:
Download Kippo using svn. Create the initial configuration file and then login into MySQL and create the necessary database and tables:
#svn checkout http://kippo.googlecode.com/svn/trunk/ /opt/kippo
#cp kippo.cfg.dist kippo.cfg
mysql -u root -p
mysql> CREATE DATABASE kippo;
mysql> USE kippo;
mysql> SOURCE /opt/kippo/doc/sql/mysql.sql
mysql> show tables;
Edit the kippo.cfg file and change the hostname directive, SSH port, and banner file. Also uncomment the directives shown below that give Kippo the ability to log into the MySQL database. Make sure you adapt the fields to your environment and use strong passwords:
ssh_port = 2222
hostname = server
banner_file = /etc/issue.net
host = localhost
database = kippo
username = root
password = secret
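Since kippo.cfg is an INI-style file, a quick way to sanity-check your edits is Python's standard configparser. The section names below ([honeypot] and [database_mysql]) follow kippo.cfg.dist as I understand it; verify them against your own copy. The values mirror the illustrative ones above.

```python
# Sanity-check kippo.cfg edits with the standard library's INI parser.
# Section names follow kippo.cfg.dist; confirm them against your own file.
import configparser

sample = """
[honeypot]
ssh_port = 2222
hostname = server
banner_file = /etc/issue.net

[database_mysql]
host = localhost
database = kippo
username = root
password = secret
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg.getint("honeypot", "ssh_port"))     # 2222
print(cfg.get("database_mysql", "database"))  # kippo
```

In practice you would point read() at the real file path instead of the inline sample string.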
Edit the file /etc/issue.net on the system and insert a banner similar to the following:
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to law enforcement officials.
Verify which username and password are used to deceive the attacker into believing he has found the correct credentials and broken in:
# cd /opt/kippo/data
# cat userdb.txt
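If you want to script changes to the accepted credentials, this file is easy to parse. The username:uid:password line format below matches the stock userdb.txt as far as I know (the default entry is root:0:123456), but check your own copy first.

```python
# Parse Kippo's data/userdb.txt, the list of credentials the honeypot accepts.
# Assumed line format: username:uid:password (verify against your own file).
def parse_userdb(text):
    creds = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        user, uid, password = line.split(":", 2)
        creds.append((user, int(uid), password))
    return creds

sample = "root:0:123456\n"
print(parse_userdb(sample))  # [('root', 0, '123456')]
```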
Then add a non-privileged user to be used to launch Kippo. It is also necessary to change the ownership of the Kippo files and directories to the user just created:
# useradd -m --shell /bin/bash kippo
# cd /opt/
# chown kippo:kippo kippo/ -R
# su kippo
$ cd kippo
Starting kippo in background…
Generating RSA keypair…
By default, as you might have noticed in kippo.cfg, Kippo runs on port 2222. Because we start Kippo as a non-privileged user, we cannot change it to port 22. One way to circumvent this is to edit the /etc/ssh/sshd_config file and change the listening port to something unusual, which will be used to manage the system. Then create an iptables rule that redirects TCP traffic destined to port 22 to the port where Kippo is running.
#cat /etc/ssh/sshd_config | grep Port
#service ssh restart
#iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-port 2222
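A quick way to confirm the redirect is working is to read the SSH version banner that the daemon on port 22 now presents; Kippo's banner will differ from your real sshd's. grab_banner() needs network access and a reachable host, so only the banner check itself is exercised in the printed example, and the banner string shown is illustrative.

```python
# Grab and check an SSH version banner to see which daemon answers a port.
import socket

def grab_banner(host, port=22, timeout=5):
    """Read the first line an SSH server sends: its version banner."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).split(b"\n", 1)[0].decode("ascii", "replace").strip()

def is_ssh_banner(line):
    """Per RFC 4253 the identification string must start with 'SSH-'."""
    return line.startswith("SSH-")

# Example banner string; with network access you would call grab_banner("your-host").
print(is_ssh_banner("SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2"))  # True
```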
Depending on your setup, you may or may not need additional firewall rules. In my case I had the system directly exposed to the Internet, and therefore I needed to create additional firewall rules. For iptables on Debian you might want to check this wiki page.
Create a file with the enforcement rules. I will not include the redirect rule, because that allows me to control when to start and stop redirecting traffic.
# Sample firewall configuration
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2222 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 48022 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 48080 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
I will be allowing ICMP traffic, plus TCP ports 22 and 2222 for Kippo and 48022 to access the system. Port 48080 will be for the Kippo-Graph pages.
Note that you might want to add the --source x.x.x.x directive to the rules that allow access to the real SSH and HTTP daemons, allowing only your IP address to connect to them.
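Because the per-port ACCEPT lines all follow the same pattern, generating them from a port list avoids typos. This is just a convenience sketch; the admin IP address below is a placeholder.

```python
# Generate the repetitive per-port ACCEPT lines of the rules file.
def accept_rule(port, source=None):
    """One RH-Firewall-1-INPUT ACCEPT line, optionally pinned to a source IP."""
    src = f" --source {source}" if source else ""
    return (f"-A RH-Firewall-1-INPUT -m state --state NEW"
            f" -m tcp -p tcp{src} --dport {port} -j ACCEPT")

honeypot_ports = [22, 2222]   # attacker-facing
admin_ports = [48022, 48080]  # management SSH and Kippo-Graph

rules = [accept_rule(p) for p in honeypot_ports]
rules += [accept_rule(p, source="203.0.113.10") for p in admin_ports]  # placeholder admin IP
print("\n".join(rules))
```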
Then we apply the iptables rules by redirecting the contents of the file to the iptables-restore command. We also need a small script so that each time we restart the machine the iptables rules are loaded, as documented on the Debian wiki.
#iptables-restore < /etc/iptables.rules
/sbin/iptables-restore < /etc/iptables.up.rules
Change the file mode bits
#chmod +x /etc/network/if-pre-up.d/iptables
Subsequently we can install Kippo-Graph. To do that we need a set of additional packages:
#apt-get install apache2 libapache2-mod-php5 php5-cli php5-common php5-cgi php5-mysql php5-gd
After that we download kippo-graph into the webserver root folder, untar it, change the permissions of the generated-graphs folder and change the values in config.php.
# wget http://bruteforce.gr/wp-content/uploads/kippo-graph-0.7.2.tar --user-agent=""
# md5sum kippo-graph-0.7.2.tar
#tar xvf kippo-graph-0.7.2.tar
# cd kippo-graph
# chmod 777 generated-graphs
# vi config.php
Edit the port configuration settings, in the Apache configuration folder, to change the port to something hard to guess like 48080. Then change the VirtualHost directive to the chosen port.
#service apache2 restart
Then you can point the browser to your system's IP address and load the Kippo-Graph URL. After you have confirmed it is working, you should stop Apache. In my case I just start Apache when I want to visualize the statistics.
With this you should have a Kippo environment running, plus the third-party graphs. One important aspect is that every time you reboot the system you need to: access the system using the port specified in the sshd config file; apply the iptables traffic redirection; stop the Apache service and start Kippo. This can be done automatically, but I prefer to have control over those aspects because then I know when I start and stop the Kippo service.
#ssh vps.site.com -l root -p 48022
#iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-port 2222
#service apache2 stop
Stopping web server: apache2 … waiting .
$ cd /opt/kippo/
Starting kippo in background…
Loading dblog engine: mysql
Based on my experience, it shouldn't take more than 48 hours before someone breaks into the system. You can then watch and learn. In addition, after a couple of hours you should start seeing brute force attempts.
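Once attempts start arriving, a few lines of Python are enough to tally the most-tried credentials from kippo.log. The "login attempt [user/password] failed" line format assumed below matches what Kippo typically writes, but verify the pattern against your own log before relying on it; the sample lines are invented.

```python
# Tally brute-force attempts from kippo.log.
# Assumed log line fragment: "login attempt [user/password] failed|succeeded".
import re
from collections import Counter

ATTEMPT = re.compile(
    r"login attempt \[(?P<user>[^/\]]*)/(?P<password>[^\]]*)\] (?P<result>succeeded|failed)"
)

def top_credentials(log_text, n=3):
    """Return the n most frequently tried (user, password) pairs."""
    counts = Counter()
    for match in ATTEMPT.finditer(log_text):
        counts[(match.group("user"), match.group("password"))] += 1
    return counts.most_common(n)

sample = (
    "2013-01-05 10:00:01+0000 [...] login attempt [root/123456] failed\n"
    "2013-01-05 10:00:03+0000 [...] login attempt [root/123456] failed\n"
    "2013-01-05 10:00:05+0000 [...] login attempt [admin/admin] succeeded\n"
)
print(top_credentials(sample))  # [(('root', '123456'), 2), (('admin', 'admin'), 1)]
```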
If you want to read more about other honeypots, ENISA (the European Network and Information Security Agency) recently released a study about honeypots called "Proactive Detection of Security Incidents II: Honeypot". It is the result of a comprehensive and in-depth investigation into current honeypot technologies. With a focus on open-source solutions, a total of 30 different standalone honeypots were tested and evaluated. It's definitely a must-read.
In a future post I will write about the findings from running these deception systems to lure attackers.
The Use of Deception Techniques: Honeypots and Decoys, Fred Cohen
The Art of Computer Virus Research and Defense, Peter Szor, Symantec Press
Honeypots. Tracking Hackers, Lance Spitzner, Addison-Wesley
Designing Deception Operations for Computer Network Defense. Jim Yuill, Fred Feer, Dorothy Denning, Fall | <urn:uuid:04994a90-5481-4f6a-908e-2cc74dc02d38> | CC-MAIN-2017-04 | https://countuponsecurity.com/2012/12/07/deception-techniques/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00191-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.837009 | 2,823 | 2.65625 | 3 |
In the five-layer TCP/IP model, which protocol is part of the same layer as IP?
The interface hme0 has the wrong IP address. Which command will re-establish the correct address of 192.168.20.118, broadcast of 192.168.20.127, and netmask of 255.255.255.192?
The traceroute command shows routing paths to a given destination. Which protocols are used by this diagnostic tool?
Given:
broadcast address: 188.8.131.52
netmask: 255.255.255.224
IP address: 184.108.40.206
The broadcast address is incorrect. What is the correct broadcast address?
A workstation is unable to route to the local router. You need to re-establish a default route to resolve the routing issue. When attempting to add the default route you get the message "network unreachable." What would cause this problem?
When analyzing network traffic in a fault analysis situation, a workstation Ethernet card is sending ALL frames up the stack (inbound), not just those that contain broadcast and its own Ethernet address. In which mode is an Ethernet card configured to pass all Ethernet frames up the protocol […]
Which LAN component can be used to connect two or more networks based on different protocol suites?
Which LAN topology can be implemented using a single continuous length of cable?
Which LAN component forwards a packet between two separate networks based on the software protocol address? | <urn:uuid:09995f1b-cae6-4616-afb1-7043e18c7f48> | CC-MAIN-2017-04 | http://www.aiotestking.com/sun/category/sun-certified-network-administrator-for-solaris-8/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00403-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913385 | 299 | 2.625 | 3 |
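The broadcast-address question above can be worked generically: the broadcast address is the network address with every host bit set to one. Python's ipaddress module does the arithmetic; the address below (192.168.1.70 with the same /27 mask) is an illustrative substitute, not the question's own.

```python
# Broadcast address = network address with all host bits set to 1.
# A 255.255.255.224 (/27) mask leaves 5 host bits, so each subnet spans 32 addresses.
import ipaddress

iface = ipaddress.ip_interface("192.168.1.70/255.255.255.224")
net = iface.network
print(net)                    # 192.168.1.64/27
print(net.broadcast_address)  # 192.168.1.95
```

Here 192.168.1.70 falls in the 192.168.1.64 subnet, whose last address (64 + 31 = 95) is the broadcast.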
Young does not recommend that people rely on external lumbar support except in cases where obesity or physical problems leave the individual no other options. "A lot of newer chairs were designed to support the lumbar region in the lower back. But, the body is strong enough to support itself," said Young. Young instead suggests that people sit all the way back in their chair so that their sacrum touches the chair's back. "When you do this, your pelvis and back are aligned properly and it allows you to move easily in the chair," said Young.

Rogers approaches seating positions differently, dismissing the popular notion that elbows and knees should rest at 90-degree angles. "Think in terms of open angles. Instead of sitting with your legs at a 90-degree angle, try a 110-degree angle. Keep your elbow at a 110-degree angle to your hand," said Rogers.

While everyone sits at a computer differently, men and women tend to fall into gender-specific posture traps. "Men tend to be low writers. They like their chairs lower, and to sit back in them, and they need to learn to sit higher. Men strain their arms and wrists when they sit too low. Women are perchers; they sit away from the backrests and at the edge of their seats. Women tend to slouch because they're so far away from their back support," said Rogers.

Use Equipment Correctly

When most people think of ergonomics, they think of wrist rests. Yet even these long pieces of padding, which are nearly standard in office settings, are widely misused. "Wrist rest is a very unfortunate term because the general public thinks that it means they're supposed to rest their wrist on it. There's no protective fat under your wrist, and resting on this unprotected area could cause contact stress. I would be happier if they were called palm rests," said Read.

None of the specialists suggested that people throw their wrist rests out the window, however. "Wrist rests were designed for resting between spells of typing, not during typing. The killer combination is lazy typing and cold hands, suggestive of a smaller carpal tunnel. These two factors together almost guarantee that you will get a wrist or arm injury," said Rogers.

Each of the specialists referenced pianists when discussing the proper way to hold your hands and wrists when you type. Pianists use their fingers to hit keys, but keep their wrists raised and arms engaged, and hit the keys with their fingertips alone. "The worst setup is the keyboard on the keyboard tray [and] the mouse up on the desk surface. It leads to reaching injuries," said Rogers.

Adjust Your Monitor

Most people have their monitor height set too high or, worse, lack the ability to lower it. Read suggested that people sit squarely in front of their computer screens with their feet flat to make adjustments. "Your horizontal line of sight should hit the first one to two inches of the screen itself. When you need to look lower, you should use your eyeballs and not your neck." Those who wear bifocals should keep their monitors even lower, so that they are always looking at them through the bottom of their eyeglasses, "without dropping their heads," said Read.
We communicate mainly by words -- whether spoken or written. Everyone slips up occasionally when speaking but to do so in written form is a major negative. I admit it -- reading poorly written or misworded documents just makes me crazy... (It comes from my childhood -- my Mom always corrected my grammar)
So to save my sanity and to perhaps improve overall communications between people, I offer up this blog based on Jody Gilbert's two articles on the topics of grammatical and wording mistakes "that make you look stupid."
Last year, Jody Gilbert wrote an excellent article titled "10 flagrant grammar mistakes that make you look stupid." If you don't know when to use one word or the other, then read the article:
1. Loose versus lose -- Loose change versus lose my mind (a short trip sometimes).
2. It's for its -- think apostrophe for missing letter (It is to It's).
3. They're for their -- I admit it -- this one seems hard to mix up but I guess people do.
4. i.e. for e.g. -- These two are mixed up all the time. These are Latin terms -- i.e. means "that is" and e.g. means "for example."
5. Effect for affect -- another common error -- effect is a noun and affect is a transitive verb. (Belongings, by the way, are someone's effects, not affects.)
6. You're for your -- see number 3
7. Different than for different from -- This one is easy -- never use different than.
8. Lay for lie -- Lay is to place something; lie is to recline. You don't lay down and you don't lie a book on the table...
9. Then for than -- Then refers to a time frame; than is a comparative word.
10. Could of, would of, should of for could have, would have, should have -- Bottom line, you never put of after these verbs.
Now on to this year's entry. Jody Gilbert struck again with "10 wording blunders that make you look stupid".
1. All intensive purposes -- yikes! Do people really use this term instead of "for all intents and purposes"?
2. Comprise -- nothing is comprised of something. For example, a correct sentence is "the team comprised seven people". Comprise is misused so much that we have actually come to think that "comprised of" is correct. Use "consisted of" if you must use the "of".
3. Heighth -- There is no such word. The word is height not heighth -- unlike its sister words, width or length. Don't you just love English?
4. Supposably -- yes, this one gets me going. The correct term is "supposedly".
5. One of my favorites -- irregardless -- is this a double negative meaning to regard something?
6. Infer or imply -- When do you infer something versus imply something? The rule of thumb is that imply means you are suggesting something and infer means you are interpreting something. That sure clears it up -- not.
7. Momento -- this is a Spanish word for moment not a word for something you bring back from your trip. The correct term is memento -- as in, "I brought my daughter a memento from the conference I attended".
8. Anticlimatic -- The correct term is anticlimactic -- as in a letdown. The other term means you are against the climate...
9. Tenant versus tenet -- One is a renter; the other is a principle as in a list of ethical tenets. But then again, maybe you have ethical renters...
10. Moot versus mute -- You may argue a moot point (meaning it is abstract or irrelevant) or you may remain mute on the subject meaning you have nothing to say.
So there you have the quick lesson for the day. Now here is a list of words that I would really like to see stricken from our vocabulary (most come from my teenage daughter).
1. Like -- as in, "I am, like, really tired of, like, having my teacher's assign, like, all this homework".
2. Whatever -- Example -- Mom: "You can't go out until you finish your homework'. Daughter: "Whatever..."
3. ad hoc -- We use it a lot in BI conversations and rarely get it right. The definition of "ad hoc" is: done for a particular purpose; done or set up solely in response to a specific situation or problem, without considering wider or longer-term issues (from Microsoft® Encarta® 2007). For example, an ad hoc meeting.
4 "It is what it is" -- what the heck does that mean, anyway? My contractor used it many times during our renovation. It was followed by a statement that we were either going to have to redo something or live with it...
So there are my pet peeves. I can't wait to read your favorites. Just list them in the comments and you will feel much better, I promise.
Yours in BI success.
Posted August 1, 2007 9:48 AM
Permalink | 4 Comments | | <urn:uuid:7fa4b956-69d2-4bf0-bbe6-aa2d8ea7a561> | CC-MAIN-2017-04 | http://www.b-eye-network.com/blogs/imhoff/archives/2007/08/what_we_have_he.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00440-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951555 | 1,091 | 2.5625 | 3 |
By Thomas G. Robinson, Executive Vice
In the networking world, though, there are often opportunities to turn the old architecture into something new and useful. For those who prefer complete network overhauls or forklift upgrades, such "old into new" pursuits might be seen as employing "slipcover" technology. This, however, would imply that the original network has lost its functionality. A better perspective is that the original network continues to have a solid foundation, but cannot achieve the full functionality required to meet current and future needs without modifications. An excellent example of this concept is the large scale deployment of HFC networks, where the "C" largely remains from the existing network and the "F" and upgraded electronics are added to enable the new/old system to meet current and future needs. It certainly is far more than a mere slipcover, but avoids the high cost of a complete overhaul such as development of a fiber-to-the-home (FTTH) network.
On the business and institutional side, there are a number of options to make the old new. For example, some organizations that have fiber-based Wide Area Networks (WANs) in a traditional star topology have found a need to now develop redundancy in the network. However, they can't at this time afford to install two discrete paths to and from every site or rebuild their topology as a large, redundant ring. One of the options in this case is to develop rim trunks between neighboring sites. This will enable at least an alternate route and can facilitate the development of multiple, interlocking, smaller rings by adding a relatively small amount of new fiber infrastructure.
Once implemented, the rim trunks will enable redundant paths from any of the sites. These paths, depending on budgetary considerations and how traffic is combined on the alternate path, can be established through: an always available, bi-directional ring; an automatically switched backup circuit using hot standby electronics; or a redundant path that is manually switched when needed.
In other cases, a limitation on the WAN may be the available fiber capacity remaining from the original WAN infrastructure, especially if a number of pairs were initially dedicated for specific, rather than integrated, applications. For example, for each site, one pair may have been dedicated for data; another for T-1 voice connections; and another for analog video. Now, however, the IT staff has decided that additional services are required, but additional fiber capacity to implement them in the traditional manner is not available.
Two main options exist here. One is to combine all services by digitizing, and potentially also packetizing, them so that they can be transported using an integrated transport technology such as ATM, Gigabit Ethernet or another proprietary technology. If a ring can be cost-effectively configured, SONET, as well as transport methodologies on the horizon such as Resilient Packet Ring (RPR), could also be implemented. The other option is to continue utilizing the traditional dedicated carriage methods, but employ wavelength division multiplexing in either a coarse or dense fashion.
This will enable the existing services to be maintained in their current form, and new services to be added in a cost- effective manner without having to change the underlying type of transport or constructing additional fiber. Because the recent trend for both of these methodologies has been to come down in equipment cost, 2002 may end up being a good year for network managers to maximize their existing fiber infrastructure.
Other candidates for network modification rather than complete overhaul are some of the old all-coaxial institutional networks (I-Nets) that may have sat dormant for years or seen only minimal use. For a number of these I-Nets, current applications may have leapfrogged the data, voice and video carriage capabilities of RF systems such that optical networking is now required. However, with the strides that have been made with RF broadband-based communications, including the latest version of DOCSIS, the advent of PacketCable and recent innovations with broadband Ethernet systems that would enable 100 Mbps or greater transfer over an RF-based system, there are present and planned I-Nets that can benefit from an upgrade to an HFC infrastructure. Experience indicates that dormant I-Nets would need to be completely rung out, reconfigured and upgraded such that the fiber portion of the system runs deep into the network to dramatically reduce amplifier cascades. In some cases, a hybrid optical/HFC I-Net would also need to be developed. In this case, the HFC nodes could have excess fiber infrastructure and be located at or near high communications capacity institutions. This would enable these institutions to have pure fiber connectivity, while institutions further down the line could be connected to the lower capacity HFC network.
Whatever your networking scenario, my desire for all of us is that we encounter a safer and happier new year in 2002.
|Have a comment? Contact Tom by e-mail at: firstname.lastname@example.org| | <urn:uuid:9049370a-f40c-44da-8909-bbc9b46c827b> | CC-MAIN-2017-04 | https://www.cedmagazine.com/article/2001/12/turning-old-new | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00440-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947264 | 1,013 | 2.515625 | 3 |
Your program, once finished, must be translated into something the machine understands (machine language). This is what compilation does.
As for compiler options, there are plenty; in fact, they vary from system to system and from version to version.
Compiler options specify the kind of code you are compiling, along with other details such as whether array-boundary checking is required, in addition to the syntax checking that is always performed.
The compilation step creates an object module, which must subsequently undergo a link-edit step to produce the actual load module. By load module I mean instructions that the computer can actually execute.
By searching for a specific programming language and version, you can find the available compiler options with a quick Google search.
Joined: 06 Jul 2005
Location: NOIDA (INDIA)
COBOL-85 is a later version of the COBOL language. COBOL-85, also known as VS COBOL II, provides more features than COBOL-74. Some of the features of COBOL-85 are:
1. In COBOL-85 we can use explicit scope terminators such as END-IF, END-PERFORM, END-STRING, etc.
2. We can use the EVALUATE verb in place of complex IF structures.
3. The CALL statement can pass variables BY CONTENT as well as BY REFERENCE. In COBOL-74 we can only pass by reference.
4. A new verb, CONTINUE, was introduced to overcome the limitations of the NEXT SENTENCE verb.
5. COBOL-85 supports 31-bit addressing mode.
There are many more differences, which you can find in any COBOL-85 book. You can refer to the book 'COBOL Programming' by M K Roy and Ghosh Dastidar.
I hope now you are somewhat clear about the difference. | <urn:uuid:193bfd36-6226-42f7-81d2-54e530246196> | CC-MAIN-2017-04 | http://ibmmainframes.com/about5771.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00486-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916016 | 242 | 2.75 | 3 |
Generally the send call will buffer the data, especially if it's a small amount.
The recv causes a flush of the buffer (the actual send).
On 10/16/2012 3:16 PM, Victor Hunt wrote:
I share your feeling on the sleep() call. The program is fairly
straightforward. It opens a socket to our SMTP server. It then loads the various
SMTP commands into a string (one at a time) and runs a subroutine that does
the send() and recv() calls separated by a sleep() call.
I've been playing with the subroutine a bit. The data from the recv() call,
if any is even returned, is not used by the program. I commented out both
the sleep() and recv() calls. Nothing else was changed. The program no
longer sends an email. After uncommenting out the sleep() call, the email
is sent again.
This was late in the day and I didn't have time to do any debugging, which I
will do this morning to see if the send() call is failing for some reason.
Do the send() and recv() calls need to be used together? Can I do a send()
and never a recv()? At least as far as SMTP goes?
On Mon, Oct 15, 2012 at 4:57 PM, Scott Klement
It's hard to see how a sleep() call would help in sending an e-mail? I
could see using delays like this if you're using QtmmSendMail(), since
that program hands the file off to a background job which may take a
moment to handle the file
But if you're coding an SMTP client (i.e. coding your own SMTP routine
with the socket API) this should be a non-issue? Unless something
strange is going on?
On 10/15/2012 1:45 PM, Victor Hunt wrote:
I've run across an RPG program that uses socket APIs to send an email. After
the socket is set up and opened, the program uses a subroutine to do all
send/receive functions. Just after the send, the subroutine runs the sleep()
API with a parameter of 1 second. It appears the original intent was to
give the email server and/or network time to produce/deliver a response
before the receive API runs. Does anyone think this built in delay is
really necessary? Depending on what is being sent, this program can run
a very long time with these roughly 1 second delays. Also, the program
doesn't do anything with the received data collected after the send API.
Is it really necessary to do a receive after a send? Seems I can tighten
up a bit.
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives | <urn:uuid:b0331959-2625-42d0-b1fc-4e4928c639aa> | CC-MAIN-2017-04 | http://archive.midrange.com/midrange-l/201210/msg00655.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898566 | 655 | 2.734375 | 3 |
Carnivores, Herbivores & Omnivores Animals - Information and Facts
There are three types of animals: Herbivores, carnivores and omnivores.
Carnivores are flesh-eating animals. This group includes a variety of animals such as cats, dogs, wolves, lions, tigers, and cheetahs. Most carnivores generally live alone, but many of them also hunt in small groups. Carnivores usually feed on herbivores, but many carnivores will attack and eat other carnivores too. The bigger the carnivore, the more it has to eat. The largest land carnivore is the polar bear, which is often described as the only animal that will actively hunt humans.
- The weasel is the smallest living carnivore with an overall length of about 8 inches and weight of 1.5 ounces.
- The grizzly bear or brown bear is one of the largest land carnivores and weighs up to 850 pounds, with a length of up to 8 feet.
- Carnivores are at the top of the food chain.
- Carnivores are divided into pinnipeds (fin footed) and fissipeds (land).
- Carnivores are not able to move their jaws side to side very easily.
Herbivores are animals that eat mostly plant materials. They are also called primary consumers. Herbivores are further subdivided into several types, such as frugivores or fruit-eating animals, folivores or leaf-eating animals, and nectarivores or nectar-eating animals. Herbivores usually have blunt teeth that are useful for stripping leaves, twigs, etc. Herbivorous birds do not have teeth to mince the vegetation they eat.
- The moose is a large herbivore that eats any kind of plant and fruit.
- Many herbivores have a digestive system that helps them get the most out of the plants they eat.
- The bee is a small pollinator that uses nectar and pollen from some kinds of plants to make honey.
- The stegosaurus and apatosaurus were herbivorous dinosaurs.
- Herbivores spend more time eating than doing anything else.
Omnivores are animals that have specialized teeth that enable them to eat both plants and animals. Pigs, bears, foxes and chickens are examples of omnivorous animals. Because of their feeding habits, omnivores easily adapt to different environments. Omnivores have less specialized teeth than carnivores and herbivores. Some omnivores are pollinators which play a very important role in the life cycle of some kinds of plants.
- Some of the omnivores eat eggs of other animals.
- Omnivores cannot digest plants that do not produce fruits and grains.
- Omnivores eat plants so they are able to survive in many environments.
- Omnivores do not eat all kinds of plants.
- The housefly is a scavenger that also eats fruit-bearing plants.
- Black bears and grizzly bears belong to the order Carnivora, but they are omnivores.
Technology: Can IT Help Utilities Lower Commercial Energy Costs? By Debra D'Agostino | Posted 2006-07-17
XCel Energy plans to help customers move to renewable energy sources.
You know there's an energy crisis when even the energy companies are turning to alternative energy.
Xcel Energy Inc., a Minneapolis-based energy firm, runs a healthy business, with more than 3.3 million electricity and 1.8 million natural gas customers in 10 states, and revenues of $10 billion annually. But about a year ago, company executives found themselves facing a troika of problems: increasing energy demands, decreasing resources, and a little issue called global warming. "We realized we had an obligation to behave in a manner that balances shareholder, regulatory and community priorities," says Michael Carlson, Xcel's vice president and CIO.
So Xcel came up with a plan: Use technology to identify which customers would be candidates for alternative energy. In partnership with the National Renewable Energy Laboratory (NREL), which is funded by the U.S. Department of Energy, "we started applying some modeling technologies that combine NREL's weather and environmental data with our grid generation and consumption data," Carlson says. The modeling tool, called the Renewable Planning Model, is being used to determine exactly which kinds of alternative energy are best suited for specific customers.
By using NREL's satellite terrain imagery to determine solar irradiation on rooftops, for example, Xcel can determine where, and how strongly, the sun shines on Xcel customers. With that data, the company can calculate where solar panels should be placed, how large they need to be to generate the most power, and how that power generation might affect Xcel's own energy grid.
Read the full story on CIOInsight.com: Technology: Can IT Help Utilities Lower Commercial Energy Costs? | <urn:uuid:31baceca-3f30-4975-90c9-5726573bbd98> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Business-Intelligence/Technology-Can-IT-Help-Utilities-Lower-Commercial-Energy-Costs | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00568-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931527 | 382 | 2.5625 | 3 |
Agency: Department of Agriculture | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 75.00K | Year: 2003
NON-TECHNICAL SUMMARY: Nostoc commune is an edible cyanobacterium that forms spherical macrocolonies and is used as a potent herbal medicine and dietary supplement. Due to its nutraceutical and pharmacological value, N. commune has received increasing attention, and market demand has grown drastically during the last decade. However, low yield and unstable quality caused by inadequate production methods have prevented the further expansion of Nostoc commune in nutraceutical and pharmaceutical markets. At Algaen Corporation, we propose that Nostoc commune can be developed as a new crop plant for food and herbal medicine by cultivating it in a suitable photobioreactor on a commercial scale, using innovations in microalgal biotechnology. The technology developed in this project will enable us to launch commercial production of high-quality edible Nostoc commune as an effective yet affordable herbal medicine and dietary supplement, which has a current market of $50 million annually.
Agency: National Science Foundation | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 99.94K | Year: 2003
This Small Business Innovation Research (SBIR) Phase I project proposes to improve the bioavailability of astaxanthin from the green algae, Haematococcus pluvialis, through molecular genetic manipulation of the organism. Natural astaxanthin is a potent bioactive antioxidant and offers tremendous potential for use in nutraceutical, pharmaceutical, aquaculture, and poultry industries. The green alga, Haematococcus pluvialis, is the richest known natural source of astaxanthin. One major constraint in the Haematococcus production system, however, is that astaxanthin-rich cells (cysts) possess thick cell walls that hinder astaxanthin extraction and subsequent bioavailability for humans and cultured animals. Chemical and physical cell disruption processes account for a major cost of the production, yet introduce the risk of oxidation of astaxanthin. In this Phase I project, certain features of Haematococcus will be genetically altered so as to facilitate fast and efficient extraction and digestion of cell-bound astaxanthin. The immediate commercial application of this project will be in the nutraceutical and aquaculture markets.
Agency: Department of Agriculture | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 79.89K | Year: 2009
Crude oil will be depleted within 40 years, so alternative fuels have to be developed to drive our transportation systems, and biodiesel appears to be the most promising fuel of the future. Biodiesel is renewable, non-toxic, and biodegradable; it can be used in existing diesel engines without modifying the engine, and can be blended at any ratio with petroleum diesel. However, the development of the biodiesel industry is severely limited by the supply of feedstock, namely soybean oil and canola oil. Due to the limited availability of agricultural land and irrigation water, production of these oil crops cannot sustain biodiesel production, and other sources of plant oil have to be developed as feedstock for biodiesel. Microalgae are known to exhibit 10- to 20-fold higher growth rates than agricultural crop plants, and certain microalgal species can accumulate large amounts of lipids or oil (30-60% of dry weight). As a result, the concept of using microalgae as an alternative source of feedstock for biodiesel production was intensively studied in the past 40 years. However, those research and development efforts led to the conclusion that microalgae-based biodiesel was not economically viable because of high production cost. The failure to develop a commercially viable microalgae-based biodiesel production system was largely due to the lack of cost-effective photobioreactors and of an efficient method for extracting oil from algae. In this SBIR project, we will demonstrate the feasibility of reducing the cost of using oil-rich green algae as feedstock for biodiesel production. We intend to optimize culture conditions for microalgal oil production in our proprietary photobioreactors. The feasibility of using innovative nano-materials for algal oil extraction will be demonstrated. The combined advantages from both improvements will enable us to reduce the overall cost of microalgal oil production.
The results obtained from this Phase I project will provide a solid base for us to pursue a Phase II project, in which cost-effective production of microalgae-based oil will be demonstrated at pilot scale. The long-term goal of this project is to establish an environmentally sound, commercially feasible and economically profitable engineered process for commercial production of microalgae-based biodiesel. The successful completion of this project will lead to the establishment of microalgae-based biodiesel production facilities, absorption of atmospheric carbon dioxide by microalgae, and job creation and economic development in the clean energy sector.
Agency: National Science Foundation | Branch: | Program: SBIR | Phase: Phase II | Award Amount: 499.87K | Year: 2007
This Small Business Innovation Research (SBIR) Phase II research develops an innovative biotechnology for commercial production of natural astaxanthin using genetically improved microalgal strain(s) grown in a proprietary large-scale photobioreactor, and demonstrates the effectiveness of the new strains in improving the bioavailability of astaxanthin. The proposed R&D efforts aim to overcome the major weakness inherent in the present production of astaxanthin-enriched Haematococcus: poor bioavailability of astaxanthin for humans and animals. The company will use several genetically modified Haematococcus strains with remarkably improved bioavailability of astaxanthin. The major objectives of the Phase II research are to design, construct, and evaluate an innovative large-scale photobioreactor system for sustainable mass culture of these new strains. The improved production system will increase astaxanthin productivity by 1.5- to 2-fold with at least 30% cost reduction. The broader impact of this technology will be to overcome two major hurdles for the Haematococcus-based astaxanthin industry. The application of this biotechnology will lead to major increases in astaxanthin sales by 2015. It will also result in job expansion in Haematococcus-astaxanthin production and related industries (e.g., cosmetic, pharmaceutical, and nutraceutical). Reduction in production costs will lead to decreasing prices, making astaxanthin more affordable and allowing more people to take advantage of astaxanthin as a strong antioxidant for improving health and well-being.
Algaen Corporation | Date: 2008-06-09
Algae food beverages, namely, seaweed drinks; Dried edible algae; Processed, edible seaweed. Fruits, namely, edible blue green algae. | <urn:uuid:6d401e28-1878-4d67-a983-0ebeac301ff9> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/algaen-corporation-1602609/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00568-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902375 | 1,455 | 2.65625 | 3 |
In the connected home, a garden watering system might monitor and irrigate plants and be connected to and controlled by a smartphone app. In a more complex system, the watering system would be connected to the water supply utility and use intelligent cloud services to combine weather forecasts and pricing to minimise costs. In times of drought, the system might prioritise using limited water supplies for more valuable plants.
The technological and commercial effects of the connected home could have a wide reaching impact on the role of CIOs, and they could be directly or indirectly impacted by the smart home, depending on whether the company is creating the connected products and services or determining how other companies’ products will affect security.
Because smart homes introduce many types of connected devices to a new environment, CIOs must be involved in the security and architectural management of the integration, and host conversations about the ethics of connected home products. With the right preparation and team, CIOs can tackle the issues and smoothly navigate into the world of smart homes.
Support the connected home
Since no company will dominate all facets of the connected home, devices from different suppliers and industries will need to communicate and work together to serve the customer. Products from established vendors in existing industries will need to communicate with unproven companies and new technologies in a world containing up to 10 wireless standards. The smart home will require new architectures and infrastructures, and even novel solutions to allow all of the companies that create the technology to converse.
Build a team with the correct skill set
It’s fairly easy to conclude that the smart home will also require CIOs to review staffing needs. Some of the gaps can be filled with training; others will require new types of experts in embedded software development or embedded user experience design. CIOs may also need staff versed in working with a new type of vendor that specialises in IoT platforms or in supporting new products.
In addition to the direct IT team, CIOs will need to educate executives on the current and potential difficulties of the smart home. Most early IoT adopters are discovering security and integration are more expensive and complex than anticipated, so it’s equally important that business partners understand the complexity of the tasks.
Manage the risks
The technology for smart homes comes with inherent security risks. Most of the smart home involves immature technology that will require security assessment. Before deciding on any methods, CIOs should host a discussion about the digital ethics of what the organisation can and should do with the information. Beyond that, CIO’s teams can contribute to authentication, information management, testing, software licensing, scalability, extensibility, and partner management.
Many of the technologies collect private or sensitive data at the consumer level. For example, some technology is constantly managing data about people’s lives and habits that CIOs will need to secure and ethically manage. Part of the CIOs role in regards to smart homes will involve educating enthusiastic businesses as to the risks of connected technology, and ensuring they are asking the right questions when it comes to hardware or software partners. | <urn:uuid:b3bc8420-faaa-4eed-a79c-a19f2d5da7dd> | CC-MAIN-2017-04 | http://www.cnmeonline.com/insight/why-cios-should-care-about-the-connected-home/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00203-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932073 | 631 | 2.53125 | 3 |
Common Networking Standards and Why They Are Relevant
Often, we don't have time to learn the reasons behind the standards we use. But learning what instigated a standard goes a long way toward not only understanding its importance, but also more easily and effectively applying it in your workplace.
In this hour-long webinar, Global Knowledge instructor Keith Sorn will discuss common networking standards and explain how they were determined and why they are relevant. He will fill you in on things like why it's important to use proper color-coding standards when making cable and why the length limitations on wired cable are essential. He will also explain new standards, such as power over fiber.
Keith is a computational physicist working in the IT field who splits his time between teaching at Global Knowledge and consulting. Keith teaches courses on UNIX/Linux (including writing of the kernel), programming, security and networking. He has worked with numerous and varied clients, including Lawrence Livermore National Laboratory, DoD, IBM, UNISYS and Lockheed, and he enjoys programming, penetration testing and his family life.
- Why STP and UTP are twisted
- Why we have five classes of addresses
- Why we have an ISO OSI networking model
- Why you need to know subnetting | <urn:uuid:8cedefe0-5ca8-4634-9985-d70a0bd75acb> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/resources/resource-library/recorded-webinar/common-networking-standards-and-why-they-are-relevant/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963944 | 260 | 2.671875 | 3 |
Ethics in the Classroom
In this Internet age when kids and adults alike share anything from music to software very easily if not legally, I believe that one function of good instructors is to model ethical behavior for their students. I also believe that it is very important that instructors be overt in pointing out what is legal and what is not. Consider the following two scenarios:
- Jeff is teaching a Microsoft Windows 2000 Professional and Server class for a local community college. He freely checks out full-version, non-expiring copies of Microsoft Windows 2000 Professional and Microsoft Windows 2000 Server to his students to take home with them to set up their own personal networks so that they can have more hands-on lab time.
- Samantha is teaching the same class for a commercial training organization. She is doing the same things with respect to checking out software so the students can set up their home networks.
What do you say about Jeff and Samantha’s actions? Initially it might seem that both are wrong to be sharing software, since once the student gets it, it could be considered to be pirated. But the difference is that the licensing that Jeff’s department has with the MSDNAA (Microsoft Developer Network Academic Alliance) program actually allows him to share software with his students legally. Samantha does not have such a license, so what she is doing clearly violates copyright laws. If she is caught, she and her students could face steep fines. If she does not get caught, at minimum she is modeling to her students that it is acceptable to pirate software.
So since Jeff is legal in providing software to his students, should he just share it and say nothing? No, he needs to make sure each time the software is checked out that the student understands that he or she is getting the software as a result of a special license agreement with Microsoft. Otherwise, Jeff could be modeling the same illegal behavior Samantha displayed.
What about courseware? Many authorized textbooks come bound in such a way that photocopying is not that difficult. But is it legal? Usually it is not. I have known several commercial training organizations that were closed (and fined) once it became known that they were copying authorized materials and handing them out to their students without paying the vendor. I think it is a good idea for instructors to point out to their students how they can tell that their courseware is legal so that students will not unknowingly register for a cheaper class with purloined materials.
On another level, Tom is teaching a high-level, advanced authorized class for other trainers on a popular networking product. He has 10 women and two men in his class. Throughout the day, he readily answers questions from the women, and he keeps telling the men “I’ll be there in a minute.” Both men patiently wait throughout the day only to get to the end of the day with very few of their questions answered. Those questions that were answered were answered in a condescending manner, with Tom implying that the men did not really belong in the class. This is not a scene from the 1960s, nor is it a fabricated story—other than the fact that the 10 women were really 10 men, and the two men were really two women. I, with heavy networking training experience, was one of the two women treated in this manner. I had not experienced such discrimination since the 1970s, so it was a real eye-opener.
Was this illegal? Certainly not. Was it ethical? Definitely not! We as instructors owe our students professional and courteous treatment regardless of their background or other characteristics.
The end of this story is that I wrote a letter to the instructor who displayed preferential treatment to the men in the class. He said that he was not consciously aware of having treated any student different from any of the other students. He did thank me for the note and promised to improve his teaching style. I have followed up with his students, and he actually has improved.
More and more colleges and universities are adding an Ethics course to their curriculum. This is an especially important addition since these same colleges and universities are often providing courses in security that teach students how to hack so that they can prevent hacking. It is also very important since the lines of right and wrong are seriously blurred in our “ready access, I-want-it-now-for-nothing” Internet society. Commercial training organizations likely cannot sell such Ethics training, so they have a moral and ethical responsibility to include ethical behavior in the courses they teach.
Ann Beheler is executive director/dean of Collin County Community College’s Engineering Technology Division, which houses one of the nine Cisco CCNP academic instructor training centers in the world. E-mail Ann at firstname.lastname@example.org. | <urn:uuid:8f177d09-cbbd-4c29-a318-0a936e02077b> | CC-MAIN-2017-04 | http://certmag.com/ethics-in-the-classroom/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00349-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.981097 | 978 | 2.828125 | 3 |
With Chinese HPC surging over the last three years, the US and Western European nations aren’t the only countries suffering from performance anxiety. India now is trying to play catchup with China. In a speech on Tuesday by Indian Prime Minister Manmohan Singh, he called for spending Rs 5,000 crore (about $940 million) on domestic supercomputing to try to keep pace with China.
The initiative outlined by the PM were part of a broader strategy to push for much higher funding of science R&D in the country. A report by India’s IBN Live, notes that the country’s position in science prowess has been on declining relative to China’s due to increasing R&D investments there. From the report:
India has 17 percent of the world population, but publishes only 2.5 percent of the world’s scientific research. Meanwhile, China could topple the US as the world’s research powerhouse by 2013. In the 12th five year plan period, the Prime Minister hopes to double the R&D budget for science and technology from 1 percent of the GDP to at least 2 percent.
As mentioned above, part of that funding increase would go toward increase supercomputing capacity and capability, which would be implemented by the Indian Institute of Science, Bangalore. (Note that some reports in the Indian press imply or state that the idea is to build a $1 billion supercomputer, which is certainly not the case. The funding is clearly meant to build up supercomputing infrastructure and expertise on a much broader basis.) No specifics were provided on exactly how that money would get spent.
By the way, that 2 percent of GDP figure for R&D would put India ahead of China’s at 1.4 percent investment, but behind the US at 2.7 percent and Japan at 3.3 percent. But since all of those economies are larger that India’s, the country’s absolute R&D spending would still trail its competitors.
Also, it should be noted that despite the hand-wringing about Chinese ascendence in science and technology, that country still only accounts for 12.9 percent of global R&D spending. According to Batelle’s 2011 Global R&D Funding Forecast (December 2010), the US and Europe still dominate spending in absolute terms with 34.0 percent and 23.2 percent, respectively. India’s share of global R&D is just 3.0 percent.
The issue for US and Europe is that their global share of science R&D spending is slowly decreasing, while China’s and India’s is rising. That’s mainly the result of the much faster growing economies of the latter two countries in relation to the West, which has flat-lined (and during the recession, declined) over the past three years. During this period, neither the US and Europe have increased R&D spending as a proportion of GDP to compensate for the stagnant growth. Thus, the Indian Prime Minister’s plan to bump up R&D spending relative to GDP puts the country on a much faster path to increase its science and technology footprint — that is assuming the country’s economy can continue to expand. | <urn:uuid:a71b5c13-553a-447c-a34b-2eeed3fcb870> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/01/04/india_aims_to_double_r_d_spending_for_science/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959203 | 668 | 2.546875 | 3 |
Rubin is a security researcher with zveloLABS. His discovery centers around how the PIN is stored on Google Wallet – as a salted hash, but not within the secure hardware part of the phone known as the Secure Element. “It dawned on us,” he writes, “that a brute-force attack would only require calculating, at most, 10,000 SHA256 hashes. This is trivial even on a platform as limited as a smartphone. Proving this hypothesis took little time.”
Rubin’s discovery, comments Chester Wisniewski at Sophos, “is that a lost or stolen Android phone with Google Wallet configured is nearly as bad as handing over your credit card to whomever finds it.” The Wallet is designed to allow six attempts to input the PIN number before it automatically wipes the credit card details. “The trouble is the salted hash of your PIN is stored on the filesystem of the phone,” continues Wisniewski, “and Android phones are trivial to root. With root access you can bruteforce the PIN without using any of your official attempts.”
“Because the PIN is a four-digit code,” confirms McAfee’s Jimmy Shah, “an attacker can generate all possible PINs (0000-9999), hash them, and compare against the extracted PIN. On a real phone this takes about four seconds.”
Google, says Rubin, “was extremely responsive to the issue, but ran into several obstacles preventing them from releasing the fixed app.” The problems seem to be more to do with Android and bank policies rather than technology. By moving the PIN into the Secure Element chip, Google might be changing who is responsible for security of the PIN, and that might have an effect on whom the banks hold responsible for fraud.
Jaime Blasco, head of labs with AlienVault, explains that most payment cards require that the account holder take reasonable steps to protect their card details, in return for financial protection against card fraud. He argues that storing card details on the Google Wallet system – regardless of these latest PIN security issues – may compromise the card issuer’s security requirements. Since the Google Wallet is a hybrid on-device/cloud data storage system his own solution would have been to store the user’s PIN in the cloud, “meaning that a brute force cracker attack of this type would be a lot more difficult, if not impossible.”
Meanwhile, Google has stated: “The Zvelo study was conducted on their own phone on which they disabled the security mechanisms that protect Google Wallet by rooting the device. To date, there is no known vulnerability that enables someone to take a consumer phone and gain root access while preserving any Wallet information such as the PIN.”
Rubin disagrees, and feels “that the fact that this attack requires root permissions does not in the least bit diminish the risk it imposes on users of Google Wallet.” | <urn:uuid:fa37581e-a3ee-460e-9df9-bcd046d9439c> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/google-wallet-vulnerable-to-brute-forcing-the-pin/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00313-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951783 | 622 | 2.53125 | 3 |
Will Network Complexities Hamper the Internet of Things?
Abstraction is the only way to build a true IoT “without having to physically rewire the entire world.”
The Internet of Things is arguably the most complex undertaking the world has ever seen. We’re talking about wiring up literally trillions of devices spread across the four corners of the globe and then corralling all of that data so it can be subjected to cutting-edge analytics to produce both macro views of reality and highly personalized digital experiences.
This is bigger than the pyramids, bigger than the Panama Canal, bigger than the moonshot.
The key element in all of this, as usual when dealing with data, is networking. In order to work properly, the IoT will need a coordinated, cohesive networking environment from the data center to the cloud to the plethora of devices out on the edge – negotiating all of the systems, protocols, topologies and other constructs in between.
The only way to realistically accomplish this is through abstraction, says Meshdynamics’ Francis Dacosta. For one thing, the extreme number of edge devices is too much for even IPv6 to handle, and the fact that most of these will have little to no memory or processing makes them difficult to incorporate into current software defined networking approaches. At the same time, there is the legion of M2M networks running on specialty program logic controllers (PLCs), which means they are largely isolated from the IP ecosystem. By abstracting all of these network topologies, we can at least entertain the possibility of building an integrated IoT without having to physically rewire the entire world, but it is still a tall order requiring the cooperation of numerous entrenched interests.
The scope of the problem starts to take shape when you consider the mish-mash of solutions currently vying for just the wireless portion of IoT connectivity. As EDN.com’s Christian Legare points out, local wireless LANs can consist of Bluetooth, Zigbee, 6LoWPAN, Thread and/or industrial protocols like WirelessHART, while the wide area is gravitating toward low-power WAN (LPWAN) through LoRa, SIGFOX and other technologies. And even once you get to wired IP infrastructure, there are all sorts of ways to reconfigure gateways, hubs, concentrators and other systems away from the centralized processing architectures of today’s Internet toward the more distributed workflows of the IoT. Groups like the OpenFog Consortium (OFC) are starting to address these issues, but the scope of the problem is so large that mainstream adoption of the IoT might not happen for a decade or more.
Vendor solutions are also starting to target a unified IoT network, but it is hard to see how one company, or even several, can make it happen. Cisco recently announced a series of gateways designed to link low-power LoRa wireless devices with Ethernet infrastructure. This should allow organizations to draw data from multiple devices without resorting to high-power cellular networks, but it is still a fairly limited solution given the range of other platforms vying for LPWAN dominance.
Meanwhile, Avaya recently demonstrated its SDN Fx architecture featuring the Open Network Adapter (ONA) IoT gateway, which seeks to leverage the Open vSwitch to provide connectivity for Ethernet-equipped devices. The SDN Fx architecture is built on the company’s Fabric Connect system, which scales out to 160,000 devices. This is impressive but a drop in the bucket compared to the full IoT. So again, at some point the Avaya IoT will have to connect to a broader networking ecosystem in order to deliver the functionality that the enterprise has come to expect.
While integrating today’s topologies to suit the IoT is a daunting prospect, it is by no means impossible. The mere presence of a largely integrated global voice and data network is proof that networks can function at this scale. But the question is whether it can be done cheaply while still providing performance that is effective enough to make the whole effort worthwhile.
At the moment, there are a lot of powerful interests who see substantial dollar signs at the end of this particular rainbow, so it seems that one way or another, it will get done.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming. | <urn:uuid:f0e7eb03-a8bd-42b4-9603-f6f891fa6eb7> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/datacenter/datacenter-blog/will-network-complexities-hamper-the-internet-of-things.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927955 | 908 | 2.5625 | 3 |
How Much Text is in a Kilobyte or Megabyte?
A bit is the most basic unit of information. At their most fundamental level, most modern computers operate on binary bits which means that they can have two states, usually specified as a 0 or 1. Long strings of these bits can be used to represent most types of information including text, pictures and music.
Most modern computers are binary systems and therefore, they are particularly well suited to working with bits. Pure binary information, however, is of little use to humans. The binary number 11000101110 is equivalent to 1582; it is obvious that we are much more suited to working with digits and text instead of ones and zeros.
To help make computers more like our language-based way of thinking, groups of bits are joined into bytes. One byte is comprised of 8 bits. A set of 8 bits was chosen because this provides 256 total possibilities which is sufficient for specifying letters, numbers, spaces, punctuation and other extended characters. This very sentence, for example is composed of 125 bytes because there are 125 letters, digits, spaces and punctuation marks. Keep in mind that we are discussing pure text; some word processing programs, include other sorts of formatting data, and therefore the filesizes will be greater than the number of characters in the file.
It is estimated that a kilobyte can accommodate about 1/2 of a typewritten page. Therefore, one full page requires about 2 kilobytes. The chart below illustrates the number of bytes in common terms such as kilobyte and megabyte and how much text could be stored:
|Name||Number of Bytes||Amount of Text|
|Kilobyte (KB)||210 or 1,024||1/2 page|
|Megabyte (MB)||220 or 1,048,576||500 pages or 1 thick book|
|Gigabyte (GB)||230 or 1,073,741,824||500,000 pages or 1,000 thick books|
|Terabyte (TB)||240 or 1,099,511,627,776||1,000,000 thick books|
|Petabyte||250 or 1,125,899,906,842,624||180 Libraries of Congress|
|Exabyte||260 or 1,152,921,504,606,846,976||180 thousand Libraries of Congress|
|Zettabyte||270 or 1,180,591,620,717,411,303,424||180 million Libraries of Congress|
|Yottabyte||280 or 1,208,925,819,614,629,174,706,176||180 billion Libraries of Congress|
The Library of Congress in Washington D.C. is said to be the world's largest library with over 28 million volumes. The numbers listed in the chart above are based on the assumption that the average book has 200 pages. Most Compact Discs (CD) can hold approximately 750 megabytes (mB) which is roughly equivalent to 375,000 pages of text! DVDs can store 4.7 gigabytes (gB) or 2.3 million pages. The next generation of optical media, Blu-Ray discs, can hold an astonishing 27 gigabytes or 13.5 million pages which is roughly equivalent to the text contained in 67,500 books!
|Data Measurement Chart|
|Bit||Single Binary Digit (1 or 0)|
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:baacd724-54fa-4f9a-b603-e60b5a17616a> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-36.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.813957 | 815 | 3.6875 | 4 |
The quality of a decision often depends on the quality of a forecast. Forecasting models are an integral part of many DSS. One can build a forecast model or one may use preprogrammed software packages.
The major use of forecasting is to predict the value of variables at some time in the future. The future time period of interest depends on "when" we want to evaluate the results. For example, in an investment decision we may be interested in prices and income a year from today, while in a capital investment decision we may be interested in projected prices and income during the next five years. Generally speaking, we distinguish between two types of forecasts: (a) short run, where the forecast is used mainly in deterministic models, and (b) long run, where the forecast is used in both deterministic and probabilistic models.
Many types of forecasting models exist, but forecasting remains an extremely difficult task (cf., Makridakis and Wheelwright, 1982). What is going to happen in the future depends on many factors that are uncontrollable. Furthermore, data availability, accuracy, cost, and the time required to make a forecast play an important role in choosing a forecasting method. We can select forecasting methods based on convenience, popularity, expert advice, and guidelines from prior research. In general the last two approaches should be used in building Forecasting DSS.
The best Web resource on Forecasting Models and Methods is the Forecasting Principles site (hops.wharton.upenn.edu/forecast/) maintained by J. Scott Armstrong. It provides a comprehensive review of our knowledge about forecasting. The site also provides: evidence showing the relevance of forecasting principles to a given problem, expert judgment about the applicability of forecasting principles, sources of data and forecasts, details about how to use forecasting methods, and guidance to locating the most recent research findings.
Forecasting methods can be grouped in several ways. One classification scheme distinguishes between formal forecasting techniques and informal approaches such as intuition, expert opinions, spur-of-the-moment guesses, and seat-of-the-pants predictions.
The following paragraphs review the more formal and analytical methods that have been used in building Forecasting DSS. The methods reviewed include naïve extrapolation, judgment methods, moving averages, exponential smoothing, time series extrapolation, and regression and econometric models. Each method is discussed briefly and major issues associated with using the methods are summarized. According to Scott Armstrong, given enough data, quantitative methods are more accurate than judgmental methods. He notes that when large changes are expected, causal methods are more accurate than naive methods. Also, simple methods are preferable to complex methods; they are easier to understand, less expensive, and seldom less accurate.
Naïve Extrapolation. This technique involves collecting data and developing a chart or graph of the data. The user extrapolates or estimates the data for future time periods. This technique is easy to update and minimal quantitative knowledge is needed. It is easy and inexpensive to implement using a spreadsheet. However it provides limited accuracy.
Judgment Methods. Judgment methods are based on subjective estimates and expert opinion, rather than on hard data. They are often used for long-range forecasts, especially where external factors may play a significant role. They also are used where historical data are very limited or nonexistent. A group DSS could be used with a judgment method like the Delphi technique to obtain judgments. The results are not necessarily accurate, but the experts may be the best source of forecast information.
Moving Average. This type of forecast uses an average of historical values that "moves" or includes the new period in each succeeding forecast. It is for short-run forecasts and the results are easy to manipulate and test. Overall, a Forecasting DSS built using a moving average model will be easy to understand and inexpensive.
Exponential Smoothing. The historical data is mathematically altered to better reflect the forecasterís assumptions about the future of the variable being forecast. This model is similar to the moving average model, but it is harder to explain. A short-term forecast based on exponential smoothing is often acceptable.
Time-series Extrapolation. A time series is a set of values for a business or economic variable measured at successive intervals of time. For example, quarterly sales of a firm make up a time series. Managers use time-series analysis in decision-making because they believe that knowledge of past behavior of the time series might help understand the behavior of the series in the future. In managerial planning we often assume that history will repeat itself and that past tendencies will continue. Time-series analysis efforts conclude with the development of a time-series forecasting model that can then be used to predict future events. Both moving average and exponential smoothing use a time series of data.
Regression and Econometric Models. Association or causal forecasting methods use data analysis tools like linear and multiple regression to find data associations and, if possible, cause and effect relationships. Causal methods are more powerful than time-series methods, but they are also more complex. Their complexity comes from two sources: First, they include more variables, some of which are external to the situation. Second, they use sophisticated statistical techniques for evaluating variables. Causal approaches are most appropriate for intermediate term (3-5 year) forecasting. An econometric model using simultaneous equations for a supply-demand system is x demand= f (x price, yield, etc...) and x supply = f (x price, production inputs prices, etc...) Econometric Resources on the Internet (www.oswego.edu/~kane/econometrics/) by John Kane contains links to a variety of resources. You can work with Fairmodel, a macroeconometric model of the USA economy, to forecast the economy and do policy analysis (fairmodel.econ.yale.edu).
In general, subjective forecasting methods are used in those cases where quantitative methods are inappropriate or cannot be used. Time pressure, lack of data, or lack of money may prevent the use of quantitative models. Complexity of historical data may also inhibit its use. Model-Driven DSS primarily incorporate quantitative methods and often use multiple forecasting models. | <urn:uuid:14a946a3-7423-46f8-8bcb-a7814080e1c9> | CC-MAIN-2017-04 | http://dssresources.com/subscriber/password/dssbookhypertext/ch9/page17.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00249-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923045 | 1,274 | 2.90625 | 3 |
Definition: A balanced search tree in which every node has between m/2 and m children, where m>1 is a fixed integer. m is the order. The root may have as few as 2 children. This is a good structure if much of the tree is in slow memory (disk), since the height, and hence the number of accesses, can be kept small, say one or two, by picking a large m.
Also known as balanced multiway tree.
Generalization (I am a kind of ...)
balanced tree, search tree.
Specialization (... is a kind of me.)
2-3-4 tree, B*-tree, 2-3 tree.
See also B+-tree, multiway tree, UB-tree for multidimensional indexing, external memory data structure.
The origin of "B-tree" has never been explained by the authors. ... "balanced," "broad," or "bushy" might apply. Others suggest that the "B" stands for Boeing. [Bayer and McCreight were at Boeing Scientific Research Labs in 1972.] Because of his contributions, however, it seems appropriate to think of B-trees as "Bayer"-trees. - Douglas Comer, The Ubiquitous B-Tree, Computing Surveys, 11(2):123, June 1979.
After [HS83, page 499].
A tutorial on the basics, and variants.
Rudolf Bayer and Edward M. McCreight, Organization and Maintenance of Large Ordered Indices, Acta Informatica, 1:173-189, 1972.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 16 November 2009.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "B-tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/btree.html | <urn:uuid:23247e5d-2760-495b-8dae-9bbccaba20b4> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/btree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937266 | 453 | 3.1875 | 3 |
The Wasps Know
Trained wasps could be a low-tech, low-cost weapon in the war against terrorism.
Researchers at the University of Georgia-Tifton Campus created a handheld "Wasp Hound" -- five wasps, each a half-inch long, are placed in a plastic cylinder that is 15 inches tall. The Wasp Hound, which costs roughly $100, has a vent in one end and a camera that connects to a laptop computer.
When the wasps sense an odor they've been trained to detect, such as the chemicals from explosives or biological weapons, the insects gather by the vent -- a response that can be measured by the computer or actually seen by observers.
The researchers said their device is ready for pilot tests and could be available for commercial use in five to 10 years. The wasps, when exposed to some chemicals, can detect amounts as low as four parts per billion. -- USA Today
It takes Internet users about 50 milliseconds -- roughly the duration of one frame of standard television coverage -- to form lasting impressions of Web sites, according to Canadian researchers.
A recent study shows that the brain can make flash judgments almost as fast as the eye can receive information. Volunteers got the briefest glimpses of Web pages previously rated as being either easy on the eye or particularly jarring, and were asked to rate the Web sites on a sliding scale of visual appeal.
Even though the images flashed up for just 50 milliseconds, viewers' judgments of a Web site were nearly the same as judgments made after a longer period of scrutiny. -- Nature.com
Two Purdue University industrial designers have created a personal computer design that may change how people watch movies, listen to music, play games and read magazines.
The concept computer, called Bookshelf, addresses digital copyright protection and the inconvenience of accessing content. The device both physically resembles and functions like its namesake: Hard drive attachments containing movies, books, magazines and other content are placed on the Bookshelf for use. As attachments are added, the Bookshelf becomes its own multimedia library custom-built by its owner.
The device won the $50,000 Judge's Award at Microsoft's Next Generation Windows PC Design Competition.
The Bookshelf's CPU is a 7-inch cube which operates hard-drive attachments supplied by digital service providers, allowing them to protect copyrights while still accommodating user convenience and portability. -- Purdue University
Pinellas County, Fla., Tax Collector Diane Nelson will begin online tax certificate sales this summer. Tax certificate sales let investors purchase property tax liens on real-estate parcels whose owners have not paid their annual taxes. The process allows the county tax collector to collect virtually 100 percent of taxes levied and gives added time for the delinquent taxpayer to repay.
The tax certificate sale is conducted as a reverse auction, giving the investor who bids the lowest interest rate the right to buy the certificate. The intended outcome is to mitigate the burden so the delinquent taxpayer can eventually repay. -- National Association of Counties
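The reverse-auction mechanic described above — the investor bidding the lowest interest rate wins the certificate — can be sketched in a few lines. The function and tie-breaking rule below are illustrative assumptions, not details of Pinellas County's actual system.

```python
# Minimal sketch of a tax-certificate reverse auction: investors bid the
# interest rate they are willing to accept, and the lowest rate wins.
# The names and the tie-breaking rule are illustrative assumptions.

def select_winning_bid(bids):
    """Return the (investor, rate) pair with the lowest bid rate.

    bids -- list of (investor_name, interest_rate_percent) tuples.
    Ties are broken by earliest submission (list order).
    """
    if not bids:
        raise ValueError("no bids submitted")
    # min() returns the first minimal element, so earlier bids win ties.
    return min(bids, key=lambda bid: bid[1])

bids = [("Investor A", 18.0), ("Investor B", 9.25), ("Investor C", 12.5)]
winner = select_winning_bid(bids)
print(winner)  # ('Investor B', 9.25)
```

A lower winning rate means a lighter interest burden on the delinquent taxpayer, which is the mitigation the county describes.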
The National City Network (NCN) officially debuted December 2005 at the Congress of Cities in Charlotte, N.C.
The NCN is designed to provide a gateway for cities and towns across the country to learn about municipal issues and share information with each other. The NCN delivers city-related content via a Web portal, with policy analysis from respected think tanks, upcoming events and topical news stories from sources across the country.
NCN TV creates a multimedia experience, with a video archive of feature stories about cities, teleconferences and news events. In the coming months, NCN will continue to add content to both its Web portal and video archive. Cities will have an opportunity to add to NCN content. A Pilot Cities Program will launch early in 2006. -- The National League of Cities
He Said, She Said
The percentage of women going online is quickly approaching that of their male counterparts, and according to Pew Internet & American Life Project, while men log on for entertainment, women use the Internet for communicating.
The demographic profile of U.S. male and female adults who go online is as follows.
Males: 68 percent
Females: 66 percent
The main motivators for those networking their homes are broadband Internet access and multiple PC ownership, according to the 2005 Canadian Technologically Advanced Family survey, which also found that 79 percent of U.S. households have a PC, while only 31 percent have two or more. -- The Yankee Group
Open Door Policy
Symantec's study of basic Wi-Fi security in neighborhoods in New York, Los Angeles, Chicago and Houston observed how well consumers are protecting their wireless networks and personal information.
The results? A large percentage of Wi-Fi users are leaving their doors wide open for hackers and crooks to steal their identities and personal information.
With a car, a laptop running free software and a simple antenna, Symantec logged thousands of wireless networks in four different markets.
The following percentages of wireless access points detected in residential neighborhoods had no encryption whatsoever:
52 percent in New York
38 percent in Houston
35 percent in Chicago
Of 686,683 consumer fraud complaints filed with the FTC in 2005, the top categories were:
Identity theft: 37 percent
Internet auctions: 12 percent
Foreign money offers: 8 percent
Shop-at-home/catalog sales: 8 percent
Prize/sweepstakes and lotteries: 7 percent
Internet services and computer complaints: 5 percent
Business opportunities and work-at-home plans: 2 percent
Advance-fee loans and credit protection: 2 percent
Telephone services: 2 percent
Others: 17 percent
Source: FTC's Consumer Fraud and Identity Theft Complaint Data report
A major component of President Bush's Competitiveness Agenda is the Workforce Innovation in Regional Economic Development (WIRED) initiative, under which the U.S. Department of Labor will invest $195 million in 13 regional economies.
WIRED's goal is to transform regional economies by creating long-term strategic plans that prepare workers for high-skill, high-wage opportunities in the future.
The regional economies to receive $15 million over three years are:
Upstate New York
Piedmont Triad North Carolina
Western Alabama and eastern Mississippi
North central Indiana
Greater Kansas City
Denver metro region
Central and eastern Montana
The South Korean government plans to introduce legislation in September 2006, requiring financial institutions to compensate customers who are victims of online fraud and identity theft. The new laws cover virtually all financial losses resulting from online identity theft and account hacking -- even if the banks are not directly responsible.
The Korean Ministry of Finance and Economy said the new laws will help alleviate the fears of 23 million Korean e-banking, phone banking and ATM users over incurring losses due to online identity theft.
The move follows an incident earlier this year when Korea Exchange Bank refused to compensate customers who incurred losses from an online banking scam, unless they could prove the bank was at fault. -- Finextra.com
Surrender Your Searches
Not everyone agrees that search engines should comply with a recent government order for information about citizen search habits, according to a national poll of 800 U.S. respondents. The views were as follows:
Shouldn't turn over search queries: 50 percent
Should turn over queries: 44 percent
-- Center for Survey Research and Analysis at the University of Connecticut | <urn:uuid:0e9df25c-181f-482e-84f0-e8036583527d> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/100494264.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00001-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916437 | 1,655 | 2.75 | 3 |
One of the keys to a properly functioning data center with high availability is ensuring that clean, steady power reaches the IT equipment. Part of this critical task falls to uninterruptible power supplies and backup generators, but this power must be distributed effectively and safely. Breakers protect branch circuits from overloading, but this safety precaution can become a tremendous hassle if the circuit isn't loaded properly. Furthermore, increased energy efficiency can be difficult to achieve without knowledge of how circuits perform over time. To address these and other concerns, branch circuit monitoring systems (BCMSs) can give data center managers the information they need to identify potential problems, avoid branch circuit overloading and identify candidates for efficiency improvements.
Why Monitor at the Branch Circuit Level?
Tracking overall data center power consumption along with overall IT power consumption is sufficient for calculating the power usage effectiveness (PUE) metric, but it lacks the granularity required to know what's happening on individual branch circuits. Tracking power consumption down to the level of individual components or even power strips may be impractical, but monitoring power at the branch circuit level, although slightly less granular, can yield information that enables the data center manager to identify problems and avoid downtime.
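PUE itself is just a ratio (total facility power divided by IT equipment power); a minimal sketch with made-up figures:

```java
public class Pue {
    // PUE = total facility power / IT equipment power; 1.0 is the ideal.
    static double pue(double facilityKw, double itKw) {
        return facilityKw / itKw;
    }

    public static void main(String[] args) {
        // Hypothetical facility drawing 1,500 kW overall for 1,000 kW of IT load.
        System.out.println(pue(1500.0, 1000.0)); // 1.5
    }
}
```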
Consider, for instance, the circuit breaker on a branch circuit. These devices are designed to ensure that a circuit is not overloaded (a potential hazard) by breaking the circuit when a certain maximum current is reached. When this happens, equipment connected to the circuit loses power. The problem with circuit breakers is that they are unforgiving: they don't care whether the heavy power load comes at a time of day when downtime could really hurt business.
Part of the difficulty with circuit breakers is that IT power loads can vary significantly over time. Relying exclusively on the manufacturer’s power ratings for equipment may leave a margin of safety when loading a circuit (i.e., the circuit load is the sum of the equipment ratings, and that sum does not exceed the maximum power of the circuit), but it can also leave the circuit’s capacity vastly underutilized. If a circuit is loaded with servers that, for instance, never reach half their rated power levels, that circuit will never exceed half its capacity. Data centers, thus, may often “overload” the circuit (in the sense that if every piece of equipment on the circuit reached maximum power consumption, the circuit capacity would be exceeded) to avoid wasting costly infrastructure. But as computational loads vary, this overloading can lead to a “real” overload, tripping the circuit breaker. Such an occurrence can easily correspond with heavy usage of the data center—a terrible time for an outage!
Furthermore, changes in the configuration of a data center, even if tracked carefully, can still increase the chances of overloading—or result in greater underutilization of circuit capacity. In a facility that is growing or making regular upgrades or adjustments, the number of changes can easily cause problems in this regard.
The problem is thus a lack of information: without measurements of power usage on branch circuits, data center managers can’t be sure that the load on the circuit isn’t too much. When connecting equipment, data center managers must choose an appropriate estimate of the power consumption of that equipment—usually some kind of average consumption. But an average doesn’t take into account the effect of power spikes, particularly if several pieces of equipment all exceed their average consumptions all at once. An Eaton white paper (“Who tripped the circuit?”) clearly summarizes this situation: “Under normal operating conditions, data center managers could mistakenly assume there’s plenty of capacity on a branch breaker or panel board. After all, day-to-day power consumption seems well within limits. But there’s always the potential for one or more servers to increase computational load, draw more power to match the workload, and cause a branch breaker to trip. With the increased use of denser, more space-saving equipment, the risk of overloading a branch breaker is higher than ever.”
Benefits of BCMS for the Data Center
This situation makes clear some of the benefits of branch circuit monitoring. But simply taking periodic measurements of a circuit (e.g., with a handheld current meter) cannot detect the short-lived surges in power consumption that can still trip a breaker. A branch circuit monitoring system (BCMS) takes (nearly) continuous readings of current in each connected branch circuit, collecting the data and presenting it to the data center manager for analysis and, if necessary, action. By looking at power consumption trends over the course of a day or many days, the data center manager can identify loads that are in danger of tripping a breaker (or that significantly underutilize the circuit's capacity).
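The value of continuous sampling over spot checks can be sketched with a toy headroom check. The 80 percent threshold (mirroring the common continuous-load rule of thumb) and the sample values are illustrative assumptions, not part of any particular BCMS product:

```java
public class BranchCircuitCheck {
    // Flag a circuit when its peak observed current exceeds the given
    // fraction of its breaker rating; the threshold is a tunable assumption.
    static boolean nearOverload(double breakerAmps, double[] samples, double threshold) {
        double peak = 0.0;
        for (double s : samples) {
            peak = Math.max(peak, s);
        }
        return peak > breakerAmps * threshold;
    }

    public static void main(String[] args) {
        double[] quiet = {6.2, 7.1, 6.8};   // well under a 20 A breaker
        double[] spiky = {9.0, 17.5, 10.2}; // brief surge near the limit
        System.out.println(nearOverload(20.0, quiet, 0.8)); // false
        System.out.println(nearOverload(20.0, spiky, 0.8)); // true
    }
}
```

A spot check taken outside the surge in `spiky` would miss the problem entirely, which is the point of continuous monitoring.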
In discussing the benefits of its Powerware Energy Management System for the data center, Eaton notes some of the benefits of tracking power consumption at the branch circuit level: “With this information, you always know how close a circuit is to exceeding its overall rating—and whether or not a device can be added to a branch circuit or panelboard. This insight enables you to operate at maximum efficiency, using data center assets, energy and space wisely. Without this insight, you wouldn’t know if you’re pushing the limits of a breaker or any safety factor.”
A good BCMS, like many other types of data center infrastructure management (DCIM) products, provides both local and remote access to collected data and information. A centralized interface enables the data center manager to monitor and review branch circuit data from anywhere—inside the data center or beyond. BCMSs often include data analysis tools as well, providing trending and information to aid the data center manager in getting a comprehensive power picture.
In addition to helping achieve optimal loading of branch circuits, a BCMS can also provide information that the data center manager can use to increase power efficiency. Examining power consumption at the branch circuit level—rather than just the facility level—enables better isolation of equipment that is underutilized or inefficient, aiding in equipment consolidation and other efficiency improvement projects.
Many data center best practices are justified by the low capital and operational costs of added infrastructure relative to the massive business costs of data center downtime. As data center managers seek to do more with less, underutilized and inefficient branch circuits are a drag on an optimally running facility, but overloading can result in downtime when usage is at its highest—a dreadful proposition for the business. Branch circuit monitoring through a centralized BCMS can enable better utilization of capacity while avoiding too close an approach to maximum circuit capacities, which, if exceeded, result in tripped breakers and thus downtime. Furthermore, continuous power monitoring at the branch circuit level allows the data center staff to identify potential problems and inefficiencies. Although some operational and maintenance costs are associated with a well-deployed BCMS, the benefits—particularly relative to the costs of downtime—can quickly outweigh these costs for many data centers.
Photo courtesy of Tom Raftery | <urn:uuid:4f7137e2-c298-40dc-9987-9371e37c5478> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/bcms-in-the-data-center/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00029-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921058 | 1,455 | 2.53125 | 3 |
Last May, Google announced that it was using a herd of goats to mow the lawn at its Mountain View, Calif., headquarters. This year, HP has taken the lead in innovative uses for livestock in business. Its researchers have come up with a way to power data centers with cow manure. A farm of 10,000 dairy cows could fulfill the power requirements of a 1-megawatt data center - a medium-sized facility - with power left over to support other needs on the farm, say the HP Labs researchers.
In this process, the heat generated by the data center can be used to increase the efficiency of the anaerobic digestion of animal waste. This results in the production of methane, which can be used to generate power for the data center. This symbiotic relationship allows the waste problems faced by dairy farms and the energy demands of the modern data center to be addressed in a sustainable manner.
HP offers some fun facts about cows and manure that you might not already know:
* The average dairy cow produces about 55 kg (120 pounds) of manure per day, and approximately 20 metric tons per year - roughly equivalent to the weight of four adult elephants.
* The manure that one dairy cow produces in a day can generate 3.0 kilowatt-hours (kWh) of electrical energy, which is enough to power television usage in three U.S. households per day.
* A medium-sized dairy farm with 10,000 cows produces about 200,000 metric tons of manure per year. Approximately 70 percent of the energy in the methane generated via anaerobic digestion could be used for data center power and cooling, thus reducing the impact on natural resources.
* Pollutants from unmanaged livestock waste degrade the environment and can lead to groundwater contamination and air pollution. Methane is 21 times more damaging to the environment than carbon dioxide, which means that in addition to being an inefficient use of energy, disposal of manure through flaring can result in steep greenhouse gas emission taxes.
* In addition to benefiting the environment, using manure to generate power for data centers could provide financial benefit to farmers. HP researchers estimate that dairy farmers would break even in costs within the first two years of using a system like this and then earn roughly $2 million annually in revenue from selling waste-derived power to data center customers.
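A quick back-of-the-envelope check of the figures above (10,000 cows at 3.0 kWh per day, with roughly 70 percent of the methane energy usable) lands close to the 1-megawatt claim:

```java
public class CowPower {
    // Average continuous electrical power implied by the stated figures.
    static double averageKw(int cows, double kwhPerCowPerDay, double usableFraction) {
        return cows * kwhPerCowPerDay * usableFraction / 24.0;
    }

    public static void main(String[] args) {
        // HP's figures: 10,000 cows, 3.0 kWh per cow per day, ~70% usable.
        double kw = averageKw(10_000, 3.0, 0.70);
        System.out.printf("Average continuous power: %.0f kW%n", kw); // ~875 kW
    }
}
```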
HP did not say when dung-powered data centers will be generally available, but you can read more about its cow power theories in this paper. | <urn:uuid:bb7cf948-c3d9-4ce3-be9b-e7e937e3843f> | CC-MAIN-2017-04 | http://www.banktech.com/infrastructure/hp-develops-manure-powered-data-centers/d/d-id/1293828 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935198 | 507 | 3.171875 | 3 |
It is difficult to predict a given person’s success. Whether in a work setting, or just generally in life, the problem with prediction is determining which characteristics are predictive.
Happily, there has been progress in this area. A pair of scientists, Duckworth and Dunn, have been studying a concept called grit for a number of years now.
- Grit noun /ɡrɪt/
- Perseverance and passion towards long-term goals.
The scientists found that a simple rating of grit for multiple types of people was more predictive of success than any other measurement available, including wealth, IQ, SAT scores, etc.
Talent is more familiar to us. It basically represents one’s ability to be exceptional in a particular area.
- Talent noun /ˈtalənt/
- Natural aptitude or skill.
The GT System combines these two attributes into a composite score in order to predict an individual's success.
Each person starts as a 5/5, meaning they have a five score in Grit and a five score in Talent. From there, scores are adjusted based on each line in each section.
| Grit | Talent |
|---|---|
| bachelors [+2/6] | IQ 120+ [+2/7] |
| twitter >1K [+2/6] | elite tester [+2/6] |
| twitter >10K [+3/7] | shadowlabs [+2/6] |
| project[s] >100K/users [+3/8] | minor creator [+2/7] |
| active github [+2/6] | exceptional [+2/8] |
| overweight [-2/4] | major creator [+3/8] |
| obese [-3/4] | non-creator [-2/3] |
Here are some descriptions for the various rating criteria:
- Project >100K Users
A major project that has over 100,000 direct users per year. Examples might include a book, a website, a software program, a mobile app, a band, etc.
- Elite Tester
This is a peer rating, whereby most everyone on the team accepts that the person is an elite tester.
- Minor Creator
This is someone that has put out a lot of small things, e.g. little scripts, tutorials, talks, small applications, etc. They produce, but not on a large scale with big projects.
- Major Creator
This is someone that creates larger projects, complete with a large userbase, an announcement, a website, a talk, etc. Major projects, artistic creations, etc.
- Exceptional
Perhaps the most subjective criterion on the list right now, this is a peer rating whereby everyone who interacts with the person thinks they're phenomenal in one or more ways. It seems more subjective than it is: when people have this, it's obvious. The key is to ask people who are themselves major players.
[ NOTE: Attributes do not stack, so you only get one bonus for Twitter, one reduction for weight, etc. ]
Once scores have been calculated, you take the scores for each and rank them on the grade scale like so:
You then take the lowest score of the two, so if someone ends up with a B in Grit, but a D in talent, they are given a D ranking.
This is consistent with my experience and the experience of others I've spoken with. The idea is that you are limited—over long periods and many different types of projects—not by your strengths, but by your weaknesses.
So if you’re extremely disciplined with a great work ethic, but not too creative or talented, that lack of creativity holds you back. Conversely, if someone is brilliant but can’t seem to ship anything, that person is limited by being unable to finish things.
The result, which should be both intuitive and shown in the data, is that true A and B players have high marks in both talent and grit.
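One plausible reading of the scoring rule (start at 5/5, sum the non-stacking adjustments, grade each total, and take the lower of the two grades) can be sketched as follows. Note that the original post's grade scale was not preserved, so the letter cut-offs here are hypothetical:

```java
public class GtScore {
    // Hypothetical letter cut-offs; the post's actual grade scale was not preserved.
    static char grade(int score) {
        if (score >= 8) return 'A';
        if (score >= 6) return 'B';
        if (score >= 4) return 'C';
        return 'D';
    }

    // The final rank is the *lower* of the two grades, per the system's rule.
    static char rank(int grit, int talent) {
        // A later letter means a worse grade, so take the max character.
        return (char) Math.max(grade(grit), grade(talent));
    }

    public static void main(String[] args) {
        // Start at 5/5, then sum one non-stacking adjustment per matching line:
        int grit = 5 + 2 + 3;   // bachelors [+2] and project >100K users [+3]
        int talent = 5 + 2;     // elite tester [+2]
        System.out.println(rank(grit, talent)); // prints B (limited by talent)
    }
}
```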
The goal of this system is to be able to quickly rank people who you may want to work on something with. This could be at your main job, or a side project, or any number of things.
The idea is that you could, when time is a factor, refer to someone as an “A Player”, and have that actually mean something. And if you say someone else is a “C Player”, with low Grit, for example, the person you’re talking to can know exactly what they’re getting into by taking them on.
Again, I cannot stress enough that the system must be shown to be effective by comparing ratings to actual performance. There are a thousand rating systems out there, with arbitrary, biased, and fetishized ranking criteria, that fail to predict anything.
The goal of the GT Rating system is to be truly predictive, as borne out by data. This requires doing the hard work of comparing performance to ratings, as well as working hard on the list of rating criteria.
It could turn out that the system doesn’t work for whatever reason, and that would be fine. I’m willing to try, however. Let me know if you’d like to help.
- Any system of this type is bound to be imperfect. The question is whether it can be shown to have predictive value with respect to performance in a particular role. This requires comparing these ratings beforehand to performance attained later, which is an ongoing project.
- I would love input on more attributes to add, both for Grit and Talent, and in both directions (positive and negative).
- This attribute list is currently tuned for my use; you’d have to adjust it accordingly. | <urn:uuid:d35e0e98-9d82-4a70-9bb4-9eb6669e5b34> | CC-MAIN-2017-04 | https://danielmiessler.com/projects/gt-rating-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9437 | 1,198 | 2.671875 | 3 |
Canada is one of the few countries to have passed adequate federal legislation related to protecting personal information. However, unreliable enforcement and a complicated processing system make the Personal Information Protection and Electronic Documents Act (PIPEDA) less effective than it can be. Ensure compliance with PIPEDA to improve overall information security and to avoid its complex violation investigation system.
Data Privacy Guidelines
PIPEDA outlines how organizations should collect, use, and disclose personal information, as well as how to facilitate the use of electronic documents in business transactions. The ten fair information principles in Table 1 have contributed to Canada's high security standing in the international playing field, especially influencing the EU's decision for granting approval regarding data adequacy for cross-border exchange.
The basics of network design and six top certs to help you master them
While driving your automobile, have you ever wondered what it took to make it operational? To produce an automobile, the manufacturer begins with a very early stage in the process — design. According to the design specifications, the manufacturer compiles different components and features to produce the vehicle. Once completed, the car is ready to run over the road, but it will still require regular maintenance to ensure it operates as designed.
This is all fine for automobiles, but what does this have to do with PC networking? The same logic applies because the same processes and principles utilized to produce a functioning car also apply to building a computer network. Every high-performance network infrastructure begins with a thorough design, defined as an iterative process for building a network that meets customer needs.
Network designers start by collecting client requirements such as the number of servers, bandwidth, latency and many other operational specifications and then translate them into an architecture to meet the requirements. The customer may be external (another company) or internal — designing a network to host teams or departments in your same company.
The initial output of the network design process is a low-level design (LLD) document detailing all network elements including topology, device models and configuration, logical addressing details, network protocols, redundancy and high-availability descriptions. This document is important because it’s used to select and purchase appropriate network devices and to build the infrastructure.
Network designer skills
Excelling in design is a passion for many network engineers — an interesting integration of different components to create an infrastructure that matches customer needs. Successful network designers will possess four traits:
A thorough understanding of how networks operate — Good network designers have an in-depth understanding of network protocols and device operations. This allows them to anticipate how their design will run during operation. Knowing how the network will operate is important, not only for implementation, but also for potential future upgrades.
Exposure to different vendors — Talented network designers have a knowledge of and professional relationships with various vendors. They are able to work closely with vendors to meet their design needs.
Knowledge of network device capabilities — It is crucial for a designer to understand the capabilities of the various network devices including device throughput, capacity and performance metrics. This enables a designer to select and purchase the equipment that best matches their design parameters in a cost effective manner.
Experience — Like any other career, the more experience you have, the more effective your work output. Nothing will shape a network designer's skills better than actually building designs and learning from previous ones — this includes what worked and, just as important, what didn't.
One of the best ways to master the ABCs of network design is to earn an official certification from a reputable organization. There are six network design certifications that are well-respected across the industry:
Cisco Certified Design Associate (CCDA)
An excellent and very popular design certificate, CCDA, although vendor specific, includes much that can be applied to any modern network infrastructure using a wide variety of equipment from various vendors. CCDA provides both a description for technologies as well as protocols used in design and best practices for creating high-performance and reliable networks.
CCDA is a beginning level certification for designers who are new to the field and looking to pick up fundamental design skills. By focusing on the design concepts of modularity and hierarchy, CCDA takes into consideration modern datacenter design as a way to accommodate other solutions like storage, systems and content networking, which use overlay techniques to run over standard network components. Achieving a CCDA certificate is a good starting point for a network design career.
Cisco Certified Design Professional (CCDP)
CCDP is a professional-level certification that builds on the foundational knowledge of the CCDA to take a deeper dive into the concepts and technologies of network design. CCDP-credentialed individuals have a recognized proficiency in designing and deploying scalable networks and multilayer switched networks.
In order to sit for the CCDP exam, candidates must have the CCDA and CCNA Routing and Switching certifications, or any CCIE certification. CCDPs often work as senior network design engineers and senior analysts.
Juniper Networks Certified Design Associate (JNCDA)
Juniper focuses on developing its career certificate holders into network engineers. Some people think Juniper is imitating Cisco's success, which is reflected in the greater adoption of Cisco's products in the workplace. Whatever Juniper's motive, its strategy is to attract greater numbers of network professionals to Juniper certifications.
JNCDA is a very popular network design credential designed to give practitioners a basic level knowledge of design theories and best practices. Earning a JNCDA requires candidates to possess a basic knowledge of routing and switching protocols, other Juniper network products, network security principles and hypervisors and load balancers. Although a beginner level certificate, JNCDA is rich in content dealing with modern network infrastructure design.
Juniper Networks Certified Design Specialist, Data Center (JNCDS-DC)
Juniper has developed a number of widely used products focusing on virtualized datacenter architecture that are high-capacity, low-latency and secure. With more Juniper datacenter products on the market, it was a natural step for the company to create JNCDS-DC.
This certification is recommended for network designers whose work involves creating network solutions for datacenters as well as interconnecting different datacenters. JNCDS-DC is designed for networking professionals and designers who already have a foundational knowledge of network design, theory, and best practices. JNCDS-DCs have the skills and abilities to address security concerns of modern virtualized networks using Juniper security products while following industry-standard approaches. If you're a network designer utilizing Juniper products to build datacenters, the JNCDS-DC is an attractive and useful certificate.
Cisco Certified Network Professional (CCNP)
Rather than being design focused, CCNP is an operational certificate, and an excellent choice for designers. The CCNP shares most of its content (the ROUTE and SWITCH exams) with the CCDP certificate. This matches the previously mentioned importance of having a deep understanding of network technologies and protocols. CCNP is rich in knowledge of Layer 2 and Layer 3 technologies, infrastructure security, VPNs and network services, as well as how to troubleshoot them. Pursuing a CCNP is a solid step to take, especially if one plans on going deep into the design realm.
VMware Certified Advanced Professional 6 — Data Center Virtualization Design
As virtualization is more widely adopted in datacenter infrastructure, the gap in responsibilities between network and systems engineers is shrinking. Network engineers must have an understanding of system virtualization concepts in order to design infrastructures that support such systems.
This merging of responsibilities for network and systems engineers is happening in enterprises of all sizes. All of this makes VMware Certified Advanced Professional 6 — Data Center Virtualization Design certificate increasingly attractive for network engineers. Certified individuals are skilled in the logical and physical design of the VMware vShpere system starting from business requirements to vShpere configuration of physical components. The certificate gets its importance from the increasing use of VMware products in large datacenters, making it an excellent choice for network designers. | <urn:uuid:3785de4b-cb83-4d29-a551-c6ec94062815> | CC-MAIN-2017-04 | http://certmag.com/basics-network-design-six-top-certs-help-master/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933901 | 1,522 | 3.265625 | 3 |
As silicon-based electronics come up against the physical limitations of the nanoscale, researchers are scrambling to find a viable replacement that would breathe new life into Moore's law and satisfy the demand for ever faster, cheaper and more energy-efficient computers. A new computer made of carbon nanotubes, created by a team of Stanford engineers, may be the first serious silicon challenger.
A scanning electron microscopy image of a section of the first ever carbon nanotube computer. Credit: Butch Colyear
Carbon nanotubes, long chains of carbon atoms, have remarkable material and electronic properties which make them attractive as a potential electronics substrate. The Stanford team, led by Stanford professors Subhasish Mitra and H.-S. Philip Wong, contends that this new semiconductor material holds enormous potential for faster and more energy-efficient computing.
“People have been talking about a new era of carbon nanotube electronics moving beyond silicon,” said Mitra, an electrical engineer and computer scientist at Stanford. “But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.”
According to a paper in the journal Nature, the simple computer comprises 142 low-power transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The prototype has about the same power as a 1970s-era chip called the Intel 4004, Intel's first microprocessor.
“The system is a functional universal computer, and represents a significant advance in the field of emerging electronic materials,” write the authors in the Nature article.
The device employs a simple operating system that is capable of multitasking and can perform four tasks (instruction fetch, data fetch, arithmetic operation and write-back). The inclusion of 20 different instructions from the commercial MIPS instruction set highlights the general nature of this computer. For the demonstration, the team ran counting and integer-sorting workloads simultaneously.
Professor Jan Rabaey, a world expert on electronic circuits and systems at the University of California-Berkeley, noted that carbon had long been a promising candidate to replace silicon, but scientists weren’t sure if CNTs would be able to overcome certain hurdles.
While the first carbon nanotube-based transistors came on the scene about 15 years ago, the Stanford team showed that they could be used as the basis for more complex circuits.
“First, they put in place a process for fabricating CNT-based circuits,” explained Professor Giovanni De Micheli, director of the Institute of Electrical Engineering at École Polytechnique Fédérale de Lausanne in Switzerland. “Second, they built a simple but effective circuit that shows that computation is doable using CNTs.”
By showing that CNTs have a role in designing complex computing systems, other researchers will be more motivated to take the next step, potentially leading to the development of industrial-scale production of carbon nanotube semiconductors.
“There is no question that this will get the attention of researchers in the semiconductor community and entice them to explore how this technology can lead to smaller, more energy-efficient processors in the next decade,” observed Rabaey. | <urn:uuid:c2a8f809-13f3-455a-90c7-d747f82b485e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/09/27/stanford_debuts_first_carbon_nanotube_computer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918096 | 681 | 4.0625 | 4 |
In Java, the trustStore is used by the TrustManager class and the keyStore is used by the KeyManager class.
KeyManager and TrustManager perform different jobs in Java:
TrustManager determines whether a remote connection should be trusted, i.e. whether the remote party is who it claims to be, while KeyManager decides which authentication credentials should be sent to the remote host during the SSL handshake.
If you are an SSL server, you will use your private key during the key exchange and send the certificates corresponding to your public keys to the client; these certificates are acquired from the keyStore. On the SSL client side, if it is written in Java, it will use the certificates stored in its trustStore to verify the identity of the server.
The keyStore contains private keys and is required only if you are running a server over SSL or have enabled client authentication on the server side.
The trustStore stores the public keys or certificates from CAs (Certificate Authorities) that are used to trust the remote party in an SSL connection.
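The same division of roles exists in other SSL/TLS stacks. As a rough illustration (in Python rather than Java, since Python's standard ssl module lets the sketch run without any certificate files), the client-side context plays the trustStore role while the server-side context plays the keyStore role; the commented-out file names are placeholders:

```python
import ssl

# Client side: the trustStore role. The context holds CA certificates that
# decide whether the *remote* party (the server) should be trusted.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
assert client_ctx.verify_mode == ssl.CERT_REQUIRED  # server cert must verify

# Server side: the keyStore role. This context would be loaded with the
# server's own private key and certificate chain, which are *sent* to the
# peer during the handshake. The file names below are placeholders only:
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
# server_ctx.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
```

In Java, the equivalent wiring is done by initializing an SSLContext with the KeyManagers and TrustManagers obtained from a KeyManagerFactory (backed by the keyStore) and a TrustManagerFactory (backed by the trustStore).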
- Mark Scott
August 7, 2005, 3:37 am
Please can anyone confirm my thoughts on encapsulation with regard to the following:
1. User has data to send (Layer 7)
2. Layer 4 Information is added = Segment
3. Layer 3 information (IP address) is added = packet
4. Layer 2 information (MAC) is added = frame
A layer sees information from an upper layer only as DATA.
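Those four steps can be sketched as nested wrapping, where each layer treats everything above it as opaque data; the addresses and port numbers below are invented for illustration:

```python
def encapsulate(app_data: bytes) -> dict:
    # Layer 4: add transport header -> segment
    segment = {"tcp_src": 51000, "tcp_dst": 80, "data": app_data}
    # Layer 3: add IP addresses -> packet (the segment is just "data" here)
    packet = {"ip_src": "10.0.0.5", "ip_dst": "93.184.216.34", "data": segment}
    # Layer 2: add MAC addresses -> frame (the packet is just "data" here)
    frame = {"mac_src": "aa:aa:aa:aa:aa:aa", "mac_dst": "bb:bb:bb:bb:bb:bb",
             "data": packet}
    return frame

frame = encapsulate(b"GET / HTTP/1.1")
# Each layer only sees the layer above as opaque data:
assert frame["data"]["data"]["data"] == b"GET / HTTP/1.1"
```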
If a frame hits a router, the router extracts the packet and reframes it,
ready for transmission to the next router.
Where my thinking goes off track is regarding routers' MAC addresses:
would the router then ARP for the next-hop MAC address and then wrap the
packet up in a new frame?
What happens when the packet / frame is passed between routers and switches?
Re: Encapsulation questions
Mark Scott wrote:
ARP is a function to find the MAC address for an IP that is unknown or
currently not in the table. It is not performed on a per-packet or
per-frame basis.
Routers and switches are quite different when it comes to encapsulation,
as routers live one level higher and so must perform additional steps.
What a router receives is actually a frame destined to its own MAC
address. It will discard the layer 2 information (only if it is the
recipient) and then pass the packet on to the route engine (for lack of a
better phrase). From there it will look up the destination in its FIB
(forwarding information base, also known as the route table) and find out
the next hop. It will find the attached interface for the next hop and
encapsulate the packet in a new layer 2 frame with the destination MAC
address of the next hop. If the next hop's MAC address is not in the ARP
table, it will perform the ARP at this time.
If the packet is destined for a connected interface instead of a next-hop
router, the router will create the frame with the destination MAC address
of the final destination (the destination IP in the packet). Again, if
this MAC isn't in the ARP table, the router will send an ARP request and
figure it out.
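The forwarding logic described above (check the frame is addressed to the router, strip layer 2, consult the FIB, ARP only on a cache miss, re-frame) can be sketched in a few lines of Python; all tables, addresses and the ARP stub are invented for illustration:

```python
# Toy tables; the prefixes, IPs and MACs are invented for illustration.
fib = {"192.168.2.0/24": "10.0.0.2"}           # destination prefix -> next-hop IP
arp_table = {"10.0.0.2": "cc:cc:cc:cc:cc:cc"}  # IP -> MAC, filled in by ARP
router_mac = "bb:bb:bb:bb:bb:bb"

def lookup_fib(dst_ip):
    # Toy longest-prefix match: only /24 routes are modeled here.
    return fib.get(".".join(dst_ip.split(".")[:3]) + ".0/24")

def send_arp_request(ip):
    # Stand-in for a real ARP exchange; returns a made-up MAC address.
    return "dd:dd:dd:dd:dd:dd"

def forward(frame):
    if frame["mac_dst"] != router_mac:
        return None                          # frame not addressed to this router
    packet = frame["data"]                   # discard the old layer 2 information
    next_hop = lookup_fib(packet["ip_dst"])  # consult the FIB / route table
    if next_hop is None:
        next_hop = packet["ip_dst"]          # directly connected destination
    mac = arp_table.get(next_hop)
    if mac is None:                          # ARP only when the entry is missing
        mac = arp_table[next_hop] = send_arp_request(next_hop)
    # Re-encapsulate: same packet, new frame, next hop's MAC as destination.
    return {"mac_src": router_mac, "mac_dst": mac, "data": packet}

frame = {"mac_src": "aa:aa:aa:aa:aa:aa", "mac_dst": router_mac,
         "data": {"ip_src": "10.0.0.5", "ip_dst": "192.168.2.9"}}
assert forward(frame)["mac_dst"] == "cc:cc:cc:cc:cc:cc"
```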
Of course, the above assumes everything is Ethernet. There are some
differences when it comes to other layer 2 protocols.
As for switches, they will not strip layer 2 information, as they work
on this layer (just as a router doesn't strip the layer 3 information).
So a switch will use its CAM table to find where to send the frame. If
the destination is unknown, the switch will flood the frame out all
ports, looking for the reply so it can add the destination to its CAM
table.
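A minimal sketch of that learn-and-flood behaviour, using an invented four-port switch and shortened MAC addresses:

```python
cam_table = {}  # MAC address -> port, learned from source addresses

def switch_frame(frame, in_port, all_ports=(1, 2, 3, 4)):
    cam_table[frame["mac_src"]] = in_port          # learn the sender's port
    out = cam_table.get(frame["mac_dst"])
    if out is not None:
        return [out]                               # known destination: one port
    return [p for p in all_ports if p != in_port]  # unknown: flood the rest

# First frame to "bb": unknown, so it is flooded out every port except 1.
assert switch_frame({"mac_src": "aa", "mac_dst": "bb"}, in_port=1) == [2, 3, 4]
# The reply from "bb" on port 3 teaches the switch where "bb" lives...
assert switch_frame({"mac_src": "bb", "mac_dst": "aa"}, in_port=3) == [1]
# ...so later frames to "bb" go out port 3 only.
assert switch_frame({"mac_src": "aa", "mac_dst": "bb"}, in_port=1) == [3]
```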
There is one more layer of encapsulation that both switches and routers
perform, and that is layer 1... or turning the frame into bits. This is
hardly mentioned as it is just that easy. When the bits get to the other
side they are turned back into a frame.
There are of course even some more advanced topics for both switches and
routers, but I hope this answers your question.
<bennetb at gmail dot com>
Big data is used for many purposes, usually to streamline business operations or increase competitive advantage. But it can also help make the world a better place. To this end, IST Research is using big data to combat illicit behaviors and improve humanitarian response efforts around the world.
Typically, data is generated by people who use technologies such as mobile apps, online ads, social media and the Internet generally. But not all parts of the world have access to smartphones or reliable Internet connectivity. IST Research specializes in finding ways to generate and collect data in challenging environments with populations that are outside the reach of these technologies, ensuring that the voice of marginalized people is reflected in the ‘big data’ that is used to inform resource allocation and policy making.
Human trafficking and big data
As one of its many initiatives, IST Research creates innovative technologies to fight human trafficking.
Search engines can only index a small percentage of sites on the surface web, while much of the deep web – such as dynamically generated or temporary sites – remains unsearchable. Even more inaccessible is the dark web, where anonymous sites with untraceable IP addresses are sometimes used for criminal activity. Specialized browsers allow access to these ‘dark web’ sites that can’t be indexed by typical commercial search engines.
IST Research collects and stores data from all parts of the web, including the surface web, deep web and the dark web, and provides data and analytics to anti-trafficking actors – including law enforcement, public prosecutors, and NGOs – to help them investigate and prosecute these hidden crimes more efficiently and effectively.
Data collection and analysis
IST Research uses multiple approaches to collect data and then aggregates it to get a broader view of activity within a geographic region. (See “Combining Actively and Passively Collected Data to Create Meaningful Results” to learn more.)
After collection, IST Research uses Internap infrastructure for data storage and processing. For efficient analysis, the company requires a highly configurable infrastructure with high IOPS and network performance for fast data transfer.
The company uses Internap’s bare-metal AgileSERVER based in the Secaucus, NJ data center. Bare metal was the clear choice for IST Research to achieve high performance in a cost-effective manner. The performance benefits of bare metal allow the company to provide more relevant and accurate intelligence in near real-time to aid counter-trafficking programs.
Prior to Internap, IST Research used Amazon Web Services (AWS) for its infrastructure needs, but after evaluating multiple vendors, including AWS and Rackspace, the company found that none could match Internap's level of performance.
IST Research requires rapid data transfer throughout its infrastructure to conduct efficient analysis. Internap's bare-metal AgileSERVER and Managed Internet Route Optimizer™ (MIRO) technology allow IST Research to meet its IOPS requirements and ultimately reduce data transfer time by more than 30%.
Configuration management tools & support
Unlike larger commodity cloud solutions, Internap allows IST Research to change configurations within hours. Dedicated account and support teams work in conjunction with the IST Research team to ensure optimal infrastructure configurations at all times.
The Edge is the term IST Research uses to refer to places without reliable Internet connectivity, power or other support resources. Those who live and work at the edge have traditionally been outside the reach of big data, but sometimes these are the places that need the most help. By creating new ways to engage with populations that live at the Edge, IST Research may indeed change the world through technology. | <urn:uuid:5a66069d-e00c-4d94-8df9-729fdcdf8269> | CC-MAIN-2017-04 | http://www.internap.com/2015/12/15/ist-research-human-side-of-big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903951 | 724 | 2.578125 | 3 |
We all are familiar with the early days of computing. Wave One was the era of bulky mainframe computing. Wave Two ushered in the era of the personal computer. When we surf the Internet or employ enterprise applications such as SAP, we are on the crest of the Third Wave: Networked computing.
Each wave has been driven by overwhelming pressure and unprecedented capabilities. Each has created new winners and destroyed old incumbents. With the dynamic innovations introduced by each new wave, people change the way they live, learn, work and play.
It's important that we recognize and prepare for these changes, because decisions made today based on a cogent vision of the future are more likely to be good, sustainable decisions.
In his book The World is Flat (Farrar, Straus and Giroux, 2005), author Thomas Friedman pointed out that every business in the future will be an international business. The off-shoring phenomenon that is challenging the IT workforce is unlimited in its ability to shift the location of non-IT jobs.
Today, service calls from a home in Memphis might be answered at a help desk in Mumbai, India. And corporations increasingly structure their business processes to follow the sun, handing off work from one part of the globe to another throughout the day to take full advantage of work centers. More and more organizations will employ workers in all parts of the world to better serve their customers and strengthen their business.
And while work increasingly moves off-shore, onshore employees will continue challenging traditional work environments. People will choose to work where they live rather than having to live where they work. Alternative work arrangements, which now are a benefit for many companies, will become necessary to attract and retain workers. Energy and environmental concerns will cause more employees to work from home, though probably not in the same numbers as some early projections estimated.
With dramatic demographic shifts and capabilities created by IT innovations, the workforce of the future will be highly segmented.
Employees new to the workforce, the thumb generation of text messages and iPods, are very capable of accepting high degrees of automation. They will work and collaborate with more experienced workers who often want to work in a more traditional environment. Retirees may not wish to work full time or even in their original areas of expertise.
Immigrant workers will comprise a larger percentage of the workforce in many countries. We will manage more global workers. The flat world is driving us to a multi-national, multi-cultural workforce.
In addition to these demographic shifts, companies around the world will face the challenge of having fewer workers available to hire. About 75 million baby boomers are expected to leave the workforce over the next 15 years, with only about 35 million employees to replace them; all countries except India are projected to have a shortage of skilled workers.
With increasing digitization, many employees, especially knowledge workers, can be located anywhere. The off-shoring trend that started with information technology is expanding rapidly to other work.
These workforce shifts are taking place at the same time that organizations are experiencing an overwhelming need to increase employee productivity. Organizations are automating more and more of their processes, and that trend is changing the structure of the workforce.
The traditional workforce is moving from a pyramid structure with the CEO at the top and line-level employees at the bottom to a structure of concentric circles. In this model, the core workforce is in the middle, and it is surrounded by circles of full-time contractors, part-time contractors, outsourced contractors and work-on-demand teams.
The era of employees staying 30 years with one company until retirement is over. Full-time employees will focus on the true essence of the business. Long- and short-term contractors will fill in gaps, both functional and temporal. Organizations increasingly will outsource entire non-core business functions to reduce the requirements for a larger workforce. | <urn:uuid:e16ceda0-bec4-42d6-a82d-06be2d86effb> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/3694311/Preparing-for-the-Workforce-of-the-Future.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00066-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954045 | 790 | 2.515625 | 3 |
The Department of Justice’s antitrust filing against AT&T did break up the carrier – at least for a good 25 years.
In the Ali-Frazier of last century’s heavyweight telecom fights, the U.S. Department of Justice went toe-to-toe with AT&T after filing an antitrust action against the carrier in 1974.
Specifically, the Department of Justice accused AT&T of engaging in anti-competitive behavior and sought to break up the company. Invoking the Sherman Antitrust Act in its case, the government said that AT&T had monopoly power over America’s telecommunications, and argued that the company should sell off some of its subsidiaries, such as manufacturer Western Electric and research arm Bell Laboratories, which would then be carved into even smaller companies.
The government’s actions set off a fierce public debate. Proponents of the Justice Department’s trust-busting suit argued that breaking up AT&T would allow more companies to enter the market place, thus spurring greater innovation and competition. Opponents countered that AT&T should be exempt from antitrust rules in order to maintain uniform standards in telecommunications services. Breaking up Ma Bell, they said, could lead to a fragmented, disorganized telecom industry.
“AT&T provides the very cheapest service possible,” Carl Glick, a telecom analyst, told Time Magazine. “Justice gets so wrapped up in its rhetoric about the advantages of competition that it loses sight of the economic implications of its moves.”
After years of legal wrangling, AT&T eventually agreed to a settlement on the government’s terms in 1982. Under the agreement, AT&T would be allowed to keep its long-distance operations, Western Electric and Bell Laboratories in exchange for divesting from its 22 local phone monopolies. Since then, AT&T has had incremental success at gaining back some of its previous clout, as the company was allowed to merge with BellSouth, one of the RBOCs that formed after AT&T had been forced to divest from its local phone services. | <urn:uuid:d78aeb4d-0346-4b0b-ab68-44a1a94b0dff> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2287512/lan-wan/u-s--department-of-justice-vs--at-t.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00186-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959486 | 429 | 2.71875 | 3 |
NASA today said it was moving ahead with its plan to hold a Centennial Challenge competition next year that will ultimately result in future unmanned aircraft technology.
NASA said it picked Development Projects of Dayton, Ohio, to manage the Unmanned Aircraft Systems Airspace Operations Challenge competition, which will focus on a variety of emerging drone technologies, particularly the aircraft's ability to sense and avoid other air traffic.
[NASA NEWS: Skylab: NASA's first space station marks 40 years]
While NASA is providing a $500,000 prize, Development Projects will finalize rules and begin detailed preparations for the challenge, eventually registering competitors. The first competition to demonstrate team entries is expected in May 2014.
In 2012, NASA said it was planning this Challenge in collaboration with the Federal Aviation Administration and the Air Force Research Lab. The type of challenge NASA is envisioning would be no easy task, as it is looking to address one of the more complicated drone issues: sensing and avoiding other aircraft.
"NASA is considering initiation of an Unmanned Aircraft Systems Airspace Operations Challenge focused on finding innovative solutions to the problems surrounding the integration of UAS into the National Airspace System. The approach being considered would require competitors to maintain safe separation from other air traffic while operating their UAS in congested airspace, under a variety of scenarios. This will be accomplished through the use of sense and avoid technologies, as envisioned in the Next Generation Air Transportation System," NASA said.
NASA said the Challenge would be divided into two parts. The Level 1 Competition would focus on a competitor's ability to fly 4-dimensional trajectories to provide a reasonable expectation that the drones will be where they are supposed to be when they are scheduled to be there, successfully employ Automatic Dependent Surveillance Broadcast (ADS-B), maintain safe separation from other ADS-B-equipped air traffic, and operate safely in a number of contingency situations. ADS-B In-equipped aircraft are able to receive messages broadcast from other aircraft and the air traffic management system that describe the current position, heading, and speed of nearby air traffic.
The Level 2 Competition would go beyond the first level and add "a requirement to maintain safe separation from air traffic not equipped with ADS-B and a requirement that the vehicle be able to communicate verbally with the Air Traffic Control system under lost-link conditions. Competitors would be required to have a working Hardware-in-the-Loop Simulation (HiLSim) for their flight vehicle. The HiLSim would be used at the beginning of the competition, prior to flight, to verify that a competing UAS's flight operators, ground software, and flight software exhibit the proper responses in a variety of safety-critical situations. It would also be used to verify that a team is capable of performing the basic tasks required by the competition. HiLSim test suites would be provided prior to the competition to allow competitors to verify they are in compliance with contest requirements during development."
The competition addresses a number of concerns brought up in a recent Government Accountability Office report which said communications and effective system control are big challenges for unmanned aircraft developers if they want unfettered access to US airspace.
The bottom line for now seems to be that while research and development efforts are under way to mitigate obstacles to safe and routine integration of unmanned aircraft into the national airspace, these efforts cannot be completed and validated without safety, reliability, and performance standards, which have not yet been developed because of data limitations, the GAO concluded.
The GAO report noted that current domestic uses of drones are limited to activities such as law enforcement, forensic photography, border security, and scientific data collection. According to an industry forecast, the market for unmanned aircraft could be worth $89 billion with the associated research and development for production estimated to be $28.5 billion over the next 10 years.
The main issues include the ability for drones to avoid other aircraft in the sky; what backup network is available and how should the machine behave if it loses its communications link and other network issues.
From the GAO report:
Avoidance: To date, no suitable technology has been deployed that would provide UAS with the capability to sense and avoid other aircraft and airborne objects and to comply completely with FAA regulatory requirements of the national airspace. However, research and development efforts by FAA, DOD, NASA, and MITRE, among others, suggest that potential solutions to the sense-and-avoid obstacle may be available in the near term. The Department of the Army is working on a ground-based sense-and-avoid system that will detect other airborne objects and allow the pilot to direct the drone to maneuver to a safe location. The Army has successfully tested one such system, but it may not be usable on all types of drones, the GAO stated.
Control: Ensuring uninterrupted command and control for both small and large UAS remains a key obstacle for safe and routine integration into the national airspace. Since unmanned aircraft fly based on pre-programmed flight paths and by commands from a pilot-operated ground control station, the ability to maintain the integrity of command and control signals is critically important to ensure that the drone operates as expected and as intended, the GAO said.
Lost links: In a "lost link" scenario, the command and control link between the UAS and the ground control station is broken because of either environmental or technological issues, which could lead to loss of control of the drone. To address this type of situation, unmanned aircraft generally have pre-programmed maneuvers that may direct the machine to hover or circle in the airspace for a certain period of time to reestablish its radio link. If the link is not reestablished, then the drone will return to "home" or the location from which it was launched, or execute an intentional flight termination at its current location.
Network security: The jamming of the GPS signal being transmitted to the UAS could also interrupt the command and control of drone operations. In a GPS jamming scenario, the aircraft could potentially lose its ability to determine its location, altitude, and the direction in which it is traveling. Low-cost devices that jam GPS signals are prevalent. According to one industry expert, GPS jamming would become a larger problem if GPS were the only method for navigating a UAS. This problem can be mitigated by having a second or redundant navigation system onboard the aircraft that is not reliant on GPS, which is the case with larger drones typically operated by DOD and DHS. Encrypting civil GPS signals could make it more difficult to "spoof" or counterfeit a GPS signal that could interfere with drone navigation. Non-military GPS signals, unlike military GPS signals, are not encrypted, and their transparency and predictability make them vulnerable to being counterfeited, or spoofed, the GAO report stated.
Radio spectrum: Progress has been made in obtaining additional dedicated radio-frequency spectrum for drone operations, but additional dedicated spectrum, including satellite spectrum, is still needed to ensure secure and continuous communications for both small and large drone operations. The lack of protected radio-frequency spectrum for unmanned operations heightens the possibility that a pilot could lose command and control of an aircraft. Unlike manned aircraft-which use dedicated, protected radio frequencies-UAS currently use unprotected radio spectrum and, like any other wireless technology, remain vulnerable to unintentional or intentional interference. This remains a key security and safety vulnerability because, in contrast to a manned aircraft in which the pilot has direct physical control of the aircraft, interruption of radio transmissions can sever the drone's only means of control, the GAO said.
Check out these other hot stories: | <urn:uuid:efebe9ef-cea1-4af1-9292-460a17804f29> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224613/security/nasa-lurches-toward-2014-unmanned-aircraft-competition.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951096 | 1,527 | 2.625 | 3 |
The Internet of Things' deep embedment into people's lives, bodies, homes and almost everything else they touch marks a significant cultural and technological shift, one that has brought efficiency, flexibility and convenience to our day-to-day lives.
That connectivity is an incredible thing, but one major question remains within the burgeoning IoT industry: how do companies secure all of this data?
Consider the information at stake. Wi-Fi-enabled security cameras can give real-time information about when and if someone’s home. Same with internet-connected alarm systems.
Even a smart TV has valuable information as information on Netflix and Amazon accounts can lead to a credit card or identity details.
Of course, the mother of all identity concerns comes from the smartphone: a centralised resource of account information that can connect with almost all smart devices, a smart home and even a car – something that becomes even more vulnerable as the age of self-driving cars approaches.
Recently, a CBS 60-Minutes story demonstrated the multitude of capabilities of a hacker that only has a person’s phone number.
It’s clear that the Internet of Things presents security concerns in ways that seemed unthinkable just a decade ago. The solution, though, may stem from one of the most unique innovations of the digital era: the blockchain.
Originally developed as part of the bitcoin digital currency platform, the open blockchain model has inherent transparency and permanence. These are essential to creating a secure means of direct authentication between smart devices.
The model currently used for Bitcoin can be propagated into other applications – any industry that requires archival integrity can adopt the blockchain.
For the Internet of Things, a blockchain can be created to manage device identity to prevent a spoofing attack where a malicious party impersonates another device to launch an attack to steal data or cause some other mayhem.
Blockchain identity chains will enable two or more devices to communicate directly without going through a third-party intermediary, and in effect make spoofing more cost-prohibitive.
Regarding this type of authentication, the model allows users to synchronise multiple devices against a single system of authority that is distributed and censorship resistant.
This would apply to an open blockchain, not permissioned or private. The identity chain, created for each device, is a permanent record. Through cryptography, only validated devices receive access. As new devices are added, their identity records become part of the blockchain for permanent reference.
Any change to a device configuration will be registered and authenticated in the context of the blockchain validation model, ensuring that any falsified records can be caught and ignored.
This is a new technology and will take some time to move from testing into our everyday lives. Many industry leaders and governments will begin testing this year. Beyond whether or not the tech works, many stakeholders will need to get on board.
An industry conglomerate that agrees on a blockchain design would be helpful. Having all the Internet of Things devices write to the same source or have systems that are interoperable will be critical.
It’s not necessarily that every Internet of Things device manufacturer or software developer write data to the same blockchain – instead, it could go further upstream and be an agreement between OEM manufacturers of essential components that are used in the authentication process flow.
In addition to baseline authentication (device model, serial number, etc.), the blockchain can create records of any data it generates – for example, a smart front door lock can have a transaction log of video activation when someone exits/enters the home or unlocks it remotely.
Each item in the history creates another historical link in its respective identity chain that can provide further data to use for authentication matching. If someone with malicious intent was to try and change the protocol of the door lock without the correct credentials or there was a change in the configuration, the blockchain validation model would not allow for the door lock to be changed.
An important component of the blockchain’s effectiveness comes from its standing as a public record, with user nodes all auditing the same record. Of course, with a public record, there will always be privacy concerns over sensitive data.
However, the blockchain protects against this through the use of one-way hashes. In the blockchain world, a cryptographic hash function is a mathematical algorithm that maps data of arbitrary size to a fixed-size bit string (the hash) and is designed to be one-way, i.e. infeasible to invert. This means it is practically impossible to recover the source data from the hash alone.
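A short Python sketch can illustrate both properties at once: SHA-256 as a one-way hash, and a chain of linked records (here, invented smart-lock events) in which any falsified entry breaks validation:

```python
import hashlib

def h(data: str) -> str:
    # One-way: easy to compute, infeasible to invert back to `data`.
    return hashlib.sha256(data.encode()).hexdigest()

def append(chain, event: str):
    # Each record links to the previous record's hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev}
    record["hash"] = h(record["event"] + record["prev"])
    chain.append(record)

def verify(chain) -> bool:
    # Recompute every link; any altered record breaks the chain.
    prev = "0" * 64
    for r in chain:
        if r["prev"] != prev or r["hash"] != h(r["event"] + r["prev"]):
            return False
        prev = r["hash"]
    return True

chain = []
append(chain, "lock engaged 18:02")
append(chain, "unlocked remotely 18:40")
assert verify(chain)
chain[0]["event"] = "unlocked remotely 03:00"   # attempt to falsify history
assert not verify(chain)                        # the tampering is detected
```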
The Internet of Things is still a new industry, but it will become more pervasive and significant as technological innovations turn science fiction into people’s everyday lives. At this early stage, it’s critical to establish a scalable solution that will push the industry forward as the volume of connected devices grows exponentially.
Blockchain represents a unique type of solution, one that is established as a secure means of protecting financial data but flexible enough to be applied to any high-stakes record keeping.
With the Internet of Things demonstrating the ability to connect just about every aspect of a person’s life, it truly doesn’t get any more high stakes than that.
Sourced from Tiana Laurence, co-founder and CMO, Factom, Inc. | <urn:uuid:ea2bcb8d-6cd9-4b2c-bfdb-5c4ff88125a0> | CC-MAIN-2017-04 | http://www.information-age.com/how-blockchain-will-defend-internet-things-123461443/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00259-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920774 | 1,086 | 3.171875 | 3 |
Last week's Supercomputing 13 (SC13) show had its fair share of big news, but perhaps the biggest came from Intel, which announced its future Xeon Phi, codenamed "Knights Landing," would not be a co-processor any more, it would be its own CPU and accelerator all in one.
The Xeon Phi was Intel's attempt to compete in the GPU market, trying to close a gap of a decade or more between it and competitors Nvidia and AMD/ATI. Originally known as Larrabee, the project had become a white elephant and wasted a whole lot of time and money before getting on track as a co-processor that worked in conjunction with Xeon servers.
On the November 2013 list, 13 of the top 500 supercomputers are co-powered by Xeon Phi cards, including the top supercomputer on the list, China's Tianhe-2, with a peak of 54.9 petaflops of performance. Nvidia still dominates, with 38 machines in the top 500, and AMD, which has not aggressively pursued supercomputing, has two machines.
But now, with Intel's Knights Landing, there won't be a need to build servers with Xeon processors and Xeon Phi co-processor cards. The Knights Landing generation will be its own processor, so there will be no more need to cram the Xeon Phi cards into the server box. Those cards were the size of a high-end GPU, which meant a lot of hardware jammed into the box and a lot of heat.
Much more important is what else it takes away. Knights Landing will erase the memory buffer and PCI Express bus that sat between the CPU and main memory on one side and the coprocessor chip and its frame buffer memory on the other. Now that applications run entirely natively instead of offloading data sets to the coprocessor, all of that latency goes away.
Now you will have both scalar processor cores and vector processor cores on the same chip sharing access to unified memory. This is huge. A fair amount of time has to go into offloading data sets from main memory to the frame buffer memory of the co-processor and then back to the CPU and main memory. It's why Nvidia had to come out with the CUDA language, because plain old C++ or Java wouldn't work.
So even if all the hardware speeds and feeds remained the same, with the removal of the buses, we would see a huge gain in application performance simply because data sets no longer have to be shuttled between two memory sets across a bus. Combine that with the promise that Knights Landing will triple performance over the current generation, and you are talking major gains in supercomputing performance.
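A back-of-envelope sketch shows why the transfer cost matters. The numbers below are purely illustrative assumptions, not vendor specs, but they show how much wall-clock time shuttling a working set across a PCIe link can burn before any computation happens:

```python
# Illustrative sketch: cost of moving a data set to a coprocessor and
# back over PCIe, before any actual computation happens.
# Both figures below are assumptions for illustration, not vendor specs.

PCIE_GBPS = 8.0    # assumed effective PCIe bandwidth, GB/s
DATASET_GB = 2.0   # assumed working-set size, GB

def offload_overhead_ms(dataset_gb, link_gbps):
    # The data crosses the bus twice: host -> coprocessor, then back.
    return 2 * dataset_gb / link_gbps * 1000.0

print(f"{offload_overhead_ms(DATASET_GB, PCIE_GBPS):.0f} ms spent on transfers alone")
# With scalar and vector cores sharing one memory, this term simply disappears.
```

Half a second of pure data movement per round trip, under these assumed figures, is the kind of overhead unified memory eliminates.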
The others aren't sitting still. AMD has its Heterogeneous System Architecture (HSA), which will continue the integration begun with Fusion, and Nvidia has a mysterious project called Project Denver, which will involve integration of its own 64-bit ARM processor with GPU technology sharing common memory.
None of this will happen tomorrow. Knights Landing will be released sometime in late 2014 or in early 2015. The chip will be made using 14nm process technology and will support new AVX 3.1 instructions, a built-in DDR4 memory controller, on-package high-speed memory and likely other innovations.
Data Masking Everywhere
The new era of computing has arrived: organizations are now able to process, analyze and derive maximum value from structured, unstructured and streaming data in real time. However, in the rush to achieve new insights, are privacy concerns being neglected? How can you support business goals while also ensuring the privacy of sensitive data? With the average cost of security-related incidents in the era of big data estimated to be over USD 40 million, according to this Aberdeen Group Research Brief, you can’t afford to ignore data privacy as a top requirement.
With 2.5 quintillion bytes of data created every day, now is the time to understand sensitive data and establish business-driven privacy policies to keep customer, business, personally identifiable information (PII) and other types of sensitive data safe. Remember, however, that different types of data will require different protection policies. For example, text, audio, log files and clickstreams have unique characteristics and challenges around privacy. In addition, your privacy policies need to keep up with the velocity of your data—even two minutes is too late when it comes to preventing abuse.
Protecting privacy isn’t just a nice-to-have. It is required by more than 50 international privacy laws such as Argentina’s Personal Data Protection Act and Korea’s Act on Personal Information Protection.
Data masking provides intelligent data protection to address privacy concerns
Data masking replaces sensitive data with a nonsensitive substitute, but does so in a way that preserves the integrity of the data. This means masked data can be used to facilitate business processes without changing the supporting applications, databases or data storage facilities—which enables you to remove the risk without breaking your business.
Securosis Research has developed five laws for data masking:
- Masked data should not be reversible.
- Masked data should be representative of the original data set. The reason to mask data instead of generating random data is to provide nonsensitive data that still resembles production data. This could include geographic distributions, credit card distributions (perhaps leaving the first four numbers unchanged, but scrambling the rest) or maintaining human readability of names and addresses. The goal is to increase the utility of the information for further analysis or analytics.
- Masked data should maintain application and database integrity.
- Nonsensitive data should be masked only if it can be used to re-create or tie back to sensitive data. It isn’t necessary to mask everything—only those parts that are deemed sensitive. For example, if you scramble a medical ID but the treatment codes for a record could map back to only one record, you also need to scramble those codes.
- Data masking routines must be repeatable. One-off masking is both ineffective and impossible to maintain. Today’s IT environments are highly dynamic, and masking routines need to keep pace.
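The first, second and fifth laws can be sketched in a few lines of Python. This is an illustration of the idea only, not how InfoSphere Optim (or any product) actually implements masking; the key is a placeholder, and the routine deliberately keeps the first four digits (as in the credit-card example above) while remaining repeatable yet non-reversible without the key. Note that it does not preserve Luhn checksums:

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # placeholder key, not a real secret

def mask_card_number(card, key=SECRET_KEY):
    """Deterministically mask a card number: keep the first four digits
    (preserving issuer distribution), replace the rest with digits derived
    from a keyed hash. Repeatable, but not reversible without the key."""
    digits = [c for c in card if c.isdigit()]
    head, tail = digits[:4], digits[4:]
    digest = hmac.new(key, "".join(digits).encode(), hashlib.sha256).hexdigest()
    masked_tail = [str(int(h, 16) % 10) for h in digest[:len(tail)]]
    # Re-insert the masked digits into the original formatting (dashes, spaces).
    out, it = [], iter(head + masked_tail)
    for c in card:
        out.append(next(it) if c.isdigit() else c)
    return "".join(out)

print(mask_card_number("4111-1111-1111-1111"))
```

Because the routine is keyed and deterministic, running it twice over the same source yields identical masked data, which keeps joins and test runs consistent.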
InfoSphere Optim™ Data Privacy provides a comprehensive set of data masking techniques to support data privacy and compliance requirements. For the first time, you can mask data across platforms and data sources using a standard, repeatable process, ensuring data privacy without impacting the stability of your applications, and with greater ease, scalability and performance.
With InfoSphere Optim Data Privacy, you “mask and move” or “mask in place.” Masking and moving allows you to extract and mask data, and then insert or load the data into one or more destinations. Masking in place allows you to de-identify data and replace existing values.
InfoSphere Optim Data Privacy provides the most comprehensive set of data masking techniques on the market. The method you use will depend on the type of data you are masking and the result you want to achieve. Out-of-the-box capabilities for specific data types are included, such as random or sequential number generation, string literal substitution, concatenating expressions, arithmetic expressions, lookup values and user-defined functions, to name a few.
Some examples of situations in which masking techniques can be applied include:
- Data at rest or data in flight
- Relational data, flat files and data sets such as IBM IMS™ or VSAM
- Data being transformed through an extract, transform and load (ETL) tool
- Data accessed in SQL queries inside a database
- Data in reports and documents
- Data inside applications
- Data moving to, in and from big data platforms such as Hadoop
- Data used for testing big data environments
- Data used for analytics applications—for example, PureData Analytics or Teradata
- Data used for testing data warehouses
What is the benefit?
Focus on data security and privacy to deliver significant value.
- Prevent data breaches: Avoid disclosure or leakage of sensitive data
- Ensure data integrity: Prevent unauthorized changes to data, data structures, configuration files and logs
- Reduce cost of compliance: Automate and centralize controls and simplify audit review process
- Protect privacy: Prevent disclosure of sensitive information by masking or de-identifying data in databases, applications and reports on demand across the enterprise
Want to learn more? Check out these analyst reports and white papers.
- Analyst Report - Securosis Research: Understanding and Selecting Data Masking Solutions
- Data masking everywhere - Design standard and repeatable data privacy policies across the enterprise
- Gartner Inc. Magic Quadrant for Data Masking Technology | <urn:uuid:73be3c56-b116-4232-b29f-9d5338f52ffe> | CC-MAIN-2017-04 | http://www.ibmbigdatahub.com/blog/data-masking-everywhere?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+big-data-hub-blog+%28The+Big+Data+Hub+Blog%29 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00499-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.871439 | 1,116 | 2.734375 | 3 |
What Are And Why Do I Need ActiveX Controls?
Sometimes while surfing the Web, I run into a roadblock. This roadblock, unlike the viruses that can seep into computers everyday, is the box that appears in the middle of the screen and says, “Click OK to run an ActiveX control on this Web page.” However, if I do click “OK,” Internet Explorer (IE) always encounters a problem and needs to close. Because this can be an all-too-frequent event with all the online assisted reporting I perform, I did some research on ActiveX.
According to webopedia.com, ActiveX is a loosely defined set of technologies developed by Microsoft for sharing information among different applications. ActiveX spawned from Microsoft’s OLE (Object Linking and Embedding) and COM (Component Object Model). An ActiveX control is a component program object that can be re-used by many application programs within a computer or among computers in a network. The technology for creating ActiveX controls is part of Microsoft’s ActiveX technologies.
You would think that the definition would offer some insight to why IE consistently needs to close after I click “OK,” but it doesn’t. Through a few quick key strokes and mouse clicks, I found that Microsoft released an Internet Explorer ActiveX update on June 14, 2006 for Microsoft Windows XP Service Pack 2 (SP2) and Microsoft Windows Server 2003 Service Pack 1 (SP1). Basically, this update changes the way in which IE handles some Web pages that use ActiveX controls and Java applets, which run things like Adobe Reader, Apple QuickTime Player, Macromedia Flash Player, Microsoft Windows Media Player, etc. | <urn:uuid:2f01a493-5a55-461f-9048-b130b3d2dd17> | CC-MAIN-2017-04 | http://certmag.com/what-are-activex-controls-and-why-do-i-need-them/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00131-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908099 | 359 | 2.6875 | 3 |
One of the talks announced for the Black Hat conference this summer in Las Vegas is about 802.11 driver vulnerabilities, which can affect users even if they aren't connected to a network.
All modern operating systems, such as Linux, BSD, Windows, and Mac OSX, have a similar fundamental security measure: the separation of kernel and user code. The kernel is the core of the operating system and controls processes, disk access, and hardware access. While programs are typically prevented from accessing the memory of other programs or directly controlling the hardware, the kernel has no such restrictions.
Vulnerabilities at the kernel layer are especially dangerous. Operating in the kernel, malicious code has complete control of the system. So-called "root kits" can alter the kernel to hide files from anti-virus scanners, hide running programs from the user, and capture input from the mouse or keyboard. Root kits have become an increasing risk with malicious software.
Device drivers function at the kernel level. Network device drivers are especially at risk as they handle remote data, which cannot be trusted. Any bugs in the code that handle remote packets can lead to system crashes, or worse, code execution at the kernel layer.
Remote driver bugs have typically been rare and can be quickly fixed once the vendor is notified. Kernel-layer bugs are very difficult to defend against without a vendor update. Antivirus software typically operates outside of the kernel, and firewall software can prevent connections on TCP/IP ports but not vulnerabilities at the wireless layer. 802.11 management packets contain no IP traffic data and are never passed up to the IP layer, but a flaw in the driver's handling of the management contents could lead to an exploitable vulnerability.
Many methods can be used to find vulnerabilities. The method du jour is "fuzzing." A fuzzer is a smart brute-force algorithm that provides enough structure to generate a packet that appears valid, but the contents of the fields are filled with iteratively randomized data. Fuzzing is not limited to wireless protocols; it has been a valuable technique for testing software responses to different types of invalid data for in-house developers and security researchers.
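A toy version of the idea in Python: keep just enough fixed structure that a frame looks plausible, then iteratively randomize the variable fields. This is an illustration of fuzzing in general, not a working 802.11 frame generator; the field layout is heavily simplified and the constants are made up for the example:

```python
import random
import struct

def fuzz_frame(rng):
    """Build a superficially valid 'management frame' with randomized fields.
    Simplified layout for illustration: 2-byte frame control, 1-byte length,
    then a variable payload standing in for an SSID element."""
    frame_control = 0x0080                       # fixed: looks like a beacon
    ssid_len = rng.randrange(0, 256)             # deliberately allow out-of-spec lengths
    ssid = bytes(rng.randrange(256) for _ in range(ssid_len))
    return struct.pack("<HB", frame_control, ssid_len & 0xFF) + ssid

# Seeding the generator makes crashing inputs reproducible for debugging.
rng = random.Random(1234)
samples = [fuzz_frame(rng) for _ in range(5)]
```

The key design point is the seed: when a fuzzed input crashes the driver, the tester can replay the exact same sequence to narrow down the offending packet.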
Fortunately, the risks of bugs in wireless drivers can be minimized. The window of exposure is extremely limited. Unlike someone attacking an Internet server, the attacker must be within radio range of the victim. Always run the latest version of the drivers for your wireless card, as they may contain fixes for vulnerabilities such as these.
The ultimate protection? Turn off your wireless card when you aren't using the network. | <urn:uuid:1ab69c56-bf7f-42d5-9c78-d6dc1cafde9b> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2305418/network-security/protecting-against-vulnerabilities-in-wireless-drivers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923607 | 519 | 3.203125 | 3 |
It’s time to have the talk with your child. You peeked at their browser history and, well, it’s time. It’s going to be awkward and uncomfortable for both of you, and things have changed a lot since you were that age. But better to hear it from a parent than learn it from a stranger or God-knows-who online.
No, not that talk. It’s time to discuss online privacy with your kids.
Won’t the internet and government regulate this for me?
Haha. Good one.
Because we all know how honest people are when asked their age before entering a website. And everyone with the ability to make a website or app has a thorough understanding of ethics and regulations when it comes to collecting data and serving advertisements to minors. With great power comes great responsibility, right?
The US Children's Online Privacy Protection Act (COPPA) has been heavily criticized for being ineffective and even counterproductive in protecting kids online. Children often resort to less age-appropriate content instead of waiting around for a parent's approval. It doesn't stop kids from accessing pornography or from being advertised to. Websites that might otherwise provide content that's appropriate for kids often ban children altogether because of the compliance burden and potential fines for violating COPPA.
The UK has been a bit more pro-active in spreading online privacy awareness among British youth through the UKCCIS and its “Click clever, click safe” mantra. However, this is the same organization that in 2013 attempted to filter websites deemed unsafe or inappropriate for children, but inadvertently blocked the websites of LGBT rights groups and charities meant to educate children about drugs, health, and sex.
So no, you can’t depend on the internet self-regulating itself or on governments (which can only create regulations for their own country, anyway) to step in on your behalf.
Is children’s privacy really an issue?
You bet it is!
Three out of four children have access to a smartphone in the US. In the UK, 43 percent of nine- to 12-year-olds have a social media profile, according to the Library of Congress. One in three are on Facebook despite the 13-year-old age limit. A quarter of those kids on Facebook never touch the privacy restrictions on their profile, and a fifth of them publicly display their address and/or phone number. Facebook claims it is powerless to stop children from lying about their age and creating accounts.
And that’s just Facebook. It isn’t even cool anymore. Snapchat, Tumblr, Vine, Instagram, and Kik are all popular among teens and pre-teens. Who knows what will come next?
Social media and games pose the biggest threat to children’s privacy, because they request a significant amount of information upon registration. Profile info is used by the social network to serve targeted ads and recommend content. That info can also be used by scammers and predators to target kids. To be fair, it happens to adults, too. But kids are far more susceptible than adults.
Ignoring a child's online activity can have both immediate and long-term ramifications. You've probably heard of horror stories where a kid unknowingly spends thousands of dollars on in-app purchases in a mobile game. Drug dealers and sex offenders target kids online, as do identity thieves. In fact, Carnegie Mellon's CyLab says children are over 50 times as likely as adults to have their social security number used by another person.
One in 40 families has a child who is a victim of identity theft, according to the Identity Theft Assistance Center and the Javelin Strategy & Research Group, and that figure is on the rise. Kids make great targets for identity theft because they have clean slates with no blemishes on their credit report. Identity fraud can go on for years without notice, because kids have no need for credit until they are old enough to buy a car, rent an apartment, or take out loans for college. When that day comes, however, these young victims are in for a rude awakening.
Enough of your fear-mongering! What can I do about it?
As a parent, there’s a fine line between protecting your kids’ privacy and invading it yourself. But there are a few simple precautions to take that will allow them freedom while safeguarding their interests.
Follow and friend your kids
Worried about what your kid is posting on Snapchat? Well, that’s easy. Install it, make an account, and follow them. Now you can safely monitor their public account activity from a reasonable distance, and they’ll likewise be more conscious about what they post. You can view their friends list on Facebook to see if there’s anyone shady. No, you won’t be able to screen what’s being said on private channels, but kids are allowed to have secrets.
Do the same for every social media account. Log into Minecraft to terrorize Junior’s village. Not only will it help keep your child safe, you’ll also get to know them and the world they live in better. It’s a win-win for all parties.
Don’t start making rules that seem arbitrary to your kid. Without being condescending, explain to them the risks and dangers of failing to protecting online privacy. Toss out some of those stats from above as proof.
Don’t go behind their back and spy on your kids, either. This will only further distrust and could leave them more exposed. When you take a measure that requires some oversight, be transparent about it.
Don’t use social logins on untrusted sites
Kids and adults alike get sucked into playing quizzes and taking surveys online, especially on Facebook. But many of these sites ask that the user log in with their social media profile before the results can be posted for friends to see. Tell your kid to avoid those games and quizzes, as many of them mine data from your child’s profile and their friends’ profiles, which is used by the company and third parties to target advertisements and who knows what else. Unless you recognize and trust the company that owns the website, don’t use your social media profiles to authenticate or authorize apps.
Adjusting kids’ privacy settings
Almost every social media app will have a tab full of privacy settings. Learn them. Read the privacy policies. Now that you have the same apps as your kid, sit down with them and disable what needs to be disabled. Remove the accounts from search results so strangers can’t send friend requests. Remove as much public profile info as possible–address, school, phone number, email address, etc. Tightening privacy settings for the most part won’t affect how a social media app functions, so your child shouldn’t put up much of a fight.
Protecting your child’s privacy is really just an extension of protecting your own privacy. You can perform many of these tasks together. We won’t cover every single app that your child may or may not have installed in this article, but we’ll touch on a few of the big ones.
First off, on all devices, location services have become the norm. This allows Apple, Google, Microsoft, and app makers to monitor the location of the user. For obvious reasons, it’s best to turn these off. Tell your kids not to geo-tag their photos on social networks–at least not until they’ve left that particular location and don’t plan to return. In newer versions of iOS and Android, you can disable the location-tracking permission on an app by app basis, or disable it entirely in the settings.
Front-facing cameras are also nearly universal on phones, tablets, and laptops nowadays. There’s no shortage of news stories about both hackers and law enforcement remotely enabling cameras unbeknownst to the user, snapping photos and learning their whereabouts. Place a sticker or piece of electrical tape over these cameras.
Always set a swipe pattern, PIN number, or password on your devices to keep both strangers and ill-willed acquaintances out of your kid’s business.
Facebook privacy settings
It’s likely that no social network on the internet knows more about us than Facebook, and the privacy settings of the world’s largest social network can unfortunately be a bit tricky to navigate. Start by going to the top-right corner of the home page and clicking the lock icon. Click to drop down “Who can see my stuff?” and switch it to “Just friends.” This should keep your child out of view from passing strangers.
In the next drop-down, “Who can contact me?”, you can set who is allowed to send your child friend requests. There’s no longer an option to make a profile un-searchable. Instead, the most private option you get is to only allow Friends of Friends to send friend requests. Your kid can still send friend requests to whomever he or she pleases, so it won’t limit who they can be friends with.
On the last section, block scammers, cyber bullies, and anyone else you don’t want your kid communicating with.
We’re not done yet. On the very bottom of this tab, click “See more settings.” Here you can prevent people from searching for your kids’ account by their phone number or email address. Do so.
Click the “Timeline and Tagging” tab on the left sidebar of this page. Set all these settings to “Friends” when available to keep strangers at bay. Here you may also want to add the option to review photos and statuses in which your child is tagged. This prevents any inappropriate photos and cyber-bullying from showing up on their account, which could otherwise come back to haunt them later.
Facebook now lets users choose to share statuses and photos but exempt specific people from seeing them. Let your kid know that they shouldn’t block you in this way, as anything that they don’t feel comfortable sharing with you shouldn’t be shared with the rest of the world.
Next up is the Followers tab. A follower is basically someone who can view your profile and posts but isn’t personally friends with you. Switch this from Everybody to Friends as another barrier to strangers.
After you’ve removed all the unnecessary permissions from all these apps (make sure to click Show All at the bottom), scroll down a bit further to the three panels below the app list. Under Apps Others Use, click Edit. This is a list of information that apps used by your friends can see on your profile. Tricky, right? Even after disabling all those app permissions, the apps used by your friends can still access your information. Uncheck everything and stick it to Big Brother.
Okay, last step. Go to the Security tab at the top of the left sidebar. The privacy-related bits we’re most concerned with here are Login Approvals, App Passwords, Your Browsers and Apps, and Where You’re Logged In.
- Login Approvals is basically the same as two-step authentication. Whenever logging in from a new device, a code will be sent to your phone as an extra layer of security. You will have to add a phone number if you haven’t already.
- App passwords lets you set a separate password for apps that support this function and allow you to log in with your Facebook account, such as Spotify and Skype. It’s a good idea to have different passwords for each app when possible. Learn more about creating and memorizing strong passwords here.
- If your kid gets a new phone or logs in on someone else’s device, the Your Browsers and Apps setting is important. It’s a log of devices that don’t require identity confirmations or send notifications when logged in. Remove any that aren’t among your current devices or that you don’t recognize.
- Where You’re Logged In is similar to the above setting, but for active logins. Again, remove any you don’t recognize or that aren’t yours.
Snapchat privacy settings
If sifting through all of Facebook’s privacy settings made you weary, you’ll be happy to know Snapchat is much simpler. Launch the app and click the ghost icon at the top of the screen, then the settings cog at the top right. Scroll down to the “Who can…” section. Set both “Contact Me” and “View My Story” to My Friends. “View My Story” can also be customized to a specific list of people. This is also where to block certain individuals.
Back on the settings page, click Login Verification to set up two-step authentication. This can be done with a phone number and SMS or using an authentication app like Google Authenticator or Authy. Login verification makes logging into Snapchat from a new device a two-step process, which is more secure.
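Authenticator apps of this kind implement the TOTP standard (RFC 6238). For the curious, here is a minimal sketch of how such a code is derived, using only Python's standard library; the key shown is the RFC's published test key, not a real secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890" at T=59 seconds
rfc_key = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_key, digits=8, now=59))  # -> 94287082
```

The code changes every 30 seconds and depends on a shared secret, which is why it is a much stronger second factor than a static password alone.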
Instagram privacy settings
Rounding out the top three most-used apps among teens is Instagram. To find Instagram’s privacy settings, click the head and shoulders icon on the bottom right, then the three dots on the top right.
You can elect to switch to a Private Account, but most would agree this sort of defeats the point of Instagram. Other than that, there’s not much to make private.
Instead, privacy and safety on Instagram is more about how the app is used. When you post a photo, don’t add a location until after your child has left said location, and only if they don’t plan to return anytime soon. Otherwise, strangers can determine where your kid hangs out or where they are as soon as the photo is posted. Not only could this mean a predator could find your kid, it also means a burglar could figure out if a family is home or not.
Twitter privacy settings
Similar to Instagram, there’s not much to hide on Twitter. Don’t add any personal details to tweets or the profile blurb, and you’ll be fine.
Tumblr privacy settings
Tumblr isn’t quite as popular among teens as other apps, but among the art and poetry lies a haven for porn, smut, and vulgarity. Tumblr doesn’t require a real name upon registration, so there’s no need to use one. Privacy settings can be accessed through the app or on the website.
Here you can disable messaging so strangers can’t contact children. If your child has his or her own Tumblr blog, it’s probably a good idea to disable comments and replies to posts. Blogs can be made private, but this makes them password protected. It’s between you and your child if you think this is best.
As with everything else, don’t post personal details and be smart about geo-tagging photos.
Parental controls can be enabled by either a built-in mechanism or by a third-party tool on Android, iOS, and most modern web browsers. These controls not only protect your child from inappropriate content, but can also prevent them from inadvertently divulging personal details about themselves or your family. These are mainly aimed at younger children; your 16-year-old won’t appreciate the level of micromanagement that these tools offer.
Android lacks dedicated parental control, but some phones come with the ability to create multiple user accounts. In the settings, check for a “Users” section, where you can add a restricted profile. A restricted profile allows you to toggle which apps the user can access. This is especially useful if you allow a young child without a phone of his or her own to play with your tablet or phone. The account switches depending on the PIN or password entered on the lock screen.
If you’re worried about invasive apps or games that are likely to run up a bill, parents can require that their Google account password be entered before downloading an app or making in-app purchases. Apps can be filtered by low, medium, or high maturity levels.
Several apps out there make it easy to monitor and manage what children do with their phones. Norton Family Premier costs a whopping $49.99, but it comes with a slew of useful features including location tracking, the ability to block individual apps, and web filtering. Parents can see and limit when and how much screen time their kids get. It also works on multiple devices for families with multiple smartphone-toting rascals. Qustodio, Net Nanny, and PhoneSherriff are other solid premium options.
For free alternatives, check out Funamo, Lock2Learn, MM Guardian, and AppLock.
iPhones and iPads, unlike Android, have some parental controls built in. In the General settings of iOS, just click on Restrictions and create a passcode. Here you can disable installed apps and certain features. Safari, the App Store, FaceTime, music apps, Siri, and in-app purchases can all be turned off or filtered.
Social media and location services can be restricted as well.
If parents want to be able to monitor and manage iPhone use with more granularity, there’s an app for that. Netsanity, Qustodio, OurPact, and Kidslox all include features like curfews, timers, site blockers, and app hiders.
In Chrome settings on the desktop browser, scroll down to the People section. Uncheck “Let anyone add a person” so your child can’t easily circumvent it, then click “Add person.” You can choose to create a desktop shortcut especially for them. Select an icon for them and check the option to control and view the websites this person visits from your account. Navigate to the supervised users dashboard at https://chrome.google.com/manage. Choose your new profile, then click Manage on the top right of the Permissions frame. Here you can enter specific websites to block, or only allow certain websites to be accessed. Remember to enable SafeSearch as a general filter for kids. If you block a site that your tiny surfer wants access to, he or she can request it without even having to ask you face to face.
For more granular controls, a handful of extensions in the Chrome store should fulfill your needs. WebFilter Pro and Blocksi Web Filter offer features like time management, Youtube filtering, web filtering, whitelists, and blacklists.
Firefox doesn’t come with any built-in parental control measures, so you’ll have to rely on third-party plug-ins. FoxFilter is probably the most widely used. The sensitivity can be set to block keywords in the body text or just in the metadata, such as page title and URL. Specific keywords and websites can be blacklisted and whitelisted, and many keywords are included upon installation.
Microsoft introduced dedicated children’s accounts starting with Windows 8. On Windows 10, click on the start menu and go to Settings. Head to the Accounts > Family and Other Users, and hit “Add a family member.” On the following screen, choose “Add a child.” You may need to create an email account for them. Enter a phone number used to reset the password. Windows will then ask you if you want to let Microsoft target your kids with ads or send them promotional offers. Turn these off, as they are counterproductive to the whole privacy stance we’re trying to take.
Now that you have a child account set up, you can receive weekly reports on their activity and manage the settings online. You can choose to block inappropriate websites, add your own sites to the whitelist and blacklist, limit apps and games by rating, and set when and how long the computer can be used.
You might find this process a bit off-putting since your child has to register with an email address. If that’s the case, third-party applications are also available. You might recognize the names from our Android and iOS lists: Qustodio, Norton Family, and SocialShield are all solid options. SocialShield is particularly useful for monitoring social media, alerting parents to posts containing content about sex and drugs, suspicious friend requests, and messages that could lead to a real-world interaction.
To turn on parental controls in OSX, head to the System Preferences in the Apple Menu. Click Parental Controls, and add a new user with parental controls enabled. Now back on your administrator account, enable parental controls for the new user. If you spoiled your kid with his or her own Macbook, you can also manage parental controls from another computer.
To set restrictions, click through the tabs on the top. Apps lets you specify a permitted rating and what apps your kid can access. Web lets you filter access to websites. People restricts a child’s interaction with others through the Mail, iMessage, and Game Center apps. Time limits is for time management. Other can be used to censor language, block the built-in camera, and prevent password changes.
Qustodio and Norton Family are also available as third-party parental control software for Macs.
ID theft protection
ID theft can happen to anyone, and children are often targeted because few people think to check their kids’ credit reports. Be one of the few who do. In the US, all citizens are granted one free credit report from each of the three national credit reporting bureaus per year, which you can get from AnnualCreditReport.com. Order a copy and check it for any unauthorized or suspicious activity.
UK citizens don’t get the same courtesy, but a few credit reporting agencies offer free trials from which you may obtain your kids’ credit reports.
Credit reporting begins as soon as a child has an account opened in their name for which a credit check is required. From that point on, they have a credit score.
Teach your kids good habits early on, such as thoroughly checking each purchase on a credit card statement if they have one, and regularly monitoring bank accounts for any activity they didn’t authorize. Let them know the importance of safeguarding their social security numbers (national insurance numbers if you’re in the UK), as well as other ID numbers on driver’s licenses and medical insurance cards. These can all be used to commit fraud under your kid’s name and damage their credit for years to come.
As an added layer of protection, you might consider investing in an identity theft protection service. These agencies monitor your personal information, bank accounts credit cards, and public records for misuse. They offer assistance should discrepancies or fraud crop up, along with large insurance plans to compensate for any losses that occur as a result of identity theft. If your child has previously been a victim of identity theft, then they are more at risk, so these services are especially useful.
TrustedID is the only ID theft protection service that we’ve reviewed with a true family plan, but other agencies usually have options to enroll kids. Check out all of our ID theft protection reviews to find which one best suits your family.
Use a VPN
When inputting private information on registration pages and online shopping sites, make sure the site uses a verified SSL certificate. This is usually indicated by a lock icon and a URL prepended with HTTPS. This encrypts communication between the browser and the server. Install the HTTPS Everywhere extension on your browser to use HTTPS by default when available.
HTTPS isn’t available for most websites, however, so whatever information transmitted from your computer to the web is unencrypted and viewable by anyone who wants to see it. To better protect yourself and your children, invest in a VPN service. A VPN encrypts all your incoming and outgoing traffic, and it also routes that traffic through a server in a location of your choosing. This has the effect of making all your internet activity anonymous while hiding both the content of your connection and masking your IP address and true location.
Switching on the VPN before surfing the web or doing anything else online is a good habit to get into for both parents and kids.
When it comes to ease of use–even something a young child could learn to use–it’s tough to beat ExpressVPN. It’s one of the fastest VPNs we’ve tested and is designed with novices in mind. On the downside, it doesn’t offer family plans, and the individual plans are relatively expensive. Read our review of ExpressVPN.
Making your child anonymous
In combination with a VPN, the following list of precautions can make your child invisible or at least more of an enigma to the internet at large.
Fake personal information
You probably spent a fair amount of time teaching your kids how to spell their names, when their birthday is, and the address where they live. Now teach them to lie about it to strangers, when asked. Use a fake birthday on Facebook, if you use one at all. A first and middle name is preferable to a first and last. If you’re not expecting mail, use a phony address.
Enable ad blockers
Online advertisements aren’t just for advertising, they are also used to mine data from the person viewing them. Some are harmless, but others are downright malicious. Ad blockers and anti-tracking extensions can prevent ad companies from snooping on your kids. We recommend Ad Block Plus and Disconnect. Disconnect even offers an educational kids version, Disconnect Kids.
Whether for homework or curiosity, children will need to use search engines. Google and others will collect information on every user to create a profile around them, which is used to make recommendations and target ads. On both mobile devices and desktop browsers, you can set the default search engine to something more anonymous.
DuckDuckGo, StartPage, and ixquick don’t log IP addresses, use tracking cookies, or monitor what results you click on. StartPage and ixquick actually scrub your personal details before submitting the search query to Google or another major search engine on your behalf, so you get the same results without giving up any information.
This can be more difficult than it sounds. Depending on your child’s age, you may want to untag all photos of your kid that appear online. On Facebook, as mentioned above, you can opt to review any photo that your child is tagged in before the tag is made public to friends. But not all social networks have such granular controls.
If your child is on a sports team or club, this can get tricky. Discuss the issues with adult leaders and coaches about tagging kids in photos and making web pages and Facebook groups private. Lay ground rules with babysitters and fellow parents about posting photos online.
Likewise, tell your child to be respectful and not tag anyone in a status or photo without their permission.
Kid’s privacy isn’t just about protecting them from predators and fraudsters, although that’s certainly reason enough. But there’s a societal impact on kids who are bombarded by algorithm-triggered advertising and marketing. Kids are impressionable, and their minds can be shaped by what they see online. What they see on the internet is defined by what Google, Facebook, Microsoft, and other corporations that rely on advertising and mass dragnet data collection want them to see.
Likewise, in an age when nothing is forgotten, children are also shaped by what they leave behind. In a Wall Street Journal article, Julia Angwen sums it up best:
“They won’t have the freedom I had as a child to transform myself. In junior high school, for example, I wore only pink and turquoise. But when I moved across town for high school, I changed my wardrobe entirely and wore only preppy clothes with penny loafers. Nobody knew about my transformation because I left no trail, except a few dusty photographs in a shoebox in my parents’ closet. Try that in the age of Facebook.”
The internet can open up more of the world to your child than any generation before them. But we shouldn’t allow faceless for-profit corporations to mould them into a class of consumers limited to the online personas that they unknowingly helped to create.
“At the computer” by Lars Plougmann licensed under CC BY-SA 2.0
“Parental Controls in Leopard: Content Filtering” by Wesley Fryer licensed under CC BY-SA 2.0 | <urn:uuid:ad0fb00f-1639-4dd7-9615-462cdd6ca693> | CC-MAIN-2017-04 | https://www.comparitech.com/blog/vpn-privacy/protecting-childrens-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933858 | 5,975 | 2.59375 | 3 |
A couple of years ago NASA's Aqua satellite took an out-of-this-world, cloud-free shot of Ireland. NASA noted at the time that such a cloud-free view is rare, as the country is almost entirely cloud covered 50% of the time, according to the Irish Meteorological Service, Met Éireann. There are more clouds during the day than at night, and fog is common.
NASA said Ireland owes its greenness to moderate temperatures and moist air. The Atlantic Ocean, particularly the warm currents in the North Atlantic Drift, gives the country a more temperate climate than most others at the same latitude. Moist ocean air also contributes to abundant rainfall. Ireland receives between 750 and 2000 millimeters (29 and 78 inches) of rain per year, with more rain falling in the west and in the mountains. Most of the rain falls in light showers.
The snow shot comes from the winter of 2009-2010, which was unusually cold and snowy, NASA said. Called "The Big Freeze" by the British media at the time, it brought widespread transportation problems, school closings, power failures and twenty-five deaths. A low of -22.3°C (-8.1°F) was recorded on January 8, 2010, making it the coldest winter since 1978/79.
Here are the cloud-free shot and the snowy one for your St. Patrick's Day celebrations:
Following the pattern of the first ten amendments to the U.S. Constitution—the so-called Bill of Rights—many industries propose similar sets of principles designed to protect individuals and/or organizations in those industries. For instance, one might read about a patient’s bill of rights. According to Federico Guerrini at ZDNet, a recent proposal is an Internet users’ bill of rights. The Italian document, “meant to serve as the foundation for defining web users’ rights and obligations, was officially made public, under the name of the ‘Declaration of Internet Rights.’”
Such efforts, although laudable in their attempts to address abuses by companies and (more particularly) governments, are misguided on at least two fronts. First, by often making up new rights rather than simply applying established rights (right to life, property and so on), they muddy the philosophical waters and create a hodgepodge ideology that is difficult to justify. The arbitrary nature of these declarations effectively means anyone can arbitrarily reject them. Second and more importantly, bills of rights are generally worth about as much as the paper they’re written on.
According to the Fourth Amendment, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” How dragnet surveillance by the NSA fails to violate that right, I will never understand. Yet government justices and other agents seem to find no conflict, given their decisions and policies. Similar arguments apply to the other amendments, which are trampled regularly in the U.S. Thus, there is no reason to think that an “Internet bill of rights” will fare any better. Perhaps before we muck things up with more such proposals, we should just focus first on getting the basics right: life, property and free association.
Kaspersky Lab has taken out a US patent for an advanced technology that detects unauthorised modifications of data. Unsanctioned modification of data, regardless of whether it is intentional or accidental, results in data distortion and loss. Unauthorised modification of software code can lead to program execution errors. It is a well-known fact that most crimeware injects code into executable files, leading to the execution of malicious code when the infected files are running. Ensuring data integrity is therefore a major IT security issue.
File integrity can be ensured by using such technologies as hashing, digital signatures and tracking the most recent modifications made to a file. However, the first two methods are too resource-intensive to be used for ensuring the integrity of all the files on a computer system, while the standard implementation of the latter method is unreliable: many of today’s malicious programs are capable of altering time stamps to conceal any trace of file modification. Standard integrity control methods either consume too many system resources or can occasionally miss infected files, leading to further distribution of malicious programs.
The advanced technology developed by Kaspersky Lab’s Mikhail Pavlyushik is free of these shortcomings. It checks file integrity reliably and quickly, without significant resource consumption. Patent No. 7526516 was issued for the technology by the US Patent and Trademark Office on 28th April, 2009.
Quick and reliable tracking of file modifications
The technology is based on the interception of application requests to change timestamps for one or more files. Such requests are tracked for each file and stored in a database. This information is then provided to a special module (usually a component of the antivirus program) which compares the timestamp update counter with the relevant timestamp. Changes to the timestamp update counter which are not accompanied by the relevant changes to the timestamp indicate file modification and possible infection. The antivirus program can then scan the file for malicious code or display an alert.
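The counter-versus-timestamp idea can be illustrated with a small sketch. This is a simplified toy model, not Kaspersky's patented implementation: a real product would hook the OS call that applications use to change file timestamps, and the class and method names here are invented for this example.

```python
import os

class TimestampMonitor:
    """Toy sketch of the counter-vs-timestamp check described above.
    on_timestamp_change() stands in for the interception layer that
    would normally hook timestamp-change requests at the OS level."""

    def __init__(self):
        self.counters = {}   # path -> intercepted timestamp-change requests
        self.snapshots = {}  # path -> (counter, mtime) seen at the last check

    def on_timestamp_change(self, path):
        # Record every request to alter this file's timestamp.
        self.counters[path] = self.counters.get(path, 0) + 1

    def needs_scan(self, path):
        # If the counter advanced but the visible timestamp did not,
        # something rewrote the file and then reset its mtime -- the
        # pattern the patent associates with possible infection.
        count = self.counters.get(path, 0)
        mtime = os.path.getmtime(path)
        prev_count, prev_mtime = self.snapshots.get(path, (0, None))
        self.snapshots[path] = (count, mtime)
        return count > prev_count and mtime == prev_mtime
```

A file flagged by `needs_scan()` would then be handed to the antivirus engine for a full scan, so that clean, unmodified files never pay the cost of hashing or signature checks.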
The method and its software implementation that has been patented by Kaspersky Lab provide quick and reliable tracking of file modifications, triggering antivirus scans to prevent execution of malicious code. “The greatest advantage of this method is that it is fast and allows files to be scanned with minimal consumption of system resources,” said Kaspersky Lab’s Chief Intellectual Property Counsel Nadia Kashchenko. “The technology makes the antivirus program’s operation more transparent to the user without sacrificing its high level of protection. This is a very significant invention that is unique to Kaspersky Lab and it has already been implemented in the company’s products.”
Kaspersky Lab currently has more than 30 patent applications pending in the US and Russia related to a range of innovative technologies developed by company personnel.
About Kaspersky Lab
Kaspersky Lab is the largest antivirus company in Europe. It delivers some of the world’s most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. The Company is ranked among the world’s top four vendors of security solutions for endpoint users. Kaspersky Lab products provide superior detection rates and one of the industry’s fastest outbreak response times for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky® technology is also used worldwide inside the products and services of the industry’s leading IT security solution providers. Learn more at www.kaspersky.com. For the latest on antivirus, anti-spyware, anti-spam and other IT security issues and trends, visit www.viruslist.com. | <urn:uuid:d61203da-c2b8-4a5f-ad1c-8c471d1b0f84> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/business/2009/Kaspersky_Lab_patents_technology_for_safeguarding_data_integrity | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00269-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911446 | 741 | 2.640625 | 3 |
What is a network? In its simplest form, a computer network can consist of just two computers connected to share available resources, such as hardware and files, and to communicate with each other. In a broader sense, a network can encompass thousands of computers, because running a big business the traditional way is difficult without one; a network also provides an easy way for employees to stay in touch and cooperate. In other words, computer networking refers to the assembly of various hardware components and the interconnection of computers with the help of communications media. The key purpose of networking is to allow the sharing of an organization's resources as well as information.
A conductive ink (CI) is a thermoplastic viscous paste that conducts electricity by incorporating conductive materials such as silver and copper. The ink comprises a binder, a conductor, a solvent, and surfactants used during its manufacturing process. The Europe conductive inks market was valued at $500.2 million in 2012, and is projected to reach $605.4 million by 2018, growing at a CAGR of 4.4% from 2013. Silver flakes are the dominant conductive filler and are in huge demand in Europe. Conductive inks are widely used in photovoltaics, which is driving demand for conductive inks in Europe.
The binder holds together all the conductive materials in the ink and provides strong support to the product. It is particularly important in applications that require high reliability and flexibility. The conductor is the part of the ink that allows the passage of electricity; the conductors used in conductive inks include silver, copper, nickel, and aluminum. The solvent is used to form the solution, while the surfactants help to mix the ink uniformly. Conductive inks have various applications, such as photovoltaics, membrane switches, automotive, RFID/smart packaging, bio-sensors, printed circuit boards, and others.
The European chemical industry is a significant part of the region's economy. The industry is divided into four segments: base chemicals, specialty chemicals, pharmaceuticals, and consumer chemicals. Germany is the largest chemical producer in Europe, followed by France, Italy, and the Netherlands. These four countries together account for 64.0% of Europe's chemical sales. In the past, most of the European chemical industry's growth was driven by domestic sales, but these days growth depends on both the domestic and the export markets. Germany is currently driving the European conductive inks market.
The key countries covered in the Europe conductive inks market are Germany, the U.K., France, and others. The types of conductive inks studied include conductive silver ink, conductive copper ink, conductive polymers, carbon nanotube ink, dielectric inks, carbon/graphene ink, and others. Further, as part of its qualitative analysis, the Europe conductive inks market research report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the conductive inks market.
The Europe conductive inks market report also provides an extensive competitive landscape of the companies operating in this market. It includes company profiles of, and competitive strategies adopted by, various market players, including Applied Nanotech Holdings Inc. (U.S.), Conductive Compounds Inc. (U.S.), Creative Materials Inc. (U.S.), and E.I. Du Pont De Nemours and Company (U.S.).
Along with market data, you can also customize MMM assessments to meet your company's specific needs, with comprehensive industry standards and deep-dive analysis of the following parameters:
- Market size and forecast (Deep Analysis and Scope)
- Competitive landscape with a detailed comparison of portfolio of each company mapped at the regional- and country-level
- Analysis of forward and backward chain integration to understand the business approach prevailing in the Europe conductive inks market
- Detailed analysis of competitive strategies, such as new product launches, expansions, and mergers & acquisitions, adopted by various companies and their impact on the Europe conductive inks market
- Detailed analysis of various drivers and restraints with their impact on the Europe conductive inks market
- Upcoming opportunities in conductive inks market
- Trade data of CI market
- SWOT for top companies in conductive inks market
- Porter's five forces analysis for the conductive inks market
- PESTLE analysis for major countries in conductive inks market
- New technology trends of the CI market
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement | <urn:uuid:957deff1-9eaa-4b6f-8eab-0c7f7c3e3cb2> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market/europe-conductive-inks-7436760923.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939779 | 891 | 2.96875 | 3 |
People tend to think of the TOR network as a silver bullet, which is not the case. Even on TOR's distribution site it's clearly stated that TOR will not guarantee complete privacy.
What's TOR? If you don't know, TOR is a network of proxies designed to give some privacy and anonymity to its users.
From Wikipedia: Tor (The Onion Router) is a free software implementation of second-generation onion routing — a system enabling its users to communicate anonymously on the Internet. Originally sponsored by the US Naval Research Laboratory, Tor became an Electronic Frontier Foundation (EFF) project in late 2004. …
Like all current low latency anonymity networks, Tor is vulnerable to traffic analysis from observers who can watch both ends of a user's connection.
So here's the question. What other suspicious stuff is occurring on TOR? Let's take a look.
Here's a node that only accepts HTTP traffic for Google and MySpace; it resides under Verizon:
AS | IP | AS Name — 19262 | 18.104.22.168 | VZGNI-TRANSIT - Verizon Internet Services Inc.
While curious and perhaps even suspicious, it isn't necessarily malicious. It could just be a Samaritan particularly concerned with anonymous searches and MySpace profiles for some reason. But there's no way to tell, so why use such a node if you don't have to?
But how about this one?
Now here's a node that was monitoring SSL traffic and was engaging in Man-in-the-Middle (MITM) attacks. Definitely bad.
AS | IP | CC | AS Name — 3320 | 22.214.171.124 | DE | DTAG Deutsche Telekom AG
Here's how the testing was done:
A test machine with a Web server and a real SSL certificate was configured. A script was used to run through the current exit nodes in the directory cache. Connections were made to the test machine. A comparison of the certificates was made.
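The certificate-comparison step can be sketched with Python's standard-library ssl module. This is a simplified illustration: the actual test also routed each connection through a specific Tor exit node (for example via a SOCKS proxy), which is omitted here.

```python
import hashlib
import ssl

def fingerprint_der(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def fetch_fingerprint(host, port=443):
    # Fetch whatever certificate is presented on the wire. When the
    # connection passes through a malicious exit node, this may be a
    # forged certificate rather than the server's real one.
    pem = ssl.get_server_certificate((host, port))
    return fingerprint_der(ssl.PEM_cert_to_DER_cert(pem))

def looks_like_mitm(known_fingerprint, observed_fingerprint):
    # Compare against a fingerprint recorded over a trusted, direct
    # connection; any mismatch suggests interception.
    return known_fingerprint.lower() != observed_fingerprint.lower()
```

Run the check once per exit node and flag any node whose observed fingerprint differs from the known-good one.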
And the exit node at 126.96.36.199 provided a fake SSL certificate!
Note that this was only one of the 400-plus nodes tested. But it only takes one.
Once the node faked the SSL of the test server, a well-known "payments and money transfer" site was tested, and it faked those SSL certificates as well.
Information was forwarded to the German authorities and the node is no longer available. It appears that prompt action was taken against it.
More details on the investigative process can be found here and here.
Any technology can be used in the wrong way, a fact that will never change. Be careful out there. | <urn:uuid:8e6836c7-b59c-4e30-8a57-8e177862a655> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00001321.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00298-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956512 | 547 | 2.78125 | 3 |
NASA engineers updated the software for a robotic Mars rover, correcting a computer glitch more than two months old while the robot hurtled through space on its way to Mars.
Late in November, NASA launched its $2.5 billion Mars Science Laboratory. Dubbed Curiosity, the SUV-size super rover is on an eight-month journey to Mars with a mission to help scientists learn whether life can exist, or has ever existed, on the Red Planet.
However, a problem caused a computer reset on the rover Nov. 29, three days after the launch, NASA reported last week. The problem was due to a cache access error in the memory management unit of the rover's computer processor, a RAD750 from BAE Systems.
"Good detective work on understanding why the reset occurred has yielded a way to prevent it from occurring again," said Mars Science Laboratory Deputy Project Manager Richard Cook, in a statement. "The successful resolution of this problem was the outcome of productive teamwork by engineers at the computer manufacturer and [NASA's Jet Propulsion Laboratory]."
Guy Webster, a spokesman for the JPL, told Computerworld that because of the processor glitch, the rover's ground team was unable to use the craft's star scanner, which is designed for celestial navigation.
That technology was not in use for several months, and NASA engineers had to guide the rover through one major trajectory adjustment using alternate means, according to Webster.
The fix, which was uploaded to the rover as it traveled through space, changed the configuration of unused data-holding locations, called registers.
NASA reported that engineers confirmed this week that the fix was successful and that the star scanner is working again.
Curiosity, equipped with 10 science instruments, is expected to land on Mars in August.
The super rover is set to join the rover Opportunity, which has been working on Mars for more than six years. Opportunity has been working alone since a second rover, Spirit, stopped functioning last year.
Curiosity will collect soil and rock samples, and analyze them for evidence that the area has, or ever had, environmental conditions favorable to microbial life.
Curiosity weighs one ton and is twice as long as and five times heavier than its predecessors.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:fe0293aa-f137-4e48-b0e1-b8ca63851f3c> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2501681/emerging-technology/nasa-fixes-computer-glitch-on-robot-traveling-to-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00536-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955154 | 510 | 2.90625 | 3 |
Question 3) Cert-XK0-002 – CompCert: Linux+
Objective : Configuration
SubObjective : Configure basic server network services
Single Answer Multiple Choice
Which Apache directive is used to specify the type and severity of errors encountered by the Apache server?
The LogLevel directive is used to set the type and severity of errors encountered by the Apache server. The following error levels, encountered by the Apache server during data processing, are logged by default in the error_log file:

- Emerg: indicates an emergency; the system stops functioning during an emergency.
- Alert: an error demanding attention that requires immediate user action.
- Crit: signifies a critical system condition.
- Error: reports a general error condition that may affect the proper functioning of an application without hampering the overall functioning of the system.
- Notice: a lower-priority notification that alerts the user to system-generated events.
- Info: a lower-priority notification that provides the user with system-related information.
- Debug: displays debug-level system-generated messages.
The ErrorLog directive is used to configure the name of the file to which the Apache server will write the error messages.
There are no Apache core module directives called LogError or SetErrorLog.
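Putting the two directives together, a typical httpd.conf fragment might look like this (the file path and severity level are illustrative, not required values):

```apache
# Write messages of severity "warn" and above to this file
ErrorLog /var/log/httpd/error_log
LogLevel warn
```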
Core-Apache HTTP server, LogLevel Directive, http://httpd.apache.org/docs/2.0/mod/core.html#loglevel
Log Files- Apache HTTP server, Error Log, http://httpd.apache.org/docs/1.3/logs.html
These questions are derived from the Transcender Practice Test for the CompTIA Linux+ certification exam. | <urn:uuid:f796437d-e309-4322-8a41-a6153f8d40bb> | CC-MAIN-2017-04 | http://certmag.com/question-3-cert-xk0-002-compcert-linux/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00444-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.790794 | 366 | 2.84375 | 3 |
It seems like everywhere you turn, people in technology are talking about the Cloud, as in “We’re going to the Cloud.” or “We’ll be up in the Cloud next year.” Those who are not so immersed in technology are asking what this mysterious Cloud is. So here’s my stab at explaining it.
Wikipedia defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand access to a shared pool of configurable computing resources." The Cloud is a bunch of computers in a bunch of datacenters that can be connected to over the internet and used in various ways by the customers that rent them. Those various ways revolve around the three models of Cloud computing: IaaS – Infrastructure as a Service (pronounced eye-as); PaaS – Platform as a Service (pronounced paz); and SaaS – Software as a Service. IaaS is the most basic and least costly. Customers rent time on servers that may have an OS installed and not much else; it's up to the consumer to deploy applications, patch the OS, run backups, and so on. The PaaS model gives you more and costs more. It offers a computing environment where developers have all the tools they need to run an application without worrying about the details of patching the OS or doing backups. The SaaS model offers still more: customers don't deal with the servers themselves at all, but only with software services that they subscribe to.
The Cloud provider with the biggest market share is Amazon Web Services (AWS). Its share is bigger than those of the next three market leaders combined. Its claim to fame is that it is reasonably priced and reliable. Microsoft's Azure is in second place; it hangs its hat on enterprise services and is more expensive than AWS. Google and IBM bring up the rear.
Both consumers and producers of software are missing the boat and will be left behind if they don’t take a look at the Cloud. Consumers are those who currently host and administer Microsoft Exchange or a CRM system or some other application. They can “go to the Cloud” and say goodbye to the section of their IT department that builds and maintains company owned servers in a company owned datacenter. They can then sell or donate those servers because they won’t need them anymore. Producers of software no longer have to purchase and maintain a datacenter for their web site that sells office products. They can rent it in the cloud and let someone else worry about making sure the air conditioning doesn’t go out on that 100 degree day in August.
In summary, the Cloud is a game-changing technology made possible by recent advances in high-speed internet connections and more powerful, flexible computing resources. As those resources inevitably become cheaper and faster, more and more companies will move to the Cloud just as more and more people at home are streaming movies and TV shows rather than buying DVDs. I don’t want to rain on your parade, but it’s time you took a good look at the Cloud. | <urn:uuid:e1cef46b-9f16-421d-9643-da25b1b10de2> | CC-MAIN-2017-04 | http://blog.enowsoftware.com/scrum-developer/what-is-the-cloud | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00078-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957969 | 643 | 2.703125 | 3 |
Global Potash Market Report: 2015 Edition
Potash refers to a group of potassium bearing minerals and chemicals. The most dominant potash product in the market is the compound, potassium chloride (KCl), a naturally occurring, pink, salty mineral. For the most part, potash-bearing rock deposits are derived from the minerals in ancient seas that dried up millions of years ago. Fertilizer potash is principally derived from these potash rocks and requires separation from salt and other minerals.
More than 90% of the potash produced in the world is used for fertilizers. It is one of the three key ingredients for plant growth, which in turn is vital to meet growing food requirements. Its main functions include the production of sugar and starches, and the regulation of water conditions within the plant cells. It is also intimately linked with nitrogen use, by improving the effect of nitrogen fertilizer in the soil. Potassium activates over 60 enzymes which are involved in many important plant physiological processes.
It promotes photosynthesis, and intensifies the storage of assimilates; this is why it is of particular importance for root and tuber crops such as potatoes. It also improves the quality and flavor of vegetables and fruits. In brief, potassium raises yields and food value, builds disease resistance, and improves shipping, handling and storage qualities. It is applied directly to soils, or physically mixed with other nutrients and applied directly, or chemically bound with nitrogen and/or phosphate. In animals, it helps growth and milk production. The most common forms of potash are Potassium Chloride, Potassium Sulfate and Potassium Nitrate.
The key factors driving the growth of the potash market are the growing demand for biofuels, the rising global population, rising demand for rice and oil palm, and declining arable land. Some noteworthy trends and developments in this industry are expansion in emerging countries and optimized fertilizer application, a rebound in potash consumption with favorable crop-to-potash price ratios, and the introduction of new potash projects. However, the expansion of the global potash market is hindered by increasing concerns over water availability, oversupply, and falling crop prices.
By combining SPSS Inc.’s data integration and analysis capabilities with our relevant findings, we have predicted the future growth of the market. We employed various significant variables that have an impact on this industry and created regression models with SPSS Base to determine the future direction of the industry. Before deploying the regression model, the relationship between several independent or predictor variables and the dependent variable was analyzed using standard SPSS output, including charts, tables and tests. | <urn:uuid:2bbc90db-ae4a-41b9-b535-9eaada4e4f46> | CC-MAIN-2017-04 | http://www.marketreportsonline.com/443806.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93162 | 541 | 2.9375 | 3 |
Figure 9.1 Break-even DSS developed in Excel.
A break-even calculation shows the level of operations in units produced at which revenues just cover costs (profit equals zero). The break-even volume can be computed in a number of ways. One approach divides fixed costs by the contribution margin to find the break-even quantity. The contribution margin is the selling price per unit minus the variable costs per unit. Alternatively, the break-even quantity can be calculated by solving the expression: (Price * Quantity Sold) - (Fixed Cost + (Variable Cost per unit * Quantity Sold)) = 0.
A typical break-even model assumes a specific fixed cost and a constant average variable cost. The break-even quantity can be calculated in a spreadsheet by using a goal-seeking capability to set profit equal to zero, where Profit equals Revenue minus Total Costs. Figure 9.1 shows a Break-even DSS developed in Excel.
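The same arithmetic is easy to sketch outside of a spreadsheet. The snippet below (the price, cost, and fixed-cost figures are illustrative, not from the text) computes the break-even quantity from the contribution margin and then confirms, goal-seek style, that profit is zero at that quantity:

```python
def break_even_quantity(price, variable_cost_per_unit, fixed_cost):
    # Contribution margin = selling price per unit minus variable cost per unit
    contribution_margin = price - variable_cost_per_unit
    return fixed_cost / contribution_margin

def profit(quantity, price, variable_cost_per_unit, fixed_cost):
    # Profit = Revenue - Total Costs
    return price * quantity - (fixed_cost + variable_cost_per_unit * quantity)

# Illustrative figures: $25 price, $15 variable cost per unit, $50,000 fixed cost
q = break_even_quantity(25.0, 15.0, 50_000.0)
print(q)                                # 5000.0 units
print(profit(q, 25.0, 15.0, 50_000.0))  # 0.0 at break-even
```

Excel's goal-seek arrives at the same answer numerically; the closed-form division works here because the model assumes a constant average variable cost.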
A break-even model provides a quick glance at price, volume and profit relationships. Actually determining fixed and variable costs can be difficult, but in most cases managers can make reasonable assumptions. Also, break-even analysis ignores demand for a product so it is often desirable for a manager to use various forecasting models in conjunction with a break-even analysis. | <urn:uuid:cefd6261-0d5c-4e57-83e8-08a58d32024c> | CC-MAIN-2017-04 | http://dssresources.com/subscriber/password/dssbookhypertext/ch9/page9.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00252-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.858101 | 261 | 3.578125 | 4 |
I'm wondering: there's an impedance mismatch between the set-level processing of DB2 and the record-level processing of COBOL. That's why we use CURSORs, right? Would it be possible to UPDATE a table in one blow, affecting several rows, in a program? If it's possible, is it advisable resource-wise? Any related reading material would be most appreciated.
Yes, you do need a CURSOR if you want to update a table with multiple rows. You UPDATE the rows one at a time.
No, you do not have to use a cursor to update multiple rows. You can update multiple (selected) rows in a table by specifying a WHERE and you can update every row in a table if you do not specify a WHERE.
You update one row at a time when you are updating a row that was read by a cursor. | <urn:uuid:dc134255-fe6a-43fb-8df5-932f9bb07643> | CC-MAIN-2017-04 | http://ibmmainframes.com/about24316.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00096-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.890091 | 201 | 2.75 | 3 |
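To make the distinction concrete, here is a small sketch using SQLite from Python in place of DB2/COBOL (the table and values are invented, but the SQL is equivalent in spirit): a single searched UPDATE with a WHERE clause changes every matching row in one statement, with no cursor involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [(1, "IT", 1000.0), (2, "IT", 2000.0), (3, "HR", 3000.0)],
)

# One set-level (searched) UPDATE: every row matching the WHERE clause
# changes in a single statement -- no cursor loop required.
cur = conn.execute("UPDATE emp SET salary = salary + 100 WHERE dept = 'IT'")
print(cur.rowcount)  # 2
```

A cursor only becomes necessary for the positioned form (UPDATE ... WHERE CURRENT OF), where each row must be fetched and inspected before being changed.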
Computer crime is not limited to the United States. I frequently hear about computer crime issues from faraway places like Europe, Asia and Australia. Technology-related crimes are a worldwide problem.
The Information Age has arrived and, of necessity, it has helped to bring international law enforcement agencies closer together. Because of the popularity and international recognition of the Internet, computer crimes recognize no boundaries. The Internet has changed the way the world does business, and crime usually follows the money.
As recently as five years ago, the Internet was used primarily for communications between universities, research facilities and technology firms. Today, the Internet plays an important role in international commerce, a role that becomes more important as each day passes. Expensive long-distance phone calls have been replaced by e-mail, and advertising has expanded from the pages of magazines at the newsstand to Web pages.
This transformation has taken place quickly, and keeping up has been difficult for law enforcement agencies. In years past, state and local law enforcement agencies' concerns were limited to crimes committed in their jurisdictions. When a crime crossed state lines or international borders, federal law enforcement officials were called in to assist. But it doesn't work that way anymore.
Today, state, county and municipal law enforcement agencies have to deal with crimes committed against their residents from locations throughout the world. Financial frauds are committed daily over the Internet, and children are fair game for sexual predators in electronic chat rooms. The same individuals often exchange child pornography via e-mail and Internet news groups. Such crimes are more often than not investigated by local law enforcement agencies. As a result, when interstate or international boundaries are involved, cooperation between jurisdictions becomes vitally important.
Tools of the Bad Trade
In recent years, several new technologies have evolved into tools for criminals. In the old days, the crooks used computers to track their ill-gotten gains and to record their memoirs. A few brave hackers tried to crack the security on government computer networks. Now, state-of-the-art desktop publishing is one of the primary tools of counterfeiters. Stolen goods are fenced over the Internet. Computer records used in embezzlement are now destroyed when the crime gets close to discovery. Commercial software is pirated from manufacturers and sold as the genuine product in makeshift retail stores or from clandestine Web sites. Stolen or defective computer chips are also offered over the Internet.
Innovative new forensic software tools and methodologies are helping law enforcement agencies deal with online crime, but without the international cooperation of law enforcement agencies, technology-related crimes can be difficult or impossible to investigate.
I recently conducted a three-day training session for the Singapore Police Force. The topic was computer forensics, and after spending a week with its Computer Crime Unit (CCU), I quickly learned that the officers are dealing with the same types of problems we have in the United States. I also learned that cops are basically the same everywhere. The Singapore students were top-notch detectives who also knew a lot about computers and computer crime.
My training partner, Joe Enders, and I conducted the training session at the CCU office, where we found a number of similarities between computer crime units in the United States and in Singapore. However, in some ways, the Singapore police force is ahead of many U.S. law enforcement agencies, and could easily be used as a role model for the creation of a computer crime unit.
Singapore is small -- mostly just one city of 2 million residents. When the CCU is compared to U.S. law enforcement agencies of similar size, some interesting differences are noted. Obviously, differences exist between our laws and theirs, e.g., there are no privacy laws in Singapore. Its criminal-justice system does not involve the jury process. Criminal trials are decided by a judge, and the legal system is swift. Given this, you would think that the Singapore police would devote fewer resources to computer crime than U.S. departments. We were surprised to find the opposite to be true.
Management has devoted considerable resources to its computer crime unit, and considers computer skills and education when recruiting personnel for the unit. Though the unit is only a year old, I was impressed because it is staffed with several experienced investigators, and the unit already has some solid case experience.
It has been my experience that many computer crime units in the United States were created without a management plan tied to technology issues. Some computer crime units got their start with minimal management support and funding. They were created out of necessity, as a reaction to the reporting of a technology crime in their jurisdiction. These units have typically been staffed with part-time personnel who have other primary duties. In this situation, training is usually not a priority, and it is difficult for computer specialists to stay current with advancing technologies.
This is not always the case. I know of some computer crime units in the United States that are cutting-edge, and they are properly staffed and funded. The Oregon State Police Computer Crime Unit is one that stands out.
The main point about the computer crime unit in Singapore is that it was formed as part of a master plan created by management, who made it a priority to adequately staff the unit with seasoned full-time investigators. We also found it interesting that the percentage of staffing in Singapore's CCU appeared to be almost double that of most computer crime units in the United States.
U.S. law enforcement management might want to take notes on creating a computer crime unit. The Singapore police management team visited several computer crime units and private firms throughout the world before making any final decisions. The management approach was smart, well-planned and adequately funded. I strongly recommend this team as a point of reference for any agency contemplating the creation of a computer crime unit. In exchange, we might be able to help them deal with computer crime issues of mutual interest.
Problems associated with computer crimes in Singapore are similar to those in the United States. They have a difficult time working undercover on the Internet. The reason for this is that crooks are aware of jurisdictional boundaries. They know that the odds of getting caught are higher if they deal over the Internet with people in their own country. International crimes committed over the Internet provide a greater level of safety. This situation makes it particularly difficult for law enforcement in Singapore, where there are very few Internet service providers, and all of them are easily identified as being Singapore-based. A smart crook will only communicate with parties in another country.
As a result, Singapore police often seek the assistance of law enforcement agencies in other parts of the world to follow up Internet leads. The same problem exists in Hawaii, because the few Hawaiian Internet providers are also easily recognized.
It will take time, but cooperation between law enforcement agencies on a worldwide basis is very important in dealing with computer-related crime.
Michael R. Anderson, who retired from the IRS's Criminal Investigation Division in 1996, is internationally recognized in the fields of forensic computer science and artificial intelligence. Anderson pioneered the development of federal and international training courses that have evolved into the standards used by law enforcement agencies worldwide in the processing of computer evidence.
He has also helped law enforcement agencies in 16 countries process evidence and prevent computer theft. He continues to provide software free of charge to law enforcement and the military. He is currently a consultant. P.O. Box 929, Gresham, OR 97030.
This is the first in a series of articles exploring the security aspects of XML technologies.
Anyone with an interest in security and XML, but mostly architects and developers working with XML technologies.
Let's start off with defining our terms.
Security is about maintaining the confidentiality, integrity, and availability of data and the systems that process them. Collectively, these are known as the CIA triad.
Confidentiality refers to efforts made to prevent unauthorized disclosure of information to those who do not have the need, or right, to see it. Without confidentiality there can be no privacy, which is the ability to selectively reveal information about oneself.
Integrity refers to efforts made to prevent unauthorized or improper modification of systems and information. It also refers to the amount of trust that can be placed in a system and the accuracy of information within that system.
Availability refers to efforts made to prevent disruption of service and productivity.
There are two different ways of looking at the CIA properties of an information system:
- Information security, or InfoSec, focuses on features whose sole purpose is enforcing some aspect of security. The most important of those features are authentication, authorization, and auditing. These security features make heavy use of cryptography.
- Application security, or AppSec, focuses on regular features that are designed and implemented in such a way that they do not compromise security. This is best realized as part of a Security Development Lifecycle (SDL) like the one we have at EMC.
Authentication is the act of verifying the credentials of an entity, like a user or an application. Credentials can be usernames, passwords, fingerprints, SecurID passcodes, etc.
Authorization is the act of granting access to a specific resource. This can be an entire application, or a much smaller piece of functionality. Authorization is also referred to as access control. The de facto standard for authorization is eXtensible Access Control Markup Language (XACML).
Auditing is the act of storing information about who did what when. Auditing is important for non-repudiation.
Cryptography is the practice and study of techniques for secure communication in the presence of malicious third parties. Cryptography is fundamental to security.
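To ground two of the definitions above, authentication and cryptography, here is a minimal, hedged sketch of credential verification: the server stores a salted PBKDF2 hash rather than the password itself and compares digests in constant time. The function names are illustrative, not from any standard API.

```python
import hashlib
import hmac
import os

def enroll(password: str):
    """Create a salted hash record for a new credential."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Verify presented credentials against the stored record."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = enroll("correct horse")
print(authenticate("correct horse", salt, digest))  # True
print(authenticate("wrong guess", salt, digest))    # False
```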
XML and Security
Every technology has security implications, and so does XML. From the InfoSec perspective we see some XML-based standards for authentication and authorization. From the AppSec perspective we see some potential vulnerabilities resulting from the (improper) use of XML.
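As one concrete example from the AppSec side, entity expansion ("billion laughs") attacks rely on a DTD to define the exploding entities. A common mitigation, the approach taken by hardened XML parsing libraries, is simply to refuse any untrusted document that declares one. The helper below is an illustrative sketch, not a real library API:

```python
import xml.etree.ElementTree as ET

def parse_untrusted(xml_text: str) -> ET.Element:
    # Entity expansion needs a DTD to define the entities it blows up,
    # so rejecting DOCTYPE declarations up front neutralizes the attack.
    if "<!DOCTYPE" in xml_text:
        raise ValueError("refusing XML that declares a DTD")
    return ET.fromstring(xml_text)

root = parse_untrusted("<note><to>Alice</to></note>")
print(root.tag)  # note
```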
Below is a list of articles in the XML and Security series:
- XML and Security: Introduction to XACML - Access Control Policies in XML
- XML and Security: Real world examples of XACML security policies
- XML and Security: Implementing an XACML PEP in Java
- XML and Security: XQuery Injection
- XML and Security: SAML for Authentication
- XML and Security: Entity Expansion Attacks
- XML and Security: JAXB unmarshalling
- XML and Security: XML Signatures
- XML and Security: XML Encryption
Stay tuned for more articles and, more importantly, stay secure! | <urn:uuid:a4e49283-03ca-4a14-b7f9-01b4e7a92ab2> | CC-MAIN-2017-04 | https://community.emc.com/docs/DOC-19025 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00454-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927597 | 648 | 3.328125 | 3 |
(1-Nov-2011) Providing inner-city students with portable computers can make them attractive targets for crime.
That's the problem that Oakland Unified School District (OUSD) officials faced as they worked to improve technology access for the California district's 40,000 students.
Fortunately for Oakland and other districts both urban and suburban, effective theft-deterrence solutions — including physical etchings, tracking software and insurance — abound. "While commercial enterprises generally concern themselves more with the loss of data, for educational enterprises, the biggest risk of loss is the device itself," says James Quin, lead research analyst for Ontario-based Info-Tech Research Group.
Although device loss was a concern in Oakland, the devastating consequences of inadvertently creating crime victims loomed even larger. "We didn't want our kids to suffer from an attack, or even live in fear of an attack, just because they were carrying a computer back and forth to school," explains Ann Kruze, a district instructional technologist.
While it's unlikely that you will soon be able to put out a fire by pointing your iPhone speaker system at it, this video showcases how "acoustic suppression" techniques can extinguish flames:
From this DARPA Web site, researchers explain that since 2008, the government has been trying to "develop a fundamental understanding of fire with the aim of transforming approaches to firefighting."
Because fire in enclosed environments, including ship holds, aircraft cockpits and ground vehicles is so destructive, DARPA was looking for new ways to fight fire.
"Traditional fire-suppression technologies focus largely on disrupting the chemical reactions involved in combustion. However, from a physics perspective, flames are cold plasmas. DARPA theorized that by using physics techniques rather than combustion chemistry, it might be possible to manipulate and extinguish flames. To achieve this, new research was required to understand and quantify the interaction of electromagnetic and acoustic waves with the plasma in a flame."
As the video shows, "performers succeeded in demonstrating the ability to suppress, extinguish and manipulate small flames locally using electric and acoustic suppression techniques." DARPA says it's still not clear how to scale these methods for defense applications or purposes.
Also, no word on whether "Freebird" or Lady Gaga is better suited for putting out flames.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
The first try is always just the first attempt
Cambridge, UK, July 20, 2000 - Kaspersky Lab Int., an international anti-virus software development company, announces the discovery of the first computer virus that affects the world's most popular PC-based design software, AutoCAD (www.autocad.com), originally developed by the Autodesk company. The ACAD.Star virus was published on the 18th of July on the Internet at one of the web sites dedicated to virus development by a hacker named AntiState. At the moment, Kaspersky Lab has not received any reports of this virus being "in-the-wild".
AutoCAD is widely used throughout the world for architectural design, construction, surveying, engineering, cartography, and movie and computer games production etc. It became possible for computer viruses to affect these systems after Autodesk has recently licensed from Microsoft its macro-programming language, Visual Basic Application (VBA). Exactly in this language, macro-viruses are created (now an average of 70% of all virus infections) for the popular office applications like Word, Excel, Access.
ACAD.Star is an extremely primitive macro-virus, 568 bytes in length, written in the VBA macro language, and able to affect only systems running AutoCAD 2000. It is primitive not only because of its length, but also because of its functionality: the virus's author made some fatal mistakes that nearly disable the virus's ability to proliferate under normal operating conditions. Kaspersky Lab anti-virus experts spent a lot of time and effort producing a number of virus strains good enough for a comprehensive analysis. It is nearly impossible for an ordinary AutoCAD user to repeat this "gest" and accidentally create the set of special conditions that allows the virus to propagate. "We classify this case as a "first try," which, as is known, are not always successful," said Eugene Kaspersky, Head of Anti-Virus Research at Kaspersky Lab. "However, the discovery of this virus demonstrates security breaches in AutoCAD, which used to be virusless until quite recently. We consider that these vulnerabilities could be further exploited by other AutoCAD viruses - more vital and even dangerous."
At the beginning of 2000, Kaspersky Lab experts published an article (available on the web site here) describing their view of the future of macro-viruses, which will likely create a feeling of déjà vu. It reads: "By 1999, more than 100 software manufacturers had purchased a license to use the VBA macro language in their software. This means that macro viruses will be able to migrate seamlessly from MS Office to new applications (either in use or still to come)." There is no need to comment that the recent virus outbreak confirms the above forecast.
Protection against the ACAD.Star virus was added to the AntiViral Toolkit Pro (AVP) daily update on July 18. However, we recommend that you set up your protection with a universal defence against all types of macro-viruses, including those for AutoCAD. AVP Office Guard, which is based on the breakthrough principles of behaviour blocking and is available in the AVP for MS Office 2000 package, gives you a true 100% guarantee of full control over all macro-viruses on a protected system.
You can evaluate AVP for MS Office 2000 by downloading it from the Kaspersky Lab web site at www.avp2000.com.
To purchase AntiViral Toolkit Pro, please visit our online store or contact your nearest Kaspersky Lab distributor. | <urn:uuid:13b23065-73de-422e-8f12-9c894e8102e7> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2000/ACAD_Star_Computer_Viruses_Invade_AutoCAD | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00115-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942771 | 752 | 2.90625 | 3 |
NASA has published a new video from its Swift Satellite spotting a black hole devouring a star. Here's the video, which makes it look like it came from a Hollywood studio:
Rather than interpret what NASA says is going on, I'll just include their commentary in the YouTube description:
"In late March 2011, NASA's Swift satellite alerted astronomers to intense and unusual high-energy flares from a new source in the constellation Draco. They soon realized that the source, which is now known as Swift J1644+57, was the result of a truly extraordinary event -- the awakening of a distant galaxy's dormant black hole as it shredded and consumed a star. The galaxy is so far away that the radiation from the blast has traveled 3.9 billion years before reaching Earth.
"Most galaxies, including our own, possess a central supersized black hole weighing millions of times the sun's mass. According to the new studies, the black hole in the galaxy hosting Swift J1644+57 may be twice the mass of the four-million-solar-mass black hole lurking at the center of our own Milky Way galaxy. As a star falls toward a black hole, it is ripped apart by intense tides. The gas is corralled into a disk that swirls around the black hole and becomes rapidly heated to temperatures of millions of degrees.
"The innermost gas in the disk spirals toward the black hole, where rapid motion and magnetism creates dual, oppositely directed "funnels" through which some particles may escape. Particle jets driving matter at velocities greater than 80-90 percent the speed of light form along the black hole's spin axis. In the case of Swift J1644+57, one of these jets happened to point straight at Earth.
"Theoretical studies of tidally disrupted stars suggested that they would appear as flares at optical and ultraviolet energies. The brightness and energy of a black hole's jet is greatly enhanced when viewed head-on. The phenomenon, called relativistic beaming, explains why Swift J1644+57 was seen at X-ray energies and appeared so strikingly luminous.
"When first detected on March 28, the flares were initially assumed to signal a gamma-ray burst, one of the nearly daily short blasts of high-energy radiation often associated with the death of a massive star and the birth of a black hole in the distant universe. But as the emission continued to brighten and flare, astronomers realized that the most plausible explanation was the tidal disruption of a sun-like star seen as beamed emission.
Ummm, wow. Since apparently this happened 3.9 billion years ago, I'm assuming that the video is a simulation of what the event looked like based on the satellite readings - I don't think NASA satellites are that good in terms of their cameras (are they?).
Since I posted on the subject of passing the CCIE Voice exam, I have had a number of requests to blog some more on some of the individual CCIE Voice exam objectives. So, in response to these requests, I'll be posting on these objectives starting with telephony protocols. If you are currently studying for your CCVP exams rather than CCIE Voice, you'll also find a lot of useful information here.
There are a number of telephony protocols including the Session Initiation Protocol (SIP), Media Gateway Control Protocol (MGCP), H.323, and the Skinny Client Control Protocol (SCCP). I am going to start this series of blog posts by taking a look at SIP.
SIP is a peer-to-peer, application-layer, text-based control signalling protocol that is used to set up, modify, and terminate multimedia sessions. It builds on elements of a number of protocols, including SMTP, HTTP, and SDP, and can be transported over a number of protocols such as UDP and TCP (usually on port 5060).
While SIP is the protocol used to setup the sessions, the Real Time Transport Protocol (RTP) is typically used to carry the actual multimedia traffic. SIP is supported on a number of Cisco products such as Unified Communications Manager (CallManager).
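Because SIP is text-based and borrows its look from HTTP, a request is easy to read by hand. Below is a minimal, hand-built INVITE sketch; the URIs, tag, and branch values are made up, and a real SIP stack would add more headers (such as Contact and Max-Forwards):

```python
def build_invite(from_uri: str, to_uri: str, call_id: str) -> str:
    # Start line plus a minimal set of headers, each CRLF-terminated
    lines = [
        f"INVITE {to_uri} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds",
        f"From: <{from_uri}>;tag=1928301774",
        f"To: <{to_uri}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("sip:alice@example.com", "sip:bob@example.org", "a84b4c76e66710")
print(msg.splitlines()[0])  # INVITE sip:bob@example.org SIP/2.0
```

In a real call the message body would carry an SDP description of the media session, and the answering UAS would reply with responses such as 180 Ringing and 200 OK.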
SIP can be used for a wide range of applications, including (but by no means limited to) setting up voice and video calls, conferencing, and instant messaging. SIP also has the capability to determine the location of a user and that user's willingness and ability to communicate.
There are a number of elements that may exist in a SIP network:
User Agent (UA): a UA is a SIP endpoint, such as a SIP IP phone, which can function as either a User Agent Client (UAC) or a User Agent Server (UAS).
UAC: this is the logical role that a UA performs when it is initiating a SIP request. A typical reason for initiating a SIP request would be sending an INVITE request to set up a SIP session for a voice or video call.
UAS: a UAS is a logical function that responds to SIP requests, including INVITE requests.
Redirect Server: as the name suggests, this is a SIP server that can redirect clients to alternative destination addresses.
Proxy Server: the primary function of SIP proxy servers is to provide routing for SIP requests between UACs and UASs, but it can also provide other functions such as policy enforcement, authentication and authorization of users, and providing features.
Registrar Server: this server type allows users to register their current location. This information is stored in a location service database.
Location Service: this is built by the registrar server, and includes bindings of users' globally reachable public addresses (Address of Record [AOR]) and their current contact addresses. This service is used by proxy servers and redirect servers when they need to find users' possible current locations.
Back-to-Back User Agent (B2BUA): this is simply a concatenation of the functionality of a UAC and a UAS. So, the B2BUA receives SIP requests, and re-originates these requests.
Presence Server: a presence server is a device that tracks the availability and willingness of parties to communicate, and distributes this information to other interested parties. Parties whose availability and willingness to communicate are tracked are known as presentities; parties that are interested in knowing about presentities' availability and willingness to communicate are known as watchers.
Many of the SIP network elements described above are roles or logical functions, rather than necessarily being distinct physical devices. So, for example, the Cisco SIP Proxy server product functions as both a proxy server and a registrar server, while an IOS router running SIP Survivable Remote Site Telephony (SRST) can function as both a SIP registrar server and a redirect server/B2BUA.
Next time, I'll be taking a look at how these SIP network elements communicate and interoperate. | <urn:uuid:5f60216c-e2e2-447a-bcda-24f0629f23c5> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2344006/cisco-subnet/ccie-ccvp-voice--understanding-telephony-protocols.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00509-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924887 | 858 | 2.90625 | 3 |
The same concepts that have led to open source rockin' the software world have spawned the beginning of a revolution in biotech. An organization called Biofab, funded by the NSF and run through teams at Stanford and Berkeley, is applying open development approaches to creating building blocks (BioBricks™ from the BioBricks Foundation) for the bio products of the future. Now, the first of those building blocks (based on E. coli) are just rolling off the production line. This, according to the organizers, represents "a new paradigm for biological research."
At its basis, Free and Open Source Software (FOSS) is about sharing and collaborating. The purpose of the open source licenses of which RMS (as Richard Stallman is known to fellow hackers) conceived was to ensure that users of software could have the freedom to use, modify and share the software as they wished. What has evolved is an enormous stock of freely available building blocks (about half a million by Black Duck's count) that make for faster, better, cheaper creation of software.
This goal is behind Biofab, to create biological building blocks that can be assembled into an unimaginable plethora of applications. Somewhat in contrast to the philosophical grassroots motivations that have gotten software development to this point, it's being driven by economic motivations, and there's some real money behind the project from the outset. It can cost tens of millions of dollars to create a single microbe that can do useful work because the current process is like creating a software application using machine language.
The geniuses behind Biofab are clearly modeling much of what they do on the FOSS model. Stanford's Drew Endy, Biofab director, talks about what they are building as a "biological operating system." In fact the name (Biofab International Open Facility Advancing Biotechnology) is oddly recursive like GNU (which stands for GNU's Not Unix). OK, but biology and software? You can't download even microscopic bugs over the internet, can you? Well, since Watson and Crick discovered the double helix, you kinda can. Sequences of genes in nucleic acids are known as "genetic codes" and are basically the programs that run the cells with which they are associated, and the BioBricks Foundation describes a BioBrick as follows: "Each distinct BioBrick™ standard biological part is a nucleic acid-encoded molecular biological function (e.g., turn on/off gene expression), along with the associated information defining and describing the part."
Sure sounds like software to me. In fact, it's enough like software that the BioBricks folks figured out they needed, and developed, a very OSI-like license called The BioBrick Public Agreement. I gave it a quick read and it reads very much like a software license, a fairly permissive one with no reciprocal clauses that I could see. (By the way, it would violate the OSI requirement of unrestricted use with a "do no harm" clause, but that seems like a good thing given the bioweapon potential.)
The microbiology community sits where the software community was a few decades ago: a few big corporations with a lot at stake in keeping technology proprietary, and a grassroots movement to open and shake things up a bit. My guess is that with the analogous trail having been blazed in the software world, things will unfold more quickly in biology. I'll be monitoring from the sidelines to see how it all plays out.
Near the end of composing this blog, I stumbled across an outstanding book, Biobazaar: The Open Source Revolution and Biotechnology by biologist/lawyer Janet Hope. The author delves deeply into what she calls the "irresistible analogy" between open source software and microbiology. Biology aside, it is well worth the read just for the concise history of open source she provides in Chapter One and the detailed treatment of open source licenses in Chapter Five.
Digital citizenship empowers students to make thoughtful decisions and develop a sound digital foundation for the rest of their lives. It’s a holistic and positive approach to helping students learn how to be safe and secure, as well as smart and effective participants in a digital world. That means helping them understand their rights and responsibilities, recognize the benefits and risks, and realize the personal and ethical implications of their actions.
Please share the new InCtrl Factsheet and help spread the word about Digital Citizenship!
If you have any questions about digital citizenship or InCtrl, please leave a comment below or you can contact me via email, Twitter (@KatCableClassrm) or post a comment to the Cable in the Classroom Facebook page. | <urn:uuid:1bf1a5c4-28f1-4acb-b96c-2e96b036b0c1> | CC-MAIN-2017-04 | https://www.ncta.com/platform/industry-news/spread-the-word-about-digital-citizenship/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921653 | 147 | 2.859375 | 3 |
NASA is looking at developing a public competition that would pit competitors against one another to develop fast, powerful computing capabilities that would help support advanced applications.
According to NASA, despite tremendous progress made in the past few decades, computational fluid dynamics (CFD) tools in particular are too slow for simulation of complex geometry flows, particularly those involving flow separation and combustion applications. To enable high-fidelity CFD for multi-disciplinary analysis and design, the speed of computation must be increased by orders of magnitude, the space agency said.
“Opportunities exist to reduce time to solution by orders of magnitude by exploiting algorithmic developments in such areas as grid adaptation, higher-order methods and efficient solution techniques for high performance computing hardware. A potential prize challenge will require that speed gains are to be achieved primarily by algorithmic enhancements, not by hardware (i.e., scaling to larger number of cores),” NASA stated.
The challenge, if it is actually made official, will provide selected base geometries and flow conditions, along with the time it takes to perform the simulations using NASA's FUN3D code. NASA said FUN3D was developed in the late 1980s as a research code; its original purpose was to study existing algorithms and to develop new algorithms for unstructured-grid fluid dynamic simulations.
The idea is that a problem that now takes 3,000 wall-clock hours on 3,000 cores, for example, will reduce to 30 or 3 hours for a 100x or 1,000x speed-up, respectively. "These would be considered to be gains and, thus, Fast Computing capability will allow high-fidelity multidisciplinary analysis to be used in early stages of vehicle development, resulting in novel configurations that are energy efficient and environment friendly toward research and development objectives," NASA stated.
The prize for demonstrating a LEVEL I - 100x increase is planned to be $225,000 and the purse for demonstrating a LEVEL II – a 1,000x speed increase is planned to be $500,000. Up to 20% of the prize purse may be used to reward competitors for successful completion of a qualification round for both LEVEL I, and, LEVEL II, NASA said.
For now, NASA has issued a Request For Information to determine interest in developing such a fast computing challenge.
PNRP, the Peer Name Resolution Protocol, is a new protocol from Microsoft. It is one of the first technologies that will change the way we think about name resolution in computer networking, and it may well be the next DNS-like technology. PNRP is the new DNS, but there are so many differences between the two that it deserves an article on this blog.
Just as a reminder, in a few simple words: DNS is a technology that enables us to type a domain name in the browser and leaves it to the Domain Name System to translate that name to the IP address of the server where the web page is published.
As the whole world steps toward IPv6 implementation in the coming years, there are technologies and future services that will not function at their best using DNS. Microsoft was one of the first to develop an answer: a new, decentralized technology that relies on neighboring computers for name resolution and relies completely on IPv6 addressing. That answer is the Peer Name Resolution Protocol.
DNS depends on a hierarchical naming structure, while PNRP depends on peer systems to resolve a computer system's location. Mainly, PNRP is a referral system that performs lookups on the basis of the data it is familiar with.
Here is a simple example. Suppose you need to find Computer 1 and you are close to Computers 2 and 3. Your system first asks Computer 2 whether it knows Computer 1. If the response is positive, a link to Computer 1 is provided to you. If the reply is negative, the system asks Computer 3 whether it knows Computer 1, using the same method as with Computer 2. If neither computer knows Computer 1, the request is passed on to other computers close to the system until it finds one that is familiar with Computer 1.
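The referral walk just described can be sketched as a toy simulation: each peer knows only its neighbours, and the request is forwarded until some peer recognises the target name. The names and topology below are invented purely for illustration.

```python
# Toy simulation of the referral walk: each peer knows only its
# neighbours, and the request is forwarded until some peer
# recognises the target name. Topology invented for illustration.
from collections import deque

def pnrp_lookup(start, target, knows):
    """Breadth-first referral walk; returns the peer that knows the target."""
    seen = {start}
    queue = deque([start])
    while queue:
        peer = queue.popleft()
        if target in knows.get(peer, ()):
            return peer            # this peer can hand back a link
        for neighbour in knows.get(peer, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return None                    # nobody is familiar with the target

knows = {
    "You":       ["Computer2", "Computer3"],
    "Computer2": ["Computer4"],
    "Computer3": ["Computer4"],
    "Computer4": ["Computer1"],
}
print(pnrp_lookup("You", "Computer1", knows))  # Computer4
```

The real protocol is far more sophisticated (it hashes names into a numeric space and routes toward the closest match), but the forwarding idea is the same.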
There are number of ways in which PNRP is different from the DNS service:
It is a distributed naming system that does not depend on a central or main server to find objects. It is essentially serverless, although at times servers may still be needed to help establish the name resolution process.
A DNS server can host only a small number of names, and in order to locate other names it depends on other DNS servers over which it holds no authority, while PNRP can easily scale to billions of names. Many computers are able to host the same name, and multiple paths are provided to that name.
Because it depends on clients rather than servers and is completely distributed, PNRP tolerates faults very well. It does not depend on servers, but it can ask servers about their neighbors; in that case the server gives the same answer a PC would, so it is not really acting in a server role. Name publication is quick, free, and, unlike DNS, needs no administrative interference.
In PNRP, name updates take place in real time, unlike DNS, which depends solely on caching to improve performance. Because of this, PNRP does not return stale addresses the way a DNS server (in particular an older, non-dynamic DNS server) can.
PNRP supports the naming of services as well as computers. This is because a PNRP name includes an address, a port, and possibly a payload, such as the function of the service.
PNRP names can be protected with digital signatures. This protection ensures that malicious users cannot replace or spoof them with fake names.
In order to provide resolution services, PNRP depends on the concept of clouds. Two types of cloud can exist. The first is the global cloud, which includes the whole IPv6 global address scope, encompassing the entire Internet. The second is the link-local cloud, which is based on the link-local IPv6 address scope and usually represents a single subnet. There may be several link-local clouds, but there is only one global cloud.
The world has not yet completely moved to IPv6, and in the same way it has not moved to PNRP; it still relies on DNS for name resolution services. However, PNRP is a new and essential technology that will have a key impact on how the Internet operates as organizations start adopting IPv6.
The PNRP server components are now included in Windows Server 2008 R2 as an add-on feature and in Windows Server 2012 as a standard feature. Windows machines running Vista and Windows 7 support PNRP as well.
Simply put, an access control system is a process put in place by a company to increase their security by allowing or denying individuals access to certain areas, be they physical (such as doors) or digital (such as certain computer files). A proper access control system keeps track of who accessed certain files or areas and at what times this admittance occurred. In order for a system such as this to work to its full potential, great care must be taken to ensure that it works with all of the other security measures already in place. This includes having a system that is fully functional with timecards, ID cards, and badges, if they are used. When all the components work in harmony, a proper system is impossible to beat. | <urn:uuid:a9190a56-19d6-4c1b-b72b-ba6ab58ba5e7> | CC-MAIN-2017-04 | http://www.cicaccess.com/how-does-access-control-work-apr.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00197-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966557 | 146 | 2.859375 | 3 |
Cybersecurity threats to medical devices are a growing concern. The exploitation of cybersecurity vulnerabilities presents a potential risk to the safety and effectiveness of medical devices. While manufacturers can incorporate controls in the design of a product to help prevent these risks, it is essential that manufacturers also consider improvements during maintenance of devices, as the evolving nature of cyber threats means risks may arise throughout a device’s entire lifecycle.
The U.S. Food and Drug Administration issued a draft guidance outlining important steps medical device manufacturers should take to continually address cybersecurity risks to keep patients safe and better protect the public health.
The draft guidance details the agency’s recommendations for monitoring, identifying and addressing cybersecurity vulnerabilities in medical devices once they have entered the market.
“All medical devices that use software and are connected to hospital and health care organizations’ networks have vulnerabilities—some we can proactively protect against, while others require vigilant monitoring and timely remediation,” said Suzanne Schwartz, M.D., M.B.A., associate director for science and strategic partnerships and acting director of emergency preparedness/operations and medical countermeasures in the FDA’s Center for Devices and Radiological Health.
The draft guidance outlines postmarket recommendations for medical device manufacturers, including the need to proactively plan for and to assess cybersecurity vulnerabilities—consistent with the FDA’s Quality System Regulation. It also addresses the importance of information sharing via participation in an Information Sharing Analysis Organization (ISAO), a collaborative group in which public and private-sector members share cybersecurity information.
The draft guidance recommends that manufacturers should implement a structured and systematic comprehensive cybersecurity risk management program and respond in a timely fashion to identified vulnerabilities. Critical components of such a program should include:
- Applying the 2014 NIST voluntary Framework for Improving Critical Infrastructure Cybersecurity, which includes the core principles of “Identify, Protect, Detect, Respond and Recover;”
- Monitoring cybersecurity information sources for identification and detection of cybersecurity vulnerabilities and risk;
- Understanding, assessing and detecting presence and impact of a vulnerability;
- Establishing and communicating processes for vulnerability intake and handling;
- Clearly defining essential clinical performance to develop mitigations that protect, respond and recover from the cybersecurity risk;
- Adopting a coordinated vulnerability disclosure policy and practice; and
- Deploying mitigations that address cybersecurity risk early and prior to exploitation.
For the majority of cases, actions taken by manufacturers to address cybersecurity vulnerabilities and exploits are considered “cybersecurity routine updates or patches,” for which the FDA does not require advance notification, additional premarket review or reporting under its regulations. For a small subset of cybersecurity vulnerabilities and exploits that may compromise the essential clinical performance of a device and present a reasonable probability of serious adverse health consequences or death, the FDA would require medical device manufacturers to notify the agency.
The draft guidance indicates that in cases where the vulnerability is quickly addressed in a way that sufficiently reduces the risk of harm to patients, the FDA does not intend to enforce urgent reporting of the vulnerability to the agency if certain conditions are met. These conditions include: there are no serious adverse events or deaths associated with the vulnerability; within 30 days of learning of the vulnerability, the manufacturer notifies users and implements changes that reduce the risk to an acceptable level; and the manufacturer is a participating member of an ISAO and reports the vulnerability, its assessment and remediation to the ISAO.
“The FDA is encouraging medical device manufacturers to take a proactive approach to cybersecurity management of their medical devices,” said Schwartz. “Only when we work collaboratively and openly in a trusted environment, will we be able to best protect patient safety and stay ahead of cybersecurity threats.” | <urn:uuid:2d5deee0-d8e6-4691-9c8c-e5191b8a9490> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2016/01/19/cybersecurity-recommendations-for-medical-device-manufacturers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929378 | 764 | 2.5625 | 3 |
Since 2007, January 28th has marked Data Privacy Day (or Data Protection Day in Europe), the annual awareness day to promote privacy and data protection best practices. The day is recognized in the United States, Canada, the UK and 26 other European countries through a number of initiatives focused on raising awareness among businesses and personal users about the importance of protecting the privacy of their personal information, particularly in the context of social networking.
In 2013, 73% of the UK population and 87% of the US population accessed the internet every day, and in our workplaces more and more time is spent online. It’s also now recognized that social media is the number one online office activity for workers, with 67% using social media at work multiple times a day.
The rise of social media has undoubtedly had an impact on business, not only in terms of employee productivity (both positive and negative), but also the way in which organizations approach IT security. The internet is the greatest window of opportunity for hackers to enter the corporate network, with social media accounts now an increasingly common weak spot.
In recent weeks and months, we’ve seen a surge in the number of Facebook scams, designed to redirect you to another page and install malware, unbeknownst to the user. We’ve also witnessed a number of fake Facebook login scams where users click to share a post but are asked to login again, allowing hackers to steal username and password information and harvest personal details.
These types of scams are often very targeted. They can be used against specific organisations or groups of people on a range of different social media sites. They can even be used on LinkedIn to send corrupt CVs or job offers. All of these scams and threats can be compounded if they infect a work environment as malware can make its way into the corporate system.
There are, of course, ways to mitigate the threat and keep your personal details, and those of your employer, away from the prying eyes of a hacker. Regularly changing your password, making sure it is alphanumeric, or simply checking that the site you're viewing shows a padlock or similar symbol in the corner of the page can all help.
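As a tiny illustration of the password advice, the sketch below checks for an alphanumeric mix and a minimum length. The length threshold is an assumption for the example; real password policies should go much further than this.

```python
# Minimal sketch of the password hygiene advice above: require a
# mix of letters and digits and a reasonable length. The threshold
# is an assumption; real policies should demand more.

def password_ok(password: str, min_length: int = 10) -> bool:
    has_letter = any(ch.isalpha() for ch in password)
    has_digit = any(ch.isdigit() for ch in password)
    return len(password) >= min_length and has_letter and has_digit

print(password_ok("correcthorse"))    # False: no digits
print(password_ok("c0rrecth0rse42"))  # True
```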
Companies should consider a defense in depth approach to their IT security if they are to protect their IT system from these types of attack and keep their data secure. Evidence suggests that to combat increasingly complex attack vectors, organizations need to adopt a layered strategy that prioritizes high-impact solutions, such as privilege management, application whitelisting and sandboxing to contain threats and provide additional peace of mind.
Find out more about how to secure your data with Defendpoint and make data privacy a key priority from today. | <urn:uuid:069b9d6b-033e-49ac-8b15-cce6dcceadcc> | CC-MAIN-2017-04 | https://blog.avecto.com/2015/01/defending-your-business-this-data-privacy-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00555-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943616 | 548 | 2.6875 | 3 |
There’s no disputing that the Internet has had massive effects on international trade, beyond what anyone could have predicted. There was a time when buying a new shovel meant a trip to the local hardware store a few minutes away, but that’s started to change in recent years. These days, people are (arguably) more likely to make such a purchase online, and this is true of many other services as well.
Long Distance Relationships on the Internet Pose “Communication Issues”
The problem is that sometimes the services one wants to consume can be in a geographically distant region from the end user. There’s nothing inherently wrong with that, but when one lives in (for example) Nigeria and is trying to download a video or a large file from (again, for example) Japan, we start to run into technical limitations.
If one were to pop the hood on the Internet (just don’t let the magic leak out!), one would see that the way traffic flows is not a one-way stream (like turning on a faucet). The transfer is more like a conversation between two people, one of whom is doing the bulk of the talking. They’ll tell a chunk of their story, pause, and wait for the listener to nod their head. When they see that nod, they go on with their story. Rinse and repeat until the story is told.
Why is this done? Societal and cultural factors aside, it’s partly so they don’t tell a long, winding story about the time they got their keys stuck in the toaster only to have the listener say, “Hmm? Sorry, I missed that.”
Long Distance Conversations Can Create Confusion and Missed Messages
Internet transfers work in the same manner. The host that will ultimately become the listener in the previous analogy sends "Hey, send me cutepuppies.mp4!" to the host that will ultimately become the speaker. The speaker receives the request and sends the first chunk. The listener responds with an acknowledgement and the speaker sends the second chunk.
The problem here is that over long distances with default configurations, only a relatively small amount of data is in each of these chunks. Latency may only be measured in milliseconds, but it adds up quickly as the ‘conversation’ between the two hosts continues. What is normally a high-speed connection slows to a crawl. Many applications for things like FTP uploads have fairly low default values, which may not be what is needed for ‘long range’ transfers of data.
Fix the Flow of Information: Tweaking TCP Settings
Fortunately, this experience can be improved in many cases. By making tweaks to something called TCP windowing settings, we can increase the size of these chunks, which results in less back and forth. With larger chunks, we have more data ‘in flight’ at any given time; that is manifested to the end user as an increase in speed.
It should be noted that these TCP windowing settings are not the same things as the TCP MTU, which is the largest a packet can be before it is broken up into fragments. The specifics of changing these settings vary by operating system and are beyond the scope of this document; however, they are very easy to find online.
Communicating in the Real World in Real Time
Let’s see a real-world example. Say that I have a 100Mbps pipe, and I want to make full use of that to a destination that’s 80ms away. By calculating the BDP (bandwidth delay product; there are plenty of free calculators out there), I can see I’d need to tweak my settings so that about 1MB of data was on the wire at any given time to get 100Mbps of throughput. If, however, that destination is now 200ms away, that number jumps to about 2.4MB.
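The arithmetic behind those numbers can be sketched in a few lines of Python. It assumes the usual decimal convention that 100 Mbps means 100 × 10^6 bits per second.

```python
# Bandwidth-delay product calculator for the example above, assuming
# the decimal convention (100 Mbps = 100 * 10**6 bits per second).

def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bytes that must be 'in flight' to keep the pipe full."""
    bits_in_flight = bandwidth_mbps * 1_000_000 * (rtt_ms / 1000.0)
    return round(bits_in_flight / 8)

print(bdp_bytes(100, 80))   # 1000000 bytes, roughly the 1 MB quoted above
print(bdp_bytes(100, 200))  # 2500000 bytes, about 2.4 MiB
```

In other words, the window needed grows linearly with round-trip time: more than double the distance, and you need more than double the data on the wire to sustain the same throughput.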
Remember: TCP windowing changes will not result in a faster ping time to a remote host; ping uses ICMP rather than TCP, so it is unaffected by these settings. The test for success will be a TCP-based transfer, such as a download of a larger file.
In situations where someone is experiencing slower-than-expected throughput with no evidence of resource contention (e.g. maxing out the capacity of the server or circuit), TCP window settings are a great place to start looking. It should be noted that changing these settings is something that should only be done by qualified personnel.
If you have questions about latency or TCP windowing, please contact us. We appreciate your business and are always here at your service! | <urn:uuid:b0ab1ece-b781-4f6f-b1c4-5575ed00f112> | CC-MAIN-2017-04 | http://www.codero.com/blog/tcp-windowing-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00215-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960602 | 967 | 2.609375 | 3 |
To get a general idea of FTP, it is important first to understand the word protocol as meaning a set of rules and conventions. Protocols were introduced to provide computer-to-computer communication facilities, and the well-designed FTP is well suited to this purpose, which is why it is still in use even though it has been around since the 1970s.
FTP resides at the application layer of the OSI standard model, so it can be used for particular tasks such as supporting intercontinental communication and standardizing online communication. The File Transfer Protocol is documented in RFC 959, and other related documents giving technical overviews of FTP are available in the RFC sourcebook. Its reliability can be judged from the fact that it belongs to the family of the oldest Internet protocols, and it is implemented on top of the TCP protocol.
What makes this protocol unique is its use of separate command and data links. To understand how FTP works, it helps to look at its purpose: to provide a reliable, standard means of exchanging files over a TCP/IP network. FTP comes with a flexibility that allows it to be put into practice over non-TCP/IP networks too, and another of its traits is the ability to exchange files between different types of machines. It offers an environment for interaction with minimal restrictions between client and server machines, using communication channels via which packets can be transmitted directly to their destination.
FTP provides a consistent end-to-end connection, but a session uses two types of connections: a control connection, initiated by the user, and a data connection, used to manage the data transfer. The common function of FTP is to move files from one host to another over a TCP-based internet. A common use of this protocol on a network is to upload documents and web pages from a personal machine to a public web hosting machine (server). The machines at the two ends use separate control connections and data connections.
Authentication of FTP users is done through a clear-text sign-in protocol, provided the server is configured to permit such connections. The user side consists of the FTP user interface, a protocol interpreter (PI), and a data transfer process (DTP); the file system and the user DTP together form the client system. On the other side, the server consists of a server PI and a server DTP, and the file system and server DTP together form the server system. The data connection is established between the user DTP and the server DTP, while the control connection is established between the FTP user interface and the server PI. The FTP server listens on port 21, but the data connection is initiated by the server from port 20.
Some of the FTP commands are: ABOR, ACCT, ALLO, AUTH, CCC, and so on.
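The control-connection dialogue described above can be sketched as the ordered list of commands a client sends over port 21 to fetch one file in passive mode. The username, password, and filename below are placeholders for illustration.

```python
# Hypothetical sketch of the command dialogue on the FTP control
# connection (port 21) for downloading one file in passive mode.
# The username, password, and filename are placeholders.

def control_dialogue(user: str, password: str, filename: str) -> list[str]:
    """Return the commands a client sends, in order, to fetch a file."""
    return [
        f"USER {user}",      # clear-text sign-in, as described above
        f"PASS {password}",
        "TYPE I",            # binary (image) transfer mode
        "PASV",              # ask the server to open a data-connection port
        f"RETR {filename}",  # file contents flow over the data connection
        "QUIT",
    ]

for command in control_dialogue("demo", "secret", "report.pdf"):
    print(command)
```

Note that only the commands travel on the control connection; the actual bytes of report.pdf would move over the separate data connection negotiated by PASV.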
News & Info
The 3-2-1 Backup Plan
Data is important in both business and personal settings. It needs to be backed up or it could be lost, causing extensive damage. Below is a helpful rule to follow to ensure your files are safe and secure.
The accepted rule for backing up data is the Three-Two-One Rule. It can be best summarized as having:
- At least THREE copies
- In TWO different formats
- With ONE of those copies off-site.
Having three different copies means storing them in three different locations. Storing all three copies in the same location raises the risk of losing your data to an unplanned event. By keeping them in different sites, it reduces the risk of a single event destroying multiple copies.
What are different formats? Different formats means using at least two different methods to store your data. For example, burning your photos to a DVD from your PC’s hard drive qualifies as two different formats. Using different formats reduces the risks that all your backups will be damaged, as different formats have different strengths and weaknesses when it comes to redundancy.
Keeping one copy off-site ensures that even if something happens to your primary storage location, such as a fire, or a break-in, at least one copy is safe in another locale. By complying with this rule, you can rest assured that if something does go wrong at one of your physical locations, at least your data will remain safe.
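The rule is simple enough to state as a mechanical check. As a toy illustration (the plan contents and names below are hypothetical), the snippet encodes the Three-Two-One Rule as a test over a list of backup copies:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    location: str   # e.g. "workstation", "office-nas", "cloud"
    medium: str     # e.g. "hard drive", "DVD", "object storage"
    offsite: bool

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    """True if the plan has >= 3 copies, >= 2 formats, and an off-site copy."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

plan = [
    Copy("workstation", "hard drive", offsite=False),
    Copy("office-nas", "hard drive", offsite=False),
    Copy("cloud", "object storage", offsite=True),
]
```

Here the cloud copy satisfies both the second-format and off-site requirements at once, which is a common way small organizations meet the rule in practice.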
If you would like help planning your organization’s backup strategy, please contact us at firstname.lastname@example.org | <urn:uuid:ddc7ac46-e884-4eac-926e-d2235102109d> | CC-MAIN-2017-04 | https://www.go-concepts.com/news-info/the-3-2-1-backup-plan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939989 | 326 | 2.6875 | 3 |
Caritas Institute of Community Education
Higher Diploma in Early Childhood Education
Technology and Early Childhood Education
Student Name : Ho Chun Ngai, Donald
Student No. : 10125323
Today, the pace of life is quickening with advances in technology, and many kindergartens are using ICT for children's learning. ICT stands for Information and Communication Technology. In early childhood education, teachers implement ICT tools and techniques to support teaching, learning and other cognitive activities. These tools include computers, educational software applications, digital cameras, programmable toys and other similar devices. ICT has both benefits and potential risks. From my point of view, the advantages of using ICT in teaching outweigh the disadvantages.
To teach children about technology, teachers have to design activities for them and set clear objectives. The activities should help children understand the relationship between humans and nature, and explore the relationship between technology and daily life. They can provide opportunities for children to learn at their own pace. In the teaching process, teachers have to provide scaffolding and guidance for children to use these technologies. Eventually, children can master basic exploration techniques and gain an initial understanding of technology. The following part of the essay offers some suggestions for teaching children about technology with the use of ICT.
Firstly, teachers can organize spaces for children to learn about technology, such as a computer corner and computer lessons. For the computer corner, teachers can locate digital technologies in one classroom where children have free access and a chance to use them on a daily basis. It should be equipped with personal computers, interactive whiteboards and tablets. Teachers should then introduce each device by name along with its basic usage. After introducing these technologies, teachers can ask which one should be used based on the nature of an activity. For example, students should understand that the sound recorder is the right tool if the teacher wants to record animal sounds and let students guess what the animal is.
For computer lessons, teachers can conduct various kinds of activities to help children explore the use of technologies. By using computers, digital cameras and large (non-portable) ICT such as ceiling-mounted data projectors, making e-photo albums and projecting web stories become possible in the classroom. There is also software that teachers can use to teach children different skills, such as "Tux Paint" for drawing and painting, "PowerPoint" for multimedia presentation, and "Scratch" for creating their own interactive stories and games. Students thus come to understand a wide range of technologies and develop their creativity.
Secondly, besides its use in the classroom, ICT can also be used outdoors, letting students get to know more kinds of technologies. Some technologies, such as outdoor wireless cameras, CD players and programmable toys, can be used productively outside the classroom, and they act as a motivating factor in learning for some children. Programmable toys, for example, can be considered a means of supporting the development of logical thinking and problem-solving ability.
The information above shows what ICT is and which ICT tools teachers can use to teach children about technology. Now, I would like to justify the rationale for using ICT in young children's learning experience.
ICT is an efficient tool for supporting young children's learning and development. It can provide a context for collaboration, cooperation and positive learning experiences between children, or between children and adults. Moreover, it can support various aspects of learning, including language development and the development of mathematical thinking (Kalas, 2010). The use of ICT also provides opportunities for scaffolding and supporting learning for children from diverse cultural or language backgrounds. "New technologies offer teachers additional resources to use as they plan to meet a range of levels, learning styles, and the individual needs of students" (Van Scoter & Boss, 2002).
Choosing software is an important process because good software allows children to engage in self-directed exploration, and can be tailored to children’s individual needs.
On top of that, ICT helps children with special educational needs (SEN) enhance their learning ability. SEN refers to young people who, for a variety of reasons, experience learning difficulties that are more significant than those experienced by the majority of learners of the same age. ICT can help motivate SEN children to engage in learning and play by overcoming some of the effects of their impairment, as well as barriers created by traditional educational technology. The independence, integration and equal opportunities of SEN children can be greatly improved through the application of ICT. For example, Hornof and Cavender introduced a software program called 'EyeDraw' in 2005; it enables individuals with severe motor impairments to draw with their eyes and has been tested successfully (Drigas & Ioannidou, 2013).
As for the role of ICT in early childhood education, it acts in a supporting role, or can be described as a "second teacher" in child development. The use of ICT supports key areas of learning in ECE such as collaboration and socio-dramatic play, with children using ICT in their play alone, with peers, or with teachers. It supports literacy development by helping children observe, record, memorize, describe and share their impressions with others; for example, videoconferencing is a way for children to share news with parents about what is happening at the ECE centre. Moreover, it supports children's development of mathematical thinking and problem-solving skills, because ICT presents mathematics in visual and tangible forms. ICT's role as a bridge is also important in ECE: it connects two cultures by bringing children's home culture and experience into the ECE centre.
To explore the potential of ICT in early childhood education, the first thing teachers need to do is reflect on the process and problems of building a computer corner and an ICT corner. They should gradually concentrate on "how to recognize and exploit new possibilities brought in by new technologies" and "how to develop new pedagogies and exploit them to achieve the goal in a better way". It is necessary for kindergartens to buy up-to-date ICT rather than using equipment that is not suitable or appropriate. On the other hand, "integrating various categories and types of ICT, and extending different scenarios" (New Zealand Council for Educational Research, 2004) is highly recommended for future development. After mastering the basic techniques of ICT, children and teachers need to master further skills in integrating ICT into activities for divided groups. We can therefore see that ICT in ECE can be explored more thoroughly in the future.
Words : 1100
Drigas, A. S., & Ioannidou, R. E. (2013). Special Education and ICTs. <http://imm.demokritos.gr/publications/Special_Education_ICTs.pdf>.

Kalas, I. (2010). Recognizing the Potential of ICT in Early Childhood Education. The UNESCO Institute for Information Technology in Education.

New Zealand Council for Educational Research. (2004). The role and potential of ICT in early childhood education: A review of New Zealand and international literature. Wellington.

Van Scoter, J., & Boss, S. (2002). Learners, language, and technology: Making connections that support literacy. Portland, OR: Northwest Regional Educational Laboratory.
New MIT technology could provide fast and reliable extra-terrestrial broadband connections.
Space travel might just have got a little more entertaining, as any future colonists living on the Moon may be able to enjoy all the benefits of online access that their Earth-bound compatriots do, thanks to a new breakthrough by American researchers.
Working with NASA, a team from the Massachusetts Institute of Technology (MIT) Lincoln Laboratory has for the first time demonstrated data communication technology that can provide those beyond our planet with the kind of broadband connectivity currently in place on Earth.
The connection is stable enough to enable large data transfers and even provide high-definition video streaming, meaning astronauts would be able to communicate with friends or colleagues back on Earth via video chat.
Alternatively, it could allow Moon dwellers to catch up on their favourite television shows, the researchers suggested.
The team will present their technology for the first time at the CLEO:2014 conference next month in San Jose, giving an overview of the on-orbit performance of their laser-based communication uplink between the moon and Earth, which beat the previous record transmission speed last fall by a factor of 4,800.
"This will be the first time that we present both the implementation overview and how well it actually worked," Mark Stevens of MIT Lincoln Laboratory said of the technology. "The on-orbit performance was excellent and close to what we’d predicted, giving us confidence that we have a good understanding of the underlying physics".
The team made history last year when their Lunar Laser Communication Demonstration (LLCD) transmitted data over the 384,633 kilometres between the moon and Earth at a download rate of 622 megabits per second, faster than any radio frequency (RF) system.
They also transmitted data from the Earth to the moon at 19.44 megabits per second, a factor of 4,800 times faster than the best RF uplink ever used.
"Communicating at high data rates from Earth to the moon with laser beams is challenging because of the 400,000-kilometer distance spreading out the light beam," Stevens says. "It’s doubly difficult going through the atmosphere, because turbulence can bend light — causing rapid fading or dropouts of the signal at the receiver."
In order to overcome the issue of the signal fading over such a long distance, the team’s technology combines a number of techniques, using four separate telescopes to send the uplink signal.
Each of these transmits light through a different column of air that experiences different bending effects from the atmosphere, Stevens said, increasing the chance that at least one of the laser beams will interact with the receiver, which is mounted on a satellite orbiting the moon.
This receiver uses another telescope to collect the light, which is then amplified and converted into data bit patterns which transmit the message.
Of the 40 watt signals sent by the transmitter on the ground, less than a billionth of a watt is received at the satellite — but this is still 10 times the signal strength necessary to achieve error-free communication according to Stevens. | <urn:uuid:e750468c-e89a-47b6-bce3-59c9a3a983cb> | CC-MAIN-2017-04 | http://www.cbronline.com/news/telecoms/is-nasa-bringing-high-speed-internet-access-to-the-moon_4275924 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00226-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93263 | 627 | 3.390625 | 3 |
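A quick back-of-the-envelope calculation, using only the figures quoted in the article, puts those numbers in link-budget terms: 40 W transmitted and roughly a billionth of a watt received corresponds to about 106 dB of end-to-end loss, and the 10x safety factor amounts to a 10 dB link margin:

```python
import math

def db(ratio: float) -> float:
    """Express a power ratio in decibels."""
    return 10 * math.log10(ratio)

transmit_w = 40.0             # ground transmitter power (from the article)
received_w = 1e-9             # "less than a billionth of a watt" at the satellite
required_w = received_w / 10  # received signal is 10x the minimum needed

path_loss_db = db(transmit_w / received_w)  # end-to-end loss: ~106 dB
margin_db = db(received_w / required_w)     # link margin: 10 dB
```

Seen this way, the four-telescope uplink is a diversity scheme for protecting that thin 10 dB margin against atmospheric fading.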
A fiber optic cleaver is used to cut glass optical fiber to produce a good end face. The quality of the bare fiber's end face determines the quality of the joint when the fibers are fused, and the quality of that joint determines the attenuation of the fiber connection.
An optical fiber is cleaved by applying a sufficiently high tensile stress in the vicinity of a sufficiently large surface crack, which then rapidly expands across the cross section at sonic velocity. This idea has many different practical implementations in a variety of commercial cleaving equipment. Some cleavers apply a tensile stress to the fiber while scratching its surface with a very hard scribing tool, usually a diamond edge.
Fiber optic cleavers are used in fusion splicing to prepare optical fibers before they are placed in the fusion splicer and melted together. Some cleavers scratch the surface first and then apply tensile stress; some apply a tensile stress that is uniform across the cross section, while others bend the fiber through a tight radius, producing high tensile stresses on the outside of the bend.
Commercial instruments for simultaneously cleaving all the fibers in a ribbon are also widely available. These ribbon cleavers operate on the same principles as single-fiber cleavers, though the average cleave quality of a ribbon cleaver is somewhat inferior to that of a single-fiber cleaver. Scribe-and-break cleaving can be done by hand or with tools ranging from relatively inexpensive hand tools to elaborate automated bench tools. Any of these techniques or tools can produce good cleaves; the trick is achieving consistent finishes time and time again.